Gartner Blog Network


Facebook’s New Data Center Network Design

by Andrew Lerner  |  November 17, 2014

For all the talk about hyperscale data centers, there hasn't actually been a ton of detailed information about how they do the network stuff they do. This week, it was interesting to see some detail come out via blog/video from Facebook Network Engineer Alexey Andreyev regarding the design of the network for the new Facebook Altoona, IA data center. While most mainstream enterprises don't relate to the scale, budget, and skillset/personnel of an organization like Facebook, there are some tangible takeaways. In other words, while a lot of their design constructs are built for scale, many of these principles can apply to smaller data center networks. Here are some of my key takeaways, and also some food for thought the next time you "like" something or "tag" somebody…

Keeping it Simple

From the Facebook blog: "Our goal is to make deploying and operating our networks easier and faster over time…" I couldn't agree more, and improving/simplifying Network Operations is a key area when we evaluate solutions as part of the Data Center Networking Magic Quadrant. One of the specific ways Facebook simplifies things is to automate wherever they can, which reduces manual error and scales much better (here's a related blog on network automation).
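To make that concrete, here's a minimal sketch of template-driven config generation in Python. The template, device names, and addressing below are hypothetical (this is not Facebook's tooling), but it illustrates how rendering configs from shared parameters removes the hand-editing that causes most manual errors.

```python
# Minimal sketch of template-driven switch config generation (illustrative only;
# the template, naming, and addressing scheme are hypothetical, not Facebook's).

CONFIG_TEMPLATE = """\
hostname {hostname}
interface Loopback0
 ip address {loopback}/32
router bgp {asn}
 router-id {loopback}
"""

def render_config(hostname: str, loopback: str, asn: int) -> str:
    """Render one switch config from shared parameters instead of hand-editing it."""
    return CONFIG_TEMPLATE.format(hostname=hostname, loopback=loopback, asn=asn)

# Generate consistent configs for several top-of-rack switches in one pass.
for rack in range(1, 4):
    print(render_config(f"tor-{rack:02d}", f"10.0.0.{rack}", asn=65000 + rack))
```

The point isn't the specific template; it's that every switch gets a config derived from the same source of truth, so a change is made once and rolled out everywhere.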

Less is More

Facebook uses smaller, simpler, and cheaper network switching infrastructure. This has direct applicability in the mainstream, and we've published on it here: Rightsizing the Enterprise Data Center Network. Here's Facebook's take on it: "…it requires only basic mid-size switches to aggregate the TORs. The smaller port density of the fabric switches makes their internal architecture very simple, modular, and robust, and there are several easy-to-find options available from multiple sources."

Network Pods

They refer to their design as a Core/Pod architecture, with each Pod containing 48 server racks built in a leaf/spine topology. Interconnectivity between pods is 40G and not oversubscribed, also leaf/spine. The pod approach is modular, and allows them to evolve and iterate their network design as requirements and technology capabilities advance.
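As a back-of-the-envelope illustration of what "not oversubscribed" means, here's a small Python sketch. The port counts and speeds are made-up examples, not Facebook's published figures.

```python
# Illustrative oversubscription arithmetic for a leaf/spine pod (the port counts
# and speeds below are hypothetical examples, not Facebook's published figures).

def oversubscription(downlink_gbps: float, downlinks: int,
                     uplink_gbps: float, uplinks: int) -> float:
    """Ratio of server-facing bandwidth to fabric-facing bandwidth on a leaf switch."""
    return (downlink_gbps * downlinks) / (uplink_gbps * uplinks)

# Example: a TOR with 48 x 10G server ports and 4 x 40G uplinks is 3:1 oversubscribed;
# adding fabric-facing capacity pushes the ratio toward the 1:1 (non-oversubscribed) ideal.
print(oversubscription(10, 48, 40, 4))   # 3.0
print(oversubscription(10, 48, 40, 12))  # 1.0
```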

Traffic Flows

While I think many network practitioners now realize that traffic is shifting from traditional North/South (app-to-user) patterns to East/West (app-to-app), the blog includes a powerful data point: "What happens inside the Facebook data centers – "machine to machine" traffic – is several orders of magnitude larger than what goes out to the Internet." Similarly, Cisco recently reported study findings that intra-data-center traffic accounted for 77% of data center traffic in 2013 and will remain high through 2018. We've been seeing this trend for several years due to changing application architectures, among other things. Net net, we recommend that new data center network builds use a 1- or 2-tier Ethernet fabric that is optimized for both north/south and east/west traffic, with deterministic latency between any two points. We've published several pieces of research on data center fabrics, including Competitive Landscape: Data Center Ethernet Fabric and Technology Overview for Ethernet Switching Fabric.
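To show what "deterministic latency between any two points" buys you, here's a toy Python illustration of hop counts in a 2-tier leaf/spine fabric (illustrative assumptions, not any specific vendor's behavior): servers in any two different racks are always the same number of switch hops apart, so east/west performance is predictable.

```python
# Toy illustration of why a 2-tier leaf/spine fabric gives deterministic east/west paths:
# any two servers in different racks are always the same number of switch hops apart.

def hops(rack_a: int, rack_b: int) -> int:
    """Switch hops between servers in a 2-tier leaf/spine fabric (illustrative model)."""
    if rack_a == rack_b:
        return 1            # same rack: through the TOR only
    return 3                # different racks: TOR -> spine -> TOR, regardless of which racks

print(hops(1, 1))    # 1
print(hops(1, 47))   # 3 -- the same as any other rack pair, so latency is predictable
```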

Vendors?

The blog post doesn’t mention specific vendors, but Facebook has previously blogged about using disaggregated switching approaches with their own software (FBOSS) running on white-box style hardware. Here’s some more information on the topic of disaggregation, with additional research coming soon…

But Wait, no SDN?

While there's no explicit mention of SDN controllers per se, they're doing some interesting stuff. They run L3 ECMP using BGP, but there is a centralized BGP controller with "override" capability, which sounds a bit SDN-ish to me. This wasn't lost on other readers either, and TechTarget News Director Shamus McGillicuddy (@shamusTT) captured the sentiment very well via a tweet: "i want more details on the home-grown BGP controller that has "override" capability over the distributed BGP control plane."
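To show the general idea (this is a conceptual toy, not Facebook's controller or a real BGP implementation), here's a Python sketch of distributed ECMP path selection with a centrally injected override that wins whenever the controller has pushed one.

```python
# Conceptual sketch of ECMP next-hop selection with a centralized "override"
# (a toy model of the idea only, not Facebook's BGP controller or any real BGP stack).
import hashlib

def pick_next_hop(flow_id: str, ecmp_next_hops: list[str], overrides: dict[str, str]) -> str:
    """Hash a flow onto one of several equal-cost next hops, unless a
    centrally pushed override exists for that flow/prefix."""
    if flow_id in overrides:
        return overrides[flow_id]          # the centrally injected decision wins
    h = int(hashlib.md5(flow_id.encode()).hexdigest(), 16)
    return ecmp_next_hops[h % len(ecmp_next_hops)]  # normal distributed ECMP choice

paths = ["spine-1", "spine-2", "spine-3", "spine-4"]
print(pick_next_hop("10.1.1.5->10.2.2.9", paths, {}))                                  # hashed choice
print(pick_next_hop("10.1.1.5->10.2.2.9", paths, {"10.1.1.5->10.2.2.9": "spine-2"}))   # spine-2
```

The distributed protocol keeps doing the routine work; the controller only steps in to steer specific traffic, which is why it reads as "SDN-ish" without being a full SDN controller.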

Regards, Andrew

Category: devops  in-the-news  networking  

Tags: cisco  disaggregation  fabrics  facebook  hyperscale  ipv6  magic-quadrant  white-box  

Andrew Lerner
Research Vice President
4 years at Gartner
19 years IT Industry

Andrew Lerner is a Vice President in Gartner Research. He covers enterprise networking, including data center, campus and WAN, with a focus on emerging technologies (SDN, SD-WAN, and intent-based networking).




