Network Enhancers - "Delivering Beyond Boundaries"

Saturday, February 5, 2011

The Two Roles of the Data Center Network


By Juniper

For years, the primary role of the data center network was to connect users to applications.  Over time—even as these networks evolved from terminal networks (Bisync) to SNA, Token Ring, DECnet and, finally, to Ethernet—their role remained largely the same.  And because there was typically a human at the far end of the network, a certain amount of latency could be tolerated.

With the evolution to SOA-based applications and shared storage, however, the data center network has taken on a new role, becoming an extension of the server and its memory hierarchy. 

A modern RISC microprocessor is designed to process one instruction every clock cycle, assuming the required instruction and data are located in L1 cache.  When they are not (a cache miss), the server looks elsewhere in its memory hierarchy, beginning with L2 cache and moving up through main memory.  While this fetch takes place, the core is stalled and the application process ceases to make progress. If the required bits are not resident in memory (a page fault), then the server must look to another source, typically a storage device or another server external to the chassis.  These devices are connected via some form of network, generally Ethernet, Fibre Channel, or InfiniBand.
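
To make the cost of falling through the hierarchy concrete, here is a minimal sketch in Python. The latency figures are hypothetical round numbers chosen only to show the orders of magnitude involved; actual values vary widely by hardware and network:

```python
# Illustrative latency per level, in nanoseconds (assumed round numbers,
# not vendor data). Each miss pushes the lookup one level further out.
HIERARCHY = [
    ("L1 cache", 1),
    ("L2 cache", 4),
    ("main memory", 100),
    ("network/storage", 100_000),  # a remote fetch across the data center
]

def fetch_latency_ns(miss_levels):
    """Total stall time when the first `miss_levels` levels all miss.

    Each level is probed in order before the lookup falls through to
    the next, so the latencies accumulate.
    """
    total = 0
    for name, latency in HIERARCHY[:miss_levels + 1]:
        total += latency
    return total
```

The point of the sketch is the last row: once the fetch leaves the server, the network's latency dwarfs everything above it in the hierarchy, which is why the network effectively becomes part of the memory system.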

In the meantime, even if the core "context switches" to another process, the initial application process remains stalled.  The longer it takes to return the instructions or data, the longer that process remains idle and the slower the application performs.  To avoid these delays, the new data center network should deliver consistently low latency (that is, low jitter). This is best achieved when connected devices are adjacent in the network: as close as possible, ideally just one network "hop" away.
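
The performance impact of such stalls can be estimated with the standard cycles-per-instruction (CPI) approximation. The numbers below are illustrative, not measurements:

```python
def effective_cpi(base_cpi, miss_rate, miss_penalty_cycles):
    """Average cycles per instruction once memory stalls are folded in,
    using the textbook approximation:
        CPI_effective = CPI_base + miss_rate * miss_penalty
    """
    return base_cpi + miss_rate * miss_penalty_cycles

# Even a tiny miss rate dominates when the penalty is a remote fetch:
# a base CPI of 1, a 0.1% miss rate, and a 100,000-cycle network round
# trip (all assumed figures) yield an effective CPI of 101, i.e. the
# core spends roughly 99% of its cycles stalled.
slowdown = effective_cpi(1.0, 0.001, 100_000)
```

This is why shaving network latency matters even when misses are rare: the penalty term, not the miss rate, is what the fabric can control.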

To fulfill its traditional role, hierarchical "tree" network topologies were introduced into the data center to interconnect devices.  With 95% of data center traffic moving north and south in the tree, between users and servers, this architecture worked fine in the early days.  Today, however, most network traffic (up to 75%) moves east to west, traveling between devices within the data center. Before it can travel east to west, though, the traffic must first move up and down, north and south, in the network "tree." As a result, traversing the data center can require as many as five network hops, adding both latency and jitter, which directly impacts application performance.
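
The five-hop figure can be reproduced with a toy model of a three-tier tree (access, aggregation, core). The tier numbering below is an assumption made for illustration, not a description of any particular product:

```python
def tree_switch_hops(shared_tier):
    """Switch hops between two servers in a three-tier tree, where
    `shared_tier` is the lowest tier the two servers have in common:
        1 = same access switch
        2 = same aggregation layer
        3 = joined only at the core

    East-west traffic climbs north to the shared tier and descends
    south again, so the path crosses 2 * shared_tier - 1 switches.
    """
    return 2 * shared_tier - 1

def fabric_switch_hops():
    """In a flat fabric, every device pair is one hop apart."""
    return 1
```

Two servers that share nothing below the core thus see five switch hops each way, versus one in a flat fabric.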

The solution, as I have discussed in the past, is to flatten the network and create a data center-wide fabric in which every device is directly connected to every other device and is just one network hop away. This architecture not only reduces latency and jitter, enabling optimal application behavior; it is also the ideal topology for cloud computing.  Because all external elements in the memory hierarchy are always one hop away, regardless of physical location, application processing can occur anywhere within the cloud (fabric).  The challenge will be to build fabrics that can scale to encompass all resources within the data center without adding latency or cost and without introducing undue complexity.
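
Under a simple per-hop latency model (figures purely illustrative), collapsing five hops to one shrinks both the latency and the jitter window proportionally:

```python
def path_latency_bounds_us(hops, per_hop_us=2.0, jitter_us=0.5):
    """Best- and worst-case one-way latency, in microseconds, across a
    number of switch hops. The spread between the two bounds (the
    jitter window) grows linearly with hop count. Per-hop figures are
    assumed round numbers, not vendor data.
    """
    best = hops * per_hop_us
    worst = hops * (per_hop_us + jitter_us)
    return best, worst

# Five-hop tree path vs. a one-hop fabric path:
tree = path_latency_bounds_us(5)    # bounds span 2.5 us of jitter
fabric = path_latency_bounds_us(1)  # bounds span only 0.5 us
```

The model is deliberately crude, but it captures the post's claim: fewer hops means not just lower latency but also a tighter, more predictable latency distribution.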

This is the mission of Juniper’s Stratus project: To fulfill both roles of the network in the modern data center. Stay tuned for more details.
