August 18, 2009 By Blake Harris
Photo: Amin Vahdat directs the Center for Network Systems at UC San Diego.
Computer scientists at the University of California, San Diego have set out to develop software that lets a data center function as a single, plug-and-play network while still scaling to the massive size required of modern data center networks. And now, with the deployment of software they have dubbed "PortLand," they seem to have achieved this.
According to a news release, the software system is a fault-tolerant, layer 2 data center network fabric capable of scaling to 100,000 nodes and beyond. PortLand is also fully compatible with existing hardware and routing protocols, provides support for virtual machines and migration, and could dramatically reduce administrative overhead. Critically, it removes the reliance on a single spanning tree, natively leveraging multipath routing and improving fault tolerance.
"With PortLand, we came up with a set of algorithms and protocols that combine the best of layer 2 and layer 3 network fabrics," explained Amin Vahdat, a computer science professor at UC San Diego's Jacobs School of Engineering. "Today, the largest data centers contain over 100,000 servers. Ideally, we would like to have the flexibility to run any application on any server while minimizing the amount of required network configuration and state."
Looking for ways to improve data center networking, Vahdat and his team of graduate students from the Jacobs School of Engineering revisited the long-standing trade-offs between layer 2 (Ethernet) networks, which forward on MAC addresses, and layer 3 networks, which route on IP addresses.
Today's data centers are often run on layer 3 networks, but this demands huge numbers of person-hours to set up and maintain. In addition, layer 3 networks prohibit straightforward virtual machine migration, limiting flexibility and efforts to reduce energy use and cost in the data center.
"Our goal is to allow data center operators to manage their network as a single fabric," added Vahdat in the statement. "We are working toward a network that administrators can think of as one massive 100,000-port switch seamlessly serving over one million virtual endpoints."
As mega data centers handle more and more of the world's computing and storage needs, data center networking is becoming increasingly important. Loading the front page of any active Facebook user, for example, typically involves over 1,000 servers in 300 milliseconds or less.
One of PortLand's key innovations is its location discovery protocol, which, according to the computer scientists, opens up the possibility of a scalable layer 2 network. Switches automatically learn their location within the data center topology without any human intervention. The switches then assign "Pseudo MAC" (PMAC) addresses to each of the servers they connect to. These PMAC addresses, rather than actual MAC addresses, are used internally in the network for packet forwarding.
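The idea is that a PMAC encodes where a host sits in the topology, so forwarding can follow the hierarchy instead of a flat table. A minimal sketch of such a positional encoding, using the 16/8/8/16-bit pod.position.port.vmid split described in the PortLand work (the helper name and example values here are illustrative, not PortLand's actual API):

```python
def make_pmac(pod: int, position: int, port: int, vmid: int) -> str:
    """Pack a 48-bit Pseudo MAC as pod(16 bits).position(8).port(8).vmid(16).

    Because the high-order bits identify the pod and switch position,
    core switches can forward on a PMAC prefix much like an IP prefix.
    """
    assert 0 <= pod < 2**16 and 0 <= position < 2**8
    assert 0 <= port < 2**8 and 0 <= vmid < 2**16
    raw = (pod << 32) | (position << 24) | (port << 16) | vmid
    return ":".join(f"{b:02x}" for b in raw.to_bytes(6, "big"))

# e.g. the 3rd virtual endpoint behind port 5 of the switch at
# position 1 in pod 2:
print(make_pmac(2, 1, 5, 3))  # 00:02:01:05:00:03
```

Because the address is structural, a core switch needs only one forwarding entry per pod prefix rather than one per host, which is what lets the fabric scale past 100,000 nodes.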
Server behavior remains the same in networks running PortLand. When a server wants to talk to a server on the other side of the data center, that first server still sends out an "ARP," which is a request for the MAC address of the computer with which it wants to communicate, based on its IP address.
But now, instead of broadcasting this request to the entire network, the switch that received the ARP talks to a directory service which returns a PMAC address, rather than the traditional MAC address.
"We have replaced broadcast with a server lookup. And we are forwarding based on PMAC addresses rather than MAC addresses. On the last hop, the egress hop, the switch rewrites the PMAC to be its actual MAC address," explained Vahdat. "We in effect transparently leverage the built-in hierarchy of data center networks."
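The exchange Vahdat describes can be sketched as an edge switch consulting a directory instead of flooding the fabric. The class and method names below are stand-ins for PortLand's fabric manager, which runs as a separate service; only the flow (lookup instead of broadcast) is taken from the article:

```python
class Directory:
    """Illustrative in-memory stand-in for PortLand's fabric manager,
    which maps each server's IP address to its current PMAC."""

    def __init__(self):
        self.ip_to_pmac = {}  # populated as edge switches register hosts

    def register(self, ip: str, pmac: str) -> None:
        self.ip_to_pmac[ip] = pmac

    def resolve(self, ip: str):
        return self.ip_to_pmac.get(ip)


def handle_arp_request(directory: Directory, target_ip: str) -> str:
    """Edge-switch behavior: answer the server's ARP with a PMAC,
    with no network-wide broadcast."""
    pmac = directory.resolve(target_ip)
    if pmac is None:
        # PortLand falls back to a scoped broadcast on a miss; omitted here.
        raise LookupError(f"no mapping for {target_ip}")
    return pmac
```

The requesting server is none the wiser: it sent a standard ARP and got back what looks like an ordinary MAC address. Only the egress switch, which rewrites the PMAC to the destination's real MAC, knows the difference.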
When new machines are added, or when virtual machines are moved, new PMAC addresses are automatically generated.
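In other words, a migrated VM keeps its IP address and real MAC; only the directory's location-encoding PMAC changes. A toy illustration, with a plain dict standing in for the fabric manager's table (names and addresses are hypothetical):

```python
def migrate_vm(ip_to_pmac: dict, ip: str, new_pmac: str) -> None:
    """Re-point an IP at the PMAC of the VM's new location.

    The VM's IP and actual MAC are untouched, so open connections
    survive; only the fabric-internal forwarding address changes.
    """
    ip_to_pmac[ip] = new_pmac

table = {"10.0.1.2": "00:02:01:05:00:03"}          # VM in pod 2
migrate_vm(table, "10.0.1.2", "00:07:00:02:00:01")  # moved to pod 7
```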