A tale of two cities: Ethernet meets PCIe

Wed, 07/10/2013 - 4:28pm
Michael Zimmerman, vice president of marketing, Tilera Corporation

Two industry megatrends are reshaping the way we design and deploy networks and compute (i.e., datacenter servers). The first is Network Functions Virtualization (NFV), which aims to move functions such as content distribution, firewalls and base station controllers from proprietary hardware to standard, low-cost servers. These servers can run multiple virtual machines, each hosting a particular software-based network appliance. The other trend is Software Defined Networking (SDN), which makes applications network aware: the network can be reprogrammed on the fly, using open standards, to optimize application flows.

These two trends, though they at first seem to address different technologies (compute and networking), are driving a fusion of networks and servers. Since most of today's applications are network centric, servers can end up spending many cycles processing the network traffic that carries content, users and applications. The migration of applications to the cloud has shifted networking and security workloads into the datacenter, and that shift comes with a clear pain point.

The Von Neumann architecture still underpins modern server design: processing, memory and I/O are interconnected with data, address and control buses. This architecture was conceived for local-memory workloads in which the processor mostly crunches algorithms, while I/O throughput is secondary and dimensioned for peripherals. The PCIe interface quickly established itself as the leading I/O interface and attracted massive industry backing.

However, those Von Neumann-era assumptions are challenged when a modern server operates heavily on packet data flowing in from the network, making PCIe a potential bottleneck: it must now provide a high-throughput interface into the server. On the networking side, Ethernet is the pervasive communications technology, having migrated from 1 Gbps to 10 Gbps and 40 Gbps and now moving to 100 Gbps. Modern servers and processors spend more time moving packets from Ethernet ports to the PCIe interface than on any other single task. The Network Interface Card (NIC) is the logical bridge from the networking world (Ethernet) to the compute world (PCIe).
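To put that load in perspective, here is a rough back-of-the-envelope sketch (assuming minimum-size 64-byte frames plus the standard preamble and inter-frame gap) of how little time the host has per packet at common Ethernet line rates:

    #include <stdio.h>

    /* Rough packet-rate math for minimum-size Ethernet frames.
     * On the wire, a 64-byte frame is accompanied by a 7-byte preamble,
     * a 1-byte start delimiter and a 12-byte inter-frame gap, i.e.
     * 84 bytes (672 bits) per frame in total.
     */
    int main(void)
    {
        const double bits_per_frame = 84.0 * 8.0;            /* 672 bits */
        const double line_rates_gbps[] = { 10.0, 40.0, 100.0 };

        for (int i = 0; i < 3; i++) {
            double pps = line_rates_gbps[i] * 1e9 / bits_per_frame;
            printf("%5.0f Gbps -> %6.2f Mpps, ~%5.1f ns per packet\n",
                   line_rates_gbps[i], pps / 1e6, 1e9 / pps);
        }
        return 0;
    }

At 10 Gbps this works out to roughly 14.9 million packets per second, or about 67 ns per minimum-size packet; at 40 Gbps the budget shrinks to under 17 ns. A general-purpose core cannot spend that budget on classification, security and copying and still have cycles left for the application, which is exactly where the NIC comes in.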

The pain point lies in the sheer volume of Internet flows crossing the network and being transported from the Ethernet interface to PCIe. The NIC is one of the critical technology building blocks, and it needs to be revisited to bring it in line with the new requirements of fused networking and compute.

The traditional design of a NIC caters to a simple function: bridging packets from an Ethernet interface into PCIe transactions to and from host system memory with minimal overhead. However, NFV, SDN and the migration of networking and security workloads to the cloud have changed the role of the NIC. It is now a key processing block that connects a dynamic, high-throughput network to a set of Virtual Machines (VMs) running high-performance networking applications. The NIC must evolve.
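In that traditional role the NIC is essentially doing DMA bookkeeping. The structures below are an illustrative sketch (field layout is hypothetical, not taken from any particular device's datasheet) of the receive descriptor ring a conventional NIC uses to land packets in host memory over PCIe:

    #include <stdint.h>

    /* Illustrative receive-descriptor ring, loosely modeled on how a
     * conventional NIC hands packets to the host over PCIe. */
    struct rx_descriptor {
        uint64_t buf_addr;   /* host physical address of a pre-posted buffer */
        uint16_t length;     /* bytes written by the NIC when a packet lands */
        uint16_t status;     /* descriptor-done, checksum-ok, error bits     */
        uint32_t rss_hash;   /* optional flow hash for receive-side scaling  */
    };

    struct rx_ring {
        struct rx_descriptor *desc;  /* ring of descriptors in host memory  */
        uint16_t size;               /* number of descriptors in the ring   */
        uint16_t head;               /* next descriptor the NIC will fill   */
        uint16_t tail;               /* next descriptor the driver re-posts */
    };

The NIC writes each arriving frame into the next posted buffer, marks the descriptor done and signals the host; historically, everything beyond that bookkeeping (classification, security, steering) has been left to the server CPU.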

Let’s take a closer look at the required NIC processing functions when information is passed between the network and the compute realm (Figure 2):

1. Packets and flows need to be properly classified, as each VM will be attached to a different flow, content or user. This classification can be as simple as an IP address look-up or as complex as application ID indexing (a minimal classification sketch follows this list).
2. Some networking and security workloads tax the server. These workloads are tightly coupled to flow processing, and it is far more effective to handle them in the NIC before transporting the flow to the VMs. Examples are Transmission Control Protocol (TCP)/IP stack processing, Secure Sockets Layer (SSL) termination and Deep Packet Inspection (DPI), with an evolution toward even more advanced offload functions such as Hadoop acceleration and Open vSwitch offload.
3. The NIC must efficiently deliver packet flows to the VMs through the PCIe interface, using Single Root I/O Virtualization (SR-IOV), a logical function that allows seamless connectivity to many VMs over PCIe.
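As a concrete illustration of the first function, the sketch below is hypothetical code that maps a flow's five-tuple to one of N VM receive queues; a simple mixing hash stands in for the Toeplitz/RSS hashing real hardware typically uses:

    #include <stdint.h>

    /* Minimal flow-classification sketch: hash the IPv4/L4 five-tuple and
     * use the result to pick one of N virtual-machine queues. Real NICs use
     * far richer match tables (VLAN tags, tunnel IDs, application IDs), but
     * the idea of "flow -> queue -> VM" steering is the same.
     */
    struct five_tuple {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  protocol;
    };

    static uint32_t flow_hash(const struct five_tuple *ft)
    {
        uint32_t h = ft->src_ip;
        h = h * 31u + ft->dst_ip;
        h = h * 31u + (((uint32_t)ft->src_port << 16) | ft->dst_port);
        h = h * 31u + ft->protocol;
        return h;
    }

    /* Map a flow to one of n_vm_queues receive queues, each owned by a VM. */
    static unsigned classify_to_vm_queue(const struct five_tuple *ft,
                                         unsigned n_vm_queues)
    {
        return flow_hash(ft) % n_vm_queues;
    }

In a programmable NIC the same lookup can be made arbitrarily richer (exact-match flow tables, tunnel headers, application-ID indexing) without any change on the host side.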

The sum of these three functions is the ability of the NIC to steer flows, content, applications or users to the proper VM for compute processing, with as little networking-processing load on the server's compute resources as possible and with maximum efficiency as measured by throughput and latency. In addition, software programmability in the NIC itself is essential to keep up with evolving networking and datacenter needs.
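On the Linux side, SR-IOV exposes each virtual function (VF) as an independent PCIe function that a hypervisor can assign directly to a VM. As a minimal, hypothetical illustration (the interface name and VF count are placeholders, and the sysfs attribute requires a kernel and driver that support it), creating VFs can be as simple as writing a count to the device's sriov_numvfs attribute:

    #include <stdio.h>

    /* Hypothetical example: ask an SR-IOV-capable NIC driver to create
     * eight virtual functions. "eth0" and the count of 8 are placeholders. */
    int main(void)
    {
        FILE *f = fopen("/sys/class/net/eth0/device/sriov_numvfs", "w");
        if (!f) {
            perror("sriov_numvfs");
            return 1;
        }
        fprintf(f, "8\n");      /* request 8 virtual functions */
        fclose(f);
        return 0;
    }

Each VF then appears to the system as its own PCIe device, so flows classified and pre-processed in the NIC can be delivered straight into the right guest without a software switch in the data path.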


The migration of advanced functionality to the NIC is already in motion. Intel, Mellanox and Tilera have announced products that can process flows intelligently and relieve the server from spending cycles on flow processing, a workload for which a server processor is not optimized. Most notably, Intel's announcement of its QuickAssist Server Adapter 8920-SCC paves the way for heterogeneous computing, in which the server processor performs application compute and the NIC performs the network/security processing.

Advanced flow-programming features in the NIC require careful design and optimization. On one hand, cost and power have to be kept to a minimum; on the other, the NIC needs to be programmable, offering flexibility comparable to the network it attaches to and programmability comparable to the VMs it serves. These requirements call for optimization at the silicon level: compute elements such as manycore CPUs tightly coupled to networking/security processing, while preserving ease of C/Linux programmability. The traditional NIC form factor and limited power budget demand very high performance per watt, implying a programmable architecture that delivers high throughput at minimal power consumption.

While the migration to SDN/NFV has only just begun, it is already well accepted that more protocols, applications and content will demand even higher performance, lower latency and more flexible processing. The evolution of protocols and applications is somewhat unpredictable, which is why programmable architectures are winning out over fixed-function ones. All the processing elements in the flow path (network, NIC and compute VM) can and should be programmable in this brave new world.
