Organizations are losing money because they are not looking critically at their interconnect options. Why? Because they are too wed to Ethernet and InfiniBand to acknowledge that these two are no longer the most economical technologies in system connectivity. Ethernet, with its three decades of ubiquity in the technology industry, remains well suited as a system-to-system interface in data centers and cloud computing, while InfiniBand has grown into a viable interconnect for high-performance computing. Within the rack, however, both have been eclipsed in cost and energy by PCI Express (PCIe).
Let's take a look at the price and power profiles of systems based on Ethernet and InfiniBand versus those using PCIe, and see just how large the deltas in cost and energy are.

InfiniBand was originally envisioned as a unified fabric that would replace most other data center interconnects. While it did not achieve that goal, it has become popular as a high-speed clustering interconnect, displacing the proprietary solutions that preceded it.

PCI and its successor PCIe have permeated all manner of computing, storage and networking systems for decades, and PCIe has built a huge ecosystem over that time; aside from select products marketed by just a few vendors, almost all semiconductors now ship with native PCIe. The PCI-SIG has more than 800 members, reflecting the reach and breadth of adoption of this popular interface. With its latest incarnation running at Gen 3 speeds (8 GT/s), PCIe is showing signs of growing from a chip-to-chip interface into an interconnect of choice within the rack, encroaching on applications once the domain of Ethernet and InfiniBand. Engineers won't be swayed anytime soon to replace Ethernet and InfiniBand with PCIe for incremental savings; they need an order of magnitude in savings in both power and cost before changing anything. PCIe is definitely up to that task!
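
To put the Gen 3 figure in perspective, here is a quick back-of-the-envelope calculation (mine, not from the article) that converts the 8 GT/s signaling rate into usable per-lane bandwidth using the 128b/130b line encoding that the Gen 3 specification defines.

```python
# Back-of-the-envelope PCIe Gen 3 bandwidth, per lane and per x16 link.
# Derived from public spec values (8 GT/s, 128b/130b encoding); the
# result is illustrative and ignores packet/protocol overhead.

GEN3_RATE_GT_S = 8.0             # raw signaling rate per lane, GT/s
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line encoding used by Gen 3

def lane_bandwidth_gb_s(rate_gt_s: float, efficiency: float) -> float:
    """Usable payload bandwidth per lane, in GB/s (one direction)."""
    return rate_gt_s * efficiency / 8  # 8 bits per byte

per_lane = lane_bandwidth_gb_s(GEN3_RATE_GT_S, ENCODING_EFFICIENCY)
print(f"Gen 3 x1 : ~{per_lane:.2f} GB/s per direction")      # ~0.98 GB/s
print(f"Gen 3 x16: ~{per_lane * 16:.1f} GB/s per direction")  # ~15.8 GB/s
```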

Current Architecture
Today's high-volume systems must support several interconnect technologies, with InfiniBand, Fibre Channel and Ethernet representing the majority of them. These architectures have several limitations:

  1. Multiple I/O interconnect technologies must coexist in the same system
  2. Low utilization rates of I/O endpoints
  3. Higher latency, because the PCIe interface native to the processors must be converted to multiple protocols rather than converging all endpoints on PCIe
  4. High system power and cost, because multiple I/O endpoints are needed to satisfy the multiple interconnect technologies (see the tally sketch after this list)
  5. I/O is fixed at architecture and build time, with no flexibility to change it later
  6. Management software must handle multiple I/O protocols, adding overhead
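
The cost of this duplication is easy to see with a simple tally. The sketch below is illustrative only: the server count, adapter mix and wattages are hypothetical placeholders, not figures from Tables 1 and 2. It compares a conventional rack, where every server carries its own Ethernet NIC, Fibre Channel HBA and InfiniBand HCA, against a shared-I/O design in which a handful of endpoints behind a PCIe switch serve the whole rack.

```python
# Illustrative endpoint/power tally for one rack of servers.
# All counts and wattages are hypothetical placeholders, not
# measurements from the article's tables.

SERVERS_PER_RACK = 32

# Conventional design: every server carries its own adapters.
ADAPTERS_PER_SERVER = {          # watts per adapter (placeholders)
    "10G Ethernet NIC": 10.0,
    "Fibre Channel HBA": 8.0,
    "InfiniBand HCA": 9.0,
}

# Shared-I/O design: a few endpoints behind a PCIe switch serve the rack.
SHARED_ENDPOINTS = {             # (count per rack, watts each) placeholders
    "10G Ethernet NIC": (4, 10.0),
    "Fibre Channel HBA": (2, 8.0),
    "InfiniBand HCA": (2, 9.0),
}
PCIE_SWITCH_POWER_W = 25.0       # placeholder for the shared PCIe switch

conventional_endpoints = SERVERS_PER_RACK * len(ADAPTERS_PER_SERVER)
conventional_power = SERVERS_PER_RACK * sum(ADAPTERS_PER_SERVER.values())

shared_endpoints = sum(count for count, _ in SHARED_ENDPOINTS.values())
shared_power = PCIE_SWITCH_POWER_W + sum(
    count * watts for count, watts in SHARED_ENDPOINTS.values()
)

print(f"Conventional: {conventional_endpoints} endpoints, {conventional_power:.0f} W")
print(f"Shared I/O  : {shared_endpoints} endpoints, {shared_power:.0f} W")
```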

The solution to these limitations can be found in sharing I/O endpoints. I/O sharing is appealing to system designers because it lowers cost and power, improves performance and utilization, and simplifies design. Implementing it in a PCIe switch is the key enabler of these new architectures. Single-Root I/O Virtualization (SR-IOV) implements I/O virtualization in hardware for improved performance, and makes use of hardware-based security and Quality of Service (QoS) features in a single physical server. SR-IOV also allows an I/O device to be shared by multiple guest OSes running on the same server.
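
As a concrete illustration of the SR-IOV mechanism described above, the sketch below (mine, not PLX's) enables a number of virtual functions on a Linux host through the kernel's standard sysfs interface; each resulting VF can then be assigned to a different guest OS. The PCI address is a hypothetical placeholder for an SR-IOV-capable adapter.

```python
# Minimal sketch: enable SR-IOV virtual functions (VFs) on a Linux host
# via the kernel's sysfs attributes, so each VF can be passed through to
# a different guest OS. Requires root; the PCI address below is a
# hypothetical placeholder for an SR-IOV-capable device.
from pathlib import Path

DEVICE = "0000:03:00.0"  # hypothetical PCI address of the physical function
SYSFS = Path("/sys/bus/pci/devices") / DEVICE

def enable_vfs(requested: int) -> int:
    """Enable up to `requested` VFs, capped at what the device supports."""
    total = int((SYSFS / "sriov_totalvfs").read_text())
    vfs = min(requested, total)
    # The VF count must be reset to 0 before a new non-zero value is set.
    (SYSFS / "sriov_numvfs").write_text("0")
    (SYSFS / "sriov_numvfs").write_text(str(vfs))
    return vfs

if __name__ == "__main__":
    print(f"Enabled {enable_vfs(8)} virtual functions on {DEVICE}")
```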

Tables 1 and 2 provide a high-level comparison of the cost and power requirements, respectively, of PCIe versus 10G Ethernet and InfiniBand.

Table 1: Cost comparison

Table 2: Power-savings comparison

PCIe offers more than 50 percent savings over 10G Ethernet and QDR InfiniBand, mainly through the elimination of adapters.

What makes these cost and power savings possible is that, as noted earlier, PCIe is native on a growing number of processors from major vendors. With this generation of CPUs, designers can place a PCIe switch directly off the CPU, with no intervening components, reducing both latency and component cost.

To satisfy the requirements of the shared-I/O and clustering market segments, vendors such as PLX Technology are bringing to market high-performance, flexible, power- and space-efficient devices. These switches have been architected to fit the full range of applications cited above. Looking forward, PCIe Gen 4, with speeds of up to 16 GT/s per lane, will help accelerate and expand the adoption of PCIe into newer market segments, while making it easier and more economical to design with and use.

Krishna Mallampati is senior director of product marketing for PCIe switches at PLX Technology.