A Bright Future for High-Speed I/O Cable Links

Fri, 05/21/2010 - 11:45am
David Sideck, FCI, www.fciconnect.com

Unrelenting demand for more communications bandwidth for data center networking is challenging interconnect technologies such as Ethernet, InfiniBand, Fibre Channel, and Serial Attached SCSI (SAS).

The pull from consumers’ seemingly insatiable appetite for video content - whether clips on sites like YouTube, real-time Internet video, ambient video from security monitoring cameras, or video downloads from service providers for instant entertainment - is expected to continue to be the principal driver of extremely high growth of IP traffic for the foreseeable future. To cater to demand, some Internet service providers’ data centers have reached “warehouse-scale,” with a need to network tens of thousands of servers.

Meanwhile, increasingly complex scientific, technical, and financial computing applications are demanding more powerful supercomputers and larger high-performance clusters. Installing a top-tier number-crunching machine - such as the Cray “Jaguar” at Oak Ridge National Laboratory, the IBM “Roadrunner” at Los Alamos National Laboratory, or the Sun “Ranger” at the Texas Advanced Computing Center - can require more than a thousand cables, amounting to miles of cabling, to make all of the necessary server, switch, and storage connections.

Advances in multi-core processing, virtualization, consolidation, host bus speeds, memory performance, and storage capacity have certainly increased the capability a designer can integrate into a system, but these technologies also place further strain on bandwidth capacity, power consumption, and thermal management. Size constraints are another consideration for high-speed I/O ports. Racks aren’t getting any bigger, so the options for increasing the number of I/O ports and their bandwidth are either to increase port density along the edge of a switch line card or to use higher signaling speeds within each lane.

High Speed I/O Solutions
The good news is that I/O systems have been developed that address many of these requirements. Leading companies and industry organizations have spearheaded efforts to develop specifications that ensure commonality, compatibility, and networking functionality of hardware connections, signaling, and software communications. Organizations such as the Optical Internetworking Forum, the InfiniBand Trade Association, the IEEE 802.3 Working Group, the INCITS T10 and T11 Task Groups, and the SFF Committee have all made important contributions.

Today, SFP+ links are supplanting SFP links for both Ethernet and Fibre Channel. While the SFP+ system fits the same board space as SFP, it provides a 10x bandwidth improvement for Ethernet (10 Gb/s vs. 1 Gb/s) and a 2x improvement for Fibre Channel (8.5 Gb/s vs. 4.25 Gb/s). The SFP+ system also allows any available system port to be configured with either copper- or fiber-based cabling, as dictated by the specific installation environment.

Other new and evolving high-speed parallel link specifications are enabling even higher-bandwidth ports. The QSFP+ system uses a 4 x 10 Gb/s link configuration for a 40 Gb/s port. Similarly, the CXP system provides 12 lanes that can be deployed to support 100 to 120 Gb/s of aggregate port bandwidth. QSFP+ and CXP are specified for 4x and 12x InfiniBand Quad Data Rate (QDR) interconnect links today, and are also anticipated to be used for 40 Gb/s and 100 Gb/s Ethernet links upon the release of the IEEE 802.3ba specification later this year.

A comparison of relative bandwidth density among the three port types shows that QSFP+ and CXP can also increase I/O port bandwidth density along the edge of a switch line card. A single SFP+ port operating at 10 Gb/s provides roughly 16 Gb/s of bandwidth per inch, QSFP+ offers a 3x improvement to 48 Gb/s per inch, and CXP offers a further 2.3x improvement to 113 Gb/s per inch. The use of ganged, double-stacked, or belly-to-belly I/O port configurations gives system designers options to achieve even higher linear bandwidth density with some port types.
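
As a rough cross-check of these figures, the short Python sketch below recomputes linear bandwidth density from lane count, lane rate, and ports per inch of card edge. The ports-per-inch values are illustrative assumptions chosen to reproduce the densities quoted above, not figures taken from any form-factor specification.

# Back-of-the-envelope linear bandwidth density along a line-card edge.
# Lane counts and lane rates are from the article; the ports-per-inch
# figures are illustrative assumptions, not values from any form-factor
# specification.

PORTS = {
    "SFP+":  dict(lanes=1,  lane_gbps=10.0, ports_per_inch=1.6),
    "QSFP+": dict(lanes=4,  lane_gbps=10.0, ports_per_inch=1.2),
    "CXP":   dict(lanes=12, lane_gbps=10.0, ports_per_inch=0.94),
}

def port_bandwidth(p):
    """Aggregate bandwidth of one port in Gb/s (lanes x lane rate)."""
    return p["lanes"] * p["lane_gbps"]

def edge_density(p):
    """Linear bandwidth density in Gb/s per inch of card edge."""
    return port_bandwidth(p) * p["ports_per_inch"]

for name, p in PORTS.items():
    print(f"{name:6s} {port_bandwidth(p):5.0f} Gb/s per port, "
          f"~{edge_density(p):4.0f} Gb/s per inch")

Run as written, this prints approximately 16, 48, and 113 Gb/s per inch for SFP+, QSFP+, and CXP, matching the relative improvements described above.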

An SFP+, QSFP+, or CXP host port can accept any of three cabling options: a passive copper cable assembly for lengths of 5 to 7 meters (or longer, depending on the acceptance criteria); an actively equalized copper cable assembly for lengths up to 15 meters (again, longer depending on the acceptance criteria); or a plug-in optical transceiver module, with an optical connector on the rear of the module that accepts passive fiber optic cable assemblies for even longer cable lengths. This architectural approach gives the equipment installer and system user the flexibility to configure each cable link as needed for different installation environments.
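
To make the choice concrete, the hypothetical selection rule below maps a required cable reach onto one of these options. The thresholds are the nominal figures from the paragraph above; actual limits depend on the acceptance criteria and the host channel.

# Hypothetical helper that picks a cabling option for an SFP+/QSFP+/CXP
# port from the required reach. The 7 m and 15 m thresholds are the nominal
# figures discussed above; actual limits depend on the acceptance criteria.

def choose_cable(reach_m: float) -> str:
    if reach_m <= 7:
        return "passive copper cable assembly"
    if reach_m <= 15:
        return "actively equalized copper cable assembly"
    return "optical transceiver module with passive fiber cabling"

for length_m in (3, 10, 50):
    print(f"{length_m:>3} m -> {choose_cable(length_m)}")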

Figure 1. QSFP+ copper cable assembly, host connector, cage and heat sink components.
Figure 2. Active optical cable (AOC) assemblies: CXP-to-QSFP+ splitter assembly (left) and CXP cable assembly (right).

With the evolution to QSFP+ and CXP for InfiniBand QDR connections in high performance computing (HPC) systems, a fourth cabling option, known as an active optical cable (AOC) assembly, has recently emerged. In an AOC, the optical fiber is terminated directly to an optical transceiver that is sealed within the metal backshell on each end of the cable assembly. The integrated electro-optical assembly helps lower cost through component reduction and presents a purely electrical interface to the outside world. These end-to-end solutions are rapidly gaining traction for HPC installations because of reduced acquisition and operating costs; elimination of interoperability, eye safety, and optical connector contamination and cleaning issues; and improved reliability.

Future Direction
Although discussions are still in the early stages within various industry organizations, preliminary indications point toward further increases in electrical channel speeds as the likely direction. Ethernet, Fibre Channel, and InfiniBand appear to be converging on 25 – 28 Gb/s as future lane speeds for high-speed I/O ports. Although much work remains to assess connector and link capabilities at these speeds before form factors and specifications can be finalized, one anticipated outcome is a further gain in port bandwidth density. For example, a more compact 4 x 25 Gb/s QSFP-like form factor might replace the initial 10 x 10 Gb/s configuration for a 100 Gb/s link.

Optical interconnects are anticipated to continue to grow in importance as technical challenges and cost considerations for 25 – 28 Gb/s copper lanes may limit the use of copper cables. Some predict that 2 – 3 meters may prove to be the maximum length for passive, direct-attached copper cables. This limitation may further spur the adoption of optical fiber within the data center, either in the form of integrated AOC assemblies or as individual transceiver modules and optical cables.

David Sideck is global market manager – high speed and power products at FCI, Inc. For more information, contact FCI, 825 Old Trail Rd., Etters PA 17319; 800-237-2374; www.fciconnect.com
