The movement to tap silicon photonics as an underlying technology for higher-performance, lower-power data centers is well underway. In fact, fast interconnects have become a pacing factor for the industry, particularly as data centers become the backbone of cloud computing, which continues to accelerate with market demand.

According to Cisco’s recent Global Cloud Index, cloud traffic is forecast to grow 600 percent by 2016. Data center traffic is now measured in zettabytes rather than exabytes. As a point of reference, one zettabyte is roughly equivalent to a trillion hours of online high-definition video streaming. A figure of that scale reflects the massive amount of behind-the-scenes processing that cloud computing performs in large data centers.

With traffic of this magnitude, the biggest challenge facing large data centers today is how to scale. How does cloud computing support 600 percent more traffic cost-effectively? How does a data center search for a face among a vast number of photos and videos stored on disks located either in the next building or halfway across the world? How do data centers perform all of the storage, processing, and computing for cloud users? Part of the answer lies in better data center architectures, virtualization, and Software-Defined Networking (SDN). But in the end, there is no alternative to increasing the physical speed of the optical fabric connecting all of the switches and routers in the data center.

Today, almost all cloud traffic runs through the data center on 10 Gb/s fabrics. Over the last three years, server blades have largely migrated from 1 Gb/s ports to 10 Gb/s ports, so the traffic from server blades to the top-of-rack switch is almost all 10G. Switches and routers largely communicate with each other over 10G networks, with some high-capacity links at 40G or even 100G. To keep up with demand, switches have steadily increased the number of 10G ports; some advertise 48 SFP+ ports, the likely maximum, in a single 1U switch. So how does this architecture scale to 100 Gb/s and beyond?

One innovative option is to use 100 Gb/s transceivers based on silicon photonics chips. Because all of the functions are integrated in silicon, they are very small. For example, a four-channel laser array, four 25G modulators, and a wavelength-division multiplexer can all fit on a chip no bigger than 4 mm x 7 mm. Likewise, on the receiver side, integrated waveguides, a demultiplexer, and four 25G germanium detectors can fit on a chip of similar size. These chips can easily be packaged in the most common four-channel package, the Quad Small Form-factor Pluggable (QSFP).

The QSFP package shown in Figure 2 was designed to fit four channels in roughly the same space as a single-channel SFP package. It is widely used at slower channel speeds, but not for 100 Gb/s, because the maximum heat a QSFP package can dissipate is only 3.5 W. Most of the early 4x25G (100 Gb/s) solutions not using silicon photonics required much larger packages to accommodate the myriad components and dissipate the heat they generated. The first-generation CFP (C Form-factor Pluggable, where the "C" is the Roman numeral for 100) was 16 square inches and allowed up to 32 W of power. Only four of these would fit across the front panel of a switch, so the total bandwidth of the switch actually decreased with 100G ports: 48 ports at 10 Gb/s provide 480 Gb/s, while four 100 Gb/s ports provide only 400 Gb/s. The second-generation CFP2 was an improvement at 12 W and half the size. But this is still a long way from a QSFP package, which can increase the bandwidth over the CFP ports by a factor of 10.
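The front-panel arithmetic above can be sketched in a few lines. The SFP+ and CFP port counts come from the text; the 32-port QSFP faceplate is an assumption for illustration (a common 1U layout), not a figure from the article.

```python
# Front-panel bandwidth per 1U switch for different transceiver form factors.
# SFP+ and CFP counts are from the text; the QSFP count is an assumption.
form_factors = {
    # name: (ports per 1U faceplate, Gb/s per port)
    "SFP+": (48, 10),
    "CFP":  (4, 100),
    "QSFP": (32, 100),  # assumed 32-port faceplate
}

totals = {name: ports * gbps for name, (ports, gbps) in form_factors.items()}

for name, (ports, gbps) in form_factors.items():
    print(f"{name:5s}: {ports:2d} ports x {gbps:3d} Gb/s = {totals[name]} Gb/s")
```

Under these assumptions, moving from four CFP ports to a faceplate of QSFP ports multiplies the switch's aggregate bandwidth several times over, which is why the package size matters as much as the per-port speed.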

Silicon photonics solutions can use the QSFP package because they are small and low power. The low drive voltages of the 25G modulators allow for low-power CMOS drivers; the responsivity and efficiency of the 25G detectors allow for low-power CMOS trans-impedance amplifiers (TIAs). Eliminating thermoelectric coolers (TECs) removes another typical source of power consumption. In fact, silicon photonics solutions can consume less than 3.5 W while supporting distances of up to 2 km, more than the vast majority of data center links require.

There are other advantages to silicon photonics. Because the chips are fabricated on the same CMOS wafers as electronic chips, they are low cost. Silicon photonics chips are processed using mask layers in the same foundries as electronics wafers. Just like traditional wafers, silicon photonics wafers are diced into chips and packaged. Optical chips fabricated in this manner can be just as inexpensive as their electrical cousins, and when high volumes are needed, the wafer fab simply runs more wafers of the same recipe.

Silicon photonics also eliminates the need, and the expense, of hand-assembling hundreds of piece parts. Silicon photonics chips are far smaller than the optical subassemblies they replace. Other optical solutions, assembled from discrete components, are packaged in expensive, hermetically sealed packages; with these solutions, even a speck of dust between components can block the light path and render the product useless. By contrast, silicon photonics devices are entirely self-contained within the layers of the chip. With no need for hermeticity, they can reuse low-cost, industry-standard electronics packaging.

A huge advantage of optical communication is the ability to carry parallel channels over the same optical fiber using different wavelengths of light. This technique, called Wavelength Division Multiplexing (WDM), has no equivalent in the electrical domain. With WDM, four, eight, or even 40 channels of light, each at a different wavelength, can share a single strand of optical fiber. Fiber is low cost, especially when a single strand replaces so many copper cables, so for large pipes, optical interconnect is far less expensive than copper cabling.

The future of silicon photonics is bright. By integrating more channels, future versions will support up to 400 Gb/s and even 1.6 Tb/s on a single chip. The 400 Gb/s configuration can be achieved with either 16 WDM lanes operating at 25 Gb/s each or 8 WDM lanes operating at 50 Gb/s. The key optical components, the high-speed modulators and detectors, are already capable of 50 Gb/s operation and could support either approach. Scaling to 1.6 Tb/s will likely be accomplished with 32 WDM channels, each operating at 50 Gb/s. So when cloud computing requires that the high-speed data center fabric move to 400G and 1.6 Tb/s, silicon photonics solutions will be ready.
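The lane-count arithmetic behind those aggregate rates is straightforward; the configurations below are the ones named in the text.

```python
# Aggregate rate = WDM lane count x per-lane rate, for the configurations
# described in the text.
configs = {
    "100G (4 x 25G)":  (4, 25),
    "400G (16 x 25G)": (16, 25),
    "400G (8 x 50G)":  (8, 50),
    "1.6T (32 x 50G)": (32, 50),
}

aggregates = {name: lanes * gbps for name, (lanes, gbps) in configs.items()}

for name, total in aggregates.items():
    print(f"{name}: {total} Gb/s aggregate")
```

Note that both 400G options reach the same aggregate rate; the trade-off is between more lasers and multiplexer channels (16 x 25G) versus faster per-lane electronics (8 x 50G).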

With cloud computing traffic growing at a 44 percent compound annual growth rate (CAGR), the need for faster networks is imminent. We may not want to watch the trillion hours of HD video streaming in a zettabyte, but we do want the cloud to find that lost clip of the band we saw in high school.
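As a back-of-the-envelope check, a 44 percent CAGR compounded over a five-year forecast window (an assumption; Cisco's Global Cloud Index forecasts spanned roughly five years) works out to about a sixfold increase, consistent with the growth figure cited at the start of this article.

```python
# Compound a 44% annual growth rate over an assumed five-year window.
cagr = 0.44
years = 5
multiple = (1 + cagr) ** years
print(f"{multiple:.2f}x traffic after {years} years at {cagr:.0%} CAGR")
# roughly 6.2x, i.e. on the order of a sixfold increase
```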