
Addressing the data deluge challenge in mobile networks with intelligent content caching

Fri, 06/07/2013 - 3:31pm
Seong Hwan Kim, Ph.D., Technical Marketing Manager, LSI

The most recent IDC Predictions 2013: Competing on the 3rd Platform report forecasts the biggest driver of IT growth to once again be mobile devices (smartphones, tablets, e-readers, etc.), generating around 20 percent of all IT purchases and accounting for more than 50 percent of all IT market growth. Mobile devices continue to provide the ubiquitous, constant Internet access that is creating massive amounts of multimedia traffic, with video remaining the dominant component in this data deluge.

Mobile networks are struggling to keep pace with users’ seemingly unquenchable demand for faster access to more and more digital content. This dynamic is creating a “data deluge gap”: a disparity between network capacity and growing demand. Competitive pressures prevent mobile operators from making the capital investment required to close this widening gap with brute-force bandwidth, making it necessary to explore new ways of providing services more intelligently and cost-effectively.

This article explores one such technique: intelligent content caching to improve overall throughput by minimizing traffic flows end-to-end in mobile networks. 

Meeting user expectations
Before exploring content caching, it is instructive to understand the user expectations driving the data deluge gap. A recent study (reported in an Open Networking Summit presentation titled OpenRadio: Software Defined Wireless Infrastructure) found that it takes around 7-20 seconds to load a full Web page over mobile networks. On a corporate LAN or home broadband network, Web pages typically take 6 seconds or less to load. This diverging user experience adds to the perception that mobile networks are too slow. 

Meeting user expectations will become even more challenging as the amount of video and multimedia traffic increases. Cisco’s Visual Networking Index forecasts that video will constitute more than 70 percent of all network traffic in the near future. Accommodating this explosive growth, particularly during periods of peak activity, will require both more bandwidth and more intelligent use of that bandwidth, from the access network to the core.

New mobile network management solutions will need to go beyond Quality of Service (QoS) and other traditional traffic management provisions, however. The reason: while QoS can prioritize traffic flows, it can do nothing to reduce them. So as mobile networks become increasingly like content delivery networks, it will be necessary to operate them as such. And one proven technique for minimizing end-to-end traffic in content delivery networks is caching.

Intelligent content caching in mobile networks
Intelligent content caching is a cost-effective way to improve the Quality of Experience (QoE) for mobile users. The fundamental idea of intelligent caching is to store popular content as close as possible to the users, thereby making it more readily available while simultaneously minimizing backhaul traffic.

Content caching employs a geographically distributed, layered architecture, as shown in Figure 1. There are two layers of caching established by location: one at the edge or access portion of the network, and the other more centralized toward the core of the network. This model is known as hierarchical caching.


Caching at Layer 1, at the edge of the network in platforms such as the eNodeB or Radio Network Controller, requires a higher initial investment owing to the large number of access nodes involved, although the resulting bandwidth savings can justify the cost. With far fewer nodes in the core, such as a gateway node and/or a central datacenter, caching at Layer 2 requires a relatively low initial investment.

In a hierarchical caching architecture, content is cached concurrently in both layers to compound the bandwidth savings. Numerous industry studies have shown that caching at Layer 2 can reduce traffic from the mobile network core to the Internet by more than 30 percent. Caching at Layer 1 can reduce backhaul traffic from the radio access network (RAN) to the core by 30 percent or more depending on the cache hit rate, as recently reported in a Light Reading Webinar titled Extensibility: The Key to Maximizing Caching Investments.
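
A back-of-the-envelope calculation shows how the two layers compound. Using the 30 percent figures cited above, and assuming for illustration that the two hit rates apply independently, the traffic reaching the Internet drops to roughly half of the original demand:

    # Illustrative calculation of compounded savings in a two-layer cache.
    # The 30% hit rates are the example figures cited above, and the layers
    # are assumed independent; real savings depend on actual hit rates.
    demand = 1.0                    # normalized traffic demand from the RAN
    layer1_hits = 0.30              # fraction served from the edge (Layer 1)
    layer2_hits = 0.30              # fraction of the remainder served at the core

    backhaul = demand * (1 - layer1_hits)      # RAN -> core traffic
    internet = backhaul * (1 - layer2_hits)    # core -> Internet traffic
    print(f"Backhaul: {backhaul:.0%}")         # 70% of demand
    print(f"Internet: {internet:.0%}")         # 49% of demand
    print(f"End-to-end saving: {1 - internet:.0%}")  # 51%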

How intelligent content caching works
The bandwidth-reducing benefit of caching increases as the “hit rate” increases, which it inevitably does with popular content, such as a video going viral or a breaking news story. Figure 2 shows two different data paths: a “cold” path for the first time content is accessed by any user, and a “hot” path for subsequent accesses from cache by other users. This particular configuration employs an intelligent communications processor to offload the CPU for better performance, and a “flash cache” card with solid state memory. Not shown is the coordination of cached content between Layers 1 and 2.


In the cold data path, the deep packet inspection (DPI) engine matches the user’s request against the cache content table. If the content is not already cached, the processor’s classification engine passes the request to the uplink Ethernet connection so the content can be fetched from an upstream source: either the Layer 2 cache or the target site on the Internet. If the content is coming from the Internet and each cache has available capacity, it is placed in both the Layer 1 and Layer 2 caches while being delivered to the user. Intelligent algorithms continuously determine which content should be cached based on a combination of recency, popularity, and other factors.
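
One way to picture such a policy is a score that weighs how recently and how often an object has been requested. The sketch below is purely illustrative: the exponential recency decay, the one-hour half-life, and the function names are assumptions for this example, not LSI’s actual algorithm, and real policies also weigh object size and fetch cost.

    import time

    def admission_score(last_access_ts, request_count, half_life_s=3600.0, now=None):
        """Hypothetical score: higher means a better caching candidate."""
        now = time.time() if now is None else now
        recency = 0.5 ** ((now - last_access_ts) / half_life_s)  # halves every hour
        return request_count * recency

    def should_admit(candidate_score, cached_scores, cache_full):
        # Admit when there is room, or when the candidate outscores the
        # weakest entry currently cached (which would then be evicted).
        if not cache_full:
            return True
        return candidate_score > min(cached_scores)

    now = time.time()
    print(admission_score(now - 60, 5000, now=now))     # viral clip, ~4,940
    print(admission_score(now - 86400, 5000, now=now))  # day-old clip, near 0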

When a different user requests the same content, the DPI engine checks whether it has already been cached locally. If the request matches an entry in the cache content table, the processor’s classification engine directs it to the local Layer 1 cache. All subsequent requests from that user for that content are recognized directly by the classification engine and therefore require no further involvement from the DPI engine.
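
Putting the two paths together, the Layer 1 request flow can be sketched as a short, self-contained simulation. Everything here is a placeholder: in an actual deployment, the DPI and classification steps run in dedicated hardware engines, not Python dictionary and set lookups.

    # Minimal simulation of the cold and hot paths described above.
    class Layer1Cache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.content = {}   # content_id -> cached data (the content table)
            self.flows = set()  # (user, content_id) pairs the classifier knows

        def handle(self, user, content_id, fetch_upstream):
            if (user, content_id) in self.flows:      # classifier fast path
                return self.content[content_id], "hot (classification engine)"
            if content_id in self.content:            # DPI hit in content table
                self.flows.add((user, content_id))    # bypass DPI next time
                return self.content[content_id], "hot (DPI lookup)"
            data = fetch_upstream(content_id)         # cold path: Layer 2/origin
            if len(self.content) < self.capacity:     # admit while delivering
                self.content[content_id] = data
            return data, "cold (fetched upstream)"

    cache = Layer1Cache(capacity=1000)
    origin = lambda cid: f"<{cid} bytes>"
    print(cache.handle("alice", "video42", origin)[1])  # cold (fetched upstream)
    print(cache.handle("bob", "video42", origin)[1])    # hot (DPI lookup)
    print(cache.handle("bob", "video42", origin)[1])    # hot (classification engine)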

While many of the content caching solutions available today use x86 or other general-purpose CPUs to perform traffic inspection, this approach is poorly suited to a Layer 1 cache, which must meet strict low-power and low-cost requirements. Offloading the CPU with an intelligent communications processor equipped with purpose-built acceleration engines, as depicted in Figure 2, can yield up to a five-times improvement in performance.

The problem with using general-purpose CPUs for packet-level processing is that critical, real-time tasks like traffic inspection are often performed only at the port level. Because many applications use HTTP as a transport, port-level inspection cannot distinguish among them, and this lack of application awareness in the traffic flows hinders efficient content management. So while a general-purpose CPU programming model makes software development easier, it can leave CPU resources overwhelmed and deliver poor performance per watt and per dollar.
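
A toy example makes the limitation concrete. Both of the hypothetical flows below arrive on TCP port 80, so a port-level rule sees a single application; reading the HTTP Host header, a minimal stand-in for what a DPI engine does, tells them apart:

    # Two hypothetical flows, both on port 80 (hosts are made up for this example).
    flows = [
        {"port": 80, "payload": "GET /watch?v=abc123 HTTP/1.1\r\nHost: video.example.com\r\n\r\n"},
        {"port": 80, "payload": "GET /api/v1/update HTTP/1.1\r\nHost: updates.example.com\r\n\r\n"},
    ]

    def classify_by_port(flow):
        # Port-level view: everything on port 80 is just "http".
        return {80: "http"}.get(flow["port"], "unknown")

    def classify_by_payload(flow):
        # Minimal stand-in for DPI: read the HTTP Host header.
        for line in flow["payload"].split("\r\n"):
            if line.lower().startswith("host:"):
                return line.split(":", 1)[1].strip()
        return "unknown"

    for f in flows:
        print(classify_by_port(f), "->", classify_by_payload(f))
    # http -> video.example.com
    # http -> updates.example.com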

By contrast, the hardware acceleration engines in purpose-built System on Chip (SoC) communications processors provide much deeper application-level awareness in real time, which is critical in broadband 3G and 4G mobile networks. The SoC design also provides superior throughput performance while consuming less power.

The use of solid state storage in purpose-built, small form factor flash cache acceleration cards similarly maximizes performance with minimal power consumption compared to caching in memory or on hard disk drives. A Vodafone “Typical Data Usage” chart shows that a 4-minute YouTube video is about 11 MB of content, for example, while streaming a 30-minute TV episode represents about 90 MB of data. A flash cache acceleration card with 512 GB of capacity would, therefore, be capable of storing roughly 46,000 of these video clips or nearly 5,700 of the half-hour streaming videos.
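
The arithmetic behind those estimates is straightforward (treating 1 GB as 1,000 MB for simplicity):

    # Capacity arithmetic using the Vodafone figures cited above.
    CARD_GB = 512
    CLIP_MB = 11        # ~4-minute YouTube video
    EPISODE_MB = 90     # ~30-minute streamed TV episode

    print(CARD_GB * 1000 // CLIP_MB)     # 46545 -> roughly 46,000 clips
    print(CARD_GB * 1000 // EPISODE_MB)  # 5688  -> nearly 5,700 episodes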

Conclusion
Intelligent content caching affords three major benefits that together help close the data deluge gap. First, by reducing latency, user QoE is improved dramatically, even under heavy loads, resulting in more satisfied users.  Second, by distributing the total load more evenly from the edge to the core, overall network throughput can be optimized.  Third and perhaps most importantly, profitability is increased through a combination of more revenue from satisfied users and better utilization of available backhaul bandwidth. 

These benefits can all be maximized by using solutions purpose-built for the special needs of mobile networks. The use of specialized mobile communications processors that combine multiple CPU cores with multiple hardware acceleration engines—all on a single integrated circuit—results in maximum performance with minimal power consumption. Dedicated and standards-based flash cache acceleration cards provide both the performance and versatility needed to optimize the configuration of a hierarchical caching architecture. 

It bears repeating: As mobile networks become increasingly like content delivery networks, it will be necessary to operate them as such. And intelligent content caching is a proven technique for delivering content more quickly and cost-effectively.


About the author
Seong Hwan Kim is a Technical Marketing Manager for the Networking Solutions Group at LSI Corporation. He has close to 20 years of experience in computer networks and digital communications. His expertise spans enterprise networking, network and server virtualization, SDN/OpenFlow, cloud acceleration, wireless communications, and QoS/QoE management. He holds several patents in networking. His work has appeared in numerous publications, including IEEE Communications and Elsevier magazines, and he has presented at several industry conferences.

Seong Hwan Kim holds a Ph.D. in Electrical Engineering from the State University of New York at Stony Brook and an MBA from Lehigh University.
