While consumers love the freedom of streaming video, there is no doubt that it makes the job of running a network more unpredictable. Planning for what content will be trending, and how it will impact peering points, servers and content delivery networks (CDNs), is getting tougher every day. Fortunately, new tools exist that make planning and troubleshooting easier. They can even monitor what’s happening on the network in real time, making it possible to react quickly enough to ensure that subscribers aren’t suffering from poor quality.
As in many areas of modern life, the past was simpler: a limited number of players created the content, distributed it, and could meticulously plan for large television events. Was the premiere of Game of Thrones coming up next week? The operator knew which locations typically accessed that content and could add peak capacity as needed. Were new markets consuming greater quantities of nightly television? Operators had the data to support targeted bandwidth additions.
Today anyone can create content and release it onto the net. Demand for specific content can go hyper-scale in the blink of an eye. From Drake joining Ninja to play Fortnite on his Twitch stream, to millions suddenly binge-watching Stranger Things on Netflix, there is an enormous range of both unplanned content and demand that makes it extremely challenging for network planners. Add to these challenges the fast growth of the Internet of Things, augmented and virtual reality applications, as well as escalating DDoS attacks, and the potential exists for bandwidth contention and a loss of quality for subscribers.
Whether you are a network operator or an over-the-top content player, it is critical for your brand that you understand the quality of experience for your end customers. A common problem for operators that are deluged with customers complaining about poor streaming quality is figuring out why. Sometimes throwing more bandwidth at the problem works, but it is hit and miss — not to mention, expensive. Analytics are required that can give the operator knowledge of what is happening on its network and the power to do something about it.
Traditionally, most analytics were only capable of spot-checking the network. Deep packet inspection, or DPI, has proven too expensive to implement holistically across the network. It also raises privacy concerns and is useless against encryption, which now covers the majority of today’s streaming video.
Luckily, the problem isn’t a lack of data. There is a wealth of network data. The problem is that it is collected in silos across multiple different systems—making it extremely arduous (or impossible) for an operator to manually combine and correlate the billions of data points amassed, organize them into a coherent report and then try to sniff out issues and do better planning.
As it turns out, advances in database technologies, such as streaming, vectorized column-store databases, now make it possible to overcome this problem. Using only publicly available data of the same sort that Google collects, we can now map out the entire internet and catalog the development, over time, of everything happening on it in ways that are tremendously useful and valuable to network operators.
With a more comprehensive data set, it is possible to map flows from source to destination. By correlating telemetry from internet endpoints, DNS requests and a host of other information sources, operators can build a historically rich analysis that can identify up to 90 percent of the traffic, encrypted or not.
By studying cloud applications and services on the internet, operators can unravel their supply chains to see what the related IP addresses are, where they’re located and how they interact. So when an IP flow reaches their network, they don’t need DPI to tell them what application or service it is, how it landed on their peering router or how it traverses their network.
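The idea above can be sketched in a few lines: if an operator has already seen which hostnames resolve to a given IP address, an encrypted flow to that address can be labeled without inspecting any payload. The data structures, sample hostnames and suffix-to-application table below are illustrative assumptions, not part of any particular product.

```python
# Hypothetical sketch: labeling IP flows by correlating them with recently
# observed DNS answers, so encrypted traffic can be classified without DPI.
# All records and mappings here are illustrative assumptions.
from collections import defaultdict

# DNS telemetry: answer IP -> set of hostnames recently resolved to it
dns_answers = defaultdict(set)

def record_dns_answer(hostname, ip):
    """Store a hostname observed resolving to an IP (from DNS telemetry)."""
    dns_answers[ip].add(hostname)

# Illustrative mapping from CDN hostname suffixes to application labels
APP_SUFFIXES = {
    "nflxvideo.net": "Netflix streaming",
    "ttvnw.net": "Twitch streaming",
}

def classify_flow(dst_ip):
    """Label a flow by its destination IP using prior DNS observations."""
    for hostname in dns_answers.get(dst_ip, ()):
        for suffix, app in APP_SUFFIXES.items():
            if hostname.endswith(suffix):
                return app
    return "unknown"

# Example: a subscriber resolved a Netflix CDN hostname, then opened a flow
record_dns_answer("ipv4-c001-example.1.nflxvideo.net", "198.51.100.7")
print(classify_flow("198.51.100.7"))  # -> Netflix streaming
print(classify_flow("203.0.113.9"))   # -> unknown
```

A production system would of course correlate far more signals (BGP origin, TLS SNI where visible, traffic-shape fingerprints) and expire stale DNS observations, but the join between DNS telemetry and flow records is the core of the approach.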
With this holistic view of the entire network in real time, operators can adjust their resources to precisely where they are needed. They can also work intelligently with partners upstream, such as content providers and cloud providers, as well as downstream, such as neighboring networks and CDNs, to adjust their own resources. Is peering point A acting as a bottleneck while peering point B is under-utilized? Would a different cloud provider better serve a content provider, such as Netflix, in an emerging market?
Detailed, holistic analytics can solve a host of other problems as well. For instance, if you are a traditional network operator or MSO, how many of your subscribers have cut the cord? Wouldn’t it be nice to be able to identify them and target them with promotions, either to lure them back or to offer them your new streaming sports channel? What apps are trending this month, and can you market micro-services to those users? Can you offer premium QoS services to certain content platforms?
On the security front, understanding in granular detail the flows in and out and across your network also makes it possible to recognize and respond to DDoS attacks much more efficiently. Instead of sending all suspected traffic to expensive scrubbers, if you understand IP flows, you can simply instruct your edge routers to block up to 90 percent of the problem traffic before it even enters your network.
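Translating flagged flows into edge filters can be as simple as rendering one deny rule per attacking source prefix. The sketch below uses a generic, made-up rule syntax for illustration; real deployments would push vendor-specific ACLs or BGP Flowspec routes instead.

```python
# Hypothetical sketch: once flow analytics flag source prefixes as DDoS
# traffic, emit per-prefix drop rules for edge routers instead of diverting
# everything to scrubbers. The rule syntax is a generic illustration only.
def build_drop_rules(flagged_prefixes, dst_port):
    """Render one deny rule per attacking source prefix, in a stable order."""
    rules = []
    for seq, prefix in enumerate(sorted(flagged_prefixes), start=10):
        rules.append(
            f"access-list 150 seq {seq} deny udp {prefix} any eq {dst_port}"
        )
    return rules

# Example: prefixes flagged as sources of an NTP (UDP/123) reflection attack
flagged = {"192.0.2.0/24", "198.51.100.0/24"}
for rule in build_drop_rules(flagged, dst_port=123):
    print(rule)
```

Dropping at the edge this way keeps attack traffic off the backbone entirely; only the residual traffic that cannot be confidently classified would still need a scrubber.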
Running a network efficiently gets more complex by the day, and the stakes are rising as more and more mission-critical applications and activities go online. Happily, network operators, content platforms, cloud providers and distributors have new tools to take the guesswork out of network planning, troubleshooting and security. Looking forward, holistic network analytics will provide the knowledge, and the power, for network operators, working with their partners, to cost-effectively optimize their networks and delight their customers.