We are living in an exciting time of change. Over the last few years, the era of cloud computing has brought profound changes to the way we conduct our day-to-day lives. We are also seeing significant changes in the way traditional high performance computing (HPC) applications are developed and deployed. These changes are happening at a rapid pace, and cloud computing and HPC are converging along many dimensions. This article examines some of the dynamics of these changes and addresses potential next steps for the future.

Before we dig deep into the subject of cloud computing and HPC convergence, let us establish a common understanding of both terms.

In the most basic sense, cloud computing is a resource that can be addressed and utilized by multiple authorized clients using internet protocols commonly defined and understood by standards bodies. There are two distinct use cases. The first is Amazon’s Elastic Compute Cloud (EC2) framework, which enables multiple clients to use computing resources at a competitive rate. The second is the set of services provided by major search engines, online banking portals and other web-based portal services. Mobile adoption has contributed significantly to the rise of cloud computing, and the next wave of change is the Internet of Things (IoT), which will bring profound changes that we examine in detail later on.

Traditional HPC applications include weather modeling, financial transaction processing and nuclear detonation modeling; the HPC community has even attempted to model dark matter [1]. These applications demand more computing power than any single processor can provide, so the application is divided across many computing nodes, each consisting of one or more processors. Most HPC applications require not only massive compute power and memory footprints but also sophisticated technology to drive communication between the processors. Traditionally built on homogeneous processor architectures, this industry is undergoing massive change with the adoption of heterogeneous processor architectures. Companies such as Texas Instruments (TI) have introduced heterogeneous SoCs; for example, TI’s 66AK2Hx provides significant power efficiency as well as better total cost of ownership through superior integration.
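To make the node-level decomposition concrete, here is a minimal sketch of the divide-and-combine pattern, using Python's multiprocessing module as a stand-in for a multi-node cluster. The function names and the sum-of-squares workload are purely illustrative, not taken from any real HPC code:

```python
from multiprocessing import Pool

def process_chunk(chunk):
    # Each "node" works independently on its own slice of the problem domain.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Divide the domain into roughly equal chunks, one per worker.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        partials = pool.map(process_chunk, chunks)
    # Combine the partial results, analogous to a reduction step on a cluster.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1_000_000))))
```

On a real cluster the chunks would live on separate machines and the combine step would run over an interconnect (for example via MPI), but the divide/compute/reduce shape is the same.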

Now that we have a baseline for both technologies, let us look at the emerging convergence between cloud computing and HPC. This convergence is driven by the fact that the cloud computing of tomorrow needs to adopt technologies that have been used and perfected over the years in the HPC domain; similarly, the HPC community can gain momentum by using cloud computing technologies. Market forces drive the convergence through:

1. The desire to understand consumer behavior and monetize it. We have seen technologies such as Hadoop use the MapReduce algorithm for wide-scale consumer behavior analysis.

2. The desire to reduce the cost of HPC investment. Deploying an HPC cluster has traditionally required multimillion-dollar investments, and there is a growing realization that web-based resources can be used to deploy HPC tasks instead. Nimbix, a provider of cloud-based HPC infrastructure and applications, has announced products for enabling HPC using the Nimbix Accelerated Cloud Compute with DSP-based acceleration for specific workloads.
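The MapReduce pattern behind the Hadoop-style behavior analysis mentioned in the first driver can be sketched in a few lines. The click-stream records and field names below are invented for illustration; a real Hadoop job would run the same map and reduce phases distributed across a cluster:

```python
from collections import defaultdict
from itertools import chain

# Hypothetical click-stream records: (user_id, product_category)
events = [
    ("u1", "books"), ("u2", "electronics"), ("u1", "books"),
    ("u3", "books"), ("u2", "garden"), ("u3", "electronics"),
]

def map_phase(record):
    # Map: emit a (key, 1) pair for each event, keyed by category.
    user, category = record
    return [(category, 1)]

def reduce_phase(pairs):
    # Reduce: sum the counts for each key.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

pairs = chain.from_iterable(map_phase(r) for r in events)
print(reduce_phase(pairs))  # {'books': 3, 'electronics': 2, 'garden': 1}
```

Because each map call and each per-key reduction is independent, the framework can spread the work over thousands of machines, which is what makes this pattern attractive for consumer-scale analysis.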

Let us examine the first market driver and its real-world manifestation. During the Supercomputing Conference last year in Denver, I attended a birds-of-a-feather session titled “Defining Big Data: Industry Views on Real-Time Data, Analytics, and HPC Technologies to Bring Them Together.” What caught the audience’s attention was that the session demonstrated the emerging convergence between cloud computing players and the high performance computing community. The talk was led by PayPal, Twitter, Google, Map-D, Bank of America and Oak Ridge National Laboratory, and focused on opportunities for HPC in the industry’s emerging real-time decision making and analytics for activities like fraud detection, dynamic pricing and web analytics. It was clear from the session that leading industry players are looking to adopt HPC technologies to enable a better cloud experience.

Another potential convergence between the cloud computing and HPC worlds is emerging in the area of IoT. In a paper published by Ericsson [2], the company estimated that by 2020 we will have 50 billion connected devices. The paper predicts that three billion people will be connected over the Internet, and at 10 devices each they will contribute close to 30 billion devices. Beyond personal devices, equipment such as electric meters will be connected as well, meaning that industrial hardware will also come onboard. It will take about 100 billion processors to support an IoT deployment at this scale. While the first wave will focus on deployment challenges, the next wave of analytics will pose interesting compute challenges that can draw on commonly used HPC techniques.
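The device estimate cited from the Ericsson paper can be checked with simple arithmetic:

```python
# Back-of-the-envelope check of the connected-device projection cited above.
people = 3_000_000_000       # people projected to be online by 2020
devices_per_person = 10      # phones, tablets, wearables, etc.

personal_devices = people * devices_per_person
print(personal_devices)      # 30 billion personal devices

# The remaining ~20 billion of the projected 50 billion total would come
# from meters, sensors and other industrial equipment.
industrial_devices = 50_000_000_000 - personal_devices
print(industrial_devices)    # 20 billion
```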

It is clear, then, that an interesting convergence is emerging between the cloud computing and HPC technology streams. HPC workloads are looking at cloud computing as a massive cluster with the capacity to run compute-intensive applications, while cloud computing is adopting HPC techniques to solve emerging analytics, fraud detection and other customer collaboration challenges. Such novel workloads cannot be implemented efficiently on homogeneous processors alone. Developers need to consider platforms such as TI’s 66AK2H, which integrates the compute-efficient TMS320C66x DSP with general purpose ARM® processors along with high speed interconnect technologies such as sRIO or 10 GbE.
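As a rough illustration of the heterogeneous split argued for above, the sketch below routes a signal-processing kernel to a simulated DSP-style path and bookkeeping to a CPU path. All function names are hypothetical and the "accelerator" is plain Python, not a vendor runtime; on a real 66AK2H-class part the kernel would run on the DSP cores via the platform's offload tooling:

```python
def dsp_fir_filter(samples, taps):
    # Stand-in for a signal-processing kernel a DSP core would accelerate:
    # a direct-form FIR filter, y[i] = sum_j samples[i-j] * taps[j].
    n = len(taps)
    return [
        sum(samples[i - j] * taps[j] for j in range(n) if i - j >= 0)
        for i in range(len(samples))
    ]

def cpu_control_path(results):
    # General-purpose bookkeeping that the ARM cores would handle.
    return {"samples": len(results), "peak": max(results)}

def dispatch(task, payload):
    # Route each task to the processing unit best suited to it.
    if task == "filter":
        return dsp_fir_filter(*payload)
    return cpu_control_path(payload)

filtered = dispatch("filter", ([1.0, 2.0, 3.0, 4.0], [0.5, 0.5]))
print(dispatch("summarize", filtered))  # {'samples': 4, 'peak': 3.5}
```

The design point is the dispatch step: the compute-dense, regular kernel goes to the accelerator while branchy control logic stays on the general purpose cores, which is the division of labor heterogeneous SoCs are built around.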