Boundaries Between HPC And Cloud Computing Vanishing

Posted on Friday, Oct 8th 2010

By guest author Shaloo Shalini

High-performance computing (HPC) just might be on the verge of a colossal paradigm shift, in terms of both technology and market. The erstwhile islands of large-scale supercomputing facilities may soon become outdated, thanks to the inroads cloud computing has made into HPC territory in addressing issues of scale, access, and affordability.

But let’s be realistic: This is not entirely true for the top-notch supercomputing centers or the high-end tier of HPC setups dealing with specialized research for government, defense, and other performance-sensitive workloads. However, some of the academic research, pure sciences, weather modeling, automobile design simulation, aerospace, energy, life sciences, and the newer class of business intelligence (BI) and analytics applications may see the maximum benefit from this merger of the HPC and cloud paradigms.

What is helping this merger of sorts between the mid and low end of the HPC world and cloud computing? Let’s look at some of the technology and market facilitators in this context.

First is technology advancement: newer chips with multiple cores, better chip-to-chip interconnects, direct memory access, and improved energy efficiency. GPGPU computing has arrived, and chipmakers such as NVIDIA are pushing to commoditize these smart processors so that they cater not only to the specialized HPC segment but also to the ever-increasing volumes of data behind analytics and BI applications, and to the speed and agility needed for graphics, multimedia, and digital media. The first part of this HPC transformation was led by x86 adoption – more than 81% of the Top500 list uses Intel processors.

Second is the ease with which some HPC setups can now leverage “cloud bursting” to meet occasional needs for additional scale, drawing on computing capacity on demand without having to tear apart their existing, expensive setups or upgrade to newer systems. IaaS offerings such as Amazon’s EC2 cluster compute instances, or supercomputing-on-demand services from IBM and its partners, let them achieve this cost-effectively.
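To make the bursting idea concrete, here is a minimal, vendor-neutral sketch of the placement decision: run jobs locally while slots remain, and overflow the rest to rented cloud capacity. The job names, slot count, and `schedule` helper are hypothetical illustrations, not any provider’s API.

```python
# Minimal cloud-bursting sketch (illustrative only): jobs beyond local
# cluster capacity are routed to an on-demand cloud pool. Real schedulers
# weigh cost, data locality, and queue wait times as well.

def schedule(jobs, local_slots):
    """Assign each job to 'local' while slots remain, else to 'cloud'."""
    placement = {}
    used = 0
    for job in jobs:
        if used < local_slots:
            placement[job] = "local"
            used += 1
        else:
            placement[job] = "cloud"  # burst: rent capacity on demand
    return placement

print(schedule(["sim1", "sim2", "sim3", "sim4"], local_slots=2))
# → {'sim1': 'local', 'sim2': 'local', 'sim3': 'cloud', 'sim4': 'cloud'}
```

In practice, the threshold would be driven by queue depth and deadline pressure rather than a fixed slot count, but the shape of the decision is the same.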

Last, the emergence of SaaS flavors of some HPC applications – in manufacturing, simulation and mathematical modeling, and financial analysis – is increasingly shaping HPC users’ interest and purchasing decisions.

However, there are roadblocks: a large percentage of HPC applications are I/O intensive, and data access in the cloud needs to be secured. Until smart entrepreneurs and innovators deliver breakthroughs in the way HPC application data is staged and moved around a cluster that extends into the cloud, such applications will be sitting tight in traditional HPC centers or, optimistically, in highly specialized private HPC clouds.
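A quick back-of-the-envelope calculation shows why I/O-heavy workloads resist bursting: moving the data can dominate the run. The dataset size and link speed below are assumed figures for illustration, not measurements from any deployment.

```python
# Back-of-the-envelope check (illustrative numbers): bursting an
# I/O-heavy job only pays off if staging the data does not dominate
# the total runtime.

def transfer_hours(dataset_gb, link_mbps):
    """Hours to move dataset_gb over a link_mbps WAN connection."""
    megabits = dataset_gb * 8 * 1000  # GB -> megabits (decimal units)
    return megabits / link_mbps / 3600

# A 500 GB staging set on a 100 Mbps link:
print(round(transfer_hours(500, 100), 1))
# → 11.1 (hours just to stage the data, before any computation starts)
```

If the job itself finishes locally in a few hours, bursting it makes no sense; this is why data staging, not raw compute, is the gating factor for many HPC-in-the-cloud candidates.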

There are many initiatives across the globe that herald this merging of HPC and cloud boundaries. Here are a few that I found interesting in this context:

  • CERN, the European Organization for Nuclear Research, a nuclear and particle physics research center, is developing a mega computing cloud to distribute data to scientists around the world as part of the LHC project.
  • Wolfram’s Mathematica offering provides some of the key modeling and simulation used for manufacturing product lifecycles, but in an on-demand application form based on R Systems.
  • In Europe, researchers are experimenting with attaining “scalability-on-demand” in their erstwhile islands of HPC clusters via SARA.
  • Amazon EC2 compute clusters – a customer case study.
  • SGI Cyclone supports computational fluid dynamics, finite element analysis, computational chemistry and materials, computational biology, and ontologies through its IaaS and SaaS offerings.
  • Harvard Medical School’s IT department runs an on-demand cloud to augment individual research cluster initiatives.
  • Nonvirtualized server-based Penguin on Demand (POD) HPC Service.
  • Pharma manufacturing: Eli Lilly is working to have up to 10 HPC applications “cloud enabled” by the end of the year and to roll out release 1.0 of its self-service utility computing offering, known internally as its vending-machine computing environment.

In order to gauge the HPC industry’s point of view, I had a conversation with Phillip Morris, CTO of Platform Computing. He mentioned that CERN and a couple more Platform customers have already started deploying some of their business-critical HPC workloads in a cloud computing model. There are others in the life sciences, financial services, and EDA (chip design and development) segments working with a combination of Platform ISF, or ISF plus LSF, but these are not yet in production.

On the topic of HPC workload characteristics that lend themselves well to a cloud model, Phil says, “A good example is, we have a media customer, a visual content customer – 3-D movies and those kinds of things – where for periods of perhaps two or three months they have huge workloads for their creators’ infrastructure. For other periods, it is just basic design and architecture of the standard context of the frames they are going to be putting together, visual drops, etc., that they can do on a local infrastructure. They have a partner, called Cerelink, that actually has a supercomputer with a lot of rendering software already on it. For periods of time, they burst from their internal cluster out to the supercomputer to do a lot of frame rendering, get that information back into the local cluster, and integrate that into the movie overall.”

As cloud computing embraces HPC, will it make the Top500 list of supercomputing sites? I don’t know yet, but I wouldn’t be surprised if it did someday. Meanwhile, vendors need to deal with early adopters’ pain points, provide HPC-friendly pricing models to address scale, and offer better interconnects or nodes provisioned with larger physical memory to overcome slower networks. If you are looking for more technology insights, or have some to share with the world, in the context of HPC in the cloud, check out the First International Workshop on Data Intensive Computing in the Clouds (DataCloud2011).
