First, I’d like to offer kudos to the HPC community for tackling some of the largest and most complex problems known. They are unsung heroes in so many aspects of our everyday life – for example, have you ever wondered how cars continue to get safer and more efficient each year? (Hint: manufacturers use lots of computers to model and simulate scenarios to improve safety and efficiency.) Similar techniques are used to uncover new medicines, forecast weather, identify new energy sources and predict future environmental impacts, to name just a few. Then there’s ‘Big Data’, which applies HPC-like techniques to mine the ever-increasing sources and quantities of unstructured data (search queries, social media, financial transactions, crime reports, live traffic, smart meters, etc.) for seemingly unrelated but extremely interesting (read: valuable) patterns and insights.
To tackle a large project, you typically break it down into smaller, manageable chunks. In the case of HPC and Big Data, that means decomposing and distributing data across many servers (think hundreds, and in some cases thousands or even tens of thousands), then collecting and consolidating the results into an overall ‘solution.’ Today, this is typically performed using a technique such as MapReduce, enabled by software from companies like Cloudera, Datastax, MapR and Pervasive running on a cluster of general-purpose servers connected via high-performance networks. Often the compute requirements are somewhat modest relative to the enormity of the data, meaning unimpeded data movement is fundamental to overall efficiency.
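As a tiny, single-process illustration of the MapReduce pattern that frameworks like those above implement at scale, here is a sketch in Python (the function names are my own, not from any particular framework). The map, shuffle and reduce phases shown here are exactly the ones a cluster framework distributes across servers – which is why data movement between phases dominates the cost on real deployments.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit (key, value) pairs -- here, (word, 1) for each word."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group values by key. On a cluster this step moves data
    between servers over the network, which is why interconnect
    bandwidth and latency matter so much."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: consolidate each key's values into one result -- here, a sum."""
    return key, sum(values)

# Toy 'distributed' input: each string stands in for a data partition.
documents = ["big data meets HPC", "big clusters move big data"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts["big"], counts["data"])  # 3 2
```

In a real deployment each `map_phase` runs where its data partition lives, and the shuffle is the network-heavy step that a balanced server-plus-fabric design is meant to keep unimpeded.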
With that as a backdrop, think for a moment – “how would you architect highly efficient servers for this purpose if you had a clean slate?” ARM’s business model enables innovative companies the freedom and choice to do just that, resulting in highly efficient and targeted solutions.
As stated before, one size no longer fits all.
Achieving a step function in efficiency often requires new thinking. In the case of data-intensive computing, re-balancing or ‘right-sizing’ the solution to eliminate bottlenecks can significantly improve overall efficiency. That’s exactly what Calxeda has done with its EnergyCore™ ECX-1000 series processor. By combining a quad-core ARM® Cortex™-A series processor with a topology-agnostic integrated fabric interconnect (providing up to 50Gbit/s of bandwidth at latencies of less than 200ns per hop), Calxeda can eliminate network bottlenecks and increase scalability. EnergyCore also includes all the traditional server I/O, memory and management interfaces you would expect. This ‘just add memory’ server-on-a-chip approach means servers can literally be credit-card sized and operate at a power-sipping 5W of total power, making huge density increases possible.
See Calxeda’s website for more details on the Calxeda EnergyCore ECX-1000 SoC.
With all this innovation, it’s easy to get caught up in the hardware, but we also need to recognize that software plays an important role here. While the ecosystem is coming together quite nicely with Canonical’s Ubuntu Server 12.04 LTS release and various open-source libraries already available, there’s still much work ahead. As of today, the fundamental pieces are in place to begin doing useful work, and key software partners are already engaged with Calxeda on early-access hardware. Forthcoming availability of ARM processor-based server systems from HP and other OEMs will accelerate the next phase of software ecosystem development.
If you’re at ISC’12 this week and want to know more, be sure to visit Calxeda at booth #410, and check out Karl Freund’s speaking session on the show floor on Tuesday, June 19th at 4:15pm. If you’re not at ISC’12, we’ll also be at SC’12 in November (booth #122). But trust me, you don’t want to be left waiting until then! There are plenty of other opportunities throughout June (including GigaOM Structure 2012 in San Francisco), and we’ll be announcing more opportunities to meet the Calxeda and ARM teams in the near future, so be sure to watch this space!
Jeff Underhill, Server Segment Marketing Manager, ARM, is based in Silicon Valley. After spending 10+ years working in the traditional server market, Jeff saw an opportunity to revisit server design and redefine an industry. ARM’s business model enables innovative companies the freedom and choice to ask themselves “how would I architect highly efficient servers if I had a clean slate?” Consequently, he is helping drive ARM’s server program with a view to redefining the boundaries of traditional servers, as opposed to simply replacing incumbent platforms.