- Written by Ryan Quick
"The time has come," the Walrus said, "to talk of many things."
Lewis Carroll got it right. Microservices architectures deliver on many of the promises that additive software design originally made for object-oriented programming, along with the loose-coupling benefits pioneered by messaging middleware. In the HPC community, however, monolithic software architectures still reign supreme. While we do not dispute that a highly optimized central code will deliver amazing performance, we do contend that modern supercomputers make programming these sorts of applications correctly very difficult.
We think the time has come to reset the clock. Rather than measure the performance of a "run", we think the right time measurement for software development is from the time the idea forms until the computer starts producing answers to your question. This talk is a roadmap on how to bring microservices architectures to bear on traditional HPC problems, with an eye towards availability, resiliency, and performance as equal requirements on the system design.
You can watch the original here: https://insidehpc.com/2018/02/high-availability-hpc-microservice-architectures-supercomputing/
- Written by Ryan Quick
This is really just a teaser deck to introduce some of the work we're doing as part of our new partnership with Data Vortex Technologies. We have believed for some time that some of the best applications of HPC are in places outside of HPC. So that's exactly what we're doing here. We're going to see if the DV can knock the socks off the open-source messaging world by giving some chops to RabbitMQ.
Stay tuned -- we're publishing a paper on this in April, but we didn't want to let anyone who wanted to see the slides from Supercomputing Frontiers Europe 2018 miss out.
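As background for the benchmarking idea above, here is a minimal sketch of the kind of publish-throughput measurement one might run against RabbitMQ. This is our illustration only, not the methodology from the paper; the queue name, message size, and count are hypothetical, and it assumes a broker on localhost plus the third-party `pika` AMQP client.

```python
# Minimal RabbitMQ publish-throughput sketch (hypothetical parameters).
import time

def msgs_per_sec(count, elapsed):
    """Throughput helper: messages published per second."""
    return count / elapsed if elapsed > 0 else float("inf")

def run_bench(n=10_000):
    """Publish n small messages to a local broker and report msg/s."""
    import pika  # third-party AMQP client; broker assumed on localhost
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue="dv-bench")  # hypothetical queue name
    payload = b"x" * 256  # small fixed-size message body
    start = time.perf_counter()
    for _ in range(n):
        ch.basic_publish(exchange="", routing_key="dv-bench", body=payload)
    elapsed = time.perf_counter() - start
    conn.close()
    return msgs_per_sec(n, elapsed)

if __name__ == "__main__":
    print(f"{run_bench():.0f} msg/s")
```

A real comparison would of course also measure end-to-end latency and consumer-side throughput, but the publish loop above is the usual starting point.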
- Written by Ryan Quick
One of the questions we get most often is about learning. Whether you're interested in machine learning, deep learning, or artificial intelligence, the base question everyone wants answered is the same: What do I need to buy in order to get immediate ROI and long-term value from my project? It's followed almost immediately by the next question: Which of the frameworks do I need to use to get started? I don't have a lot of machine learning experts on staff...
We hear you ... unfortunately the answers are probably more complex than you'd like. But we have put together a set of slides which aim to answer these questions in the generic sense and cover our view of the outlook in this space for the next 3-5 years. We put this together initially for the HPC User Forum conference in Beijing (2016), then revised and updated it for the National Geospatial-Intelligence Agency in early 2017. But as we said, the work is for the generic case -- give it a read, contact us, and we'll help you with the specifics for your organization.
PDF: HPC in Machine/Deep Learning: Practical Deployments and Best Practices for 2017-2019
- Written by Arno Kolster
Machine Learning at HPC User Forum: Drilling into Specific Use Cases
September 22, 2017 by Arno Kolster
The 66th HPC User Forum held this month in Milwaukee focused on the latest trends in modern computing – deep learning, machine learning and AI – and some common themes became obvious: first, that ML and DL are currently focused on specific, rather than general, use cases, and second, that ML and DL need to be part of an integrated workflow to be effective.
This was exemplified by Dr. Maarten Sierhuis from Nissan Research Facility Silicon Valley with his presentation “Technologies for Making Self-Driving Vehicles the Norm.” One of the most engaging talks, Dr. Sierhuis’s multi-media presentation on the triumphs and challenges facing Nissan while developing its self-driving vehicle program showcased that machine and deep learning “drives” the autonomous vehicle revolution.
The challenge that Nissan and other deep learning practitioners face is that current deep learning algorithms are programmed to learn to do one thing extremely well – the specific use case: image recognition of stop signs, for example. Once an algorithm learns to recognize stop signs, the same amount of discrete learning must be applied for every other road sign a vehicle may encounter. To create a general-purpose "road sign learning algorithm", not only do you need a massive amount of image data (in the tens of millions of varied images), but also the compute to power the learning effort.
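The per-class cost described above can be illustrated with a deliberately tiny toy (emphatically not Nissan's actual system, and not deep learning): a nearest-centroid classifier trained only to separate stop signs from background. The hand-made feature vectors and class names here are our own invention for illustration.

```python
# Toy illustration of the "specific use case" limitation: a classifier
# can only answer in terms of the classes it was trained on.
import math

def centroid(samples):
    """Mean feature vector of a list of equal-length tuples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

# Labeled training data exists ONLY for the specific task: stop vs. background.
# Made-up features: (redness, octagon-ness, white-text-ness).
train = {
    "stop":       [(0.9, 0.9, 0.8), (0.8, 1.0, 0.9)],
    "background": [(0.1, 0.0, 0.1), (0.2, 0.1, 0.0)],
}
centroids = {label: centroid(samples) for label, samples in train.items()}

def classify(x):
    """Assign x to the nearest known class centroid."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# A yield sign is forced into one of the two known classes; recognizing it
# requires new labeled data and retraining -- the per-class cost above.
print(classify((0.85, 0.95, 0.85)))  # a stop-sign-like input
print(classify((0.7, 0.2, 0.6)))     # a yield-sign-like input: never "yield"
```

A deep network faces the same structural issue at vastly larger scale: its output layer enumerates known classes, and each new class demands new labeled data and training compute.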
Dr. Weng-Keen Wong from the NSF echoed much the same distinction between specific and general case algorithms during his talk “Research in Deep Learning: A Perspective From NSF,” a distinction also raised by Nvidia’s Dale Southard during the disruptive technology panel. Arno Kolster from Providentia Worldwide, in his presentation “Machine and Deep Learning: Practical Deployments and Best Practices for the Next Two Years,” likewise argued that general-purpose learning algorithms are clearly the way forward, but are still some time out.
Nissan’s Dr. Sierhuis went on to highlight some challenges computers still face which human drivers take for granted. For example, what does an autonomous vehicle do when a road crew is blocking the road in front of it? As a human driver, we’d simply move into the opposite lane to “just go around”, but to algorithms, this breaks all the rules: crossing a double line, checking the opposite lane for oncoming traffic, shoulder checking, ensuring no crossing pedestrians, etc. All need real-time re-programming for the vehicle encountering the obstacle and for the other vehicles arriving behind it.
Nissan proposes an “FAA-like” control system, but the viability of such a system remains to be seen. Certainly, autonomous technologies are slowly being integrated into new cars to augment human drivers, but a complete self-driving vehicle won’t appear in the marketplace overnight; cars will continue to function in a hybrid mode for some time. Rest assured, many of today’s young folks likely will never learn how to drive (or ask their parents to borrow the car on Saturday night).
This algorithmic specificity spotlights the difficulty of integrating deep learning into an actual production workflow.
Tim Barr’s (Cray) “Perspectives on HPC-Enabled AI” showed how Cray’s HPC technologies can be leveraged for machine and deep learning in vision, speech and language. Stating that it all starts with analytics, Mr. Barr illustrated how companies such as Daimler improve manufacturing processes and products by leveraging deep learning to curtail vehicle noise and reduce vibration in their newest vehicles. Nikunj Oza from NASA Ames gave examples of machine learning behind aviation safety and astronaut health maintenance in “NASA Perspective on Deep Learning.” Dr. Oza’s background in analytics brought a fresh perspective to the proceedings and showcased that learning from historical data has earned a real place alongside modeling in industrial best practices.
In the simulation space, a fascinating talk from the LLNL HPC4Mfg program was William Elmer’s (LLNL) discussion of Procter & Gamble’s “Faster Turnaround for Multiscale Models of Paper Fiber Products.” Simulating various paper product textures and fibers greatly reduces the energy required for drying and compaction. Likewise, Shiloh Industries’ Hal Gerber described “High Pressure Casting for Structural Requirements and The Implications on Simulation.” Shiloh’s team leverages HPC to change vehicle structure — especially in creating lighter components with composites like carbon fiber and mixed materials.
It’s clear from the discussion that machine learning and AI are set to become first-class citizens alongside traditional simulation within the HPC community in short order. While the field is still unproven and spans a wide variety of new software implementations, Hewlett Packard Labs presented a first-of-its-kind analysis of ML benchmarking on HPC platforms. Natalia Vassilieva’s “Characterization and Benchmarking of Deep Learning” showcased the “Book of Recipes” the Labs are developing across various hardware and software configurations. Fresh off HPE’s integration of SGI technology into its stack, the talk not only highlighted the newer software platforms which the learning systems leverage, but demonstrated that HPE’s portfolio of systems and experience in both HPC and hyperscale environments is impressive indeed.
Graham Anthony, CFO of BioVista, spoke on the “Pursuit of Sustainable Healthcare Through Personalized Medicine With HPC.” Mr. Anthony was very passionate about the work BioVista is doing with HPE and how HPC and deep learning reduce the costs of healthcare by increasing precision in treatment through better insights derived from data. BioVista takes insight from deep learning and feeds it into simulations for better treatments – a true illustration that learning is here to stay, and works hand in hand with business process flows for traditional HPC.
In his talk entitled “Charliecloud: Containers are Good for More Than Serving Cat Pictures?” Reid Priedhorsky from LANL covered a wide range of topics including software stacks and design philosophy, and demoed Charliecloud, which enables execution of Docker containers on supercomputers.
The tongue-in-cheek title about cat pictures being synonymous with deep learning image recognition is no accident. Stand-alone image recognition is really cool, but as expounded upon above, the true benefit of deep learning comes from an integrated workflow where data sources are ingested by a general-purpose deep learning platform with outcomes that benefit business, industry and academia.
From the talks, it is also clear that machine learning, deep learning and AI are presently fueled more by industry than by academia. This could be due to strategic and competitive business drivers as well as the sheer amount of data that companies like Facebook, Baidu and Google have available to drive AI research and deep learning-backed products. Traditional HPC might not be needed to push these disciplines forward, which is likely why we see this trend becoming more prevalent in everyday news.
There was obvious concern from the audience about a future where machines rule the world. Ethical questions of companies knowingly replacing workers with robots or AI came up in a very lively discussion. Some argued that there is a place for both humans and AI — quieting the fear that tens of thousands of people would be replaced by algorithms and robots. Others see a more dismal human future with evil and malevolent robots taking control and little left for humans to do. These are, of course, difficult questions to answer and further debates will engage and entertain everyone as we keep moving toward an uncertain, technical future.
On a lighter note, Wednesday evening’s dinner featured a local volunteer docent, Dave Fehlauer, giving an enjoyable, informative talk on Captain Frederick Pabst: his family, his world and his well-known Milwaukee staple, The Pabst Brewing Company.
By all accounts, this was one of the most enjoyable HPC User Forum meetings. With a coherent theme and a dynamic range of presentations, the Forum kept everyone’s interest and showcased the realm of possibilities within this encouraging trend of computing, from both industry and academic research perspectives.
The next domestic HPC User Forum will be held April 16-18, 2018 at the Loews Ventana Canyon in Tucson, Arizona. See http://hpcuserforum.com for further information.