Providentia Worldwide

A lot of folks ask us, "What do you guys do, exactly?" And I'll admit that sometimes it can be a difficult question to answer. One thing we usually explain is that we do unicorn consulting. That is to say, Providentia shines in those niche situations where the client is literally the only one in the world with a particular problem. A lot of consulting firms shy away from these sorts of problems because they can be very difficult to scope and the knowledge gained from solving them is not readily reusable.

Fair enough.

But one thing that makes taking on these tremendous problems absolutely rewarding is that our team has depth in a wide range of areas that are themselves considered "niche". And that makes us the perfect choice when you're trying to do something that's never been tried before.

Solana is trying to change the world by creating the most performant blockchain. They understand that embarking on such a challenge requires expertise not just in blockchain design and scaling but also in infrastructure deployment, design, and middleware technologies at scale. In conjunction with Kudelski Security, the Solana team asked Providentia Worldwide to provide a detailed analysis of their system to help suss out issues as they grow and to help them become what they aim to be. We were proud to rise to the challenge.

So, what do we do? Take a read for yourself ... Solana is kind enough to make the full audit available online. The details inside show the kinds of tests and analyses we perform, as well as our mitigation proposals and strategies.

Contact us so we can help you too.

Oak Ridge, TN -- October 3, 2019

Summit – the world’s fastest publicly-ranked supercomputer – now has real-time streaming analytics. At the 2019 HPC User Forum at Argonne National Laboratory, Arno Kolster (principal and co-founder of HPC consultancy Providentia Worldwide) took the stage to explain how it happened – and what it means for the future.

The need for a smarter supercomputer

Summit launched at Oak Ridge National Laboratory (ORNL) in the second half of 2018. As of the June 2019 Top500 list, it still held the top spot among the world’s supercomputers, its 2.41 million cores delivering 148.6 Linpack petaflops. That also means a correspondingly massive power draw; Summit’s power consumption is rated at around 13 megawatts, equivalent to the energy draw of over 10,000 homes. That power produces an enormous amount of heat, requiring regular operation of power-hungry water chillers for Summit’s cooling system.
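As a rough sanity check on that homes figure, the back-of-the-envelope arithmetic looks like this (the ~1.2 kW average household draw is our illustrative assumption, not a number from the article):

```python
# Back-of-the-envelope check: Summit's rated power draw vs. household usage.
# The average-home figure of ~1.2 kW (about 10,500 kWh/year) is an assumed
# ballpark; actual household consumption varies widely by region.
summit_mw = 13          # Summit's rated power draw, in megawatts
avg_home_kw = 1.2       # assumed average household draw, in kilowatts

homes_equivalent = summit_mw * 1000 / avg_home_kw
print(f"~{homes_equivalent:,.0f} homes")  # on the order of 10,000 homes
```

Any reasonable value for the household average lands the result comfortably above the 10,000-home mark quoted above.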

For ORNL, that means that a huge priority is reducing power consumption (and its costs) wherever possible. But a major obstacle remained: there was no mechanism in place that understood Summit’s second-to-second operations at a granular enough level to effectively optimize them.

This led Jim Rogers, director of computing and facilities at ORNL, to seek out Kolster. Rogers and Kolster, who knew each other from a partnership some six years earlier, reconnected at a conference in 2017, where Kolster was speaking about streaming analytics.

Logical Messaging Design


This is really just a teaser deck to introduce some of the work we're doing as part of our new partnership with Data Vortex Technologies. We have believed for some time that some of the best applications of HPC are in places outside of HPC, and that's exactly what we're doing here. We're going to see if the Data Vortex can knock the socks off the open-source messaging world by giving RabbitMQ some extra chops.
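For readers less familiar with messaging middleware, the core pattern under test is a producer publishing to a queue that a consumer drains at its own pace, so neither side ever blocks on the other. The sketch below uses Python's stdlib queue as a stand-in for a broker like RabbitMQ (which you would normally drive through a client library); it is illustrative only, not our benchmark code:

```python
# Minimal producer/consumer sketch of queue-based decoupling, the pattern a
# broker like RabbitMQ provides. The stdlib Queue stands in for the broker
# so the example is self-contained and runnable anywhere.
import queue
import threading

broker = queue.Queue()          # stand-in for a broker-managed queue

def producer(n):
    for i in range(n):
        broker.put(f"msg-{i}")  # "publish" a message
    broker.put(None)            # sentinel: no more messages

def consumer(results):
    while True:
        msg = broker.get()      # blocks until a message arrives
        if msg is None:
            break
        results.append(msg)     # "process" the message

received = []
t = threading.Thread(target=consumer, args=(received,))
t.start()
producer(5)
t.join()
print(received)  # ['msg-0', 'msg-1', 'msg-2', 'msg-3', 'msg-4']
```

The interesting question in the Data Vortex work is what happens to this pattern when the queue's transport is a high-performance interconnect rather than commodity networking.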

Stay tuned -- we're publishing a paper on this in April, but we didn't want to let anyone who wanted to see the slides from Supercomputing Frontiers Europe 2018 miss out.

DV Messaging Presentation (Supercomputing Frontiers)


"The time has come, the Walrus said, to talk of many things."

Lewis Carroll got it right. Microservices architectures deliver on many of the promises that object-oriented programming originally made for additive software design, with the same loose-coupling benefits that messaging middleware provides. In the HPC community, however, monolithic software architectures still reign supreme. While we do not dispute that a highly optimized central code will deliver amazing performance, we do contend that modern supercomputers make these sorts of applications difficult to program correctly.

We think the time has come to reset the clock. Rather than measure the performance of a "run", we think the right time measurement for software development is from the time the idea forms until the computer starts producing answers to your question. This talk is a roadmap on how to bring microservices architectures to bear on traditional HPC problems, with an eye towards availability, resiliency, and performance as equal requirements on the system design.
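To make the idea concrete, here is one way a monolithic compute loop might be split into loosely coupled stages connected by queues. This is our hypothetical sketch (the stage names and the trivial "solver" are invented for illustration), not code from the talk:

```python
# Hypothetical sketch: an HPC-style workload decomposed into loosely coupled
# "services" (modeled here as threads) connected by queues. Each stage knows
# only its input and output queues, so stages can be replaced, restarted, or
# scaled out independently -- the availability/resiliency argument above.
import queue
import threading

def simulate(out_q, steps):
    """Stage 1: produce raw results (squares stand in for a real solver)."""
    for i in range(steps):
        out_q.put(i * i)
    out_q.put(None)             # sentinel: end of run

def analyze(in_q, out_q):
    """Stage 2: reduce raw results to a summary (a running total)."""
    total = 0
    while (x := in_q.get()) is not None:
        total += x
    out_q.put(total)

raw, summary = queue.Queue(), queue.Queue()
threading.Thread(target=simulate, args=(raw, 5)).start()
threading.Thread(target=analyze, args=(raw, summary)).start()
result = summary.get()
print(result)  # 0 + 1 + 4 + 9 + 16 = 30
```

In a real deployment the queues would be backed by messaging middleware rather than in-process objects, which is precisely what lets a failed analysis stage restart without losing the simulation's output.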

This talk was prepared for the HPC Advisory Council at Stanford University in February 2018.

You can watch the original here:

HPC Microservices

Machine Learning at HPC User Forum: Drilling into Specific Use Cases 

by Arno Kolster

The 66th HPC User Forum, held this month in Milwaukee, focused on the latest trends in modern computing – deep learning, machine learning and AI – and two common themes became obvious: first, that ML and DL currently focus on specific, rather than general, use cases; and second, that ML and DL need to be part of an integrated workflow to be effective.

This was exemplified by Dr. Maarten Sierhuis from Nissan Research Facility Silicon Valley with his presentation “Technologies for Making Self-Driving Vehicles the Norm.” One of the most engaging talks of the forum, Dr. Sierhuis’s multi-media presentation on the triumphs and challenges Nissan faces in developing its self-driving vehicle program showcased that machine and deep learning “drive” the autonomous vehicle revolution.

The challenge that Nissan and other deep learning practitioners face is that current deep learning algorithms are trained to do one thing extremely well – the specific use case: image recognition of stop signs, for example. Once an algorithm learns to recognize stop signs, the same discrete learning effort must be applied to every other road sign a vehicle may encounter. To create a general-purpose “road sign learning algorithm”, you need not only a massive amount of image data (tens of millions of varied images), but also the compute to power the learning effort.

Dr. Weng-Keen Wong from the NSF echoed much the same distinction between specific- and general-case algorithms during his talk “Research in Deep Learning: A Perspective From NSF,” a point also raised by Nvidia’s Dale Southard during the disruptive-technology panel. In his presentation “Machine and Deep Learning: Practical Deployments and Best Practices for the Next Two Years,” Arno Kolster of Providentia Worldwide likewise argued that general-purpose learning algorithms are clearly the way to go, but are still some time out.

Nissan’s Dr. Sierhuis went on to highlight some challenges computers still face which human drivers take for granted. For example, what does an autonomous vehicle do when a road crew is blocking the road in front of it? As a human driver, we’d simply move into the opposite lane to “just go around”, but to an algorithm, this breaks all the rules: crossing a double line, checking the opposite lane for oncoming traffic, shoulder checking, ensuring no pedestrians are crossing, and so on. All of this requires real-time re-programming, both for the vehicle encountering the obstacle and for other vehicles arriving at it.

Nissan proposes an “FAA-like” control system, but the viability of such a system remains to be seen. Certainly, autonomous technologies are slowly being integrated into new cars to augment human drivers, but a complete self-driving vehicle won’t appear in the marketplace overnight; cars will continue to function in a hybrid mode for some time. Rest assured, many of today’s young folks likely will never learn how to drive (or ask their parents to borrow the car on Saturday night).

This algorithmic specificity spotlights the difficulty of integrating deep learning into an actual production workflow.

Tim Barr of Cray, in “Perspectives on HPC-Enabled AI,” showed how Cray’s HPC technologies can be leveraged for machine and deep learning across vision, speech and language. Stating that it all starts with analytics, Mr. Barr illustrated how companies such as Daimler improve manufacturing processes and products by leveraging deep learning to curtail vehicle noise and reduce vibration in their newest vehicles. Nikunj Oza from NASA Ames gave examples of machine learning behind aviation safety and astronaut health maintenance in “NASA Perspective on Deep Learning.” Dr. Oza’s background in analytics brought a fresh perspective to the proceedings and showcased that learning from historical data has earned a real place alongside modeling for industrial best practices.

In the simulation space, a fascinating talk from the LLNL HPC4Mfg program was William Elmer’s (LLNL) discussion of Procter & Gamble’s “Faster Turnaround for Multiscale Models of Paper Fiber Products.” Simulating various paper-product textures and fibers greatly reduces the amount of energy needed for drying and compaction. Likewise, Shiloh Industries’ Hal Gerber described “High Pressure Casting for Structural Requirements and The Implications on Simulation.” Shiloh’s team leverages HPC in changing vehicle structure, especially in creating lighter components with composites like carbon fiber and mixed materials.

It’s clear from the discussion that machine learning and AI are set to become first-class citizens alongside traditional simulation within the HPC community in short order. While the field is still unproven, with a wide variety of new software implementations, Hewlett Packard Labs presented a first-of-its-kind analysis of ML benchmarking on HPC platforms. Natalia Vassilieva’s “Characterization and Benchmarking of Deep Learning” showcased the “Book of Recipes” the Labs are developing across various hardware and software configurations. Fresh off HPE’s integration of SGI technology into its stack, the talk not only highlighted the newer software platforms that learning systems leverage, but demonstrated that HPE’s portfolio of systems and experience in both HPC and hyperscale environments is impressive indeed.

Graham Anthony, CFO of BioVista, spoke on the “Pursuit of Sustainable Healthcare Through Personalized Medicine With HPC.” Mr. Anthony was very passionate about the work BioVista is doing with HPE, and about how HPC and deep learning change the costs of healthcare by increasing precision in treatment through better insights derived from data. BioVista takes insight from deep learning and feeds it into simulations for better treatments – a true illustration that learning is here to stay, and works hand in hand with business process flows for traditional HPC.

In his talk entitled “Charliecloud: Containers are Good for More Than Serving Cat Pictures?” Reid Priedhorsky from LANL covered a wide range of topics, including software stacks and design philosophy, and demoed Charliecloud, which enables execution of Docker containers on supercomputers.

The tongue-in-cheek title, with cat pictures standing in for deep-learning image recognition, is no accident. Stand-alone image recognition is really cool, but as expounded upon above, the true benefit of deep learning comes from an integrated workflow in which data sources are ingested by a general-purpose deep learning platform, with outcomes that benefit business, industry and academia.

From the talks, it is also clear that machine learning, deep learning and AI are presently fueled more by industry than by academia. This could be due to strategic and competitive business drivers, as well as the sheer amount of data that companies like Facebook, Baidu and Google have available to drive AI research and deep learning-backed products. Traditional HPC might not be needed to push these disciplines forward, which is likely why we see this trend becoming more prevalent in everyday news.

There was obvious concern from the audience about a future where machines rule the world. Ethical questions about companies knowingly replacing workers with robots or AI came up in a very lively discussion. Some argued that there is a place for both humans and AI, quieting the fear that tens of thousands of people would be replaced by algorithms and robots. Others see a more dismal human future, with malevolent robots taking control and little left for humans to do. These are, of course, difficult questions to answer, and further debates will engage and entertain everyone as we keep moving toward an uncertain, technical future.

On a lighter note, Wednesday evening’s dinner featured a local volunteer docent, Dave Fehlauer, giving an enjoyable, informative talk on Captain Frederick Pabst: his family, his world and his well-known Milwaukee staple, The Pabst Brewing Company.

By all accounts, this was one of the most enjoyable HPC User Forum meetings. With a coherent theme and a dynamic range of presentations, the Forum kept everyone’s interest and showcased the realm of possibilities within this encouraging trend in computing, from both industry and academic research perspectives.

The next domestic HPC User Forum will be held April 16-18, 2018 at the Loews Ventana Canyon in Tucson, Arizona. See the HPC User Forum website for further information.

About the Author

Arno Kolster is Principal & Co-Founder of Providentia Worldwide, a technical consulting firm. Arno focuses on bridging enterprise and HPC architectures and was co-winner of IDC’s HPC Innovation Award with his partner Ryan Quick in 2012 and 2014. He was the recipient of the Alan El Faye HPC Inspiration Award in 2016.