
DOE’s $30 Million Investment in Supercomputing Software Will Help Maintain U.S. Top Spot

by Hodan Omaar
DOE Supercomputer

Since 2018, the U.S. Department of Energy’s (DOE) supercomputer Summit has secured the United States’ position as the global leader in high-performance computing (HPC). But significant Chinese investments in HPC threaten to wrest supercomputing supremacy away from the United States and make China the global frontrunner. Eager to retain America’s position, DOE recently announced a plan to provide $30 million in funding to enhance its HPC capabilities using artificial intelligence (AI), including machine learning (ML).

This funding is earmarked for developing the software and algorithms that run on high-performance computers. HPC leadership requires more than just machines; it needs programs that can exploit the machines’ problem-solving power, massive amounts of data to analyze, and a workforce with the knowledge and skills to develop the technology. DOE has already invested $40 million in AI research to harness scientific data from the research facilities it sponsors; now it is quite rightly looking to AI and ML to increase the capabilities of HPC algorithms and software.

The first research topic DOE is funding examines how machine learning can improve the adaptability of scientific models and simulations. Scientists increasingly use supercomputers to model the behavior of complex physical systems, such as weather patterns and vibrations of the earth, because high-powered computers are especially adept at revealing the interactions and processes that govern these dynamic systems. This makes them an essential tool for solving a wide range of scientific problems. When solving large, iterative scientific problems, supercomputers must choose among many possible actions in the solution process, such as which parameters to use or what numerical precision to work to. Traditionally, humans define these variables as fixed inputs at the start of a simulation, which means the algorithm does not change as the numerical solution evolves.
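
To make the fixed-input pattern concrete, here is a minimal sketch (illustrative only, not DOE software) of an iterative solver whose tolerance and relaxation factor are chosen by a human before the run and never change as the solution evolves:

```python
# Sketch of the "fixed inputs" pattern: an iterative solver whose tolerance
# and relaxation factor are set once, up front, and never revisited mid-run.
import numpy as np

def jacobi_solve(A, b, tol=1e-8, relaxation=1.0, max_iters=10_000):
    """Solve A x = b with (weighted) Jacobi iteration.

    `tol` and `relaxation` are fixed before the simulation starts;
    the algorithm cannot adapt them as the solution evolves.
    """
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)                       # diagonal entries of A
    R = A - np.diagflat(D)               # off-diagonal part of A
    for i in range(max_iters):
        x_new = (b - R @ x) / D          # standard Jacobi update
        x = (1 - relaxation) * x + relaxation * x_new
        if np.linalg.norm(A @ x - b) < tol:   # fixed stopping criterion
            return x, i
    return x, max_iters

# Example: a small diagonally dominant system.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x, iters = jacobi_solve(A, b)
print(x, iters)
```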

Machine learning offers an opportunity to improve HPC performance by replacing fixed algorithms with adaptive ones that tailor the selection of inputs to the solution, even as it is being computed. By using available contextual information to drive computational decisions, adaptive algorithms allow scientists to take a more rational, targeted approach to problem-solving. The more adaptive the algorithm, the more complex the problems it can tackle, and the more robust and accurate the simulation. This not only addresses the growing complexity of scientific applications but supports their continued growth.
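
As a toy illustration of the adaptive idea, the sketch below adjusts its step size based on the error it observes at every step. In the approach DOE describes, a trained machine learning policy would make this choice; here a simple feedback rule stands in for it, and all names and parameters are assumptions for illustration:

```python
# Adaptive-algorithm sketch: the step size is no longer a fixed input but is
# adjusted during the run from the observed local error. A learned policy
# could replace the simple grow/shrink rule used here.
import numpy as np

def integrate_adaptive(f, y0, t0, t1, h=0.1, target_err=1e-6):
    """Integrate dy/dt = f(t, y) with step-doubling error control."""
    t, y, steps = t0, np.asarray(y0, dtype=float), 0
    while t < t1:
        h = min(h, t1 - t)
        # Compare one full Euler step against two half steps.
        y_full = y + h * f(t, y)
        y_half = y + 0.5 * h * f(t, y)
        y_two = y_half + 0.5 * h * f(t + 0.5 * h, y_half)
        err = np.linalg.norm(y_two - y_full)
        if err <= target_err or h < 1e-12:
            t, y = t + h, y_two          # accept the step
            steps += 1
            h *= 1.5                     # controller: loosen the step size
        else:
            h *= 0.5                     # controller: tighten and retry
    return y, steps

# Usage: exponential decay, where the "right" step size changes over the run.
y_end, n = integrate_adaptive(lambda t, y: -2.0 * y, [1.0], 0.0, 5.0)
print(y_end, n)
```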

DOE’s second research topic focuses on how supercomputers make the most of the results they produce. Often, scientists face severe budget, time, and resource constraints in running simulations. For instance, seismologists who simulate ground motion for major earthquakes use some of the very largest computers at full capacity. This makes maximizing the information they gain from every simulation critical. The question here is: How can machine learning help supercomputers intelligently draw useful information from data?

Challenges of this nature are not new. They are mirrored in various fields of mathematics where much work has already been done in areas such as statistical inference, optimization, and experimental design. DOE is looking to fund projects that exploit the relationship between computational mathematics and ML to offer innovative strategies for extracting information. The objective is to enable DOE’s HPC systems to intelligently identify the data that should be queried next. To be effective, though, funded projects need to do more than adapt mathematical methods developed for other problems; they need to develop new AI methods tailored to scientific problems.
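
A rough sketch of what such a strategy can look like in practice is sequential experimental design: fit a cheap surrogate model to the simulations already run, then query next wherever the surrogate is most uncertain. The tiny Gaussian-process surrogate, the stand-in `expensive_simulation` function, and the parameter choices below are all illustrative assumptions, not DOE methods:

```python
# Uncertainty-driven sequential design: each new (expensive) simulation is
# placed where a cheap surrogate model is least certain about the answer.
import numpy as np

def rbf_kernel(a, b, length=0.2):
    """Squared-exponential kernel between two sets of 1-D points."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior_var(x_train, x_cand, noise=1e-6):
    """Predictive variance of a zero-mean GP surrogate at candidate points."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf_kernel(x_train, x_cand)
    v = np.linalg.solve(K, k_star)
    return 1.0 - np.sum(k_star * v, axis=0)   # diagonal of posterior covariance

def expensive_simulation(x):
    """Stand-in for a costly HPC run (e.g., one ground-motion simulation)."""
    return np.sin(6 * x) + 0.1 * x

# Start from a few seed runs, then let predictive uncertainty pick each new run.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, size=3)
y_train = expensive_simulation(x_train)
candidates = np.linspace(0, 1, 201)

for _ in range(10):                            # budget: 10 more simulations
    var = gp_posterior_var(x_train, candidates)
    x_next = candidates[np.argmax(var)]        # most informative next run
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, expensive_simulation(x_next))

print(np.sort(x_train))                        # runs spread where data is scarce
```

The key design choice in a loop like this is the acquisition rule: pure uncertainty sampling, as above, spreads the simulation budget over the least-explored regions, while other rules could weight regions by scientific importance.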

These funding opportunities represent the type of robust software investment needed to develop the next generation of high-performance computing. The Department of Energy, as the largest user of supercomputing in the United States, plays a key role in the continued evolution of supercomputing infrastructure. Because software is a key driver of HPC growth, and because HPC stands at the forefront of scientific discovery and commercial innovation, these software investments are all the more important to maintaining HPC leadership.

DOE’s announcement paves a path for strengthening not only the U.S. HPC ecosystem but, by extension, U.S. economic competitiveness.

Image credits: Flickr user doe-oakridge

