
The U.S. Needs a National Strategic Computing Reserve

One year after supercomputers worked together to fight COVID, it’s time to broaden the partnership to prepare for other crises


Last spring, as the world was coming to grips with the frightening scale and contagion of the COVID-19 pandemic, scientists began to make rapid progress in understanding the disease. Much of that progress was aided by world-class supercomputers and data systems, and research advanced with unprecedented speed—from understanding the structure of the SARS-CoV-2 virus to modeling its spread, from therapeutics to vaccines, from medical response to managing the virus’s impacts.

Computer-based epidemiology models have informed public policy in the United States and in countries around the globe, and newly studied transmission models for the virus are being used to forecast resource availability and mortality stratified by age group at the county level. Artificial intelligence and machine learning approaches tackled drug screening to find candidate medicines from trillions upon trillions of possible chemical compounds, and differential gene expressions among patient populations have been analyzed with important implications for treatment planning. Structural modeling of the virus has also led to new insights, speeding the development of vaccines and antigens.

Long-term investments in basic research and infrastructure, and the capacity to quickly mobilize resources in response to a crisis, underlie the COVID-19 High-Performance Computing (HPC) Consortium that delivered those results. But this pandemic will not be the last crisis we face, and one critical lesson is the importance of preparing for future emergencies.


Now, the leaders behind the HPC Consortium are proposing a National Strategic Computing Reserve (NSCR) that would provide funding, human talent and cyberinfrastructure that investigators could access when future crises—whether an epidemic, record-setting wildfires or severe storms—emerge.

A SUPERCOMPUTING NETWORK TO FIGHT COVID

The HPC Consortium launched on March 23, 2020, and was up and running just one week after the White House Office of Science and Technology Policy, the U.S. National Science Foundation, the U.S. Department of Energy and IBM decided to mobilize supercomputing resources and expertise from across government, academia, nonprofits and industry.

The consortium rapidly attracted new research proposals, as well as additional resources and service providers, and now spans three continents—with some nations launching their own activities modeled on the original consortium. Currently, the 43 members offer more aggregate computing power than any single system on the planet—equivalent to 10¹⁵ computational operations every second. It’s a powerful tool, and researchers continue to make progress.

The consortium’s success was built on decades of strategic investment in advanced computing, cyberinfrastructure and basic research by government agencies, corporations, universities and nonprofit organizations. Its swift and agile establishment reflected a shared urgency and a foundation of experience and trust. Such collaboration requires decentralized coordination and flexibility—an operational model NSF has pioneered for nearly four decades as the agency has built, managed and sustained a national advanced computing infrastructure alongside academic, industry and agency partners. This combination of powerful computers, high-speed networks, sophisticated software and massive data collections has supported breakthrough science, including the detection of gravitational waves and the first image of a black hole, as well as achievements such as a detailed mapping of the Arctic and forecasts of storm surges on the lower Mississippi in the wake of Hurricane Harvey.

ENSURING INFRASTRUCTURE IS IN PLACE AS NEW CRISES EMERGE

The NSCR would build on such attributes, supporting the urgent computing needs of future emergency responses. It would be immediately available, along with essential software, data, services and technical expertise—all the elements of a rapid computational response. Such a computing reserve would enhance the nation’s resilience, much as the Strategic Petroleum Reserve deepens the resilience of the national energy infrastructure.

One path towards that goal is outlined in the OSTP-led national strategic plan for the Future Advanced Computing Ecosystem. The plan presents a coordinated approach to develop the nation’s next-generation advanced computing ecosystem as a strategic resource that continues to span government, academia, nonprofits and industry. More recently, a multiagency request for information on the NSCR concept has provided insightful input to agencies as they explore next steps.

These ideas are still embryonic, but some of the parameters are clear: Speed and agility are essential, as are agreements among participants and well-defined operational structures and processes, including agreed-upon “triggers” for activating the reserve and criteria for returning to normal operation. Other important elements are ensuring readiness through regular testing and exercises, and coordinating and partnering with domain experts and other relevant reserves. A trained workforce that can develop, operate and help scientists use these systems is also critical. Finally, any such reserve must build on a foundation of diverse resources, which can only be established through long-term strategic investments in the national advanced cyberinfrastructure.

Global climate change, population pressure and environmental stress are some of the reasons we can expect future emergencies, whether these include another pandemic; severe storms with attendant flooding and other threats to public health and safety, such as we have seen in the Gulf and the lower Mississippi; or wildfires such as those that have recently ravaged parts of California and degraded air quality in regions well beyond the impact of the fires themselves. In all of those examples, the ability to use large-scale computational models and data is proving critical to mobilizing effective responses and to saving lives.

The COVID-19 HPC Consortium demonstrated the essential role of computing and data analytics, and the importance of quickly getting these resources into the hands of experts, in response to a national crisis. The recent emergence of new, even more aggressive variants of the SARS-CoV-2 virus is a reminder that the next crisis may arrive sooner than we expect. But with an organized, agile and sustained approach for activating a national strategic computing reserve, and continued investment in research and advanced cyberinfrastructure, we will be ready.

Any opinions expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

This is an opinion and analysis article.