
DoD presses its supercomputers into service for COVID-19 fight

On an average day, the massive supercomputers that make up the Defense Department’s high-performance computing ecosystem spend most of their time analyzing how future weapons systems might perform on the battlefield. But those same machines have turned out to be enormously useful in the battle against COVID-19.

A little over a month ago, the DoD High Performance Computing Modernization Program (HPCMP), headquartered in Vicksburg, Mississippi, offered up its resources to help solve coronavirus-related problems both in and outside the military. Within a day, it had its first tasking: to conduct fluid dynamic studies of how air and droplets move inside the cargo hold of a C-17 aircraft, so that Air Force medical evacuation crews can take steps to protect themselves from infection.

“The power of supercomputing is that you can actually take this problem and break it up into little pieces and send it to individual processors,” Will McMahon, HPCMP’s director, said in an interview for Federal News Network’s On DoD. “You actually have to break this problem down into very small spatial pieces — let’s say 1-inch cubes — and model the whole aircraft. And then you have to step through the problem very slowly, so that you can get not only a spatial, but a temporal, in-time view of what’s going on.”
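The approach McMahon describes — carve the space into small cells, then step through time — can be illustrated in miniature. This is a hypothetical toy, not the HPCMP's actual solver: a one-dimensional diffusion model of a droplet concentration, where each cell is updated from its neighbors at every small time step. On a supercomputer, each chunk of cells would be handed to a different processor.

```python
def step(conc, alpha=0.1):
    """Advance the droplet concentration one small time step (1-D diffusion).

    Each cell is updated from its immediate neighbors -- the "little
    pieces" that a supercomputer would distribute across processors.
    """
    n = len(conc)
    new = conc[:]
    for i in range(1, n - 1):
        new[i] = conc[i] + alpha * (conc[i - 1] - 2 * conc[i] + conc[i + 1])
    return new

def simulate(n_cells=50, n_steps=200):
    """Step slowly through time to build up a spatial *and* temporal picture."""
    conc = [0.0] * n_cells
    conc[n_cells // 2] = 1.0   # a droplet source in mid-cabin
    for _ in range(n_steps):
        conc = step(conc)
    return conc

profile = simulate()   # concentration at each cell after 200 time steps
```

In a real CFD run the cells are three-dimensional, the physics is far richer, and each processor exchanges only the boundary cells with its neighbors between time steps — but the divide-in-space, step-in-time structure is the same.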

That particular task was run on supercomputers at the Air Force Research Laboratory that the HPCMP also manages, and was able to take advantage of physics modeling software the department had already built.

The program’s ability to respond to the crisis has come not just from its ability to put massive numbers of computing cores against individual problems, but also from its staff’s expertise in porting software designed to run on ordinary PCs to versions that can run in parallel across thousands of processing cores at the same time.

That’s what was required when the HPCMP was asked to help model various scenarios for virus spread when the USS Theodore Roosevelt arrived in Guam and began moving its crew off of the ship.

“We helped them take a computer model that looks at disease spread, how people would transmit the disease amongst each other, and then take that code and set it up to run as a supercomputer application instead of a desktop application,” said Kevin Newmeyer, the program’s deputy director. “So in this case, we’re supporting the work that the Corps of Engineers is doing to support FEMA and some of the other agencies to figure out how to model the spread of the disease, so that they can take the best measures to mitigate that spread going forward.”
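The conversion Newmeyer describes — a serial desktop model re-run many times in parallel — can be sketched with a toy SIR disease-spread model. This is an illustrative assumption, not the Corps of Engineers' actual code: each scenario (a different set of infection and recovery rates) runs independently, so a process pool can fan them out across cores the way a supercomputer would fan them out across nodes.

```python
from multiprocessing import Pool

def run_scenario(params):
    """Toy discrete-time SIR model for one parameter set (hypothetical)."""
    beta, gamma = params              # infection and recovery rates
    s, i, r = 0.99, 0.01, 0.0         # susceptible/infected/recovered fractions
    for _ in range(120):              # simulate 120 days
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r                          # fraction ever infected

if __name__ == "__main__":
    # Each scenario is independent, so they can all run at the same time;
    # on a supercomputer each would land on its own set of cores.
    scenarios = [(beta / 10, 0.1) for beta in range(1, 9)]
    with Pool() as pool:
        attack_rates = pool.map(run_scenario, scenarios)
```

The key point is that parallelizing across *scenarios* requires no change to the model itself — only the wrapper that launches and collects the runs.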

Using Onyx, pictured here, and other high-performance computing assets, the Department of Defense High Performance Computing Modernization Program is using its resources to help the federal response in combatting COVID-19 across the Nation. (Photo courtesy of the HPCMP)

The Guam modeling effort ran on a supercomputer called Onyx at the Army’s Engineer Research and Development Center in Vicksburg. The system has 214,568 compute cores, compared to the few dozen one might find in even the highest-end desktop processors.

In another case, the Army’s Medical Command asked for help with research it had sponsored at the Southwest Research Institute, evaluating possible drug treatment candidates for COVID-19.

There, too, researchers had been using more standard computing hardware that only allowed them to evaluate about two million possible drug compounds over a three-week period.

“The work being done in this case is looking at how potential compounds would adhere to the protein of the coronavirus as a means of finding the most effective way to develop a vaccine or develop a treatment pattern,” Newmeyer said. “We used our experts to parallelize it, spread it across thousands of cores on our supercomputer. This allows the program to run orders of magnitude faster, so instead of looking at 2 million things over three weeks, we can look at 40 million compounds in less than a week. That’s the advantage of being able to divide this problem, because each problem can run with a different set of variables, but they all run at the same time.”
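Compound screening is a textbook "embarrassingly parallel" workload: every candidate is scored independently, so the library can be split across cores with no coordination between them. The sketch below is hypothetical — the docking score is a stand-in hash-based number, not real chemistry — but the structure mirrors what Newmeyer describes: the same code, a different input on every core.

```python
from concurrent.futures import ProcessPoolExecutor
from hashlib import sha256

def dock_score(compound_id: str) -> float:
    """Stand-in for an expensive binding-affinity calculation (hypothetical)."""
    digest = sha256(compound_id.encode()).digest()
    return digest[0] / 255.0          # pseudo-score in [0, 1]

def screen(compounds, workers=4, top_n=10):
    # Each compound is scored independently, which is exactly why the
    # problem divides so cleanly across thousands of cores.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(dock_score, compounds, chunksize=256))
    # Rank candidates and keep the best for follow-up study
    ranked = sorted(zip(compounds, scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    library = [f"compound-{n}" for n in range(10_000)]
    top_hits = screen(library)
```

Because there is no communication between workers, the speedup scales almost linearly with core count — which is how a three-week, two-million-compound job becomes a sub-week, forty-million-compound job.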
