Tackling the coronavirus with supercomputers

In the search to understand the coronavirus and its implications, the government is marshaling much of its supercomputer capacity.


In the search to understand the coronavirus and its implications, the government is marshaling much of its supercomputer capacity. That includes NASA and its high performance computing complex. For a look at how NASA is bringing that capacity to bear on the crisis, Federal Drive with Tom Temin turned to Dr. Piyush Mehrotra, division chief of NASA Advanced Supercomputing.

Interview transcript:

Tom Temin: Dr. Mehrotra, Good to have you on.

Dr. Mehrotra: Thank you. Thank you for having me on.

Tom Temin: Tell us what NASA is doing and how you can convert it suddenly and quickly to coronavirus work. Give us a sense of what it was doing before that.

Dr. Mehrotra: We have several supercomputers at the center, at the NASA Ames Research Center here in Silicon Valley. And they’re pretty much utilized at a heavy rate, about 85-90%, at all times. But even so, a portion is set aside, reserved for national priorities outside of the agency’s missions. And so what we have done is propose that we use that reservation to help any COVID-related research that may be proposed by folks in the country.

Tom Temin: And how would that work technically? Suppose someone has a problem and they’ve written say an algorithm, what do they do with it?

Dr. Mehrotra: Generally, most of the folks working in this area have actually run the code somewhere else or have written parallel code, and we have expertise here to help them port that code onto our machines. So for anything specialized that is needed for running on our machines, we have in-house expertise to help them port the code, optimize the code and then run the codes after that. As part of the allocation that we give them, we give them that kind of labor support.

Tom Temin: Is it accurate to say that in recent years, with supercomputers being built out of standard parts, just lots of them with special interconnects, that it’s easier to port programs from one to the other than it might have been, say, in the seventies and eighties?

Dr. Mehrotra: That is true. Some of the supercomputers do have some specialized hardware, like for example the NVIDIA GPUs, which are also being used for scientific computations, and in that case, the codes have to be ported to those specialized processors. But in general, like you’re saying, because they’re using standard off the shelf processors to a large extent they’re portable from one place to another.

Tom Temin: Now, alongside the code that you’re running, the algorithms that you’re running, is the data, and that’s a much bigger problem, I’m assuming. How do people get the data in there that they’re going to run their algorithms against?

Dr. Mehrotra: It depends on the particular problem and on the particular code that they’re running. For some of them, the data input needs are fairly small, or they’re databases, for example, the molecular structure of the proteins or the molecular structure of the drugs. But they’re not that humongous at this point. And so, yes, they’ll have to be transferred into our systems so that they can use them. But I don’t see that as a very big problem.

Tom Temin: So relative to a black hole, the data sets are pretty small?

Dr. Mehrotra: Relative to a black hole, or relative to some of the observational data that the scientists at NASA produce, which is in petabytes. Much of that data is in multiple petabytes, where a petabyte is 1,000 terabytes. So instead of being multi-petabyte, it’s only, you know, less than 100 terabytes, and handling that data is easy these days.

Tom Temin: And just for people that like speeds and feeds, give us a sense of how much capacity NASA does have out there.

Dr. Mehrotra: So we have three systems here with a total of about 15 petaflops. A petaflop is one quadrillion floating point operations per second. The latest one that we just got on board is about 3.7 petaflops. To give you a sense of what that means: if, for example, the population of the U.S., about 350 million people, each did one floating point operation per second, it would take them about a year, working together, to do what these systems can do in one second.
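That analogy can be sanity-checked with some back-of-the-envelope arithmetic. This is a rough sketch assuming the figures quoted above: roughly 15 petaflops of combined capacity and 350 million people each performing one operation per second.

```python
# Back-of-the-envelope check of the "human-years per machine-second" analogy.
total_flops = 15e15                      # ~15 petaflops of combined capacity
population = 350e6                       # ~350 million people, one op/sec each
seconds_per_year = 365.25 * 24 * 3600    # ~3.156e7 seconds in a year

# Operations the machines complete in a single second:
ops_in_one_machine_second = total_flops

# How long the population would need to match that, in years:
human_years = ops_in_one_machine_second / (population * seconds_per_year)
print(f"~{human_years:.1f} human-years per machine-second")
```

The result works out to a bit over one year per machine-second, consistent with the "about a year" figure in the interview.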

Tom Temin: So it sounds like the research that’s needed on the coronavirus might not be possible to get done in any reasonable time were it not for supercomputing.

Dr. Mehrotra: I think that is absolutely true; these days the machines are really helping. You know, a lot of drug research and vaccine research has been experimental. But what is happening now is that with the kind of supercomputers that are available, you can do the modeling of the drugs or the virus, the proteins in the virus, how they fold, how they bind to each other. You can do quantum mechanical simulations, and because you have such large machines, you can do it at a much faster rate. So there’s a chance that in the next few weeks some of these simulations can actually produce results that folks in the med lab can use as an initial starting point and then do experiments on, reducing the search space that the med-lab people have to go through to figure out whether a particular drug works or not. The supercomputers are there to help, and we’re there to help the people who want to use them to fight the pandemic.

Tom Temin: Do you have a sense of who is going to be using your capacity?

Dr. Mehrotra: So this process has started very recently. A review committee has been set up at the national level, and we started getting proposals just Monday, so we have received a few proposals. Basically, the committee is looking at the merits of the proposals and then matching them to the multiple providers that have volunteered their resources. We’re one of them; there are other volunteers. And so they’re doing a matching process. At this point, we haven’t been matched with any of the proposals coming through, but we’re hoping that some university professors, some researchers from other labs and even private citizens are submitting proposals through this process.

Tom Temin: I was going to say, this also means commercial interests that might have good research programs would get this capacity, along with academics.

Dr. Mehrotra: Yes, on both sides, in fact. On the resource-provider side, you have federal agencies like DOE and NSF and NASA providing resources, academic institutions like MIT and RPI, and then Google, Microsoft and Amazon Web Services are also providing their resources. On the proposal side, too, the PIs are coming in from the universities, from the federal agencies, and from private citizens as well.

Tom Temin: So the panel that is doing all of this matching is a multi-agency affair then too, correct?

Dr. Mehrotra: Yes, and not only multi-agency; commercial folks are also on there, and some academic folks are there as well. So yes, the review committee is pretty large, including subject matter experts and experts in the high performance computing area.

Tom Temin: And the agencies will be getting, I guess, new clients, you could call them. It almost sounds like med school matching. Will that work maybe delay or push back some of the work that the agencies would otherwise be doing on their own supercomputers?

Dr. Mehrotra: Possibly. But as we look at what we have to do, for example for NASA, we may have to re-prioritize the work a little bit, but it depends on how many outside proposals come in.

Tom Temin: And is there a timeline for the decisions?

Dr. Mehrotra: We’re trying to do the decisions and the matching process as quickly as possible. So the proposals come in, and within a day we’re doing the matching. Then it’s up to the provider to negotiate with the PI as to how they get on board. And we try to do the best matching, so that the kind of resources required by the PI are the ones available at the provider’s organization. Once that’s done, the negotiations happen, the on-boarding happens. But we’re hoping that these first projects will start working almost immediately, because some of the proposers have already used the same machines, so it may be easy for them to get on there and then start having an impact within a few weeks.

Tom Temin: Dr. Piyush Mehrotra is division chief of NASA Advanced Supercomputing. Thanks so much for joining me.

Dr. Mehrotra: Thank you. Thank you for giving me the opportunity to talk to you.


Copyright © 2024 Federal News Network. All rights reserved.
