Meet something new for NASA — its first chief artificial intelligence officer

NASA has a history of firsts. Now it has a new one: its first chief artificial intelligence officer. Joining the Federal Drive with Tom Temin is that chief AI officer, Dave Salvagnini.

Interview transcript:

Tom Temin
Well, let’s begin with you. Are you new to NASA, or is this a new job for you at NASA? I think I know, but you tell us.

Dave Salvagnini
Certainly. So, it’s a little bit of both, quite candidly. So, I spent most of my career in the Department of Defense for a period of time as an active duty Air Force officer, and then also as a civilian, for the Defense Intelligence Agency and some other parts of the intelligence community during that career. I joined NASA last year in June. So, I just passed my one-year anniversary. I joined as the chief data officer, and then led a group to evaluate how NASA should best respond to the executive order that was released last October, requiring federal agencies to have a chief artificial intelligence officer. We closed out a study and made our recommendations to the deputy administrator and administrator, and I was named as the chief artificial intelligence officer here in May.

Tom Temin
Now, I’m imagining that throughout the hundreds of mission areas and projects that are going on at a given time in NASA, people are using artificial intelligence, because everything NASA does is basically software that goes somewhere. So, what will the chief AI officer actually do, then?

Dave Salvagnini
No, that’s a great question, and you’re absolutely right. And I just want to amplify your point: NASA has been using artificial intelligence for many, many years, quite successfully. So, what will the chief artificial intelligence officer do? Certainly we will meet the requirements in the executive order and the OMB guidance that required federal agencies to appoint a leader who is looking after the use of artificial intelligence across the agency. Of course, what has occurred more recently is the advent of, or the advancement of, generative AI capabilities, and the ability to equip the workforce writ large with those capabilities, which means now we also have to equip them to use those capabilities responsibly and ethically. So, that requires some training. One of the things the chief artificial intelligence officer will have to do is look at equipping the workforce for safe and responsible use of AI tools, especially those tools that have become more widely accessible.

Tom Temin
Yeah, that’s a big difference, because the traditional AI tools and robotic process automation required some expertise, like programming, to make them do anything, whereas the generative tools are in the palm of everybody’s hand, at their fingertips.

Dave Salvagnini
That’s exactly right. And it’s important for the workforce to understand that ultimately, the employee is responsible for any work product outcome, whether it be solely derived by their own cognitive abilities, or whether it be augmented or enhanced in some way with an AI capability. So, it’s really about equipping the workforce to understand that they own the responsibility for their work, and they also have a responsibility for the ethical use of those tools. They have a responsibility to protect data, and that means not presenting proprietary data that shouldn’t be made public to tools that may not protect that data, or that may circumvent some of the security controls around it. So, there’s a whole host of things. Think of the cyber training we all go through every year so that we can safely operate our computers within our work environment, which equips us to recognize what a phishing email or text looks like, how not to click on it, and how to report it. We have to do the same thing with this new emerging technology.

Tom Temin
And you could say maybe AI divides along another axis, and that is what I’m using for my personal work. Say I’m evaluating download data, or some flight pattern data from an aircraft wing experiment, and I’m creating a work product from that, versus what, say, HR might deploy using generative AI to look at NASA’s personnel policies, where you have an enterprise type of application.

Dave Salvagnini
The guidance from the administration right now is that we have to report an inventory of all AI use cases across the agency. In the past, NASA has reported over 400 AI use cases. I would expect in some cases that will grow; in some cases, based on some of the rule changes, it may actually shrink a little bit. But in particular, the administration is concerned with safety- and rights-impacting AI use across the federal government.

Safety, in particular for NASA, has to do with, let’s say, the movement of vehicles across ground, sea, air or space. So, obviously NASA moves vehicles across space, and any use of AI to enable that is subject to the safety use case criteria. The other area would certainly be aeronautics and air traffic control, where AI is starting to be experimented with as it relates to the air traffic control system, looking at opportunities to increase the capacity of that system. That would certainly be a safety-categorized use case.

In the area of rights-impacting use cases, NASA doesn’t have nearly as many as a lot of other federal departments would have. We’re not adjudicating health claims, we’re not dealing with education benefits requests, we’re not adjudicating housing requests. So, we’re not in that public-facing services space, where the rights-impacting use cases will be quite important. We’re not handling a lot of public data, so we don’t have people applying for services where there’s an implicit expectation of privacy. I would say rights-impacting use cases start to enter into the equation where NASA may start to use AI in its HR practices, whether it be to triage, let’s say, a set of candidates, or maybe in a performance evaluation system at some later date. At this point in time we’re not doing that, but it’s certainly being considered, because there is some opportunity there.

Tom Temin
We’re speaking with Dave Salvagnini. He’s NASA’s new chief artificial intelligence officer. And are you also still the chief data officer? Again, in my view, the AI officer would be a notch higher than the data officer, because data is needed for AI. So, do you report to yourself in that manner?

Dave Salvagnini
I like how you phrase that. I am still the chief data officer here at NASA, so I have the role of chief data and artificial intelligence officer. When we looked at the opportunities, we evaluated separating the roles, so there would be a chief AI officer and a chief data officer, or combining them. And we found that there is a lot of synergy in combining the two roles. You don’t enable AI without data. And there are some very specific things that have to be thought about as you manage data over its lifecycle to ensure that that data is really prepared for AI, or what I sometimes refer to as AI-ready. So, for the time being, there’s a synergy in having those roles under a single leader.

Now, that said, I am scaling the team. We’ve just selected a deputy chief data officer who will handle a lot of the data officer functionality. We’re hiring some folks on the AI front as well, and I do have a deputy chief AI officer supporting me. So, much like a CIO, who has a broad portfolio of activity under their domain, I see myself having that kind of executive role, where I’ll have a team that’s looking after the data needs and a team that’s looking after the AI needs. In some cases, the synergy comes from, let’s say, governance. When you think about governance for AI, there’s also a connection with governance for data, and maybe there’s an opportunity for those to converge at some point. So, that’s where we are now. One of the things we recommended to our leadership, when we briefed out our recommendations, was that we will evaluate and reassess, certainly after six months, and then annually thereafter. So, there could be a change in our future.

Tom Temin
NASA being a highly federated organization, you have the different flight centers and the different labs that are pretty autonomous. What does the governance look like? How do you relate to the leadership within those different mission areas and large projects, such that everyone can get along, and downtown over the expressway isn’t dictating out to the West Coast?

Dave Salvagnini
No, that’s absolutely correct. Rightsizing the governance, building a governance model that really honors the value associated with the highly distributed nature of NASA, I think is important. Of course, I’ve spent my first year here at NASA doing that on the data side. So, the approach we’re taking is a federated governance model. It’s tiered, where, yes, at the headquarters level, there is enterprise governance for data, but the expectation is that mission organizations would stand up their own governance function that’s specific to their needs. And likewise for AI. So, what we will do is set conditions so that centers and mission organizations can create governance that’s tailored to their mission needs. A mission support organization is going to have different types of AI governance needs than, let’s say, our science mission directorate or aeronautics directorate. Giving the organizations the latitude to build the governance and optimize it for their needs has been the approach that I’ve been taking. And, candidly, I think it’s well received, because people certainly don’t want headquarters saying this is a one-solution-fits-all approach to how we’re going to do governance, take it or leave it. That’s not a way to win friends and influence people in any organization, but particularly NASA.

Tom Temin
I would think that you would have to have some centralized policies, such as for how data handling has to be done, and so on. Whereas there’s got to be some latitude if someone wants to reexamine old data on blown lift for some new insight. That’s where the expertise is, whereas the data stewardship is more universal in scope. Fair way to put it?

Dave Salvagnini
Absolutely. I mean, think about science data, as compared to wind tunnel test data, as compared to HR data, as compared to contracts data, as compared to cyber data, financial data, on and on, right? There’s so much nuanced difference in how you manage that data, and there’s just as much difference in the AI use cases that can apply to any of those domains. So, the approach is giving organizations latitude, but setting conditions from a guardrails perspective more broadly, such as, what are the right sets of training courses an employee should have if they’re going to be using a generative AI capability as part of their work? Or, what are the test and validation procedures that go into deployments of an AI capability?

And I’d like to take a moment to comment on that, because I often talk about AI here at NASA and the many years of AI experience that we have, but that experience is in very well-tested circumstances. If you think about our systems engineering lifecycle, where we have autonomous capabilities in space vehicles, like the Perseverance rover on Mars, or in our science mission directorate, the amount of rigorous testing that goes into the development of those systems is quite substantial. And in the case of science, the amount of peer review, validation and replication of results that goes into some of that work is quite substantial. So, there is a lot of verification and validation in our history of AI use. Of course, now, with generative AI in the hands of every employee, that is not the case. So, again, we have to equip people with an understanding of its appropriate use, responsible use, ethical use, and so on.

Tom Temin
Right. That’s a pretty daunting training challenge, then, because there’s technology, there’s policy, there’s data handling. And in the case of generative AI, you have to teach people how to prompt. It turns out that’s really where the magic is, not in all that soup of data that they’re prompting.

Dave Salvagnini
I completely agree, and I often talk about that, because there are errors of omission that can enter into the equation if you don’t prompt the AI correctly. So, it’s about equipping the workforce not only with that skill, but also with understanding. I was recently testing a prototype capability that a colleague had developed, and I intentionally asked the AI questions that I knew were very broad in scope, but the responses I got were narrow in scope. So, there were errors of omission. But me being somewhat of an evaluator in that particular scenario, I knew enough to ask those questions and verify whether the AI was giving me a complete answer or not. So, again, that’s part of the role of equipping the workforce and making sure they know they’re ultimately responsible for the completeness of an answer that might come from a generative AI capability.

Tom Temin
And as the chief data and AI officer, are you in the CIO channel at NASA?

Dave Salvagnini
I am, and I think there’s a lot of synergy there as well. But of course, I’m probably unlike some of the other organizations that are part of the CIO’s shop, in that I’m not directly delivering IT services; other parts of the CIO organization do that. However, I’m the functional advocate, and certainly the advocate for the deployment of AI tools across NASA. So, one of the things I’m working with the CIO, Jeff Seaton, on is accelerating access to AI tools, so that people can have, I won’t say the identical experience they have at home, but access to the tools. Because candidly, the point that I make is, the longer we don’t have those tools available within our protected boundary, the higher the risk we incur to our NASA data, because people will find creative ways of using these tools, whether they’re available to them or not. So, what I would rather do is put the tools in their hands within a protected boundary, nasa.gov, rather than have them go outside of that boundary and potentially do harm. Accelerating access to those tools is one of our main thrust areas.

Tom Temin
And as someone relatively new to the agency, is part of your job getting out and checking out what the different centers and labs are doing? I mean, NASA has, outside of the Air Force, some of the greatest toys in government.

Dave Salvagnini 
You know, the level of understanding I have about this organization and its vastness is pretty small, with one year in the seat. So, I absolutely have to get out. And I get out virtually, and I also get out physically. Building that network of trusted colleagues, and having people be open to sharing the work that they’re doing with me, is an important part of my role as a leader, and an important part of my leadership style as it relates to how I interface and interact with organizations across the agency. But this is an amazing organization, it’s an amazing culture, and the things that we do are looked up to all over the world. It’s just a pleasure to be here and be in this role. I can’t imagine being a chief AI officer anywhere else. I mean, this is an organization that wants to explore, that wants to push the envelope. The culture is eager to capitalize on the opportunities that AI brings, but it also understands the value of verification and validation testing; that’s very much part of our culture as well. So I think it’s a perfect blend of being responsible, and also looking to be innovative and trailblazing.

Tom Temin
So, artificial intelligence works best when the natural intelligence is pretty high.

Dave Salvagnini
I would agree.

Tom Temin
Dave Salvagnini is NASA’s new chief artificial intelligence and data officer. There’s much more to the interview. Hear it in its entirety at federalnewsnetwork.com/federaldrive. Hear the Federal Drive on demand. Subscribe wherever you get your podcasts.

Copyright © 2024 Federal News Network. All rights reserved.
