DARPA honors artificial intelligence expert

Artificial intelligence requires human brainpower. That is why Dr. Siegelmann has been on loan from UMass to the Defense Advanced Research Projects Agency.

The irony of artificial intelligence is how much human brainpower is required to build it. For three years, our next guest was on loan from the University of Massachusetts to the Defense Advanced Research Projects Agency. There she headed up several DARPA artificial intelligence projects. Now she’s been awarded a high honor, the Meritorious Public Service Medal. Dr. Hava Siegelmann joined Federal Drive with Tom Temin for more.

Interview transcript:

Tom Temin: Dr. Siegelmann, good to have you on.

Dr. Hava Siegelmann: Thank you for having me.

Tom Temin: Well, what did you do with those three years at DARPA? Because they have many, many programs, I guess, in the AI field. Tell us about your work there.

Dr. Hava Siegelmann: Sure. I actually was there in two different offices. The first office, the one I arrived at, was MTO, for hardware, and then after about two years I moved to I2O, which is information innovation. But I’ll tell you about the things I did. I think the main piece is a program called Lifelong Learning; we call it L2M for short. So let me try to explain to you, in short, the crux of this problem. We hear AI everywhere, you know, everybody’s doing something with AI, classifying images, something of this kind. But perhaps the pinnacle of today’s AI is the self-driving car, where you need all kinds of things. There is a motor part, and the visual parts, the sensors and motor coming together, and it’s done with very smart programming and with learning from many, many examples. But even when we look at the self-driving car, which is the top of the top, and they’re not widely used yet, we still see they fail. Once in a while there are accidents, and if we see accidents when so few cars have some part of self-driving, just imagine what happens if we have many of them. And this is really the weakness of current AI technology. The weakness is that you code and train this AI, and once you field it, it’s frozen. So it can have experience driving outside, it can have experience going to different places, but it is in fact frozen from the moment you fielded it. So you know, we sometimes hear sentences like "AI gets better with experience." This experience is not its own experience; it’s the experience of whoever prepared the data in advance.

Tom Temin: Got it. Okay, because often what you hear from the people touting artificial intelligence is its adaptability. But it sounds like what you’re saying is all of that adaptation has happened before it’s actually fielded.

Dr. Hava Siegelmann: We actually call it the training time or the training phase, and the fielding time or the fielding phase. With AI, sometimes people like to kind of not make it as clear, or sometimes it comes out not as clear, but in fact, after the training phase is over, once it’s fielded and it starts computing, that’s it. And the main flaw is that if you want something for real-world application, you cannot have it trained on every eventuality. So we see AI does very well with computer games, when the screen is exactly the same; try to change the screen a little bit, or the brightness, and it may fail. So when you take it outside, that’s why so much effort is put into self-driving cars: they still work pretty well, you know, many times, but they’re not safe. What L2M came to solve is really the safety issue that we have with AI today. If you really want to use AI, you cannot rely on whatever was trained half a year ago and go with that.
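To make the training-phase versus fielding-phase distinction concrete, here is a minimal, hypothetical sketch (illustrative only, not DARPA or L2M code): a tiny model whose weights are fit once on pre-collected data and then frozen at deployment, so nothing it encounters in the field changes its behavior.

```python
import numpy as np

# --- Training phase: all learning happens here, on data prepared in advance ---
X_train = np.array([[0.0], [1.0], [2.0], [3.0]])   # pre-collected examples
y_train = np.array([0.1, 1.9, 4.2, 5.8])

# Fit a tiny linear model by least squares; this is the only time weights change.
X_aug = np.hstack([X_train, np.ones((len(X_train), 1))])
weights, *_ = np.linalg.lstsq(X_aug, y_train, rcond=None)

# --- Fielding phase: the weights above are frozen ---
def predict(x: float) -> float:
    """Inference only: uses the frozen weights and learns nothing from x."""
    return weights[0] * x + weights[1]

# Conditions seen in the field (e.g., inputs far outside the training range)
# are handled with whatever was learned before deployment, never updated.
print(predict(10.0))
```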

Tom Temin: So the Lifelong Learning project then was a way to have artificial intelligence algorithms keep learning once they’re fielded?

Dr. Hava Siegelmann: Yes. So this is really kind of a major advancement. If L2M comes in a self-driving car, it’s ready to go. But then it actually uses what it sees; it uses the road it’s driving on, the experience that it has, incorporates them, and actually uses them right away in the next steps. And there is another part that is very interesting: L2M becomes more and more of an expert as it goes. So if you have a car and it drives on snow and ice, with more driving it becomes better. It actually uses what it does, incorporates it, and becomes better. And hence, you hardly get to these points where it wouldn’t know what to do, because it already did all kinds of things that led it to that point. So it’s a very robust type of AI. And you know, I kind of gave the example of a self-driving car, but really, the way we built L2M, it is the core technology, and only later are we talking about applications. It is the same core technology that we can use for financial prediction, for medical testing, for planes. I mean, just imagine: people come with their coffee order, or whatever it is, images, now identifying viruses; there is a little bit of change over time, and the system isn’t worth anything anymore.
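As a rough illustration of "keeps learning after fielding," here is a generic online-learning loop (a simple incremental gradient update, not the actual L2M methods, which span many research approaches): each observation the fielded system makes immediately nudges the model, so it can track conditions that drift over time, such as changing road surfaces.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = 0.0, 0.0   # whatever the pre-deployment training produced
lr = 0.05         # small learning rate for incremental, in-the-field updates

def predict(x: float) -> float:
    return w * x + b

# Fielding phase with lifelong updates: every observed (x, y) pair from the
# environment refines the model instead of being thrown away.
for step in range(1000):
    x = rng.uniform(-1.0, 1.0)
    true_slope = 2.0 + 0.001 * step              # the environment slowly drifts
    y = true_slope * x + rng.normal(scale=0.1)   # what the system actually observes

    error = predict(x) - y   # how wrong the current model is on this observation
    w -= lr * error * x      # online gradient step on squared error
    b -= lr * error          # the model keeps tracking the drifting environment
```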

Tom Temin: And I imagine there must have been some military applications; maybe you can’t talk about them. But the DoD must have had an interest in self-learning systems, because I know that one of the big concerns with autonomous vehicles and autonomous weapons is their safety, and the auditability of the decisions they make. So was that part of this whole effort?

Dr. Hava Siegelmann: Yes. And we have another program, maybe relevant to the question, that we call GARD, and it stands for Guaranteeing AI Robustness against Deception. So not only are you strong because you’re learning online, but even if people come to deceive you, you’re focusing on what’s real and what’s the deception, and providing this robustness. That is a second program.
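To illustrate what "deception" of a model can look like in practice, here is a textbook fast-gradient-sign-style adversarial perturbation against a hypothetical frozen classifier (a standard illustration of the problem, not the GARD methodology): a small, targeted nudge to the input flips a confident prediction.

```python
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical frozen classifier: logistic regression with fixed weights.
w = np.array([1.5, -2.0, 0.5])
x = np.array([0.2, -0.4, 1.0])   # a legitimate input whose true label is y = 1
y = 1.0

p_clean = sigmoid(w @ x)         # confident, correct prediction (about 0.83)

# "Deception": nudge the input in the direction that most increases the loss.
grad_wrt_input = (p_clean - y) * w        # gradient of the log-loss w.r.t. x
eps = 0.5
x_adv = x + eps * np.sign(grad_wrt_input)

p_adv = sigmoid(w @ x_adv)       # drops below 0.5 despite a small input change
print(p_clean, p_adv)
```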

Tom Temin: And how did you conduct all of this program? Did you have staff at DARPA, did you bring in students and interns? I mean, how do you arrive at this kind of success?

Dr. Hava Siegelmann: Yeah, the way that DARPA works, we have a program manager, and around every program manager there are support staff who come from various places. So they already come ready. You can hire your own if you want; you know, I hired a person of my own too. But most of the people are already there. And you know, you train them like you train people in a lab on new ideas, and they grow with you and with the program. You definitely need a group of people to do that.

Tom Temin: And so will some of these discoveries that you made, I guess these are mainly software discoveries...

Dr. Hava Siegelmann: Yeah, yeah. Just a comment here: Lifelong Learning is actually a very large program, not only in the sense of funding but in terms of the number of people who joined; it’s a very large program. It’s one of the largest programs there are. So we needed a good-sized group at DARPA to help us with that. And also, you know, we kind of tried to get the top researchers in the United States, and also some from outside the United States. So for L2M, really, the credit goes to all these top researchers who came together to work on it together.

Tom Temin: And this was primarily a software effort, correct? I mean, the activity that you led and conducted was coding and then feeding data on the fly and seeing if it reacted properly. I mean, I’m simplifying, but is that basically what you were doing?

Dr. Hava Siegelmann: It’s nice that you’re saying that. So L2M is a program that comes out of MTO, which is the hardware office. The program itself is focused mainly on software, but the way we think about it is already about the type of hardware that we’ll need next to run it.

Tom Temin: What are the hardware challenges for AI? Because most people, I think, assume it just runs like any other software on a processor.

Dr. Hava Siegelmann: So the current type of hardware for AI is the kind that can multiply matrices very fast, so the deep learning, the deep neural network, can be done fast just by crunching a lot of numbers. When you talk about Lifelong Learning, you think about a different kind of hardware, hardware that can support changes. We call it Lifelong Learning hardware. And we’ve actually been thinking and designing and in touch with companies about this hardware since the very beginning of Lifelong Learning.
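For context on why today’s AI accelerators are built around fast matrix multiplication, here is a toy forward pass through a small, hypothetical already-trained network: nearly all the work is in the two matrix products, which is exactly what current chips optimize, while a lifelong-learning system would also need hardware support for rewriting the weights in the field.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, already-trained weights for a tiny two-layer network.
W1 = rng.standard_normal((128, 64))   # layer 1 weights
W2 = rng.standard_normal((64, 10))    # layer 2 weights

def forward(x: np.ndarray) -> np.ndarray:
    """Inference is dominated by matrix multiplies, which AI chips accelerate."""
    h = np.maximum(x @ W1, 0.0)   # matrix multiply followed by a ReLU
    return h @ W2                 # another matrix multiply

batch = rng.standard_normal((32, 128))   # a batch of 32 input vectors
outputs = forward(batch)                 # shape (32, 10)

# A lifelong-learning system would additionally keep modifying W1 and W2 after
# deployment, which asks more of the hardware than fast, read-only multiplies.
```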

Tom Temin: And for all of this, you have received the Meritorious Public Service Medal. That’s the highest civilian honor from DoD. What is that like?

Dr. Hava Siegelmann: You know, I was very, very surprised and, you know, I’m kind of proud, in a good way, because I love this country, you know, like all of us, and I’m so happy that I managed to take the expertise that I have and actually turn it into something so positive.

Tom Temin: And now that you are returning to the University of Massachusetts, will you be a software professor or hardware or L2M?

Dr. Hava Siegelmann: I’ve been directing a lab there for many years, where I’m looking at the combination of neuroscience and computer science. And I kind of go back to that, but in a way, I have a new type of knowledge that I’m bringing with me. I think I’ll be able to offer graduate students the type of support for learning about the kind of AI that is just starting to exist.

Tom Temin: In fact, I wanted to also add that we had an interview with some researchers recently who were looking at the motivations of recent AI grads, and the question was, how can public service be made to appeal to these people once they graduate with their AI degrees?

Dr. Hava Siegelmann: Yes, you know, I can say that I had an unbelievable time, in the sense that I worked with people who are not only top researchers but really want to use what they do for good. So people want to do different things with their brains. And if you focus mainly on money, on immediate money, then there is one way, but when you go to the federal world, you meet so many people and so many companies that, if you really want to go in this direction, you can do it. And you know, just to say on the side, I’ve been consulting with major tech companies for years already from UMass, and the universities allow it. So there are many ways to bring AI and the good of AI here, but I’m definitely a spokesman for public service.

Tom Temin: Dr. Hava Siegelmann is a former program manager at DARPA, now back to her professorship at the University of Massachusetts. Thanks so much for joining me.

Dr. Hava Siegelmann: Thank you very much Tom.
