The Department of Homeland Security’s Science and Technology Directorate has established a strategic plan for how artificial intelligence and machine learning can help the DHS mission. It covers both the technology and people sides of this growing field. For details, Federal Drive with Tom Temin spoke with the acting deputy director of the Science and Technology Directorate, John Merrill.
Interview transcript:
John Merrill: The Science and Technology Directorate is the research and development arm of the Department of Homeland Security. Our goal is to support the Homeland Security enterprise mission of protecting the homeland and its people. We focus primarily on research and development for the operational components of DHS, like CBP, ICE, the US Coast Guard and TSA, and obviously other components as well.
Tom Temin: All right, and now you have a strategy for artificial intelligence, machine learning. A lot of agencies are looking at this. Tell us what’s basically in the strategy, and then we’ll talk about how you came up with it.
John Merrill: I think it’s important to step back a little bit and take a look at how we came to the point where we are today, because we need to understand how this aligns with the administration and its goals related to AI. Our AI/ML strategic plan aligns with the department’s DHS AI strategy, which was released back on December 3rd of 2020. The overarching principles in that document specified how S&T, the research arm of DHS, would support and address the challenges and opportunities that emerging AI and ML could pose to the department. We’re also guided by the principles set forth in Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, and Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government. Those executive orders were the basis for the development of the department’s AI strategy. And the Secretary of Homeland Security established five goals to govern the department’s approach to integrating AI into our missions in a responsible and trustworthy manner, and to successfully mitigate risk for the Homeland Security enterprise. Goal one is assessing the potential impact of AI on the Homeland Security enterprise. Goal two is investing in DHS AI capabilities. Goal three is mitigating risks to the department and to the homeland. Goal four is developing a DHS AI workforce. And goal five is improving public trust and engagement. Now, for the S&T AI and ML strategy, our plan laid out an actionable path for S&T to advise and assist the department in harnessing the opportunities in AI and ML. It’s important to state that the strategy’s goal is to build and apply expertise to help the department fulfill the game-changing promise of this technology, while also mitigating the inherent risks that come with bringing in new cutting-edge capabilities.
Tom Temin: Does it seek to put a lot of expertise in the components, or is Science and Technology itself planning to become kind of the repository of best practices that the department could draw on?
John Merrill: Essentially, all of the above. And the reason I say that is that our ultimate goal is to be the supporting agency for all the DHS components. If they’re in the process of implementing AI and ML capabilities, they can reach back to Science and Technology for subject matter expertise and talk to our scientists who are experts in AI and ML. If they want to implement a particular capability, they can reach back to us for advice or for any type of field testing, or, if they’re still just investigating, they can send the information to us so we can provide a technical assessment.
Tom Temin: We’re speaking with John Merrill, he’s acting deputy director of the Technology Centers Division of the Science and Technology Directorate at Homeland Security. And what do you do first here? What kind of resources do you need to make this policy, this strategy, real? Where will you start?
John Merrill: That’s a very good question, because AI and ML over the past several years have exploded within industry, within academia, and within all the national labs and the types of research they do. One of the biggest challenges we’ve run into is competing with them to bring on the expertise to help us out. So if the components, as I mentioned earlier, come asking for assistance, we need to be able to provide them with that expertise. However, because we are a little bit limited in our resources, we have to reach back and partner with the national labs or with universities to bring that expertise on board, whatever the need may be. That could be at a tactical level, working with CBP, the Coast Guard or ICE to evaluate the AI capabilities they are investigating, or providing subject matter expertise to conduct scientific evaluations.
Tom Temin: It sounds like you might be having a grant program then to bring in partners, say from academia, to evaluate some of the ideas that people bring.
John Merrill: Yes, we could potentially use the grant program. We also have a partnership with the National Science Foundation, and many of the national labs have specific focus areas when they’re trying to address AI and ML. We also have partnerships with FFRDCs, federally funded research and development centers, which we can reach back to for assistance as well.
Tom Temin: Yes, and it seems like your division at S&T could almost be a clearinghouse in some ways. If something is going on at point A in the government and you get a request from point B that is similar, you could get them together, perhaps, and be a connector.
John Merrill: Excellent question. Yes, one of the things that we love doing is networking and connecting the appropriate people with the appropriate subject matter experts, and we’ve done that on a number of occasions. We like it when our components reach back to us saying, we need some assistance here, do you have reach back into any areas that could potentially be of use to us? We work with the national labs, like Lawrence Livermore and Pacific Northwest National Laboratory, or, as I mentioned, FFRDCs like MITRE or MIT Lincoln Laboratory.
Tom Temin: Sure, I know them well. And talk more about the goal of improving public trust and engagement, because let’s face it, a lot of people that encounter Homeland Security in one form or another, it’s often not under the best of circumstances from their point of view. And so what do you mean by improving public trust and engagement using AI and ML?
John Merrill: That’s a very good question, and it’s probably one of the most important aspects of our work when we implement or teach AI and ML to our components, or when we work with other federal partners or with academia. Trustworthy and ethical use of AI and ML is extremely important. We need to maintain privacy and what we call CRCL, civil rights and civil liberties, and fully understand the actual impacts of whatever it is we want to implement with AI and ML. AI and ML can mean any number of things, but when it comes to the actual utilization of any type of data, we need to ensure that privacy is maintained. I’ll use facial recognition as an example. You need to ensure that however facial recognition is going to be used in AI and ML, privacy is maintained in how it’s actually utilized. So in a particular use case, from the point of collection, through the analytics, to the final output or outcome of what you’re trying to do, you need to ensure that privacy is maintained and that the civil rights and civil liberties associated with that particular case are maintained as well. That’s only one of thousands of use cases out there, and it is a very tough issue that we need to address. Obviously, we can’t address every aspect of privacy and CRCL. However, we do our best to address as many as we possibly can, by going through and looking at various use cases based on a number of different scenarios.
Tom Temin: All right. And I guess in some ways it’s parallel to the situation TSA had when they first deployed those machines that could see under clothing and produced images of people’s outlines. There was quite a campaign to make sure that people understood that those images were only used at that moment and then discarded. So that’s kind of a parallel type of convincing that sometimes you need to do.
John Merrill: That is absolutely correct. And where we can, we try to provide as much realistic data as possible. When it comes to AI and ML, when I talk about a particular use case, a use case is a decomposition of a particular scenario you might have on hand, like your comment regarding TSA and the use of imagery. In a similar manner for AI and ML, when you collect that data and bring it in, you need to be able to synthesize it in a manner that’s going to protect privacy and civil rights and civil liberties. It’s going to be extremely difficult to look at every aspect of it. However, we will do our best by running and conducting a number of tests to ensure that we maintain the privacy aspect.
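(For illustration only: a minimal Python sketch of the kind of privacy-preserving step Merrill describes, pseudonymizing identifying fields before any analytics run. The field names, salt and hashing choice are assumptions for this sketch, not details of any DHS system.)

import hashlib

# Assumed field names; a real system would define its own PII schema.
PII_FIELDS = {"name", "face_image_id", "date_of_birth"}

def pseudonymize(record, salt="rotate-this-salt"):
    """Replace PII fields with salted one-way hashes; keep other fields."""
    cleaned = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            cleaned[key] = digest[:12]  # opaque token; analytics never see raw PII
        else:
            cleaned[key] = value
    return cleaned

raw = {"name": "Jane Doe", "face_image_id": "img_4471", "lane": 3}
print(pseudonymize(raw))  # downstream analytics run on the pseudonymized record only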
Tom Temin: And are there any good AI or ML projects going on right now you can talk about?
John Merrill: Most of the AI and ML work that I’ve been involved with recently is associated with law enforcement. However, there is one program I am familiar with that I can talk about. I don’t know if you’ve heard of what we call the Next Generation 911 program. With the proliferation of 5G coming online, the amount of information that’s going to be pushed through to what we call the PSAP, the public safety answering point for 911 centers, is going to be pretty extensive. The dispatcher or the call taker will be receiving that information, and on the backend there is potentially an AI capability that would collect the information coming in from a number of sources. Let’s say there’s a major event going on and the dispatcher is getting all of that information. As a human, you cannot synthesize all that information at any one time. So on the backend, what we’re trying to do is collect that information, synthesize it and provide only the relevant information for the human to make a prudent decision, and to pass that information on to the first responders so that they also have it as they’re approaching whatever the incident may be.
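(For illustration only: a minimal Python sketch of the backend triage Merrill describes, collecting reports from several feeds, dropping duplicates and surfacing only the most relevant items for the dispatcher. All names, fields and scoring rules are assumptions for this sketch, not details of the Next Generation 911 program.)

from dataclasses import dataclass

@dataclass
class Report:
    source: str   # e.g. "caller", "sensor", "camera" (assumed feed types)
    text: str
    urgency: int  # assumed 1-5 score attached upstream

def triage(reports, max_items=3):
    """Drop duplicate reports, then surface the highest-urgency items."""
    seen = set()
    unique = []
    for r in reports:
        key = r.text.lower().strip()
        if key not in seen:  # identical reports from different feeds count once
            seen.add(key)
            unique.append(r)
    return sorted(unique, key=lambda r: r.urgency, reverse=True)[:max_items]

feed = [
    Report("caller", "Smoke on the second floor", 4),
    Report("sensor", "Smoke on the second floor", 4),   # duplicate feed
    Report("camera", "Crowd gathering at north exit", 2),
    Report("caller", "Person trapped in elevator", 5),
]
for r in triage(feed):
    print(r.source, "->", r.text)  # dispatcher sees only the distilled list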