The Biden administration's executive order on artificial intelligence handed an assignment to the National Institute of Standards and Technology.
The Biden administration’s executive order on artificial intelligence handed an assignment to the National Institute of Standards and Technology. NIST is supposed to develop guidance for testing and so-called red-teaming of AI models. NIST has a request for comments about what it’s supposed to do. One group responding is the Information Technology Industry Council. The Federal Drive with Tom Temin talked wit the Council VP Courtney Lang.
Interview Transcript:
Tom Temin And what exactly is NIST supposed to do? Because developing standards for artificial intelligence would take it 30 years.
Courtney Lang Yeah. So there’s actually quite a few things that NIST is tasked to do in the executive order and kind of a couple of buckets that I would break it into is the first bucket is indeed looking at ways in which they can develop standards, best practices and guidelines to help support the safe and secure development and deployment of AI systems. And so, what this looks like, in the executive order is they are tasked with developing a companion document to their AI risk management framework, focus specifically on generative AI systems. They are also tasked with developing, as you mentioned, standards for AI, red teaming or standards that can help organizations test and evaluate the capabilities associated with their AI systems. They are also tasked with taking a look at the existing landscape for content authentication. One of the directives in the executive order is focused on reducing the risks of synthetic content. And so NIST is supposed to be looking at kind of what the existing landscape is like for standards in that area. And then in the event that there are gaps that need to be filled, tasked with then developing additional guidelines and best practices there. And then finally in the executive order, they’re also tasked with developing a global engagement plan for international AI standards. So that’s kind of the final area that they’re tasked with looking at. So, it’s not a small number of activities that NIST is tasked with supporting under this executive order.
Tom Temin Right. And so, they’ve put out a call for comments, which they do. That’s their standard operating procedure pretty much broadly, not just industry, but anyone that wants to comment then can weigh in here.
Courtney Lang Yep. That’s right. The goal is to get as many perspectives and diverse viewpoints as possible so that they have a wide variety of input as they’re moving forward with these various directives under the executive order.
Tom Temin And what did ITI choose to comment on? What are your big concerns here?
Courtney Lang First and foremost, there is a lot to unpack. As we just mentioned in just the directives provided to NIST alone. And so, the RFI itself is pretty wide ranging. It’s asking for input on quite a lot of different areas, which I just elaborated on. And we try to respond to every area that we thought was relevant, which is quite a lot in the executive order. So, for example, we discussed, you know, how NIST might approach creating this companion document for generative AI risk management. And one of the things we really emphasized in that regard is the importance of working with international counterparts while they are doing this work, so that as this moves forward, they are remaining aligned and that approaches can be made interoperable to the extent possible with international counterparts who are also looking at, you know, developing similar types of frameworks or ways in which to manage risk associated with generative AI or advanced AI systems. So, this is one area that we specifically encourage them to look at. And as a part of that, really highlighted the important role that both developers and employers play within the AI value chain, because they were specifically interested in learning more about, you know, how transparency functions both within the value chain and then kind of externally when the system is deployed.
Tom Temin And just a quick question, though, about working with international partners. How do you make sure that we’re not aligned with China, which could care less about transparency or ethical deployment at all? Really?
Courtney Lang Yeah. So, when we’re talking about international counterparts, we’re really encouraging this to bring what they’re doing to international standards bodies. We think that these are the premier place to be kind of working on the development of these very technical standards so that they can be adopted widely, and they are globally recognized, and they’re really industry driven. And so multiple jurisdictions are involved in standards development bodies. There is a set of rules that those bodies follow. And so, what’s really interesting about the standards development process is that pretty much no standard goes into that process and remains untouched coming out. So, although, you know, you have multiple different countries engaged there, it’s really a meeting of the minds. And really what comes out is kind of the best of the best ideas that are put in. So, in that way, you know, we really encourage participation there or meeting aligned with. With folks that do have those kinds of like-minded ideas and then are kind of allied in that fashion.
Tom Temin We are speaking with Courtney Lang. She’s vice president of policy, trust, data and technology for the Information Technology Industry Council. And what about standards for AI? I mean, it’s such a wide-open field with so many different applications. What areas of it can standards have any meaning at this point?
Courtney Lang There’s actually a lot of areas right now where standards can be really helpful, especially because, as you know, we are in a rapidly evolving field and it feels like, you know, every week there’s something new that’s happening. You know, things are changing rapidly. And so, you know, one of the areas that we’ve really seen an increased focus on lately is related to red teaming for AI systems. And this is definitely an area where standards can be really helpful.
Tom Temin There are what is red teaming anyway?
Courtney Lang Red teaming is really something that I’m familiar with primarily from a cybersecurity context. Right. So, you have either an internal team of employees or, you know, kind of an external organization that a company will hire in order to break or hack into a system in a way that would reflect, you know, an attack by a malicious actor. And the goal of that is to find, you know, vulnerabilities or security flaws so that they can be patched before that system is, you know, placed on the market. Oftentimes this is a continuous process, but sometimes it’s not. When we’re talking about AI. This is an area that is, I think still being kind of figured out, because we’re really taking what is something that has been very traditionally cybersecurity oriented and now talking about it in a context that is much broader than cybersecurity. Of course, organizations are going to want to test their system for security flaws. But some of the things that we’re talking about in the AI context are broader than just security, right? You’re talking about the ways in which they might impact people’s human rights. You’re talking about, you know, ensuring that biased outcomes are mitigated. You are making sure that the model is, you know, secure against malicious attacks or, kind of data input attacks, things of that nature. So, it’s somewhat broader than just what you think of in the cyber cybersecurity context. But what that means when it comes to standards is that right now, there are organizations that might be undertaking different types of testing, different types of evaluation. Sometimes it’s consistent, sometimes it may not be. And I think right now we’re still working towards finding a common agreement as to what exactly red teaming looks like in the AI context. And so that’s one area where standards are going to be really helpful moving forward.
Tom Temin I imagine one area for standards could also be, say, how you make sure that your algorithm is consistent in its output over time, because that’s one of the big issues is drift. And maybe there are ways that you can ensure in an industrial setting that that drift is kept within some sort of parameter.
Courtney Lang Yeah, absolutely. I mean, I think another area that standards will be particularly helpful is related to measurement. I think one of the challenging things in AI is really figuring out how you measure not only kind of various outputs and impacts of those outputs, but you know, the risks associated with it. So having those metrics and really being able to delineate, as you said, you know, what a reasonable range might look like for that kind of thing. Or alternatively, you know, how you really constitute what risk looks like is going to be something that’s really helpful to actually operationalizing a lot of the things, for example, that are in the current AI risk management framework.
Tom Temin I mean, in regular software, the logic never changes. The statements of logic that are executed in hardware and give you your outputs never change. Sometimes for software that runs for 50 years, it might put it on a new machine, but it’s the same logic. Artificial intelligence, by definition, changes the logic in the software. And is there a way of measuring what has changed as a way to understand how bias might be coming in, or is there a way to say limit it to only so many lines can be changed, or only this part of the algorithm can be changed as it learns through new data?
Courtney Lang So that I think, is an area that is still, you know, being explored. And some of the international standards development body is about, you know, how you measure kind of the amount of change things of that nature. I will say one of the standards that actually recently came out was the ISO 42,001 series. And this is an overarching kind of AI systems management standard, and it really offers organizations a framework to look at a lot of these overarching questions. So, as they were thinking about, you know, what kind of framework they need to put in place for governance, they have something that they can work with. And then from there, figure out what sorts of components they need to actually leverage in order to address things like, you know, potential bias, potential, you know, concerns related to, you know, how the model is evolving if it’s not supposed to evolve in certain ways, things like that.
Tom Temin And a final question, just from your comments that I read, there’s something called multiple content authentication techniques. And I was just curious, what is that?
Courtney Lang Yeah. So, one of the things that we’ve been looking at a lot in conjunction with our member companies is kind of this. Concern related to the proliferation of myths and disinformation, particularly as AI generated content becomes much more widely accessible and really understanding, you know, when and how that content is, you know, generated and making sure that as an end user, for example, you’re aware, you know, if and when that content is AI generated. And so, we put out a paper recently on AI generated content authentication techniques. And the overarching thing that came out of that was really that, you know, watermarking has been talked about quite a lot in this conversation as kind of the solution for content authentication. And I think what we found as we were digging into this topic a little bit further, is that there are a lot of other content authentication techniques that work hand-in-hand with watermarking. And so, as NIST is exploring this landscape in the context of their, you know, tasking under the executive order, we’ve really encouraged them to take into account this fact, right, that there is watermarking. But then there is also, you know, things like metadata auditing that really need to go hand in hand with watermarking in order to make it as effective as possible. What’s also interesting is that watermarking can take place at different points in the value chain. And so, at different points, you know, watermarking might be appropriate, but that at a different point in the value chain you may want to use a different authentication technique. And so, you know what we’ve really encouraged NIST to do is kind of catalog all of these various content authentication techniques. And then from there figure out, okay, where are there gaps, where do we need to make more progress on and then move forward. But really, the point of mentioning the multiple content authentication techniques was to highlight that watermarking is not the only solution. You do have things like provenance tracking, like metadata auditing, like even human authentication in certain instances where it makes sense, that should be paired with watermarking Writ Large.
Copyright © 2024 Federal News Network. All rights reserved. This website is not intended for users located within the European Economic Area.
Tom Temin is host of the Federal Drive and has been providing insight on federal technology and management issues for more than 30 years.
Follow @tteminWFED