The National Institute of Standards and Technology (NIST) has a long list of companies on a special artificial intelligence advisory group called the AI Safety Institute Consortium. Members advise NIST on a variety of matters. And among the latest members: the Human Factors and Ergonomics Society. For details, the Federal Drive with Tom Temin spoke with the society’s lead on outreach and government relations, President of SA Technologies, Dr. Mica Endsley.
Interview Transcript:
Tom Temin We should also point out that you are a former chief scientist of the Air Force. So you have been involved in automation and in human augmentation and human-machine relations, let’s say, for quite a while.
Mica Endsley That’s correct.
Tom Temin Let’s get to the Human Factors and Ergonomics Society. What is that all about? And what’s your experience there?
Mica Endsley The Human Factors and Ergonomics Society is a professional organization. It’s a leading professional society for human factors and ergonomics professionals in the United States and really worldwide. We focus on developing and designing systems to support human performance. That’s really important in a lot of arenas, such as driving, flying, military systems and health care. In any area where you don’t want people making mistakes, it’s really important to make sure that the technologies they’re using, the systems they’re using, are designed in such a way that they’re going to support the way people process information and make decisions, as opposed to systems which actually play to our weaknesses.
Tom Temin So it sounds like artificial intelligence, which is widely touted as a way to augment human beings in their thinking and their decision making, often in adjudication or judgment situations, but maybe also in kinetic ones like flying an airplane or driving a bus, has the potential to do a lot of good, but also maybe to ruin things if you’re not careful.
Mica Endsley Yes. I’ve been working on the role of artificial intelligence and how it affects human performance for about 35 years now, ever since we started looking at how we would put it into cockpits early on in a program called Pilot’s Associate back in the 80s. And what we found was that while AI can perform certain tasks or do certain things, it also has the problem of putting people out of the loop, just like traditional automation does, when it makes you more removed from how the system is working and what’s going on in that situation. And that lack of engagement, we find, reduces people’s situational awareness. It reduces their ability to understand what’s happening and then to be able to oversee the automation effectively and jump in when needed. And that’s the real challenge that we see with AI.
Tom Temin Right. So it could be an advisor but, again, thinking of kinetic situations or motion situations, not necessarily actually perform the function. I’ll just make up an example. Suppose you had a sensor system that knew when ice was about to form. Well, it could automatically turn on the de-icers, but maybe it’s better for the pilot to at least be warned and turn them on, so that they don’t fall asleep while their wings ice up.
Mica Endsley Yeah, it turns out to be really interesting as to how it works. If you automate the actual control task, so that might be actually flying the plane, for example, or steering the automobile, what we find is that puts people the most out of the loop in terms of awareness of the sequence of operations. So they have the biggest problem of understanding when something’s gone wrong if you leave them out of the loop. So leaving them in the loop, so they’re actually performing tasks, is good. A lot of systems are designed to be recommender systems, like you say. They provide guidance and recommendations, and people think, oh, that’ll be better, because the person will be more in the loop, more understanding what’s going on. And what it turns out is that if the system is correct, then that’s true. You get a lot of benefit. But when the system’s incorrect, you actually lead people astray more. So if it miscategorizes a situation, for example, or you’re reliant on it and it’s incorrect, people are much more likely to be making a mistake than if you just left them alone. So even simple recommender systems can have a negative effect on human performance unless they’re absolutely perfect, which they really usually aren’t.
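Endsley’s point about imperfect recommenders can be made concrete with a little arithmetic. The following is a minimal sketch of a toy model in which a person follows the system’s recommendation with some probability and otherwise decides unaided; every number in it is an illustrative assumption, not data from the interview or from NIST.

```python
# Toy model of the recommender effect described above. All figures are
# invented assumptions for illustration, not data from the interview.

def expected_accuracy(p_human: float, p_ai: float, p_follow: float) -> float:
    """Expected accuracy when a person follows the recommendation with
    probability p_follow and otherwise decides unaided."""
    return p_follow * p_ai + (1.0 - p_follow) * p_human

P_HUMAN = 0.85  # assumed unaided human accuracy
for p_ai in (0.95, 0.75):        # recommender better / worse than the human
    for p_follow in (0.3, 0.9):  # low vs. high reliance on the system
        combined = expected_accuracy(P_HUMAN, p_ai, p_follow)
        print(f"AI {p_ai:.2f}, reliance {p_follow:.1f}: "
              f"combined {combined:.3f} vs. human alone {P_HUMAN:.2f}")
```

With these assumed numbers, heavy reliance on a 95%-accurate system lifts combined accuracy above the 85% human baseline, while the same reliance on a 75%-accurate system drags it below, which is the trade-off she describes.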
Tom Temin We’re speaking with Mica Endsley. She’s president of SA Technologies and the lead on outreach and government relations for the Human Factors and Ergonomics Society. Let’s talk about that NIST consortium. What is the Human Factors and Ergonomics Society bringing to it? What is it they’re seeking from you as they put this consortium together and learn what they need to learn? I guess this is all ultimately congressionally mandated.
Mica Endsley Yes. So Congress and the White House have been very concerned about what’s happening with AI. They see this as being a potentially very transformative technology in terms of how it affects our society, in terms of employment, in terms of legal issues such as liability. And really, even where we’re looking at just adding AI to help people in a job, it can dramatically affect human performance, in terms of people’s ability to do those jobs. So they are looking to the National Institute of Standards and Technology for guidance on this. It’s a very technical issue. And NIST has set up this AI safety consortium, the AI Safety Institute Consortium. So this consortium includes people from industry, from academia, people who’ve been doing research on this, as well as non-profits that are heavily involved in issues such as AI performance and ethics. And they’re bringing all of these groups together to really provide the best guidance we can on what are the critical issues that need to be considered here, and how do we formulate guidelines and testing standards so that we can make sure that AI is implemented safely in our society?
Tom Temin Maybe discuss for us the connection, then, what the society brings in terms of ergonomics. Because ergonomics, I think people tend to think of it as, well, a keyboard bent so you don’t get carpal tunnel syndrome or something. I think that’s probably the most hackneyed example known to man. But how does it rise to the level of interacting with AI?
Mica Endsley So human factors and ergonomics actually covers a wide range of areas where people interact with systems. You can talk about the physical technology, the ergonomics. So people are used to thinking about ergonomic chairs or ergonomic workstations, and there we’re trying to prevent physical injuries. But human factors has long been involved in other kinds of issues, perceptual performance and cognitive performance, as well. So we have a long history of working on issues like automation and artificial intelligence, and how people interact with these technologies to affect their decision making and their performance in doing a job. This goes back to World War Two, when people were flying airplanes and these airplanes were falling out of the sky, and they realized that, oh, they hadn’t really thought about how they needed to design the gauges and dials in the cockpit so that people could understand them and make very rapid decisions in complex, stressful situations. And that was really the birthplace of the whole human factors and ergonomics movement that has spawned decades of research now.
Tom Temin Yeah. I remember a truck manufacturer, many years ago, advertised the fact that on all of the many gauges in a semi, and a semi has a lot of dials, zero or normal was at the same relative position. So it was much easier for the driver to see when something was out of whack than if they were all kind of randomly placed and zero could be anywhere on the circle.
Mica Endsley That was a very early human factors discovery. They did research to actually establish that and show that it really improved human performance. So very small things about how we design technology can make a big impact on how people interact with it. That’s true of gauges and it’s true of things like artificial intelligence. So we’re really focused on what we can do to improve the transparency and understandability of these systems, which it turns out is really important for how people interact with them.
Tom Temin And in terms of human performance, sometimes people have expectation biases, or they think they know something and figure out the answer in advance. I was talking to one federal practitioner the other day whose agency is applying AI in a lot of areas. And I asked, well, does this simply confirm the decisions the decision makers would make, or does it sometimes throw up something from the AI that makes them say, gosh, I never looked at it that way, and it’s right and I was wrong? There’s that factor also, fair to say?
Mica Endsley There is. And what we have found is that traditionally, the way that the AI or the automation gets implemented is it comes in up front, before the decision process. And when that happens, you actually get decision biasing. So if the automation is correct, and it’s 100% reliable and accurate, then that’s a real benefit. But when it’s incorrect, it actually can lead people the wrong way. So they become more reliant on the system, and they’re more likely to make errors than if you had left them alone. When you implement it the other way, as you describe, if it comes in at the end of the decision process and says, well, did you think about this or this or this, then actually that can help people in many ways and help them to maybe consider a broader range of things than they did. So even where in the process you implement the AI makes a difference.
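The sequencing Endsley describes can be sketched structurally. In the sketch below, the recommend, decide and critique callables are hypothetical stand-ins (there is no real system here): one pipeline has the AI recommendation arriving before the human judgment, where it can anchor the decision, and the other has it arriving afterward as a critique.

```python
# Structural sketch of the two orderings discussed above. The recommend/
# decide/critique functions are hypothetical stand-ins, not a real system.

def ai_first(case, recommend, decide):
    """Recommendation arrives before the human judges: risk of anchoring."""
    suggestion = recommend(case)
    return decide(case, suggestion)

def human_first(case, decide, critique):
    """Human judges first; the AI then raises overlooked considerations."""
    draft = decide(case, None)
    for concern in critique(case, draft):
        draft = decide(case, concern)  # revisit in light of each concern
    return draft

# Trivial stand-ins just to make the sketch runnable:
recommend = lambda case: "approve"
decide = lambda case, hint: hint if hint else "deny"
critique = lambda case, draft: []  # raises no concerns in this toy run

print(ai_first("case-1", recommend, decide))    # "approve" (anchored)
print(human_first("case-1", decide, critique))  # "deny" (own judgment)
```

The code is identical in what it computes except for ordering, which is the point: the same recommendation carries different weight depending on whether it precedes or follows the human’s own judgment.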
Tom Temin So it sounds then like it’s incumbent on the operators of the AI system to make sure that it’s trained properly so that it doesn’t start producing results that are simply not applicable, and to monitor that over time to make sure it stays in band.
Mica Endsley Well, that’s one of the real challenges: the people who are actually using the AI know very little about how it works, or how it was developed, or what situations it’ll work for or not work for. Even the developers of the AI, who should have that responsibility, oftentimes don’t know what’s going on under the hood. Because the way the AI works is it’s basically just a pattern matcher. It’s trained to recognize patterns and it’ll execute those patterns, but it’s very opaque. It’s a black box in terms of how it works. So the developers don’t always know. You can have biases creep in, which they’ve seen in a lot of employment screening systems, for example, where it might be biased against women or minorities, and the developers didn’t know that and the people who are using it don’t know that. And that’s a real challenge. And that’s where we have to make it more transparent in terms of what it’s doing, how it’s operating, how it’s working. Those become really important considerations.
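One concrete way auditors probe a screening system for the kind of bias Endsley mentions is to compare selection rates across demographic groups, for example against the EEOC’s four-fifths heuristic. The sketch below uses invented records purely for illustration; it is one possible check, not the consortium’s method.

```python
# Simple selection-rate audit of a screening system's outputs, using the
# four-fifths heuristic. The applicant records are invented for illustration.
from collections import Counter

# (group, was_selected) pairs, as a screening system might emit them
outcomes = [("women", True), ("women", False), ("women", False),
            ("men", True), ("men", True), ("men", False)]

totals, selected = Counter(), Counter()
for group, passed in outcomes:
    totals[group] += 1
    selected[group] += passed  # True counts as 1, False as 0

rates = {g: selected[g] / totals[g] for g in totals}
highest = max(rates.values())
for group, rate in sorted(rates.items()):
    flag = "OK" if rate >= 0.8 * highest else "possible adverse impact"
    print(f"{group}: selection rate {rate:.2f} -> {flag}")
```

A check like this only looks at outputs; it says nothing about why the black box behaves that way, which is why Endsley argues for transparency in how the system operates, not just in its results.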
Tom Temin And real briefly, the consortium interacting with NIST: Do you have in-person meetings? Are there giant Zoom meetings? How are you working with NIST in reality?
Mica Endsley Yeah, this is just starting up, so we’ll see how that goes. But I think the majority of the work is going to be done virtually, across a Slack channel where people are exchanging information and ideas. We’re very excited about being part of that consortium. It looks like it’s got a lot of organizations, from people who are actually developing AI for different kinds of applications to academic organizations and safety institutes, all of whom have different concerns and different pieces to bring to the picture. And we’re very excited to be a part of that.