Artificial intelligence: The pit bull of computing

The key word in artificial intelligence is "artificial." It needs people to work right.

The mayor of Munchkinland offered a rigorous set of criteria for confirming the Wicked Witch of the East was really dead. He wanted to verify it “legally, morally, ethically, spiritually, physically, positively, absolutely, undeniably and reliably.”

That’s not a bad filter for artificial intelligence outcomes, either.

AI is sufficiently established that we can call it by its acronym. Researchers at IDC expect government and industry spending on AI to grow at a compound annual rate of nearly 50 percent over the next few years, reaching $52 billion in 2021. By way of contrast, they predict worldwide cloud computing spending will be in the $160 billion range. That is, AI spending is of the same order of magnitude as cloud — big.

AI has seeped into government, and not just in pilot projects. It’s doing real work in a diverse range of agencies.

Thanks to concerted outreach by researchers and major AI services vendors such as Accenture and Deloitte, the idea of ethics in AI has moved front and center. With a purely technical approach, AI can produce accurate but wrong results — accurate according to the data on which the algorithm acts, but wrong in the real world of people and policy.

Ethics in this context boils down to a simple idea: Train your algorithm so that it matches what fair-minded, knowledgeable people would know or do. Train it so that it produces policy-compliant outcomes. The literature is full of examples. Adelaide O’Brien, research director for IDC Government, gave one. With certain data sets, an algorithm might conclude that all doctors are men and all nurses are women. That notion wouldn’t occur to a person, but it could to an AI algorithm given limited training data.
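To make that concrete, here is a minimal sketch of how skewed data becomes a skewed model. The data set is hypothetical and deliberately lopsided, and the “model” is just a majority-label lookup, a stand-in for what any frequency-driven classifier effectively learns.

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed training set of (occupation, gender) pairs.
training_data = (
    [("doctor", "man")] * 98 + [("doctor", "woman")] * 2
    + [("nurse", "woman")] * 97 + [("nurse", "man")] * 3
)

# "Training": tally the genders observed for each occupation,
# the way any frequency-driven learner effectively does.
counts = defaultdict(Counter)
for occupation, gender in training_data:
    counts[occupation][gender] += 1

def predict_gender(occupation: str) -> str:
    """Return the majority gender seen for this occupation in training."""
    return counts[occupation].most_common(1)[0][0]

print(predict_gender("doctor"))  # "man"   (reflects the skew, not the world)
print(predict_gender("nurse"))   # "woman" (same problem, other direction)
```

The model is perfectly “accurate” against its own data and still wrong about reality, which is exactly the failure mode O’Brien describes.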

In that sense, AI is like a pit bull. Some grow into adorable goofs, rolling on their backs if you glance their way. Others become snarling, biting killers. It depends on how they were trained.

IDC advises agencies to take an AI approach based on three vectors. The first is that everyone needs to be involved in an AI initiative. Don’t just toss it to the IT shop. Applying AI calls for HR, legal, policy and, of course, program people to put in their two cents. IDC also includes the inspector general, the security people and central agency management. There’s that much at stake.

The second vector concerns data management. It requires what IDC describes as a “robust data foundation, data governance, and analytics.” Train AI algorithms on data sets that are both diverse and relevant.
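Acting on that advice can start with something as simple as a balance check before training. The sketch below is illustrative only; the function name and the 80 percent threshold are assumptions, not an IDC prescription.

```python
from collections import Counter

def audit_balance(values, threshold=0.8):
    """Flag any value that accounts for more than `threshold` of the records.
    A dominant value hints the sample is too narrow to train on."""
    tally = Counter(values)
    total = sum(tally.values())
    return {value: count / total
            for value, count in tally.items()
            if count / total >= threshold}

# Hypothetical gender column for the "doctor" records in a training set.
doctor_genders = ["man"] * 98 + ["woman"] * 2
print(audit_balance(doctor_genders))  # {'man': 0.98}: too skewed to learn from
```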

Vector three calls for accountability and transparency, both in the stated aims of an AI program and in how it works. What happens when the code executes should be explainable and auditable.
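One concrete form of that auditability is a thin wrapper that records every automated decision alongside its inputs and the model version that produced it. The names here are illustrative assumptions, not a standard; the point is that each output can be traced and re-examined later.

```python
import json
import time
import uuid

def audited_predict(model, features, model_version, log_path="decisions.log"):
    """Run a prediction and append an audit record, so each automated
    decision can be traced back to its inputs and its model."""
    prediction = model(features)
    record = {
        "id": str(uuid.uuid4()),         # unique handle for this decision
        "timestamp": time.time(),        # when it was made
        "model_version": model_version,  # which model made it
        "inputs": features,              # what the model saw
        "output": prediction,            # what it decided
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
    return prediction

# Usage with a stand-in model:
approve = lambda f: "approve" if f["score"] > 700 else "deny"
audited_predict(approve, {"score": 712}, model_version="v1.3")
```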

All of this implies the need for constant human oversight of AI systems. Cybersecurity practitioners insist on keeping a human in the chain regardless of the degree of automation involved. The same is true of AI.
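In software terms, keeping a human in the chain often amounts to a confidence gate: the system acts on its own only when the model is sure, and routes everything else to a person. A minimal sketch, with an illustrative threshold:

```python
def decide(prediction, confidence, threshold=0.90):
    """Act automatically only on high-confidence predictions;
    route everything else to a human reviewer."""
    if confidence < threshold:
        return ("human_review", prediction)  # a person makes the final call
    return ("auto", prediction)

print(decide("approve", confidence=0.97))  # ('auto', 'approve')
print(decide("approve", confidence=0.62))  # ('human_review', 'approve')
```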

The key word in AI is “artificial,” the antonym of human. The term “artificial intelligence” is saddled with old connotations and science fiction overtones; think Robby the Robot and Dr. Morbius. In reality, it’s just software. If not handled properly, it can have a mind of its own.
