Feed the world. Cure cancer. Vanquish all enemies. End traffic jams. You hear many claims for what you can do with artificial intelligence.
In some sense you can do those things with AI, but not in the way they’re sometimes presented in breathless marketing headlines. The market research firm Gartner coined the term “hype cycle.” Right now — for federal agencies no less than in the private sector — AI is high on the hype cycle.
Success in AI really begins with the answer a human being or an organization wants. Artificial intelligence depends on humans doing a lot of hard work before feeding the first stream of data into the AI algorithm.
At last week’s ELC, put on by ACT-IAC, I caught up with one of the more skilled explainers of AI: Accenture Federal’s Chief Technology Officer Dom Delmolino. He pointed out something would-be AI users should already know, but that bears repeating: AI systems don’t generalize; rather, they optimize themselves for specific outcomes they’ve seen before. “The key,” he said, “is picking outcomes you want without building in bias.”
Delmolino used the example of bank loans. If the algorithm is trained to favor only the highest credit scores, then very few applicants might get loans. It would have a bias toward scores above, say, 700, rejecting people who might be perfectly capable of repaying a loan, just at a higher interest rate.
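The loan example can be sketched in a few lines of code. This is a toy illustration with made-up score bands and rates, not any lender’s actual policy: a hard cutoff at 700 turns away applicants whom a risk-priced policy would serve at a higher rate.

```python
# Toy illustration of threshold bias in loan approval.
# All thresholds and rates below are invented for the example.

applicants = [
    {"name": "A", "score": 760},
    {"name": "B", "score": 680},
    {"name": "C", "score": 640},
]

def hard_cutoff(applicant, threshold=700):
    """The biased policy: approve only the highest scores."""
    return "approve" if applicant["score"] >= threshold else "reject"

def risk_priced(applicant):
    """A wider policy: approve more applicants, pricing risk into the rate."""
    score = applicant["score"]
    if score >= 700:
        return "approve at 5% APR"
    if score >= 620:
        return "approve at 9% APR"  # capable borrower, higher rate
    return "reject"

for a in applicants:
    print(a["name"], hard_cutoff(a), "|", risk_priced(a))
```

Under the hard cutoff, only applicant A gets a loan; the risk-priced policy serves all three, which is exactly the difference Delmolino describes.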
In hiring situations, AI trainers can inadvertently introduce bias that leads to recommendations skewed toward one race, gender or some other unwanted factor.
The possibility of bias underscores the importance of choosing desired outcomes carefully, and of using data that can reliably produce them.
Delmolino said the bias potential points to some AI buying considerations for federal agencies. It’s why the AI software you use should be designed to show its work. As a teacher would with an equation-solving prodigy, AI users should have visibility into how the AI system came to its conclusion. Third parties (like Accenture) have tools to help assess the fairness and transparency of AI systems.
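One simple form of “showing its work”: for a linear scoring model, the contribution of each input can be listed term by term, so a reviewer sees exactly why the system reached its conclusion. The weights and applicant values below are invented for illustration.

```python
# Minimal sketch of a model that "shows its work."
# Weights and inputs are made up; for a linear score, each feature's
# contribution is simply weight * value, so the decision is auditable.

weights = {"income": 0.4, "debt_ratio": -0.5, "years_employed": 0.2}
applicant = {"income": 1.2, "debt_ratio": 0.8, "years_employed": 1.5}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# Print contributions largest-magnitude first, then the verdict.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:>15}: {c:+.2f}")
print(f"{'total score':>15}: {score:+.2f} ->",
      "approve" if score > 0 else "reject")
```

Real AI systems are rarely this simple, which is why third-party fairness and transparency tools exist, but the principle is the same: every conclusion should decompose into inspectable parts.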
Before everyone had computers, machines were viewed as black boxes into which you fed a question and the machine spit out an answer. In reality, AI systems need constant attention and adjustments in how they learn.
Delmolino said they also require care in how they’re applied. Many federal agencies adjudicate cases for matters such as workplace disputes, vendor invoices, disability payments, veterans benefits and medical claims. In such use cases, AI should assist but not be set up to make decisions. As Delmolino put it, “AI must be subservient or assistive.”
That show-your-work quality is important to the military as it ponders the possibilities of autonomous weapons systems. Autonomous surveillance is one thing. Autonomous shooting is another. At one roundtable I hosted earlier this year, an Army officer commented that the military would never allow autonomous shooting unless authorities could audit the decision trail.
In short, before you buy an artificial intelligence system, think hard about the program goals and make transparency of the algorithm a purchase criterion.
So can AI feed the world? I looked at information Microsoft puts out on its AI website. It poses the question of “how to feed the world without wrecking the planet.” Microsoft’s AI product is used by The Yield, a company that turns farms into internets of things. Sensors monitor all the important farming parameters, such as soil and leaf moisture and weather. The system produces accurate micro-level weather forecasts that help farmers time planting, watering, fertilizing and harvesting with more precision. The idea is to maximize yield while minimizing inputs. The headline might sound like hype, but the implementation is highly practical. Perhaps not the stuff of Norman Borlaug, but helpful nonetheless.
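The sensor-plus-forecast idea reduces to small, specific decisions. Here is a hypothetical sketch, with invented thresholds and no relation to The Yield’s actual logic, of how a system might combine a soil reading with a rain forecast to decide whether to irrigate:

```python
# Hypothetical irrigation-timing rule (thresholds invented for illustration):
# water only when the soil is dry AND meaningful rain is not forecast,
# maximizing yield while minimizing water input.

def should_irrigate(soil_moisture_pct, rain_forecast_mm):
    DRY_THRESHOLD = 30  # assumed % moisture below which the crop needs water
    RAIN_SKIP_MM = 5    # assumed forecast rainfall that makes watering redundant
    return soil_moisture_pct < DRY_THRESHOLD and rain_forecast_mm < RAIN_SKIP_MM

print(should_irrigate(25, 0))   # dry soil, no rain coming -> True
print(should_irrigate(25, 10))  # dry soil, but rain forecast -> False
print(should_irrigate(45, 0))   # soil already moist -> False
```

Note how specific the outcome is: not “feed the world,” but “skip this watering cycle.” That is the level at which AI actually delivers.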
The watchword for artificial intelligence is neither “think big” nor “think small” but “think specific.”