XAI: Explainable artificial intelligence

Best listening experience is on Chrome, Firefox or Safari. Subscribe to Federal Tech Talk’s audio interviews on Apple Podcasts or PodcastOne.

Artificial intelligence is being added to many aspects of federal information technology programs.

Claire Walsh, vice president of engineering, and Henry Jia, data science lead at Excella, joined host John Gilroy on this week’s Federal Tech Talk to unpack some of the challenges in using AI, how AI is implemented, and what suggestions NIST has for taking advantage of AI.

(Pictured: Claire Walsh and Henry Jia, Excella)

Walsh began the discussion with a confusing acronym: XAI. While “AI” stands for artificial intelligence and the “X” in UX stands for user experience, the “X” in this abbreviation stands for “explainable,” in other words, “explainable artificial intelligence.”

Most of today’s artificial intelligence comes out of a black box, and users can’t examine how a conclusion was derived. Much like a high school math teacher who insists that students show their work, XAI strives to show users how an answer was reached.
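To make that idea concrete, here is a minimal sketch of one simple way to “show the work”: train an inherently interpretable model and print the rules it learned, so a reviewer can trace how each prediction is reached. The example is illustrative only; it assumes scikit-learn and its built-in iris dataset, and it is not a method Walsh or Jia described in the interview.

```python
# Minimal, illustrative sketch of an explainable model (assumes scikit-learn).
# Instead of a black box, a shallow decision tree exposes the exact rules
# it uses, so a human can follow how any prediction was reached.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()  # small, well-known example dataset
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# export_text renders the learned decision rules in plain language,
# which is one simple, human-readable form of "showing your work."
print(export_text(model, feature_names=list(data.feature_names)))
```

Running the script prints a nested set of if/then rules over the feature values; black-box models, by contrast, typically need separate post-hoc explanation techniques to offer anything comparable.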

Jia has an extensive background in higher mathematics, and he looks at AI from the practical perspective of transparency. Once a model produces a finding, he wants to be able to examine the validity of the data behind it and to have a system that is open to human analysis.

The interview covered a wide range of topics, from the ethics of AI to bias in data sets.

Federal Tech Talk

TUESDAYS at 1:00 P.M.

Host John Gilroy of The Oakmont Group speaks the language of federal CISOs, CIOs and CTOs, and gets into the specifics for government IT systems integrators. Follow John on Twitter. Subscribe on Apple Podcasts or Podcast One.
