Planning for artificial intelligence should start with the use cases. How might it benefit the organization? Who will it affect? Where can we get some quick wins?
After that, think about the data strategy to train algorithms and support ongoing AI deployment. Without a solid data strategy, an AI project could fail because of erroneous output.
With respect to use cases, “the overall picture of artificial intelligence is there’s a lot of fear about how it’s going to replace jobs. I think that fear is largely unfounded,” said Frank La Vigne, global leader for data services at Red Hat, on Federal Monthly Insights – AI/ML Systems. “Just the arc of history shows that automation tends to bring about more jobs than it takes away. Ultimately, AI is really about augmenting human intelligence.”
With respect to data, “it’s the deluge of data which is the real problem,” said Michael Epley, Red Hat’s chief architect and security strategist. The deluge means “we have too much work for humans to do. That’s really where the value of AI comes in, where we can use these tools to automate the process of sifting through that data.” AI’s pattern recognition can pull out important data requiring human attention, while vastly expanding the amount of data contributing to human understanding.
“So, it opens up options rather than closes them down,” Epley added.
You can think of data as the supply chain for artificial intelligence, Epley and La Vigne said.
As a raw material, “data in its unrefined state, like oil, really can’t do much,” La Vigne said. “It has to go through a process of refinement. Our refinement is called data engineering.”
He cited large language models used in generative AI. “They consume mass quantities of information. The old phrase of garbage in, garbage out still applies,” La Vigne said. Corrupted, irrelevant or biased data will give subpar results, he said, and at worst “you can get a really biased and ineffective model.”
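To make that refinement step concrete, here is a minimal, hypothetical Python sketch of the kind of filtering a data engineering stage might apply before training. The column name and length thresholds are assumptions for illustration, not details from the interview.

```python
# Minimal illustration of "data refinement" before training: drop records that
# are empty, duplicated, or outside a plausible length range.
# The column name "text" and the thresholds below are hypothetical.
import pandas as pd

def refine(records: pd.DataFrame) -> pd.DataFrame:
    cleaned = records.dropna(subset=["text"])           # remove corrupted/empty rows
    cleaned = cleaned.drop_duplicates(subset=["text"])  # remove repeated rows
    mask = cleaned["text"].str.len().between(20, 10_000)
    return cleaned[mask]                                # keep plausible lengths only

if __name__ == "__main__":
    raw = pd.DataFrame({"text": ["short", None,
                                 "a usable training example " * 5,
                                 "a usable training example " * 5]})
    print(refine(raw))  # only one usable row survives
```

Real pipelines would layer on bias and relevance checks, but even a simple gate like this keeps obvious garbage out of the training set.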
As an element in a supply chain, data presents potential cybersecurity risks too, Epley said.
“As we consume data, process data and do all the data management necessary to enable our AI workflows, these supply chains are getting more complex and less understood,” Epley said. “That introduces opportunities for malicious actors to taint the data, maybe intentionally bias our results.” He noted the potential for hackers to try to alter training data in order to push results a certain way.
To keep data secure and to ensure the data used to train algorithms is appropriate, La Vigne and Epley said, make sure you have a complete picture of the provenance of your data as well as of the tools you apply to refine it.
“One big part of it is understanding where your data is coming from and who’s contributed that data,” Epley said. “The same can be said about all other parts of your supply chain, how you’re managing that data through your data management tools. Filtering, enhancing or cleaning that data is going to all impact that data quality.”
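One way to capture that picture is to record, for each dataset artifact, where it came from, who contributed it and a hash of its contents, so later pipeline stages can detect alteration. The sketch below is a hypothetical Python illustration, not a Red Hat tool; the field names are assumptions.

```python
# Hypothetical provenance record for a dataset file: source, contributor, and a
# content hash so downstream consumers can verify the data has not been tampered with.
import datetime
import hashlib
import json
import pathlib

def provenance_record(path: str, source: str, contributor: str) -> dict:
    digest = hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()
    return {
        "file": path,
        "source": source,            # e.g. the upstream system or vendor (assumed field)
        "contributor": contributor,  # who supplied or transformed the data (assumed field)
        "sha256": digest,            # lets later stages detect alteration
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    pathlib.Path("sample.csv").write_text("id,text\n1,example\n")
    print(json.dumps(provenance_record("sample.csv", "agency-intake", "data-team"), indent=2))
```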
La Vigne said the need for the right data with the right provenance underscores the importance of people in AI deployment.
“Good provenance and good governance are 80% people and 20% technology,” he said. Visibility into provenance also favors open source tools and data, he added.
“Even though there are developers scattered throughout the world,” La Vigne said, “everything is out on the table, and each part can be inspected. There are many eyes on it. That’s far better than a closed-source system, where you really have no idea what hands have touched the code.” He said government agencies should have visibility all the way to source code of AI algorithms, particularly large language models.
“All of our engineering at Red Hat is based primarily on open source,” La Vigne added.
With use cases chosen and data curated and engineered, agencies can borrow techniques from DevSecOps and apply them to MLOps, or machine learning operations. Techniques include integration of control points in the pipeline, and version and configuration management.
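As a rough illustration of such a control point, the hypothetical Python sketch below gates a model candidate's promotion on recorded model and data versions plus a few pass/fail checks. The thresholds and field names are assumptions, not techniques prescribed in the interview.

```python
# Hypothetical MLOps "control point": block promotion of a model unless its
# checks pass and its model/data versions are recorded. Names and thresholds
# are illustrative only.
from dataclasses import dataclass

@dataclass
class Candidate:
    model_version: str
    dataset_version: str
    accuracy: float
    pii_scrubbed: bool

def control_point(c: Candidate, min_accuracy: float = 0.90) -> bool:
    checks = {
        "accuracy_ok": c.accuracy >= min_accuracy,
        "training_data_pii_free": c.pii_scrubbed,
        "versions_recorded": bool(c.model_version and c.dataset_version),
    }
    for name, passed in checks.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return all(checks.values())

if __name__ == "__main__":
    candidate = Candidate("model-1.3.0", "dataset-2024.05", accuracy=0.93, pii_scrubbed=True)
    print("promote" if control_point(candidate) else "hold for review")
```

The same idea extends to any gate borrowed from DevSecOps: record the versions involved, run the checks automatically, and refuse to move forward when they fail.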
The question to answer, Epley said, is: “How do we engineer these processes to be reliable, robust and repeatable?” He said that in the production of AI and ML applications, agencies might find they need different data sets at different stages of development. For example, even though personally identifiable information (PII) might be required in the production deployment of an algorithm, training would, for regulatory and ethical reasons, use data with the PII removed.
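A minimal sketch of that practice, assuming a simple tabular dataset and hypothetical column names, might look like this in Python:

```python
# Hypothetical sketch: produce a training copy of a dataset with PII columns
# removed, while the production pipeline keeps the full records.
# Column names below are assumptions for illustration.
import pandas as pd

PII_COLUMNS = ["name", "ssn", "email"]  # assumed PII fields

def training_view(records: pd.DataFrame) -> pd.DataFrame:
    # Drop PII columns that exist in this dataset; ignore ones that don't.
    return records.drop(columns=[c for c in PII_COLUMNS if c in records.columns])

if __name__ == "__main__":
    production = pd.DataFrame({
        "name": ["A. Analyst"], "email": ["a@example.gov"],
        "case_text": ["benefits claim, routine"], "label": ["approve"],
    })
    print(training_view(production))  # only case_text and label remain
```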
And, as with DevSecOps, it’s important to bring users in at each development increment to ensure the product is what they actually need.
La Vigne said, “I think it comes down to planning. There’s no substitute for good engineering. We can slap all the AI we want together. But unless you have a good plan, and a good execution of said plan, you can’t use AI to get away from good solid engineering foundations.”
Frank La Vigne, Global Leader for Data Services, Red Hat
Frank La Vigne is a seasoned Data Scientist and the Global GTM (Go-to-Market) Lead for Data Services at Red Hat. He possesses an unwavering passion for harnessing the power of data to address pivotal challenges faced by individuals and organizations. A trusted voice in the tech community, Frank co-hosts the renowned "Data Driven" podcast, a platform dedicated to exploring the dynamic domains of Data Science and Artificial Intelligence. Beyond his podcasting endeavors, he shares his insights and expertise through FranksWorld.com, a blog that serves as a testament to his dedication to the tech community. Always ahead of the curve, Frank engages with audiences through regular livestreams on LinkedIn, covering cutting-edge technological topics from quantum computing to the burgeoning metaverse.
Michael Epley, Chief Architect and Security Strategist, Red Hat
Michael Epley has been helping the U.S. defense and national security communities use and adopt open source software over the last two decades, with practical experience as a software developer and enterprise architect. During his tenure at Red Hat, Michael has passionately driven adoption of key technologies: cloud and Kubernetes, tactical edge/forward-deployed systems, data analytics tools and platforms, and disconnected operations, always in the context of the security and compliance concerns unique to this sector. Michael has BS degrees in Mathematics and Mechanical Engineering from Virginia Tech and a JD from The University of Texas School of Law.
Tom Temin, Host, Federal Drive, Federal News Network
Tom Temin has been the host of the Federal Drive since 2006 and has been reporting on technology markets for more than 30 years. Prior to joining Federal News Network, Tom was a long-serving editor-in-chief of Government Computer News and Washington Technology magazines. Tom also contributes a regular column on government information technology.