To make effective AI policy you must trust those who’ve been there
Data scientists are essential as policymakers shape legislation around AI
On March 28, the White House took a significant step toward establishing a broader national policy on artificial intelligence when it issued a memorandum on how the federal government will manage the technology. The memorandum establishes new federal agency requirements and guidance for AI governance, innovation and risk management, in keeping with the AI in Government Act of 2020, the Advancing American AI Act, and the President’s executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
Tucked into the 34-page memorandum is something that could easily go unnoticed, yet it may be one of its most important and far-reaching details. On Page 5, the document describes the roles of chief artificial intelligence officers (CAIOs) and, more specifically, calls for a chief data officer (CDO) to be involved in the process.
While the memorandum doesn’t spell out responsibilities in detail, it points to a mandate to include data scientists in the development, integration and oversight of AI. More to the point, it’s a reminder that we need the most qualified people at the table to set policy on the role AI will play in society.
You cannot just assume the right experts have a seat at the table. Even though the field of AI has been around for nearly 70 years, it is only since the generative AI boom began in November 2022, with the launch of ChatGPT, that many leaders in society have come to see the sea change AI represents. Naturally, some are jockeying for control over something many of them don’t understand, and they risk crowding out the people who do: the data scientists who have conceived and created AI and are now incorporating it into our daily lives and workflows. Why would that happen with something this revolutionary and impactful?
AI development faces a human nature problem
Credit human nature. People are at once intimidated by and even scared of the kind of massive societal change AI represents. It is a reaction we as a society and as a country have to get beyond quickly. Society’s welfare and America’s national security and competitiveness are at stake.
To be sure, AI’s benefits are real, but it also poses real risks. Shaping and navigating its future will depend on a combination of regulation, broader education, purposeful deployment, and our ability to leverage and advance the data science underlying AI systems.
Without the latter, systems run a greater risk of being ineffective, unnecessarily disruptive to the workforce, biased, unreliable, or simply underperforming in areas AI could genuinely improve. In high-stakes settings like health care, unproven or untested AI can even cause outright patient harm. Setbacks in function lead to setbacks in perception, and setbacks in perception do little to marshal the resources, talent and institutions needed to realize AI’s potential while safeguarding the public.
The states take the lead
As the federal government has wrestled with how to approach AI regulation, more nimble state governments and regulators have taken the early lead. In the 2023 legislative session, some 25 states, along with Puerto Rico and the District of Columbia, introduced AI-centric legislation, and 18 states and Puerto Rico have “adopted resolutions or enacted legislation,” according to the National Conference of State Legislatures.
At the federal level, there have been dozens of hearings on AI on Capitol Hill, and several AI-centric bills have been introduced in Congress, many of them centered on how the government will use AI. Increasingly, specific AI applications are being addressed by individual federal departments and committees, including the National AI Advisory Committee (NAIAC).
Where are the data scientists?
You don’t have to look far to find the critical mass of data scientists who need to be involved in society’s efforts to get AI right the first time. We are (some of) those data scientists, and we have been part of an organization that understood the intricacies of “machine learning” long before policymakers knew what the term meant. We, the leaders of the sector charged with bringing the promise of AI to the world, have long worked, and continue to work, to create a framework that realizes the potential of AI and mitigates its risks. That vision centers on three core areas:
Ensuring that the right data is behind the algorithms that continuously drive AI.
Measuring the reliability of AI, from the broadest uses down to the most routine applications, to ensure quality and safety without compromising effectiveness and efficiency.
Aligning AI with people, systems and society, so that AI focuses on the goals and tasks at hand, learns from what is important, and filters out what is not.
All of this must be addressed through an ethical prism, one we already have in place.
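To make the second of those areas concrete, here is a minimal sketch, in Python with scikit-learn, of what measuring reliability can look like in practice: a model is checked on held-out data for both accuracy and calibration before it is trusted to inform decisions. The synthetic dataset, the model choice and the thresholds are illustrative assumptions, not a prescribed standard.

```python
# Minimal reliability check: evaluate a trained classifier on held-out data
# before it is allowed to inform decisions. Dataset, model and thresholds
# are illustrative placeholders, not a recommended standard.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))
# The Brier score measures how well predicted probabilities match outcomes
# (lower is better); an overconfident model can fail this despite high accuracy.
brier = brier_score_loss(y_test, model.predict_proba(X_test)[:, 1])

MIN_ACCURACY, MAX_BRIER = 0.85, 0.15  # illustrative thresholds
print(f"accuracy={accuracy:.3f}  brier={brier:.3f}")
if accuracy < MIN_ACCURACY or brier > MAX_BRIER:
    print("Reliability bar not met; hold the model back from deployment.")
```

The point of the sketch is not the particular metrics but the discipline: reliability is measured against explicit criteria before deployment, which is exactly the kind of work data scientists already do.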
There is some irony at this early stage in AI’s evolution: its future has never been more dependent on people, specifically people who fully understand the issues at play, along with the need for ethical decision-making guardrails and how to apply them.
Bad data makes bad decisions
Ultimately, AI systems are a function of the data that feed them and the people behind that data. The ideal, obviously, is accuracy and effectiveness enabled by good data. But sometimes, to understand how you want a system to work, you have to confront instances of what you don’t want: in this case, AI decisions driven by poor data.
Consider, for example, AI systems that misidentify members of minority populations, a problem that has plagued security screening technologies for years. This is usually not a technology problem but a data problem: the systems are operating on bad or incomplete data, and the impact on society is significant because it leads to more people being unnecessarily detained.
Chances are, many of these problems can be traced back to the human beings who were involved, or, perhaps more importantly, not involved, in AI development and deployment. Poor data that leads to biased or ineffective decision-making is a significant problem across industries, but one that can be solved by combining the expertise of the data science community with that of diverse stakeholders, especially frontline workers and subject matter experts.
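As a minimal, hypothetical sketch of the kind of audit this implies, the Python snippet below compares false-positive rates across demographic groups to surface exactly this sort of data-driven bias before a system is deployed. The column names and the tiny inline dataset are assumptions made for illustration, not real screening data.

```python
# Hypothetical bias audit: compare false-positive rates across groups to
# surface bias introduced by poor or incomplete data. The group labels and
# records below are made up purely for illustration.
import pandas as pd

df = pd.DataFrame({
    "demographic_group": ["A", "A", "A", "B", "B", "B"],
    "true_label":        [0,   0,   1,   0,   0,   1],   # ground truth
    "predicted_label":   [0,   0,   1,   1,   0,   1],   # system's decision
})

def false_positive_rate(group: pd.DataFrame) -> float:
    """Share of truly negative cases the system incorrectly flagged."""
    negatives = group[group["true_label"] == 0]
    return float((negatives["predicted_label"] == 1).mean())

fpr_by_group = {
    name: false_positive_rate(g) for name, g in df.groupby("demographic_group")
}
print(fpr_by_group)  # e.g. {'A': 0.0, 'B': 0.5}

# A large gap between groups signals a data problem (unrepresentative or
# mislabeled training data) to fix before deployment, not a reason to ship.
if max(fpr_by_group.values()) - min(fpr_by_group.values()) > 0.05:  # illustrative tolerance
    print("Disparate false-positive rates detected; review the training data.")
```

A check like this is routine data science work, which is precisely why data scientists, working alongside the frontline workers and subject matter experts who understand the context, belong in the room.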
Data scientists must have a seat at the table … now
Data scientists need to be at the decision-making table early on, because they have the holistic training, the perspective and the domain-specific expertise to design algorithms that turn data into actual decision-making. Whether the AI system is supporting healthcare, military operations, logistics or security screening, connecting good data with AI leads to better decisions and therefore fewer disruptions.
When it comes to measuring reliability, that’s what data scientists do. No one is better positioned to ensure that AI systems do what they are designed to do and avoid unintended consequences. Data scientists know. They’ve been there.
Data scientists sit at the intersection of making AI-driven decision-making better and more effective and identifying the impacts, biases and other problems of AI systems. As states, Congress, the White House and industry consider the next steps in AI policy, they must ensure data science is at the table.
Tinglong Dai, PhD, is the Bernard T. Ferrari Professor at the Johns Hopkins Carey Business School, co-chair of the Johns Hopkins Workgroup on AI and Healthcare, which is part of the Hopkins Business of Health Initiative. He is on the executive committee of the Institute for Data-Intensive Engineering and Science, and he is Vice President of Marketing, Communication, and Outreach at INFORMS.