For all of the technology advances across every conceivable domain, we still live by a lot of witchcraft. Thus people improvise their lives in the face of the coronavirus: Keeping away from people, canceling events, generally circumscribing our lives, stuffing SUVs with packages of toilet paper.
Too bad we can’t just hit the return key on Watson or some other artificial intelligence engine and have it spit out an answer to the current, anxiety-producing situation.
But, to get back to Normalville, AI is actually far more ingrained in federal government operations than you might have realized. An eye-opener for me was a 122-page study by a group of eggheads from Stanford University and New York University, hired by the Administrative Conference of the United States, or ACUS. That relationship is crucial to the report's credibility. A lot of vendor-sponsored studies of AI have come out. They're okay as far as they go, but they tend to be light on scholarship and heavy on promoting, by implication, the vendors' consulting chops.
But ACUS has, in my view, one of the least publicly visible but most important missions. Without strong, thoughtful oversight of agency administrative functions, the nation runs the danger of runaway administrative government. We've all heard the charges: swamp, government by unaccountable (or worse) bureaucrats, deep state. Sometimes the charges aren't wrong.
And even with the best intentions, we see the results of administrative agencies and bureaus that can’t keep up: Backlogs, mistaken decisions, inability to account for decisions or expenditures. ACUS helps the government continuously improve. It has its antecedents in the 1950s, but its enabling legislation dates to 1964. The statute includes this, among the purposes of ACUS: “To improve the use of science in the regulatory process.”
The bill’s authors back in the 1960s might not have been thinking about artificial intelligence, but they were prescient enough that AI fits right in ACUS’ wheelhouse. I discussed the study, called “Government by Algorithm,” with principal author David Freeman Engstrom, in this interview.
The ACUS-sponsored study in fact found 152 cases of AI in use across 64 rulemaking agencies. Leading in terms of numbers are the Office of Justice Programs, Securities and Exchange Commission, NASA, the Food and Drug Administration, U.S. Geological Survey, U.S. Postal Service, Social Security Administration, U.S. Patent and Trademark Office, Bureau of Labor Statistics, and Customs and Border Protection. That's as of last August.
Report authors don’t claim the government is highly sophisticated in its use cases, stating, “While the deep learning revolution has rapidly transformed the private sector, it appears to have only scratched the surface in public sector administration.”
There's still another place AI might help: modernizing the federal rulemaking process. A project headed by the General Services Administration is working on just that.
As a series of three interviews airing Tuesday, Wednesday and Thursday will detail, a cornerstone of the federal government's approach to administration is notice-and-comment — the process of proposing regulatory rules and inviting anyone who wants to comment on them. It seems routine now. But as ACUS research director Reeve Bull pointed out, at one time agencies used adjudicative processes, acting like judges and juries.
“Rulemaking was envisioned as being more of a legislative process,” Bull said. The comment part of notice-and-comment proceeded from the assumption that rulemakers don’t have all of the information they need to make fair and reasonable rules.
A big challenge for rulemaking agencies is dealing with the "information" they get via two types of comments. Mass comments, often identical, can run into the millions — the cyber-age equivalent of burying an office in postcards. The other type is fake comments, possibly bot-generated to gum up the works, or coming from foreign sources that have no legal standing. It's not a brand-new issue, but agencies may have fresh tools for handling mass and fake comments equitably.
The Federal Communications Commission is already using such tools, after receiving 3.7 million comments on a 2017 proposed rule change to net neutrality. It hired a contractor to apply natural language processing and make sense of what was going on.
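The report doesn't detail the FCC contractor's methods, but the basic idea behind de-duplicating mass comments can be sketched in a few lines of Python. The names and approach here are purely my illustration, not the agency's actual pipeline:

```python
# Toy sketch of mass-comment de-duplication: normalize each comment's
# text so trivially edited copies of the same form letter collapse
# into one bucket. Illustrative only, not any agency's real method.
import re
from collections import Counter

def normalize(comment: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = re.sub(r"[^a-z0-9\s]", "", comment.lower())
    return re.sub(r"\s+", " ", text).strip()

def bucket_comments(comments):
    """Count how many submissions share each normalized form."""
    return Counter(normalize(c) for c in comments)

comments = [
    "Keep the internet open!",
    "keep the Internet open!!",
    "Keep  the internet open.",
    "I support the proposed rule for a different reason entirely.",
]
buckets = bucket_comments(comments)
# The three form-letter variants collapse into a single bucket of 3,
# leaving one substantive comment for a human reviewer.
```

Real systems go further — clustering near-duplicates that were paraphrased rather than copied — but even this crude bucketing shows why millions of submissions need not mean millions of distinct arguments.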
At a recent public meeting, Michael Fitzpatrick, a Google regulatory affairs official who served as an Office of Information and Regulatory Affairs adviser in the Clinton and Obama administrations, outlined several machine learning methodologies for dealing with mass and fake comments. He also noted that fake and fraudulent comments can undermine the integrity of rulemaking, although Bull and others said even those comments could contain useful information.
Updating e-rulemaking is an important project. Whether you think rulemaking is swamp-style overreach or exactly what government in a complex society should be doing, most people agree the U.S. approach to rulemaking is the gold standard. That assessment comes from Patrick Hedren, deputy general counsel at the National Association of Manufacturers.
Whether the government proposes too many rules or too few is a debate for another forum. But so long as it proposes them, it's obligated to use whatever tools are necessary to keep rulemaking credible and legitimate.