Phishing emails, social media scams, and phony phone calls are threatening the integrity of a cherished and crucial federal process: rule-making. No surprise, I guess, given how the era of IP-on-everything has turbocharged the fraud industry.
This is a big concern for rule-making agencies, no less than for the General Services Administration as it embarks on a project to modernize electronic rule-making. However that happens, GSA acknowledges it will have to deal with two phenomena that characterize rule-making when everybody has a keyboard.
One is the rise of mass comments on proposed rules. The second is the rise of faked mass comments, many generated by bots. The former are known as mass comment campaigns, or MCCs, and they're perfectly legitimate provided they come from Americans. The latter are already doing harm to the process.
This was the topic at the first of two public meetings staged by the GSA’s Office of Governmentwide Policy. The first, held Thursday, featured a parade of experts who had a docket full of insights into rule-making — a topic to which I hadn’t given a lot of thought.
Yet rule-making lies at the heart of government. Doing it right is crucial to the government’s very legitimacy.
MCCs are the electronic version of burying the mail room. George Washington University political science professor Steven Balla had statistics showing that both activist and industry groups regularly launch mass electronic comment campaigns, especially when it comes to the EPA. In the period he studied, Balla counted 542 MCCs from advocacy groups and 198 from regulated entities.
Balla described what he called a transparent and accessible way the EPA deals with MCCs. It collects and posts the mass comments and responds to them. It makes a representative sample available on the rule's web site. That is, it neither dumps them nor ignores them. But, Balla said, rule-proposers do spend more time with individually crafted comments that offer data and facts, not merely an opinion on a proposed rule.
Agency rule-makers err if they treat mass comments as votes for or against a rule. That warning came from Reeve Bull, the research director at the Administrative Conference of the United States. The main legal purpose of the comment process is to collect data that informs the potential efficacy of a rule, he said.
“It’s not a vote or a plebiscite,” Bull said.
If agencies can reasonably handle mass comment campaigns, there's less certainty about fakes. Fake comments can be defined as those submitted under faked or borrowed identities, or mass-generated, often with language auto-generators, by the same sort of people who run mass spam operations.
Dealing with them will require technology. Michael Fitzpatrick, a two-time former adviser to the Office of Information and Regulatory Affairs (Clinton and Obama), now with Google, said agencies could check comment sources against lists of URLs known to spew out hoaxes. Tools exist that can ferret out and flag "toxic" comments. Artificial intelligence-driven challenge programs would create friction for bot-driven incoming comments, and an agency could set scale-of-certainty filters to make sure legitimate comments get through.
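In rough outline, the screening approach Fitzpatrick describes could work something like the sketch below. It is purely illustrative: the blocklist domain, the classifier, and all function names are hypothetical stand-ins, not any real agency or Google tool. A comment is rejected if its source is on a hoax blocklist, flagged for human review if a bot-likelihood score crosses a tunable certainty threshold, and accepted otherwise.

```python
# Hypothetical sketch of comment screening: blocklist check, bot-likelihood
# score, and a certainty threshold so legitimate comments still get through.
from dataclasses import dataclass
from urllib.parse import urlparse

# Stand-in for a maintained blocklist of known hoax-generating domains.
HOAX_DOMAINS = {"example-hoax-farm.net"}

@dataclass
class Comment:
    text: str
    source_url: str

def bot_score(comment: Comment) -> float:
    """Placeholder for an ML classifier returning an estimated P(bot) in [0, 1]."""
    return 0.9 if "example-hoax-farm" in comment.source_url else 0.1

def screen(comment: Comment, certainty_threshold: float = 0.8) -> str:
    domain = urlparse(comment.source_url).netloc
    if domain in HOAX_DOMAINS:
        return "reject"            # known hoax source
    if bot_score(comment) >= certainty_threshold:
        return "flag_for_review"   # likely a bot, but a human gets the final say
    return "accept"
```

The key design point is the threshold: set it too aggressively and real citizens' comments get silently dropped, which is exactly the harm the process is supposed to avoid, so borderline cases go to review rather than rejection.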
Regardless of the method used to control fakes, Fitzpatrick said, if agencies do nothing the bot problem “will get exponentially worse.”
GSA will face multiple challenges in overhauling e-rulemaking. Improving the basic discoverability of a document is one of them. Finding the right item and the right version can vex even rule-making power players, said Patrick Hedren of the National Association of Manufacturers, which represents regulated industries. Dealing efficiently and fairly with mass and faked comments will be others.
The whole process of proposing a rule, taking in comments, evaluating them, and finalizing the rule is wrapped up in administrative procedure law, fair-mindedness, congressional mandate, and government transparency. Ultimately, agencies' approaches to rule-making determine whether Americans perceive their government as fair and open, or unresponsive and overbearing.