With a national election on the horizon and the ever-present need for vigilance against cyberthreats, maintaining the integrity and security of our governmental institutions is a top priority for the public sector. This has proved challenging in recent years, particularly amid the profusion of both mass and fake comment campaigns within the federal regulations feedback process.
On Jan. 30, the General Services Administration (GSA) held the first of two public meetings on the subject of mass and fake comments. During the meeting, two separate panels of experts were given the opportunity to share their thoughts on the phenomena.
When agencies seek public input on their regulations, they are often inundated with large numbers of near-identical, form-letter-style comments, or with comments submitted, often by automated bots, under fictitious or stolen identities. This can affect the regulatory process in a number of ways, for the responsible agencies and citizens alike.
The danger of fake comments was discussed by several of the event’s panelists. One of the most glaring examples of these dangers is that in many of these fake comments, the identities of real and often deceased individuals are used to give the appearance of legitimacy.
“You now have on the public record a statement made by someone without their consent,” said Sanith Wijesinghe, an innovation area lead with MITRE. Wijesinghe elaborated on another concern, saying simply that when bots and automation are responsible for misleading comments, it is a non-human, “… non-carbon-based life form attempting to sway public opinion.”
Occupying a gray area that neither abuses nor clearly benefits the regulatory process, mass comments proved a much more nuanced topic for the panel. Oliver Sherouse, a regulatory economist with the Small Business Administration, said most small businesses have few direct ways of making their voices heard by regulatory bodies other than commenting platforms. “Small businesses are thus harmed when their comments are lost in the flood,” Sherouse said.
On the issue of mass comments, Steven J. Balla, Associate Professor of Political Science, Public Policy and Public Administration, and International Affairs at George Washington University, said that while mass comments are fair use in the rulemaking process, “[I] don’t think they have democratized the process either.” It was later noted that many participants in mass comment campaigns choose to personalize their form letters before commenting. When this is done, the comment is saved and reviewed by the agency in the same manner as any other unique comment.
The importance of addressing these issues was echoed by Michael Fitzpatrick, head of global regulatory affairs at Google. Pointing to emerging developments in the commenting crisis, such as deepfake commenting (comments generated entirely by automated bots that are indistinguishable from genuine, original comments), Fitzpatrick offered his prognosis that the technology behind automated fraudulent commenting will only become more sophisticated and prolific over the next five to 10 years. “It’s only going to get worse,” he said.
When asked if there was a timeline for the government’s response to the mass and fake comment problem, Tobias Schroeder, director of the GSA’s eRulemaking program, explained that the agency was currently in “a phase of discovery” and encouraged participation in its effort to crowdsource potential solutions from the public.