Earlier this month, a pump burned out mysteriously at a water plant in Springfield, Illinois. Log data traced the problem back several months to a command from an IP address in Russia that forced the pump to turn on and off repeatedly until it broke. When this news was leaked to the media by a cyber expert convinced that we were under attack by Russian hackers, a media frenzy ensued that made it all the way to Congress. On MSNBC, Rep. Jim Langevin, the founder of the Congressional Cybersecurity Caucus, lamented our state of preparedness and called the attack, allegedly the first against the United States with a kinetic effect, yet another “wakeup call.” An anonymous hacker on Pastebin referenced the weakness of American supervisory control and data acquisition (SCADA) systems, which some in the press took as a confession. The worst fears of cybersecurity experts had been confirmed: foreign hackers could damage US critical infrastructure through the internet.
Except that wasn’t what happened in Illinois, which highlights the difficulty of forensics and attribution in cyberspace. The DHS and FBI, which were investigating the alleged attacks, denied from the beginning that there was any proof of intrusion in the SCADA logs and recently concluded the investigation, releasing the results. The failure was due to a faulty command entered several months earlier by a contractor who accessed the system remotely while traveling through Russia on personal business. Over time, his mistake caused greater and greater errors until, several months later, the pump failed. While there is no proof that the water plant wasn’t hacked, as the source who initially leaked the suspicious information noted in defense of his claims, it seems very unlikely given the corroborating evidence of the mistake.
The overreaction to and debunking of this attack hurt both the cybersecurity field and the cybersecurity of the nation, as in this case those for and against the claims were both right and wrong. As this incident illustrates, when attacks come in the form of data and commands rather than bombs and bullets, it’s difficult to tell an enemy action from friendly fire, another side of the attribution problem that is just as important as tracking criminals through cyberspace. And, given the prevalence of cyber yellow journalism and cries of “cyberwar” and “cyber Pearl Harbors,” overreactions like this hurt the credibility of cybersecurity researchers and solution providers. This is problematic because, at the other end of the spectrum, the constant need for “wakeup calls” shows that we are still lagging behind the threat. In this case, though no attack took place, the SCADA failure demonstrates that one could easily be carried out. All it took was a command from a contractor’s computer, on his or her own time, to damage and shut down critical infrastructure in the United States. That means a malicious actor abroad could gain unauthorized access to SCADA systems and produce the same effects, perhaps by stealing a contractor’s credentials through a phishing attack. And it would be difficult to tell a mistake from a malicious insider or a hijacked account.
Though this alleged attack turned out to be a false alarm, it still serves as a “wakeup call,” one that we’ve gotten many times before. Thus, rather than dismissing the kinetic effects of cyber attacks because this one never happened, we need to try our best to make sure that one never will.