You want to hear a secret? City governments are still in the early stages of understanding how best to use machine learning (ML) algorithms to make city services more efficient. This is nothing to be ashamed of; I see this fact as an incredibly promising development in local government. If cities can embrace the journey toward becoming more mature and robust in using ML to solve complex problems, then sharing challenges and best practices with one another can make them better, stronger and more efficient.
Do you remember the term “laboratories of democracy,” popularized by U.S. Supreme Court Justice Louis Brandeis? He was describing how a “state may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.”
The federal government was structured so states were autonomous to the extent that state and local governments acted as social “laboratories.” Policies and laws were created and tested in a way that was theoretically similar to the scientific method.
What I have observed is that many cities across the United States are adopting and deploying ML techniques in a similar fashion. The city of Chicago has deployed ML techniques to identify homes that are likely to contain lead-based paint hazards. New York City has used ML and analytics to identify landlords who are likely harassing tenants in order to get them to vacate their apartments. New Orleans has used ML to identify who should get free smoke detectors. These and many other efforts are taking place in local governments across the country, with varying levels of success.
Some city challenges are perfect candidates for ML techniques; others are not. And these can certainly differ from city to city. As more and more cities figure out what does and doesn’t work for them, they should share all of this information, and it should go into a nationwide clearinghouse. This is happening in some form in many places, but for the most part we are only sharing and discussing the “good.” I’m talking about sharing the good, the bad and the ugly.
There is an excellent book by Henry Petroski titled “To Engineer Is Human,” in which he walks the reader through a number of case studies. The core notion of the book is that behind every great engineering success is a trail of often-ignored engineering failures. As he discusses, we have embraced and learned from those failures just as we have learned from our successes. These failures help us become smarter and more efficient when building some of our greatest engineering marvels, such as the Brooklyn Bridge.
When it comes to deploying ML technology to build a smarter and more responsive city, we need to recognize that cities and their circumstances differ: one size does not fit all when it comes to ML algorithms. We may not always be able to tell other cities where to go and how to get there, but we can tell them what perils may lie ahead and how to avoid them. We can certainly share best practices and missteps.
We are at the beginning of a very promising journey. The citizens of our cities deserve for us to put our best foot forward in adopting such a powerful technology to help make cities better. We should keep our “eyes on the prize,” but make sure we are keeping track of and sharing with each other where we have been and what we have seen.
We should embrace this culture of adoption, innovation, agility and sometimes “failing fast” — but only if we learn from each attempt and build on it, so that we get smarter and better at deploying ML solutions in our cities.
Amen Ra Mashariki is part of the GovLoop Featured Contributor program, where we feature articles by government voices from all across the country (and world!). To see more Featured Contributor posts, click here.