It has never been easier for citizens to get their hands on government data. The website data.gov features over 193,000 datasets (and counting) from every level of government: federal, state, and local. State, county, and city governments in particular have been pioneers in the open data movement. Cheerleaders for open data include President Barack Obama, Tim Berners-Lee (the inventor of the Web), and seemingly everyone in the technology industry.
There is much more government can do to make its data open, now and in the future—I’ve written about the historical importance of the government’s open data movement before. But the open data genie is not going back into its bottle. The data is out there; it is machine-readable, ready to be used and built upon in countless ways.
So … now what?
Governments often mark the success of open data by how many datasets they’ve published on open portals. But publishing data, I’d argue, is only Open Data 1.0. Open Data 2.0 involves putting that data to good use. It means open data that informs policymaking and budgetary decisions, that raises awareness of issues, and, ultimately, that empowers communities.
Transparency is not the same as seeing. “Transparency” is an apt metaphor for describing the goals of the open data movement—and, as it turns out, its limitations. Traditionally, “government transparency” has involved regulatory requirements for publishing documents publicly. Open data is a newer and more complex concept, since it involves raw data that can be repurposed and made interactive, rather than just read-only documents. Of course, transparency is still the motive. Open data has created a vast window into the inner workings of government that citizens can look through—provided they have the skills to work with datasets.
But what do ordinary citizens who lack such technical skills see through this window? Vast fields of data: billions of comma-separated values that are meaningless without the tools of analysis, not to mention the time and resources needed to gather, open, and organize them.
The data is far more transparent now than it was even a decade ago. Yet the open data movement has done little to change the fact that only a small percentage of citizens have the tools to see what the data represents. Glasses are better than a blindfold, but not much better without focused lenses. In many ways, transparent data is harder to understand than transparent government documents written in English.
If a dataset is published but never used, does it make a sound? Plenty of appmakers have taken up the call of open data and produced wonderful, user-friendly programs that help ordinary citizens navigate it. Open data has both enabled and encouraged their work.
But many of these apps are spare-time projects that tend to die out. And if nobody makes use of the data—even the focused data, made user-friendly in clever apps—then it is like the proverbial tree falling in the empty forest. Open government data needs to be made understandable. And it needs to be actively put in front of people who can make use of it.
And that requires more work. Governments can’t just hang their hats on publishing. To move beyond Open Data 1.0, data publishers and data users alike must be more active in integrating data into local work and in promoting its use among nongovernmental parties. Data must be released, but it must also be put in context. How can the data be used? What are the historical and contemporary comparisons? If the data signals a problem, what can government or community actors do to help solve it?
Open Data 2.0 must take transparency and add focus. If there is a story to be told in a raw dataset’s numbers, governments must do more to translate the numbers into English—and help put the story in front of communities who can benefit from it.
Adnan Mahmud is part of the GovLoop Featured Blogger program, where we feature blog posts by government voices from all across the country (and world!). To see more Featured Blogger posts, click here.