What separates good data from bad data? A few things: how the data is displayed, its accuracy, and useful context about the data are all key.
Whether we know it or not, these features keep us going back to our favorite websites and apps to get information. We all have go-to data that we check weekly or even daily. Maybe it’s weather data, sports statistics, or trash-pickup schedules. What makes these data sets useful is the way they’re presented.
Heather Gittings, Director of Public Sector Solutions at Qlik, is in the business of helping agencies present their data in useful and visually appealing ways. “We offer a platform that pulls together all data — regardless of size or location — into one consolidated view,” Gittings said. “We do that in a visual way, so that end users can create and explore charts and graphs that answer their questions and help them make informed decisions.”
These capabilities, often referred to as visual analytics, are about empowering users to pore over data and present it in new ways.
But agencies have many solutions to choose from, and it’s hard to decide which analytics tools can best drive your agency’s open data strategy. To help you make the best choice for your agency, Gittings offered these tips:
- Consider the type of data you’re sharing with the public. Do you have large volumes of data from numerous data sources? “There are many tools out there that are excellent at creating beautiful pictures and visualizations from Excel data,” Gittings said. “But the limitation is that they can create those pretty pictures using a single set of simple data, such as relatively small Excel files.” For many agencies, though, it’s not that easy. To handle complex data, ensure you choose a tool that can analyze large volumes of data from disparate and complex sources.
- Consider how citizens will interact with the data. Will people view this information on a desktop computer, a laptop or a mobile device? It’s vital that agencies consider how their data will be displayed on different devices. “The odds are, if citizens are using your data, they’re going to be doing it from a mobile device,” Gittings said. “So make sure that the tool you choose is designed for mobility, has a responsive interface and enables citizens to build visualizations — not just consume them but also create and share them on the go.”
- Consider your future needs. Agencies want their investments to adapt to their needs as their data changes and grows. Equally important is ensuring that whatever tools they buy can meet citizens’ evolving needs. Open data initiatives today will be very different in the future, Gittings noted. The key is choosing tools that will provide flexibility, as opposed to something fit only for today’s needs.
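The first tip — pulling differently shaped sources into one consolidated view — can be sketched in a few lines of plain Python. The source formats, field names, and figures below are hypothetical, invented purely to illustrate the idea; real analytics platforms do this at far larger scale.

```python
import csv
import io
import json

# Hypothetical source 1: a CSV export from one system.
csv_export = """region,permits_issued
North,120
South,85
"""

# Hypothetical source 2: a JSON feed from another system.
json_feed = '[{"region": "North", "inspections": 40}, {"region": "South", "inspections": 55}]'

def consolidate(csv_text, json_text):
    """Merge two differently shaped sources into one view keyed by region."""
    view = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        view.setdefault(row["region"], {})["permits_issued"] = int(row["permits_issued"])
    for record in json.loads(json_text):
        view.setdefault(record["region"], {})["inspections"] = record["inspections"]
    return view

consolidated = consolidate(csv_export, json_feed)
# consolidated["North"] → {"permits_issued": 120, "inspections": 40}
```

Keying every source on a shared field (here, region) is what makes the merged view coherent; the hard part in practice is agreeing on that shared key across systems.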
But what good is any tool if it doesn’t make your data easier to understand and use? Think about the weather apps you’ve seen or used. If the app shows there’s a 70 percent chance of thunderstorms, you can plan accordingly and pack an umbrella. It doesn’t take much analysis or expertise to view the data and use it to make decisions. This example is pretty straightforward, but it isn’t always obvious how information will be used or what data will be a hit among citizens.
“Agencies can’t necessarily predict what somebody’s going to want to find,” Gittings said. If they clearly state the dataset’s value and provide that data in a way that enables citizens to explore it, then “they’ve added value while empowering citizens to answer their own questions. It’s a win-win.”
When deciding what data to release, Monica McEwen, Federal Director at Qlik, suggests starting with data that is specific to your mission and informs the general public. Data sets released through Freedom of Information Act requests are prime candidates for publishing. “Make them very visible, so that they’re used for better decision-making,” McEwen said.
She encouraged agencies to consider how their data might appeal to a broad audience. It’s true that certain people may only be interested in very specific types of data, but there’s a way to make data more appealing to the masses. McEwen used this example: A scientist might have great interest in raw data about fault lines for earthquake predictions, but the average citizen would have a hard time making sense of the raw information. But that same data, presented in charts and graphics, would be very valuable to the general public.
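McEwen’s earthquake example boils down to summarizing raw records into something chart-ready. A minimal sketch, using hypothetical magnitude readings invented for illustration:

```python
from collections import Counter

# Hypothetical raw magnitude readings -- the kind of data a scientist
# could use directly, but that means little to the average citizen.
raw_magnitudes = [2.1, 3.4, 2.8, 5.0, 4.2, 3.9, 2.5, 4.8]

def magnitude_bands(readings):
    """Bucket raw readings into whole-number bands, ready to chart."""
    return Counter(int(m) for m in readings)

bands = magnitude_bands(raw_magnitudes)
# bands[2] → 3 (three readings in the 2.0-2.9 range)
```

The summary is what a bar chart would actually display; the raw list stays available for specialists, while the banded counts tell the general public the story at a glance.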
“It goes back to making the data truly usable, as opposed to just checking a box that it’s out there and open,” McEwen said.