
Coping with the Shutdown: Federal Data

The Federal Government has shut down operations. It’s not the first such event (there were three in 1977 alone), though it does have certain unique characteristics. Leaving the politics aside for the moment, what’s a civic hacker or data journalist to do when most of the government web and FTP presence is unreachable as well?

U.S. On Verge Of Full-Scale Government Hoedown (The Onion)

We have two responses brewing at Code for America.

The 2013 Las Vegas team and other fellows have been iterating on an API for the North American Industry Classification System (NAICS) for use in their city project. Normally distributed as a series of Excel spreadsheets, this useful data now has a simple JSON-based API for search and access, built by Lou Huang and team and available at NAICS.us. The source code behind the service is available on GitHub under the CodeForAmerica organization. We were still adding documentation and search capabilities when the government shutdown began, but this is an opportune time to push it out of the nest early.
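For the curious, querying the service looks roughly like the sketch below. The endpoint path and query parameters shown here are assumptions for illustration only; check the NAICS.us documentation and the CodeForAmerica repository for the actual interface.

    # Minimal sketch of a NAICS.us lookup using only the Python standard library.
    # The endpoint and parameters below are assumptions, not the documented API.
    import json
    import urllib.parse
    import urllib.request

    BASE_URL = "http://naics.us/v0/q"  # hypothetical search endpoint

    def search_naics(terms, year=2012):
        """Return parsed JSON results for a free-text NAICS search."""
        query = urllib.parse.urlencode({"terms": terms, "year": year})
        with urllib.request.urlopen(f"{BASE_URL}?{query}") as response:
            return json.loads(response.read().decode("utf-8"))

    if __name__ == "__main__":
        # Field names ("items", "code", "title") are likewise assumptions.
        for item in search_naics("coffee shops").get("items", []):
            print(item.get("code"), item.get("title"))

The point is that the data, once liberated from spreadsheets, is a plain HTTP request away.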

In the land of U.S. Census geographic and demographic data, the FTP server at ftp.census.gov is ordinarily a stable repository of extensive, high-quality files in downloadable ZIP form. As of this week, it is unresponsive, and likely to stay that way for the duration of the shutdown.

A community of census data users has worked with Code for America to pool their collected backups and make them available online in lieu of the normal Census.gov service. IRE Census Reporter developer Ian Dees has set up a selection of 2012 data hosted on Amazon S3. Experimental cartographer Eric Fischer has made available his complete collection of 2013 TIGER/Line edge shapefiles. CUNY Mapping Service director Steven Romalewski supplied 2013 district boundaries. @OpenNebraska sent 2012 TIGER/Line edges. Darrell Fuhriman provided valuable demographic summary files from 2010. GeoCommons reminded us that 6,000 Census-derived datasets can be found on their service.
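If you need one of the mirrored files while Census.gov is dark, fetching it is an ordinary HTTP download. Here is a sketch; the bucket name and key are placeholders, so substitute the actual S3 URL shared by the community.

    # Sketch of pulling a mirrored TIGER/Line ZIP over plain HTTP.
    # The bucket and key below are placeholders, not the real mirror paths.
    import shutil
    import urllib.request

    MIRROR_URL = (
        "https://census-mirror.s3.amazonaws.com/"   # hypothetical bucket
        "tiger2013/EDGES/tl_2013_06075_edges.zip"   # hypothetical key
    )

    def download(url, destination):
        """Stream a remote file to disk without holding it all in memory."""
        with urllib.request.urlopen(url) as response, open(destination, "wb") as out:
            shutil.copyfileobj(response, out)

    if __name__ == "__main__":
        download(MIRROR_URL, "tl_2013_06075_edges.zip")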

This potluck-style process highlights the survivalist utility of big, dumb physical servers with actual hard drives. I think of my neighbor's rain-catchment system, the chest freezer in the basement, or my recently earned ham radio license and laugh, but Sunlight Foundation's Eric Mill reminds us of the critical role of bulk data:

The only reliable way to preserve data online is to make copies — and the more copies, the better!

That’s why a government API will never be enough. It’s just so much easier to copy data when it’s directly downloadable in bulk. APIs can be extremely useful, but they also centralize control and form a single point of failure. Ultimately, APIs are optional — data is a necessity.

Just as importantly: hosting static files requires fewer people, smaller systems, and less technical expertise. It’s vastly simpler and cheaper than hosting a live, “smart” data service. In the face of hard funding decisions, that’s going to matter.
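In that spirit, keeping your own copy of the bulk files you depend on is cheap insurance. Below is a sketch of a mirroring routine; the URL list is hypothetical, so point it at whatever bulk files you actually rely on. It downloads anything missing and prints a SHA-256 checksum so independent copies can be verified against one another later.

    # Sketch of a "more copies, the better" mirror: fetch static files once,
    # skip what you already have, and note a checksum for later verification.
    # The URL list is a placeholder for the bulk files you depend on.
    import hashlib
    import os
    import shutil
    import urllib.request

    FILES = [
        "https://example.com/data/summary_file_2010.zip",   # hypothetical
        "https://example.com/data/tl_2013_us_county.zip",   # hypothetical
    ]

    def mirror(urls, directory="mirror"):
        os.makedirs(directory, exist_ok=True)
        for url in urls:
            path = os.path.join(directory, os.path.basename(url))
            if not os.path.exists(path):
                with urllib.request.urlopen(url) as resp, open(path, "wb") as out:
                    shutil.copyfileobj(resp, out)
            # Hash in chunks so large Census ZIPs don't need to fit in memory.
            sha = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    sha.update(chunk)
            print(sha.hexdigest(), path)

    if __name__ == "__main__":
        mirror(FILES)

Nothing about this requires an API, an account, or anyone else's server staying up.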

We’re not exactly getting our mail from Kevin Costner yet, but these real and direct consequences of the federal shutdown remind us of the fragile pieces loosely joined that make up our digital platforms. Whether the cause is GOP deadlock or East Coast electrical storms, the data and services we build on are periodically stressed. What are you doing to prepare for the unexpected?

Original post
