I work on employee surveys. For a variety of reasons, I'm the poor sap who has to read through thousands and thousands of comments that public servants append to these surveys (because we asked if they had any). In fact, that's what I'm doing this morning between sips of coffee.
I often get asked by folks in my organization, or those in the HR community in other agencies, whether we "do anything" with the comments. The presumption is that we should just be able to throw the 1667 pages (10pt Arial narrow, single space, 3/4" margins) that I'm staring at into some sort of text-mining software, and magical insight will squirt out the other end, which we will then transform into a report, and publish.
Doesn't work like that, though. Comments go off in a zillion directions. People use expressions I've never heard of, make references to things I've never heard of, leave out words, make spelling and grammatical errors, and more. To make sense of all this data (and that's 1667 pages from this year alone; we get similar amounts...every...stinking...year, which is where my perspective comes from) and subject it to a coding scheme within text-mining software, you have to have at least some rudimentary notion of what you're looking for and what forms it might show up in, and in all the years I've been doing this, I still have no idea what to expect. I suppose if you were simply interested in counting stuff, you could just let the software have its way and apply its algorithms, but that doesn't mean the stuff that matters for policy development would rise to the top of the heap. So, I have to read it, and apply my own "wetware".
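To make that concrete, here's a minimal sketch of what "just counting stuff" amounts to. The comments and stopword list are made up for illustration; this is not my actual toolchain, just the naive frequency-ranking approach in miniature:

```python
from collections import Counter
import re

# Hypothetical survey comments, invented for illustration only.
comments = [
    "Management needs to communicate better with staff.",
    "I like my team but the workload is too heavy.",
    "The office is fine. Management is fine. Everything is fine.",
    "Workload and staffing levels are the real issue here.",
]

# A toy stopword list; real tools ship much larger ones.
STOPWORDS = {"the", "is", "to", "with", "my", "but", "and",
             "are", "i", "too", "of", "a", "here"}

def top_terms(texts, n=3):
    """Naive 'counting stuff': tokenize, drop stopwords, rank by raw frequency."""
    words = []
    for text in texts:
        words.extend(re.findall(r"[a-z']+", text.lower()))
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

print(top_terms(comments))
```

The word the algorithm ranks first here is "fine" — a filler word — while the policy-relevant signal (staffing levels) sits near the bottom of the heap with a count of one. That's the point: frequency is not importance, which is why the wetware still has to do the reading.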
And therein lies the problem with ALL THAT SURVEILLANCE DATA, and the reason why a great deal of what happens is fishing expeditions, almost of necessity. What folks ought to be looking for is not always specifiable, so they simply have to keep their eyes open for anything that might seem conspicuous. They feel a need to gather it all, hang onto it all, and keep running it through the pasta-maker, again and again and again, until it turns into something.
Many of you might be familiar with the Freakonomics books, in which a variety of data, metadata, and the like is marshalled to provide alternative explanations of social phenomena. The data are generally observational rather than derived from experimental manipulation, and while the hypotheses and accepted wisdom the authors challenge may very well deserve challenging, that does not mean the inferences they draw from the data they have looked at are necessarily better explanations.
Both economists and security specialists tend to embark on fishing expeditions, looking for conspicuous patterns. Sometimes they find something of use, and many times they don't. Yesterday's market predictions are forgotten as quickly as Jeane Dixon's horoscopes from last week, just as yesterday's terror suspects are shelved.
The bind that security culture finds itself in is that its "permission" to go fishing is predicated on public and legislative trust that the baited hook always comes up with something edible for dinner that night. To reveal that a weekend spent casting the line yielded nothing edible to show for it...yet again...is to cast doubt on the entire exercise. So security culture is very protective of itself, and especially of its failures, and is eminently capable of persuading decision-makers that it needs to be protective. Understand that this is completely independent of all that very understandable Valerie Plame stuff, where operatives' identities need to be concealed. But that said, we arrive at a very slippery slope when the answer becomes "I can't tell you why I can't tell you about what I'm doing or why I'm doing it".
The paradox is that we seek to build a society where the availability of trust is as commonplace and normative as getting a sip of water from the fountain in the hallway, yet we find ourselves in these places and circumstances where we are moving at breakneck speed to undermine citizen trust. And that's not at all what any of us wants.