
A look at Government Computer News’ four-part series on text analytics


Government Computer News has published an in-depth examination of how text analytics is being used in the federal government. The series looks at how NASA is using text analytics for airline safety, how text analytics can “read between the lines” of terabytes of data, how it can identify early signs of bio threats, and how agencies can apply it to data mining. The full four-part series can be found here, but we wanted to summarize and analyze it ourselves so we could give you our cut.

Bottom line: Great work by GCN. These guys are adding value to the dialog. Here are more thoughts:

NASA applies deep-diving text analytics to airline safety

NASA has created the Aviation Safety Program, which uses text analytics to process hundreds of thousands of unstructured data reports. NASA collects everything from pilot reports to mechanics’ logs in an attempt to identify problems before they happen. This database was previously reviewed only by human analysts, who do not have the time or cycles to process all the data. The machine processing starts with natural language processing (NLP) and machine learning. For more, be sure to check out the full article here.
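The article doesn’t spell out NASA’s pipeline, but as a rough illustration of the kind of NLP-plus-machine-learning approach it describes, here is a minimal sketch that routes free-text safety reports to problem categories. Everything in it is our assumption, not NASA’s method: the sample reports, the labels, and the choice of TF-IDF with logistic regression are invented for the example.

```python
# Minimal sketch of classifying unstructured safety reports with NLP + ML.
# Purely illustrative -- NOT NASA's actual pipeline. The sample reports and
# labels below are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: free-text reports tagged with a problem category.
reports = [
    "hydraulic pressure dropped during climb, returned to field",
    "flap actuator found worn during routine inspection",
    "crew fatigue noted after extended duty day, approach unstable",
    "pilot reported confusion over ATC clearance on departure",
]
labels = ["mechanical", "mechanical", "human_factors", "human_factors"]

# TF-IDF turns each report into a term-weight vector; logistic regression
# learns which terms indicate which category.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, labels)

# A new, unlabeled report gets routed to a likely category for analyst review.
print(model.predict(["landing gear indicator light failed on final"]))
```

The point of the machine pass is triage, not final judgment: reports land in a category queue where human analysts, as the article notes, would otherwise never have the cycles to read everything.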

Text analytics: Reading between the lines of terabytes of data

DHS has started using text analytics to scan social media networks for signs of terrorism. Scanning social media is nothing new, but machine-learning text analytics can surface “hidden relationships” that highlight trends and public sentiment. Further details are scant because of the pace at which adversaries adapt to the government’s tactics, techniques and procedures (TTPs). The article describes capabilities that appear to leverage Apache Hadoop, though for some reason it never mentions Hadoop by name. For the full article, check it out here.
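Since details are scant, here is a deliberately toy sketch of the kind of sentiment-and-trend scan the article gestures at. The word lists, posts, and scoring are all invented for illustration; a real system would use trained models over Hadoop-scale data, not a hand-built lexicon.

```python
# Toy lexicon-based sentiment scan over social posts -- purely illustrative.
POSITIVE = {"safe", "calm", "great", "support"}
NEGATIVE = {"threat", "riot", "attack", "fear"}

def sentiment(post: str) -> int:
    """Crude score: +1 per positive token, -1 per negative token."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Hypothetical stream of posts; a production pipeline would ingest millions.
posts = [
    "feeling safe and calm downtown today",
    "rumors of a threat near the station, lots of fear",
    "great community support at the rally today",
]
scores = [sentiment(p) for p in posts]
print("per-post scores:", scores)
print("mean sentiment:", sum(scores) / len(scores))
```

Aggregating these scores over time and geography is what turns individual posts into the trend signals the article describes.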

Canary in a data mine: How analytics detects early signs of bio threats

The National Collaborative for Bio-Preparedness (NCB-Prepared) is using a system “to monitor emergency medical services reports, poison center data and a wide array of other data sets, including social media, to detect signs of biological threats.” By analyzing these reports, the system identified a gastrointestinal outbreak two months before standard reporting would have caught it. The system uses SAS text analytics running on North Carolina State University’s cloud-based Virtual Computing Lab. To read more, check out the full report here.
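To make the early-warning idea concrete, here is a tiny syndromic-surveillance sketch: flag any day whose count of gastrointestinal-symptom reports spikes well above the trailing baseline. The counts, window, and threshold below are invented; NCB-Prepared’s SAS-based system is of course far more sophisticated.

```python
# Toy outbreak-detection sketch: flag days where symptom-report counts spike
# above the recent baseline. Counts and thresholds are hypothetical.
from statistics import mean, stdev

daily_gi_reports = [12, 14, 11, 13, 12, 15, 13, 14, 12, 31, 38, 45]  # invented

WINDOW = 7       # baseline: trailing week of counts
THRESHOLD = 3.0  # flag if today exceeds the baseline mean by 3 std deviations

for day in range(WINDOW, len(daily_gi_reports)):
    baseline = daily_gi_reports[day - WINDOW:day]
    mu, sigma = mean(baseline), stdev(baseline)
    if daily_gi_reports[day] > mu + THRESHOLD * sigma:
        print(f"day {day}: {daily_gi_reports[day]} reports -- possible outbreak signal")
```

The early-warning value comes from catching the first anomalous days, well before case counts are high enough for standard reporting channels to react.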

Text analytics ready for the heavy lifting of agencies’ data mining

The last article revolves around the growing need for unstructured data analytics in the federal government. It features one of our heroes, Chris Biow, federal CTO at MarkLogic.

Biow agrees. “Any agency in the government that deals in any respect with the public should be using text analytics now,” he told GCN. “It’s maybe only being used now in 20 percent of the cases where it should. It’s as broad as treaty compliance versus watching public sentiment toward the United States overseas to predict a riot. All of that is out there.”

Biow said the most critical thing in initial implementations of text analytics is to manage expectations, because machines are still not nearly as good at analyzing text as humans are. “The machine’s advantage is that it can do all the text,” he explained. “[But] you don’t have enough human beings to read it all. The machines will make a pass-over and humans can then refine that. The machines are getting better in terms of the complexity and detail that they can extract, but not necessarily in terms of the quality. That’s why it’s important to set expectations.”
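Biow’s “machines make a pass, humans refine” workflow can be sketched as a simple confidence-gated triage loop. The classify() stub and threshold below are hypothetical stand-ins, just to show the shape of the loop, not any MarkLogic product behavior.

```python
# Sketch of a human-in-the-loop workflow: the machine makes a first pass over
# everything; only low-confidence items go to human review. Hypothetical only.
def classify(text: str) -> tuple[str, float]:
    """Stand-in for a real model; returns (label, confidence)."""
    flagged = "threat" in text.lower()
    return ("flag", 0.9) if flagged else ("ok", 0.55)

REVIEW_THRESHOLD = 0.7  # below this, a human analyst refines the machine's call

documents = ["routine status update", "possible threat mentioned in report"]
for doc in documents:
    label, conf = classify(doc)
    if conf < REVIEW_THRESHOLD:
        print(f"HUMAN REVIEW: {doc!r} (machine said {label} at {conf:.0%})")
    else:
        print(f"auto-accepted: {doc!r} -> {label}")
```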

“The best practice here,” Biow said, “is setting reasonable expectations. And results can definitely be improved as your users, library scientists and text analytics vendors start working together.”

One challenge: because many agencies do not talk publicly about their text analytics work, it is hard to get data on solutions and successes. Biow reiterated that managing expectations can be hard, as machines still have much left to learn. To read the rest, check out the full article here.

There are many great points in this series, and we highly recommend it. Thanks, GCN.

We hope to see follow-on work by GCN along these lines, perhaps diving into the new realm of Model Enabled Analysis and capabilities like Savanna from Thetus, which is showing a great path toward helping humans interact with information like that described in this GCN series.


Original post
