Geospatial Intel Meets Entity Resolution: The Who, What, When, Where — and Quest for Why

Ian Gilbert

Inspired by the many conversations we had at the recent GEOINT Symposium about how geospatial data can become an ideal pivot point for an entity resolution system, the time has come to move from When and Where to Who and What, as well as the elusive Why.

Eyes On the Earth

Geospatial imagery abounds. There are dozens of space-based imaging platforms collecting data around the globe, not to mention airplane-based systems, drones, blimps, and who knows what's next. These imaging solutions are not only collecting pictures using visible light, but also deploying ultraviolet and infrared sensors, and even radar and lasers. Image analysts apply many techniques to analyze these data, including visual inspection, feature extraction, change detection, elevation modeling and multispectral signal extraction.

The majority of image collection is understandably reactive: some event occurs in a region, triggering the deployment of imaging platforms to assess its impact, whether a cyclone, a bombing, or an oil spill. However, a considerable amount of passive image collection, or routine monitoring, is being conducted as well.

Ears On the Ground

News, social media, documents, intelligence intercepts, and other forms of unstructured text contain a wealth of information that could be used to develop insights. However, this content needs to be structured in order to provide a means for exploration and discovery.

Creating a meaningful structure from this content involves first extracting mentions of people, places and organizations from unstructured text and then resolving these mentions to known entities in the real world. This entity-centric approach enables users to establish connections and build relationships across multiple streams of content. Just as feature extraction and change detection reveal information hidden within geospatial imagery, entity extraction and resolution reveals the key information necessary to understand and interpret unstructured text data.
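The extract-then-resolve pipeline described above can be sketched in a few lines of Python. This is only a toy illustration under stated assumptions: the alias table, entity IDs, and sample sentence are all invented, and a real system would use a trained NER model and a statistical resolver rather than dictionary lookup.

```python
# Toy alias table mapping surface forms to canonical entity IDs.
# All names here are hypothetical, for illustration only.
KNOWN_ENTITIES = {
    "acme rail": "ORG:acme_rail_co",
    "acme rail co.": "ORG:acme_rail_co",
    "j. smith": "PER:john_smith",
    "john smith": "PER:john_smith",
}

def extract_mentions(text):
    """Naive mention extraction: find known surface forms in the text.
    A production system would use a trained NER model instead."""
    lowered = text.lower()
    return [alias for alias in KNOWN_ENTITIES if alias in lowered]

def resolve(mentions):
    """Map each mention to its canonical, real-world entity ID,
    collapsing different surface forms onto one entity."""
    return sorted({KNOWN_ENTITIES[m] for m in mentions})

doc = "J. Smith signed a transport deal with Acme Rail Co. on Tuesday."
entities = resolve(extract_mentions(doc))
```

Note how "J. Smith" and "Acme Rail Co." each resolve to a single canonical ID, which is what lets separate documents be connected through a shared entity rather than a shared string.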

The Surround-Sound Approach

It is easy to see how the combination of image analysis and text analysis would lead to greater insight. You can imagine a comprehensive system deployed to monitor a volatile border region, or urban center. The system uses image analysis to detect a change in the environment, identifying the “where” and “when”, and then pivots on the same entities in the unstructured text data to determine “who” (key individuals) and “what” (organizations) are related to the “where” and “when”.
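The pivot described above — from an image-derived "where" and "when" to the "who" and "what" in text — can be sketched as a spatio-temporal filter over entity-tagged records. Everything below (the record schema, coordinates, radius, and time window) is an illustrative assumption, not a description of any actual system.

```python
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical text records, already entity-extracted and resolved.
records = [
    {"lat": 33.51, "lon": 36.29, "time": datetime(2013, 5, 2, 9),
     "entities": ["ORG:rail_co", "PER:j_smith"]},
    {"lat": 48.85, "lon": 2.35, "time": datetime(2013, 5, 2, 9),
     "entities": ["ORG:other_co"]},
]

def pivot(detection, radius_km=25, window=timedelta(days=2)):
    """Given an image-derived detection (where + when), return the
    entities (who + what) from nearby, contemporaneous text records."""
    hits = set()
    for r in records:
        close = haversine_km(detection["lat"], detection["lon"],
                             r["lat"], r["lon"]) <= radius_km
        recent = abs(r["time"] - detection["time"]) <= window
        if close and recent:
            hits.update(r["entities"])
    return sorted(hits)

detection = {"lat": 33.50, "lon": 36.30, "time": datetime(2013, 5, 3, 12)}
related = pivot(detection)
```

The first record falls inside the radius and window and contributes its entities; the second is thousands of kilometers away and is filtered out. At scale, the linear scan would be replaced by a spatial index, but the shape of the query is the same.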

For example, an analysis of a series of images over a rail yard might indicate a significant increase in activity. Using the location and time data from the imagery, an analyst could conduct a search of the unstructured text data to reveal information about a rail company receiving a new contract to transport goods. The entity resolution system could help identify connections between the company and key individuals in other organizations, providing the analyst with a fuller picture of the event.

The Collaborative Approach to Why

Combining results from both image processing and data mining activities would provide an opportunity for collaboration among a larger community of analysts with different expertise. While automated and semi-supervised systems can efficiently extract information from large quantities of data, there is still no substitute for human intuition. With more organization and structure applied to the information, analysts can see the fuller picture of the Who, What, When and Where of a particular event, allowing them to develop insights into the crucial "Why".

At Basis Technology we’re actively developing an entity resolution system capable of handling large volumes of unstructured “Big Text”. And we’re looking forward to continuing the conversations that began at GEOINT, which will lead to a realization of this vision.

Ian Gilbert is Director of Marketing for Basis Technology.
