Fielding Artificial Intelligence for ISR: Part I

November 1, 2017

Years ago, data collection across the Department of Defense (DoD) was manageable: data was collected and disseminated to command centers at a routine cadence. Specialized equipment performed specific Intelligence, Surveillance, and Reconnaissance (ISR) functions to understand the battlespace, and this information could be fused and synchronized in a timely manner to support informed and accurate decisions.

But that was years ago.

Today, sensors and data sources continue to grow exponentially, generating far more data than the available analysts can process. The emergence of Activity-Based Intelligence, layered on top of existing tasking, collection, processing, exploitation, and dissemination (TCPED) activities, forces us to take a step back and rethink the way we wrangle information from the battlefield.

There are not enough analysts to sift through all the data being generated from reports, sensors, video, and other data feeds. Additionally, without the help of Artificial Intelligence (AI), false positives and false negatives can go unnoticed.

Analysts spend 80 percent of their time sifting through data, but only 20 percent of their time analyzing it. By the time information is sifted and sorted, it may be stale or unusable. The DoD is addressing these challenges with two initiatives: Project Maven and the Combat Cloud.


Project Maven

By the end of this year, the Defense Department would like to deploy new and advanced algorithms that can extract identified objects from large volumes of full motion video and still imagery. Initially started in response to the campaign against ISIS, Project Maven focuses on computer vision (an application of machine learning) to identify these objects without human oversight. The goal of Project Maven is to apply automation and algorithms to free analysts from the mundane job of scanning hours of video in which nothing is happening, so they can focus on identifying objects and activity. Once implemented, the system can prompt analysts to begin reviewing video at a specific point in time, concentrating their effort on the moments that matter.
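Project Maven's actual pipeline is not public, so the following Python is only a minimal sketch of the cueing idea: walk full motion video frame by frame, hand sampled frames to a detector, and record the timestamps where objects appear so an analyst can jump straight to those moments. The `detect_objects` stub, the sample interval, and the file name are assumptions for illustration, not Maven's implementation.

```python
# Sketch: cue analysts to moments in full motion video where a detector
# finds objects of interest, instead of making them watch the whole feed.
# Assumes OpenCV (cv2) is installed; detect_objects is a placeholder for a
# real computer-vision model.

import cv2


def detect_objects(frame):
    """Placeholder detector: return a list of (label, confidence) hits.

    Swap in a trained object-detection model here; this stub reports nothing.
    """
    return []


def find_activity(video_path, sample_every_s=1.0, min_confidence=0.5):
    """Scan the video and return timestamps (in seconds) worth reviewing."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * sample_every_s))  # sample roughly once per interval
    cues, frame_index = [], 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_index % step == 0:
            hits = [h for h in detect_objects(frame) if h[1] >= min_confidence]
            if hits:
                cues.append((frame_index / fps, hits))
        frame_index += 1

    cap.release()
    return cues


if __name__ == "__main__":
    for timestamp_s, hits in find_activity("mission_fmv.mp4"):
        labels = ", ".join(label for label, _ in hits)
        print(f"Review footage at {timestamp_s:8.1f}s: {labels}")
```

The loop itself is trivial; the payoff comes from the detector behind it, and the same cueing pattern applies whether that model runs in a data center or at the tactical edge.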


Combat Cloud

In years past, specialized aircraft performed ISR functions, and the data was manageable. In the future, multiple sensors on multiple aircraft and other platforms, each with multiple capabilities, will provide these same ISR functions, all connected via a "combat cloud." Integrating battlespace management command and control (BMC2) data with all-source data will enable better decision making through improved situational awareness of the current environment and the ability to anticipate future environments.
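The combat cloud is an architectural concept rather than a specific API. As a toy illustration of the fusion step it implies, the sketch below merges sensor observations from different platforms into a single picture keyed by a shared entity identifier. All names (Observation, fuse_picture, the sample platforms) are hypothetical, and real systems would have to resolve which reports refer to the same entity before a step like this.

```python
# Toy illustration of a "combat cloud" fusion step: observations from many
# platforms are merged into one situational picture keyed by entity.
# All class and field names here are hypothetical.

from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Observation:
    platform: str    # which aircraft or sensor reported it
    entity_id: str   # identifier assumed to be resolved upstream
    lat: float
    lon: float
    time_s: float


def fuse_picture(observations):
    """Group observations by entity and keep the latest position for each."""
    by_entity = defaultdict(list)
    for obs in observations:
        by_entity[obs.entity_id].append(obs)

    picture = {}
    for entity_id, obs_list in by_entity.items():
        latest = max(obs_list, key=lambda o: o.time_s)
        picture[entity_id] = {
            "lat": latest.lat,
            "lon": latest.lon,
            "last_seen_s": latest.time_s,
            "reported_by": sorted({o.platform for o in obs_list}),
        }
    return picture


if __name__ == "__main__":
    reports = [
        Observation("MQ-9", "track-17", 33.312, 44.361, 100.0),
        Observation("E-3", "track-17", 33.314, 44.365, 140.0),
        Observation("MQ-9", "track-42", 33.290, 44.400, 120.0),
    ]
    for entity, info in fuse_picture(reports).items():
        print(entity, info)
```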


Challenges of Performing Automation Tasks

Both projects bet on AI algorithms from academia and hope that industry will do the heavy lifting of pre-processing and automation. Fielding these capabilities will create new challenges around access to operational data sources, interoperability with downstream systems, and tactical edge infrastructure. Compounding the complexity, most AI tools need their own feeds and infrastructure, which creates interoperability challenges and increases fielding costs.

Despite these challenges, it is possible for organizations to achieve their automation goals without creating more pain in the process. Stay tuned for Part 2 of this blog series to learn how MarkLogic is solving these automation challenges.

Sponsored by MarkLogic
