The following is a reprint of an article written by Ewen Levick for the Australian Defence Magazine – March 2023.
A little background
Air Battle Managers at No.114 Mobile Control and Reporting Unit during Exercise Diamond Storm 2017.
AOS’s origins date back to the 1990s, when it emerged from the Australian Artificial Intelligence Institute, the Melbourne arm of the Stanford Research Institute. AOS soon found its niche working on air-combat modelling for the Defence Science and Technology Group (then DSTO).
In this environment, AOS continued the Institute’s research into the then-novel concept of Beliefs, Desires and Intentions (BDI) intelligent software agents. AOS released its first BDI agent platform, named JACK, in 1997, and continued collaborating with DSTO and the UK Ministry of Defence on the development of C-BDI, AOS’s new intelligent software agent platform written in C++.
BDI intelligent software agents are an example of symbolic AI, which is based on formal logic, sometimes referred to as “top-down” AI. This is contrasted with pattern-matching approaches, such as deep neural networks or machine learning, described as “bottom-up” AI.
Top-down AI, being based on formal logic, offers explainability. The BDI agents’ reasoning comes from the execution of their plans, which are understandable to subject matter experts, not just software engineers. This allows the experts to inspect the intelligent agents’ reasoning as it is executed by tracing their intentions, beliefs and goals (or desires). With the addition of a query engine, C-BDI has a powerful explainability capability, in contrast to bottom-up approaches.
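To make the idea concrete, the short Python sketch below shows the general shape of a BDI agent and the readable trace its reasoning leaves behind. It is a minimal illustration only, not the C-BDI or JACK API; every class, field and function name in it is invented for this example.

from dataclasses import dataclass, field

# Hypothetical, simplified BDI structures for illustration only; these are not
# the C-BDI or JACK APIs, and every name here is invented for this sketch.
@dataclass
class Plan:
    name: str          # human-readable name a subject matter expert can recognise
    achieves: str      # the goal (desire) this plan can satisfy
    steps: list        # callables executed when the plan is adopted

@dataclass
class Agent:
    beliefs: dict = field(default_factory=dict)     # what the agent currently holds true
    desires: list = field(default_factory=list)     # goals it wants to achieve
    intentions: list = field(default_factory=list)  # plans it has committed to
    plans: list = field(default_factory=list)       # its plan library
    trace: list = field(default_factory=list)       # reasoning trace for explainability

    def deliberate(self):
        # Commit to the first applicable plan for each goal, recording why.
        for goal in self.desires:
            for plan in self.plans:
                if plan.achieves == goal:
                    self.intentions.append(plan)
                    self.trace.append(f"goal '{goal}' -> adopted plan '{plan.name}'")
                    break

    def act(self):
        for plan in self.intentions:
            for step in plan.steps:
                step(self.beliefs, self.trace)

def note_tailwind(beliefs, trace):
    trace.append(f"believes tailwind = {beliefs['tailwind_kt']} kt")

agent = Agent(beliefs={"tailwind_kt": 100},
              desires=["assess_track"],
              plans=[Plan("AssessTrackTiming", "assess_track", [note_tailwind])])
agent.deliberate()
agent.act()
print("\n".join(agent.trace))   # the trace reads like the expert's own reasoning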
Intelligent software agent
C-BDI underpins the Intelligent Battlespace Advisor (IBA), one of the principal decision assistants that provide the “intelligence” in NGA’s JABMS. IBA combines data streams from a multitude of networked sensors across all warfighting domains and uses this information to assess the behaviour of each and every airborne or maritime track.
“With an ‘agent for every track’, analogous to having one person for each, IBA identifies suspicious track behaviour and then highlights these occurrences to the Air Battle Managers,” Dr Andrew Lucas, AOS Managing Director, said.
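A rough sketch of the “agent for every track” idea follows, again purely illustrative: the report fields, track identifiers and threshold below are assumptions made for this example, not IBA’s actual data model.

# Hypothetical sketch of the "agent for every track" idea: one lightweight
# assessor is created per track identifier and fed only that track's reports.
# Field names and the threshold are assumptions made for this illustration.
class TrackAgent:
    def __init__(self, track_id):
        self.track_id = track_id
        self.beliefs = {}                      # latest fused picture of this track

    def update(self, report):
        self.beliefs.update(report)

    def suspicious(self):
        # Placeholder behaviour check; the real logic would be a plan library.
        return self.beliefs.get("deviation_nm", 0) > 10

agents = {}                                    # track identifier -> dedicated agent

def on_sensor_report(report):
    agent = agents.setdefault(report["track_id"], TrackAgent(report["track_id"]))
    agent.update(report)
    if agent.suspicious():
        print(f"Highlight track {agent.track_id} to the Air Battle Manager")

on_sensor_report({"track_id": "TRK-042", "deviation_nm": 14})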
“Now if we only use a pattern matching system, a flight that is running half an hour early would likely get flagged as a positive alert,” Lucas explained. “But because we ingest a range of data streams from Air Services, we’re able to obtain ground and air speeds from civil aircraft and incorporate this information into our calculations.
“This provides a dynamic wind model, which tells us that the reason why a flight is ahead of its flight plan is due to a 100-knot tailwind, and so this track does not pose a threat. As these types of scenarios occur regularly, the IBA system can significantly reduce the number of false positives, leaving Air Battle Managers more time to focus on those tracks that represent actual threats.”
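The arithmetic behind that example can be sketched as follows; the planned airspeed, distance flown and tolerance margin are assumed values chosen only to reproduce the 100-knot, 30-minute scenario described above.

# Illustrative arithmetic behind the article's example: a flight running early
# is only flagged if the earliness is not explained by the measured tailwind.
# The airspeed, distance and margin below are assumptions for the sketch.
def earliness_explained_by_wind(planned_tas_kt, tailwind_kt,
                                distance_flown_nm, minutes_early):
    ground_speed_kt = planned_tas_kt + tailwind_kt
    planned_minutes = distance_flown_nm / planned_tas_kt * 60
    actual_minutes = distance_flown_nm / ground_speed_kt * 60
    minutes_saved = planned_minutes - actual_minutes
    return minutes_saved >= minutes_early * 0.8     # assumed tolerance margin

# A flight 30 minutes early after 1,500 nm at a planned 450 kt with a 100 kt tailwind:
if earliness_explained_by_wind(450, 100, 1500, 30):
    print("No alert: early running is consistent with the tailwind")
else:
    print("Alert: early running is not explained by the winds")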
Building trust in the system
A significant concern, however, is fostering trust. One operator, relating their experience with computer-based decision-support systems, reportedly remarked: “We have had experience with these systems – they generate nuisance alerts, and we can’t switch them off!”
Recognising this, AOS has sought to build a high degree of transparency and explainability into the decision-making processes that provide the alerts in IBA.
“What we want is to be able to show the reasoning,” Lucas said. “So, when the question is asked, ‘Why didn’t the system flag that track with the early running time?’, IBA can report that there was a 100-knot tailwind, and consequently the plane was 30 minutes ahead of its flight plan.”
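One simple way to support such a “why” question is to store a human-readable reason alongside every assessment, as in the sketch below. This structure is an assumption made for illustration, not a description of IBA’s query engine.

# A sketch of attaching a human-readable reason to each assessment so a later
# "why" question can be answered; the structure is an assumption, not IBA's design.
decisions = {}                                  # track identifier -> (verdict, reason)

def record(track_id, verdict, reason):
    decisions[track_id] = (verdict, reason)

def why(track_id):
    verdict, reason = decisions.get(track_id, ("unknown", "no assessment recorded"))
    return f"Track {track_id}: {verdict} because {reason}"

record("TRK-042", "not flagged",
       "a 100-knot tailwind put the aircraft 30 minutes ahead of its flight plan")
print(why("TRK-042"))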
In addition, the graphical representation of IBA’s reasoning plans allows the Air Battle Managers to work with system analysts to build the reasoning that underlies each alert. In this way the operational staff determine the alerts and the situations that will trigger them. If, during trials, IBA generates inappropriate alerts, the system analysts and Air Battle Managers can graphically trace the alert reasoning in IBA and identify the cause. They then agree on the modified logic, which is incorporated directly into IBA. The result is that IBA only provides the alerts that the Air Battle Managers want, and these can be modified as operational circumstances change.
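Conceptually, this amounts to keeping the alert logic as data that operators and analysts can inspect and change, rather than as code that must be rewritten. The sketch below illustrates that idea with assumed rule names, fields and thresholds; it is not how IBA itself represents its plans.

# A sketch of keeping alert logic as editable data rather than hard-coded checks,
# so the conditions the Air Battle Managers agree on can be changed without
# rebuilding the system. Rule names, fields and thresholds are all assumptions.
alert_rules = [
    {"name": "OffFlightPlanRoute", "field": "deviation_nm", "op": ">",  "value": 10},
    {"name": "NoTransponder",      "field": "squawking",    "op": "==", "value": False},
]

OPS = {">": lambda a, b: a > b, "==": lambda a, b: a == b}

def evaluate(track_beliefs):
    return [rule["name"] for rule in alert_rules
            if rule["field"] in track_beliefs
            and OPS[rule["op"]](track_beliefs[rule["field"]], rule["value"])]

print(evaluate({"deviation_nm": 14, "squawking": True}))    # ['OffFlightPlanRoute']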
This unique capability distinguishes IBA from automated alerting systems, or pattern-of-life AI approaches, ensuring that the Air Battle Managers determine the alerts that IBA provides.