C-BDI™
Building Artificial Intelligence
C-BDI™ is an artificial intelligence framework for building human-machine teams and is the core of all AOS products. It is the fourth generation of our software, which uses the Belief, Desire, Intention (BDI) model for explainable decision-making and provides distributed team reasoning, resilience and scalability.
C-BDI™ uses rational agents to model how intelligent entities interact with each other and with the environment. A rational agent can be described as a self-contained program that makes its own decisions and acts upon them to achieve a goal.
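Abstractly, such an agent is a sense–decide–act loop: it observes the environment, chooses an action in pursuit of its goal, and applies that action. The sketch below illustrates the idea only; all class and method names are hypothetical, not the C-BDI™ API.

```python
# Minimal sketch of a rational agent as a sense-decide-act loop.
# Illustrative only; names here are invented, not the C-BDI API.
class RationalAgent:
    def __init__(self, goal):
        self.goal = goal          # the state the agent is trying to achieve
        self.beliefs = {}         # the agent's current view of the environment

    def sense(self, environment):
        # Update beliefs from observations of the environment.
        self.beliefs.update(environment)

    def decide(self):
        # Choose an action that moves the agent toward its goal.
        if self.beliefs.get("position", 0) < self.goal:
            return "move_forward"
        return "idle"

    def act(self, action):
        # Apply the chosen action to the agent's state.
        if action == "move_forward":
            self.beliefs["position"] = self.beliefs.get("position", 0) + 1

agent = RationalAgent(goal=3)
agent.sense({"position": 0})
while agent.decide() != "idle":
    agent.act(agent.decide())
# The agent keeps acting until its goal is reached.
```

The key property is self-containment: nothing outside the agent tells it what to do; it selects actions itself based on its beliefs and goal.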
C-BDI™ agents are modelled through a graphical interface, without the need to write complex code. This visual modelling of agents bridges the gap between operators and domain specialists and helps build trust in the system.
Building Human-Machine Teams
C-BDI™ supports Human-Machine Teaming by reducing the cognitive load on humans, allowing them to focus on the subtler reasoning tasks that humans do best.
There’s an inherent correlation between team performance and explainable AI (XAI), which allows humans and machines to coordinate with increasing levels of efficiency. XAI builds trust in the system by providing the human with the appropriate situational information so that a decision can be made.
The advantages of automation can quickly be lost due to a lack of understanding or confusion about the decisions made by an AI. While trust can be built over time as the system continues to make rational decisions, the transparency and understanding gained through XAI can greatly improve the level of trust felt by a human performing a role.
C-BDI’s™ ‘explainable AI’ enables a human to ‘step into’ a system and work with its autonomous operations, or view the process from the outside. This is often referred to as operating ‘in’ or ‘on’ the loop.
Building Ethical AI
When an autonomous system is making decisions that will affect the lives of individuals, it is important that its reasoning can be easily understood by a human.
Recent trends in AI have pushed the boundaries of statistical machine learning models. However, as impressive as these are, the decisions they make cannot easily be explained. C-BDI™ agents make rational decisions based on symbolic AI and are therefore capable of explaining their reasoning. C-BDI™ can trace the execution of its reasoning and present it to a human operator so that they can understand why a decision was made.
To support ethical AI, C-BDI™ aims to adopt the three levels of explainable AI given in The Situation Awareness Framework for Explainable AI (SAFE-AI) and Human Factors Considerations for XAI Systems:
- Level 1: XAI for Perception – explanations of what an AI system did or is doing and the decisions made by the system.
- Level 2: XAI for Comprehension – explanations of why an AI system acted in a certain way or made a particular decision and what this means in terms of the system’s goals.
- Level 3: XAI for Projection – explanations of what an AI system will do next, what it would do in a similar scenario, or what would be required for an alternate outcome.
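One way to make these levels concrete is to attach a structured explanation record to each decision, with one field per level. The sketch below is purely illustrative; the field names and scenario are invented and do not reflect C-BDI™'s actual explanation format.

```python
# Illustrative sketch: one decision annotated at the three XAI levels.
# Field names and the scenario are hypothetical, not C-BDI's format.
from dataclasses import dataclass

@dataclass
class Explanation:
    perception: str        # Level 1: what the system did or is doing
    comprehension: str     # Level 2: why, in terms of the system's goals
    projection: str        # Level 3: what it will do next, or alternatives

decision = Explanation(
    perception="Rerouted the vehicle to waypoint W7.",
    comprehension="W7 restores the line-of-sight required by the surveillance goal.",
    projection="Will resume the original route once the link is re-established.",
)

# An operator display could render each level in turn:
for level, text in vars(decision).items():
    print(f"{level}: {text}")
```

Presenting all three levels together gives the operator the situational picture needed to accept, query or override the decision.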
Building Resilience
As autonomous systems become more complex and distributed, it is increasingly difficult to keep them resilient against disruption.
C-BDI™ uses ‘Resilient Teaming’, which allows the system to continue operating in a disrupted environment even when some of its team members are no longer able to communicate or perform their respective roles.
Redundancy is a core tenet of C-BDI™, allowing teams to continue operating through disruptions and network partitions.
C-BDI™ can re-delegate tasks from software agents to humans if part of the system fails or a task becomes too complex for an agent to handle.
C-BDI’s™ built-in protocols for synchronisation and conflict management allow a C-BDI™ agent to continue working even when communication with other agents is lost.
Key Features
We built C-BDI™ to help tame complexity in our products without compromising on ambition or ethics, and it can do the same for your software, simulation or robotics.
Whether monitoring flight patterns, detecting intrusions or replicating human actors, C-BDI™ enables smarter intelligent agents that boost productivity and let human operators focus on what matters most.
Open Agent Protocol
C-BDI’s™ SDK provides programmable hooks that enable explainable AI to be distributed, without compromise, across any network configuration, such as DDS, WebSockets, IPC or your own routing layer.
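Such hooks can be pictured as a transport-agnostic interface: the agent runtime talks to an abstract send/receive contract, and a concrete transport (DDS, WebSockets, IPC, or a custom router) is slotted in behind it. The sketch below is a simplified illustration under invented names, not the actual C-BDI™ SDK.

```python
# Sketch of a pluggable transport layer: agent code depends only on an
# abstract interface, so any routing layer can be substituted.
# All names are hypothetical, not the C-BDI SDK.
from abc import ABC, abstractmethod

class Transport(ABC):
    @abstractmethod
    def send(self, recipient, message): ...

    @abstractmethod
    def receive(self): ...

class InProcessTransport(Transport):
    """Simplest possible transport: an in-memory queue (a stand-in for IPC)."""
    def __init__(self):
        self.queue = []

    def send(self, recipient, message):
        self.queue.append((recipient, message))

    def receive(self):
        return self.queue.pop(0) if self.queue else None

# The same agent-side code works unchanged over any Transport implementation.
bus = InProcessTransport()
bus.send("agent-b", {"type": "belief_update", "key": "target_seen"})
recipient, msg = bus.receive()
```

Swapping `InProcessTransport` for a DDS- or WebSocket-backed implementation changes nothing on the agent side, which is what lets the reasoning layer stay distribution-agnostic.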
Agent Modelling Tools
Interactive modelling tools are provided to help design and visualise the relationships of multi-agent systems, allowing their reasoning, team hierarchies and actions to be created through a user-friendly interface. These tools help capture the knowledge of domain experts and allow autonomous systems to be built without low-level programming knowledge.
Explainable BDI Agents
C-BDI’s™ multi-agent approach adopts the Belief, Desire, Intention (BDI) model popularised by Michael Bratman, which presents cognition in terms that are familiar to humans. Providing transparency to cognition and to the steps leading to intentionality promotes traceability and inspectability, which are the foundations of explainable AI.
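A single pass of a BDI reasoning cycle can be sketched as: revise beliefs from percepts, filter desires down to achievable options, commit to one as an intention, and record the step so the decision can later be explained. The code below is a toy illustration under invented names, not C-BDI™'s internal implementation.

```python
# Toy BDI reasoning cycle with an execution trace for explainability.
# Illustrative only; not C-BDI's internals.
def bdi_cycle(beliefs, percepts, desires, plans, trace):
    beliefs.update(percepts)                       # belief revision
    # Desires whose plan preconditions hold become options.
    options = [d for d in desires if plans[d]["precondition"](beliefs)]
    if not options:
        return None
    intention = options[0]                         # commit to one desire
    action = plans[intention]["action"]
    trace.append(f"believed {percepts}, chose '{intention}', did '{action}'")
    return action

beliefs, trace = {}, []
desires = ["patrol", "investigate"]
plans = {
    "patrol":      {"precondition": lambda b: not b.get("contact"), "action": "follow_route"},
    "investigate": {"precondition": lambda b: b.get("contact"),     "action": "approach_contact"},
}
action = bdi_cycle(beliefs, {"contact": True}, desires, plans, trace)
# action == "approach_contact"; the trace records why it was chosen.
```

Because every commitment is logged against the beliefs and desires that produced it, the trace can be replayed to answer "why did the agent do that?" — the property the BDI model is valued for.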
Agent Teaming
Agents are grouped in teams that adapt to a given situation by taking account of the available resources. Agents within teams collaborate according to their roles, which in turn define the way they interact with humans to achieve a common goal. Dynamic teams provide resilience in adversarial environments where agents may be partitioned or lack the bandwidth to communicate. C-BDI™ agents can understand the future intentions of other agents and dynamically disband and/or regroup to complete the goal with the available resources.
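The regrouping behaviour can be pictured as re-covering a team's required roles using whichever members remain reachable. The sketch below is a simplified illustration; the role names, agent names and assignment strategy are all invented, not C-BDI™'s teaming algorithm.

```python
# Sketch of resilient role reassignment: when members drop out, the
# remaining agents re-cover the team's required roles.
# Names and strategy are hypothetical, not the C-BDI algorithm.
def reassign_roles(required_roles, members):
    """Assign each required role to a reachable member able to perform it."""
    assignment = {}
    for role in required_roles:
        capable = [m for m, caps in members.items() if role in caps]
        if capable:
            # Prefer members without a role yet; otherwise double up.
            free = [m for m in capable if m not in assignment.values()]
            assignment[role] = (free or capable)[0]
    return assignment

members = {
    "uav-1":    {"scout", "relay"},
    "uav-2":    {"scout"},
    "operator": {"relay", "approve"},
}
plan = reassign_roles(["scout", "relay", "approve"], members)

# If uav-1 is lost, the team regroups with the remaining members:
del members["uav-1"]
plan_after = reassign_roles(["scout", "relay", "approve"], members)
# "scout" falls to uav-2; the operator keeps "relay" and "approve".
```

Roles that no remaining member can cover simply go unassigned, which is the point at which a real system would degrade the goal or escalate to a human.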
Proactive Reasoning and Team Coordination
C-BDI™ advances previous generations of BDI agent platforms by incorporating planning and constraint solvers. C-BDI™ agents can reason about the future consequences of their actions, allowing for proactive intelligence.