Human users need to be able to comprehend and check over the work of intelligent agents. AI explainability refers to the processes and methods in place that enable this understanding.
Our intelligent agents have been built with this in mind, enabling testing and scenario replication within our software.
Many of us don’t realise just how much Artificial Intelligence (AI) plays a part in our daily lives. Whether we’re doing a simple Google search, posting on social media or accepting suggestions from our favourite music provider, some form of AI is producing the results. We generally don’t question the output, as the consequences of it being wrong are usually minimal.
However, AI also plays a big role in areas such as medicine, finance and air traffic control, where incorrect answers could be catastrophic. To avoid this, the AI must go through an extensive phase of ‘machine learning’, in which it is exposed to potentially millions of examples and evaluations so that it can provide reliable answers in the future.
For AI to continue growing, a high level of trust must develop between it and the user. This means the user must have confidence in the AI’s training, reasoning and ability to adapt to a changing environment. For this to happen the system must be able to display, in human terms, how it reached its results. This is the essence of ‘explainability’.
Consider a scenario where a plane flying on auto-pilot makes unscheduled changes to its flight plan. This may be in response to detecting, first, that the wings are icing up and, second, that the de-icing system has failed to start. After evaluating the situation, the on-board AI adopts one of the many solutions (plans) available to it, choosing to fly at a lower (warmer) altitude to melt the ice. This variation in flight path would normally cause concern for the local air traffic control centre, as there would be no way for them to discern the reason behind the change. However, with a system operating with ‘explainable AI’ (a white-box system), the control tower can interrogate the reasons behind the change and determine that the new plan does not present a threat.
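The scenario above can be sketched in code: an agent works through its available plans in priority order and records a human-readable rationale for the one it adopts, so an outside party can query why the course changed. This is a minimal illustration only; the plan names and structure are hypothetical, not AOS Group's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str         # identifier for the plan
    rationale: str    # human-readable reason for choosing it
    applicable: bool  # whether its preconditions currently hold

def select_plan(plans):
    """Pick the first applicable plan and record why it was chosen."""
    for plan in plans:
        if plan.applicable:
            return plan, f"Selected '{plan.name}' because {plan.rationale}"
    return None, "No applicable plan found"

# The autopilot's options, in priority order (illustrative only):
# de-icing has failed, so only the descent plan is applicable.
plans = [
    Plan("activate_deicing", "the de-icing system can clear the wings",
         applicable=False),
    Plan("descend_to_warmer_air", "warmer air at lower altitude will melt the ice",
         applicable=True),
]

chosen, why = select_plan(plans)
print(why)
```

The point is that the explanation is produced alongside the decision, so a controller interrogating the system sees the rationale rather than an opaque course change.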
Joint Cognitive Thinking
While each of AOS Group’s intelligent agents is capable of operating independently, they can also interact and work together to achieve designated objectives.
AOS agents have qualities over and above those found in simple agent systems. Most importantly, our agents have intelligent reasoning capabilities and are proactive, anticipating future events. They are also reactive, responding in a timely fashion to environmental change.
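The reactive and proactive qualities described above can be sketched as a simple belief-and-goal loop: the agent reacts by updating its beliefs when the environment changes, and acts proactively by adopting new goals in anticipation of what those beliefs imply. All names here are illustrative assumptions, not AOS Group's actual API.

```python
class Agent:
    def __init__(self):
        self.beliefs = {"ice_detected": False}
        self.goals = ["maintain_safe_flight"]

    def perceive(self, event):
        # Reactive: update beliefs as soon as the environment changes
        self.beliefs.update(event)

    def deliberate(self):
        # Proactive: anticipate what the new beliefs require and
        # adopt a goal before the situation becomes critical
        if self.beliefs.get("ice_detected") and "clear_ice" not in self.goals:
            self.goals.append("clear_ice")
        return self.goals

agent = Agent()
agent.perceive({"ice_detected": True})
print(agent.deliberate())  # ['maintain_safe_flight', 'clear_ice']
```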
‘Joint Cognitive Thinking’ can be described as the process of humans and machines working together to achieve a common goal.
Both bring their own strengths and advantages, which when combined, produce a more complete solution.
While machines can process huge volumes of data, it takes a human to provide the context, which can change how results are applied in different situations.
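A toy example of this division of labour: the machine scores a transaction against the customer's usual spending, while human-supplied context (say, that the customer is travelling) changes how that score is acted on. The scoring rule and threshold are hypothetical, chosen only to illustrate the idea.

```python
def machine_score(amount, usual_max):
    # Crude anomaly score: how far the amount exceeds normal spending
    return amount / usual_max

def decide(score, human_context):
    # The same score leads to different actions depending on
    # context that only a human can supply
    if score > 1.5 and not human_context.get("travelling"):
        return "block"
    return "allow"

score = machine_score(900, usual_max=300)  # 3.0: well above normal
print(decide(score, {"travelling": True}))  # context softens the response
print(decide(score, {}))                    # without context, it is blocked
```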