Autonomy

AOS Group develops software systems built for Autonomy, enabling intelligent agents to observe, track and make decisions based on the world around them.

Defining Autonomy

Autonomous systems differ from automatic ones in that they make rational decisions based on knowledge of the current situation.

An autonomous system evaluates its current circumstances to decide on the possible courses of action to take.


Autonomous systems can make independent decisions to pursue a goal. This may include gathering information from the environment, exploring potential outcomes and then making balanced decisions to achieve that goal.

How Does It See The World?

Like an automatic system, an autonomous system accepts sensor data and user commands, but it operates on more abstract concepts rather than reacting reflexively to its inputs.

As with humans, an autonomous decision-making system must balance proactive (goal-directed) and reactive (responsive) behaviour when deciding what to do.


While humans may take their ability to instantly view and interpret a situation for granted, it is a much more complex process for an AI computing system. For Autonomy to be genuinely reliable and capable of responding to unpredictable situations, the system must gather and process data from many sources. These range from digital feeds such as weather and traffic reports to direct sensing of the world through cameras, radar, LiDAR and infrared sensors. By processing these different ‘views’ of the world, the system can quickly develop an accurate understanding of its environment and use it as part of the autonomous decision-making process.
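
As a rough illustration of this idea only, the short Python sketch below (hypothetical names and structures, not AOS code) shows how readings from digital feeds and physical sensors might be fused into a single world model that the decision-making layer can query.

    from dataclasses import dataclass, field

    @dataclass
    class WorldModel:
        """A hypothetical fused picture of the environment, built from many sources."""
        weather: str = "unknown"
        traffic: str = "unknown"
        obstacles: list = field(default_factory=list)

        def update_from_feed(self, feed_name, value):
            # Digital feeds (weather or traffic reports) update abstract state.
            if feed_name == "weather":
                self.weather = value
            elif feed_name == "traffic":
                self.traffic = value

        def update_from_sensors(self, detections):
            # Camera, radar, LiDAR or infrared detections add concrete obstacles.
            self.obstacles.extend(detections)

        def is_safe_to_proceed(self):
            # The decision layer reasons over the fused view, not over raw inputs.
            return self.weather != "storm" and not self.obstacles

    world = WorldModel()
    world.update_from_feed("weather", "clear")
    world.update_from_sensors([])        # no obstacles detected this cycle
    print(world.is_safe_to_proceed())    # True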

The Future of Autonomy: Human-Machine Teams

Autonomous technology is evolving rapidly, from smart homes to autonomous vehicles. Projects, be they consumer, commercial or military, are seeing increased levels of Autonomy integration.

Our Beliefs, Desires and Intentions (BDI) modelling framework, C-BDI, underpins all of AOS Group’s products, allowing for smarter, more engaging intelligent agents in autonomous software applications.
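
C-BDI’s actual interfaces are not reproduced here; the minimal Python sketch below, with hypothetical names, only illustrates the general beliefs, desires and intentions pattern that such frameworks are built around.

    class BDIAgent:
        """Illustrative beliefs-desires-intentions loop (not C-BDI's API)."""

        def __init__(self):
            self.beliefs = {}       # what the agent currently knows about the world
            self.desires = []       # goals the agent would like to achieve
            self.intentions = []    # goals the agent has committed to pursuing

        def perceive(self, observations):
            self.beliefs.update(observations)

        def deliberate(self):
            # Commit only to desires that look achievable given current beliefs.
            self.intentions = [goal for goal in self.desires
                               if self.beliefs.get(goal + "_possible", False)]

        def act(self):
            for goal in self.intentions:
                print(f"executing plan for: {goal}")

    agent = BDIAgent()
    agent.desires.append("reach_waypoint")
    agent.perceive({"reach_waypoint_possible": True})
    agent.deliberate()
    agent.act()    # executing plan for: reach_waypoint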


SAE International has set out six levels of Autonomy with respect to vehicle control.

  • Level 0 – No driver assistance – driver in control
  • Level 1 – Driver assistance – low-level assistance – driver in control
  • Level 2 – Partial automation – advanced driver assistance – driver monitors systems and can resume control
  • Level 3 – Conditional automation – AI controlling systems – driver remains alert and available to take control if requested by the system
  • Level 4 – High automation – used for driverless transport – no driver control but may have restricted routes and speeds
  • Level 5 – Full automation – the vehicle can drive anywhere without intervention

The step up from Level 2 to Level 3 is significant, as Level 3 is the first stage that does not require the driver to monitor the AI technology, although the driver must still be available to resume control if needed. Level 3 autonomous systems are not yet legal in the United States; however, Mercedes-Benz will seek limited approval for use in California by the end of 2022.

AOS is currently developing the Kelpie® vehicle for off-road use in defence, agriculture and mining applications. It can successfully navigate to waypoints while detecting and avoiding obstacles. Kelpie® achieves this using onboard processing of live data from cameras, LiDAR and positioning sensors. AOS anticipates that Kelpie® will play a vital role in surveillance, agriculture and military logistics support.

Autonomous Cognitive Systems

In simple terms, autonomous cognitive systems can ‘think’ for themselves by processing data from their sensors and then preparing and executing a course of action independent of external control.


There are many levels of cognition, but a cognitive system would generally exhibit some of the following characteristics, illustrated in the sketch after this list:

  • Establish goals and formulate plans to achieve them
  • Learn from experience by assessing the outcomes of actions
  • Communicate and interact with other agents
  • Adjust actions and goals according to changing circumstances
  • Monitor performance and improve actions as required
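
The toy Python loop below is a deliberately simplified, hypothetical illustration of how several of these characteristics (setting a goal, formulating a plan, acting, recording the outcome and adjusting) can fit together; it is not AOS code.

    import random

    def cognitive_cycle(goal, max_attempts=5):
        """Hypothetical goal/plan/act/learn loop for illustration only."""
        experience = []                              # record outcomes to learn from
        for attempt in range(max_attempts):
            plan = f"plan-{attempt} for {goal}"      # formulate a plan for the goal
            succeeded = random.random() > 0.5        # stand-in for executing the plan
            experience.append((plan, succeeded))     # monitor and assess performance
            if succeeded:
                return f"{goal} achieved using {plan}"
            # Otherwise adjust: the next attempt formulates a different plan.
        return f"{goal} abandoned after {max_attempts} attempts"

    print(cognitive_cycle("survey the paddock"))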

How Do We Ensure Autonomy Is Safe For Everyone?

Rigorous testing and safety mechanisms should be a core focus when designing autonomous software, and AOS Group prioritises them on every one of our projects.


The introduction of Autonomy to the motor vehicle industry has been a major shift in the way we relate to vehicles, and it may take a generation before it is fully adopted. Countless kilometres of testing have been conducted, and will continue to be undertaken, to ensure the technology is safe for everyone.

Levels 1 and 2 of motoring Autonomy represent a great opportunity to build confidence in the new technology. The addition of ‘advanced driver assistance’ features such as vehicle detection, pre-emptive braking and lane monitoring enhances a driver’s capabilities while still leaving the driver in control. The gradual introduction of these life-saving AI systems helps to build trust between drivers and emerging vehicle autonomy.

While Levels 1 and 2 are readily adaptable for use on existing roads, moving to Levels 3, 4 and 5 – where AI is effectively in control of the vehicle – is still some way off. This is largely due to the unpredictable nature of roads, pedestrians and weather conditions, which present significant challenges for AI.

Fully autonomous vehicles really come into their own when they are used in restricted areas or on purpose-built roadways that have been designed to minimise these variables.

Robotics

Integrating autonomous software into robotic hardware is one of our specialities at AOS Group, where we develop applications that a range of different devices can use.


Robotics is the design, construction and use of machines (robots) to perform tasks usually done by humans. Typically, robots are designed for highly repetitive tasks or those that are dangerous for humans to carry out.

Robots have evolved from being programmed to perform ‘simple’ tasks on an assembly line to fully autonomous machines capable of operating without human supervision. Their decision-making processes pursue planned objectives while taking account of sensor inputs such as weather, terrain, obstacles and human safety.

Obstacle Avoidance

Kelpie®, our autonomous vehicle platform, is built with obstacle avoidance technology and can detect drivable terrain and avoid hazards.

This is key for operations around large worksites. Kelpie® is able to navigate tricky terrain and ensure that human workers can travel around the site safely.


Obstacle detection and avoidance are two of the most critical requirements for an autonomous vehicle. A variety of sensors feed information about the surrounding environment into the onboard computing system, enabling it to make informed choices about the best way to achieve its objectives.

AOS’s autonomous vehicle, the Kelpie®, fuses LiDAR, stereo camera, GNSS and IMU data to plot its position accurately in real time. Based on this sensor feedback, Kelpie® can ‘visualise’ the terrain, recognise obstacles and modify its route to achieve its goal.
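
As an illustration of the final decision step only, the simplified Python sketch below picks the drivable heading closest to the goal once obstacles have been detected; the names and logic are hypothetical, and a real planner fusing LiDAR, camera, GNSS and IMU data is far more involved.

    def choose_heading(candidate_headings, obstacles, goal_heading):
        """Pick the drivable heading (in degrees) closest to the goal heading."""
        blocked = {round(obstacle["bearing"]) for obstacle in obstacles}
        drivable = [h for h in candidate_headings if round(h) not in blocked]
        if not drivable:
            return None                              # no safe route this cycle: stop
        return min(drivable, key=lambda h: abs(h - goal_heading))

    # One planning cycle: an obstacle dead ahead nudges the route slightly off course.
    obstacles = [{"bearing": 0, "range_m": 4.0}]
    print(choose_heading(range(-30, 31, 5), obstacles, goal_heading=0))    # -5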

Machine Learning

Machine learning is a subset of artificial intelligence that enables systems to learn patterns in data by scanning many similar (‘like’) samples. The models developed from this process provide a basis for predicting future outcomes. One of machine learning’s most significant advantages is that it can quickly scan quantities of data that would be impractical for a human to process. Machine learning is often found in applications supporting medicine, email filtering, and image and speech recognition. The Google search engine is one of the most recognisable examples of machine learning in use today.
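
As a toy example of learning a pattern from past samples and then predicting an unseen case, the snippet below uses scikit-learn (our choice for illustration; no particular library is implied here) to fit a simple trend to made-up numbers.

    from sklearn.linear_model import LinearRegression

    # Made-up past samples: hours of sensor operation vs. data collected (GB).
    hours = [[1], [2], [3], [4], [5]]
    gigabytes = [2.1, 3.9, 6.2, 8.0, 9.8]

    model = LinearRegression().fit(hours, gigabytes)   # learn the pattern
    print(model.predict([[6]])[0])                     # predicts roughly 11.9 GB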

Supervised Learning

With supervised machine learning, models process high volumes of labelled data to develop their object recognition. For example, AOS developed an algorithm trained to recognise different types of invasive weeds on farmland. The process involved feeding the learning system thousands of weed images featuring variations in width, height, density, colour and so on. The algorithm has now ‘learnt’ what a particular weed looks like and can trigger an instruction for treatment when one is encountered. Supervised machine learning is the most common type in use today.
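
AOS’s weed-recognition pipeline is not reproduced here; the Python sketch below only illustrates the supervised pattern, with made-up measurements and labels standing in for the thousands of labelled weed images.

    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical labelled samples: [width_cm, height_cm, leaf_density] -> species.
    features = [[12, 30, 0.8], [14, 35, 0.7], [40, 90, 0.3], [45, 85, 0.4]]
    labels = ["capeweed", "capeweed", "thistle", "thistle"]

    classifier = DecisionTreeClassifier().fit(features, labels)

    # A new, unlabelled measurement triggers a treatment instruction.
    species = classifier.predict([[13, 32, 0.75]])[0]
    if species == "capeweed":
        print("treatment instruction: spray this zone")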

Unsupervised Learning

In unsupervised machine learning, a program looks for patterns in unlabelled data. As there are fewer constraints on the data being processed, the program can find patterns and trends that the enquirer was not specifically looking for. For example, an unsupervised machine learning program could process sales data and group customers by financial institution, in addition to identifying those who paid by debit or credit card.
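
A minimal sketch of that idea, assuming made-up sales figures and scikit-learn’s k-means clustering, might look like this: no labels are supplied, and the groupings emerge from the data itself.

    from sklearn.cluster import KMeans

    # Made-up sales records: [average purchase ($), purchases per month].
    customers = [[20, 1], [25, 2], [30, 1], [400, 8], [380, 10], [420, 9]]

    # No labels are provided; the algorithm finds the groupings on its own.
    groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
    print(groups)    # e.g. [0 0 0 1 1 1]: low-spend vs. high-spend customers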

Reinforcement Learning

Reinforcement machine learning trains programs, through trial and error, to complete tasks while adhering to a set of clearly defined rules. Learning develops as the program is given positive or negative cues while it works out how to complete the task.
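
The toy Python sketch below illustrates the trial-and-error idea with a simple reward signal; the scenario, rule and numbers are invented purely for illustration.

    import random

    # The program is rewarded (+1) for the correct action and penalised (-1) otherwise,
    # gradually preferring whichever action scores best.
    actions = ["turn_left", "turn_right", "go_straight"]
    scores = {action: 0.0 for action in actions}

    for _ in range(200):
        explore = random.random() < 0.2                    # occasionally try something new
        action = random.choice(actions) if explore else max(scores, key=scores.get)
        reward = 1 if action == "go_straight" else -1      # the rule the program must discover
        scores[action] += 0.1 * (reward - scores[action])  # positive/negative cue updates the estimate

    print(max(scores, key=scores.get))                     # typically "go_straight"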

Contact us

Get in contact with us to discuss how our autonomous software and intelligent agents can be used to help your organisation.