We build our lives not around humans, but around machines. To cross a road, open a door, or dry our hands, we press buttons or trigger sensors so that a machine can recognize and read our movements. Our cities are built for driving rather than walking, and our machines are taught to communicate with each other. Technological sciences have thus far focused on ‘functionality’ and on finding ways to replace humans in the workforce. Yet a new movement that applies behavioral science to make machines human-centric hopes to change the path of technological innovation. Behavioral Artificial Intelligence aims to develop machines that aid rather than replace people, that improve communication, and, most importantly, that read and understand human behavior and respond accordingly.

This is no simple feat, particularly given our inherent irrationality: only 53% of our decisions reflect our intentions. The field of behavioral analysis seeks to address this ‘intention-action gap’ by analyzing empirical data about human cognition and behavior, rather than relying on assumptions of rationality. Combining behavioral analysis with technological sciences allows rich data to be collected and processed to identify predictable patterns in our seemingly irrational behavior. This has led to AI systems that interact directly with humans, or that need to understand human behavior to inform further decision-making.
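To make the intention-action gap concrete, here is a minimal, hypothetical sketch of how such a gap might be quantified from paired intention/behavior records. The data, field names, and helper functions are illustrative assumptions, not drawn from any particular study:

```python
# Toy sketch: quantifying an "intention-action gap" from paired records.
# All data and names below are hypothetical, for illustration only.

from collections import defaultdict

# Each record pairs what a person said they would do with what they did.
observations = [
    {"context": "exercise", "intended": True,  "acted": True},
    {"context": "exercise", "intended": True,  "acted": False},
    {"context": "saving",   "intended": True,  "acted": False},
    {"context": "saving",   "intended": False, "acted": False},
    {"context": "diet",     "intended": True,  "acted": True},
]

def intention_action_agreement(records):
    """Fraction of decisions where the action matched the stated intention."""
    matches = sum(r["intended"] == r["acted"] for r in records)
    return matches / len(records)

def agreement_by_context(records):
    """Per-context agreement rates, to surface predictable patterns."""
    buckets = defaultdict(list)
    for r in records:
        buckets[r["context"]].append(r["intended"] == r["acted"])
    return {ctx: sum(v) / len(v) for ctx, v in buckets.items()}

print(f"overall agreement: {intention_action_agreement(observations):.0%}")
print(agreement_by_context(observations))
```

Even this naive version hints at the larger point: once intentions and actions are logged side by side, the mismatches stop looking random and start clustering by context.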

Such a system could, for example, be implemented in autonomous vehicles to read pedestrian body language and make more accurate predictions about future behavior. An AI trained purely on a physics model may estimate the velocity of a runner crossing the street and extrapolate their trajectory accordingly. If it is also equipped with a behavioral intelligence model, however, it may be able to read the runner’s body language and determine whether they plan to slow down or stop entirely. This added layer of insight could improve coordination on the road and make it safer.
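A minimal sketch of how that blending might look in code, assuming a hypothetical intent classifier that emits a stopping probability (all names and numbers are illustrative, not a real vehicle stack):

```python
# Sketch: combining a pure physics (constant-velocity) prediction with a
# behavioral "intent" signal. The stopping probability would come from a
# pose/body-language classifier; here it is a hypothetical input.

from dataclasses import dataclass

@dataclass
class PedestrianState:
    position_m: float    # distance along crossing path, metres
    velocity_mps: float  # current speed, metres per second

def physics_prediction(state: PedestrianState, horizon_s: float) -> float:
    """Constant-velocity extrapolation: where physics alone says they'll be."""
    return state.position_m + state.velocity_mps * horizon_s

def behavioral_prediction(state: PedestrianState, horizon_s: float,
                          p_stopping: float) -> float:
    """Blend the constant-velocity path with a 'stops soon' hypothesis,
    weighted by the intent classifier's stopping probability."""
    keep_going = physics_prediction(state, horizon_s)
    stop_soon = state.position_m + state.velocity_mps * min(horizon_s, 0.5)
    return (1 - p_stopping) * keep_going + p_stopping * stop_soon

runner = PedestrianState(position_m=2.0, velocity_mps=3.5)

# Physics alone: the runner keeps sprinting into the road.
print(physics_prediction(runner, horizon_s=2.0))                      # 9.0 m

# With body-language cues suggesting they are pulling up short:
print(behavioral_prediction(runner, horizon_s=2.0, p_stopping=0.8))  # 4.8 m
```

The two predictions diverge sharply, which is exactly the extra information a planner needs: the same runner, at the same speed, warrants very different driving decisions depending on inferred intent.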

These systems do, however, come with risks. It is important that they make predictions based on behavior rather than on the characteristics of individuals, as the latter could result in bias. Furthermore, given the increased use of visual sensors, the models should not have facial recognition capacity, should not capture identity, and must comply with the General Data Protection Regulation (GDPR). Beyond the GDPR, protection in this area is virtually non-existent, and, as is often the case with AI technologies, regulation is lagging far behind. Worryingly, the field is still so new that there has been very little expert discussion of the dangers this type of technology could present.

Yet when we are faced with shortages such as the global scarcity of mental health workers, with half of the world’s countries having only four mental health workers per 100,000 people, AI can seem like our only hope, and the potential dangers are dismissed. One study found that an AI could accurately diagnose depression by tracking the facial expressions of individuals asked to watch videos of stereotypically ‘happy’ content. Another study decoded therapy sessions and compared the techniques used with the outcomes patients reported; the resulting model could not only provide immediate feedback to clinicians but could potentially be used to train an AI therapy chatbot. These chatbots could provide some of the essential elements of care to people who can’t afford therapy or who simply don’t have access to it in their part of the world. Although research on these chatbots is still in its initial stages, preliminary results have shown successful patient outcomes. Yet before chatbots are widely adopted, some of the next steps to be considered include: creating universal standards of reporting; creating a universal evaluation standard for chatbots; and increasing transparency.
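As a rough illustration of the general shape of such an expression-based screen, here is a toy sketch using synthetic data and a generic classifier. It is emphatically not the cited study’s actual method or a clinical tool; every feature, label, and number below is a made-up assumption:

```python
# Toy sketch: summarize per-frame facial-expression features recorded while
# a person watches "happy" videos, then feed the summary to a classifier.
# Synthetic data throughout; illustrative only, not a diagnostic system.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def summarize_session(frames: np.ndarray) -> np.ndarray:
    """Collapse per-frame features (e.g., smile intensity, brow movement)
    into one session-level vector: per-feature mean and variability."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

# Synthetic stand-in data: 40 sessions x 300 frames x 4 expression features.
sessions = rng.normal(size=(40, 300, 4))
labels = rng.integers(0, 2, size=40)  # 1 = screened positive (synthetic)

X = np.array([summarize_session(s) for s in sessions])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

new_session = rng.normal(size=(300, 4))
print(clf.predict_proba(summarize_session(new_session).reshape(1, -1)))
```

The design point is the session-level summary: a screen like this would reason over patterns across an entire viewing session, not over any single frame of someone’s face.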

Analyzing behavioral patterns gives us remarkable insight into the way the human mind works. AI systems trained on these patterns can help us detect behavioral anomalies and trace their causes and solutions, not to mention their potential for harmonizing interactions between humans and machines. However, freely providing our behavioral patterns could also allow them to be used as a powerful tool against us. Ensuring that morality and ethics are at the forefront of all computer-science-driven approaches is essential going forward. Yet we must not forget that it is also our responsibility to be aware of how valuable our individual behavioral patterns are and to be careful with whom we choose to share this information.

smartR AI is committed to developing ethical and responsible AI. Products created by smartR AI are HIPAA, GDPR and ITAR compliant. smartR AI also ensures safe AI deployment and the highest standards of data privacy and protection. Choose smartR AI for AI you can trust.

Written by Celene Sandiford, smartR AI