Knowledge4Policy

Competence Centre on Behavioural Insights

We support policymaking with evidence on human behaviour

Topic / Tool | 09 Jul 2024

Behavioural insights for artificial intelligence (AI)

Why behavioural insights matter

Artificial intelligence (AI) is revolutionising numerous aspects of our lives by providing data-driven insights and improving efficiency in fields such as education, healthcare, law enforcement, and recruitment. Nevertheless, the widespread adoption of AI systems can perpetuate existing biases or create new ones, owing to factors such as biased training data and flawed algorithms. To mitigate these risks, behavioural insights can help identify how users perceive and trust AI recommendations, what they consider fair and just use of AI, and other essential aspects of human-AI interaction.

By understanding how individuals interact with AI, policymakers and technologists can design systems that support human agency, promote fairness, and avoid unintended consequences. These behavioural insights can inform the creation of AI that is transparent, accountable, and aligned with ethical standards, ensuring that the technology serves to enhance, rather than undermine, human dignity and societal values.

In this context, we have started exploring several dimensions:

  • Impact of AI on education: We study the impact of Large Language Models (LLMs), such as ChatGPT, on students and educators, addressing the lack of scientific evidence on their effects in educational settings despite their rapid adoption.
  • Discrimination in hiring and lending: We study how human oversight affects AI-based Decision Support Systems in tasks prone to discrimination, such as hiring or lending money, exploring the interplay between individual and AI biases, resistance to AI recommendations, and ethical considerations in AI-human decision-making.

 

Ongoing projects

 

Selected publications


This study, carried out for the European Commission (DG JUST), examines the link between national civil liability rules and consumers’ attitudes towards AI-enabled products and services (AI applications). Based on behavioural analysis, it addresses the following two dimensions:

  • As regards the societal acceptance of AI applications, the study assesses the current level of acceptance, the factors shaping it, and how awareness of the potential challenges in obtaining compensation for damage caused by these applications affects that acceptance.
  • With respect to consumers’ trust and willingness to take up AI applications, the study generates insights into the potential impact that regulatory alternatives adapting the liability regime might have on consumers’ trust and uptake, and into the causal mechanisms underlying this impact.

The behavioural experiment was built around three types of AI applications and reflected two scenarios of damage caused by such applications: damage caused to the owner of the AI application and damage caused to a third party. Within each scenario, participants were presented, in the form of fictional interviews with a lawyer, with three of the following alternative liability regimes: fault-based liability with the burden of proof on the victim, a shift of the burden of proof regarding fault, strict liability of the owner (i.e. the consumer), and strict liability of another party. For fault-based regimes placing the burden of proof on the injured party, a reduced likelihood of obtaining compensation for damage caused by AI was assumed. In line with the study’s focus on Member States’ national liability rules, none of the posited alternative liability regimes corresponds to the existing Product Liability Directive.
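
To make the structure of this design concrete, the minimal Python sketch below enumerates the candidate cells of such a factorial setup (AI application × damage scenario × liability regime). The labels are hypothetical placeholders: the study description names neither the three AI applications nor which three of the four regimes were shown within each scenario, so this illustrates the combinatorics only, not the study's actual materials.

```python
from itertools import product

# Hypothetical placeholders: the study does not name the three AI
# applications, and it presented only three of the four regimes within
# each damage scenario, so these labels are illustrative only.
AI_APPLICATIONS = ["application_A", "application_B", "application_C"]
DAMAGE_SCENARIOS = ["damage_to_owner", "damage_to_third_party"]
LIABILITY_REGIMES = [
    "fault_based_liability_burden_on_victim",
    "shifted_burden_of_proof_on_fault",
    "strict_liability_of_owner_consumer",
    "strict_liability_of_another_party",
]

def candidate_cells():
    """Enumerate every application x damage-scenario x liability-regime cell."""
    return [
        {"application": app, "scenario": scenario, "regime": regime}
        for app, scenario, regime in product(
            AI_APPLICATIONS, DAMAGE_SCENARIOS, LIABILITY_REGIMES
        )
    ]

if __name__ == "__main__":
    cells = candidate_cells()
    # 3 applications x 2 scenarios x 4 regimes = 24 candidate cells;
    # the actual experiment used only three regimes per scenario.
    print(f"{len(cells)} candidate conditions")
    for cell in cells[:3]:
        print(cell)
```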

 
