
Projects and activities | 15 May 2023

Preventing algorithmic discrimination and promoting the responsible use of artificial intelligence

Context

In recent years, Artificial Intelligence (AI) has become more widely used to aid human decision making in areas of great societal importance, such as credit lending and recruitment. While this can make decisions more efficient and accurate by avoiding human cognitive biases and limitations, it also carries the risk of automating and perpetuating discrimination against socially marginalised groups.

The General Data Protection Regulation (GDPR) gives individuals the right not to be subject to a decision based solely on automated processing (Article 22). Along the same lines, the European Commission’s proposal for a regulation on AI (the AI Act) requires human oversight to prevent and minimise risks to fundamental rights. The human overseer must be able to fully understand and interpret the AI system’s output and must have the option not to use it (Article 14).

Goals

The purpose of the study is to understand how AI decision support systems affect human decision making. To do so, we must consider that AI is not intended to “replace” human judgment. Users of AI can choose to follow AI recommendations or not. We therefore focus on how human decision makers interact with and rely on AI depending on whether the AI is fair or biased.

We ask whether unbiased AI advice makes human decisions less discriminatory than unaided human decisions, and conversely whether biased AI advice makes human decisions more discriminatory. We also consider whether discriminatory use of AI is conscious and intended, or rather the result of misplaced over-confidence or under-confidence in AI.

Methods

We run an online experiment that mimics the employer-employee and the lender-borrower relationships.

Human Resources (HR) and banking professionals are asked to choose whom to hire or lend to from a pool of job and loan applicants. Their decisions are supported by an AI decision support system, which we programmed to be either fair or discriminatory. We then measure the rate at which the professionals follow the AI recommendations. We follow up this quantitative experimental study with a qualitative study in which decision makers are invited to give feedback on their experience in the experiment: they are asked to “think aloud” while replaying the experiment and to co-design improvements to the AI recommendation system using prototypes.
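As a rough illustration of the kind of measurement involved, the sketch below computes the rate at which human choices match the AI recommendation in each experimental condition, together with a simple selection-rate breakdown by applicant group. The trial fields, condition labels and the selection-rate metric are assumptions made for illustration, not the study's actual instruments or design.

```python
# Minimal sketch, assuming trials are recorded with the (hypothetical) fields below.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Trial:
    condition: str        # "fair_ai" or "biased_ai" (assumed labels)
    ai_recommended: str   # applicant ID recommended by the AI
    human_choice: str     # applicant ID actually chosen by the professional
    chosen_group: str     # demographic group of the chosen applicant

def adherence_rate(trials):
    """Share of trials in which the professional followed the AI recommendation."""
    if not trials:
        return 0.0
    followed = sum(t.human_choice == t.ai_recommended for t in trials)
    return followed / len(trials)

def selection_rates_by_group(trials):
    """Fraction of selections going to each group (one possible fairness check)."""
    counts = defaultdict(int)
    for t in trials:
        counts[t.chosen_group] += 1
    total = len(trials)
    return {group: n / total for group, n in counts.items()}

if __name__ == "__main__":
    # Toy data for demonstration only.
    trials = [
        Trial("fair_ai", "A1", "A1", "group_x"),
        Trial("fair_ai", "B2", "C3", "group_y"),
        Trial("biased_ai", "D4", "D4", "group_x"),
        Trial("biased_ai", "E5", "E5", "group_x"),
    ]
    for condition in ("fair_ai", "biased_ai"):
        subset = [t for t in trials if t.condition == condition]
        print(condition,
              "adherence:", adherence_rate(subset),
              "selection rates:", selection_rates_by_group(subset))
```

Comparing adherence rates and selection rates across the fair and biased AI conditions, and against an unaided baseline, is one straightforward way to quantify whether AI advice shifts decisions toward or away from discrimination.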

Expected outcomes

The research will allow us to progress beyond the state of the art by generating evidence of whether, how and why AI ends up being used to discriminate. We will study to what extent this involves otherwise well-meaning, unbiased decision makers who do not understand AI, or biased decision makers who use AI to support their prejudices.

Knowing what drives inappropriate and unfair use of AI will give us insight into whether hard interventions to oversee AI are needed (to restrict ill-intentioned use), or whether soft interventions are more appropriate (to support well-intentioned use). We will thus generate recommendations to enhance gains in effectiveness and reduce possible harms from the use of AI.