
Competence Centre on Foresight

We foster a strategic, future-oriented and anticipatory culture in the EU policymaking process.

Last updated: 07 Feb 2023

Automated decision-making impacting society

The pervasiveness of digital technologies is leading to an increase in automated decision-making.

(© Photo by Franki Chamaki on Unsplash)

Trend: Automated decision-making impacting society

A trend indicates a direction of change in values and needs which is driven by forces and manifests itself already in various ways within certain groups in society.

The digital transformation of government means the further modernisation of public administration, seamless cross-border mobility and enhanced digital interactions. Governments around the world are using a growing number of digital tools. The pervasiveness of digital technologies is leading to an increase in automated decision-making (ADM), a process in which decisions are made by automated means, without human involvement. Importantly, these systems often have in-built biases affecting fairness, accountability, transparency and ethics, and therefore have an impact on human rights.

The use of ADM raises a series of social and policy questions and challenges. For example, there is growing use of big data analysis based on data collected from users’ everyday lives, as well as of systems based on biometrics. How open, transparent, accountable and participatory will governments be in the future? To what extent can digital tools sustain or disrupt this?

This Trend is part of the Megatrend Increasing influence of new governing systems

 


 

Manifestations

Developments happening in certain groups in society that indicate examples of change related to the trend.

A datafied society

The last decade has been characterised by new business models of digital companies based on the collection of users’ data and big data analysis. Big data has been increasingly collected from users’ everyday lives via the ‘smart’ devices they use, providing insights exploited for behavioural influence, personalised advertising or social discrimination.

It is being applied, for example, to predict criminal recidivism, allocate welfare benefits, assess job applications, customise social media feeds, and calculate insurance premiums (based on a ‘healthy lifestyle’ or ‘good driving behaviour’) or loan risks, with results that have shown biases and unethical outcomes (e.g. discrimination, interference with privacy).

This is leading to a new social and economic order, one based on who possesses the data, with a potentially high impact on human autonomy, on the relations between different actors in society, and on asymmetries of power.

Signals of change: Brookings, Technology Innovation Management Review, Profile Books, Kendra Fluegeman

 

Increased use of biometrics

Biometric systems (e.g. facial recognition) for the identification or verification of a person’s identity are increasingly being used by both public and private entities. They are used for accessing public services, travel, shopping and other everyday activities (e.g. at airports in the US and China). In China they are also employed to monitor the attentiveness of pupils at school, to ‘tag’ people on social media and to unlock smartphones. In Russia, these systems are used to enter and pay for the subway (Facepay). Building on facial recognition, affect recognition technology, which analyses micro-expressions, has been developed for job interviews, the examination of criminal suspects (similar to a lie detector), and the setting of insurance prices.

Signals of change: EDPS, Euronews, The Guardian, Emerging Europe

 

The question of human rights

The use of automated decision-making increasingly raises ethical and social questions and challenges. Data-driven credit scoring systems have shown signs of social bias and racial or ethnic discrimination (Angwin, Larson, Mattu, & Kirchner, 2016). These systems have a growing capacity to (re)produce and amplify already existing discriminatory practices. For example, the Hague district court ordered the suspension of such an automated decision-making system, emphasising the lack of transparency of the risk models and factors applied. The court’s decision set a legal precedent by halting the use of digital technologies on human rights grounds.

Signals of change: Global Campus of Human Rights, Monash University

 

Trustworthy AI

While AI offers huge advantages to society, its use may create ethical and societal challenges. Trust is essential to facilitate the uptake of AI, as is taking into account its social and environmental impact. According to the EU approach, the use of AI should be lawful (respecting all applicable laws and regulations), ethical (respecting ethical principles and values) and robust (from a technical perspective, while taking into account its social environment). This includes a human-centric way of developing and using AI, the protection of EU values and fundamental rights such as non-discrimination, privacy and data protection, and the sustainable and efficient use of resources.

Signals of change: European Commission

 

More democratic data governance for the public interest

There is increasing interest in alternative approaches to the management, control and use of data (especially personal data and digital footprints). The emerging alternative models of data governance appear more democratic, because they allow more actors to control data and decide how it is used. They also focus on using data for the public interest (e.g. better public services, addressing social causes, empowering users).

Signals of change: Intereconomics, Big Data & Society

 


 

Interesting questions

What might this trend imply, what should we be aware of, what could we study in more depth? Some ideas:

  • How much will public services be user driven in the future?
  • What level of digital and data skills will be needed in the public sector?
  • What if digital technologies increased agility and efficiency of services, but disempowered citizens through constant tracking and surveillance, treating them as data providers (for their own purposes) rather than citizens?
  • What are the most important skills for the public sector in the future?
  • How do we prevent the misuse of automated systems and protect citizens’ rights (beyond privacy)?
  • What if decision-making fully relied on analytical processing of big data through a partnership between governments and private digital companies?
  • How can citizens intervene in the development and implementation of scoring systems and other forms of data analytics, and how can civic participation advance in an increasingly datafied society?
  • How can we ensure procedural fairness and avoid biases in ADM?
  • How can governments be more transparent about the use of collected citizens’ data?