Emilia Gómez, a scientist at the JRC, explains the concept of trustworthy AI.
Brief me
Advances in AI are significantly impacting European society, offering efficiency gains and innovation across a wealth of sectors. With accelerating adoption, however, come new challenges and risks that need to be addressed, and the JRC is working to support policymakers in making the right decisions on how to realise AI's benefits while protecting people from harm.
As part of this work, our teams advised throughout the process that led to the AI Act, on topics ranging from definitions and cybersecurity considerations to liability questions and standardisation. We are currently looking into AI benchmarking methodologies, as well as the categorisation of general-purpose AI models.
In addition to contributing scientific expertise to regulatory initiatives, our work also aims to foster trustworthy AI more broadly in industry and research communities.
Explore further
AI benchmarking: Nine challenges and a way forward
A recent JRC paper explores AI benchmarks, which are considered an essential tool for evaluating the performance, capabilities, and risks of AI models. Through a comprehensive literature…
New JRC collection of external scientific reports to inform the implementation of the EU AI Act on general-purpose AI models
The reports present methodologies and technical insights that will help identify, classify, and assess the compute, capabilities, and risks of GPAI models operating within the European regulatory framework.
