Five EU justice and home affairs agencies, in collaboration with researchers from CENTRIC, have created the world's first "AI accountability framework" to help internal security practitioners deploy artificial intelligence tools responsibly.
The Accountability Principles for Artificial Intelligence (AP4AI) project is being developed jointly by the Centre of Excellence in Terrorism, Resilience, Intelligence and Organised Crime Research (CENTRIC) and the Europol Innovation Lab, with the support of Eurojust, the EU Asylum Agency (EUAA), the EU Law Enforcement Training Agency (CEPOL) and the EU Agency for Fundamental Rights (FRA), within the framework of the EU Innovation Hub for Internal Security.
The aim of the AP4AI project is to create practical tools that directly support the accountable use of AI in the internal security field. The AI Accountability Principles were developed in consultation with experts from 28 countries, including law enforcement officials, lawyers and prosecutors, data protection and fundamental rights experts, and technical and industry specialists.
As part of the AP4AI project, more than 5,500 citizens from 30 countries were consulted to gauge public expectations of AI accountability. Although there are concerns about police use of AI, many citizens also see great potential in applying AI to internal security. More than 87% of respondents agree that AI should be used to protect children and vulnerable groups, as well as to identify criminals and criminal organizations. More than 90% of citizens expect the police to be held accountable for how they use AI and for its consequences.