The EU's AI Act gives citizens the right to challenge algorithms
Martin Seckar
The world’s first comprehensive law on artificial intelligence is now in force, giving individuals the power to demand explanations for opaque automated decisions in areas like finance and public services.
A man applies for a bank loan online. He submits his forms and documents. Days later, an email arrives, rejecting his application. No reason is given. No one is named to contact. He is left to wonder what unseen data point in his digital profile marked him as unworthy.
This is the kind of situation the European Union's AI Act aims to prevent. The law, which entered into force in 2024 and applies in stages, is the world's first comprehensive regulation for artificial intelligence. Its core purpose is to protect the fundamental rights of citizens from the powerful and often opaque logic of machines.
A right to an explanation
For citizens, the law translates abstract principles into tangible protections. It establishes a right to know when you are interacting with an AI system, such as a chatbot, instead of a person. Content generated by AI, from articles to videos, must be clearly labelled. This serves as a defence against deepfakes and disinformation.
Most critically, for “high-risk” decisions like the loan application, a person has the right to an explanation. According to the Act, a user can obtain “clear and meaningful information about the role of the AI system in the decision-making procedure”. A bank can no longer hide behind its algorithm. It must explain the general logic behind an AI-assisted choice, empowering individuals to demand accountability.
Banned ‘unacceptable risk’ systems
The Act goes further by banning certain uses of AI deemed to pose an “unacceptable risk”. These include government-led “social scoring” systems that rate citizens based on their behaviour. The use of real-time facial recognition and other remote biometric identification in public spaces by authorities is also prohibited, with narrow, judicially approved exceptions for serious crimes like terrorism or searching for a missing child.
Companies are also forbidden from using AI to manipulate people’s behaviour in harmful ways or to exploit the vulnerabilities of specific groups, such as children.
Yet the law is not without its critics. Industry groups and some policymakers argue its strict rules could stifle innovation, creating a “chilling effect” that harms smaller companies and startups. They fear it may widen the competitive gap with the United States and China, where approaches to AI regulation are different.
How to seek redress
Should these rules be broken, the law provides a direct path for individuals to act: any person can file a complaint with their national supervisory authority, and those harmed by an AI system are guaranteed access to the courts.
To prevent harm, public bodies and private companies in essential sectors like banking and insurance must conduct a Fundamental Rights Impact Assessment before deploying a high-risk AI system. This requires them to evaluate and mitigate its potential dangers.
The AI Act is a declaration that technology must serve people, not the other way around. It grants Europeans a new set of rights for a new era, ensuring that as machines grow more powerful, an individual’s right to dignity, fairness, and an explanation remains paramount.