The Rules of the Machine
Martin Seckar
This is Bratislava, August 2025. Across the European Union, a new reality is taking hold as the landmark AI Act moves from paper to practice. This year, two critical deadlines have taken effect: a hard ban on technologies deemed an unacceptable risk to society, followed by a new mandate for transparency from the creators of the world’s most powerful AI models. For millions of citizens, this marks the beginning of enforceable rights in the age of the algorithm.
A Line Drawn in the Code
On the banks of the Danube, where history is layered like stone, a woman crosses Hviezdoslav Square, her face upturned to the August sun. A security camera mounted on a Baroque facade watches her pass. Until recently, the unseen power behind that lens was a matter of speculation. Now, it is a matter of law.
The calendar turned to February this year. Across the European Union, a new line was drawn, enforced by the AI Act. It began with a series of prohibitions. Social scoring, the ranking of people by their behavior or character, is banned for governments and companies alike. The use of real-time facial recognition by police in public spaces is now illegal, save for the rarest, judicially approved exceptions.
The law, born of years of debate in Brussels, reached into the daily lives of millions. It banned AI that manipulates human behavior to cause harm. It forbade systems that categorize people by sensitive data—their race, their politics, their faith. In workplaces and schools, emotion-recognition software was outlawed. These were deemed unacceptable risks to a free society. For the companies building this technology, the instruction was simple: stop. The penalties for non-compliance are severe, reaching up to €35 million or 7% of a firm’s global annual turnover, whichever is higher. The message was clear. Some technological paths are not to be taken.
A Mandate for Transparency
Now it is August. The focus has shifted from what is forbidden to what must be revealed. This month’s deadline targets the engines of the new economy: General-Purpose AI, the large models that write text, generate images, and power countless applications. How are these powerful tools built? What information have they consumed? Before, the answers were guarded secrets. Now, transparency is a mandate.
Providers of these models must produce detailed technical documentation. They must inform developers who use their technology about its limits and capabilities. And crucially, they must publish summaries of the copyrighted data used to train their systems. It is an attempt to trace the ghost in the machine back to its source.
For a citizen, the change is not always visible. It is felt in the assurance that the camera on the square is not judging them, that the news app is not built on a foundation of stolen work, that the tools of the future must answer to the values of the present.
This is the year Europe began to regulate artificial intelligence in earnest. It is a profound assertion of a principle: that technology must serve human dignity, not the other way around. The abstract code of programmers has become the concrete law of the land.