The EU AI Act in Practice

What the regulation actually means and why it demands more than a compliance check.

The EU AI Act has been in force since August 2024. Most companies know it by name. Many have followed the reporting on risk categories and prohibited practices. But few have understood that the regulation concerns more than high-risk systems. It also covers the everyday use of AI in the workplace.


Media coverage has contributed to framing the AI Act as a topic for developers and providers of AI systems. In reality, it concerns every company that uses AI. Which means: practically every company in the EU.


The regulation is often reduced to risk categories. High-risk here, minimal risk there. But for most organizations, the real challenge does not lie in classification. It lies in documentation, traceability, and a question that has rarely been asked until now: who in the organization is actually responsible for how AI is used?


What most overlook

The obligations established by the AI Act take effect in stages. Since February 2025, two areas have applied to all companies, regardless of whether they deploy high-risk systems.


The first area concerns prohibited practices. Certain applications of AI are fundamentally forbidden: subliminal manipulation, exploitation of vulnerabilities, social scoring, certain forms of biometric identification. This sounds like a topic for specialized providers. But in practice, every company must verify that it has no such applications in use, even unintentionally.


The second area is more far-reaching and affects significantly more organizations: Article 4 of the AI Act. This article obliges providers and deployers of AI systems to take measures ensuring that their staff possess a sufficient level of AI competence (the regulation's own term is "AI literacy"). This is not just a matter for the IT department. It concerns every person in the company who uses AI systems during their workday. Anyone who opens ChatGPT in the morning falls under this provision.


The wording of the regulation is deliberately open. There are no prescribed training contents, no minimum hours, no certificate. Instead, Article 4 requires companies to take context-appropriate measures: depending on the person's role, their prior knowledge, the area of deployment, and the type of AI system. That sounds flexible. In practice, it means every company must determine for itself what sufficient AI competence means in its specific context.


Why Article 4 is underestimated

At first glance, Article 4 appears harmless. There are no immediate fines for violations. The wording is aspirational, not punitive. Many companies have therefore decided that Article 4 is not a priority.


That is a miscalculation. While there is no direct penalty for violating Article 4, the obligation is binding law. And it takes effect through a different mechanism: liability. If an AI-generated output causes harm and it turns out that the person who used the system was neither trained nor informed, this can be deemed a breach of the duty of care. The consequences then fall not on the individual, but on the company.


From August 2026, the enforcement provisions of the AI Act take effect. National supervisory authorities can then sanction violations. And even if enforcement is initially restrained in practice, companies that cannot demonstrate any measures toward AI competence will be in a weak position in any dispute.


There is a further effect. Article 4 does not require a one-time training session, but an ongoing assurance of competence. AI systems evolve. New models, new capabilities, new risks. What counts as sufficient knowledge today may be outdated in a year. Companies that treat Article 4 as a one-off measure will not meet the requirement.


The transparency question

Beyond the competence obligation, the AI Act establishes a second requirement that becomes relevant in everyday business: transparency.


When people interact with AI systems, they must be informed. This concerns chatbots on websites, but also less obvious cases. If a sales representative sends an email to a client that was written with AI: does the client need to know? If a proposal is based on an AI-generated analysis: does that need to be disclosed? If a report contains passages authored by a language model: is that transparent?


The regulation does not provide a simple answer to each of these questions. But it establishes a principle: if AI-generated content could be mistaken for human-made, it must be labeled as such. For many companies, this is new territory. And it requires internal rules that do not yet exist in most organizations.


Why a compliance check is not enough

The most common response to the AI Act is to commission a one-time assessment. An external consultant or the legal department evaluates the current state, produces a report, formulates recommendations. The report is then filed, and the topic is considered resolved.


The problem: the AI Act does not describe a state. It describes a process. The regulation does not require that a company be compliant on a particular day. It requires that a company remain continuously capable of meeting the requirements.


In concrete terms, this means: the organization must know which AI systems are in use. It must document what they are used for. It must ensure that users are competent. It must ensure that decisions involving AI can be traced. And it must do all of this not once, but on an ongoing basis.


A one-time check meets none of these requirements. It describes the current state. But it does not establish a structure that maintains the desired state permanently.


The organizational dimension

Many companies treat the AI Act as a legal topic. The legal department analyzes the regulation, produces a summary, issues recommendations. That is correct and necessary. But it is not sufficient.


The requirements of the AI Act cannot be implemented through legal analysis alone. They require organizational changes. And these changes affect multiple areas simultaneously.


IT must know which AI systems are deployed where. HR must coordinate and document training measures. Business units must understand which rules apply to their respective area of use. Communications must clarify how transparency requirements are implemented in external messaging. And leadership must decide who is responsible for the overall topic.


In practice, this last point is the greatest obstacle. The AI Act does not create a new responsibility. It assumes that one exists. But in most companies, there is nobody whose explicit task it is to coordinate how AI is used. IT does not feel responsible for training. HR does not feel responsible for technology questions. Legal can describe the requirements but not implement them.


The result is a topic that affects everyone but belongs to no one.


What organizations can do now

The good news: the requirements of the AI Act are not impossible to meet. They do not require specialized technology or multi-year projects. But they do require a deliberate decision to approach the topic in a structured way.


A first step is an overview of which AI systems are used in the organization. Not only the officially procured ones, but also those that employees use on their own. Without this overview, none of the further requirements can be implemented.
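
What such an inventory captures can remain simple. The following sketch shows one possible record structure in Python; every field name is an illustrative assumption, not a requirement taken from the regulation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a company-wide AI system inventory (illustrative fields)."""
    name: str                  # e.g. "ChatGPT", "internal support chatbot"
    vendor: str                # who provides the system
    business_unit: str         # where in the organization it is used
    purpose: str               # what it is actually used for
    officially_procured: bool  # False captures tools employees adopted on their own
    last_reviewed: date        # inventories go stale; a review date keeps them honest
```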


A second step is defining competence requirements, differentiated by role. What a senior leader needs to know about AI differs fundamentally from what someone in an operational role needs. One-size-fits-all training is not only inefficient, it also misses the regulation's requirement for context-appropriate competence.
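
In shorthand, such a differentiation might look like the following mapping. The roles and topics are placeholders chosen to illustrate the idea, not a recommended curriculum.

```python
# Illustrative only: each organization must define its own roles and contents.
COMPETENCE_REQUIREMENTS = {
    "leadership":    ["risk categories and liability", "ownership of AI governance"],
    "sales":         ["disclosing AI-assisted client communication",
                      "confidential data in prompts"],
    "operations":    ["limitations of generative models", "when to escalate AI output"],
    "all employees": ["what counts as an AI system", "basic transparency rules"],
}
```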


A third step is establishing documentation. Not as bureaucratic overhead, but as evidence that the company is meeting its responsibility. Which trainings were conducted? Who participated? Which AI systems are deployed where? How are decisions involving AI traced?
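
Continuing the sketch from above, that documentation could be as lightweight as two record types: one for competence measures, one for decisions in which AI played a part. Again, each field is an assumption about what is useful to retain, not a prescription from the Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    """Evidence that a competence measure took place (illustrative)."""
    topic: str               # e.g. "handling confidential data in prompts"
    audience_role: str       # the role the content was tailored to
    held_on: date
    participants: list[str]

@dataclass
class AIDecisionRecord:
    """Trace of a decision in which an AI system was involved (illustrative)."""
    system_name: str         # which inventory entry was used
    decided_on: date
    human_reviewer: str      # who checked the output before it took effect
    summary: str             # what was decided and how AI contributed
```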


And a fourth step is clarifying ownership. Someone in the organization must own this topic. Not as a side project, not as an additional task, but as a clearly defined responsibility.


Not a legal problem

The AI Act is not merely a legal regulation. It is an organizational mandate. It requires companies not only to know the rules, but to build structures that govern how AI is used on an ongoing basis.


Treating the regulation as a legal problem will not satisfy the requirements. Treating it as an organizational task offers the opportunity not only to be compliant, but to genuinely improve how AI is used across the organization.


The AI Act forces companies to ask a question they should have been asking anyway: do we know how AI is used in our organization? And are we capable of shaping that use responsibly?


There are platforms that provide exactly the infrastructure the AI Act implicitly presupposes: visibility, documentation, and governance of AI usage in one place.