How can we stop Deepfakes?
- Sara Farahmand-Nejad

- Sep 25
Updated: Sep 27
Artificial intelligence does not only bring progress; it can also be turned against us. This becomes particularly clear with deepfakes: deceptively realistic manipulated images, videos, or audio recordings. They can be deliberately used to expose individuals, spread false information, and even divide entire societies.
From a legal perspective, several areas are affected simultaneously: copyright and data protection law when protected content is used, personal rights in cases of infringements on identity and honor, as well as new EU regulations specifically targeting AI and platform providers.
It is important to communicate openly and clearly about this issue in order to educate those affected. This includes explaining which rights provide protection and where legal gaps still exist.
Recognizing deception
Deepfakes are not illegal per se. What matters is their use: if they are used or published without consent, they may violate existing law.
According to the legal definition in Art. 3 No. 60 of the AI Regulation (AI Act), a deepfake is AI-generated or manipulated image, audio, or video content that resembles real persons, objects, places, or events and would falsely appear authentic or truthful to a viewer. For this reason, a deepfake is considered potentially misleading or unlawful information.
In practice, if such deception occurs, typical claims under the German Civil Code (BGB) such as injunctions, deletion, or damages may apply.
Example: If a person’s face is inserted into a compromising video and shared online, this can constitute a serious violation of personality rights under Art. 2 I in conjunction with Art. 1 GG, which may give rise to claims for injunction or monetary compensation.
Rights of those affected
There are several legal claims that may apply:
Copyright law: If a deepfake is created from copyright-protected material (e.g., photos, videos, music), rights holders may demand injunctions and damages.
General right of personality (Art. 2 I GG in conjunction with Art. 1 GG): This protects a person’s identity, dignity, and honor. Deepfakes that portray someone in a false light may infringe this right. Those affected can demand deletion, injunctions, and, if applicable, compensation.
Right to a name (§ 12 BGB): If a person’s name is misused in a deepfake, this may also constitute an infringement and allow for legal action.
Criminal law: Creating and distributing violent depictions under § 131 I No. 1, 2 StGB or § 184a StGB; violation of the highly personal sphere of life and personal rights through image recordings under § 201a I No. 1, No. 4, II StGB; and data espionage under § 202a I StGB are punishable.
While current law provides ways to defend against deepfakes, it is often burdensome for victims to enforce their rights, especially since content can spread quickly and be created anonymously.
Regulation through platforms and laws
Since the emergence of AI-driven manipulations, there have been legal regulations at both EU and national levels that place greater responsibility on platforms and providers.
DSA (Digital Services Act):
Requires large online platforms to clearly label deceptive content such as deepfakes and provide reporting mechanisms. Users can thus more easily flag fakes, and platforms must respond.
Example: Under Art. 16 DSA, platform providers must offer a “notice-and-action” procedure so that users can report unlawful content.
EU AI Regulation (AI Act):
Requires companies and authorities to comply with transparency obligations. Deepfakes must be clearly and visibly marked as artificially generated or manipulated.
Example: Arts. 71 ff. AI Act regulate market surveillance by national authorities and sanctions for violations.
General Data Protection Regulation (GDPR):
Affected persons may demand deletion since deepfakes use personal data such as faces or voices without consent.
Example: If personal data such as faces or voices are processed in a deepfake without consent, those affected can assert their “right to be forgotten” under Art. 17 GDPR. The Federal Commissioner for Data Protection and Freedom of Information (BfDI) is responsible in such cases.
§ 201b StGB (draft):
A new criminal offense is intended to penalize the "violation of personal rights through digital forgery." The aim is to capture the specific wrongfulness of deepfakes under criminal law: the offense would cover anyone who creates or distributes a deepfake that appears to be a genuine image or audio recording and thereby infringes personal rights. The draft foresees up to two years' imprisonment or a fine.
The draft law has been criticized by the federal government and the Federal Ministry of Justice (BMJ) for lacking precision. In some areas, it is too vague and does not sufficiently address existing civil law claims, meaning it does not provide adequate protection. The draft is currently before the Bundestag for further debate.
Together, these laws form a package: while the AI Act and DSA primarily aim to ensure prevention and transparency, § 201b StGB (draft) may in the future enable criminal prosecution of serious violations.
Conclusion: Outlook and concrete measures
Deepfakes are a serious challenge for law, politics, and society. They cannot be completely prevented. However, by strengthening victims' rights, establishing clear legal frameworks, and holding platforms accountable, their misuse can be contained. The decisive factor is that every affected person knows their rights, secures evidence, and takes resolute action. At the same time, society as a whole must take responsibility: only through informed and critical engagement with digital content can we prevent deepfakes from undermining trust and dividing communities. Everyone can contribute by speaking transparently about the issue and questioning suspicious content.
Deepfakes cannot be fully prevented, but their misuse can be significantly curtailed and sanctioned through:
1. Strengthening victims’ rights: Expanding and consistently enforcing injunctions, deletion, and compensation claims (e.g., Art. 17 GDPR).
2. Binding obligations for platforms: Large online platforms must quickly label and delete deepfakes and prevent re-uploads (e.g., Art. 16 DSA).
3. Closing gaps in criminal law: Introducing a dedicated offense such as the proposed § 201b StGB to cover targeted deepfake manipulation under criminal law.
These regulations are an important step, but they are not yet sufficient. Further action is needed, particularly against the rapid dissemination of anonymous content and for the cross-border prosecution of offenders. Improved international cooperation and simplified preservation of evidence still need to be established. Achieving full protection will take time, but inaction or stagnation would only worsen the problem.
The content of this article is for general information purposes only and does not constitute legal advice.
Recht logisch: KI trifft Gesetz ("Logically legal: AI meets the law") is a PANTA series in which Sara Farahmand-Nejad, AI Fellow at PANTA and aspiring lawyer, makes the legal questions surrounding artificial intelligence accessible. It covers liability and responsibility, data protection and copyright, shifting norms, new gray areas, and upcoming regulation. Clear, compact, and practical: what applies today, what is coming, and what it means for businesses, public administration, and everyday life.