Legal Challenges of Deepfakes: Liability, Harm, and Regulatory Responses

Authors

    Aleksandra Nowak, Department of Criminal Law, University of Warsaw, Warsaw, Poland
    Bálint Tóth*, Department of Criminal Law, Eötvös Loránd University, Budapest, Hungary, balint.toth@elte.hu
    Andrei Ionescu, Department of Private Law, University of Bucharest, Bucharest, Romania

Keywords:

Deepfakes, Synthetic Media, Intermediary Liability, Platform Governance, AI Regulation, Privacy, Defamation, Disinformation, Digital Policy, Content Authenticity

Abstract

Deepfake technologies have rapidly evolved from experimental artificial intelligence innovations into widely accessible tools capable of producing hyper-realistic synthetic media that blur the boundaries between truth and fabrication. Their ability to manipulate identity, falsify expressive acts, and fabricate audiovisual content presents unprecedented challenges for legal systems structured around assumptions of authenticity, traceability, and human agency. This narrative review synthesizes current scholarship and regulatory developments to examine the multifaceted harms associated with deepfakes, including reputational injury, privacy violations, political manipulation, economic fraud, and broader societal erosion of trust. These harms expose significant doctrinal gaps in defamation, privacy, tort, and criminal law, which struggle to account for the speed and anonymity of synthetic media production. The analysis further explores complexities within liability frameworks, focusing on creators, distributors, platforms, AI developers, and malicious users whose involvement complicates traditional models of responsibility. Platform governance, intermediary liability, and cross-border enforcement challenges reveal structural weaknesses in existing regulatory approaches. National, regional, and international responses—including criminal prohibitions, civil remedies, the EU Digital Services Act, and emerging AI governance initiatives—offer partial solutions, yet they remain fragmented and inconsistently enforced. Soft-law interventions, including platform policies, content labeling, and authenticity tools, provide additional layers of protection but lack uniformity and transparency. To move toward a coherent legal framework, this review highlights the need for harmonized standards integrating civil, criminal, technological, and administrative mechanisms. Proposed directions include duty-of-care obligations for AI developers and platforms, mandatory watermarking systems, cross-border cooperation tools, and experimental regulatory sandboxes. Overall, the study emphasizes that addressing deepfake risks requires an interdisciplinary approach combining technological safeguards, legal reform, and multi-stakeholder governance to preserve individual rights, societal trust, and democratic resilience in an era of increasingly sophisticated synthetic media.

Published

2023-04-01

Submitted

2023-02-10

Revised

2023-03-10

Accepted

2023-03-24

How to Cite

Nowak, A., Tóth, B., & Ionescu, A. (2023). Legal Challenges of Deepfakes: Liability, Harm, and Regulatory Responses. Legal Studies in Digital Age, 2(2), 49-60. https://jlsda.com/index.php/lsda/article/view/306
