Current Issue
Vol. 4 No. 3 (2025): Serial Number 12
Published: 2025-09-26
The rapid integration of artificial intelligence into contemporary decision-making processes has fundamentally altered the structure of agency, accountability, and risk within modern legal systems. Human–AI collaboration now characterizes critical domains such as healthcare, finance, transportation, governance, and criminal justice, producing decisions through complex interactions between human judgment and algorithmic inference. This article examines how these hybrid decision structures destabilize the classical foundations of legal responsibility, particularly the doctrines of fault, intent, and causation. Employing a narrative review methodology grounded in descriptive–analytical inquiry, the study synthesizes interdisciplinary scholarship from law, philosophy of action, AI governance, and socio-technical systems theory to reconstruct the conceptual architecture of responsibility under conditions of distributed cognition. The analysis demonstrates that traditional anthropocentric models of responsibility—premised on individual agency, linear causation, and coherent intentionality—are increasingly inadequate for explaining harm and allocating accountability in algorithmically mediated environments. The article proposes a systemic reorientation of legal responsibility, emphasizing shared and layered accountability, institutional governance, and risk-based causation frameworks. By reframing responsibility as a property of socio-technical systems rather than isolated individuals, the study offers a coherent theoretical foundation for adapting liability regimes to the realities of human–AI collaboration. The findings suggest that the future legitimacy and effectiveness of legal systems depend on their capacity to evolve beyond event-based blame toward governance-centered models capable of sustaining accountability amid technological complexity.
This article investigates the transformation of data into the core asset of contemporary capitalism and examines the legal, economic, and political consequences of this shift within platform-based economies. The study argues that existing legal frameworks governing data ownership remain structurally misaligned with the realities of data-driven accumulation, allowing digital platforms to consolidate unprecedented control over markets, labor relations, and informational infrastructures. Through a narrative review and descriptive–analytical methodology, the article synthesizes interdisciplinary scholarship from political economy, law, and digital governance to expose the limitations of current legal classifications that treat data as a personal right, a form of intellectual property, or a contractual asset without articulating a coherent ownership architecture. The analysis demonstrates how these fragmented approaches legitimize asymmetric power relations, reinforce monopolistic market structures, and undermine democratic accountability in the digital economy. The article further explores alternative models of digital property design, including collective governance frameworks, public-interest data infrastructures, and hybrid ownership regimes, and evaluates their capacity to rebalance economic power, protect individual autonomy, and preserve social welfare. By situating data ownership within broader struggles over sovereignty, market regulation, and social justice, the study highlights the political-economy consequences of legal design choices and their impact on innovation, competition, and institutional legitimacy. The article concludes that reconstructing data ownership is not merely a technical regulatory task but a foundational project for shaping the future trajectory of platform capitalism and for ensuring that digital transformation advances collective prosperity, democratic governance, and long-term economic sustainability.
The rapid integration of artificial intelligence into social, economic, and legal processes has fundamentally disrupted the traditional architecture of legal responsibility and subjectivity. Contemporary legal systems, grounded in a binary distinction between natural persons and juridical persons, increasingly struggle to regulate autonomous algorithmic systems whose decisions shape rights, obligations, and social outcomes. This article investigates whether artificial intelligence can be coherently conceptualized as a legal subject and examines the philosophical and institutional consequences of such recognition. Employing a descriptive narrative review methodology, the study synthesizes philosophical theories of personhood and agency, classical doctrines of legal personality, and emerging comparative legal approaches to AI regulation. The analysis demonstrates that legal personality has historically functioned as an adaptive construct shaped by evolving social realities rather than a fixed metaphysical category. While artificial intelligence does not satisfy traditional human-centered criteria of personhood such as consciousness and moral autonomy, it increasingly exhibits functional forms of agency, autonomy, and causal power that challenge the adequacy of existing legal classifications. The article further explores the systemic implications of AI legal subjectivity across civil law, criminal responsibility, governance, and public policy, highlighting both the potential benefits of enhanced accountability and risk management and the ethical dangers of diluting human dignity and redistributing responsibility. The findings suggest that the central challenge is not whether AI should become a legal person in an absolute sense, but how legal systems can construct a flexible framework of legal subjectivity capable of accommodating artificial agency while preserving the moral and political foundations of law. The study concludes that rethinking legal ontology is essential for maintaining coherence, legitimacy, and justice in the age of intelligent machines.
The rapid expansion of neurotechnology is transforming the foundational relationship between law, technology, and the human person. Unlike earlier technological developments that primarily affected external behavior or information flows, contemporary neurotechnologies directly intervene in the neural mechanisms of thought, emotion, memory, and decision-making. This shift generates unprecedented risks to mental autonomy, personal identity, and moral agency, exposing the structural inadequacy of existing legal doctrines centered on bodily integrity and informational privacy. Using a narrative review with descriptive–analytical methodology, this study examines the technological landscape of neurotechnology, the emerging concept of cognitive liberty in contemporary legal thought, and the growing gap between technological capability and legal protection. The analysis demonstrates that current regulatory frameworks, including human rights law, constitutional law, criminal law, and data protection regimes, fail to address the unique ontological status of neural data and the profound vulnerabilities introduced by direct cognitive intervention. In response, the study develops a comprehensive legal architecture for mental autonomy grounded in the principles of mental inviolability, cognitive self-determination, neural due process, and the categorical prohibition of non-consensual cognitive interference. It further conceptualizes a system of fundamental neuro-rights, including mental privacy, psychological continuity, identity integrity, and freedom from algorithmic mental manipulation, and proposes institutional and regulatory mechanisms for their implementation at domestic and international levels. The findings underscore that the protection of mental autonomy constitutes the next frontier of human rights and represents a decisive challenge for legal systems in the digital age. Without proactive legal reconstruction, neurotechnology risks institutionalizing new forms of domination over the human mind.
The rapid expansion of digital surveillance and biometric governance has fundamentally transformed the architecture of contemporary governance, reshaping the relationship between the state, the individual, and constitutional law. This article examines how emerging surveillance infrastructures—particularly those grounded in biometric technologies and algorithmic decision-making—challenge traditional constitutional doctrines of privacy, dignity, autonomy, and democratic accountability. Using a narrative review combined with descriptive–analytical methodology, the study traces the evolution of surveillance technologies from analog observation to predictive and biometric systems, and analyzes their institutional integration within modern governance frameworks. The findings demonstrate that biometric surveillance constitutes a new mode of constitutional power characterized by data-driven governance, algorithmic sovereignty, and the redefinition of individuals as data subjects rather than legal subjects. This transformation erodes informational privacy as a structural condition of constitutional democracy and weakens established safeguards of proportionality, due process, and judicial oversight. The article further identifies key structural risks associated with biometric governance, including function creep, irreversible data compromise, normalization of permanent identification, and deepening asymmetries of power and transparency. Through a comparative examination of jurisprudential trends and human rights frameworks, the study reveals persistent doctrinal inconsistencies and unresolved constitutional tensions surrounding surveillance practices. Finally, the article proposes a normative reconstruction of constitutional limits on biometric surveillance, emphasizing the need for substantive restrictions, strengthened procedural safeguards, robust institutional oversight, and democratic governance of surveillance infrastructures. The study concludes that safeguarding informational privacy in the digital age is not merely a regulatory challenge but a constitutional imperative essential for preserving democratic legitimacy, the rule of law, and human freedom in increasingly data-driven societies.
The digital transformation of contemporary societies has fundamentally altered the conditions under which legal proof is produced, evaluated, and legitimized. This article examines how the emergence of cyber-evidence reshapes the epistemological foundations of adjudication and challenges the adequacy of traditional evidentiary doctrines. Through a narrative review employing a descriptive–analytical method, the study synthesizes interdisciplinary scholarship from legal epistemology, evidence law, and cyberforensics to trace the conceptual evolution of proof from classical models grounded in human perception and material continuity to contemporary regimes of technologically mediated fact-production. The analysis demonstrates that cyber-evidence constitutes a distinct evidentiary category whose properties—volatility, algorithmic generation, platform dependency, and cryptographic validation—destabilize established doctrines of authentication, admissibility, probative value, and standards of persuasion. The article further explores the institutional consequences of this transformation, including the growing epistemic asymmetry between courts and technical experts, the reconfiguration of judicial authority, and the erosion of traditional mechanisms of adversarial testing such as cross-examination in algorithmic contexts. Building on this critique, the study proposes a reconceptualization of legal proof grounded in an updated epistemology that integrates technological mediation while preserving the normative commitments of procedural justice, transparency, and contestability. The article concludes that without systematic doctrinal recalibration and institutional reform, the legitimacy of adjudication in digitally mediated litigation remains at risk, and that the future of evidence law depends on the development of coherent normative principles for digital evidentiary governance.
The rapid integration of predictive analytics into criminal justice systems has transformed the architecture of legal decision-making by introducing algorithmic risk assessment tools into pre-trial processes, sentencing, parole, probation, and policing. While these technologies promise increased efficiency, consistency, and anticipatory capacity, they simultaneously generate profound jurisprudential and constitutional challenges. This article offers a comprehensive narrative review and descriptive–analytical examination of the theoretical foundations, technical architecture, and normative consequences of predictive criminal justice. Drawing upon interdisciplinary scholarship in law, criminology, data science, and political theory, the study traces the shift from classical legal rationality toward algorithmic governance and evaluates its implications for core legal principles. The analysis demonstrates that predictive systems fundamentally destabilize the principles of legality, due process, equality before the law, and the presumption of innocence by substituting probabilistic forecasting for individualized legal judgment. Moreover, structural bias, feedback amplification, and algorithmic opacity undermine procedural fairness and intensify social inequality, particularly for marginalized populations. The article further argues that predictive governance redistributes legal authority from courts to opaque technical systems and private actors, eroding democratic accountability and judicial autonomy. Through comparative constitutional analysis, the study highlights divergent regulatory responses across jurisdictions and emphasizes the urgent need for a renewed constitutional framework capable of constraining algorithmic power. Ultimately, the article contends that the legal limits of predictive criminal justice are anchored in the normative foundations of constitutionalism itself, requiring a reassertion of human judgment, transparency, and rights-based adjudication in the governance of emerging technologies.
Predictive policing algorithms have become an increasingly prominent feature of modern law-enforcement systems, reshaping operational decision-making through data-driven forecasting and automated risk assessment. As these technologies expand, they introduce complex legal, ethical, and societal challenges that demand critical evaluation. This narrative review synthesizes current knowledge on the functioning of predictive policing systems, highlighting how algorithmic processes rooted in historical crime data, surveillance infrastructures, and machine-learning models influence patterns of policing. The analysis demonstrates that algorithmic bias can reinforce racial profiling, socioeconomic disparities, and spatialized over-policing, raising concerns about compliance with equality principles, due-process protections, and human-rights standards. It also examines the structural mechanisms—such as feedback loops, model opacity, and proprietary constraints—that complicate efforts to contest discriminatory outcomes or ensure evidentiary fairness in judicial proceedings. Furthermore, the review explores the governance challenges shaping the regulatory landscape, including limitations of existing data-protection laws, weaknesses in administrative oversight, and the growing influence of private vendors over public-sector policing practices. These gaps, combined with limited transparency, insufficient technical literacy, and uneven democratic oversight, create significant obstacles to achieving accountability. By analyzing the intersection of technology, law, and institutional practice, this article offers a comprehensive framework for understanding how predictive policing affects civil liberties, public trust, and the legitimacy of law enforcement. The review concludes by emphasizing the need for robust regulatory reforms grounded in transparency, human-rights protections, and meaningful public oversight to ensure that algorithmic policing evolves in ways that support fairness, democratic governance, and societal well-being.
Number of Volumes: 3
Number of Issues: 9
Acceptance Rate: 29%