Current Issue
Vol. 4 No. 3 (2025): Serial Number 12
Published: 2025-09-26
The rapid advancement of artificial intelligence has challenged the traditional concept of the “mental element of crime” within criminal justice systems. This study conducts a comparative analysis of the mental element of crimes arising from artificial intelligence within two frameworks: Imami jurisprudence and international criminal law. The research adopts a descriptive–analytical method with a comparative approach, and data were collected through library research and documentary analysis. According to the findings, both systems currently regard artificial intelligence as lacking an independent mental element (intent, knowledge, or mens rea) as well as criminal capacity. In Imami jurisprudence, artificial intelligence is classified as an “object” or “property,” and liability is transferred to human agents (such as designers, manufacturers, and users) based on principles including liability for destruction (ḍamān al-itlāf), causation (tasbīb), and the no-harm rule (qāʿidat lā ḍarar). This system centers on individual, duty-based moral responsibility. In contrast, international criminal law, drawing on its experience with organized crime, has moved toward developing novel concepts such as command responsibility, risk-based liability, and the notion of electronic personality to address the complexity and distributed nature of decision-making in the development of advanced artificial intelligence. The comparative conclusion is that Imami jurisprudence emphasizes the individual transfer of responsibility, whereas international criminal law adopts a functionalist perspective and moves toward mechanisms of collective and institutional responsibility. It is therefore recommended that the Iranian legal system, while preserving its jurisprudential foundations, draw on the capacities of both approaches to enact specific legislation and to recognize “chain liability” and a “duty of care” for actors in the field of artificial intelligence.
Under multimodal transport documents, the transport operator bears integrated liability for any loss that occurs, and liability is presumed against the operator. Nevertheless, by concluding a liability limitation agreement with the cargo owner, the multimodal transport operator sets a ceiling on its liability for potential future losses. Pursuant to international multimodal transport instruments, such an agreement is deemed ineffective and unenforceable against the cargo owner where intentional misconduct or gross fault is attributed to the operator. Where intentional or gross fault is committed by the servants, agents, or representatives of the multimodal transport operator, these instruments adopt differentiated approaches depending on the nature of the claim. If a claim for damages is brought on a contractual basis, the limitation of the multimodal transport operator’s liability is not available. However, where a claim for damages is brought on a non-contractual (tortious) basis against the servants, agents, or employees involved in multimodal transport operations, only the culpable individuals are deprived of the right to invoke the limitation of liability, while the multimodal transport operator retains the right to rely on the liability limitation agreement. In all cases, in assessing gross fault, due regard must be paid to its objective characterization as well as to the level of expertise and professional skill of the transport operator and its agents.
The rapid integration of artificial intelligence into contemporary decision-making processes has fundamentally altered the structure of agency, accountability, and risk within modern legal systems. Human–AI collaboration now characterizes critical domains such as healthcare, finance, transportation, governance, and criminal justice, producing decisions through complex interactions between human judgment and algorithmic inference. This article examines how these hybrid decision structures destabilize the classical foundations of legal responsibility, particularly the doctrines of fault, intent, and causation. Employing a narrative review methodology grounded in descriptive–analytical inquiry, the study synthesizes interdisciplinary scholarship from law, philosophy of action, AI governance, and socio-technical systems theory to reconstruct the conceptual architecture of responsibility under conditions of distributed cognition. The analysis demonstrates that traditional anthropocentric models of responsibility—premised on individual agency, linear causation, and coherent intentionality—are increasingly inadequate for explaining harm and allocating accountability in algorithmically mediated environments. The article proposes a systemic reorientation of legal responsibility, emphasizing shared and layered accountability, institutional governance, and risk-based causation frameworks. By reframing responsibility as a property of socio-technical systems rather than isolated individuals, the study offers a coherent theoretical foundation for adapting liability regimes to the realities of human–AI collaboration. The findings suggest that the future legitimacy and effectiveness of legal systems depend on their capacity to evolve beyond event-based blame toward governance-centered models capable of sustaining accountability amid technological complexity.
This article investigates the transformation of data into the core asset of contemporary capitalism and examines the legal, economic, and political consequences of this shift within platform-based economies. The study argues that existing legal frameworks governing data ownership remain structurally misaligned with the realities of data-driven accumulation, allowing digital platforms to consolidate unprecedented control over markets, labor relations, and informational infrastructures. Through a narrative review and descriptive–analytical methodology, the article synthesizes interdisciplinary scholarship from political economy, law, and digital governance to expose the limitations of current legal classifications that treat data as a personal right, a form of intellectual property, or a contractual asset without articulating a coherent ownership architecture. The analysis demonstrates how these fragmented approaches legitimize asymmetric power relations, reinforce monopolistic market structures, and undermine democratic accountability in the digital economy. The article further explores alternative models of digital property design, including collective governance frameworks, public-interest data infrastructures, and hybrid ownership regimes, and evaluates their capacity to rebalance economic power, protect individual autonomy, and preserve social welfare. By situating data ownership within broader struggles over sovereignty, market regulation, and social justice, the study highlights the political-economy consequences of legal design choices and their impact on innovation, competition, and institutional legitimacy. The article concludes that reconstructing data ownership is not merely a technical regulatory task but a foundational project for shaping the future trajectory of platform capitalism and for ensuring that digital transformation advances collective prosperity, democratic governance, and long-term economic sustainability.
The rapid integration of artificial intelligence into social, economic, and legal processes has fundamentally disrupted the traditional architecture of legal responsibility and subjectivity. Contemporary legal systems, grounded in a binary distinction between natural persons and juridical persons, increasingly struggle to regulate autonomous algorithmic systems whose decisions shape rights, obligations, and social outcomes. This article investigates whether artificial intelligence can be coherently conceptualized as a legal subject and examines the philosophical and institutional consequences of such recognition. Employing a descriptive narrative review methodology, the study synthesizes philosophical theories of personhood and agency, classical doctrines of legal personality, and emerging comparative legal approaches to AI regulation. The analysis demonstrates that legal personality has historically functioned as an adaptive construct shaped by evolving social realities rather than a fixed metaphysical category. While artificial intelligence does not satisfy traditional human-centered criteria of personhood such as consciousness and moral autonomy, it increasingly exhibits functional forms of agency, autonomy, and causal power that challenge the adequacy of existing legal classifications. The article further explores the systemic implications of AI legal subjectivity across civil law, criminal responsibility, governance, and public policy, highlighting both the potential benefits of enhanced accountability and risk management and the ethical dangers of diluting human dignity and redistributing responsibility. The findings suggest that the central challenge is not whether AI should become a legal person in an absolute sense, but how legal systems can construct a flexible framework of legal subjectivity capable of accommodating artificial agency while preserving the moral and political foundations of law. The study concludes that rethinking legal ontology is essential for maintaining coherence, legitimacy, and justice in the age of intelligent machines.
The rapid expansion of neurotechnology is transforming the foundational relationship between law, technology, and the human person. Unlike earlier technological developments that primarily affected external behavior or information flows, contemporary neurotechnologies directly intervene in the neural mechanisms of thought, emotion, memory, and decision-making. This shift generates unprecedented risks to mental autonomy, personal identity, and moral agency, exposing the structural inadequacy of existing legal doctrines centered on bodily integrity and informational privacy. Using a narrative review with a descriptive–analytical methodology, this study examines the technological landscape of neurotechnology, the emerging concept of cognitive liberty in contemporary legal thought, and the growing gap between technological capability and legal protection. The analysis demonstrates that current regulatory frameworks, including human rights law, constitutional law, criminal law, and data protection regimes, fail to address the unique ontological status of neural data and the profound vulnerabilities introduced by direct cognitive intervention. In response, the study develops a comprehensive legal architecture for mental autonomy grounded in the principles of mental inviolability, cognitive self-determination, neural due process, and the categorical prohibition of non-consensual cognitive interference. It further conceptualizes a system of fundamental neuro-rights, including mental privacy, psychological continuity, identity integrity, and freedom from algorithmic mental manipulation, and proposes institutional and regulatory mechanisms for their implementation at domestic and international levels. The findings underscore that the protection of mental autonomy constitutes the next frontier of human rights and represents a decisive challenge for legal systems in the digital age. Without proactive legal reconstruction, neurotechnology risks institutionalizing new forms of domination over the human mind.
The rapid expansion of digital surveillance and biometric governance has fundamentally transformed the architecture of contemporary governance, reshaping the relationship between the state, the individual, and constitutional law. This article examines how emerging surveillance infrastructures—particularly those grounded in biometric technologies and algorithmic decision-making—challenge traditional constitutional doctrines of privacy, dignity, autonomy, and democratic accountability. Using a narrative review combined with a descriptive–analytical methodology, the study traces the evolution of surveillance technologies from analog observation to predictive and biometric systems, and analyzes their institutional integration within modern governance frameworks. The findings demonstrate that biometric surveillance constitutes a new mode of constitutional power characterized by data-driven governance, algorithmic sovereignty, and the redefinition of individuals as data subjects rather than legal subjects. This transformation erodes informational privacy as a structural condition of constitutional democracy and weakens established safeguards of proportionality, due process, and judicial oversight. The article further identifies key structural risks associated with biometric governance, including function creep, irreversible data compromise, normalization of permanent identification, and deepening asymmetries of power and transparency. Through a comparative examination of jurisprudential trends and human rights frameworks, the study reveals persistent doctrinal inconsistencies and unresolved constitutional tensions surrounding surveillance practices. Finally, the article proposes a normative reconstruction of constitutional limits on biometric surveillance, emphasizing the need for substantive restrictions, strengthened procedural safeguards, robust institutional oversight, and democratic governance of surveillance infrastructures. The study concludes that safeguarding informational privacy in the digital age is not merely a regulatory challenge but a constitutional imperative essential for preserving democratic legitimacy, the rule of law, and human freedom in increasingly data-driven societies.
The digital transformation of contemporary societies has fundamentally altered the conditions under which legal proof is produced, evaluated, and legitimized. This article examines how the emergence of cyber-evidence reshapes the epistemological foundations of adjudication and challenges the adequacy of traditional evidentiary doctrines. Through a narrative review employing a descriptive–analytical method, the study synthesizes interdisciplinary scholarship from legal epistemology, evidence law, and cyberforensics to trace the conceptual evolution of proof from classical models grounded in human perception and material continuity to contemporary regimes of technologically mediated fact-production. The analysis demonstrates that cyber-evidence constitutes a distinct evidentiary category whose properties—volatility, algorithmic generation, platform dependency, and cryptographic validation—destabilize established doctrines of authentication, admissibility, probative value, and standards of persuasion. The article further explores the institutional consequences of this transformation, including the growing epistemic asymmetry between courts and technical experts, the reconfiguration of judicial authority, and the erosion of traditional mechanisms of adversarial testing such as cross-examination in algorithmic contexts. Building on this critique, the study proposes a reconceptualization of legal proof grounded in an updated epistemology that integrates technological mediation while preserving the normative commitments of procedural justice, transparency, and contestability. The article concludes that without systematic doctrinal recalibration and institutional reform, the legitimacy of adjudication in digitally mediated litigation remains at risk, and that the future of evidence law depends on the development of coherent normative principles for digital evidentiary governance.
Number of Volumes: 3
Number of Issues: 9
Acceptance Rate: 29%