Current Issue
Vol. 4 No. 4 (2025): Serial Number 13
Published: 2026-02-12
Arbitration, as one of the alternative methods of dispute resolution, has gained widespread acceptance in the field of private law—particularly in international commercial relations—due to its private nature, flexibility, and, in some cases, procedural efficiency. In the course of arbitral proceedings, the respondent may raise claims against the claimant which, from a legal perspective, may be characterized as counterclaims. While counterclaims enjoy a clear and well-established legal status in judicial proceedings, their position in arbitration is considerably more complex. This complexity arises from the fact that the arbitral tribunal’s jurisdiction is confined to the scope defined by the arbitration agreement, and the introduction of new claims may fall outside this agreed framework. Employing a descriptive–analytical methodology, this article examines the feasibility of bringing counterclaims in arbitral proceedings, with a focus on the Iranian legal system and international arbitration rules. The findings indicate that, under Iranian law, despite the absence of an explicit statutory provision addressing counterclaims in arbitration, it is possible—by relying on general principles governing jurisdiction and the unity of the cause of action—to accept the admissibility of counterclaims in many cases. In international arbitration law, counterclaims are likewise regarded as permissible, provided that they are closely connected to the principal claim and fall within the jurisdiction of the arbitral tribunal. Finally, by identifying existing gaps in the Iranian legal framework, the article offers recommendations for improving arbitral practice and for legislative reform.
Digital transformation has turned digital services contracts into one of the most prevalent legal instruments governing relationships between service providers and users. In these contracts, users typically provide their personal data as consideration to service providers. One of the fundamental challenges in this field is the inclusion of restrictive clauses that limit users’ rights with respect to the use, access, transfer, or deletion of personal data. Employing a descriptive–analytical and comparative methodology, this study examines the validity and effects of such clauses in Iranian law and compares them with the approaches adopted in the European Union and the United States. The findings indicate that, despite users’ apparent acceptance, restrictive clauses face serious challenges to their validity, as information asymmetry, the complexity of legal language, and the absence of genuine opportunities for negotiation undermine the realization of informed consent. Moreover, these clauses have extensive effects on users’ rights, including restrictions on data access and portability, and may lead to reduced competition in digital markets. The results further show that Iran lacks a comprehensive framework for personal data protection, making it necessary to enact comprehensive legislation in this area, amend consumer protection law, and establish transparency requirements for terms of use.
The rapid growth of emerging technologies, particularly artificial intelligence and big data processing, has fundamentally transformed policies for the prevention of cybercrimes. Within the situational crime prevention approach, the objective is to reduce opportunities for the commission of crime through the deployment of technological tools; however, the use of intelligent systems in identifying criminal patterns, analyzing behavioral data, and predicting the occurrence of crime has generated complex legal and ethical concerns. Under Iranian positive law, although the Computer Crimes Act and higher-level regulatory instruments related to cyberspace refer in general terms to data security and privacy requirements, a comprehensive regulatory framework governing automated and algorithmic decision-making in preventive processes has not yet been developed. The present study adopts a descriptive–analytical approach and employs a library-based research method to examine the legal challenges arising from the application of artificial intelligence and big data in the situational prevention of cybercrimes. The findings indicate that the most significant challenges include the absence of explicit regulations concerning civil and criminal liability arising from algorithmic decisions, threats to privacy and data protection rights, lack of transparency and explainability in automated decision-making, and the risk of algorithmic bias or discrimination. Moreover, the tension between the efficiency of data-driven predictive mechanisms and the requirements of fundamental rights of citizens—such as the presumption of innocence and the rule of law—constitutes a central challenge. Accordingly, it is recommended that the Iranian legislator, drawing inspiration from international models such as the European Union Artificial Intelligence Act and the OECD Principles on Artificial Intelligence, enact specific regulations concerning data governance, algorithmic transparency, technical–legal auditing of artificial intelligence systems, and the establishment of an independent supervisory authority. The realization of such a regulatory framework can enhance the effectiveness of the cybercrime prevention system while safeguarding citizens’ rights in the age of artificial intelligence.
The accused’s right to silence, as one of the fundamental guarantees of a fair trial, plays a decisive role in preventing the acquisition of unlawful evidence and in preserving human dignity. Although Iran’s Code of Criminal Procedure has, in recent years, explicitly recognized this right, its violation during the preliminary investigation stage remains a prevalent phenomenon. The central question of this study is what effects the violation of the accused’s right to silence has on the evidentiary value of confessions and other criminal evidence, and what enforcement mechanisms the Iranian legal system has envisaged to address such violations. Using a descriptive–analytical method and a jurisprudential–legal approach, this article explains the concept of violating the right to silence and its manifestations, and examines the legal consequences arising from such violations. The findings indicate that breaching the right to silence seriously undermines the validity of the accused’s confession and may also affect the legality of other evidence obtained. At the same time, the weakness of legislative and executive enforcement mechanisms has, in many instances, reduced the right to silence to a merely formal or symbolic right.
The rapid advancement of artificial intelligence has challenged the traditional concept of the “mental element of crime” within criminal justice systems. This study aims to conduct a comparative analysis of the mental elements of crimes arising from artificial intelligence within two frameworks: Imami jurisprudence and international criminal law. The research adopts a descriptive–analytical method with a comparative approach, and data have been collected through library research and documentary analysis. According to the findings, both systems currently regard artificial intelligence as lacking an independent mental element (intent, knowledge, or mens rea) as well as criminal capacity. In Imami jurisprudence, artificial intelligence is classified as an “object” or “property,” and liability is transferred to human agents (such as designers, manufacturers, and users) based on principles including liability for destruction (ḍamān al-itlāf), causation (tasbīb), and the no-harm rule (qāʿidat lā ḍarar). The focus of this system is on individual, duty-based moral responsibility. In contrast, international criminal law, drawing on its experience with organized crime, has moved toward developing novel concepts such as command responsibility, risk-based liability, and the notion of electronic personality, in order to address the complexity and distributed nature of decision-making in the development of advanced artificial intelligence. The comparative conclusion indicates that Imami jurisprudence emphasizes the individual transfer of responsibility, whereas international criminal law adopts a functionalist perspective and moves toward mechanisms of collective and institutional responsibility. It is therefore recommended that the Iranian legal system, while preserving its jurisprudential foundations, draw on the capacities of both approaches to enact specific legislation and recognize “chain liability” and a “duty of care” for actors in the field of artificial intelligence.
Under multimodal transport documents, the transport operator bears integrated liability for the occurrence of any loss, and liability is presumed against the operator. Nevertheless, by concluding a liability limitation agreement with the cargo owner, the multimodal transport operator sets a ceiling on its liability for potential future losses. Pursuant to international multimodal transport instruments, such an agreement is deemed ineffective and unenforceable against the cargo owner in cases where intentional misconduct or gross fault is attributed to the operator. Where intentional misconduct or gross fault is committed by the servants, agents, or representatives of the multimodal transport operator, these instruments adopt differentiated approaches depending on the nature of the claim. If a claim for damages is brought on a contractual basis, the limitation of the multimodal transport operator’s liability is not available. However, where a claim for damages is brought on a non-contractual (tortious) basis against the servants, agents, or employees involved in multimodal transport operations, only the culpable individuals are deprived of the right to invoke the limitation of liability, while the multimodal transport operator retains the right to rely on the liability limitation agreement. In all cases, in assessing the concept of gross fault, due regard must be paid to its objective characterization as well as to the level of expertise and professional skill of the transport operator and its agents.
The rapid integration of artificial intelligence into contemporary decision-making processes has fundamentally altered the structure of agency, accountability, and risk within modern legal systems. Human–AI collaboration now characterizes critical domains such as healthcare, finance, transportation, governance, and criminal justice, producing decisions through complex interactions between human judgment and algorithmic inference. This article examines how these hybrid decision structures destabilize the classical foundations of legal responsibility, particularly the doctrines of fault, intent, and causation. Employing a narrative review methodology grounded in descriptive–analytical inquiry, the study synthesizes interdisciplinary scholarship from law, philosophy of action, AI governance, and socio-technical systems theory to reconstruct the conceptual architecture of responsibility under conditions of distributed cognition. The analysis demonstrates that traditional anthropocentric models of responsibility—premised on individual agency, linear causation, and coherent intentionality—are increasingly inadequate for explaining harm and allocating accountability in algorithmically mediated environments. The article proposes a systemic reorientation of legal responsibility, emphasizing shared and layered accountability, institutional governance, and risk-based causation frameworks. By reframing responsibility as a property of socio-technical systems rather than isolated individuals, the study offers a coherent theoretical foundation for adapting liability regimes to the realities of human–AI collaboration. The findings suggest that the future legitimacy and effectiveness of legal systems depend on their capacity to evolve beyond event-based blame toward governance-centered models capable of sustaining accountability amid technological complexity.
This article investigates the transformation of data into the core asset of contemporary capitalism and examines the legal, economic, and political consequences of this shift within platform-based economies. The study argues that existing legal frameworks governing data ownership remain structurally misaligned with the realities of data-driven accumulation, allowing digital platforms to consolidate unprecedented control over markets, labor relations, and informational infrastructures. Through a narrative review and descriptive–analytical methodology, the article synthesizes interdisciplinary scholarship from political economy, law, and digital governance to expose the limitations of current legal classifications that treat data as a personal right, as intellectual property, or as a contractual asset without articulating a coherent ownership architecture. The analysis demonstrates how these fragmented approaches legitimize asymmetric power relations, reinforce monopolistic market structures, and undermine democratic accountability in the digital economy. The article further explores alternative models of digital property design, including collective governance frameworks, public-interest data infrastructures, and hybrid ownership regimes, and evaluates their capacity to rebalance economic power, protect individual autonomy, and preserve social welfare. By situating data ownership within broader struggles over sovereignty, market regulation, and social justice, the study highlights the political-economy consequences of legal design choices and their impact on innovation, competition, and institutional legitimacy. The article concludes that reconstructing data ownership is not merely a technical regulatory task but a foundational project for shaping the future trajectory of platform capitalism and for ensuring that digital transformation advances collective prosperity, democratic governance, and long-term economic sustainability.
Number of Volumes: 3
Number of Issues: 9
Acceptance Rate: 29%