Artificial intelligence has significantly transformed technology and is now reshaping global dispute resolution. Although AI promises to boost arbitration efficiency, it also presents potential risks. Our article explores the CIArb’s detailed policy paper, which provides guidance on balancing AI’s advantages with the necessity of preserving the integrity and fairness of the arbitration process.
In depth
The rapid advances in artificial intelligence over the last few years have not only transformed the technological landscape but are also reshaping dispute resolution worldwide. Ongoing research shows AI’s potential to improve efficiency in arbitration through sophisticated data analytics, streamlined legal research and increased access to justice. However, these developments also raise pressing concerns over data security, transparency and procedural fairness. The Chartered Institute of Arbitrators’ (CIArb) Guideline on the Use of AI in Arbitration (2025) responds to this changing context by establishing robust protocols that balance innovation with the preservation of core legal principles. At a time when digital transformation challenges traditional frameworks, this policy paper provides a timely roadmap for integrating AI into arbitration while upholding justice and accountability.
The CIArb has released a comprehensive policy paper titled “Guideline on the Use of AI in Arbitration (2025)”. This document, prepared by the Policy and Professional Practice Team and the Technology Thought Leadership Group, provides a detailed framework for the integration of AI in arbitration proceedings. The guideline aims to balance the benefits of AI with the need to maintain the integrity and fairness of the arbitration process.
The adoption of AI in the legal profession has accelerated dramatically, driven by the pursuit of greater efficiency and the sophisticated language capabilities of modern AI models. The CIArb AI Guideline empowers dispute resolvers, parties and other participants to take advantage of these technological breakthroughs while rigorously safeguarding procedural rights and ensuring the enforceability of awards.
AI-driven tools can significantly improve key processes such as data analysis, text generation, evidence collection, translation, transcription and case analysis, driving efficiency and precision in arbitration. These tools also promote equitable access by enabling under-resourced parties to leverage cutting-edge technology at an affordable cost. However, integrating AI is not without challenges. Risks include breaches of confidentiality, compromised data integrity, cybersecurity vulnerabilities and concerns over impartiality and due process. The opaque nature of AI decision-making, often referred to as the “black box” problem, remains a critical issue, though recent advancements in agentic reasoning are beginning to shed light on these processes. Furthermore, the rapid advancement of AI and divergent legal frameworks across jurisdictions could affect the enforceability of arbitral awards.
CIArb’s General Recommendations
The CIArb has provided recommendations that can assist with navigating the many complexities associated with the use of AI. These recommendations include the following:
- Parties and arbitrators are encouraged to understand the technology and potential risks associated with AI tools. They should also be aware of any AI-related laws, regulations, and institutional rules applicable in their jurisdictions.
- The use of AI should not diminish the responsibility and accountability of parties and arbitrators. They must remain fully responsible for their actions and decisions.
- Arbitrators have the power to give directions and make procedural rulings on the use of AI by parties, subject to any express prohibitions agreed by the parties or mandated by applicable laws. Parties can exercise their autonomy to agree on the use of AI in arbitration, including specifying or limiting the AI tools that may be used.
- Disclosure of AI use may be required to ensure transparency and preserve the integrity of the arbitration process. Arbitrators may impose AI-related disclosure obligations on parties, including party-appointed experts or factual witnesses. This means that arbitrators may give directions on the types of AI use that trigger disclosure obligations, including the circumstances in which disclosure is required, to whom parties should disclose and within what timeframe.
- Arbitrators may use AI tools to enhance the efficiency and quality of their decision-making process but should not delegate their decision-making powers to AI. They must independently verify the accuracy of AI-generated information and maintain responsibility for all aspects of an award. The CIArb recommends that arbitrators discuss the use of AI tools with the parties at the commencement of the proceedings. If the parties disagree on the use of AI tools, arbitrators should refrain from using them.
The CIArb AI Guideline marks a pivotal advance in modernising arbitration. By balancing the transformative benefits of AI with stringent safeguards, it offers a clear and principled framework for dispute resolution. This policy not only enhances efficiency and access to justice but also addresses critical challenges such as transparency, confidentiality, and regulatory divergence. As the legal landscape evolves, the Guideline sets a robust standard for responsibly harnessing technology while upholding the enduring principles of fairness and accountability.