Litigation and Alternative Dispute Resolution

Limitations and Challenges in the Application of Artificial Intelligence in Arbitration

Introduction*

While AI has aided in documentation review, prediction, data analytics and the transcription of proceedings, its adoption presents unique ethical and legal challenges, such as bias,[1] fairness,[2] and accountability. In the first article of our series, accessible here, we discussed the emergence of alternative modes of dispute resolution, focusing specifically on the legal framework of International Commercial Arbitration (ICA) and the definition of AI. In the second article, accessible here, we set out how AI has been applied in ICA (documentation review, transcription, and complementing the roles of counsel) and explored whether a robot/AI can perform the role of an arbitrator.

This article examines the ethical and legal challenges that may arise in the application of AI in ICA. First, we consider the legal question of whether a robot/AI can be appointed as an arbitrator under Ugandan law. The article also draws on the recently enacted European Union AI law, which underscores the risks associated with AI in dispute resolution. Second, we explore how the use of AI through robot arbitrators may undermine the right to a fair hearing/trial, which is guaranteed under the Ugandan Constitution.[3] Third, the article discusses other challenges such as confidentiality, bias, and transparency. These limitations and challenges should inform the design of policies that minimize the risks and enable the harnessing of the advantages of AI, rather than serving as a basis for rejecting AI programs.

Legal Limitations
Legal limitations are the most evident barrier to AI integration in ICA. The hesitance to introduce new procedures into ICA is attributable to the challenges likely to arise at the enforcement stage of the arbitral award.[4] The New York Convention provides that enforcement of an award may be refused by a court where the appointment of the arbitral tribunal was not done in accordance with the law of the country where the arbitration took place,[5] or where the award is contrary to the public policy of the seat of arbitration.[6] Can an award made by a robot/AI arbitrator be enforced under Ugandan law, and is it in line with Ugandan public policy?

Uganda Arbitration Law
In Uganda, the Arbitration and Conciliation Act Cap 5 (hereinafter the Arbitration Act) is the principal law governing arbitration. The Supreme Court of Uganda in Babcon Uganda Ltd vs Mbale Resort Hotel Ltd[7] stated that the Arbitration Act was enacted on the recommendation of the Justice Harold Platt Commission of Inquiry Report, which recommended the incorporation of international instruments into Ugandan law and the modernisation of arbitration laws. The Arbitration Act is accordingly modelled on the UNCITRAL Model Law, which contains no express provision permitting or forbidding machine (AI) arbitration.[8] Likewise, the definition of an arbitral tribunal in the Arbitration Act does not specifically state that arbitrators or umpires must be human.[9]

However, the Arbitration Act provides that where one arbitrator is to be appointed, the person to be appointed shall be agreed upon by the parties.[10] The definition of a person under the Interpretation Act Cap 2 does not include a machine, a device or an application.[11] Therefore, appointing a robot or AI program as a sole arbitrator is not legally advisable under Section 11(2)(b) of the Arbitration Act. It should be noted, however, that the Arbitration Act is not explicit as to whether AI can form part of an arbitral tribunal.[12]

Need for Amendment
There is scholarly consensus that integrating robots/AI programs into ICA requires amendment of domestic laws, arbitration rules and international agreements.[13] However, amendment of the New York Convention does not appear realistic in the near future: only 10 states signed up in 1958, when it was opened for signature, and it took over three decades for further signatories to bring the total number of current state parties to 172.[14]

The EU Approach on AI
Globally, the European Union Artificial Intelligence Act[15] (hereinafter the “EU AI Act”), which entered into force on 1st August 2024, is the first specific global regulation on the use of AI. The EU AI Act classifies uses of AI into different classes of risk, ranging from minimal risk to unacceptable risk. The use of AI in Alternative Dispute Resolution, which includes ICA, is classified as “high risk”.[16] This is due to the likely impact on individual freedoms such as the right to an effective remedy and the right to a fair trial, considering the risk of potential biases and errors.[17] This high-risk classification, however, does not extend to the use of AI for purely administrative tasks such as communication between personnel.[18]

As seen above, a large legal vacuum remains regarding how AI may be used in ICA. However, the Silicon Valley Arbitration and Mediation Centre recently released guidelines aimed at setting out how AI may best be incorporated into International Arbitration.[19] These suggestions include the non-delegation of decision-making roles to AI, a prohibition on using evidence forged/falsified through AI, and disclosure of AI use in appropriate circumstances, among others.[20] While these guidelines are not legally binding, they can inform the design of a legal architecture for AI use in ICA.

Confidentiality
Confidentiality is one of the cardinal reasons parties choose ICA as a mode of dispute resolution. As discussed in our previous article, AI can be used to analyze large volumes of data, information and facts related to an international commercial dispute.[21] In the event that the law is amended to accommodate robot arbitrators/AI, challenges are bound to arise where parties to the arbitration are unwilling to share their data to inform future predictions,[22] since AI algorithms require data to learn patterns and to make decisions and predictions based on them.[23] This raises questions about the confidentiality of information, which can only be preserved through anonymization of the data.[24] AI technologies are also subject to cybersecurity risks, such as hacking,[25] as well as the predicted risk that AI technologies may one day become counterproductive and turn against humans.[26] These technical challenges highlight why the adoption of AI in ICA faces stern criticism.

Compromise on the Right to Fair Hearing/Trial
Applying AI in the arbitration process is likely to impede the parties’ right to a fair trial.[27] This right is non-derogable in Uganda,[28] as in most legal regimes. A fair trial involves hearing a dispute by combining the law with the subtle considerations of equity, which robots may lack given the absence of human emotion in their code.[29] An arbitral award made by a robot/AI may therefore be challenged for violating the right to a fair trial; such an award would be not only against the law but also against public policy, and thus unenforceable under the New York Convention.[30]

Transparency
The decision-making criteria of AI-backed systems are often concealed behind a wall of complicated mathematical code.[31] Such systems are unpredictable, as some of them evolve with new data,[32] are prone to error, and may occasionally reach unjustified decisions.[33] To ensure transparency in automated decision-making, parties would be required to understand the workings behind the algorithm and how it makes decisions,[34] which would inevitably lead back to the complication, posited earlier, of confidentiality and the private nature of the information on which AI is trained.[35] Thus, AI use in ICA may be difficult to implement due to the well-founded mistrust of decisions made by these AI algorithms.

Furthermore, the Data Protection and Privacy Act Cap 97 grants a data subject in Uganda the right to require a data controller to ensure that any decision made by or on behalf of the data controller which significantly affects the data subject, is not based solely on the processing by automatic means of personal data in respect of the data subject.[36] If this provision is read alongside the section of the Arbitration Act that grants parties to arbitration the freedom to select their arbitrator,[37] it becomes clear that even with the adoption of AI in ICA, a party would still have the right to reject an arbitration overseen by a robotic arbitrator.

Risk of “AI Hallucinations”
An “AI hallucination” occurs where AI algorithms perceive patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are inaccurate.[38] AI may confidently assert incorrect answers, which contradicts the diligence, accuracy and credibility required in the legal services sector.[39] Thus, arbitrators relying on AI in ICA ought to approach information generated by AI, especially large language models, as though it were a first draft prepared by an inexperienced junior.[40] Two American lawyers were sanctioned and ordered to pay a US$5,000 fine for citing six non-existent and fictitious case citations generated and availed to them by a generative AI chatbot, ChatGPT.[41]

The possibility that AI may generate inaccurate outputs raises serious concerns, especially if it is to be applied in a dispute resolution process like ICA, where recourse against an arbitral award can only be by way of an application to set it aside under Section 34(1) of the Arbitration Act, on the grounds specified in Section 34(2) of the Arbitration Act.

Bias and Fairness
AI technology is likely to replicate and perpetuate bias present in the data on which it was trained.[42] As discussed earlier, this may negatively affect the right to a fair trial. How can one ascertain that a decision made by AI was not tainted by the biases entrenched in its training data? In 2016, it was reported that COMPAS, an AI-powered risk assessment tool used in the United States judicial system to determine the risk of a convicted person committing further crimes, was racially biased.[43] COMPAS was more likely to falsely “flag” black defendants as future criminals, wrongly labeling them this way at almost twice the rate of white defendants.[44] This illustrates the bias that might manifest from relying entirely on AI to make arbitral awards.

Conclusion

This article has examined the legal and ethical challenges that may arise from the application of AI in ICA. These include public policy considerations, fairness, bias, lack of transparency, AI hallucinations, and confidentiality and privacy concerns, among others. There is considerable debate as to whether AI will render human input from lawyers obsolete. However, as discussed, the legal services sector requires diligence, certainty and basic guarantees of a fair hearing, which may, for the time being, prove difficult for AI to provide.

Humans and their contribution remain relevant. Empathy is a necessary part of ICA, and this ultimately limits the possibility of machines replacing humans.[45] As stated in the European Union Artificial Intelligence Act, AI tools can support the decision-making power of judges or judicial independence, but should not replace it: the final decision-making must remain a human-driven activity.[46] Ultimately, as the technologies evolve to remedy the challenges discussed, it remains to be seen whether they will be adopted at full scale within ICA. Amendments to the arbitration treaties and domestic arbitration laws will quicken this process if they address the risks explored in this series.

Contacts for publication

John F. Kanyemibwa, Team Leader jfk@handgadvocates.com 
John is the Interim Chairperson of the International Centre for Arbitration & Mediation in Kampala (ICAMEK). He has over 30 years post qualification experience and has been ranked as a leading lawyer, “Band 1” by Chambers and Partners. John holds a Master of Laws (LLM) degree in Commercial & Corporate Law from the University of London, UK. He also possesses an LLM in Oil & Gas from the Uganda Christian University. He completed his Bachelor of Laws degree from Makerere University. He holds a Post Graduate Diploma in Legal Practice from the Law Development Centre.

Joel Basoga, Senior Associate and Head TMT, jbasoga@handgadvocates.com 
Joel is a graduate of the University of Oxford. He holds an advanced master’s degree (LLM) in technology, competition and data privacy law from the University of Michigan, among others. He writes and advises clients regularly on legal aspects pertaining to technology. He routinely handles disputes in corporate finance, antitrust law/competition, debt management, international commercial arbitration, employment and technology related matters. He worked in the Dispute Resolution Department of Freshfields Bruckhaus Deringer in London, United Kingdom.

Disclaimer:

This article is developed as an information resource summarizing pronouncements issued by the Uganda Parliament, Courts and other international sources of law such as treaties/conventions. The application of this article's contents to specific situations will depend on the particular circumstances involved. While every care has been taken in its presentation, personnel who use this document to assist in evaluating compliance with applicable laws and regulations should have sufficient training and experience to do so. No person should act specifically on the basis of the material contained herein without considering and taking professional advice.

Neither H&G Advocates, nor any of its personnel nor their partners or employees, accept any responsibility for any errors this document might contain, whether caused by negligence or otherwise, or any loss, howsoever caused, incurred by any person as a result of utilising or otherwise placing any reliance upon it.

Sources

* This serialized article is based on an earlier version originally prepared under the supervision of Mr. Gerald Batanda, Lecturer in International Arbitration, Institute of Petroleum Studies Kampala. We acknowledge and appreciate the initial guidance provided by Mr. Batanda in shaping the foundational ideas and the encouragement to publish the work.

1 Buolamwini Joy & Timnit Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.’ In Conference on Fairness, Accountability and Transparency, 77–91. PMLR, 2018 at 77.
2 Cathy O’Neil, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’(Allen Lane 2016) 52.
3 Article 28(1) of the Uganda Constitution, 1995.
4 Philippe Billiet and Filip Nordlund, ‘A new beginning – artificial intelligence and arbitration’, Korean Arbitration Review (2017) http://www.kcab.or.kr/jsp/comm_jsp/BasicDownload.jsp?FilePath=arbitration%2Ff_0.140140034811391261521536471556&orgName=04.+A+new+beginning+%26%238211%3B+artificial+intelligence+and+arbitration+%28Philippe+Billiet%2C+Filip+Nordlund%29.pdf accessed on 1st August 2024.
5 Article V(d).
6 Article V 2(b).
7 Civil Appeal No. 6 of 2016.
8 Billiet (n 4).
9 Section 1(e).
10 Section 11(2)(b).
11 Section 2.
12 Section 11(2)(a).
13 Thomas Snider, Sergejs Dilevka and Camelia Aknouche ‘Artificial Intelligence and International Arbitration: Going Beyond E-mail’, Dubai International Financial Centre, (2018).
14 Ibid; United Nations Treaty Convention, ‘State Parties to the New York Convention’, https://treaties.un.org/pages/ViewDetails.aspx?src=TREATY&mtdsg_no=XXII-1&chapter=22&clang=_en , accessed on 25 August 2024.
15 Regulation (EU) 2024/1689 of the European Parliament and of the Council.
16 Ibid, Annex III, point 8(a).
17 Ibid, Recital 61.
18 Ibid.
19 ‘SVAMC Publishes Guidelines on the Use of Artificial Intelligence in Arbitration’ 30th April 2024 https://svamc.org/svamc-publishes-guidelines-on-the-use-of-artificial-intelligence-in-arbitration/ accessed on 22nd August 2024.
20 Ibid.
21 Maria Joao Mimoso, 'Artificial Intelligence in International Commercial and Investment Arbitration' (2023) 3 Int'l Inv LJ 156, 161.
22 Lucas Bento, ‘International Arbitration and Artificial Intelligence: Time to Tango?’ Kluwer Arbitration Blog (2018) https://arbitrationblog.kluwerarbitration.com/2018/02/23/international-arbitration-artificial-intelligence-time-tango/ accessed on 7th August, 2024.
23 Aldoseri, A.; Al-Khalifa, K.N.; Hamouda, A.M. ‘Re-Thinking Data Strategy and Integration for Artificial Intelligence: Concepts, Opportunities, and Challenges’, Applied Sciences 2023, https://www.mdpi.com/2076-3417/13/12/7082 accessed on 22nd August 2024.
24 Bento (n 22).
25 Claire Morel de Westgaver, ‘Cybersecurity In International Arbitration-A Necessity And An Opportunity for Arbitral Institutions’, Kluwer Arbitration Blog (2017) https://arbitrationblog.kluwerarbitration.com/2017/10/06/cyber-security/ accessed on 22nd August 2024.
26 Muller Vincent C and Bostrom Nick, ‘Future Progress in artificial intelligence: A survey of expert opinion’ in Vincent C Muller (Ed), Fundamental Issues of Artificial Intelligence, Springer, 553 (2016) https://philpapers.org/rec/MLLFPI accessed on 22nd August 2024.
27 The EU AI Act (n 15).
28 Article 44 (c) of the Constitution of the Republic of Uganda, 1995.
29 Winston Maxwell, ‘The future of arbitration: New technologies are making a big impact-and AI robots may take on “human” roles’ Hogan Lovells Publications (2018) https://www.lexology.com/library/detail.aspx?g=35261188-a709-4560-930c-55c7b5cdce80 accessed on 22nd August, 2024.
30 Article V 2(b).
31 M. Perel and N. Elkin-Koren, ‘Black Box Tinkering: Beyond Disclosure in Algorithmic Enforcement’ (2017) 69 Fla. L. Rev. 181, 183.
32 Ibid.
33 S. Barocas and A. D. Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 Calif. L. Rev. 671, 673.
34 Perel (n 31).
35 Bento (n 22).
36 Section 27(1).
37 Section 11(2).
38 IBM, ‘What are AI hallucinations?’, International Business Machines (IBM) https://www.ibm.com/topics/ai-hallucinations accessed on 22nd August 2024.
39 Magal, Calthrop, Limond, ‘Artificial intelligence in arbitration: evidentiary issues and prospects’ 12th January 2024, A&O Shearman, https://www.aoshearman.com/en/insights/artificial-intelligence-in-arbitration-evidentiary-issues-and-prospects accessed 22nd August 2024.
40 Ibid.
41 Sara Merken, ‘New York lawyers sanctioned for using fake ChatGPT cases in legal brief’, Reuters, 26th June 2023, https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/ accessed 22nd August 2024.
42 Emilio Ferrara, ‘Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies’, Sci 2024, https://www.mdpi.com/2413-4155/6/1/3#:~:text=In%20the%20context%20of%20AI,in%20unfair%20or%20discriminatory%20outcomes, accessed on 22nd August 2024.
43 ProPublica, ‘Machine Bias’, 23rd May 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing , accessed on the 22nd August 2024.
44 Ibid.
45 Billiet (n 4).
46 The EU AI Act (n 15).
