The Role of Artificial Intelligence in International Arbitration
This is an Insight article, written by a selected partner as part of GAR's co-published content.
In 1920, Czech playwright Karel Capek coined the term ‘robot’. In his science fiction play Rossum’s Universal Robots, he described a factory that employed human workers and robots. Initially happy to work for the humans, the robots ultimately revolted and took over the factory. The play ends with the robots destroying humankind and finding the meaning of love.
Did Karel Capek’s science fiction play from the beginning of the last century prophesise the technological revolution we are witnessing today? We rely on artificial intelligence (AI) in our daily lives. From guidance bestowed by iPhone’s Siri and Amazon’s Alexa, to predictive Google searches, to facial recognition programmes – technology is ubiquitous. It is impacting our lives in ways we did not think possible just a few years ago.
Technology and AI are also transforming the legal profession. In the field of dispute resolution, the work of all key service providers – from legal counsel to arbitrators – is being transfigured by the technological revolution. Against this backdrop, AI is now poised to fundamentally change all aspects of the dispute resolution process. Lawyers now have the ability to have machines do their legal research. Arbitrators can get a computer’s assessment of the merits of a dispute. And litigants are able to get their disputes decided faster and with more uniformity.
This article considers the looming question of whether robots are taking over the legal profession and discusses the impact of automation on the work of legal counsel and arbitrators.
The impact of AI on legal counsel
The discourse about lawyers’ use of technology has risen to new heights in the past several years. Technological assistance is no longer limited to e-filing, e-discovery and electronic research, but now includes a host of sophisticated new services – from predictive analytics to deep learning features, which allow a machine to ‘learn’ while performing tasks in a way that is conceptually not dissimilar from how humans learn. These innovations have allowed technology to make inroads into the legal profession, automating a fairly substantial portion of tasks that were previously performed by lawyers. It has even been suggested that legal work that is repetitive and requires minimal human intervention will soon become the sole province of automated systems.
To embrace the opportunity, the tech sector has partnered with the legal sector to develop cutting-edge technological solutions for legal processes. Many large law firms have created in-house legal incubators to explore, develop and put into practice new technologies. Dentons’ NextLaw initiative is an illustrative example. NextLaw was created based on Dentons’ vision to be at the forefront of the transformation of the practice of law via the use of technology, instead of merely responding to technological disruptions. NextLaw, an affiliate of Dentons, acts as a catalyst of new technological solutions for the legal practice by identifying and supporting promising start-ups in the legal tech sector. Notably, its key ideas are developed in collaboration with, and based on the experience of, Dentons’ lawyers and clients around the globe.
We are seeing similar advancements in the public sector. Governments are beginning to encourage legal tech innovation by creating subsidy programmes designed to promote it. Singapore’s government, for example, in partnership with the Law Society of Singapore, has launched a S$3.68 million legal tech subsidy programme that will pay law firms for developing new applications. The programme is named ‘Tech-celebrate for Law’ and it allows law firms to qualify for financing ranging between S$30,000 (for baseline technology solutions, which include online legal research and document management) and S$100,000 (for advanced technology solutions, which are powered by AI and can help the law practices strengthen capabilities in document assembly, document review, e-discovery and client engagement).
Recognising the power of technology and its transformative effect on the legal profession, a question naturally arises: whether, when and to what extent robots will take over the legal profession. McKinsey Global Institute, the research arm of McKinsey & Co, tackles that question in its report ‘A Future That Works: Automation, Employment, and Productivity’ and provides some consolation. The report concluded that only 23 per cent of a lawyer’s tasks can be automated with current technology. That means that 77 per cent of a lawyer’s tasks are here to stay. At least for now.
What technology can do is unbundle various aspects of legal work. Machine-appropriate tasks could be automated, leaving only the most critical – and highly valued – tasks for the human lawyers. Machines can perform such functions as legal research, transcription, interpretation, translation and even the drafting of factual summaries or chronologies. They can also provide critical information that will assist with strategic decisions. For example, the predictive capabilities of AI can provide an early assessment of the likelihood of success on a claim, the range of damages, the timing of the proceeding in a particular forum and its likely costs. Machines can also assist with arbitrator due diligence, researching an arbitrator’s track record and stated views and predicting his or her likely position on a particular issue.
On the other hand, an essential aspect of a lawyer’s role calls for quintessentially human characteristics, such as judgment, intuition, a sense of equity, diplomacy, discretion and empathy, among many others. No matter how ‘intelligent’, a machine will not be able to emulate them. The art of persuasion, which embodies all of the aforementioned characteristics, is also an inherently human skill.
The unbundling of legal tasks that AI promotes offers substantial benefits both for the client and the legal profession. Clients get the benefit of the reduced cost and speed of automated services, provided the automation does not detract from the quality of legal representation. And lawyers get to devote their time to the most gratifying and intellectually stimulating aspects of legal work – the strategic and executive tasks – while relegating the tedious and mundane ones to machines. In a way, it is because of – and not despite – automation that the value and importance of the human aspect of legal services should only increase.
The shift towards using technology for routine legal tasks may also impact the way in which clients compensate lawyers for their services. As machines take over the labour-intensive aspects of legal representation, traditional hourly billing arrangements may no longer be representative of the value provided by counsel. As the legal profession adapts to the new technology, it will also need to adapt to alternative pricing structures, such as fee arrangements tied to the value provided by the lawyer rather than simply the hours worked.
The impact of AI on arbitrators
In their article ‘The March of the Robots,’ Paul Cohen and Sophie Nappert argue that ‘the way we conduct international arbitrations may be on the verge of fundamental transformation’. Mr Cohen and Ms Nappert point to ‘widespread dissatisfaction among arbitration users with the time and cost of proceedings’ and posit that ‘technology is becoming available and affordable to address users’ grievances about the process’. There is little doubt that technology will continue to transform the dispute resolution process and, indeed, we are already seeing automation of dispute resolution functions in a number of contexts.
Online dispute resolution (ODR) platforms have already been adopted by major e-commerce companies. Companies such as eBay, PayPal and Amazon have created simple, efficient and easy dispute resolution processes for consumer disputes. The eBay platform, for example, handles up to 60 million disputes a year and settles 90 per cent of them with no human input on eBay’s side. Financial institutions are turning to automated dispute resolution as well. ODR systems have been percolating even outside the e-commerce space. Modria is an online system that claims to effectively and efficiently resolve debt claims, landlord–tenant disputes, small claims and even child custody matters.
ODR has also been extensively used to resolve disputes under smart contracts. A smart contract is a contract captured in code that automatically performs the obligations the parties have committed to in the agreement. Smart contracts are self-executing. For example, in a sale contract, once the contracted-for goods or services have been delivered, the smart contract enforces the counterparty’s corresponding obligation by withdrawing the payment from its account. In this closed-circuit environment, basic dispute resolution functions may be automated as well – if the promised goods or services have not been delivered, contractual penalties may be automatically implemented through an on-chain, integrated dispute resolution process.
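The self-executing logic described above can be illustrated with a minimal sketch. The code below is a toy model only – it is not tied to any particular blockchain platform, and all names (the `EscrowContract` class, its fields and methods) are illustrative assumptions, not an actual smart-contract standard:

```python
from dataclasses import dataclass


@dataclass
class EscrowContract:
    """Toy model of a self-executing sale contract with a built-in
    dispute rule: payment is released on delivery, and a penalty
    applies automatically if the deadline passes without delivery."""
    price: int
    penalty: int
    deadline: int           # e.g. a block height or a timestamp
    delivered: bool = False
    settled: bool = False

    def confirm_delivery(self) -> None:
        self.delivered = True

    def settle(self, now: int) -> int:
        """Settles the contract with no human input. Returns the
        amount paid to the seller; a negative value is a penalty
        owed to the buyer."""
        if self.settled:
            raise RuntimeError("already settled")
        self.settled = True
        if self.delivered:
            return self.price       # obligation performed: release payment
        if now > self.deadline:
            return -self.penalty    # breach: automatic on-chain penalty
        raise RuntimeError("cannot settle before deadline without delivery")


# Delivery confirmed before the deadline: the price is released.
c1 = EscrowContract(price=100, penalty=20, deadline=50)
c1.confirm_delivery()
print(c1.settle(now=40))   # 100

# Deadline passed with no delivery: the penalty is enforced.
c2 = EscrowContract(price=100, penalty=20, deadline=50)
print(c2.settle(now=60))   # -20
```

The point of the sketch is the closed circuit: every outcome – performance, breach and penalty – is resolved by the contract’s own code, which is what makes basic dispute resolution in this environment amenable to automation.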
Automated dispute resolution has clear and ample advantages. It offers improved efficiency, expedited processing times, lower costs and increased convenience – all of which expand users’ opportunities for access to justice. It also offers highly accurate results when it comes to the processing of large volumes of information, thereby increasing users’ trust and comfort with the underlying transaction. It is, therefore, not surprising that automated dispute resolution has proved highly advantageous in the context of high-volume, low-cost controversies where the speed and convenience of resolution take precedence in the users’ eyes over obtaining a perfectly correct result.
But is it equally suitable for complex, high-stakes disputes? Let us consider how automated decision-making impacts the validity of resulting awards outside of self-contained and self-effectuating systems of e-commerce and smart contracts.
Enforceability of arbitral awards constitutes the most fundamental purpose of the arbitral process. Indeed, the parties’ efforts in resolving disputes through arbitration would be rendered futile if the resulting award turns out to be unenforceable. But the enforceability of awards rendered by an algorithm – rather than a human arbitrator – is not entirely clear within the existing legal infrastructure. Of course, neither of the two instruments most frequently used for the enforcement of arbitral awards – the New York Convention on the Recognition and Enforcement of Foreign Arbitral Awards (New York Convention) and the Washington Convention on the Settlement of Investment Disputes between States and Nationals of Other States (ICSID Convention) – expressly states that arbitral awards must be rendered by humans. This is unsurprising given that the New York Convention’s adoption in 1958 and the ICSID Convention’s adoption in 1965 well predated the technological revolution we are experiencing today. Despite the absence of express rules proscribing machine-presided arbitrations, the underlying requirements for the arbitral process and the arbitral award may give rise to an argument that awards issued by machines are not enforceable.
For example, both the New York Convention and the ICSID Convention require reasoned awards. An algorithm’s decision-making process fundamentally differs from that of a human being. Modern technology can process information through arithmetic reasoning, decision tree reasoning, analogical reasoning or data mining – currently the most sophisticated process, which involves culling information from a wide spectrum of data sources. Although AI is capable of following a prescribed type of reasoning in each particular case, it will not necessarily be able to pick the correct type or shift between them as necessary. The algorithm would need to be programmed in a certain way to allow for that selection to occur. This raises the question of whether the resulting award is one reasoned by the algorithm or by the human behind the programme.
Similarly, both the New York Convention and the ICSID Convention provide that the award may be refused enforcement or annulled, respectively, if the composition of the arbitral tribunal is not in accordance with the parties’ agreement. The ability to select arbitrators is, indeed, one of the most significant features of international arbitration. In the traditional arbitration setting, the process for selecting human arbitrators is well established – the default method under many rules is that each party appoints its own arbitrator and then the two party-appointed arbitrators select the chair. But how is that process to work in the context of machine arbitration? Is each party to code its own arbitrator? If so, would each party be given the opportunity to inspect the ‘co-arbitrator’ coded by the other party and, if so, does that require each party’s team to include not only legal counsel but also programmers? Would the parties jointly programme the ‘chair’ and, if so, at what stage of the arbitration are the parties to agree on its code? Or would the parties rely on robo-arbitrators obtained from a third party, say, an arbitral institution? How would any disputes between the parties over the proposed code be handled? These questions demonstrate the many unknowns embedded in the robo-arbitration process, which may give rise to an argument by the losing party that the tribunal was not constituted in accordance with the parties’ agreement, thereby jeopardising the enforceability of the arbitral award.
Equally important, due process is a cornerstone of any dispute resolution mechanism. The effectiveness and viability of arbitration and the enforceability of the resulting award hinge on the arbitrator’s ability to ensure due process. But are robo-arbitrators capable of doing so? Guerrilla tactics and abusive conduct are becoming all the more frequent in arbitration. These practices rarely follow well-established tracks, as the culprit party gambles on being able to persuade the tribunal that its situation is genuinely exceptional. A question arises whether a machine, which is programmed either to respond to a specific trigger in a pre-determined way or to ‘learn’ from analogous situations, is able to handle the non-linear tasks of discerning a genuinely exceptional situation from an abusive tactic or of drawing an adverse inference.
Although errors of law or fact generally do not constitute grounds for refusal of enforcement under the New York Convention or annulment under the ICSID Convention, they may impact the legitimacy of the robo-arbitrator’s award, which, in turn, may have a negative effect on the legitimacy of the entire arbitration process.
An old adage holds that arbitration is only as good as the arbitrators. Robo-arbitrators are only as good as the programme data on which they are based. Given that robo-arbitrators would have superior data processing capabilities, some commentators have suggested that they may perform better than their human counterparts. After all, a robo-arbitrator will have no personal agenda, no preferences, no prejudices. It will not get moody, tired or cranky. It will efficiently and accurately digest large volumes of factual data and legal authorities. Yet it will also lack the one key characteristic that is essential to virtually any decision-making process: sound human judgment formed by real-world experience. That deficiency will impact the robo-arbitrator’s decisions in a number of ways.
First, it is not clear how the robo-arbitrator will be able to apply intangible concepts, such as, for example, good faith, reasonableness, materiality or best efforts. Such concepts are frequently incorporated into legal norms and the application of a given norm may turn on these concepts. A robo-arbitrator that has difficulty analysing and applying such concepts will be unable to correctly decide the application of the relevant legal norm.
Second, as noted above, new generation machines have deep learning capabilities allowing them to learn the ropes while performing a task. However, in the context of international arbitration, the process of ‘deep learning’ is complicated by the confidentiality of the material from which the robo-arbitrator is to learn. Arbitral awards rendered in investment treaty cases receive more publicity than their counterparts in commercial cases. Even then, the awards themselves do not provide the full picture of the issues arising in a particular arbitration and, therefore, cannot constitute a reliable guidepost for the robo-arbitrator on the thinking and reasoning of the human mind.
Third, even where some information can be gleaned from publicly available material, the absence of the doctrine of stare decisis occasionally leads to inconsistent, and sometimes contradictory, arbitral decisions. In the absence of a formal hierarchy of arbitral tribunals, the robo-arbitrator will be unable to ascribe weight to such contradictory decisions to determine which one is to take precedence, which, in turn, hinders the robot’s ability to learn the rules of the game. While this predicament is less notable in the context of commercial arbitrations, where disputes are typically decided under the substantive law of a particular jurisdiction that a machine can ‘learn’ from publicly available sources, in practice, thorny issues oftentimes involve either procedural aspects or ‘twilight’ issues (a term coined by Professor George Bermann) that black letter rules may not necessarily reach.
Automation of the arbitration process has an undeniable superficial appeal. Not only does the prospect of it sound incredibly progressive, but it also promises to resolve well-recognised drawbacks of the arbitration process, such as its increasing time and costs. These goals are laudable – but the practicality of referring complex disputes to robo-arbitrators currently raises more questions than it offers solutions.
That notwithstanding, although complete automation of arbitral tribunals is an uncertain proposition, the use of AI to facilitate arbitrators’ performance of their adjudicatory functions should be encouraged. There are a number of ways legal technology can – and should – be incorporated into the work of arbitral tribunals.
First, many arbitral tribunals rely on administrative secretaries for their work. While it is well established that the secretaries are not allowed to participate in the decision-making process, they are allowed to assist with technical and administrative aspects of the case. Many tasks that were previously delegated to administrative secretaries could now be delegated to AI. Research, organisation of the record, drafting of factual chronologies and other non-substantive tasks could be automated under the tribunal chair’s supervision.
Second, arbitral tribunals may consider employing analytical algorithms to facilitate, or confirm, the arbitral tribunal’s decision. Such processes have already been adopted in a number of jurisdictions. A well-publicised case in the United States, Loomis v Wisconsin, involved a somewhat similar delegation. There, a judge in Wisconsin relied on a proprietary risk assessment software used by the state in sentencing Eric Loomis to six years in prison. Mr Loomis challenged the use of the technology, alleging that it violated his right to due process because he could not challenge its accuracy. Mr Loomis lost because the reviewing court found that the sentencing judge had not relied too heavily on the software and that other factors supported the conclusion the software reached.
Such use of technology has the obvious advantage of heightened confidence in the tribunal’s decision, but it, too, should be used with caution. Humans tend to give strong deference to machine findings, on the premise that a machine is less prone to making mistakes. Such deference would be dangerous in the arbitral context, as it could lead to the creation of robo-arbitrations with a human face. It is therefore important for the arbitral tribunal to be cognisant of the need to perform its own, independent analysis. A practical solution that would allow the arbitral tribunal to minimise any undue influence is to use AI to confirm or ‘stress test’ the arbitrator’s findings (as opposed to adopting the Loomis court’s approach, whereby the court confirmed the machine’s finding).
Outlook for the future
AI has tremendous power and it will continue to transform the world we live in. It will also continue to disrupt the way we conduct business. Although the full extent of AI’s impact on our lives in the next decade is unclear, two things are certain: the further advancement of AI is inevitable, and its capabilities are not infinite. Rather than resisting AI, legal professionals must find a way to harness its power and learn to utilise it to their clients’ and users’ advantage.
The views or opinions expressed in the article are solely those of the authors and they do not represent the views of Dentons or its clients. The article is prepared solely for informational purposes. It is not intended as legal advice and should not be taken as such.