Report: AI in the Law
November 8, 2018
Presenters: Nicolas Economou (Director, The Future Society, CEO, H5, Co-chair, Law Committee, IEEE Global Initiative), Jiyu Zhang (Associate Professor, Renmin University), Lee Tiedrich (AI Initiative Co-Chair, Covington & Burling), Jia Gu (Researcher, Legal Intelligence in China), Robert Silvers (Partner, Paul Hastings, Former Assistant Secretary for Cybersecurity at U.S. Department of Homeland Security), Mike Philips (Associate General Counsel, Microsoft), Deepa Krishna (Director of Business Development, ClearAccessIP)
AI has steadily ascended into the American and Chinese legal systems. Because citizens’ liberty, wellbeing, privacy, and rights depend on the strength of a nation’s judiciary, many are apprehensive about AI’s increasingly prominent role and its implications for due process. Today, the rule of law by humans and for humans, which Nicolas Economou characterized as the foundation of civilized society, is increasingly challenged as decisions that affect humans are progressively surrendered to machines in the world’s legal systems. While AI has significant potential to better society and the administration of justice, its deployment must follow norms that ensure lawyers, courts, other institutions of state, and civil society at large can trust it.
The “AI in the Law” panelists shared perspectives on how AI can benefit the American and Chinese justice systems while its risks are mitigated. Noting that technology has the potential to improve both access to justice and the quality of justice, the speakers collectively highlighted that artificial intelligence systems, in order to be trusted in the legal system, must be appropriately transparent, effective at the specific purpose for which they are intended, competently operated, and accountable. AI systems (and their operators) deployed in support of the administration of justice and the enforcement of the law must remain under the ultimate supervision and control of legal practitioners and courts.
KEY TAKEAWAYS
China’s judiciary applies AI to improve efficiency and fairness. China faces challenges on both fronts. The most recent round of legal reforms reduced the number of practicing judges in China by a third, even as the number of cases sent to Chinese courts grows by roughly 30 percent annually. At the same time, recent studies show that judges in different geographical regions of China reach dissimilar judgements in similar cases. Given the significant mismatch between the number of judges and the volume of litigation, and the uneven judgements rendered across the nation, the Supreme People’s Court (SPC) has taken a proactive role in introducing AI into China’s judicial system. To address these problems, the Supreme People’s Procuratorate and the SPC are developing intelligent courts and intelligent procuratorates.
Technology can streamline tedious court processes. Chinese judges are overwhelmed by the volume of litigation they confront; Beijing appellate court judges conclude only a third of the cases presented to them each year. To improve this statistic, Jiyu Zhang notes, the SPC is introducing automated technologies to take over cumbersome tasks. Dictation technologies that transcribe courtroom speech into text will improve efficiency. Automated tools that not only correct errors in judicial documents but also generate parts of those documents will free judges to spend their time on more meaningful work.
Artificial intelligence can provide judges with more information. The SPC hopes to apply AI’s predictive and research capabilities in its intelligent courts. By forecasting the number of cases of different kinds, judges and court staff can allocate resources more effectively. Improved court search capabilities and an AI-driven case-profiling system will not only make judicial processes more efficient but also give judges in different regions access to similar information, supporting more consistent judgements. Legal profiling systems, Jia Gu notes, reduce information gaps and help judges find similar cases with similar arguments, assisting legal practitioners in sifting through large quantities of data. Though these technologies are contested, they allow judges to search comparable cases electronically, provide sentencing guidelines, and help complete risk evaluations. A minimal illustration of similar-case retrieval appears below.
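To make similar-case retrieval concrete, here is a minimal sketch that ranks past cases by textual similarity to a new case using TF-IDF vectors and cosine similarity. The corpus and query are invented for illustration, and scikit-learn is assumed; this is not a description of the SPC’s actual system, whose design is not publicly documented.

```python
# Illustrative similar-case retrieval via TF-IDF cosine similarity.
# The case summaries and query below are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    "theft of motor vehicle, first offense, restitution paid",
    "burglary of residence at night, prior convictions",
    "vehicle theft with prior property-crime convictions",
]

query = "defendant charged with motor vehicle theft, one prior conviction"

vectorizer = TfidfVectorizer()
case_matrix = vectorizer.fit_transform(past_cases)  # vectorize the corpus
query_vec = vectorizer.transform([query])           # vectorize the new case

# Rank past cases by similarity to the query case, highest first.
scores = cosine_similarity(query_vec, case_matrix).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {past_cases[idx]}")
```

Real systems use far richer representations than bag-of-words, but the core idea is the same: turn case records into comparable vectors so that judges in any region retrieve the same precedents for the same facts.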
Attention to data can overcome biases. Freedom from bias is essential to any justice system. In the United States in particular, debate centers on the extent to which AI applications must reduce bias in the outputs they produce. Lee Tiedrich notes that as the technology advances, communities must comprehensively analyze AI’s impact on bias in the law. Since bias is not only morally reprehensible, as Robert Silvers states, but also illegal under fair employment, housing, and lending laws, robust protections are needed to ensure that AI does not replicate or exacerbate existing biases. As AI becomes more prominent in legal systems, data is vital to detecting and containing bias.
Legal data and information need to be accessible. For citizens to participate in increasingly AI-enabled legal systems and to be fully aware of what AI systems are doing, as Mike Philips highlighted, individuals need access to adequate data and information within their legal systems. China’s SPC has published all of its judgements since 2013 (approximately 48 million cases) online, giving the public access to case outcomes, and has opened a trial network offering access to videos of court proceedings. As data barriers fall and individuals play more involved roles in the legal system, knowing an AI system’s data inputs will help determine whether its outputs are biased.
Algorithms are only as good as their data. As AI enters the courtroom, defendants are increasingly assessed by algorithms (for bail decisions, for example). If the underlying data carries unwanted bias, the algorithm trained on it is likely to inherit that bias and produce erroneous outcomes. Moreover, because raw data must often be labeled by human subject matter experts, undesirable human bias can be introduced at that stage as well. This risk underscores the need for great caution in setting the norms under which courts and judges should feel confident relying on assessments made by artificial intelligence. A simple audit of the kind sketched below can surface such disparities.
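As one hedged illustration of what detecting bias can mean in practice, the sketch below computes false-positive rates by group on invented risk-assessment records, in the spirit of ProPublica’s analysis of the COMPAS tool (referenced later in this report). All records are fabricated for the example.

```python
# Illustrative bias audit: compare false-positive rates across two groups.
# A false positive = predicted high risk, but the person did not reoffend.
# The records below are invented; they come from no real assessment tool.
from collections import defaultdict

# (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

false_positives = defaultdict(int)  # flagged high risk, did not reoffend
negatives = defaultdict(int)        # everyone who did not reoffend

for group, predicted_high, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"Group {group}: false-positive rate = {rate:.2f}")
# A large gap between groups signals that the tool's errors fall unevenly,
# even if its overall accuracy looks acceptable.
```

The point of such an audit is that aggregate accuracy can mask unequal error rates, which is exactly the kind of disparity fair lending, housing, and employment laws prohibit.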
Qualified experts should operate AI and interpret the data. AI can be used in the service of the law, but it is a scientific discipline distinct from the law, one that requires specific expertise to be competently operated. It is important that AI operators, as well as data labelers, have the right domain expertise. As noted, AI often requires data labeling, which should be entrusted to those qualified to do it. Algorithms involved in legal decision-making likewise require expert human operation and interpretation if AI is to be measurably effective and its findings understandable. Experts with the appropriate scientific, technical, and subject-matter competencies thus play a key role in ensuring the effective and safe operation of artificial intelligence systems in the law.
Artificial intelligence systems complicate liability risks. Companies developing AI systems must account for the technology’s accompanying risks and liabilities. Silvers links liability and regulatory exposure to companies’ AI-centered business strategies. Because AI-driven products can adversely affect people, individuals will increasingly come forward claiming that artificial intelligence injured or disadvantaged them, with State v. Loomis serving as an example; companies need to fortify themselves against such risks through contracts and procedures.
AI shifts liability to companies. Today, more than 90 percent of car accidents are caused by human error, and individuals are typically held legally accountable. But as automated vehicles arrive and societies come to rely on AI systems, humans adopt more passive roles, raising the question of how responsibility for accidents should be apportioned. With driverless cars and other fully automated technologies, liability may shift from humans to the technology. There are, however, underlying complexities. If liability shifts to companies, Silvers asks, “which companies?” AI technologies have extensive value chains, comprising OEMs, sensor manufacturers, infotainment companies, operating-system makers, and others; a single product such as an autonomous vehicle may produce a vastly distributed network of liability.
Companies need to develop protection and containment architecture. To enjoy the benefits of AI while monetizing its technologies, companies should build an “architecture of protection and containment.” Contracts, in turn, play an important role in shifting liability onto other companies in the AI ecosystem. As artificial intelligence grows, not only does the legal system become more automated, but judiciaries also face harder decisions in the courts. The absence of established legal rules accounting for AI in the judicial system causes uncertainty, but it also enables creativity, since there are few mandatory rules.
Societal values of transparency in decision-making and protection of intellectual property must be balanced. A salient challenge that increased reliance on artificial intelligence poses for the law is balancing transparency against intellectual property rights. To illustrate, panelists cited the Loomis matter, a Wisconsin case in which a man received a lengthy sentence based in part on an algorithm that assessed him to be at high risk of recidivism. The court denied the defendant’s request to examine the underlying data and algorithms, in part on intellectual property grounds, a decision affirmed on appeal by the Wisconsin Supreme Court. While a balancing of many complex societal values and case-specific facts led to these decisions, they nonetheless leave open the fundamental question raised by the deployment of AI in the legal system: on what grounds should society trust that such systems are effective for the purpose for which they are used? In common parlance: “Does it work? And how do we know?” Providing sufficient information to the institutions of state and to the general public to enable adequately informed action is an important principle. At the same time, society must protect entrepreneurship and innovation. Balancing these two considerations with complementary conceptual instruments, including evidence of AI’s effectiveness and accountability for its operators, is an important topic for further exploration.
The legal community must conscientiously harness AI’s benefits in the future. Legal practitioners need to do their research to ensure that AI is deployed in a manner that improves efficiency and is consistent with the profession’s ideals and obligations. Gu highlights China’s research into overcoming transparency, efficiency, privacy, and fairness issues in its legal system, drawing on the Loomis case, ProPublica’s “Machine Bias” article, and the IEEE Global Initiative’s reports on autonomous and intelligent systems, among others. Understanding the changing privacy frameworks in California and Europe, as well as past AI legal cases and the technologies themselves, will better equip legal systems to benefit from AI.
AI usage should align with lawyers’ professional ethics obligations. Lawyers are subject to ethical obligations; in the United States in particular, attorneys owe duties of competence and confidentiality and a duty to represent clients honestly under the rule of law. When deciding how to use AI tools in their practice, lawyers must do their due diligence to understand the available tools, their strengths and limitations, and the specific competence needed to operate such systems safely and effectively. Beyond choosing the right AI technologies, legal practitioners must make informed decisions when integrating such systems into practice. Ultimate responsibility for the incorporation and effective operation of AI systems used in the provision of legal services rests with lawyers, not with the AI systems.
Businesses and lawyers must adapt to new privacy and security frameworks. As privacy laws proliferate globally, the large supply of data that fuels AI brings potentially serious privacy and security considerations. Though many fear that these laws will prevent innovative data usage, companies can still monetize data and use it to fuel machine learning and other AI applications; they must simply be more conscientious about how they operate within the legal framework. Specifically, companies need to adopt consumer notices, obtain consent to use customer data, provide certain opt-in and opt-out rights, and give individuals the right to erasure. On data security, companies need to guard against “AI poisoning,” cyber-attacks that impair the integrity of data pools; a minimal illustration follows. As data privacy and security sit at the forefront of legal AI discussions, critical information systems need strong cyber protections, and companies must learn to operate within a growing legal framework that prioritizes privacy.
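To make “AI poisoning” concrete, the sketch below shows how flipping a fraction of training labels, one crude form of poisoning, degrades a simple classifier. The synthetic dataset and logistic-regression model are arbitrary stand-ins chosen for brevity (scikit-learn is assumed); real attacks and defenses are considerably more subtle.

```python
# Illustrative label-flip "data poisoning": corrupting 30% of training
# labels degrades a simple classifier. Dataset and model are arbitrary
# stand-ins for this sketch, not a model of any real attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: train on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poison: flip 30% of the training labels at random.
rng = np.random.default_rng(0)
flipped = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flipped] = 1 - y_poisoned[flipped]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```

The practical lesson matches the panel’s point: because a model silently absorbs whatever its data pool contains, integrity controls on that pool are as much a security requirement as firewalls around the system itself.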
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.