AI Mediation — Ethically Questionable?
By Steven Nappi
Artificial Intelligence (“AI”) has become an integral part of modern society. In the legal field, however, it must be used with the utmost discretion.[1] Earlier this year, an attorney from Utah used ChatGPT to draft a brief that cited a case that does not exist.[2] In response, the Utah Court of Appeals sanctioned the attorney.[3] Nevertheless, at the 2025 ABA TECHSHOW, a legal tech startup launched an artificial intelligence mediator.[4]
Evidently, an attorney cannot ethically rely on unverified AI output to write a brief defending a client in litigation.[5] Yet the underlying question remains: why should AI replace human mediators?
The use of AI in legal practice is a contested topic. Scholars make strong arguments for using AI in mediation, citing efficiency and cost-effectiveness.[6] Skeptics counter that using AI to mediate a dispute undermines transparency, confidentiality, and neutrality and, most importantly, strips human emotion and empathy from the process.[7] Those few benefits are significantly outweighed by a multitude of drawbacks.
Confidentiality is a major selling point of mediation and lies at its ethical core.[8] AI mediation cannot guarantee this essential protection.[9] Fairness to disputants is deeply rooted in confidentiality: absent that protection, mediation could be “used as a discovery device against legally naive persons if the mediation communications were not inadmissible in subsequent judicial actions,” giving a party with more knowledge or resources an avenue to extract information.[10] Furthermore, when legal practitioners use AI, previously privileged information entered into those systems may be stored, breaching the core promise of mediation: confidentiality.[11] Even more problematic, AI often cannot distinguish confidential inputs from nonconfidential ones,[12] compromising the confidentiality that disputants expect.[13] A cybersecurity expert predicts that AI regulation will grow stricter and that organizations will have to demonstrate a genuine commitment to ethical data handling.[14] Only if AI becomes secure and confidential should AI mediation bots even be considered.
Another ethical standard of mediation is neutrality.[15] A mediator has a responsibility to remain unbiased and to favor neither party.[16] AI, however, has demonstrated biases of its own, with one study showing that AI models are shaped by the “cultural patterns of the data they were trained on.”[17] AI reflects the same biases humans hold, and can even amplify them.[18] In one study, AI perpetuated negative stereotypes about marginalized groups, again undermining the fairness rationale of mediation.[19] And while humans are also biased, a mediator who fails to remain neutral can be reported and sanctioned.[20] Using an AI mediator therefore directly violates the duty of neutrality.[21]
One final consideration is that empathy and human emotion are quintessential qualities of a good mediator.[22] Empathy, the ability to grasp and experience another’s feelings, is vital to resolving a dispute between potentially contentious parties and guiding them to common ground.[23] The human touch of breaking down barriers between parties through effective communication, active listening, understanding the message conveyed (not merely the words spoken), and observing nonverbal cues is an essential part of effectively mediated disputes.[24] AI, by contrast, lacks emotion, empathy, and a moral compass.[25] While AI can make logistical decisions effectively and may improve the efficiency of mediation, removing the human element by handing the process to AI would create more problems than it solves.[26]
_____________________
[1] See Maya Young, U.S. Lawyer Sanctioned After Being Caught Using ChatGPT for Court Brief, Guardian (May 31, 2025), https://www.theguardian.com/us-news/2025/may/31/utah-lawyer-chatgpt-ai-court-brief [https://perma.cc/4DP8-T7B9]; see Mansi Jain Garg, AI and Mediation: A Threat or Helpful Tool for Mediators — An Indian Perspective, 3 Jus Corpus L. J. 175, 175–76 (2023).
[2] Young, supra note 1.
[3] Id.
[4] Bot Mediation: AI-Powered Mediation for Efficient Legal Dispute Resolution, Law Tech. Today (Mar. 31, 2025), https://www.americanbar.org/groups/law_practice/resources/law-technology-today/2025/ai-powered-mediation-for-efficient-legal-dispute-resolution/ [https://perma.cc/YNY8-ADNX].
[5] Young, supra note 1.
[6] In India, the government “has been working on developing the use of artificial intelligence and increasing AI's application in the legal domain,” a development expected to improve the efficiency of online dispute resolution (ODR) and thereby enhance alternative dispute resolution (ADR). Garg, supra note 1, at 180; Law Tech. Today, supra note 4.
[7] Garg, supra note 1, at 187–88; Ethical Considerations in AI-Assisted Mediation, Schreiber ADR (Mar. 26, 2025), https://www.schreiberadr.com/ethical-considerations-in-ai-assisted-mediation [https://perma.cc/LX5Q-BQ8C].
[8] Ethical Considerations in AI-Assisted Mediation, supra note 7; Advantages of Mediation, U.S. Off. of Special Couns., https://osc.gov/Services/Pages/ADR-Advantages.aspx [https://perma.cc/T3T2-YRLM].
[9] See Davinia Cutajar, Balancing Efficiency and Privacy: AI’s Impact on Legal Confidentiality and Privilege, Int’l Bar Ass’n (Nov. 29, 2024), https://www.ibanet.org/balancing-efficiency-and-privacy-AI-impact-on-legal-confidentiality-and-privilege [https://perma.cc/PMG6-P936].
[10] Lawrence R. Freedman & Michael L. Prigoff, Confidentiality in Mediation: The Need for Protection, 2 Ohio St. J. on Disp. Resol. 37, 38 (1986); see Ethical Considerations in AI-Assisted Mediation, supra note 7 (noting that mediators have a duty to ensure the process is fair and that breaching that duty is an ethical violation).
[11] Cutajar, supra note 9.
[12] Id.
[13] Id.
[14] Orion Czarnecki, a Cybersecurity Practice Head, analyzed the intersection of privacy and AI. He believes that, in the future, merely complying with the law when using AI will not be adequate; rather, organizations will need to demonstrate a “commitment to ethical data handling, thinking carefully about how data is collected, used, and shared, and prioritizing fairness and transparency.” Orion Czarnecki, The Future of Privacy and AI: What’s Next?, Stefanini Grp. (Jan. 25, 2025), https://stefanini.com/en/insights/articles/the-future-of-privacy-and-ai-whats-next [https://perma.cc/Z374-JD8Y].
[15] Ethical Considerations in AI-Assisted Mediation, supra note 7.
[16] Id.
[17] A study spearheaded by MIT’s Jackson Lu found that “cultural tendencies embedded within AI models shape and filter the responses that AI provides.” Dylan Walsh, Generative AI Isn’t Culturally Neutral, Research Finds, MIT Sloan Sch. of Mgmt. (Sept. 22, 2025), https://mitsloan.mit.edu/ideas-made-to-matter/generative-ai-isnt-culturally-neutral-research-finds [https://perma.cc/S4YQ-YKZJ]. Researchers posed the same sets of questions in English and Chinese and found that the bot’s responses mirrored the cultural leanings of the respective language. Id.
[18] Leonardo Nicoletti & Dina Bass, Humans Are Biased. Generative AI Is Even Worse, Bloomberg (June 9, 2023), https://www.bloomberg.com/graphics/2023-generative-ai-bias/ [https://perma.cc/9RQW-52P7].
[19] Id.; see Ethical Considerations in AI-Assisted Mediation, supra note 7.
[20] Deborah S. Ballati & Patricia H. Thompson, When Mediation Conduct Goes Wrong, JAMS (Oct. 27, 2022), https://www.jamsadr.com/blog/2022/when-mediation-conduct-goes-wrong [https://perma.cc/3BPV-Y9TJ].
[21] Ethical Considerations in AI-Assisted Mediation, supra note 7.
[22] See Garg, supra note 1, at 188.
[23] Joan B. Kessler, The Importance of Empathy and Effective Listening in Arbitration and Mediation, JAMS (June 4, 2021), https://www.jamsadr.com/blog/2021/the-importance-of-empathy-and-effective-listening-in-arbitration-and-mediation [https://perma.cc/55CZ-NUS3].
[24] Id.
[25] PTI, Dark Side of AI: Potential Consequences of Emotionless Machines Could Impact Humanity, Econ. Times (Aug. 3, 2023), https://economictimes.indiatimes.com/magazines/panache/dark-side-of-ai-potential-consequences-of-emotionless-machines-could-impact-humanity/articleshow/102393338.cms [https://perma.cc/A597-3R2B].
[26] See id.; see Kessler, supra note 23; see Garg, supra note 1, at 180.

