Why Mediation Will Always Need a Human Pilot
By David Jonson
Lessons from Manned Spaceflight for the Age of AI
Like many Americans, I recently watched the safe return of the astronauts on Artemis II and was thankful for their successful 10-day mission and accomplishments. What seems to have transfixed so many viewers was that, unlike the repetitive, robotic launch and deployment of dozens of satellites on an uncrewed commercial rocket, here four human lives were at stake as an integral part of a process that took them to work in the hostile environment of space and then safely back to Earth. In spite of the substantial technological advances made since the last lunar trip over 50 years ago, we still have humans in the machine, not as a matter of tradition, but as spirited yet rational creatures with different types of intelligence than a computer. The crew’s presence was a connection to us, a differentiator. Because they are four of the most capable of us, we cared more about the outcome than we would have without humans aboard.
The same principle of vested human engagement applies to mediations. There is a moment in every mediation when something shifts either quietly or profoundly. The party who has stoically sat rigidly still for hours suddenly starts to relax. A knowing glance or eye contact is made across a table that initially felt as wide as a canyon. A door that seemed permanently closed opens, almost imperceptibly, with barely a crack. No algorithm predicted these nuances, and no software engineered them. Instead, it was a human being reading the room with all senses and instincts available and honed from decades of life experience and personal interactions, who helped make it happen.
As AI continues to transform the practice of dispute resolution, it is worth pausing to ask a fundamental question: what, exactly, can a machine or algorithm never do? The answer has considerable implications not only for how we use AI in mediation, but for why the human mediator is not merely a tradition worth preserving, but an irreplaceable necessity.
Houston, We Have a (Different Kind of) Problem
From the earliest Mercury, Gemini and Apollo missions, to the Space Shuttle and International Space Station, to today’s Artemis II, engineers and mission planners have consistently debated how much the human pilot should do, versus how much the computers and software should handle.
Their conclusion has been refined over decades of hard experience using both. Computers and automated systems handle the habitual, mind-numbingly repetitive work with astonishing speed and precision. They calculate thrust vectors and astronautical positioning, monitor thousands of sensor readings simultaneously, and execute split-second corrective maneuvers that no human nervous system could replicate, much less with zero downtime. No one seriously proposes sending astronauts into orbit without those systems, because they are essential to the operation.
Humans, however, remain in the cockpit neither as passengers nor as a ceremonial presence, but as pilots, mission commanders and specialists, with genuine authority and ultimate override capability. Space is a ruthless and unforgiving environment of infinite variables, unexpected combinations, and moments that fall outside the envelope of anything previously programmed or predicted. When Apollo 13’s oxygen tank ruptured 200,000 miles from Earth in April 1970, there was no contingency plan for it in the emergency procedures manual. It was a cascade of compounding crises that required human creativity, improvisation, emotional resilience, and judgment under extraordinary pressure. The onboard computers alone could not have brought those three men home alive. It was the people, both in the capsule and on the ground, who had the greatest vested interest and thus made the critical difference.
The lesson is not that computers or any form of AI are inadequate. It is that humans, machines and software excel at categorically different things, and that the highest-stakes situations demand the irreplaceable qualities of the human mind and spirit. Mediation is precisely such an endeavor.
What the Machine Rightfully Does Well
Artificial intelligence has earned a legitimate and valuable place in the modern mediator’s toolkit. There is no wisdom in resisting what technology does genuinely well.
AI can review and organize voluminous case documents in minutes, flagging relevant precedents and summarizing and distinguishing positions with impressive accuracy. It can manage administrative functions such as scheduling across multiple parties and counsel, coordinate the logistical choreography of complex multi-session mediations, and generate comprehensive intake questionnaires tailored to the nature of the dispute. It can draft template agreements, calculate damages ranges based on comparable settlements, identify key issues from submitted materials, and even flag potential areas of common ground based on party statements, all before the mediator has even made introductory remarks.
These functions are the thrust vectors, astro-positioning and sensor readings of mediation practice. They demand precision, involve enormous volume, and are time-consuming and mentally taxing if done by hand. A mediator who leverages AI for these functions is not cutting corners but is conserving a finite and valuable resource – the human attention and energy that will be needed in the substantive parts of the mediation.
Just as NASA’s flight computers free astronauts to focus on the mission-critical decisions that require human judgment, AI frees the mediator to focus on the work that a human can do better.
Where the Algorithms End and the Mediator Begins
Neither an AI system nor the computer running it can feel and account for the emotional temperature or stress in a room. Nor can AI detect that a party’s defiance is actually bluster masking fear rather than a projection of genuine confidence, or that the slightest edge in an attorney’s voice signals desperation rather than strength. No algorithm can sense that the right moment has finally arrived to gradually, almost imperceptibly shift the conversation away from entrenched positions to the underlying interests, and know precisely how to do it with these particular parties, in this particular context, on this particular day.
Disputes are human events, not cold, rational data sets. Behind every commercial contract case is a relationship that broke down or a promise that went unfulfilled. In every employment dispute there is a person who feels unseen, unheard, or betrayed. Virtually every family matter is a web of history, love, resentment, fear, and hope so intricate that no model trained on prior cases could begin to map it accurately for the specific case at hand.
The mediator brings presence and the humane element, which no AI system can accurately replicate. The human mediator has a willingness to sit with discomfort and not rush past it; an ability to ask a question whose value lies not so much in the answer it generates as in the deeper reflection it prompts in the person being asked; and a capacity to extend genuine empathy, not simulated, not scripted, in a way that causes an apprehensive party to feel, perhaps for the first time in a years-long conflict, that someone in the process actually understands their experience and cares about helping to get it resolved.
Mediators do not merely layer soft skills on top of a technical process. They are the integrators of all the processes, feelings and factors that move cases from impasse to resolution.
The Override Authority That Matters Most
In crewed spacecraft, the human override authority is not merely symbolic. It is a design principle built into the system precisely because engineers know that no automated system, however sophisticated, even with redundant parallel processing, can be trusted with unchecked authority in an environment as complex, dynamic and unforgiving as space.
Mediation demands the same design principle. AI can recommend, analyze, prepare, organize and even suggest courses of action. The mediator, however, must retain and exercise full sensitivity to the human aspects and dimensions of the process. When a computational tool suggests that the parties are within settlement range based on their submitted figures, the mediator must be willing to set that aside entirely or modify it considerably if a read of the room indicates the psychological foundation for that phase has not been properly established. When an AI-generated summary categorizes one party’s core concern as financial, the mediator must be free to recognize, from even a brief conversation, that the real issue is actually rooted in human dignity, and then must redirect accordingly.
The machine informs, but the mediator is the fully engaged catalyst to help the parties decide. This hierarchy is neither a concession to nostalgia nor professional protectionism. Rather, it is a practical necessity born of the same wisdom that keeps human pilots in the cockpit of the most sophisticated spacecraft ever conceived.
People Need People
Parties to a dispute are far more likely to accept an outcome, even a difficult one, when they feel genuinely heard by another human being. The legitimacy of the process, in their eyes, is inseparable from the human presence at its center. Just as a crewed space mission is far more emotionally engaging than an uncrewed launch of commercial payloads, a mediation settlement reached with the help of an empathetic, evaluative or facilitative mediator carries a different emotional weight than one produced by a coolly calculating, optimized AI recommendation engine, even if the numbers are identical. The human-influenced one seems more credible and is therefore more likely to be accepted.
In these matters, people tend to trust the mediator, but that trust must be continually earned. They commit to outcomes they helped forge, in conversations they experienced as real. The presence of a skilled human mediator is not incidental to that trust; it is the source of it.
We do not put people on spacecraft simply because we have always done so, or because no one has thought to question the practice. We put people on spacecraft because people can observe, orient, decide and act under pressure, in complexity, at the moments of highest consequence, in ways that no machine has ever matched, and that no machine is close to matching. The same is true of mediation.
Keep the Pilot in the Cockpit
The future of dispute resolution is neither a wholesale, Luddite rejection of AI nor its uncritical and unquestioned embrace. It is a thoughtful, symbiotic relationship, one in which technology rapidly and efficiently handles the routinely repeatable tasks, thereby freeing human mediators to do what they have always done best: help people find their way through conflict and toward resolution, one carefully guided, inimitable human conversation at a time.
AI will make legal counsel and mediators more efficient, better prepared, and more comprehensively informed. What it will not do, and what it cannot do, is replace the judgment, the empathy, the instinct, and the presence that define the craft.
The spacecraft needs its AI and computers, but ultimately, it also needs its human pilot.