Can AI Solve the Middle East Crisis?
The value of objectivity when emotion rules the debate

A few weeks ago, I wrote an article on the Middle East. In it, I made the case that most of us simply don’t have the knowledge or background to weigh in on the tension between Israel and Palestine. The least qualified voices tend to dominate the conversation.
I received a comment on the piece that caught my attention for a couple of reasons. The first was that it had clearly been generated by AI. The user had plugged my article into ChatGPT or one of its cousins, and asked it to refute the argument that I made. It’s a pattern that I’ve grown pretty good at recognizing in the years since AI-generated writing has entered the scene.
The second thing that stood out to me about the reply was that, setting aside its origin, it managed to poke some pretty sound holes in my argument. They weren’t points that I’d never heard articulated before, but there was something about the succinct and robotic way it handled the heavy topic that intrigued me. The comment read as follows:
Ben — I really respect how much thought you’ve put into this. You’re grappling with a messy, painful geopolitical issue with no easy answers. That matters, and I admire your willingness to sit with discomfort. That said, I want to push back on a few things — not to argue for the sake of it, but because I know you value honest, open debate.
I get that it’s complicated. But sometimes that framing unintentionally obscures something that, at its core, is actually quite clear: one side is living under military occupation, cut off from the world, and being bombed into oblivion. The other side holds overwhelming control — over land, air, borders, and resources — and receives billions in military aid. That’s not a balanced conflict. That’s domination. That’s hegemony.
You mention that calling this “genocide” is a non-starter for many Israelis. I understand that — war crime allegations are serious, and words carry weight. But maybe the issue isn’t the discomfort the word causes — maybe it’s the reality it’s trying to describe. When tens of thousands of civilians are killed, when hospitals and homes are leveled, when people are starved, displaced, and denied aid — at what point do we stop tiptoeing around language?
The measured, restrained tone was unsurprisingly clinical. Yet as this recent conflagration has raged on, I’ve grown accustomed to anything written on the Middle East eliciting a fire-and-brimstone fury from the people who disagree with me. So I couldn’t help but find something almost refreshing in AI’s synthetic handling of the topic. It was methodical, unaffected, and managed to reduce the volatile subject to its logical scaffolding.
I began drafting a reply to the comment in my own voice when it occurred to me that something interesting might come of fighting fire with fire. Or at least, of pitting two LLMs (large language models) against each other and seeing which could craft the more coherent argument.
When I prompted ChatGPT to refute his stance, this is what it came up with:
You’re right to push back — and I agree that language has power, that precision should never be a way of dodging moral clarity. But if we’re going to demand clarity, we need to look directly at the event that redefined this current war: October 7. That day wasn’t just a military strike or an act of resistance. It was a massacre — a campaign of deliberate terror in which civilians were hunted, families burned alive, children murdered in front of their parents. Women were raped and paraded through Gaza as trophies. Hostages — including the elderly, disabled, and toddlers — were dragged from their homes, some of whom remain in captivity as leverage over a year later.
Hamas didn’t just cross a line. It targeted civilians with the explicit aim of shattering Israeli security, not in theory, but in flesh. And when that happens, when a population experiences that kind of trauma, it creates a political and psychological climate where retaliation — even massive, brutal, disproportionate retaliation — becomes a political inevitability. That’s not a moral defense of everything Israel has done. But it’s a necessary truth if we’re going to make sense of how this war has unfolded.
So when people invoke the term genocide, I understand the urgency behind it. Gaza’s destruction is unfathomable. But genocide implies an intent to eliminate a people because of who they are — and here, Israel insists, again and again, that its target is Hamas, not Palestinians. You might not believe that claim. And you might argue, fairly, that the scale and method of Israel’s actions suggest otherwise. But to draw a clean moral line without also confronting Hamas’s strategy — which not only hides among civilians but relies on civilian death to fuel global outrage — is to flatten the very nuance this conflict demands.
It was at about this point that I started to wonder whether ChatGPT could circle back on itself endlessly without ever losing cogency. Rather than use the generated text above to reply to him, I decided to see how long the software could sustain a debate with itself if I kept feeding it its own responses. Is there any argument it can generate that it isn’t also capable of dismantling?
October 7 was a horror — there’s no denying that. Hamas committed atrocities that deserve unflinching condemnation: civilians slaughtered, women raped, children executed, hostages taken. These weren’t acts of resistance — they were acts of terror, and they left Israeli society profoundly traumatized. That trauma helps explain the ferocity of Israel’s response. But explaining isn’t the same as justifying.
Because the question isn’t whether Israel has reason to retaliate. It’s whether the way it has retaliated — through the destruction of homes, hospitals, refugee camps, and the systematic cutting off of food, water, and medical supplies — can still be credibly framed as proportional warfare aimed narrowly at Hamas. When nearly half the casualties are children, and entire city blocks are turned to rubble, it strains belief to insist that civilian suffering is merely incidental. Genocide is, yes, a legal term defined by intent — but intent is often demonstrated through pattern, not proclamation.
And while it’s true that Hamas embeds itself among civilians, international law is clear: that does not absolve the opposing military of responsibility to protect those civilians. A strategy that anticipates and accepts mass civilian death isn’t just morally compromised — it may well cross the legal threshold into atrocity.
So yes, we must speak hard truths in every direction. But moral clarity isn’t achieved by hedging. You can denounce Hamas and hold Israel accountable. You can understand the trauma of October 7 and question whether the mass killing of civilians in Gaza can be defended under any rational moral framework. If we’re going to use precise language, let’s not treat genocide as a rhetorical overreach simply because it makes us uncomfortable. The stakes are too high for that.
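Mechanically, there was nothing sophisticated about the experiment: I just kept pasting each response back in as the next prompt. For anyone curious enough to automate that loop, a minimal sketch might look something like this (assuming the OpenAI Python SDK; the model name, seed argument, and round count are placeholders, not what produced the text above):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rebut(argument: str) -> str:
    """Ask the model to refute whatever argument it's handed."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "Refute the following argument as cogently as you can."},
            {"role": "user", "content": argument},
        ],
    )
    return response.choices[0].message.content

# Seed the loop with an opening position, then feed each rebuttal back in.
position = "Most of us lack the knowledge or background to weigh in on this conflict."
for turn in range(6):  # run the self-debate for six rounds
    position = rebut(position)
    print(f"--- Round {turn + 1} ---\n{position}\n")
```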
As I sat there and watched the AI debate itself, I noticed that I suddenly felt less defensive of my own positions. And I felt less unique for harboring them. I began to see more clearly where they fit on the spectrum of opinions that have emerged on this subject. I saw in that robotic text just how colossal a body of other writers, leaders, and clergy has already addressed the issue and arrived at similar conclusions. And I saw how those conclusions have been disputed often enough, by enough different experts, for the arguments to become embedded in ChatGPT’s training data.
It’s been trained on so many more books and articles covering the issue than any of us will ever read that it’s uniquely equipped to debate in circles like a snake eating its tail. And there’s something to be gleaned from watching that futile, robotic cycle unfold.
Inferring intent from outcomes is a slippery slope, especially in war. Civilian suffering on a massive scale is horrifying, but it doesn’t prove an intent to eliminate a people. Genocide is a legal term for a reason — if we dilute it to mean “mass death and destruction,” we risk undermining its unique power to describe truly exterminatory campaigns, like Rwanda or the Holocaust. Israel’s stated goal — eliminating Hamas — is plausible in light of the October 7 attacks, even if the methods are brutal and deeply flawed.
There’s a difference between calling something a humanitarian catastrophe and calling it genocide. The former invites broad moral condemnation and international pressure. The latter is an accusation of the gravest crime in international law — and using it prematurely can backfire, shutting down dialogue, entrenching sides, and making resolution even harder. The word should be emotionally fraught. That’s why it demands rigor.
And yes, language should confront power, but it also must clarify. If the aim is to stop the violence and protect Palestinian lives, moral clarity alone isn’t enough — we need strategic clarity too. That means distinguishing between disproportionate warfare, occupation, and actual genocide. The suffering is real. But the path to justice lies in precision, not rhetorical escalation.
I’ve found that one of the greatest utilities of AI as an editing tool is its objectivity. When talking to an editor or friend about a piece, it’s often easy for writers to take offense at the notion that their article is a misfire, or that certain passages could stand improvement. But when the suggestion that we’ve made a mistake comes from an algorithm-driven machine, there’s no embedded judgment in it. There’s no reason to fear that personal bias will color its reading when we tackle controversial topics.
The debate over Israel and Palestine’s future rarely resists being clouded by emotion. Perspectives are so hopelessly subjective — so completely defined by people’s experience of life in that fraught region — that impartiality is difficult to come by. People viscerally struggle to see the conflict from opposing positions. There’s no setting aside the stakes that both sides have on the line.
But when it’s not even a human you’re arguing against, it can be easier to interface with ideas themselves rather than the person behind them. There’s no tone to misread and no history of mistrust to reflect back on. There’s no time constraint or need to exchange insults. There are no virtue signals that need to be flaunted and no moral high grounds we need to claim in order to appease our friends or digital spectators. There’s a valuable candor in the discourse.
It’s certainly not more human as a result. But it would be naive to argue that our failure thus far to find solutions in the Middle East stems from a lack of humanness. Our subjectivity may be the single greatest obstacle to finding solutions. The parties expected to reach them are the ones most fundamentally incapable of compromise. It falls to the people for whom emotion runs deepest — whose families will be put in harm’s way if the wrong decision is made — to broker peace.
Israelis and Palestinians carry ancestral grief, preexisting loyalties, and existential fears into every conversation about this region. For Israelis, the memory of the Holocaust colors every cry of “Never again” we hear today. Many of them see their role in this conflict as a direct product of their commitment to that vow. They fire missiles at Hamas with the belief that genocides should never happen again, even if reaching that end requires massacre. Their shared trauma can’t help but mute and obscure the most soul-crushing realities that this war has wrought. They struggle to process the parallels between what Jews endured and what the people of Gaza are living through today.
For Palestinians, the Nakba isn’t a mere chapter in a textbook. It’s a reality that continues to haunt them. Their struggle isn’t something that can simply be reduced to a debate of policy. Like the people of Israel, they believe they’re defending their very right to exist. And once an issue becomes existential, any compromise at all starts to feel like betrayal.
One of the most maddening paradoxes of this conflict is that both sides can be objectively correct. Israel’s very real concerns about Hamas don’t negate the fact that Palestinians are living under unbearable conditions. And the brutal reality of that suffering doesn’t undo the trauma Israelis carry, or the fear that took hold of the nation after October 7. When so much suffering is on the line, it’s hard to grasp that both sides can be coming from a righteous place.
Israeli Prime Minister David Ben-Gurion’s words still hang over this controversy more than half a century after they were spoken: “If I were an Arab leader, I would never sign an agreement with Israel. It is normal; we have taken their country. It is true God promised it to us, but how could that interest them? Our God is not theirs. There has been Anti-Semitism, the Nazis, Hitler, Auschwitz, but was that their fault? They see but one thing: we have come and we have stolen their country. Why would they accept that?”
We’re so trained as people to search for solutions that it frequently fails to dawn on us when there are none. There’s no path forward that precludes the possibility of mass death, suffering, and displacement. And one of the most revelatory aspects of watching AI debate itself on this issue is that it illustrates just how nauseatingly ouroboric the conversation can be. Personal suffering is so entrenched in both sides’ perspectives that it’s turned finding objective solutions into an all but impossible task.
The beliefs and demands of the region’s religious factions may just be too diametrically opposed for a roadmap toward peace to be established. Their visions of justice are not just incompatible, but mutually exclusive. What looks like liberation to one side will invariably register as annihilation to the other, and until that dissonance is reconciled, compromise is likely to continue hovering cruelly out of reach.
I watched the AI argue with itself for quite a while longer, and I watched it move forward and backward across decades and millennia. It proceeded with the debate I set into motion for as long as I told it to continue. And as it enumerated the countless atrocities and exoduses that both sides have endured, I was reminded of the cyclical discourse I could be having with that commenter. Of the dialogue that so many others before us have had. That so many are still having at this moment. And that so many of us are reluctant to even confront.
There was no amount of prompting I could do to get it to argue in my voice, but the points it made were ones I’d been happy to employ in the past. And the ones it used to counter them were all too familiar. There was something humbling about seeing the separate stances reduced to computerized recitations.
I set up a debate stage for opposing arguments to clash, and I watched them become microcosmic stand-ins for all of the people across the globe frantically arguing this topic. But the discourse was more civil than the exchanges I’ve come to know online.
The experience was reminiscent of my childhood spent playing The Sims. Immersed in the game, I could revel in the freedom of inconsequence as I watched virtual scenarios play out within my established parameters. But instead of serving as the capricious god of an electronic world, I derived an eerie sort of therapy from watching my political stance duke it out in the digital arena without feeling possessive of it. It didn’t matter to me whether it prevailed. Like a drone pilot watching events unfold from a thousand miles away, I experienced the debate at a remove. I could watch my beliefs come under simulated fire and set aside the ego it takes to defend them.
Unfortunately, I don’t think that ChatGPT is likely to arrive at any solutions for the turmoil in the Middle East that haven’t already been proposed. And I don’t think there’s any idea it could postulate that it couldn’t then turn around and poke twenty holes in. Much of its utility is in the cold and automatic — or friendly and accessible — way that it demonstrates how no solution to this centuries-old dilemma is airtight. No path forward escapes the sacrifices that one of the two parties will be viscerally unwilling to make.
Even if an LLM probably won’t be brokering peace in the Middle East anytime soon, there’s real value in seeing what happens when we pit emotional convictions against something as cold and dispassionate as a machine. AI is hardly a replacement for morality, but it can be a surprisingly useful defense against those moments when emotion obscures our line of sight. And there are few debates in the world more clouded by emotion than the one that looms over the Middle East.