EXCLUSIVE: The ADL’s Warning to the AI Industry
Daniel Kelley of the CTS explains how LLMs, Wikipedia, and gaming are shaping what the next generation “knows” about Jews.
LLM systems that are now being marketed as neutral, authoritative, and “helpful” were trained using a deeply flawed internet that spent decades failing to confront some of its oldest conspiracies against Jews.
After a decade of arguing about the consequences of social media, we’ve entered a quieter but more troubling phase of the same crisis: one in which AI models, Wikipedia, and online gaming environments are shaping how a generation learns who Jews are, what Israel represents, and how factual events are interpreted and remembered.
Technologies that reproduce antisemitic narratives with confidence and at scale are laying the foundations for our future. And Daniel Kelley, who leads the Center for Technology and Society (CTS) at the Anti-Defamation League, put it plainly during an exclusive interview with The Spiro Circle: If social media taught us anything, it’s that if we wait until the harms are obvious, we have waited too long.
AI is the Latest Battleground
Those who are already playing with AI and testing its limitations may notice that it can sometimes be too cautious in its output - and that’s no accident.
“If you think of where social media was at this point in the technology,” Kelley explained, “it was just like, ‘Oh, wait, maybe we should have rules.’ Whereas the AI folks… are building their safety practices on the horrible history of social media.”
Many AI safety teams are staffed by veterans of those earlier failures. They know how quickly platforms can become vectors for harm, and how hard it is to reverse course. But caution doesn’t mean readiness, and some of the guardrails are still absent.
The ADL, founded in 1913 to combat antisemitism, now operates a research-driven Center for Technology and Society focused on online harms. Its team partners with industry, civil society, government, and targeted communities to expose harms and hold tech companies accountable for creating just online spaces.
When it conducted testing on tools with generative video earlier this year, its researchers found what Kelley described as “the floor being missing” when it came to antisemitic content.
When prompted, multiple systems generated photorealistic depictions of classic antisemitic tropes. Kelley confirmed that tools like Sora were happy to make videos of Jews drinking blood, photorealistic images of Jews with piles of gold, and what he described as “the greedy merchant coming to life.”
It’s not just that these images are shocking (though they are). It’s how easy it is to translate centuries-old libels into modern generative systems.
When the ADL flagged major LLMs like GPT (OpenAI), Claude (Anthropic), Gemini (Google), and Llama (Meta), later versions blocked that content generation. While Kelley described the move as “heartening”, these fixes only arrived after external testing and pressure, not because safeguards were already sufficient.
“The profit model for AI companies is still to be determined,” he explained. “So because of that fragility, I think we have a moment in time, which may not last long… while you’re still proving yourself to society, this is the time to take care of these issues.”
Kelley was one of the architects of the “Stop Hate for Profit” campaign in 2020, which aimed to hold social media companies accountable for the hate on their platforms (something they are legally shielded from by Section 230 of the 1996 Communications Decency Act). In time, these platforms learned how to monetise behaviour that their own algorithmic tweaks and prompts had often radicalized.
Social media squandered the moment. AI still has one — but barely.
Wikipedia Transformation: From Encyclopedia to Pipeline
Those concerned about AI outputs should look toward where the algorithms are getting their information. Whereas social media content is created by users, LLMs draw information from a variety of online sources.
If AI is the interface through which people increasingly access information, Wikipedia is one of the pipelines feeding it. And what was once considered a neutral and reliable source of data is increasingly becoming an infrastructure full of flaws and biases.
Earlier this year, ADL published research identifying a network of at least 30 editors coordinating to push an anti-Israel, anti-Jewish agenda across Wikipedia pages. Kelley acknowledged how implausible that can sound at first. “It sounds like a conspiracy theory when you talk about it,” he said. “But we saw coordination of editing activities.”
This time, the problem wasn’t the fabrication of lies about Jews but the omission of truth - such as removing references to Palestinian violence.
In theory, Wikipedia’s governance model assumes good faith. And that largely works - until it doesn’t. When coordinated bad-faith actors exploit these mechanisms, the system breaks down, and truth and history can be manipulated by people Kelley describes as “driving an ideological agenda.”
“You have something like 800 administrators, only 400 of whom are active, for all of English-language Wikipedia,” he noted. “And when people are not editing in good faith… the system of Wikipedia breaks down.”
Some critics have claimed that Wikipedia has a left-leaning bias, which is problematic enough if it is used merely as an online encyclopedia. But when large language models ingest it as training data and reference material, these ideological agendas propagate and begin to frame narratives.
Kelley suggested that it was “everyone’s responsibility” to ensure that sources are truthful and accurate, and that those accurate sources are what feed LLMs. But when it’s everyone’s responsibility, it becomes no one’s.
His recommendation is blunt: until Wikipedia can reliably govern sensitive topics, AI companies should be “circumspect about where they’re pulling their information from and what kinds of biases they’re introducing into their models.”
The stakes are historical as much as political. As Kelley put it, “You have the ability of ideological actors to change our history.”
Gaming: Where Antisemitism Becomes Normal
Whereas Wikipedia is shaping what people know, the gaming community determines what they tolerate.
Another ADL study, this time examining how online gamers with diverse identity usernames are treated in four leading online games (Valorant, Counter-Strike 2, Overwatch 2, and Fortnite), found that almost half of encounters included some form of harassment, such as slurs, trash-talking, or disrupted play.
One-third included identity-based harassment, such as “gas the Jews” or calling people the “n-word.”
In 2025, Pew Research and APA studies showed that 85% of U.S. teens play video games, with 40% identifying as a “gamer.” In another study, the ADL reported that 75% of teens (ages 10-17) had experienced some form of harassment in online multiplayer games.
For Jewish players, harassment often begins before a word is spoken. In the same study, researchers entered games using usernames like “ProudToBeJewish.”
“Just showing up with that name, they get abused,” Kelley said. “Not even talking.” He noted that swastikas had appeared in Fortnite and that Holocaust denial posters had surfaced in Roblox.
In some cases, real-world terror attacks are recreated as playable environments. Kelley predicted that events like the Bondi Beach shooting would soon appear in game form: not as commentary, but as mere playable content.
Finally, because gaming companies’ customers are players rather than advertisers, they face incentives (at least in theory) to intervene. But they are also working against decades of culture formed in the absence of moderation.
When it comes to Jew hatred, social media radicalizes ideas but gaming normalizes radical behavior.
This Time, the Free Speech Angle Fails
Discussions of adequate moderation inevitably trigger objections from critics seeking to protect free speech. After a decade spent trying to settle that debate around social media, Kelley now rejects the premise entirely when it comes to AI.
“It’s a misunderstanding of what free speech is and for whom,” he said. “Online safety work is not censorship. If someone comes into a restaurant and stands on a table reading Mein Kampf, the people who run the place are going to stop them.”
Rules, he argued, create space rather than restrict it. “You have more ability for more kinds of conversations when there are rules and those rules are enforced.”
Unmoderated spaces don’t maximize speech, he added. They collapse into monocultures where only the loudest remain.
The Mistake We’re About to Repeat
When it comes to Jew-hatred, none of these harms is new: technology has always amplified the bad alongside the good. What is new is the speed and authority with which AI systems can operationalize them.
“We were around when white supremacists were using electronic bulletin boards,” Kelley noted. “The problems aren’t new. The technology is.”
A decade ago, ADL warned social media companies that antisemitism and extremism would scale. Those warnings were largely ignored. Today’s reality, from chatrooms to Bondi Beach, is the result.
“The biggest threat is that we have a new technology with incredible possibilities and we don’t get in on the ground floor,” he concluded.
Social media can be a platform for hate. But AI will not just spread it: it will explain, summarize, answer questions, and archive it. And it will do so with a tone of neutrality and authority that masks the flaws in its sources.
If systems like ChatGPT and other LLMs become the default teachers, historians, and referees of truth, the question is no longer whether antisemitism exists online. It’s whether we are willing to challenge the systems that will soon decide what “everyone knows” about Jews, and what gets quietly edited out.