The Jewish cost of free speech online
The libertarian in me wants a free and open internet. As a Jew, I know that may be unsustainable. I spoke to an Israeli non-profit balancing both issues.
On the social media battleground, the war between moderation and censorship rages on.
A few months back I spoke to CyberWell, a non-profit that launched in May 2022 as the first-ever open database of online antisemitic content. It essentially acts as an engine of transparency, reflecting the state of online antisemitism and emerging antisemitic trends.
CyberWell implements data collection by acting as what CEO Tal-Or Cohen Montemayor describes as “the online antisemitism compliance officer” for social media platforms. It claims to have recorded an 86% increase in antisemitism online in English and Arabic since October 7 and helped remove more than 50,000 pieces of harmful content across Facebook, Instagram & Threads, and TikTok.
Seems innocent enough, right? But many people online are skeptical of organizations that seek to filter, suppress, or monitor speech they deem ‘hateful’. Suddenly anyone who sees an opinion they don’t like may feel emboldened to silence it. If a government or private business does that, the results can be catastrophic.
This issue becomes particularly complicated within the context of Israel. Sure, criticizing the country’s government online is fair, but calling for the murder of Jews clearly is not. Down with the IDF? Ok. Death to Zionists? Not so much.
I spoke to Montemayor about this challenge: how a company can balance the need to protect minority groups online from harassment or abuse without affecting the free speech of users around the world.
Part One of our conversation was first published on CTech earlier this month. Here is the exclusive publication of Part Two, which touches on X (Twitter), China, the EU, AI, and Joe Rogan.
It has been lightly edited for clarity.
Elon Musk has said he’s going to keep anything on X as long as it’s legal. In effect, people can say whatever they want whether it's deemed hateful or not. Personally, I don't believe that any government or any private company should be allowed to dictate what we say - how does CyberWell navigate that space to promote healthy discourse online?
I do think it's important to draw a distinction between government intervention and censorship on platforms versus their own content moderation, which was put in place for commercial reasons. I think that today there is a tendency to conflate free speech with paid speech. What you participate in on a social media platform is not free speech. It is paid speech, meaning it is algorithmically enhanced paid machinery.
Those platforms are designed to make money off of the content that you generate and the attention that I give that content. So it's not the same thing as critiquing your government in the classic sense of the word, which is really what the definition of free speech is. I think that those distinctions are very important.
We should draw distinctions between when the government gets involved versus when the social media platforms regulate themselves. And as the consumer, we should be able to distinguish free speech and paid speech.
For example, I wasn't against Nazis marching [in 1977] in Skokie, Illinois, because that's part of their constitutional right under the U.S. Constitution. But I can be against Nazis marching in Disneyland because Disneyland is private property and it's meant to be a family-friendly place. And that's exactly the point here. The major social media platforms account for over half of worldwide Internet activity. You have under-30-year-olds using these platforms anywhere between 40 and 100 hours a week. So it's not accurate to say that this is a regular platform for expressing free speech.
I've heard criticisms of this within my own peer group. If you have very healthy public platforms, big, major social media platforms, then the haters are going to go to a silo. They're going to go to a 4chan or 8chan. I'm actually pro-silo - silos are monitored very aggressively by major law enforcement agencies, as they should be, because that's where people radicalize themselves.
For a few months I was hearing about the ‘Splinternet’ and worrying that if you push people away or cut people out, it's going to create two or three different Internets. So we would all exist together physically but interact online in completely different realities.
There’s some element of that today, right? The high level of polarization on social media in general.
Right.
Are you going to remove pornography from the Internet? No, but a porn site is a porn site. That is its location. If you're looking to engage in hate content, you should have a place that's meant for that. Not the major social media platforms that increasingly have users who are 15 years old or under. I think distinguishing between the real estate of the platforms that we're participating in is a really important way to view the balance of when we do content moderation and when we don't.
What would you say about dog whistles and expressions that may not be explicitly antisemitic, racist, or sexist, but that can be weaponized either to evade the AI algorithms that pick up on the language or to avoid attracting attention?
CyberWell is consistently monitoring antisemitism online with our proprietary technology, which is meant to flag content that's highly likely to be antisemitic. And we have seen shifts of niche trends in antisemitism that come under specific hashtags or specific trends. We check how prevalent, how viral, and how toxic those specific trends are becoming to decide whether to highlight them or share that data with a certain platform.
My position in general when it comes to content moderation and online antisemitism is that there's so much work to be done with overt antisemitism. The dog whistles and the new trends are concerning when they call for clear violence or are just flagrant, but overt antisemitism online is so prevalent that it demands attention first. That's our primary focus.
Following October 7, we saw a very clear shift specifically in our Arabic data. Sixty-one percent of the data consisted of open calls for violence against Jews or Israelis, or justifications of that violence. So why go after the dog whistles when you have the real hardcore hatred not being removed at scale?
In what way does the EU's approach differ from that of the U.S., and how do they influence each other?
The EU has legislation on the books defining illegal hate speech, which the United States essentially doesn't have at all. The only caveat to that is hate speech that's likely to incite violence or physical harm. That's about the only limitation.
Since October 7, I believe the EU has issued at least two letters of query and investigation to both Meta and X about how they were allowing violent content and illegal hate speech to proliferate on their platforms following Hamas's attacks. This is very similar to what we've seen with the implementation of GDPR, the privacy law under which the EU has fined Meta large sums of money.
The latest development in the U.S. concerns Section 230 of the Communications Decency Act: that's the primary loophole shielding social media and big tech from liability for the content generated by their users on their platforms. Congress has tried to revise that law at least 25 times in the last two years and failed. The most recent development in that legislation is against Chinese ownership of TikTok, which would effectively force the sale of TikTok or result in its ban in the United States.
What would be the significance of that?
It is an example of the law balancing between two constitutional interests: One is national security, and the other one is freedom of speech. TikTok was singled out as being owned by an adversary of the United States, China. I do think that they're reviewing this specific law, which is saying, ‘We are now looking at the issue of social media ownership and social media platforms in general through the lens of national security’. That potentially opens the door for additional legislation in the space.
From CyberWell's perspective, what happened on October 7 on social media platforms was the largest hijacking of our major applications by a terrorist group. It perpetuated Hamas's attack way beyond time and space. What happened on October 7 psychologically paralyzed most of the Israeli population, and certainly many Jews around the world. Therefore, every single democracy under threat of a terrorist attack should be looking at social media platforms through the lens of national security.
Terrorist organizations are looking to exploit those vulnerabilities again and again. Those few weeks of content following October 7 highlighted the failure of social media platforms to make the appropriate investment in automated technology to remove pro-terror content, calls to violence, and hate speech at scale.
The call to either ban TikTok or have it moved to American ownership isn't new, though. The proposal came up a few years ago under then-President Trump but was met with huge outrage. So has sentiment changed?
I think that there's a growing public interest and concern about the social effects of social media platforms on our societies. The U.S. does need comprehensive legislation when it comes to social media. The reason that people are pro-TikTok ban or even pro-regulation on social media is that there is increasing evidence to suggest that these social media platforms are meant to distract you, suck your attention, destroy your self-worth, and are very bad for your child's self-development.
So a lot of people are talking about legislation in the United States around social media, much in the same way that they were talking about legislation around the cigarette industry.
All of this is quite scary. My natural state of being is a very libertarian one. But then when I think about it, I become more conservative with my approach to it all which is weird because it's kind of like a horseshoe. It's like they're on opposite ends but they're close together.
I think that libertarianism presupposes that people have access to all of the information and that they're competent enough to handle it. One thing about social media platforms that is very obvious to everybody: we certainly don't have access to all the information. And that's another way that social media reform can happen: by regulating transparency around algorithms, around decisions on toxic content, and around who decides what is removed and what isn't.
Transparency is another great way to make sure that we have a more libertarian utopia where people do have all of that information because right now, you do not have access to that information. Not even close.
The two most famous examples of that happened in 2020 and 2021. First, when Twitter prevented the sharing of a New York Post article regarding the Hunter Biden laptop scandal before the election. Social media, at the request of the FBI and CIA, blocked the story, and today 80% of voters believe it would have changed the presidential results had they known about it.
The other example was the following year when doctors were banned from social media at the government’s request for saying that the vaccine didn't stop the spread of COVID-19. Platforms shut down any speech that contradicted the idea that only a vaccine could save us in the pandemic. So I've seen how government and social media platforms can collude to stifle the free press, stifle journalism, and stifle medicine.
My concern isn't only the rogue abusive accounts, which I can just mute or block. I care about the more systemic problems of content moderation. I'm coming at it from a more institutional view.
There is a distinction that should be drawn between institutional intervention and the way you deal with that, versus the way that toxic content, misinformation, and hate speech are being amplified on these platforms.
So you're looking at it from an institutional intervention perspective. Fair enough. As a member of the press, I can understand that. As a Jew, at a time when antisemitism around the world is at its highest level since World War II, I'm looking at it as a social ill.
Look at what's happening. This reminds me of what happened during the Third Reich where the idea of hating Jews was popularized by traditional media, radio, television, the intelligentsia, and academic circles. Well, now we have social media platforms that are algorithmically enhanced and reaching billions of people.
Many young people are exposed to these anti-Jewish tropes multiple times a week. I'm concerned that, from a social ills perspective, social media platforms are directly contributing to the hostility against Jews.
It's not about a story that's buried, or new information that's coming to light, or a new pandemic that nobody understands and how it's going to hit our hospital infrastructures. It's just very different the way that we're coming at it.
I've experienced both. I'll post a story on X and underneath I get people calling me a ‘fucking Jew’. I've experienced firsthand the fight that you guys are fighting. I just think that another very significant fight is taking place within a very significant space. So it's like concentric circles of battlegrounds.
Have you found that things have generally quietened down since the initial bump of October 7, or is it just increasing?
We've seen a slow return to 'regular' rates of online antisemitism, if we can call it that. But right now we're seeing the expression of that massive surge in offline antisemitism in communities across the diaspora, whether in the United States or in Europe. People are purposefully demonstrating inside Jewish neighborhoods, vandalizing properties, and even bear-spraying people who are simply walking home. So it's very clear to me that online trends are now moving into the offline space.
And I'll also say, on the October 7 denial narrative specifically, we saw something very interesting recently with the false accusation of rape by IDF soldiers at the Shifa hospital, where even Al Jazeera retracted the story and said it was not real.
For me, that was frustrating because I was reminded of The New York Times and its reporting on the explosion at Gaza City's al-Ahli Baptist Hospital in the early days of the war. It had rushed to email its subscribers falsely saying Israel blew the hospital up. If it had waited two hours, the actual truth would have been published, but the story had already gone viral.
I'm looking at it like, ‘Oh, my God, everyone hates the Jews, and now everyone hates the media, too’. I can see why people think, ‘I don't even know what to believe so I'm going to go and listen to Joe Rogan.’
My husband also listens to Joe Rogan, who's a moron and doesn't know anything. Men love him.
But maybe a three-hour conversation with Joe Rogan might be better than seven minutes of someone screaming at a news anchor on CNN or Fox News.
That is true.


