The S&P 500 fears bad PR from AI. It should fear what it’s doing to people.
Executives cite reputational risk as AI's top threat. But as chatbots convince teens to end their lives, the real danger lies in the people it's already hurting.
You might assume companies are most worried about nation-states or malicious actors using AI to attack or disrupt their business, or about the technology harming young and impressionable users. But business leaders have a different concern on their minds: the fear of reputational damage caused by flawed artificial intelligence, not the harmful consequences themselves.
A report by the Harvard Law School Forum on Corporate Governance found that "Reputational risk is the top AI concern in the S&P 500, making strong governance and proactive oversight essential as companies warn that bias, misinformation, privacy lapses, or failed implementations can quickly erode trust and investor confidence."
The study arrived shortly before the BBC published a damning report on mothers whose children were encouraged by AI chatbots to take their own lives.
Reputational risk was the most frequently cited AI concern, flagged by 38% of S&P 500 companies; only 20% of firms named AI-specific cybersecurity threats as their primary concern. What's more, 72% now disclose at least one material AI risk, up from 12% in 2023.
The massive jump reflects a new reality: companies' AI now sits in market-facing workflows where mistakes are screenshot-ready and, left unchecked, can embarrass the business and do catastrophic harm to consumers' lives.
“The riskiest AI failures are performative, wrong answers, and instant reach,” said Anirudh Agarwal, CEO of AI-driven marketing agency OutreachX, commenting on the report. “The remedy isn’t another model; it’s evidenceable governance... Treat every AI touchpoint like a press release, not a prototype.”
We've all seen examples of bad AI and the headaches it can cause companies, from embarrassing mistakes to lawsuits. Famous examples include:
Air Canada: A website chatbot misled a traveler on bereavement refunds. A tribunal awarded damages; the airline owned what its bot said.
Anthropic: In a copyright case, counsel admitted an AI-formatted citation listed an incorrect title/author, an “embarrassing, unintentional mistake.” Credibility took a hit beyond the docket.
Google Gemini: Historically inaccurate images of people went viral; Google paused people-image generation to fix accuracy and bias issues before resuming later.
Guess x Vogue: AI-generated models in a U.S. Vogue x Guess ad sparked backlash; coverage noted a disclosure existed but was tiny, fueling authenticity concerns.
Today, customers are far less forgiving when the technology they rely on goes haywire or returns incorrect information. And as more young people turn to AI for everything from homework help to emotional support, the potential fallout for companies goes from bad to worse. That's especially true given recent news alleging that chatbots from companies like Character.ai and OpenAI have convinced teenagers to take their own lives.
In these cases, the fallout from bad AI can be catastrophic, causing irreversible reputational damage and, far worse, the deaths of young and vulnerable people. Earlier this week, the BBC reported on a young Ukrainian woman struggling with her mental health who received suicide advice from ChatGPT, and on an American teenager who killed herself after an AI chatbot role-played sexual acts with her.
As ever-agreeable chatbots steer impressionable teens toward fatal outcomes, is now the time to change priorities? More from the BBC:
Data from the advice and research group Internet Matters says the number of children using ChatGPT in the UK has nearly doubled since 2023, and that two-thirds of 9-17 year olds have used AI chatbots. The most popular are ChatGPT, Google’s Gemini and Snapchat’s My AI.
Companies worry AI will make them look bad. They should worry about what it’s doing to those who trust it.
Because when AI errors stop being embarrassing and start being fatal, reputation will be the least of our problems.