PR Failure #45: When the Machine Makes a Mistake

AI is the shiny new(ish) tool and buzzword that we’re all still trying to figure out. We know it can save time, and it can do some things a human simply can’t, like combing through an impossibly large deluge of data for that one most important insight. But on the flip side, there are things humans can do that AI can’t, won’t, or at the very least shouldn’t.

As these three very recent communications gaffes prove, it only takes one public pitfall for reputations to unravel and the public to lose trust. From hallucinations to intellectual property disasters and back to hallucinations, even the most futuristic folks are learning the hard way. AI risks aren’t just technical. They’re reputational.

Whatever your role may be (writer, executive, lawyer, business owner), if you abandon the human part of your job, you risk creating serious public relations issues that can ripple well beyond a single individual or institution.

1. When the Hallucination Hits Home Hard

In May 2025, the Chicago Sun-Times and The Philadelphia Inquirer published an insert featuring some super-sounding summer reads. The problem? Of the 15 fiction books on this alluring list, promising plots provided, 10 titles were complete works of fiction: the books didn’t exist. Under the headline “Heat Index: Your Guide to the Best of Summer,” the heat quickly turned up for all involved.

Let’s trace the trouble. Both papers bought the content from a syndicate, King Features (part of Hearst), which in turn purchased the piece from a freelance writer, Marco Buscaglia. No one fact-checked it.

It’s common for newspapers to publish syndicated content and rely on the syndicate to uphold editorial standards. But even with a trusted relationship and a reputable provider, if you skip final review because you’re strapped for staff, you’re still responsible. Readers logically associate published content with the publisher, so in readers’ eyes, the papers took the reputational hit.

King Features had a responsibility to vet the content before distribution, and it owned up. The writer had an obvious duty here too, and to his credit, Buscaglia went public quickly, admitting fault and apologizing:

“Huge mistake on my part and has nothing to do with the Sun-Times. They trust that the content they purchase is accurate, and I betrayed that trust. It’s on me 100 percent.”

Regardless, Buscaglia lost his contract with King Features, both newspapers are reviewing how they source third-party content, and the people who pay to read legitimate news and features are angry. As one reader wrote on Reddit, according to NPR:

“As a subscriber, I am livid! What is the point of subscribing to a hard copy paper if they are just going to include AI slop too!?”

Indeed.

2. When Content Scraping Sparks Controversy (and Court Cases)

Hot off the presses… Reuters reported on June 19 that the BBC claims Perplexity, a “free,” AI-powered answer engine, has scraped vast swaths of its content without permission. Allegedly, Perplexity then regurgitated the BBC’s verbiage verbatim, with the BBC also warning that 17% of answers referencing its content contain errors, lack attribution, or both. The BBC is not alone in making this claim against Perplexity, and Perplexity is not the only AI tool accused of content scraping.

WIRED published an explosive article last summer titled “Perplexity Is a Bullshit Machine.” Its investigation revealed that Perplexity ignored robots.txt rules, scraped paywalled articles, and sometimes hallucinated (at one point specifically and falsely accusing a police officer of misconduct).
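For context, robots.txt is simply a plain-text file at a website’s root that tells automated crawlers which pages they may fetch. Here’s a minimal sketch of the kind of opt-out directives a publisher might post (“PerplexityBot” and “GPTBot” are the crawler names published by Perplexity and OpenAI, respectively); per WIRED’s reporting, the allegation is that rules like these were not always honored:

  # Sketch of a publisher's robots.txt asking named AI crawlers to stay out
  User-agent: PerplexityBot
  Disallow: /

  User-agent: GPTBot
  Disallow: /

In other words, publishers already have a standard, machine-readable way to say “no.” The controversy is over whether it was respected.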

The BBC is one of the world’s most trusted news brands. It has built that trust over time, with journalistic rigor, through a bevy of human experts. Today, Perplexity is leaning heavily on the credibility and reporting of BBC journalists, while sometimes paraphrasing just enough to pass the work off as its own. So this isn’t only a copyright issue. It’s a reputational one.

For an outlet like the BBC, allowing an AI tool to parse its pieces for parts devalues its business proposition and undermines public trust. The BBC is reportedly considering legal action against Perplexity, and WIRED and Forbes have raised similar complaints.

The takeaway? There are several, but from a PR perspective: if your AI is trained on the talents of others without permission or credit, it’s not just your reputation at stake. You’re harming the reputations of others, undermining trust in your own business, and possibly landing in legal jeopardy. We may be solidly in the era of artificial answers, but the consequences are very real.

3. When Citations Compromise Credibility

And speaking of legal peril… Anthropic, an “AI safety and research company,” built a chatbot called Claude. In May 2025, Claude went on to dream a dream of legalese.

While defending a $75 million lawsuit from Universal Music concerning the copyright misuse of song lyrics, Anthropic submitted a filing in which Claude had fabricated a citation to a non-existent academic article, complete with fictitious authors. A manual review allegedly took place, but it failed to find the fault.

Anthropic’s defense team, led by Latham & Watkins, called the situation “an embarrassing and unintentional mistake.” While the team acknowledged that Claude caused the error while formatting the citation and said review processes fell short, the judge wasn’t impressed, stating there’s:

“…a world of difference between a missed citation and a hallucination generated by AI.”

This legal pickle not only threatens Anthropic’s defense strategy in this case, but it also puts its core AI, Claude, at the center of a closer look at credibility and compliance. Anthropic’s own AI accidentally betrayed its maker in court; Claude turned what should have been serious support into a legitimate liability.

As we wrap this first review of AI failures, communications professionals, take note:

  • Please make sure any consumer-facing or public content is fact-validated by a human (YOU), especially if you’re invoking brand authority on behalf of a client. There are no shortcuts.
  • If your company is pulling from third-party content, talk to the rights holders first. Plagiarism is plagiarism. Always has been, always will be.
  • And again, any consumer-facing or public content must be fact-validated by a human. That includes legal filings. It includes everything.

And as a leader, you may not be building or even using the AI tool, but you are responsible for its use. It’s kind of like driving a car in autopilot or self-driving mode: if it crashes, you, the driver, are responsible for the outcome. So define your team’s guardrails, ensure there’s oversight, and have a plan in place for when it all goes sideways. Because in the public’s perception, you are your AI! When AI errs, you’re responsible.

Proving my point, with a full disclaimer: AI was used to research some of the examples cited in this edition, and one of the answers it returned was a hallucination. Perhaps the prompt was the problem, but the irony isn’t lost on me that in writing an article about AI making mistakes, the AI made a mistake.

As we go boldly into the future, fear not, but fact check. Always fact check.

Go deep into AI! Go learn. Use it. But question it and confirm your details, just like you were taught in school. Citations and sources still matter. And at this stage of adoption, transparency is critical: be proactive and honest about how and when you use AI.

See you at the next PR fail!

Aaron Blank

President and CEO
Fearey