Every day, millions of people share half-truths and false information. Professional speaker Dominic Thurbon discovered he’d been doing exactly that for over a decade.
I’m in the ‘sharing information for a living’ business – speaking, webcasts, podcasts and media interviews where I use stories and data to give people new perspectives. You’d think I’d have good radar for distinguishing fact from fiction. Most days, you’d be right. But not every day.
I recently discovered that a case study I’d used in presentations for over a decade was completely fabricated. The ‘great horse manure crisis’ story claimed that in the late 1800s, London’s 100,000+ horses produced millions of pounds of manure daily.
The Times supposedly predicted streets would be buried under 10 feet of waste. An 1898 international urban planning conference in New York was allegedly abandoned after three days when no solution could be found.
Then motor cars arrived and solved the problem – a perfect anecdote about technological disruption.
Except none of it happened. No horse manure crisis existed. The Times published a clarification denying any such article. The New York conference never occurred – the first documented urban planning conference was in Washington DC in 1909.
I’d become a misinformation superspreader. If someone who writes about truth and lies can make this mistake, what hope do we have?
The scale of the problem
Research suggests more than 60% of people tell multiple lies in an average hour, often without realising it. Recent corporate examples include Builder.ai, which attracted more than US$400 million in funding, including from Microsoft, before collapsing when its ‘AI’ solution was revealed to be the work of hundreds of human engineers in India.
On TikTok, ADHD-related videos have been viewed more than 6 billion times, yet fewer than half contain clinically accurate information. That works out to more than 3 billion views of misinformation about a single topic on a single platform.
The AI challenge
Generative AI tools make it effortless to create content that is convincing but wrong. Deloitte had to refund more than $100,000 to the Australian government after delivering a report riddled with AI hallucinations.
Convincing doesn’t mean correct. We’re entering a world where our tendency to believe and share what we read has serious implications for individuals, businesses and societies.
Practical steps for EAs
As information gatekeepers, executive assistants are uniquely positioned to combat misinformation:
- Verify before forwarding: Check sources for statistics and claims before including them in executive communications
- Question perfect stories: If an example seems too neat to be true, investigate further
- Cross-reference sources: Look for multiple independent confirmations
- Flag uncertainty: When information can’t be fully verified, say so explicitly when you pass it on
- Create verification protocols: Establish fact-checking processes for internal and external communications
Whether we’re talking about horse manure on stage, ADHD on TikTok or executive briefings, the mantra must be to check before you share.
For EAs managing communications and supporting decision-making, this is about more than avoiding embarrassment – it’s about maintaining professional credibility and supporting informed leadership in an era where misinformation spreads faster than ever.