In the aftermath of the assassination of US right-wing activist Charlie Kirk, a surge of misinformation flooded social media. Many users, searching for clarity, turned to AI-powered chatbots for updates. Instead of reliable answers, they were often met with contradictory or outright false claims, adding to the confusion already swirling online.
The incident has highlighted a growing problem: AI chatbots tend to provide confident responses even when credible information is lacking, especially during fast-breaking events. With many platforms scaling back human fact-checking and moderation, these missteps by AI systems risk amplifying falsehoods rather than dispelling them.
Just a day after the 31-year-old Trump ally was fatally shot at a Utah university, Perplexity’s official account on X wrongly claimed that Kirk had never been attacked and was “still alive,” watchdog group NewsGuard reported. At the same time, Grok, the chatbot developed by Elon Musk’s xAI, dismissed an authentic video of the shooting as satire, saying it was a meme edited to make it look as though Kirk had been shot mid-debate.
Grok went further, falsely suggesting that CNN and The New York Times had identified the gunman as Michael Mallinson, a Democrat from Utah. In reality, Mallinson is a 77-year-old retired Canadian banker living in Toronto. He expressed his shock after being inundated with social media posts wrongly accusing him of carrying out the attack.
Such incidents underscore how breaking news often triggers frantic online speculation, which chatbots recycle and spread as fact, deepening the chaos. The environment in the United States remains tense following Kirk’s death, with several right-wing influencers in Trump’s MAGA movement calling for retaliation against political opponents. The shooter, however, has not been identified, and the motive remains unclear.
Conspiracy theories have also taken root, with some claiming that the video of Kirk’s assassination was AI-generated and that the entire incident was staged. Experts say this reflects the so-called “liar’s dividend,” in which the availability of cheap, accessible AI tools makes it easier for conspiracy theorists to dismiss authentic evidence as fake.
“We examined multiple circulating clips of the shooting and found no signs of manipulation or editing,” said Hany Farid, co-founder of GetReal Security and a professor at the University of California, Berkeley. His analysis underscores that while AI can be a powerful tool, it also fuels a new era of misinformation when misapplied or misrepresented.