Meta apologized on Thursday and said it had fixed an “error” that caused some Instagram users’ personal Reels feeds to be flooded with recommendations of violent and graphic content.
“We have corrected an error that caused some users to see content in their Instagram Reels feed that shouldn’t have been recommended. We apologize for the mistake,” a Meta representative said in a statement provided to CNBC.
The apology follows complaints from Instagram users who took to social media over a sudden increase in violent and inappropriate content appearing in their recommendations.
Some users reported seeing such content even when Instagram’s “Sensitive Content Control” was set to its strictest moderation level.
Meta’s policy aims to protect users from disturbing images by removing particularly violent or graphic content. This includes material showing “dismemberment, visible innards, or burned bodies,” as well as “sadistic remarks towards imagery depicting the suffering of humans and animals.”
However, Meta allows some graphic content if it is intended to raise awareness about important issues like human rights violations, armed conflicts, or terrorism. Such content is usually accompanied by warning labels.
On Wednesday evening in the U.S., CNBC found several posts on Instagram Reels that showed disturbing images of dead bodies, severe injuries, and violent attacks. These posts were marked as “Sensitive Content.”
Meta’s website explains that the company uses internal technology, including artificial intelligence and machine learning tools, along with a team of over 15,000 reviewers, to detect disturbing imagery. This technology helps prioritize and remove “the vast majority of violating content” before users even report it.
Additionally, Meta strives to avoid recommending content on its platforms that could be considered “low-quality, objectionable, sensitive, or unsuitable for younger audiences.”
Changing policy

The issue with Instagram Reels comes after Meta revealed plans to revise its moderation policies in an effort to better support free expression.
In a statement released on January 7, the company announced that it would adjust how it enforces certain content rules to minimize errors that had led to users being censored.
Meta explained that this would involve shifting its automated systems from detecting “all policy violations” to concentrating on “illegal and high-severity violations, such as terrorism, child sexual exploitation, drugs, fraud, and scams.” For less serious policy breaches, the company said it would rely on users to report issues before taking action.
Additionally, Meta acknowledged that its systems had been restricting too much content based on predictions that it “might” violate guidelines. The company is now working to eliminate most of these content restrictions.
CEO Mark Zuckerberg also announced that Meta would allow more political content and modify its third-party fact-checking system to adopt a “community notes” model, similar to the one used on Elon Musk’s platform, X.
These changes are widely viewed as an attempt by Zuckerberg to improve relations with U.S. President Donald Trump, who has previously criticized Meta’s moderation practices.
According to a Meta spokesperson on X, the CEO visited the White House earlier this month “to discuss how Meta can support the administration in strengthening American tech leadership globally.”
As part of a broader wave of tech layoffs in 2022 and 2023, Meta cut 21,000 jobs, nearly a quarter of its workforce, a move that affected many of its civic integrity and trust and safety teams.