Meta is dramatically expanding its use of artificial intelligence for content enforcement across Facebook and Instagram while scaling back its dependence on third-party vendors. The change affects how the company handles sensitive categories such as terrorism, child exploitation, drug sales, fraud, and scams.

AI Takes the Lead in Enforcement

The tech giant claims its new AI systems already outperform human review teams at detecting violations. Specifically, tests show the algorithms identify twice as much adult sexual solicitation content while cutting error rates by more than 60% relative to human reviewers. Beyond detection, the AI is blocking roughly 5,000 scams per day and flagging impersonation attempts, including account takeovers signaled by suspicious login activity or sudden profile changes.
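Meta has not published how its takeover detection works, but the signals mentioned (unusual logins, sudden profile changes) lend themselves to a simple risk score. The sketch below is a hypothetical illustration, not Meta's system; the `LoginEvent` fields, thresholds, and weights are all assumptions chosen for clarity.

```python
from dataclasses import dataclass


@dataclass
class LoginEvent:
    user_id: str
    country: str               # geolocated from the login IP (assumed field)
    device_id: str
    changed_profile_name: bool
    changed_email: bool


def takeover_risk(event: LoginEvent, known_countries: set,
                  known_devices: set) -> float:
    """Score a login from 0.0 to 1.0 using additive heuristics.

    Weights are illustrative only; a production system would learn
    them from labeled takeover incidents.
    """
    score = 0.0
    if event.country not in known_countries:
        score += 0.4   # login from a country the account has never used
    if event.device_id not in known_devices:
        score += 0.3   # first time seeing this device
    if event.changed_profile_name:
        score += 0.2   # immediate profile edits often follow takeovers
    if event.changed_email:
        score += 0.1   # swapping the recovery email locks the owner out
    return min(score, 1.0)
```

In this toy model, a login from an unfamiliar country on a new device that also changes the recovery email scores 0.8, while a routine login from a known device scores 0.0.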

Why This Matters

This shift isn’t just about efficiency. Meta has faced growing criticism over its content moderation policies, including lawsuits alleging harm to young users. The move to AI allows Meta to scale enforcement without increasing personnel costs, which is crucial given the sheer volume of content posted daily. It also reduces the risk of human moderators being exposed to disturbing material. The company notes that human reviewers will still handle high-risk decisions (such as account disablement appeals) and reports to law enforcement.

Less Reliance on Outsourcing

Meta intends to reduce its reliance on external vendors for content moderation tasks, suggesting a desire for greater control over its enforcement processes. The company believes AI is better suited to repetitive work (such as reviewing graphic content) and to keeping pace with the evolving tactics of bad actors (such as illicit drug sales).

New AI Support Assistant

Alongside the enforcement changes, Meta launched a 24/7 AI support assistant for Facebook and Instagram users, accessible on mobile and desktop. This could further reduce the need for human customer service.

Conclusion: Meta’s move towards AI-driven content enforcement represents a significant evolution in how social media platforms approach moderation. While the goal is increased accuracy and efficiency, the long-term impact on user experience and potential biases within AI systems remains to be seen.