Meta is deploying AI tools to identify underage users across Facebook and Instagram. The systems apply advanced visual analysis to photos and videos, examining indicators such as physical proportions and movement patterns to estimate age without relying on facial recognition.
Innovative AI Measures For Age Verification
According to the company, detection relies on general visual signals rather than individual biometric identification. Visual analysis is combined with text-based signals and patterns of user interaction to improve the identification of accounts that may belong to underage users. Integration of multiple data points is intended to strengthen enforcement of age-related policies while avoiding direct biometric tracking.
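Meta has not published how these signals are weighed against one another, but the idea of combining visual, text, and interaction signals into a single decision can be sketched as a weighted score. Everything below (function name, weights, threshold) is an illustrative assumption, not Meta's actual model:

```python
# Hypothetical sketch of multi-signal age estimation. The weights and
# threshold are illustrative assumptions; Meta's real models and
# decision logic are not public.

def flag_possible_minor(visual_score, text_score, interaction_score,
                        threshold=0.7):
    """Combine per-signal probabilities (each in 0..1) that an account
    belongs to an underage user into one weighted score."""
    weights = {"visual": 0.5, "text": 0.3, "interaction": 0.2}
    combined = (weights["visual"] * visual_score
                + weights["text"] * text_score
                + weights["interaction"] * interaction_score)
    # Flag the account for review only when the combined evidence
    # crosses the threshold; no single signal decides alone.
    return combined >= threshold, combined
```

The point of such a design is that no single data point (and no biometric identifier) is decisive on its own; the account is flagged only when multiple independent signals agree.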
Expanding The AI Deployment
Initial rollout began in selected regions, with plans for broader global expansion. Current capabilities focus on static and recorded content, while future updates are expected to extend analysis to live and interactive features, including Instagram Live and Facebook Groups. Expansion into additional formats aims to increase coverage of user activity across platforms.
Enforcement And User Accountability
Accounts identified as potentially underage may be temporarily deactivated. Affected users are required to complete an age verification process to restore access. This system is designed to apply platform rules more consistently while maintaining user accountability.
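The enforcement flow described above (flag, temporarily deactivate, restore after successful verification) amounts to a small state machine. The following sketch uses assumed state names and method signatures purely to make the flow concrete; it is not Meta's API:

```python
# Illustrative state machine for the enforcement flow: a flagged
# account is temporarily suspended and reactivated only after age
# verification succeeds. All names here are assumptions.

ACTIVE, SUSPENDED = "active", "suspended"

class Account:
    def __init__(self):
        self.state = ACTIVE

    def flag_as_possible_minor(self):
        # Temporary deactivation pending verification
        self.state = SUSPENDED

    def complete_age_verification(self, verified: bool):
        # Access is restored only when verification succeeds;
        # a failed attempt leaves the account suspended.
        if self.state == SUSPENDED and verified:
            self.state = ACTIVE
        return self.state
```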
Context Amid Legal Challenges
Growing legal scrutiny has increased pressure on large technology companies to strengthen child safety measures. A jury in New Mexico ordered Meta to pay $375 million in a civil case related to child protection concerns and alleged misrepresentation. The case forms part of broader regulatory attention to how platforms manage user safety and content oversight.
Enhanced Controls With Teen Accounts
Meta is also expanding stricter default settings for younger users through its “Teen Accounts” feature on Instagram. Safeguards include limitations on direct messaging, proactive filtering of potentially harmful content, and default privacy settings. The rollout currently spans 27 countries across the EU and Brazil, with similar measures planned for Facebook in the U.S., the U.K., and the EU.