Meta has announced sweeping new safety updates for teenage users on Instagram, following the removal of nearly 135,000 accounts involved in sexualising children earlier this year.
The announcement marks the latest step in Meta’s broader campaign to combat child exploitation across its platforms, as regulatory scrutiny intensifies in the US and UK.
The new changes, introduced on Wednesday, include direct messaging protections designed to help teens identify and block suspicious accounts more easily.
Meta is also introducing automatic restrictions for teen accounts and for adult-managed accounts that represent children, limiting interactions from unknown users and filtering out potentially offensive content.
These measures are being rolled out globally in response to growing pressure from lawmakers and child safety advocates who claim Meta has not done enough to protect young users.
Instagram removed 135,000 child-exploitative accounts this year
According to Meta, the 135,000 Instagram accounts taken down were found to be engaging in inappropriate behaviour toward children.
These included adult-managed accounts that posted content featuring children and received sexualised comments or direct messages requesting explicit material.
Meta linked 500,000 additional Instagram and Facebook accounts to those original exploitative profiles and removed them as well.
These secondary accounts were part of wider networks that amplified or interacted with the original content, according to internal investigations.
The platform reiterated that Instagram requires users to be at least 13 years old.
However, it allows adults to manage profiles that represent younger children, provided the account bio clearly states this arrangement.
Meta said such accounts will now be automatically placed under the platform’s strictest safety settings.
Meta expands automatic protections and reporting tools for teens
As part of the latest updates, Meta is rolling out more robust tools for teens to manage their safety.
Teen users will now be able to access additional information about people who message them, such as when an account was created.
This is intended to help teens spot potentially suspicious behaviour, especially from accounts that are newly created or anonymous.
Meta has also introduced a combined option for blocking and reporting problematic accounts, allowing teens to complete both actions in a single step.
The company reported that in June alone, teens blocked accounts 1 million times and submitted another 1 million reports after seeing a Safety Notice warning.
Instagram accounts representing children will now automatically adopt stricter message and comment controls.
This includes filtering messages from unknown users and reducing the visibility of these accounts to people not following them.
Offensive comments will be screened and restricted through default settings.
Broader push amid rising legal and regulatory scrutiny
The new safety push comes as Meta faces a wave of criticism and legal challenges from governments and regulators, particularly in the US.
Several state attorneys general have accused the company of using addictive features that negatively impact children’s mental health.
These allegations have amplified calls for legislation that would mandate stronger protections for young users.
In May, Congress reintroduced the Kids Online Safety Act, which would impose a legal “duty of care” on social media platforms, requiring them to shield children from harmful content and behaviour.
The bill had previously stalled in 2024 but has regained momentum amid high-profile lawsuits and continued public concern.
Outside of Meta, other platforms are also facing legal challenges.
In September, New Mexico filed a lawsuit against Snapchat, alleging the platform allowed predators to easily target minors through sextortion schemes.
Meta removed 10 million fake profiles in 2025 to fight impersonation
In a separate announcement last week, Meta disclosed that it had removed around 10 million profiles during the first half of 2025 for impersonating prominent content creators.
This effort, the company said, is part of a campaign to reduce “spammy content” and restore trust in verified online identities.
The impersonation crackdown is linked to concerns that many fake accounts are not only deceptive but also play a role in targeting minors or spreading harmful material.
Meta has increased its use of automation and AI tools to detect such accounts at scale.
With child protection remaining a top concern for lawmakers and regulators globally, how effectively Meta enforces its new policies will likely determine whether upcoming legislation places stricter demands on the tech giant in the coming months.