
Instagram to alert parents if teens repeatedly search suicide or self-harm terms
Instagram will begin notifying parents when their teenage children repeatedly search for terms linked to suicide or self-harm, the company announced on Thursday, expanding its safety tools for young users amid growing legal pressure.
The alerts will be sent only to families enrolled in Instagram’s parental supervision programme and delivered through email, text message, WhatsApp, or in-app notifications, depending on the contact information provided. Meta said notifications will be triggered only after multiple similar searches within a short period, rather than by a single query.
The platform already blocks suicide and self-harm content from teen search results and redirects users to mental health helplines and support resources. Meta said the new feature aims to help parents intervene early if a teen’s activity suggests distress, while avoiding excessive alerts that could reduce their usefulness.
Meta did not release specific data on how often teens attempt such searches. However, internal surveys cited in earlier disclosures show that about 8 per cent of users aged 13 to 15 reported encountering self-harm or suicide-related content, indicating the issue affects a measurable minority of young users. Researchers note that vulnerable teens often use coded or indirect language, which automated systems may struggle to detect.
Meta said the alert system was developed with guidance from its Suicide and Self-Harm Advisory Group, but the company has not published independent test results showing how accurate or effective the guardrails are. Child safety advocates welcomed the move but warned that automated protections can miss context and that some teens may avoid seeking help if they fear parental notification. Meta also plans to introduce similar alerts related to teens’ conversations with artificial intelligence tools about suicide or self-harm.
The announcement comes as Meta Platforms faces lawsuits in the United States over claims that its platforms harm and addict minors and fail to protect them from dangerous content. CEO Mark Zuckerberg has said existing scientific research does not conclusively prove that social media causes mental health harm.
As scrutiny intensifies, the new alerts reflect Meta’s effort to show stronger oversight of teen safety, even as questions remain about the system’s long-term effectiveness.
