Today, we applaud Meta for taking a big step to increase safety for its younger users. Platforms such as Instagram and TikTok have recently implemented tougher restrictions on teen-owned accounts, and Meta now places users under 16 into Teen Accounts by default, giving younger users a safer online community. The new policy, which went into effect in September, extends these protections across Facebook and Messenger and marks a step toward a safer online environment for adolescents.
Teen Accounts are set to private by default, restricting visibility to approved followers only. This measure is intended to shield young users from unsolicited communications and to protect their privacy. The accounts also come with in-platform restrictions designed to keep minors away from harmful content, including limits on direct messaging, tagging, and content posted to live feeds, which reduces the risk of children being exposed to harmful material.
Encouraging Healthier Usage Habits for Teens
To promote healthier usage habits among younger users, Meta has also set limits on how much time Teen Account holders spend on its platforms. The company's recent commitment to making the online world safer for teens is a step in the right direction, and it is meant to reassure parents and guardians that their kids' digital environments are being thoughtfully designed.
Meta is also expanding its use of artificial intelligence to detect accounts that likely belong to minors. The AI looks for signals that a user is probably underage, and when it identifies a likely teen, it applies the same restrictions that govern actual Teen Accounts. This proactive approach aims to protect minors from harmful content and dangerous online interactions before they occur.
Alongside the AI defenses, Meta's internal review teams are a key component of this effort. When employees are presented with evidence that a user is under 13, they mark the account as such and flag it for follow-up investigation. This process is intended to enable continuous review and re-evaluation of user accounts to verify adherence to age-appropriate settings.
At the same time, Meta has acknowledged that its age-verification technology has fallen short. The company is now working to improve these systems, which is essential to its goal of verifying users' ages accurately. The admission underscores the difficulty tech companies face in protecting user safety while respecting users' privacy.
In another sign of this momentum, Discord is rolling out its own child-safety protections, including a content filter the platform is testing to prevent children from being exposed to inappropriate content. The trend reflects an industry-wide focus on establishing safer online spaces for younger users.
Author’s Opinion
Meta’s recent moves represent a meaningful step toward addressing the longstanding issue of online safety for teens. While the execution is still in progress and needs improvement—particularly in the age-verification department—the proactive use of AI to protect minors is a promising development. These changes highlight the growing recognition in the tech industry of the need to prioritize the well-being of younger users, even if the road to perfect solutions remains challenging.