Australia’s online watchdog has accused the world’s biggest social platforms of failing to properly enforce the country’s prohibition on under-16s accessing their services, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about compliance by Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting inadequate practices including permitting banned users to make repeated attempts at age verification and insufficient measures to stop new account creation. In its first compliance assessment since the ban took effect, the regulator found numerous deficiencies and has now shifted from observation to active enforcement, warning that platforms must demonstrate they have implemented “appropriate systems and processes” to prevent children under 16 from accessing their services.
Regulatory Breaches Exposed in First Major Review
Australia’s eSafety Commissioner has outlined a concerning pattern of non-compliance among the world’s largest social media platforms in her first formal review since the ban took effect on 10 December. The report finds that Meta, Snap, TikTok and YouTube have collectively failed to establish sufficient safeguards to stop minors from using their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification processes, noting that some platforms have permitted children who originally declared themselves to be under 16 to later assert they were older, effectively circumventing the law’s intent.
The findings mark a notable escalation in regulatory action, with the eSafety Commissioner moving from monitoring to direct enforcement. The regulator has stressed that simply showing some children still maintain accounts is insufficient; rather, platforms must furnish substantive proof that they have put in place comprehensive systems and procedures designed to stop under-16s from opening accounts in the first place. This shift reflects the government’s determination to hold tech giants accountable, with possible sanctions looming for companies that fail to meet the legal requirements.
- Allowing formerly banned users to re-verify their age and regain account access
- Allowing repeated attempts at the same age assurance method without consequence
- Insufficient safeguards to block new under-16 accounts from being established
- Insufficient complaint mechanisms for parents and the general public
- Lack of clear information about enforcement efforts and account removals
The Magnitude of the Challenge
The considerable scale of social media usage amongst Australian young people underscores the compliance challenge confronting both the authorities and the platforms themselves. With numerous accounts already removed or restricted since the implementation of the ban, the figures paint a picture of extensive early non-compliance. The eSafety Commissioner’s conclusions suggest that the operational and technical barriers to enforcing age restrictions have proven far more complex than anticipated, with platforms struggling to distinguish genuine age declarations from fraudulent ones. This complexity has left enforcement authorities wrestling with the core issue of whether existing age verification systems are adequate to the task.
Beyond the operational challenges lies a wider question about the readiness of companies to place compliance ahead of user growth. Social media companies have long resisted strict identity verification requirements, citing privacy concerns and the genuine difficulty of verifying age digitally. However, the regulatory report suggests that some platforms may not be demonstrating adequate commitment to implementing the legally mandated systems. The move to active enforcement represents a critical juncture: either platforms will substantially upgrade their compliance infrastructure, or they risk facing significant penalties that could reshape their business models in Australia and possibly affect regulatory approaches internationally.
What the Figures Indicate
In the first month following the ban’s introduction, Australian officials reported that 4.7 million accounts had been restricted or removed. Whilst this figure initially appeared to demonstrate regulatory success, subsequent analysis reveals a more nuanced picture. The sheer volume of account removals suggests that many under-16s had managed to establish accounts in the first place, indicating that preventative measures were lacking. Additionally, the data raises questions about whether removed accounts reflect genuine enforcement or simply users deleting their accounts of their own accord in response to the new restrictions.
The limited transparency surrounding these figures has frustrated independent observers seeking to assess the ban’s genuine effectiveness. Platforms have provided scant detail about their implementation approaches, success rates, or the demographics of removed accounts. This lack of clarity makes it difficult for regulators and the wider public to judge whether the ban is working as intended or whether young people are simply finding other routes to social media. The Commissioner’s push for detailed evidence of structured compliance protocols reflects growing frustration with platforms’ reluctance to disclose complete information.
Industry Response and Opposition
The social media giants have responded to the regulator’s enforcement action with a mixture of compliance assurances and scepticism about the practical feasibility of the ban. Meta, which runs Facebook and Instagram, emphasised its commitment to adhering to Australian law whilst contending that precise age verification remains a significant industry-wide challenge. The company has advocated an alternative approach, suggesting that robust age verification and parental consent requirements implemented at the app store level would be more effective than platform-level enforcement. This stance reflects wider concerns across the industry that the current regulatory framework places an unrealistic burden on individual platforms.
Snap, the creator of Snapchat, has taken a more proactive public stance, stating that it had suspended 450,000 accounts following the ban’s implementation and asserting that it continues to suspend additional accounts each day. However, sector analysts question whether such figures reflect genuine compliance or simply reactive account management. The core conflict between platforms’ business models—which traditionally depended on maximising user engagement and growth—and the regulatory requirement to systematically remove an entire age group remains unresolved. Companies have long resisted stringent age verification, citing privacy concerns and technical limitations, creating a standoff between regulators and platforms over who bears responsibility for implementation.
- Meta contends age verification should occur at app store level rather than on individual platforms
- Snap claims to have suspended 450,000 accounts since the ban’s implementation in December
- Industry groups highlight privacy concerns and technical challenges as impediments to effective age verification
- Platforms contend they are making their best effort whilst questioning the ban’s overall effectiveness
Broader Questions About the Ban’s Effectiveness
As Australia’s under-16 social media ban enters its enforcement phase, fundamental questions remain about whether the law will accomplish its intended goals or merely push young users towards less regulated platforms. The regulator’s initial compliance assessment reveals that substantial gaps remain following implementation—children keep discovering ways to bypass age verification mechanisms, and platforms have struggled to stop new underage accounts from being established. Critics contend that the ban’s success depends not merely on regulatory oversight but on whether young people will genuinely abandon major social networks or simply migrate to alternative services, encrypted messaging applications, or virtual private networks designed to conceal their age and location.
The ban’s international ramifications add further complexity to assessments of its effectiveness. Countries including the United Kingdom, Canada, and various European states are observing Australia’s experiment closely, considering similar regulatory measures for their own citizens. If the ban fails to reduce children’s online activity or to protect them from harmful content, it could damage the case for similar measures elsewhere. Conversely, if implementation proves sufficiently strict to genuinely restrict underage usage, it may encourage other governments to pursue similar approaches. The outcome will likely influence global regulatory trends for years to come, ensuring Australia’s regulatory efforts are scrutinised far beyond its borders.
Who Benefits and Who Loses
Mental health advocates and child safety organisations have endorsed the ban as a necessary intervention against algorithmic manipulation and exposure to harmful content. Parents and educators maintain that taking young Australians off platforms designed to maximise engagement could reduce anxiety, improve sleep patterns, and cut exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, adding weight to these concerns. However, the ban also eliminates legitimate uses of social media for young people—maintaining friendships, accessing educational content, and engaging with online communities around common interests. The regulatory framework assumes harm outweighs benefit, a calculation that some young people and their families dispute.
The ban’s real-world effects extend beyond individual users to content creators, small businesses, and community organisations reliant on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now encounter legal barriers to participation. Small Australian businesses that rely on social media marketing can no longer reach younger demographics. Community groups, charities, and educational organisations struggle to connect with young people through channels they previously used effectively. Meanwhile, the ban inadvertently favours large technology companies with the resources to build age verification infrastructure, possibly reinforcing their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend far beyond the simple goal of child protection.
What Happens Next for Enforcement
Australia’s eSafety Commissioner has signalled a decisive shift from hands-off observation to active enforcement, marking a key milestone in the rollout of the under-16 ban. The watchdog will now gather evidence to establish whether platforms have failed to take “reasonable steps” to restrict children’s access, a legal standard that goes beyond simply documenting that children remain on these services. This approach requires tangible proof that companies have implemented suitable systems and procedures designed to keep out minors. The enforcement team has stated it will conduct enquiries methodically, building evidence that could result in considerable sanctions for non-compliance. This move from observation to action reflects growing dissatisfaction with the companies’ current approach and signals that voluntary cooperation alone is insufficient.
The enforcement phase raises important questions about the adequacy of penalties and the practical mechanisms for holding tech giants accountable. Australia’s regulatory framework provides the necessary tools, but their success hinges on the eSafety Commissioner’s willingness to pursue formal action and the platforms’ capacity to adapt effectively. Global regulators, notably in Britain and Europe, will keenly observe Australia’s approach and its results. An effective enforcement push could set a model for other jurisdictions considering similar bans, whilst failure might undermine the broader legislative framework. The next phase will prove crucial in determining whether Australia’s pioneering regulatory approach translates into substantive protection for young people or remains largely symbolic in its impact.
