Australia’s online safety watchdog has accused the world’s largest social media companies of failing to adequately implement the country’s prohibition on under-16s accessing their platforms, despite the legislation coming into force in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting inadequate practices including allowing blocked users to make repeated attempts at age verification and weak safeguards against under-16s creating new accounts. In its first compliance assessment since the ban took effect, the regulator identified multiple shortcomings and has now moved from monitoring to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to stop under-16s from using their services.
Compliance Failures Revealed in First Major Review
Australia’s eSafety Commissioner has outlined a troubling pattern of non-compliance among the world’s biggest social media platforms in her first review since the ban took effect on 10 December. The report shows that Meta, Snap, TikTok and YouTube have collectively failed to implement sufficient safeguards to stop minors from accessing their services. Julie Inman Grant raised significant concerns about structural gaps in age verification systems, highlighting that some platforms have permitted children who initially declared themselves under 16 to later assert they were older, effectively circumventing the law’s intent.
The findings mark a significant escalation in regulatory action, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has emphasised that simply showing some children still maintain accounts is insufficient; rather, platforms must provide concrete evidence that they have established robust systems and processes to stop under-16s from opening accounts in the first place. This shift reflects the government’s commitment to holding tech giants responsible, with possible sanctions looming for companies that do not meet the legal requirements. The shortcomings identified include:
- Enabling previously banned users to re-verify their age and restore account access
- Enabling repeated attempts at the same age assurance method with no repercussions
- Inadequate systems to prevent under-16s from creating new accounts
- Insufficient reporting tools for families and the wider community
- Absence of clear information about regulatory measures and user account terminations
The Magnitude of the Challenge
The sheer scale of social media use amongst Australian young people underscores the compliance challenge confronting both the authorities and the platforms in question. With millions of accounts already restricted or removed since the ban came into effect, the figures point to widespread initial non-compliance. The eSafety Commissioner’s findings indicate that the operational and technical barriers to implementing age restrictions have proved considerably more complex than expected, with platforms struggling to distinguish genuine age confirmations from false claims. This complexity has left enforcement authorities wrestling with the fundamental question of whether existing age verification systems are fit for purpose.
Beyond the technical obstacles lies a wider question about platforms’ willingness to prioritise compliance over user growth. Social media companies have long resisted stringent age verification measures, citing data protection concerns and the genuine difficulty of verifying age digitally. However, the Commissioner’s report suggests that some platforms may not be making sufficient effort to implement the systems required by law. The shift towards active enforcement represents a pivotal moment: either platforms will substantially upgrade their compliance systems, or they stand to incur significant penalties that could transform their operations in Australia and potentially influence regulatory approaches internationally.
What the Data Shows
In the first month after the ban’s introduction, Australian authorities reported that 4.7 million accounts had been suspended or deleted. Whilst this statistic initially appeared to indicate successful compliance, subsequent analysis reveals a more nuanced picture. The sheer volume of account removals suggests that many under-16s had been able to set up accounts in the first place, indicating that preventative measures were inadequate. Furthermore, the data raises questions about whether removed accounts reflect genuine enforcement or merely users voluntarily deleting their accounts in response to the new restrictions.
The limited transparency around these figures has troubled independent observers seeking to assess the ban’s true effectiveness. Platforms have disclosed few details about their enforcement methodologies, success rates, or the characteristics of deleted profiles. This opacity makes it hard for regulators and the wider public to evaluate whether the ban is working as designed or whether young people are simply finding other ways to access social media. The Commissioner’s push for detailed evidence of systematic compliance measures reflects growing frustration with platforms’ reluctance to disclose full information.
Industry Reaction and Pushback
The social media giants have responded to the regulator’s enforcement action with a combination of compliance assurances and doubts about the ban’s practicality. Meta, which operates Facebook and Instagram, stressed its commitment to complying with Australian law whilst arguing that accurate age determination remains a major challenge across the industry. The company has called for an alternative approach, suggesting that robust age verification and parental consent requirements implemented at the app store level would be more effective than platform-level enforcement. This stance reflects broader industry concerns that the existing regulatory framework places an unrealistic burden on individual platforms.
Snap, the maker of Snapchat, has adopted a more assertive public position, stating that it had locked 450,000 accounts following the ban’s implementation and that it continues to suspend additional accounts each day. However, industry observers question whether such figures demonstrate genuine compliance or merely reactive account management. The core tension between platforms’ commercial models, which have historically relied on maximising user engagement and growth, and the regulatory requirement to systematically remove an entire age demographic remains unresolved. Companies have long resisted stringent age verification, pointing to privacy issues and technical constraints, creating a standoff between authorities and platforms over who bears responsibility for enforcement.
- Meta maintains age verification should take place at the app store level rather than on individual platforms
- Snap says it has locked 450,000 user accounts following the ban’s implementation in December
- Industry groups highlight privacy concerns and technical obstacles as barriers to effective age verification
- Platforms maintain they are making good-faith efforts whilst questioning the ban’s overall effectiveness
Broader Questions About the Ban’s Effectiveness
As Australia’s under-16 social media ban enters its enforcement stage, fundamental questions persist about whether the law will accomplish its intended goals or merely drive young users towards unregulated platforms. The regulator’s initial compliance assessment reveals substantial gaps: children keep finding ways to circumvent age verification systems, and platforms have struggled to stop new underage accounts from being created. Critics contend that the ban’s success depends not merely on regulatory oversight but on whether young people will genuinely abandon major social networks or simply shift towards alternative services, encrypted messaging applications, or VPNs that mask their location.
The ban’s global implications add further complexity to assessments of its success. Countries including the United Kingdom, Canada, and several European nations are watching Australia’s approach closely as they explore similar regulatory measures for their own citizens. If the ban proves ineffective at reducing children’s online activity or fails to protect them from harmful content, it could weaken the case for equivalent legislation elsewhere. Conversely, if implementation proves strict enough to meaningfully limit underage participation, it may encourage other governments to adopt comparable measures. The outcome could shape international regulatory direction for the foreseeable future, meaning Australia’s enforcement efforts will be scrutinised far beyond its borders.
Who Benefits and Who Is Disadvantaged
Mental health campaigners and child safety organisations have backed the ban as a necessary intervention against algorithmic manipulation and exposure to harmful content. Parents and educators contend that taking young Australians off platforms built to maximise engagement could reduce anxiety, enhance sleep quality, and decrease exposure to cyberbullying. Tech companies’ own research has recognised the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also removes legitimate uses of social media for young people—maintaining friendships, accessing educational content, and engaging with online communities around shared interests. The regulatory approach assumes harm exceeds benefit, a calculation that some young people and their families challenge.
The ban’s practical impact extends beyond individual users to content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that rely on social media marketing are cut off from younger audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban unintentionally favours large technology companies with the resources to build age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects reach well beyond the straightforward goal of child protection.
What Comes Next for Compliance Monitoring
Australia’s eSafety Commissioner has signalled a marked change from hands-off observation to direct intervention, marking a critical turning point in the enforcement of the youth access ban. The regulator will now gather information to establish whether companies have failed to take “reasonable steps” to keep under-16s off their services, a statutory benchmark that goes further than simply recording that children remain on these platforms. This approach requires demonstrable evidence that companies have established appropriate systems and processes designed to keep out minors. The enforcement team has stated it will conduct its enquiries carefully, building cases that could lead to considerable sanctions for non-compliance. This move from oversight to enforcement reflects mounting frustration with the platforms’ current efforts and indicates that voluntary cooperation alone is insufficient.
The enforcement phase raises significant questions about the adequacy of penalties and the operational mechanisms for holding platforms accountable. Australia’s regulatory framework provides enforcement tools, but their success depends on the eSafety Commissioner’s willingness to pursue regulatory action and the platforms’ capacity to change meaningfully. International observers, especially regulators in the United Kingdom and European Union, will be watching Australia’s implementation tactics and outcomes closely. A successful enforcement campaign could create a template for other countries weighing equivalent prohibitions, whilst inadequate results might undermine confidence in this regulatory model. The coming months will determine whether Australia’s innovative statutory framework translates into substantive protection for young people or becomes largely performative in its impact.
