Beginning 10 December, social media companies operating in Australia will have to prove they are taking “reasonable steps” to stop anyone under 16 from creating accounts. They must also shut down or remove existing accounts held by users below that age.
The new rules are set out in the Online Safety Amendment (Social Media Minimum Age) Act 2024, which updates Australia’s existing online safety laws to formally prohibit social media companies from allowing under-16s to hold accounts.
The federal government describes the measure—promoted as a global first and widely supported by parents—as an attempt to shield children from online risks. Officials argue that many platforms use design features that push young users to stay online longer and expose them to harmful or distressing content.
A government-commissioned study released earlier this year found that 96% of Australian children aged 10–15 use social media. Of those, about 70% reported encountering damaging material, including misogynistic videos, violent content, or posts encouraging disordered eating or self-harm.
The same study reported that one in seven had experienced grooming-type behaviour, and more than half said they had been cyberbullied.
Which platforms are included?
So far, the ban applies to Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, Kick, and Twitch.
The government is under pressure to extend the rule to online gaming platforms, and some—like Roblox and Discord—have already introduced limited age checks, likely hoping to avoid being added to the list.
Authorities say they will continue reviewing which services should be covered, based on whether a platform’s primary or significant purpose is social interaction, whether it allows users to connect with others, and whether users can post content.
Some services are excluded: YouTube Kids, Google Classroom, and WhatsApp did not meet the criteria. Children will also still be able to view most content on YouTube without an account.
How will enforcement work?
Penalties won’t fall on parents or young users, but on the platforms themselves. Companies that fail to comply could face fines of up to A$49.5 million for serious or repeated breaches.
To meet their obligations, platforms must deploy age-assurance tools—though the government has not mandated specific technologies. Options being discussed include checks using government identification, facial or voice analysis, and behavioural age estimation (i.e., inferring age through online activity).
Platforms cannot rely on self-reported birthdays or parental approval.
Meta has already announced that it will begin shutting down teen accounts from 4 December, offering reinstatement for those wrongly removed if they verify their age through government ID or a video selfie. Other companies have not yet revealed their plans.
Will the ban be effective?
Experts say it’s too early to tell. Concerns remain that age-verification tools can misidentify users—blocking adults while allowing some underage children through. The government’s own review found facial-estimation systems are least accurate for the very age group the ban targets.
There are also doubts about whether the fines are large enough to drive compliance. Former Facebook executive Stephen Scheeler has noted that Meta earns roughly A$50 million in under two hours.
Critics argue that even strict enforcement won’t remove all online risks for children because gaming sites, dating platforms, and AI chatbots fall outside the ban. Some point out that young people who rely on social media for community or support might become more isolated. Others argue that strengthening digital-literacy education would be a more effective strategy.
Communications Minister Anika Wells has acknowledged that the rollout may be “messy,” saying major reforms rarely look neat at the start.
What about privacy and data security?
A major concern is the volume of sensitive information that could be collected in the verification process. Australia has suffered several large data breaches in recent years, raising fears about how platforms will store and protect this information.
The government says the legislation includes strict safeguards: any personal data used for age verification must be used only for that purpose, then destroyed, with heavy penalties for mishandling. Platforms must also offer age-assurance options that do not require government IDs.