YouTube Says Australia’s Upcoming Social Media Ban “Won’t Keep Children Safe”

YouTube warns that Australia's ban on under-16s using social media platforms, while well-intentioned, may lead to unintended consequences and fail to deliver the promised online safety outcomes.

What YouTube Is Saying

YouTube has urged the Australian government to rethink its plan to ban children under 16 from using social media platforms. While the company calls the law "well-intentioned," it warns that the measure may not achieve its goal of making children safer online.

The platform also argues it should be exempted from the ban, claiming that it is not a social media service.

Key Points of Concern

  • The law will apply to YouTube, Facebook, Instagram, TikTok, and other platforms.
  • YouTube’s counsel told a Senate committee that enforcement will be extremely difficult and that many important safety issues won’t be addressed simply by barring under-16s.
  • Experts say the legislation is vague in parts, potentially symbolic, and may lack clear guidance on how age verification or under-age detection will work.
  • Firms breaking the law could face hefty fines — up to A$49.5 million (around US$32 million).

Why Legislating Isn’t Always Enough

YouTube points out that laws alone won't fix online safety challenges. It argues that:

  • Real safety comes from better tools, clearer rules from platforms, and education for children and parents.
  • Simply restricting access could push young users into less regulated spaces online.
  • Enforcement is a key problem — verifying ages, monitoring content, and ensuring platforms comply are costly and complex.

The Australian Government’s Plan

Prime Minister Anthony Albanese’s government is pushing forward with plans to implement a law by the end of 2025 that bans under-16s from most social media platforms.

Parts of the law will require platforms to deactivate the accounts of under-age users once they are identified.

However, the law does not currently require age verification for all users; it instead asks platforms to take "reasonable steps" to detect and deactivate accounts that are clearly under-age.

Why This Debate Matters

  • It’s a test of whether online safety laws can actually protect children or just create enforcement headaches.
  • The outcome could shape how other countries regulate social media and protect minors.
  • It raises tough questions about platform responsibility versus parental and educational roles.