Australia's eSafety Commissioner has released regulatory guidance outlining the 'reasonable steps' that 'age-restricted social media platforms' (social media platforms) will need to take to restrict access for Australians under 16 years old (the Guidelines). Australia's social media ban for those under 16 years old is due to take effect from 10 December this year.
The release of the Guidelines follows the passage of amendments to the Online Safety Act 2021 (Cth) (Act) in December 2024 and the conclusion of the independent Age Assurance Technology Trial, whose final report (Report) assessed the effectiveness of age assurance technology in restricting social media access for children under 16.
Key takeaways around social media age restrictions
The Guidelines take a principles-based approach rather than imposing prescriptive requirements on social media platforms. In particular, the Guidelines highlight the following:
- In the lead-up to and after the ban comes into effect, social media platforms are expected to focus on detecting and deactivating the existing accounts of children under the age of 16, in addition to taking reasonable steps to prevent these former users from immediately creating new accounts;
- Platforms may engage third parties to provide age assurance technology, develop the technology in-house, or deploy a combination of the two;
- Social media platforms are encouraged to take a 'successive validation or waterfall approach' to age assurance measures at the point of account creation and implement a combination of age assurance methods (an illustrative sketch of this approach follows this list); and
- Strong consideration should be given to how social media platforms can minimise the personal information that they collect and hold, while still ensuring that they are taking reasonable steps to prevent access for children under 16 (if necessary, platforms need to be able to demonstrate to the eSafety Commissioner that those steps were reasonable).
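To make the 'waterfall' idea concrete, the following minimal sketch (in Python) shows one way successive validation could be structured, with fallback from one age assurance method to the next. The function names, confidence threshold and result structure are hypothetical illustrations and are not prescribed by the Guidelines.

```python
# Minimal sketch of a 'successive validation' (waterfall) approach to age
# assurance at sign-up. Method names, thresholds and data structures are
# hypothetical illustrations, not requirements drawn from the Guidelines.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class AgeCheckResult:
    estimated_age: Optional[float]  # None if the method could not reach a view
    confidence: float               # 0.0 to 1.0


def waterfall_age_check(signup_data: dict,
                        methods: list[Callable[[dict], AgeCheckResult]],
                        min_confidence: float = 0.9,
                        age_threshold: int = 16) -> bool:
    """Try age assurance methods in order, falling back to the next method
    whenever the current one cannot reach the required confidence."""
    for method in methods:
        result = method(signup_data)
        if result.estimated_age is not None and result.confidence >= min_confidence:
            return result.estimated_age >= age_threshold
    # No method reached the confidence bar: deny account creation by default.
    return False
```

In practice, the methods might be ordered from least to most privacy-intrusive (for example, age inference from existing signals, then age estimation, then document-based age verification), consistent with the data-minimisation principle discussed below.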
The Guidelines were informed by the Report, which focused on the following age assurance methods:
- age verification (relying on a user's date of birth from official identification document details and linking the identification details to the user);
- age estimation (using artificial intelligence to deduce a user's age based on biometric or behavioural features, such as a face scan); and
- age inference (using available verified information from the internet which implies the user is over or under a certain age, or within an age range).
Guiding principles for social media platforms
The Guidelines include a series of 'guiding principles' that should inform the reasonable steps taken by social media platforms to comply with the Act. These principles, and the suggested steps for social media platforms to take in relation to them, are outlined below.
Reliability, accuracy, robustness and effectiveness
- Determine an 'acceptable error threshold' for age assurance based on the risk, service type and user base of the social media platform, and employ measures which achieve this level of accuracy (noting there is no minimum accuracy level required of the technology);
- Consider applying age filters of 18+ for platforms that are tailored to adults (e.g. dating apps);
- Where age inference or age estimation technology is employed, ensure appropriate buffer thresholds apply;
- Mitigate known and reasonably foreseeable circumvention risks (e.g. through red-team testing and third-party audits); and
- Undertake and record ongoing internal testing procedures.
Privacy-preservation and data-minimisation
Where possible, use non-personal information and avoid handling sensitive information. The Guidelines specify that there is no requirement on social media platforms to retain personal information as a record of individual age checks.
Accessibility, inclusivity and fairness
- Consider how the technology performs across a diverse range of appearances, abilities and capacities to ensure systems produce fair outcomes for all users;
- Build processes to mitigate the impact of accessibility or bias issues;
- Provide end-users with clear and easy-to-understand information regarding the age assurance technology being employed by the platform; and
- Provide end-users with a choice of age assurance methods and a variety of options to verify their age (i.e. allowing for non-document age verification methods).
Transparency
- Ensure there are plain language explanations of when and why age assurance is required;
- Provide guidance as to the age assurance options available, what type of personal information will be collected, how the information will be stored and dealt with, and what will happen to the age-restricted user accounts; and
- Provide end-users with examples of what legitimate age assurance technology looks like, to help them avoid phishing and scam activity.
Proportionality
- Note that more robust measures will be required for social media platforms and services with a higher risk profile, including those with higher existing numbers of children and young people holding accounts, a prevalence of features associated with harm (algorithmic content recommendations, persistent notifications, endless scrolling), or a prevalence of violent material or material promoting unsafe eating habits;
- Employ a range of tools to tailor age assurance methods to users based on their risk profile; and
- Avoid unreasonably interfering with end-users' rights, for example through 'over-blocking'.
Evidence-based and responsive to emerging technology and risk
Review the effectiveness of the age assurance technologies utilised and update methods as circumvention tactics evolve, new scams and data breach opportunities arise, and end-user behaviour and demographics change.
Specific measures to comply with the regime
The Guidelines have also identified specific measures that should be taken by social media platforms to comply with the Act, as set out below.
Detecting and deactivating accounts belonging to under 16 year olds
- Use a combination of existing user data and signals, including location-related and age-related signals, to infer users' age (an illustrative sketch follows this list);
- Provide simple pathways for people to report potential underage account holders; and
- Deactivate accounts of self-declared under-16 users.
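As a rough illustration of how existing signals might be combined into an underage-likelihood score that queues accounts for age assurance (rather than automatic deactivation), consider the sketch below. The signal names, weights and threshold are hypothetical and are not drawn from the Guidelines.

```python
# Hypothetical sketch of combining existing account signals into a score used
# to queue accounts for further age assurance. Signal names, weights and the
# review threshold are illustrative only.
def underage_likelihood(signals: dict) -> float:
    """Return a 0-1 score that the account holder may be under 16."""
    score = 0.0
    declared_age = signals.get("self_declared_age")
    if declared_age is not None and declared_age < 16:
        score += 0.6
    if signals.get("school_year_mentioned"):     # e.g. profile text references Year 9
        score += 0.2
    if signals.get("minor_audience_pattern"):    # e.g. follower or content patterns
        score += 0.2
    return min(score, 1.0)


def queue_for_age_assurance(accounts: list[dict], threshold: float = 0.5) -> list[dict]:
    """Accounts above the threshold are referred for age assurance, not deactivated outright."""
    return [a for a in accounts if underage_likelihood(a.get("signals", {})) >= threshold]
```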
Preventing people under 16 years old from creating accounts
- Utilise age assurance methods at the point of account creation; and
- Collect non-identifiable data to prevent attempts to create new accounts from other devices or with new login details.
Preventing circumvention
- Prohibit changes to users' date of birth unless age assurance is undertaken, and monitor changes to account details indicative of account transfers; and
- Monitor the creation of multiple accounts from the same device or IP address and implement VPN detection services (a simple illustration of such monitoring follows below).
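One simple way to monitor repeated account creation from the same device or IP address is a sliding-window counter, sketched below. The window length, limit and in-memory storage are hypothetical simplifications; a production system would typically rely on dedicated rate-limiting infrastructure.

```python
# Hypothetical sketch of flagging repeated sign-ups from the same device or IP
# within a look-back window. Limits and storage are illustrative only.
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 24 * 60 * 60   # 24-hour look-back window (illustrative)
MAX_SIGNUPS_PER_KEY = 3         # illustrative per-device / per-IP limit

_signup_log: dict[str, deque] = defaultdict(deque)


def record_and_check_signup(device_or_ip: str, now: Optional[float] = None) -> bool:
    """Record a sign-up attempt; return True if it should be escalated for
    additional age assurance because of too many recent sign-ups."""
    now = time.time() if now is None else now
    log = _signup_log[device_or_ip]
    while log and now - log[0] > WINDOW_SECONDS:  # drop entries outside the window
        log.popleft()
    log.append(now)
    return len(log) > MAX_SIGNUPS_PER_KEY
```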
Allowing users to make complaints or seek review of decisions
- Provide clear instructions on how to make complaints to the platform and the Office of the Australian Information Commissioner (OAIC);
- Refrain from asking for any further government-issued identification material; and
- Avoid implementing fully-automated reviews.
Promoting transparency and accountability
- Make terms of use, policies and procedures easily accessible to end-users;
- Ensure senior management are involved in the oversight of the implementation of the measures;
- Ensure the systems are built to handle an increase in reporting;
- Ensure staff have received adequate training prior to the Act's amendments commencing;
- Continue to invest in improving the platforms' ability to conduct age assurance; and
- Maintain records regarding age assurance methods and what 'reasonable steps' have been taken.
What are not considered 'reasonable steps'
Helpfully, the Guidelines also set out a few examples of what measures would not constitute reasonable steps. These include:
- relying entirely on self-declarations of date of birth by prospective account holders;
- measures which allow under-age users to hold accounts for extended periods of time before an age determination is made;
- measures that do not adequately prevent those under 16 who recently had their accounts deactivated from creating a new one; and
- measures which falsely identify users as under-age and prevent people 16 years and over from holding accounts.
For further information about the social media ban, see Social media use in Australia to be restricted for under 16s.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.