In a media release on 28 November 2021, the Prime Minister announced a "world-leading" move to combat online trolls through the introduction of "new court powers to force global media giants to unmask anonymous online trolls and better protect Australians online".

The draft Social Media (Anti-Trolling) Bill 2021 (the Anti-Troll Bill) was released on 1 December 2021, with the accompanying Explanatory Paper stating that the legislation was intended to address the implications of the decision of the High Court in Fairfax Media Publications Pty Ltd v Voller (Voller). In September 2021, the High Court ruled in Voller that media companies could be liable as "publishers" of defamatory comments made on their Facebook pages by third parties. You can read our analysis of that decision here.

In this article we explore whether the Anti-Troll Bill achieves the government's objective of unmasking trolls, and how the draft legislation could be strengthened to better balance the protection of online privacy against the harm that online conduct can cause to individuals.

Impact of the Voller decision

The High Court decision in Voller made clear that social media account holders can be liable as publishers of defamatory comments made by third party users, including in circumstances where they are not aware of the comments, and even when the comments are unrelated to the content posted to that social media account.

Seemingly in answer to the question posed in our previous article, "Will the comments section become a thing of the past?", US news giant CNN has, since the Voller decision, disabled its Facebook pages in Australia to ensure it would not be liable for any defamatory comments made by third party users. Further, although not explicitly stated as being in response to Voller, it is becoming increasingly common for account holders to close the comments sections on some or all of their posts, or to limit who can comment on a post. For example, the ABC News Instagram account recently closed the comments section on a post, stating "We've closed comments on this post to prevent harmful, defamatory or otherwise unlawful user contributions."

The Anti-Troll Bill seeks to shift liability from these account holders to either the social media service providers, which have a greater ability to control and/or identify the "trolls", or the commenters themselves. The Bill also seeks to give complainants the ability to identify online trolls and/or seek to have defamatory comments deleted.

The Federal Government has also announced that it will seek to establish a House Select Committee to inquire into online harms.

Further, we note that the National Party's Anne Webster MP has also introduced a private member's bill, the Social Media (Basic Expectations and Defamation) Bill 2021, which seeks to make social media service providers liable for defamatory material hosted on their platforms if it is not removed within a reasonable timeframe.

Social media services, account holders, commenters, and complainants

Before we delve into the operation of the Anti-Troll Bill, it is important to understand the roles of each of the relevant stakeholders.

First, we have social media services - Facebook, Instagram, TikTok, Twitter; you get the idea. Presently, social media services are "hosting service providers" which are able to rely on the defence of innocent dissemination under section 235 of the Online Safety Act 2021 (Cth).

Next we have persons who maintain a social media page (account holders). According to Voller, account holders can be liable for defamatory comments made on material posted to their page, regardless of whether the comment is relevant to the post.

Then we have social media users, which we will refer to as commenters. A commenter is anyone who contributes to the comments section of a post made by an account holder on a social media service. Commenters include any persons making allegedly defamatory comments (the "trolls"), who are able to conceal their identity on social media. Of course, commenters also include any user making a comment, whether friendly, neutral, abusive, vulgar, funny, or anything in between.

Finally we have anyone who considers that they may have a cause of action in defamation (complainants) for allegedly defamatory comments made by a commenter on an account holder's page of a social media service.

As alluded to above, presently, in the event of allegedly defamatory comment(s), because social media services can rely on the innocent dissemination defence, and commenters can often rely on anonymity, complainants have little recourse other than to pursue account holders as publishers of such comments.

The Anti-Troll Bill

The Anti-Troll Bill has four broad functions:

  1. it makes clear that an Australian account holder is not liable as a publisher of defamatory comments made by a commenter in Australia, and that a social media service is the publisher of such a comment;
  2. it provides that the social media service cannot rely on section 235 of the Online Safety Act 2021 (Cth) or the innocent dissemination defence if it is part of a defamation proceeding in Australia related to such a comment;
  3. it provides a defence to such a social media service if it has a complaints scheme that meets the prescribed requirements (described further below); and
  4. it provides a mechanism for a complainant to seek a Court order requiring disclosure of identifying information and/or location of the commenter (described further below).

The prescribed complaints scheme

The prescribed complaints scheme is to operate broadly as outlined below. Certain of these requirements are discussed in more detail in the analysis section.

If a complainant has reason to believe that they may have a right to obtain relief against a commenter in a defamation proceeding that relates to a comment posted on a page of a social media service provider by the commenter, then:

  • the complainant can make a complaint to the social media service;
  • if the comment was made in Australia (according to the geo-location of the commenter), the social media service must, within 72 hours of the complaint:
    • notify the commenter that a defamation complaint has been made;
    • confirm to the complainant that the commenter has been so informed; and
    • provide the complainant with the country location data of the commenter;
  • the social media service may then remove the comment from the page with the consent of the commenter. No time frame is provided for this step;
  • the social media service must then notify the complainant of the outcome of the complaint within 72 hours of that outcome occurring;
  • if the complainant is not satisfied with the outcome, they can request that the social media service provide the contact details of the commenter;
  • the social media service must then, within 72 hours of such a request, ask for the commenter's consent to disclosure of their contact details and, if consent is obtained, provide the details to the complainant;
  • if the social media service considers the complaint does not genuinely relate to the potential institution of defamation proceedings, the service is not required to take any action.

Commenter information disclosure orders

If the complainant cannot obtain the contact details of the commenter using the prescribed complaints process (for example if the commenter does not consent to their information being provided, or the social media service does not consider the complaint to be genuine), and the complainant reasonably believes that they may have a right to obtain relief, they can apply to the Court for an order that certain information be provided.

Certain criteria must be satisfied, and the Court can also refuse to make an order under the section if it considers that the disclosure of the commenter's information is likely to present a risk to the commenter's safety.

Analysis: is the Anti-Troll Bill really Anti-Troll?

The Anti-Troll Bill provides a good starting point for shifting liability for defamation to those who have the ability to control the impact of the wrongdoing, or to the wrongdoers themselves. There are, however, at least three major flaws in the scheme which, unless addressed, mean that it will have little impact on the status quo. These flaws are:

  • the 72-hour response period, with no prescribed timeframe for deleting a post;
  • the need to obtain the commenter's consent before their contact details can be provided to a complainant following a complaint; and
  • the fact that the Bill targets only comments that are defamatory, which is not simple to determine, leaving a range of "troll-like" internet behaviour outside its reach.

Seventy-two hours is a lifetime in the world of social media

In the space of 72 hours, a social media post can go "viral" - shares, likes and comments feed an algorithm designed to promote content that garners interest with a multiplying effect only truly known and understood by the social media service itself.

Under the prescribed complaints scheme, if a defamatory comment is made on a viral post it is possible that hundreds, thousands, even millions of people could see it before the commenter has even received the notice that a complainant has made a complaint.

In fact, after 72 hours the wave of attention that a social media post receives may already be passing, and if a comment is deleted at that stage, it is unlikely that many new users will see it anyway.

Seventy-two hours is simply too long to leave a defamatory post published online while the social media service assesses the complaint, notifies the relevant parties and seeks consent to delete it. Further, the Anti-Troll Bill does not prescribe a timeframe for deleting a post - without a timeframe for this step, the complaint reporting system is meaningless.

The difficulty with deleting a comment that is the subject of a complaint is, of course, that it may hamper free speech if every complaint results in automatic deletion - one could simply complain about legitimate but undesirable comments to have them swiftly removed, never to be seen again. So where is the balance?

We propose that a comment that is the subject of a complaint be immediately "hidden" while the complaint is being dealt with, to be either "unhidden" or permanently deleted following the outcome of the complaint.

Commenters are unlikely to give consent to disclose their contact details

The point at which the scheme lacks teeth is the requirement for a commenter's consent before their personal details can be disclosed in response to a complaint. The scheme reflects the requirements of Australian Privacy Principle (APP) 6.1, which provides that personal information cannot be used or disclosed for a purpose other than the original purpose of collection unless the individual consents or APP 6.2 or 6.3 applies. Here, because disclosure occurs only with the individual's consent, the scheme would be APP compliant.

However, there is a simpler approach which would also satisfy the Australian Privacy Principles. APP 6.2(b) permits the use or disclosure of personal information for reasons unconnected with the original purpose of collection if "the use or disclosure of the information is required or authorised under an Australian law". The Anti-Troll Bill (if enacted) could circumvent the consent requirement of APP 6.1 by simply authorising social media services to provide relevant personal information about the commenter in prescribed circumstances. We consider that the Bill should utilise this provision and provide a balanced scheme for the provision of personal information when certain criteria are met.

Online trolls are online trolls because they can act anonymously. The cloak of anonymity is arguably the main driver for people to go online to make certain comments that they are unlikely to say in the harsh light of day. It is therefore improbable that, when asked nicely by a social media service, they will voluntarily uncloak themselves to the very person they have defamed (whether intentionally or not).

As stated above, we propose that a comment that is the subject of a complaint becomes "hidden" while the complaint is being dealt with. This means that the comment is not deleted, but it is temporarily not visible to anyone viewing the post, and can later be unhidden and made visible again.

We also propose that such a comment should only become "unhidden" if the commenter consents to the complainant being provided with the commenter's contact information or the complainant withdraws their complaint. At this point, the complainant will then have an ascertainable publisher to pursue in the Courts for defamation. Alternatively, the commenter could refuse to provide their details, and the hidden comment would then be deleted, minimising its potential to harm the complainant.

Defamation is just one aspect of a troll's online activity

The Anti-Troll Bill focuses on defamatory comments, just one aspect of the undesirable online activity a troll might engage in. As was recently reported in the Sydney Morning Herald, NSW District Court Judge Judith Gibson, a leading defamation jurist, has criticised the Bill, stating "the first problem is that if this is aimed at stopping people who [send] anonymous insults ... it's not going to work". This is because abusive or nasty comments are not of themselves defamatory; the comment must also lower a person's reputation. Determining whether or not a comment is defamatory is not a simple task, particularly without context on an online complaints form, yet the Anti-Troll Bill requires complainants, social media services and commenters to make that assessment in a time-sensitive environment.

Further concerns

Another concern with respect to the proposed scheme is that, if account holders are exempt from liability as a publisher, they have no incentive to monitor or control the comments on their posts.

Whilst the Voller decision can work unfairly in some situations, some degree of exposure and responsibility for account holders may prove effective in reducing defamatory responses. Some social media posts are inherently provocative and likely to trigger comments from across the spectrum - is it really so bad if the comments sections on such posts are closed? If account holders are exempt from liability as publishers then, as the High Court observed in Voller, they will likely have an incentive (through likes, shares, and comments) to let a post go "viral" no matter what is being said in the comments section.

We propose that the exemption should not apply if it can be established that the account holder knowingly allowed comments likely to be defamatory to continue to be posted.

Further, the Voller case involved many online defamatory comments, presumably by different commenters. The task of reporting each and every commenter is a heavy burden for complainants, particularly given that the algorithm used by the social media service can cause such comments to "snowball" or be amplified - with commenters "jumping on the bandwagon" of previous comments.

Our final comment

The Anti-Troll Bill in its current form will do little to deter or redress troll activity online. The prescribed complaints process will result in few, if any, trolls being unmasked, and moves too slowly to reduce the impact of defamatory comments. Social media services are unlikely to assess each complaint on a case-by-case basis, meaning that arbitrary assessments of complaints are likely and complainants could be left with no option other than to seek an expensive court order to obtain information identifying the commenter. The Federal Government has stated that the Anti-Troll Bill will soon be open for submissions via its "online consultation hub", and early submissions can be made to the Defamation Taskforce at Defamation@ag.gov.au.
