The CDEI has published a report on the role of AI in addressing misinformation on social media platforms. The report details the findings of an expert forum the CDEI convened last year, with representatives from platforms, fact-checking organisations, media groups, and academia.
Key findings of the report include:
- algorithms enable content to be moderated at a speed and scale that would not be possible for human moderators operating alone;
- the onset of COVID-19 and resulting lockdown led to a reduction in the moderation workforce, just as the volume of misinformation was rising; platforms responded by relying on automated content decisions to a greater extent, without significant human oversight;
- increased reliance on algorithms led to substantially more content being incorrectly identified as misinformation; participants noted that algorithms still fall far short of human moderators in distinguishing harmful from benign content, partly because misinformation is often subtle and context-dependent and therefore difficult for automated systems to analyse; this is particularly true of misinformation relating to new phenomena (such as COVID-19);
- platforms have issued reassurances that the increased reliance on algorithms is only temporary, and that human moderators will continue to be at the core of their processes;
- platforms use a range of policies and approaches to address misinformation (including removing content, downranking content, applying fact-checking labels, increasing friction in the user experience, and promoting truthful and authoritative information); however, a lack of evidence hinders understanding of how effective these methods are;
- while platforms have begun to disclose more information about how they deal with harmful content, for example via transparency reports, they could go further; transparency reports often provide limited detail across important areas including content policies, content moderation processes, the role of algorithms in moderation and design choices, and the impact of content decisions; and
- platforms emphasised the importance of having clear guidance from the Government on the types of information they should be disclosing, how often and to whom; as the new online harms regulator, Ofcom is well positioned to set new benchmarks for clear and consistent transparency reporting.
The CDEI says that several interventions worthy of investigation could be undertaken immediately to help mitigate misinformation: conducting more research into the efficacy of moderation tools; experimenting with new moderation methods; increasing the transparency of platform moderation policies; and investing more in supporting authoritative content.