EU Balances Safety and Privacy
The EU is developing a new regulation aimed at identifying users under the age of 18. The initiative seeks to enhance online safety by requiring age verification for social media and other digital services. Proposed methods include the use of biometric data, uploads of identity documents, and AI-driven verification processes.
Critics are concerned about how user data will be stored, protected, and potentially misused. There are also questions about the feasibility and practicality of enforcing such requirements across diverse platforms.
The EU therefore faces a significant challenge in balancing improved online safety against the protection of user privacy. How that balance is struck will be critical to implementation and could shape the future of young people's digital interactions.
New Irish Online Safety Code
Ireland has introduced an online safety code for video-sharing platforms. The code prohibits material encouraging cyberbullying, child abuse, self-harm, terrorism, and racism, while requiring platforms to shield children from explicit content and to provide parental controls. Effective next month, the code applies to all major platforms with EU headquarters in Ireland, including TikTok and Facebook, and carries fines of up to EUR 20 million (or 10% of annual revenue) for non-compliance. Part of Ireland's broader Online Safety Framework, the code complements the EU's Digital Services Act.
New Tool Boosts AI Compliance in EU
European companies have been slow to deploy AI models due to the strict requirements of the EU AI Act, which came into effect in August. To simplify compliance, ETH Zurich, INSAIT, and LatticeFlow AI have introduced a new "LLM Checker," which assigns AI models compliance scores across categories such as cybersecurity and privacy. Initial findings indicate that major AI models from companies such as OpenAI and Meta, while largely meeting content safety standards, often fall short on discrimination and cybersecurity requirements. The European Commission has welcomed the tool as a first step toward applying technical guidelines for AI Act compliance while it develops a Code of Practice to facilitate enforcement for AI providers.
U.S. Sues TikTok
A coalition of 14 attorneys general and the District of Columbia has filed lawsuits against TikTok, alleging that it harms young people's mental health by exploiting addictive design features and targeted algorithms. The suits highlight TikTok's use of endless scrolling, push notifications, and filters to keep users engaged, features that have been linked to anxiety and body dysmorphia. The lawsuits additionally allege that TikTok operates an unlicensed "virtual economy" through in-app purchases and exploits teens for financial gain in its LIVE feature, which often appears to lack age controls. The litigation, part of a broader national reckoning with tech companies, has been likened to earlier landmark legal actions against the tobacco and pharmaceutical industries. TikTok faces further pressure from a potential U.S. ban if ByteDance, its parent company, fails to divest by mid-January.
First-Time AI Memorandum Publication
President Biden issued the first National Security Memorandum on AI, which focuses on securing U.S. leadership in AI, harnessing AI for national security, and building international AI governance. It mandates coordination among federal agencies, such as the Department of Defense and the Department of Homeland Security, to secure AI infrastructure, manage supply chain risks, and establish standards for safe AI development. The memorandum also highlights collaboration with the private sector on AI testing and on practices to mitigate cybersecurity and safety risks, in line with ongoing AI regulations and the country's commitment to responsible AI advancement.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.