Commission Unveils Practical Tools for a Smooth AI Act Transition
The European Commission ("Commission") has introduced two practical tools under the Artificial Intelligence Act ("AI Act") to facilitate a seamless transition into the EU's new AI regulatory framework. These tools are the AI Act Service Desk ("Service Desk") and the Single Information Platform ("Platform").
The Service Desk, in coordination with the AI Office, provides tailored guidance and answers to stakeholder queries, while the Platform centralises essential materials and interactive resources. The Platform offers digital tools, including a Compliance Checker to help users determine whether the Act applies to them and what steps are required for compliance. It also offers an intuitive AI Act Explorer for navigating the Act's structure, as well as an online form for submitting questions directly to the Service Desk.
The Commission will continue assisting businesses and public authorities throughout the phased implementation of the AI Act, which will become fully applicable on 2 August 2027.
US Envoy Calls for Unified Transatlantic Approach to AI Regulation
US Ambassador to the EU Andrew Puzder expressed support for a joint transatlantic framework on artificial intelligence, suggesting that the EU and the United States should "come up with something unified" when regulating AI.
While warning against overregulation that could stifle technological progress, Puzder pointed to areas of convergence between the two sides, including ongoing cooperation on competition enforcement in the tech sector. The remarks signal a potential shift toward closer EU–US alignment on AI governance, even amid ongoing differences over digital taxation and content regulation.
Italy Adopts National Artificial Intelligence Law
On 10 October 2025, the Italian Artificial Intelligence Law ("Italian AI Law") entered into force, expanding upon the EU Artificial Intelligence Act to account for national implementation needs and sectoral distinctions.
It establishes measures to ensure the safe, transparent, and ethical use of AI across sectors such as healthcare, education, and justice. It also defines responsibilities for public authorities and private developers, sets safeguards for minors, and prohibits the undisclosed use of deepfake technologies. The law also drops an earlier proposal that would have required the localisation of servers used for AI systems in the public sector.
Italy's initiative aligns national oversight with EU-wide AI standards while addressing local priorities such as online misinformation, creative rights, and workplace transparency. Employers are required to inform and train employees about AI systems used in the workplace, promoting transparency and digital awareness. In the field of copyright, works created with the aid of AI tools may be protected if they reflect human intellectual input, while text and data mining of online materials or databases through AI models is permitted, subject to the rights holder's opt-out.
California Moves Forward with New AI Transparency Rules
California is moving ahead with legislation aimed at increasing transparency and accountability in artificial intelligence systems. The state has introduced the California AI Transparency Act, which requires major platforms, including social media, messaging, and search services, to label AI-generated content and to allow provenance metadata to be embedded in digital media.
Alongside this, Senate Bill 53 sets out safety and transparency standards for high-impact AI systems, adds protections for minors and whistleblowers, and confirms that developers remain responsible for harmful algorithmic outcomes.
US Judges Admit AI Tools Caused Factual Errors in Court Decisions
Two federal judges in the United States have acknowledged that AI tools contributed to factual errors in recent court rulings. Their admissions came in response to an inquiry by Senate Judiciary Committee Chair Chuck Grassley, who had questioned whether judges were using generative AI in drafting opinions. Judge Henry Wingate of Mississippi reported that a clerk had relied on Perplexity AI to prepare a preliminary draft, which was issued without adequate human review. Both judges have since introduced internal review measures and formal AI-use policies. Grassley welcomed their transparency but urged the judiciary to establish stronger guidance on AI use to ensure that generative tools do not compromise litigants' rights or the fairness of proceedings. The incident adds to growing concerns over unverified AI content in US legal practice, where several attorneys have already faced sanctions for similar errors.
US Senators Propose Bill to Restrict Minors' Access to AI Chatbots
The GUARD Act, a bipartisan bill aimed at reducing children's exposure to AI chatbots, was introduced by US Senators Josh Hawley and Richard Blumenthal. The proposal would ban access to AI chatbots entirely for users under 13 and restrict access for those aged 13 to 17 unless verified parental consent is obtained. It would also require companies to implement robust age verification, whether through government-issued identification or other means. The bill would make it a crime for AI systems to promote self-harm or sexually explicit content to minors, with fines of up to $100,000. It would also require chatbots to periodically remind users that they are not human and cannot provide professional advice. The bill follows a series of lawsuits linking AI chatbots to mental health risks among young people.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.