Introduction
On 15 April 2025, the Digital Policy Office ("DPO") of Hong Kong released the Generative Artificial Intelligence Technical and Application Guideline (the "Guideline"). The Guideline offers practical advice for technology developers, service providers, and users on how to apply generative artificial intelligence ("AI") safely. It covers key topics such as the scope of AI applications, potential risks, and governance principles, including technical issues that must be managed, such as data leakage, model bias, and output errors.
This Guideline supports Hong Kong's goal to become a responsible AI centre in the region. It aims to promote innovation while implementing necessary protections.
Understanding the Guideline's framework
The Guideline establishes a governance framework for generative AI designed to promote its effective and beneficial use. Structured around five key dimensions – personal data privacy, intellectual property, crime prevention, reliability and trustworthiness, and system security – the framework encourages stakeholders to define their scope of action and assess risks across these areas.
A central feature of this framework is its risk-based approach, which classifies AI systems into four tiers – unacceptable, high, limited, and low risk – each subject to tailored regulatory measures ranging from outright prohibition and legal consequences for the most severe risks, to basic self-certification for low-risk applications.
This tiered structure ensures proportionality, balancing innovation with user and societal protection. The Guideline also aligns with international principles – including those of the Organisation for Economic Co-operation and Development (OECD) and the European Union (EU) AI Act – emphasizing security, transparency, accuracy, reliability, fairness, and objectivity.
Notably, Hong Kong's framework prioritizes practicality and efficiency, urging developers and providers to enhance generative AI's utility through optimized algorithms, refined model architectures, and diversified applications. These efforts help ensure that AI-generated content is high-quality, efficient, and aligned with user intentions across various tasks and industries.
Practical guidance for stakeholders
Technology developers – including those commissioning development or determining how the technology is used – are recommended to establish internal specialized teams covering data, quality control, and compliance to ensure both security and regulatory adherence. Developers are advised to conduct comprehensive testing prior to deployment, carry out regular compliance reviews and assessments, and implement advanced anonymization and encryption techniques when handling personal data. Furthermore, they are encouraged to prioritize fairness through the use of diverse datasets and algorithmic optimization to minimize biases and mitigate hallucinations.
Service providers – including platform operators enhancing existing technologies – are advised to establish a responsible AI service framework. This includes implementing content governance to prevent illegal or inappropriate outputs, providing user-friendly fact-checking tools, and ensuring clear transparency regarding AI-generated content. Additionally, providers should conduct risk assessments, pilot testing, and independent audits, while complying with data privacy regulations and safeguarding against threats such as data poisoning.
For service users – encompassing both individuals and organizations that create or distribute AI-generated content – the Guideline also outlines several key responsibilities. Users should clearly indicate when AI has been involved in content creation or decision-making to maintain transparency. It is also important to proactively verify the accuracy and suitability of AI-generated outputs before using or sharing them. Users are further advised to develop an understanding of AI limitations – such as potential biases, errors, or hallucinations – in order to make informed decisions. Lastly, the Guideline recommends selecting services with robust privacy policies and exercising caution to avoid unnecessary sharing of sensitive information.
Hong Kong's position in global AI governance
While the Guideline does not carry the force of law, it serves as an important reminder that the deployment and use of AI entail significant legal and ethical risks. Given that generative AI and other AI technologies are rapidly evolving and have widespread applications, their potential dangers are inherently difficult to predict. Nevertheless, by articulating key principles for responsible AI use, the Guideline offers a practical foundation for mitigating such risks.
Takeaways
The Guideline establishes a robust institutional framework that supports Hong Kong's participation in global AI governance. It underscores Hong Kong's proactive stance in addressing the critical challenges associated with the adoption of new and advanced technologies – such as data privacy, ethical risks, transparency, misinformation, and algorithmic accountability – amid the swift progression of generative AI. To ensure the Guideline remains effective and influential as AI continues to advance, ongoing evaluation, international collaboration, and active public engagement will be essential. Companies that are using, or starting to use, generative AI can refer to the Guideline when establishing internal policies on AI adoption and risk assessment, ensuring that they are not left behind in the AI race while still maintaining proper human oversight and governance.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.