Colombia has recently introduced a Bill to regulate artificial intelligence, following the lead of other countries in the region. We take a look at the key provisions of the Bill.
The Bill was introduced on 28 July 2025 and seeks to establish a comprehensive regulatory framework for the development, use, and governance of artificial intelligence (AI) in Colombia. Its main objective is to ensure the ethical, responsible, competitive, and innovative deployment of AI, protecting fundamental rights and promoting sustainable development. It draws on existing frameworks in Europe and the wider Latin America region. In this article, we examine some of the key principles that underpin the proposed legislation.
Application
The law would apply to all individuals and entities, public or private, involved in any stage of the AI systems lifecycle—design, development, implementation, operation, commercialisation, or use—when the system is developed, used, or has effects in Colombia, or employs data of Colombian origin.
Guiding principles
The regulation of AI under the Bill is based on principles such as human oversight, diversity and inclusion, social and environmental well-being, ethics, transparency, responsible innovation, privacy, proportionality, respect for fundamental rights, environmental protection, economic development, technological sovereignty, adaptability, multi-stakeholder collaboration, and free competition.
Risk-based classification
AI systems are classified into four risk categories:
- Critical risk: Systems that may seriously violate fundamental rights or the public interest (e.g. subliminal manipulation, social profiling, remote biometric identification by authorities). Their use is prohibited except under strict conditions.
- High risk: Systems with a high potential impact on health, safety, or rights (e.g. education, employment, public services, justice). They require compliance assessments, risk management, transparency, and human oversight.
- Transparency obligations: Systems that interact with people or generate/manipulate realistic content (e.g. deepfakes) must clearly disclose their artificial nature.
- Minimal or zero risk: Systems that fall outside the above categories. Developers and users are encouraged to adopt good practices on a voluntary basis.
Governance and oversight
The Ministry of Science, Technology, and Innovation has been assigned as the national authority for AI under the Bill and is responsible for technical guidance, coordination, and oversight. A multi-sector committee and a National Advisory Council of AI Experts will support governance and policy development.
Innovation and education
The law encourages research, innovation, and the creation of regulatory sandboxes for safe experimentation. It mandates the integration of AI education at all levels, with a focus on inclusion, regional development, and workforce transition.
Responsibilities and sanctions
Clear responsibilities are established for AI developers, suppliers, implementers, and users. Administrative and, in the future, criminal sanctions are envisaged for non-compliance, prioritising corrective and educational measures over punitive ones.
Takeaway for employers
Colombia's new AI Bill introduces a comprehensive, risk-based framework aimed at ensuring ethical and responsible AI use, with strong oversight and clear obligations for all stakeholders. It follows other recent AI developments in the country, including an amendment to the Penal Code and preliminary discussions regarding a new, separate AI law. The Penal Code amendment introduces a specific aggravating offence for cases of identity fraud committed through the use of AI, including the creation of deepfakes and digital identity theft.
Like other governments in the wider region, the Colombian government is clearly focused on regulating AI. While the Bill is still in its early legislative stages, employers in Colombia should start preparing for compliance. Many will already have started mapping out which AI systems are used within their organisation, classifying those systems by risk level, and stopping the use of any that carry an unacceptable level of risk. As the regulatory landscape evolves, early preparation will be key to ensuring both legal compliance and responsible innovation.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.