ARTICLE
20 September 2024

International: Legal Standards Of AI And Human Digital Rights

Ana Carolina Cesar and Lara Salgueiro, from KLA Advogados, explore the significance of the Universal Declaration of Human Rights (UDHR) and its continued relevance in the age of artificial intelligence (AI). As AI technologies rapidly evolve, they emphasize the urgent need for robust regulations to ensure these advancements align with human rights principles, promoting dignity, equality, and justice for all.

On December 10, 1948, the United Nations (UN) promulgated the UDHR in response to the atrocities that had marked the preceding decades and threatened the very existence of humankind. This historic document emerged as a reaction to the deep scars left by World War II, when the world witnessed genocides, torture, and other forms of barbarity on an unprecedented scale.

The UDHR not only established a common standard of rights and freedoms for all people and nations but also symbolized a global commitment to human dignity, equality, and justice. Its 30 articles have enshrined it as a fundamental pillar of human existence itself.

Although the UDHR is one of the most significant legal milestones in history, it is fundamental to recognize that the fight to preserve human rights is ongoing, as those rights continue to be threatened and violated today, not only by wars and conflicts but also by phenomena that are generally less overtly harmful yet equally impactful, such as disruptive technological inventions that radically transform the structure and paradigms of society. AI systems fall precisely into this latter category, a topic on which some considerations will be made throughout this article.

Before delving into discussions about the impacts - whether beneficial or not - of AI systems in the contemporary world, it is essential to establish a clear definition of what this term encompasses. According to the Organisation for Economic Co-operation and Development (OECD), 'An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.'

Due to their potential and precision, AI technologies have rapidly established themselves as a tool of extreme relevance across various contexts and industries. In a short period, we have come to rely on these systems to perform a wide range of tasks, both personal and professional. Just as Google Search quickly became an indispensable tool in our daily lives - so much so that it is hard to imagine life without it - AI has made a similar impact. For instance, ChatGPT is recognized as one of the fastest-growing services in history, reaching 100 million weekly users.

AI's rapid spread and evolution are linked to massive investments in the sector, which drive significant advancements and have resulted in a proliferation of applications, devices, and services that use AI to improve and simplify various areas of human life. From virtual assistants on smartphones to autonomous vehicles, AI permeates many facets of our daily interactions.

However, its immense potential comes with risks and concerns. While AI has numerous beneficial applications, its use and development also raise significant issues related to bias, discrimination, privacy, security, and ethics. Mass surveillance systems, facial recognition algorithms, deepfakes, and the manipulation of personal data are just a few of the topics guiding robust discussions worldwide and manifesting as significant threats to individual and collective rights.

Mass surveillance, for example, can easily be employed as a tool for social control, restricting freedoms and promoting censorship. Facial recognition algorithms, while having recognized utility in security contexts, can also be used in discriminatory ways, exacerbating inequalities and harming minorities. Deepfakes, in turn, have the potential to destabilize societies by creating and spreading false information, threatening the integrity of democratic processes and public trust. The manipulation of personal data without a legitimate purpose also raises serious questions regarding data sovereignty and user autonomy.

While the omnipresence of technology is undeniable, it is important to recognize that its potentially harmful effects are not inevitable: they can and should be controlled through effective regulations that promote and ensure the ethical and responsible development and use of technology. In this context, especially from a human rights protection perspective, certain aspects deserve particular attention.

International collaboration is essential to establish a universal baseline for AI regulations that all nations must observe. The transnational nature of AI means that the challenges and solutions related to this technology often transcend geographical borders. Therefore, cooperation between countries and international organizations is imperative to harmonize regulations, implement best practices, and address common challenges in a coordinated way. Treaties and international agreements can help ensure that ethical principles and human rights are respected worldwide.

The EU Artificial Intelligence Act (EU AI Act) is one of the most prominent regulations in the world to set a uniform legal framework for 'the development, the placing on the market, the putting into service and the use of AI systems in the Union, in accordance with Union values, to promote the uptake of human-centric and trustworthy AI while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the Charter), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation.'

According to the EU AI Act, the human-centric approach must be observed in order to mitigate the risks and harms to public interests and fundamental rights. Recital 6 of the EU AI Act declares that AI systems 'should serve as a tool for people, with the ultimate aim of increasing human well-being.'

Also, in this landscape, some international guidelines and frameworks have already gained prominence. A notable example is the OECD AI Principles, which aim to guide AI actors in their efforts to develop trustworthy AI and provide policymakers with recommendations for effective AI policies. Currently, 47 countries have adopted these principles.

Another relevant initiative in this scenario is ISO/IEC 42001, an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system within organizations. The standard addresses a wide range of issues, from AI governance to data security and privacy, and emphasizes the need to continuously assess risks throughout the AI systems' lifecycle, ensuring that technologies are developed and implemented ethically and securely. ISO/IEC 42001 also promotes transparency and accountability, providing a solid foundation for organizations to adopt responsible AI practices.

The AI Risk Management Framework (AI RMF) from the National Institute of Standards and Technology (NIST) is another relevant initiative. This voluntary framework was created to assist organizations in managing the risks associated with the development and use of AI systems. Organized around four core functions - Govern, Map, Measure, and Manage - the NIST AI RMF proposes a structured approach to identifying, assessing, and mitigating risks, ensuring that AI systems are designed and implemented with a focus on security and ethics. The framework also highlights the importance of ongoing stakeholder engagement and of adapting to technological and regulatory changes, ensuring that risk management practices remain relevant and effective.
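To make this kind of structured approach more tangible, the sketch below shows how an organization might keep a simple AI risk register loosely organized around the AI RMF's four core functions. It is a minimal illustration in Python, not an artifact prescribed by NIST: the field names, scoring scale, and example risks are all assumptions invented for this example.

```python
from dataclasses import dataclass
from enum import Enum

# The four core functions of the NIST AI RMF. How an organization maps
# its risks onto them is an internal design choice, not mandated by NIST.
class RmfFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

@dataclass
class AiRisk:
    """One entry in a hypothetical AI risk register."""
    description: str
    function: RmfFunction
    likelihood: int  # 1 (rare) to 5 (almost certain) - illustrative scale
    impact: int      # 1 (negligible) to 5 (severe) - illustrative scale
    mitigation: str

    @property
    def severity(self) -> int:
        # Simple likelihood x impact score, a common heuristic for
        # deciding which risks to address first.
        return self.likelihood * self.impact

# Hypothetical entries for a customer-facing AI deployment.
register = [
    AiRisk("Higher error rates for some demographic groups",
           RmfFunction.MEASURE, likelihood=4, impact=5,
           mitigation="Disaggregated accuracy testing before each release"),
    AiRisk("No clear owner for AI incident response",
           RmfFunction.GOVERN, likelihood=3, impact=4,
           mitigation="Assign an accountable AI risk officer"),
]

# Review risks from most to least severe.
for risk in sorted(register, key=lambda r: r.severity, reverse=True):
    print(f"[{risk.function.value}] score={risk.severity}: {risk.description}")
```

The value of such a register lies less in the scoring arithmetic than in the discipline it imposes: risks are written down, assigned to a function, and revisited throughout the system's lifecycle, which is precisely the continuous practice the framework encourages.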

It is worth noting that the Council of Europe recently opened for signature the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law (the Framework Convention), which the EU has signed. In general, the Framework Convention 'aims to ensure that activities throughout the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy, and the rule of law, while also fostering technological progress and innovation.' The Framework Convention covers the use of AI systems by public authorities - including private actors acting on their behalf - as well as by other private actors.

Activities within the lifecycle of AI systems must adhere to fundamental principles, including human dignity and individual autonomy, equality and non-discrimination, respect for privacy and personal data protection, accountability and responsibility, and safe innovation, among others. It is also important to note that the Framework Convention establishes the possibility for the authorities to ban or impose moratoria on certain applications of AI systems.

In addition to international collaboration for policy development and implementation, private sector involvement is also necessary. Companies developing and applying AI technologies should prioritize trustworthiness, transparency, security, explainability, and accountability.

In this sense, transparency is fundamental for building public trust, ensuring that everyone involved in this complex ecosystem understands how algorithms work and how their data is used and protected. Similarly, security must be a priority, with the implementation of stringent measures to protect AI systems from cyber threats and malicious use. The explainability of algorithms is also a critical point, ensuring that decisions made by AI are understandable and justifiable, especially in sensitive areas like healthcare, finance, and the allocation of benefits. Finally, accountability provides an essential mechanism for assigning responsibility in cases of failures or abuses, as well as for ensuring that companies adhere to high ethical standards.
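To illustrate what such scrutiny can look like at its simplest, the Python sketch below compares a model's error rates across demographic groups, a common first step in checking a system for discriminatory impact. The records, group labels, and warning threshold are all fabricated assumptions for the example; real bias audits are considerably more involved.

```python
# Minimal sketch of a disaggregated error-rate check: how often are a
# model's decisions wrong for each demographic group? All data below is
# fabricated for illustration only.

# Each record: (group, model_decision, correct_decision)
decisions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

def error_rate(records):
    errors = sum(1 for _, predicted, actual in records if predicted != actual)
    return errors / len(records)

rates = {
    group: error_rate([r for r in decisions if r[0] == group])
    for group in sorted({g for g, _, _ in decisions})
}

for group, rate in rates.items():
    print(f"{group}: error rate = {rate:.0%}")

# A large gap between groups (here an arbitrary 25-point threshold) is a
# signal to investigate the training data and model design, not proof of
# discrimination by itself.
gap = max(rates.values()) - min(rates.values())
if gap > 0.25:
    print(f"Warning: {gap:.0%} gap in error rates across groups - review for bias.")
```

Even a check this simple makes the accountability point concrete: once a disparity is measured and documented, someone within the organization must decide, and answer for, whether the system is fit to deploy.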

In summary, as AI continues to expand and influence various aspects of our lives, it is a priority to adopt effective measures to mitigate its risks and protect human rights. Through robust regulations, transparency, education, and global collaboration, we can ensure that AI becomes a force for good, promoting progress and the well-being of all humanity.

Regardless of the context in which AI is applied, respect for human rights must be a fundamental and non-negotiable principle. AI technologies should be developed and implemented in a manner that guarantees dignity, autonomy, privacy, and fairness for all individuals. This includes creating systems that are fair, consistent with obligations to protect human rights, and free of biases that could perpetuate inequalities and human rights violations. Only with a firm commitment to these values can we harness AI's full potential in an ethical and responsible manner.

Originally published by Data Guidance.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
