AI Update (May 30, 2024)


United Nations Adopts First Resolution on AI

The UN General Assembly unanimously passed its inaugural resolution concerning AI on March 21, 2024. The stated goals of this resolution are to promote "safe, secure and trustworthy artificial intelligence systems for sustainable development". The resolution, spearheaded by the United States, garnered co-sponsorship from 123 member states (including Canada) and was embraced by all 193 UN member states.

Key tenets of the resolution include advocating for the development and utilization of AI in a manner that prioritizes human welfare and privacy rights, as well as promoting international cooperation to ensure equitable access to AI technologies.

Acknowledging AI's potential to address global challenges and advance sustainable development objectives, the resolution urges member states and stakeholders across various sectors to establish regulatory and governance frameworks that uphold the safety, security, and trustworthiness of AI; to work towards bridging the digital divide among nations; and to formulate strategies that give developing countries inclusive and equitable access to trustworthy AI. UN member states and international stakeholders are also asked to refrain from endorsing AI systems that contravene international human rights law or pose risks to human rights. Further goals include cultivating an environment conducive to AI innovation that addresses global challenges and promotes sustainable development, and exchanging best practices in data governance to facilitate trusted cross-border data flows for AI applications.

This resolution coincides with the European Parliament's recent approval of the Artificial Intelligence Act ("AI Act") on March 13, 2024, which stands as the world's first comprehensive legal framework for AI and aligns with many principles outlined in the UN resolution (previously discussed in the March 2024 AI Update).

Though not binding in domestic law, the resolution's adoption is worth noting for Canadian charities, not-for-profits and other organizations. It sets out the international community's expectations for users and developers of AI systems and signals global trends in how these systems are viewed. Adherence to the resolution would demonstrate a strong commitment to best practices when developing an AI policy.

American Government Produces Report on Weaknesses in AI-Related Cybersecurity

The National Institute of Standards and Technology ("NIST") is a branch of the U.S. Department of Commerce dedicated to advancing American innovation and industrial competitiveness through various programs focused on physical sciences, including nanoscale science and technology, engineering, information technology, neutron research, material measurement, and physical measurement.

On January 4, 2024, NIST published a study titled "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations", which sheds light on vulnerabilities in AI and machine learning systems. The publication aims to assist AI developers and users in understanding potential attacks and mitigation strategies, and underscores the absence of foolproof defenses against such threats, emphasizing the need for ongoing improvement in defense mechanisms.

AI systems are prevalent in various aspects of modern society, relying on extensive data for training purposes. However, the integrity of this data is often compromised, leading to undesirable behaviors in AI systems. For instance, chatbots may learn to produce offensive language when exposed to malicious inputs.

Due to the sheer volume of data used in AI training, monitoring for and filtering out malicious inputs is challenging. The report outlines four primary types of attacks—evasion, poisoning, privacy, and abuse—and categorizes them based on the attacker's goals and capabilities.

Evasion attacks occur post-deployment and aim to manipulate inputs to alter the system's response. Poisoning attacks target the training phase by introducing corrupted data. Privacy attacks involve extracting sensitive information from deployed AI systems, while abuse attacks entail feeding incorrect information to the AI from legitimate but compromised sources.
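To make the poisoning category concrete, the following is a minimal, hypothetical sketch (not drawn from the NIST report): a toy nearest-centroid classifier is trained on labelled 2D points, and an attacker who can inject a handful of mislabeled points into the training data shifts a class centroid enough to flip the model's prediction on an input the clean model classified correctly. The data, labels, and classifier here are invented purely for illustration.

```python
# Hypothetical illustration of a data-poisoning attack on a toy
# nearest-centroid classifier (example data invented for this sketch).

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def train(data):
    # data: list of ((x, y), label) pairs; returns one centroid per class
    classes = {}
    for point, label in data:
        classes.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in classes.items()}

def predict(model, point):
    # classify by nearest class centroid (squared Euclidean distance)
    def dist(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(model, key=lambda label: dist(model[label]))

# Clean training set: "spam" clustered near (0, 0), "ham" near (10, 10).
clean = [((0, 0), "spam"), ((1, 1), "spam"), ((0, 1), "spam"),
         ((10, 10), "ham"), ((9, 10), "ham"), ((10, 9), "ham")]

# Poisoned set: the attacker injects three "ham"-labelled points near the
# spam region, dragging the "ham" centroid toward it.
poisoned = clean + [((3, 3), "ham"), ((4, 3), "ham"), ((3, 4), "ham")]

query = (4, 4)  # an input the clean model classifies as spam
print(predict(train(clean), query))     # prints "spam"
print(predict(train(poisoned), query))  # prints "ham" — the poison worked
```

Only three corrupted training points were needed to change the outcome, which is the core difficulty the report highlights: at the scale of real training sets, spotting such inputs by inspection is impractical.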

Despite efforts to mitigate these attacks, the report acknowledges the incomplete nature of existing defense mechanisms. It stresses the importance of awareness regarding these limitations for developers and organizations utilizing AI technology.

The authors emphasized the ongoing vulnerability of AI systems to attacks, highlighting the need for continued research and improvement in defense strategies. They caution against overestimating the current state of AI security, emphasizing the complexity of the challenges involved.

Charities and NFPs are advised to exercise caution when implementing AI in their operations. While AI technology presents significant opportunities for efficiency and growth, it is still very much a developing field, and unfortunately, malicious actors are often ahead of the curve in discovering its potential power.

Read the May 2024 Charity & NFP Law Update


