IR Global member Lim Tat of Aequitas Law LLP appeared in our latest publication, 'Meet the Members Asia-Pacific and The Middle East & Africa', dissecting advancements in AI and how they can be harnessed to analyse and process vast amounts of data.

Read it below.

The advancement of artificial intelligence (AI) is transforming the landscape of data privacy and challenging the foundations of traditional data protection. This is likely to lead to a re-evaluation of the effectiveness of current data protection methods.

Harnessing AI's capability to analyse and process vast amounts of data should ideally lead to new data protection techniques: algorithms that detect patterns and anomalies in data, and then identify potential breaches and vulnerabilities in real time.
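As a simple illustration of what such anomaly detection can look like in practice, the sketch below flags unusual data-access events with an unsupervised model. It uses scikit-learn's IsolationForest; the features, data, and thresholds are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch: flagging anomalous data-access events with an
# unsupervised model. Features and data are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-request features: [records_accessed, hour_of_day]
normal_traffic = np.column_stack([
    rng.poisson(20, 500),           # typical request sizes
    rng.integers(8, 18, 500),       # business hours
])
suspicious = np.array([[5000, 3]])  # bulk export at 3 a.m.

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# predict() returns -1 for anomalies, 1 for inliers
print(model.predict(suspicious))    # [-1] -> flag for review
```

In a real deployment, flagged events would feed into an incident-response process rather than being acted on automatically.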

But the rise of AI has also raised concerns about data privacy. AI algorithms can collect and analyse massive data samples, including personal and sensitive information, heightening concerns over privacy violations.

The use of AI in targeted advertising, facial recognition, and other applications can compromise individuals' privacy, fuelling concerns over surveillance and profiling.

ChatGPT, developed by OpenAI, has leapfrogged other apps such as Instagram and TikTok to become the fastest-growing web platform.

Since its launch, the chatbot has attracted over 100 million monthly active users. Its growth has led to discussions around the moral responsibility of the creators and administrators of such AI systems.

Specifically in data privacy and protection, pertinent questions include:

Who is accountable for the data privacy and protection aspects of AI systems?

How should we approach data privacy and protection issues concerning governance, risk management and compliance regulations when building and using AI applications?

As Singapore's digital economy continues to evolve, a trusted ecosystem where organisations can harness the power of technological innovations and where consumers have confidence in adopting AI is key. The goal is to strike a balance that encourages responsible and ethical AI adoption, fosters innovation, and promotes consumer protection.

The Framework

The Personal Data Protection Commission (PDPC) is the Singaporean government agency tasked with enforcing the Personal Data Protection Act (PDPA), Singapore's data privacy law.

In January 2020, the PDPC released a Model AI Governance Framework to help organisations using AI technologies govern their use of personal data and ensure that their AI systems are transparent, accountable, and fair.

The framework is intended to be a voluntary guide for companies working with AI technologies, and it aims to encourage their responsible and ethical use. It is based on the PDPC's belief that AI can bring many benefits to individuals and organisations, but only if it is used in a way that respects privacy, security, and other fundamental values.

The framework is organised around four key principles:

  1. Responsibility: Organisations using AI should be accountable for its development, deployment, and outcomes. They should have a clear understanding of the potential risks and benefits of their AI systems, and they should take steps to mitigate them.
  2. Explainability: Organisations should be able to explain how their AI systems work and how they make decisions.
  3. Fairness: Organisations should ensure that their AI systems are fair and do not discriminate against individuals or groups. They should take steps to identify and address any biases that may be present in the data or algorithms used by the systems.
  4. Ethics: Organisations should consider the ethical implications of their AI systems. They should be transparent about their use of personal data and should respect individuals' privacy and autonomy.

For each of these principles, the framework provides guidance on how organisations can implement it in practice, covering issues such as data management, algorithmic transparency, and human oversight of AI systems. For example, the framework suggests that organisations should establish clear governance structures for developing and using AI systems, and that they should conduct regular risk assessments.
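To make the fairness principle concrete, one common bias check compares outcome rates across demographic groups. The sketch below uses made-up decision data; the column names and the 0.8 rule-of-thumb threshold (the "four-fifths rule") are illustrative assumptions, not requirements of the framework.

```python
# Minimal sketch of a disparate-impact check on model decisions.
# Data, column names, and the 0.8 threshold are illustrative only.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   0],
})

# Approval rate per group
rates = decisions.groupby("group")["approved"].mean()

# Disparate-impact ratio: worst-off group vs. best-off group
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Potential bias: impact ratio {ratio:.2f} is below 0.8")
```

A check like this is only a starting point; the framework's emphasis on human oversight means flagged disparities should prompt investigation of the underlying data and model.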

A.I. Verify

In May 2022, Singapore's Infocomm Media Development Authority (IMDA) and PDPC launched A.I. Verify – the world's first AI Governance Testing Framework and Toolkit for companies aiming to demonstrate responsible AI in an objective and verifiable way.

A.I. Verify, currently a Minimum Viable Product (MVP), aims to promote transparency between companies and their stakeholders. Software developers and owners can verify the claimed performance of their AI systems against a set of principles through standardised tests.

A.I. Verify packages a set of open-source testing solutions, including a set of process checks, into a Toolkit for convenient self-assessment. The Toolkit will generate reports for developers and business partners, covering major areas affecting AI.
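A.I. Verify defines its own tests and report formats; purely to illustrate what verifying claimed performance through a standardised test can mean, the sketch below checks a model's measured accuracy against the accuracy its developer claims. Everything here, from the toy model to the claimed figure, is a hypothetical stand-in, not the A.I. Verify toolkit itself.

```python
# Purely illustrative: not the A.I. Verify API. A hypothetical
# standardised test comparing claimed vs. measured accuracy.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

claimed_accuracy = 0.85  # figure asserted by the developer
measured = accuracy_score(y_test, model.predict(X_test))

verdict = "PASS" if measured >= claimed_accuracy else "FAIL"
print(f"claimed={claimed_accuracy:.2f} measured={measured:.2f} -> {verdict}")
```

Standardising such checks is what makes the resulting reports comparable across companies and credible to stakeholders.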

Commercial entities from different business sectors were invited to test A.I. Verify and provide feedback. IMDA and PDPC also invited organisations to pilot the MVP and have the opportunity to:

Gain early access to the MVP and use it to conduct self-testing on their AI systems/models;

Use MVP-generated reports to demonstrate transparency and build trust with their stakeholders; and

Help shape an internationally applicable MVP to contribute to international standards development.

ISAGO

A second offering, the Implementation and Self-Assessment Guide for Organisations (ISAGO), is a companion guide to the Model Framework that aims to help organisations assess the alignment of their AI governance practices with the framework.

ISAGO provides an extensive list of useful industry examples and practices to help organisations implement the Model Framework. It is the result of a collaboration with the World Economic Forum's Centre for the Fourth Industrial Revolution to drive further AI and data innovation, and was developed in close consultation with industry, with contributions from over 60 organisations.

Summary

As AI continues to shape the landscape of data privacy and protection, Singapore's Model AI Governance Framework provides a valuable resource for organisations seeking to develop and deploy AI systems responsibly and ethically.

The framework emphasises the principles of responsibility, explainability, fairness, and ethics, and provides detailed guidance on how organisations can implement these principles.

With the recent launch of A.I. Verify and ISAGO, Singapore is taking steps to promote transparency, accountability, and trust in AI systems. By striking a balance between technological innovation and consumer protection, Singapore is positioning itself as a leader in the global AI landscape, fostering a trusted ecosystem where AI can be harnessed for the benefit of society.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.