ARTICLE
16 September 2024

Artificial Intelligence 2024 (Japan Chapter) - Trends And Developments


AI Guidelines for Business

Introduction

In recent years, the pervasive growth and integration of artificial intelligence (AI) technologies have prompted significant attention from regulatory bodies worldwide, with Japan being a front-runner in establishing comprehensive governance frameworks. The "AI Guidelines for Business Ver 1.0" is a critical document issued jointly by the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry. These guidelines underscore Japan's proactive approach to shaping the ethical deployment of AI technologies across business sectors, aiming to foster innovation while ensuring security, privacy and ethical compliance.

Background and purpose

Japan's commitment to integrating AI aligns with its broader vision of "Society 5.0", a concept that envisions a human-centred society enhanced by digital technologies. The formulation of the AI Guidelines for Business reflects a concerted effort to harness AI's potential while addressing the ethical, legal and societal challenges that accompany its deployment. This initiative not only supports domestic policy frameworks but also aligns with international standards, contributing to global discussions on AI governance at forums such as the G7, G20 and OECD.

Policy framework and development process

The AI Guidelines for Business consolidate the earlier "AI Research & Development Guidelines", "AI Utilisation Guidelines" and "Governance Guidelines for Implementation of AI Principles", and are grounded in the "Social Principles of Human-Centric AI", which emphasise dignity, inclusion and sustainability. These principles guide the development, deployment and management of AI systems, ensuring that technological advancements contribute positively to society.

The guidelines have been developed through a collaborative approach involving multiple stakeholders, including academia, industry and civil society. This inclusive process ensures that the guidelines are comprehensive, reflecting a broad range of perspectives and expertise. The development process also incorporates continuous feedback, adapting to new challenges and technologies through a "Living Document" approach.

Key components of the guidelines

Basic philosophies

The guidelines articulate three fundamental philosophies:

  • Dignity: AI should enhance human capabilities without compromising human dignity.
  • Diversity and inclusion: AI should promote a society where diverse backgrounds are respected and included.
  • Sustainability: AI deployments should contribute to sustainable development and address global challenges.

These philosophies underpin the detailed principles and practices that guide AI development, deployment and utilisation across business sectors.

AI business actors and their responsibilities

The guidelines define roles and responsibilities for three main categories of AI business actors:

  • AI developers: Focus on ethical development practices, ensuring that AI systems are designed with respect for human rights and privacy.
  • AI providers: Responsible for the integration and provision of AI systems, ensuring they are safe, secure and used appropriately.
  • AI business users: Encouraged to use AI systems within ethical boundaries, maintaining transparency and accountability.

Governance and compliance

Effective governance is crucial for the safe and ethical use of AI. The guidelines provide a framework for:

  • Risk management: Identifying and mitigating risks throughout the life-cycle of AI systems.
  • Transparency and accountability: Ensuring that AI deployments are transparent and stakeholders are accountable for their outcomes.
  • Regulatory compliance: Aligning AI practices with national and international laws and standards.

Ten guiding principles

1 Human-centric

When developing, providing or using an AI system or service, each AI business actor should act in a way that does not violate the human rights guaranteed by the Constitution of Japan or recognised internationally, as the foundation for all of the matters to be carried out, including those described below. In addition, it is important that each AI business actor acts so that AI expands human abilities and enables diverse people to seek diverse forms of well-being.

2 Safety

Each AI business actor should avoid damage to the lives, bodies, minds and properties of stakeholders during the development, provision and use of AI systems and services. In addition, it is important that the environment is not damaged.

3 Fairness

During the development, provision or use of an AI system or service, it is important that each AI business actor makes efforts to eliminate unfair and harmful bias and discrimination against any specific individuals or groups based on race, gender, national origin, age, political opinion, religion and so forth. It is also important that before developing, providing or using an AI system or service, each AI business actor recognises that there are some unavoidable biases even if such attention is paid, and determines whether the unavoidable biases are allowable from the viewpoints of respect for human rights and diverse cultures.

4 Privacy protection

It is important that, during the development, provision or use of an AI system or service, each AI business actor respects and protects privacy in accordance with its importance. In doing so, relevant laws should be complied with.

5 Ensuring security

During the development, provision or use of an AI system or service, it is important that each AI business actor ensures security to prevent the behaviours of AI from being unintentionally altered or stopped by unauthorised manipulations.

6 Transparency

When developing, providing or using an AI system or service, based on the social context in which the AI system or service is used, it is important that each AI business actor provides stakeholders with information to the reasonable extent necessary and technically possible while ensuring the verifiability of the AI system or service.

7 Accountability

When developing, providing or using an AI system or service, it is important that each AI business actor fulfils its accountability to stakeholders to a reasonable extent, ensuring traceability and conformity with the common guiding principles and the like, based on that AI business actor's role and the degree of risk posed by the AI system or service.

8 Education/literacy

Each AI business actor is expected to provide persons engaged in AI within its organisation with the education necessary to gain the knowledge, literacy and ethical perspective needed to understand and use AI correctly and in a socially appropriate manner. Each AI business actor is also expected to provide stakeholders with education, taking into consideration the characteristics of AI, including its complexity, the misinformation it may provide and the possibility of its intentional misuse.

9 Ensuring fair competition

Each AI business actor is expected to maintain a fair competitive environment surrounding AI so that new businesses and services using AI are created, sustainable economic growth is maintained, and solutions for social challenges are provided.

10 Innovation

Each AI business actor is expected to make efforts to actively contribute to the promotion of innovation for the whole society.

Implementation strategies and international alignment

The guidelines emphasise the importance of aligning with international norms and standards to ensure that Japanese AI technologies are globally competitive and compliant. This alignment involves continuous updates to the guidelines based on international developments and technological advancements.

Challenges and future directions

While the guidelines set a robust framework for AI governance, ongoing challenges such as data privacy, algorithmic bias and cross-border data flows require continuous attention. Future revisions of the guidelines will need to address these evolving challenges and ensure that AI governance remains dynamic and responsive to new risks and opportunities.

Conclusion

Japan's AI Guidelines for Business represent a forward-thinking approach to AI governance that balances the need for innovation with the imperatives of security, privacy and ethical integrity. As AI continues to transform industries, these guidelines will play a crucial role in guiding businesses towards responsible and sustainable AI practices, setting a benchmark for global AI governance frameworks.

General Understanding on AI and Copyright

In May 2024, the Japan Copyright Office published guidance entitled "General Understanding on AI and Copyright in Japan" (hereinafter the "General Understanding"), which summarises the discussions of a dedicated legal subcommittee within the Copyright Office. While the General Understanding is not legally binding, it reflects the subcommittee's views, as at the time of publication, on the interpretation of legal issues involving AI and the Japanese Copyright Act. To discuss these issues, the General Understanding essentially distinguishes two situations: (i) where copyrighted works are used at the "AI development / training stage"; and (ii) where the use of an AI product or service (such as generating artistic works with AI) might infringe someone's copyright. Beyond these, the General Understanding also addresses the question of (iii) whether AI-generated materials are eligible for copyright protection and can themselves become copyrighted works.

Copyright issues involving the "AI development / training stage"

Under Article 30-4 of the Japanese Copyright Act, the exploitation of a copyrighted work for purposes other than the enjoyment of the thoughts or sentiments expressed in the work (exploitation for "non-enjoyment" purposes), such as AI development or other forms of data analysis, is in principle permitted without the copyright holder's permission. Within this framework, the key question is the standard for determining whether a given use of copyrighted works for AI development or training constitutes "enjoyment of the thoughts or sentiments expressed in the copyrighted work". On this point, the General Understanding suggests that, in the following cases, the reproduction of copyrighted works for AI training does not satisfy the "non-enjoyment purpose" requirement, so Article 30-4 of the Copyright Act would not apply:

  • The collection of works for AI training carried out with the aim of generating materials similar to the copyrighted works contained in the collected data.
  • The collection of works as input data to generative AI for implementation of retrieval augmented generation (RAG).

The General Understanding also points out the necessity to assess the applicability of the Article 30-4 proviso by considering "whether it will compete in the market with the copyrighted work" and "whether it will impede the potential sales channels of the copyrighted work in the future". This assessment should be made by taking various factors into account, such as "technological advancements" and "changes in the way the copyrighted work is used".

Possible copyright infringement when utilising an AI product/service

First, the General Understanding notes that, when AI-generated images or copies thereof are uploaded to social media or sold, copyright infringement will be determined based on the same criteria as for normal infringement. In other words, if an AI-generated image or any other creation is found to have "similarity" with and "dependence" on an existing image, etc (copyrighted work), and there are no applicable copyright exceptions, it will be considered an infringement of copyright. One of the key questions here would be how to determine "dependence" in the case of AI-generated content. On this point, the General Understanding suggests the following approach:

  • If it is uncertain whether a particular copyrighted material is used in the AI training data, dependence will be presumed if the copyright holder can prove that "the AI user had access to the existing copyrighted work" or "the AI-generated material has a high degree of similarity with the work".
  • Even if the user of an AI was not aware of a pre-existing copyrighted work, it is generally assumed that there was dependence on that work if the work was used for AI training during the development stage of that AI.

Possible copyright protection of AI-generated materials

Under the Japanese Copyright Act, a copyrighted work is defined as a "creatively produced expression of thoughts or sentiments that falls within the literary, academic, artistic or musical domain". Besides this, the General Understanding notes that only a person (ie, a natural or legal person) can be an "author" under the Copyright Act, meaning that AI itself, which does not have a legal personality, cannot be an author.

In light of this principle, the General Understanding points out that materials autonomously generated by AI are not "creatively produced expressions of thoughts or sentiments" and are therefore not considered copyrighted works. On the other hand, the General Understanding explains that, if AI is used as a "tool" by a person to creatively express thoughts or sentiments, such material is considered a work, and the user of the AI the "author".

Also, the General Understanding suggests that determining whether a person has used AI as a "tool" depends on two factors: (i) whether the person had a "creative intention" and (ii) whether the person has made a "creative contribution". As regards factor (ii), the General Understanding outlines circumstances under which AI products are recognised as containing the AI user's "creative contributions", and provides examples of how this factor may determine the copyrightability of AI-generated material.

AI-based Software as a Medical Device (SaMD)

In August 2023, the Subcommittee on Software as a Medical Device Utilising AI and Machine Learning of the Science Board of the Pharmaceuticals and Medical Devices Agency (PMDA) compiled and published a report summarising discussions from a scientific standpoint regarding AI-based Software as a Medical Device (SaMD). Key points from the report are introduced below.

Activities to establish medical device regulations and safety standards in Japan

Activities contributing to medical device regulation include the establishment of a review working group tasked with drafting evaluation indices for AI-based diagnostic imaging support systems, under the project of the Ministry of Health, Labour and Welfare (MHLW) for preparing evaluation indices for next-generation medical devices and regenerative medicine products. The deliverables of the working group, which operated from FY2017 to FY2018, were issued as PSEHB/MDED Notification No.1219-4 (Director, Medical Device Evaluation Division, Pharmaceutical Safety and Environmental Health Bureau, MHLW) in May 2019 after review of the public comments, and were adopted as evaluation indices.

Furthermore, certification standards are being formulated and revised so that SaMD with a track record of approval can be transferred to the certification system according to device type and target disease, in accordance with the regulatory reform plan approved by the Cabinet on 7 June 2022. In parallel, the PMDA has begun organising information on review points, eg, studying the conditions and evaluation points necessary for efficacy/safety evaluations, and publishing that information on its website to improve predictability for developers.

Additionally, activities providing scientific support include the research project for pharmaceutical regulatory harmonisation and evaluation of the Japan Agency for Medical Research and Development (AMED), "Study of pharmaceutical regulations on SaMD using advanced technology such as artificial intelligence", which started in 2019. In this study, the feasibility of AI-based SaMD capable of post-market learning was evaluated under industry-academia-government collaboration. As a result, a proposal for manufacturers to implement continuous learning and performance changes within the existing regulatory framework, in particular the "Improvement Design within Approval for Timely Evaluation Notice (IDATEN)", was compiled and submitted to the MHLW. An experimental study to identify training data factors that affect the performance of SaMD undergoing post-market learning was also conducted, and its results were incorporated into the proposal.

As a successor to the above study, the AMED pharmaceutical regulatory harmonisation and evaluation research project "Study to contribute to the performance evaluation during the post-market learning of AI-based SaMD" started in 2022, and an experimental study to identify the points to be considered when assessing the validity of the performance evaluation process is under way. Going forward, an industry-academia-government collaboration system will be established, and draft performance evaluation guidance will be prepared based on the results of the experimental study.

Meanwhile, the results of the Health and Labour Sciences Research Grant project were compiled and issued by the MHLW in May 2023 as guidance on approval and development that reflects the characteristics of SaMD.

Current status and challenges of SaMD in Japan

To make the most of the features of SaMD, the IDATEN system was introduced in Japan as an approval scheme applicable to change plans. However, the system has yet to be fully used. Possible reasons include the fact that post-market learning can both improve and degrade performance, and concerns about risks such as catastrophic forgetting. Attention should also be paid to the risk posed by repeated use of the same test data when evaluating performance after repeated retraining. Evaluation using the pre-market test data available at the time of approval is important to confirm that the performance achieved at approval is maintained without problems such as catastrophic forgetting, while evaluation using post-market test data is necessary to check the performance of the system in operation. When overfitting occurs, its cause should be identified and appropriate action taken, eg, continuing development of the SaMD using a lower-risk development method, or preventing the problem from spreading through strict measures, including suspension of use of the SaMD. It is important to further deepen the discussions on this issue.
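In practical terms, the evaluation approach described above can be illustrated with a short, hypothetical sketch: after each retraining, the updated model is scored against a frozen pre-market test set (to check that the performance documented at approval is maintained, ie, that no catastrophic forgetting has occurred) and against an independent post-market test set (to check performance in current operation). The sketch below is purely illustrative and is not taken from the PMDA report; the function names, the use of AUC as the metric and the tolerance threshold are assumptions chosen for the example.

```python
# Illustrative sketch only (not part of the PMDA report or any regulatory guidance).
# After post-market retraining, the model is evaluated on (i) a frozen pre-market
# test set, to detect catastrophic forgetting relative to the approved performance,
# and (ii) a separate post-market test set, to check performance in operation.
# All names, metrics and thresholds are hypothetical assumptions.
from dataclasses import dataclass

from sklearn.base import ClassifierMixin
from sklearn.metrics import roc_auc_score


@dataclass
class EvaluationResult:
    premarket_auc: float       # performance on the frozen pre-market test set
    postmarket_auc: float      # performance on data reflecting current operation
    forgetting_detected: bool  # pre-market performance fell below the approved level


def evaluate_retrained_model(
    model: ClassifierMixin,
    premarket_X, premarket_y,    # frozen test set used at the time of approval
    postmarket_X, postmarket_y,  # newly collected, independent test set
    approved_auc: float,         # performance level documented at approval
    tolerance: float = 0.02,     # allowable drop (hypothetical threshold)
) -> EvaluationResult:
    """Score a retrained model on both evaluation datasets and flag forgetting."""
    pre_auc = roc_auc_score(premarket_y, model.predict_proba(premarket_X)[:, 1])
    post_auc = roc_auc_score(postmarket_y, model.predict_proba(postmarket_X)[:, 1])
    forgetting = pre_auc < approved_auc - tolerance
    return EvaluationResult(pre_auc, post_auc, forgetting)


if __name__ == "__main__":
    # Tiny synthetic usage example with randomly generated data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 5)); y_train = (X_train[:, 0] > 0).astype(int)
    X_pre = rng.normal(size=(200, 5));   y_pre = (X_pre[:, 0] > 0).astype(int)
    X_post = rng.normal(size=(200, 5));  y_post = (X_post[:, 0] > 0).astype(int)

    retrained = LogisticRegression().fit(X_train, y_train)
    result = evaluate_retrained_model(retrained, X_pre, y_pre, X_post, y_post,
                                      approved_auc=0.95)
    print(result)
```

The design point of the sketch is simply that the pre-market test set is never replaced or reused for model selection: it is held fixed so that any drop against the approved performance level is attributable to the retraining itself rather than to drift in the evaluation data.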

Furthermore, in the development of deep learning (DL)/AI systems using medical images (radiological and ultrasound images), there are few prospective DL clinical trials or randomised clinical trials of diagnostic imaging. Most non-randomised clinical trials lack a prospective design, carry a high risk of bias and deviate from existing standards for reporting clinical trial outcomes. To avoid creating obstacles to the approval of AI-based medical devices, appropriate clinical evaluation methods should be discussed with future trends in the research and development of related technologies in mind; nevertheless, efforts to reduce bias risks scientifically should ultimately be made. It will also be necessary to continue discussing whether post-market prospective or randomised clinical trials are needed to evaluate performance improvements achieved through post-market learning.

Additionally, the combined use of numerical simulation and machine learning (ML) has been studied. ML-based medical device programs that use numerical simulation in the development process, or that are developed based on a combination of numerical simulation and ML, may become available in the near future. The issues involved in real measurement, including biases, can be controlled through careful use of numerical simulation, but numerical simulation has its own specific limitations. The Science Board report will be a useful reference on this point.

Databases developed to date in Japan

To date, AMED has supported the development of four representative medical image databases (surgery videos, digital pathological images, ECG and gastrointestinal endoscopy) in Japan.

The surgery videos database, primarily established at the National Cancer Center Hospital East, has collected approximately 4,000 cases of 13 different procedures as of 23 August 2023. Each surgical video dataset is linked to patient information, surgeon details and device data. Challenges include the complexity of standardising different video formats and privacy considerations concerning the personal information visible in the videos. Other databases, such as those for digital pathology images, electrocardiograms and gastrointestinal endoscopies, share common challenges related to large data volumes, the complexity of creating annotated datasets, and the need for funding.

Recent Developments in Japan Concerning AI

Recently in Japan, there has been a rapid emergence of businesses utilising generative AI to produce anime and manga, which are among the country's principal export goods. Traditionally, the creation of anime and manga has required substantial time and costs. However, it is anticipated that generative AI will enable the production of high-quality content both quickly and economically. Nevertheless, there are concerns that using generative AI for creating anime and manga might pose legal challenges under copyright law. In response, the Agency for Cultural Affairs has provided guidance as stakeholders explore lawful ways to conduct business.

Additionally, in Japan, there is a growing trend of developing generative AIs that specialise in Japanese, based on data that mitigates the risk of copyright infringement. Although these initiatives may not be as large-scale as those involving platforms like ChatGPT, they are welcomed for their emphasis on eliminating copyright infringement risks. There is keen interest in the future results of these developments.

Originally published by Chambers and Partners.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
