There are no legislative or regulatory provisions that specifically govern AI in Japan. To promote and facilitate the free flow of data and the utilisation of AI-related services, contracts and agreements among the parties are therefore very important. In this regard, the Civil Code governs AI business in Japan.
In many AI businesses, personal data must be collected and utilised for machine learning; this will be subject to regulation under the Act on the Protection of Personal Information (APPI).
The process of developing machine learning and statistical models using AI products involves the handling of valuable information such as data, programs and know-how. There has been much discussion on how to expand the IP laws to protect AI-related technologies, products and content. In 2019, amendments to the Unfair Competition Prevention Act introduced new legal protection for raw data utilised in AI. Such raw data – which is provided subject to certain access restrictions and restrictions on transfers to third parties – does not qualify as a trade secret, but benefits from this new protection. The Japan Patent Office has also amended the Examination Guidelines for Patents and Utility Models several times to cover AI-related cases.
There is no general duty specific to AI; however, under the Civil Code, a company must exercise the due care of a prudent manager when using AI. For instance, where a company utilises AI in providing its services, its duty of care will be determined based on the average duty in that service sector. Where AI is utilised broadly to provide similar services in the relevant sector and the average service provider has significant experience of utilising AI, the extent of this duty will be greater than where AI is utilised narrowly in that sector and the average service provider has little experience of utilising AI.
Due to their novelty, new products are more likely to be viewed as risky – to an excessive, irrational degree – than products that have been in widespread use for a long time. New products also attract people’s attention more readily than conventional products. As robots and other mobile AI products are very new, it is difficult to dismiss as completely groundless people’s fears that they might malfunction and cause accidents. As regards liability arising from a robot accident, some scholars claim that such liability should be analysed on the basis of product liability theory.
A bill that amends the National Strategic Special Zone Law was passed in May 2020.
The amended law aims to create ‘super cities’ in which AI, big data and other technologies are utilised to resolve social problems. The government hopes to deploy cutting-edge technologies to address issues such as depopulation and an ageing society. In such cities, data-linking platforms to collect and organise various kinds of data from administrative organisations and companies will be established for autonomous driving, cashless payments, telemedicine and other services. Several areas will be designated as ‘super cities’ in 2021.
Japan has joined a league of leading economies – including the United States, the United Kingdom, the European Union, Australia, Canada, France, Germany, India, Italy, Mexico, New Zealand, South Korea and Singapore – in launching the Global Partnership on Artificial Intelligence (GPAI). The GPAI is an international, multi-stakeholder initiative to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation and economic growth.
There are no binding bilateral or multilateral instruments specific to the AI context.
The Personal Information Protection Commission (PPC) is responsible for enforcing the APPI. The PPC thus ensures that personal data is handled appropriately in order to protect data subjects’ rights and interests. The PPC has the primary investigatory, advisory and enforcement powers under the APPI, including:
- the power to investigate the activities of companies that handle personal data; and
- in certain instances, the power to render advice to and make orders against them, if the infringement of a data subject’s material rights or interests is imminent.
In relation to the protection of personal data under the APPI, the PPC may delegate its investigatory powers to the relevant minister or another body in limited circumstances, but not its advisory or enforcement powers. It can also provide information to foreign data protection regulators, and in limited circumstances may allow information to be used for criminal investigations overseas.
The Japanese government is reluctant to establish binding rules to regulate AI specifically. The Ministry of Economy, Trade and Industry (METI), the Ministry of Internal Affairs and Communications (MIC) and various other ministries are keen to promote AI business and have issued many non-binding guidelines for AI businesses. In June 2018, METI published the Contract Guidance on Utilisation of AI and Data, which sets out guidance that businesses should consider in negotiating and coordinating the details or terms of contracts. The MIC also published Draft AI Research and Development Guidelines in July 2017 and AI Utilisation Guidelines in August 2019.
Financial services and manufacturing are the leading sectors in terms of AI adoption. The healthcare sector was also one of the first movers in implementing AI in advanced medicine development.
The following areas afford the greatest potential to realise gains from AI-led developments:
- healthcare;
- financial services;
- agriculture;
- consumer and retail; and
- public and utility services.
Although there are four types of company structures in Japan, many AI companies are established in one of the two main company structures: the corporation/joint stock company (kabushiki-kaisha) or the limited liability company (gōdō-kaisha). The kabushiki-kaisha is the most widely known and credible type of company structure in Japan. Like any corporation, such companies are owned by their investors (ie, shareholders) and governed by their directors. A gōdō-kaisha is a company structure that is typical among small and medium-sized enterprises, with a partnership structure as opposed to a share structure. This type of company structure is relatively new in Japan, having been introduced in 2006 to replace the yugen-kaisha structure, and is therefore not as prominent as the kabushiki-kaisha.
Many AI companies in Japan are start-ups. They can use a wide variety of financing methods, including:
- crowdfunding;
- bank loans;
- acquisition of shares by professional investors (business angels); and
- fund raising through venture capital funds.
The government has played an active role in the application and proliferation of AI in various business sectors. Whether in relation to software development, cybersecurity, fintech, fisheries or agriculture, the state is exploring exciting new ways to pursue AI-developed technology and establish guidelines and best practices.
The Financial Services Agency of Japan is acquiring AI capabilities to monitor and analyse social media posts in order to detect and prevent possible market manipulation.
(a) Healthcare
The fields of application of AI in medicine are numerous and include:
- computer-assisted surgery;
- remote patient monitoring;
- diagnostic assistance; and
- personalised treatments.
The Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry have jointly issued security guidelines on the processing of health data, which may apply to the processing of health data in the context of developing or running medical software that includes AI elements (eg, computer-assisted surgery, remote patient monitoring, diagnostic assistance). However, these provisions are not specific to AI.
(b) Security and defence
The Ministry of Defence announced plans to invest $240 million in the cyber domain in 2020, including the development of an AI-based system to counter cyber attacks. The aim is a comprehensive AI system that autonomously detects malicious emails, judges the level of threat, responds to cyber attacks in an automated way through machine learning, and ultimately neutralises the effect of cyber attacks on public and private sector targets.
(c) Autonomous vehicles
Amendments to the Road Traffic Act and the Automobile Liability Security Act in 2019 have introduced changes to the civil liability of the driver.
Under the amendments, the primary liability for losses caused by a traffic accident is assigned to the operator of the vehicle (eg, the owner of the vehicle or the business owner of a transportation service – not necessarily the driver).
The burden of proof (to disprove negligence) in an accident is shifted to the operator, which will be held liable for damages caused by the accident unless it can successfully prove all of the following:
- it exercised due care;
- the victim or a third party was at fault; and
- the vehicle did not have any defect.
(d) Manufacturing
Japanese manufacturing companies have moved towards factory automation solutions in order to improve product quality and design, reduce labour costs, minimise the manufacturing cycle and monitor the real-time condition of machines. However, new types of AI-based hardware and software are being adopted in an unregulated space, without clear rules on workers’ rights, liability for AI software, data privacy or cybersecurity.
(e) Agriculture
There are no dedicated regulations in this sector.
(f) Professional services
There are no dedicated regulations in this sector.
(g) Public sector
There are no dedicated regulations in this sector.
(h) Other
Numerous AI applications have emerged in the legal sphere. Many legal tech firms are using machine learning processes to develop software and applications for tasks such as:
- reviewing contracts;
- translating legal documents; and
- assisting with due diligence.
In many AI businesses, it is necessary to collect and utilise personal data for machine learning. Such collection and utilisation will be subject to regulations under the Act on the Protection of Personal Information (APPI). It is necessary to determine whether the data utilised in AI processes (whether to train models or to use applications that incorporate AI) constitutes personal data, in order to identify the applicable regulations and obligations under the APPI.
Where AI processes involve the processing of personal data, the issues and implications will differ depending on the stage of the process. At the stage of AI development (ie, where databases are used to train models), the issues and available solutions will differ depending on whether the company has created a specific database to train AI models or wants to use existing databases (created for other purposes) to train AI models. In any case, the company will need to:
- inform data subjects appropriately regarding the use of their data; and
- ensure that data subjects can exercise their rights (which in practice is not always easy, depending on the data used to train models).
The applicable cybersecurity regime is generally set out in the APPI and its guidelines, as well as cybersecurity management guidelines. The government of Japan has also issued many cybersecurity guidelines with a specific focus on various business sectors, including telecommunications and cloud services.
In terms of competition, AI may pose challenges such as:
- market foreclosure and related exclusionary practices;
- novel ways of collusion; and
- new strategies for price discrimination.
AI may also raise concerns about technological sovereignty and wealth inequality. Due to the self-learning nature of AI and machine learning tools, much of their functioning may occur without the knowledge of the coders or programmers. While this issue is being studied abroad, it has not yet been examined in Japan within the framework of the Act on Prohibition of Private Monopolisation and Maintenance of Fair Trade.
In 2019, the Japan Fair Trade Commission published the Draft Guidelines Concerning Abuse of a Superior Bargaining Position by Digital Platform Operators to Protect Consumers’ Personal Information, which discuss how the use or potential abuse of data may distort competition.
From an employment perspective, evaluation by AI might result in unreasonable discrimination among employees. In addition, some AI companies provide employee monitoring services that utilise AI, which might result in an invasion of privacy and in legal issues if an employer proceeds with such monitoring without implementing the necessary procedures.
There are some concerns relating to data integrity, especially as regards medicine research and development. Data integrity is often discussed in relation to information security. Moreover, as AI systems do not have the intelligence to ascertain whether they are learning from the right or wrong data source, there is a real risk of data manipulation. If a person intentionally feeds inaccurate data into an AI system, this may lead to the manipulation of outcomes and may also undermine the integrity of both the data sets and the outcomes. The Ministry of Economy, Trade and Industry is drafting AI Governance Guidelines which propose stringent measures in order to maintain the accuracy of data.
As the AI market is still in its infancy, it is experiencing a period of growth and development. The current approach of the government aims to establish broad principles for the design, development and deployment of AI in Japan. The Ministry of Economy, Trade and Industry is drafting AI Governance Guidelines which highlight the risks arising from the utilisation of AI in business and suggest how to avoid those risks. An interim report was issued in December 2020.
The proliferation of guidance and guidelines on AI has resulted in the publication of many principles aimed at regulating AI. Although these principles may sometimes have different names, they can be categorised under the following main themes:
- Principle of autonomy: AI systems should not subordinate, coerce, deceive, manipulate, condition or control human beings, who should be able to maintain their full and effective self-determination to take part in the democratic process.
- Principle of justice: To the furthest extent possible, detectable and discriminatory bias should be avoided from the collection stage onwards; and a control procedure should be implemented to analyse, in a clear and transparent manner, the purposes, constraints, requirements and decisions of the system.
- Principle of explicability and transparency: The data sets and processes by which an AI system renders a decision – including the data collection and tagging processes, as well as the algorithms used – should be documented to the highest standards, to allow for traceability and improved transparency. This also applies to the decisions rendered by the AI system.
This is a complex question and the response will intrinsically depend on the company concerned, its operations and its sector of activity.
In practice, according to its own criteria and based on the best practices usually implemented in its sector of activity, each company will need to determine:
- the nature of the best practices to be observed in relation to the use of AI technologies (whether these are integrated into tools used within the company or into its products and services); and
- the appropriate internal channels to ensure their effectiveness.
AI can function as a kind of black box, and in many AI analysis services it is difficult to guarantee what results an AI engine will generate. Including a provision warranting the quality or quantity of results would therefore present legal risks for an AI service company, and it is important to exclude such provisions in order to mitigate those risks.
AI is a kind of black box and it is impossible to guarantee that the quality of results generated by an AI service will meet a customer’s needs.
AI might introduce bias and discrimination into recruitment processes.
In Japan, innovation in the AI space is protected by the legislation applicable to IP rights. In determining which IP rights apply to a specific AI innovation, a distinction must be made between the protection of the tool itself (and its various components) and that of its outputs. This analysis requires an understanding of both the tool and its operation. How this analysis is carried out will differ depending on whether the IP rights involved relate to:
- literary and artistic property (copyright that protects creations); or
- industrial property (trademarks, patents).
Likewise, the ownership of IP rights will depend on whether more than one person has collaborated in the development of the AI technology.
This is a complex issue that inherently requires case-by-case analysis, but the following points will apply in general:
- Algorithms: If an algorithm cannot be subject to copyright due to a lack of sufficient formalisation, the coded expression of the program (ie, the software integrating the algorithm) will nonetheless be protected by copyright, as long as it meets the originality requirement. In addition, in some cases, machine learning algorithms can constitute innovative technical solutions that open the door to patent protection. In this respect, the Japan Patent Office has already granted a number of patents for computer-implemented inventions that incorporate AI.
- Databases: AI technology that integrates several databases (eg, raw data, labelled data, datasets, output data) may be protected by copyright, provided that the originality requirement is met.
Regulatory sandboxes are the main measures that the government has adopted to incentivise innovation in the AI space. Additionally, the benefits which are available to start-ups are sector agnostic and will be available to all AI players.
Employment is governed by contractual arrangements under employment laws, including the Labour Standards Act, and AI companies must operate within this framework where functions previously fulfilled by human resources are taken over by AI. In the face of the ongoing COVID-19 pandemic, several companies are devising AI-based solutions that could be employed in manpower-reliant industries to ensure continuity of work or business.
There are no specialised regulations on employment laws in the IT sector. Much like foreign direct investment, to which few limitations apply, it is possible for AI companies to attract and engage specialist talent from overseas.
To date, no specific legislative reforms on AI have been announced for the next 12 months. The implementation of specific legal regimes has not yet been identified as an essential step for the development of AI in Japan, as AI is currently regulated under a soft law approach. The AI market is still in its infancy and is being promoted by the government. The Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry will continue to play a central role in the formulation of policy to create and protect innovation.
It is important that AI companies understand how to obtain and utilise copyrighted works for machine learning legally under the Copyright Act of Japan, as the scope of copyrighted works which can be obtained and utilised without the copyright holder’s prior approval is narrower than the scope under the fair use doctrine.
Any company wishing to conduct AI activity in Japan should also seek assistance from qualified professionals and bear in mind the following recommendations:
- Analyse and take the necessary steps to benefit from the available financial mechanisms that would facilitate the financing of your company and activity.
- Identify the AI market in which you intend to commercialise your products and services, in order to deploy an adaptive and competitive business model.
- Analyse the feasibility of your project from a regulatory standpoint, depending on the sector in which you wish to develop and market your AI products and services, in order to identify possible obstacles to its development or specific measures to be implemented (as well as the associated costs) so as to facilitate its deployment and ensure its legal security.