It is evident that Artificial Intelligence (AI) is changing the face of medicine1: it detects pathological findings in imaging, assists in the early diagnosis of high-risk patients, tailors medication to illness more precisely and personally, and supports physicians in medical decision-making. For example, AI today can identify a "diabetic eye" - a dangerous medical condition that diabetic patients are at significant risk of developing, and whose early detection is critical for treatment and for preventing blindness; AI "reviews" patients' medical information in the intensive care unit and helps identify patients whose condition is about to deteriorate; AI "marks" moles suspected of being malignant, and even assists in directing patients within the hospital, whether to the ER or, once there, through its procedures.

As the number of AI applications expands, and the scope of their adoption in medical diagnosis and treatment grows, it is necessary to formulate a suitable regulatory framework.

For example, the use of AI technology raises the risk of increased discrimination in medical decision-making, given the biases inherent in medical databases2. Moreover, medical decision-making becomes difficult in a "black box" situation, where there is no clear linkage between the input - the data entered (e.g., the medical records of all patients in a similar condition) - and the final output (such as a recommendation for a specific treatment). If we add to this the intricacy of long-term "learning" algorithms, capable of generating different outputs even in similar situations, we see that existing legal categories fail to provide suitable solutions to this complex technology.

Presently, medical devices that include AI elements are classified in Israel under existing traditional legal categories (regulation of medical devices, etc.), in the absence of specific mandatory regulations addressing the unique characteristics of AI. By comparison, in the United States the FDA is taking initial steps to regulate AI's special characteristics, including a pilot program for prior approval of an AI product that will change and "learn" during its period of use, while monitoring it throughout its lifecycle.

At the same time, American and European regulatory bodies issue non-binding instructions and recommendations regarding AI, which may eventually become binding regulations and influence Israeli regulation3.

What must be considered when developing an AI-based product to be integrated in medical care?

As one develops software intended for use in medical care, it is necessary to address the relevant regulatory requirements. Note that the definitions of "medical device" and "software as a medical device" are complex, and it is therefore important to seek advice on the matter in advance.

Furthermore, one must ensure compliance with regulatory requirements concerning privacy protection, both while using medical information to develop the product (see our previous post on secondary use of medical data) and throughout the product's lifecycle, inasmuch as it continues to collect and analyze data.

One must examine whether the use of AI violates existing regulations, such as the Physicians' Ordinance, which stipulates that only an accredited physician is permitted to undertake acts that constitute medical practice. From this we may deduce that the use of AI for certain medical procedures may be permissible only subject to a physician's evaluation and decision. This raises questions: Is the physician entitled to set a sphere of action in which the AI will "decide"? Must every AI "recommendation" be subject to a physician's approval? Is a physician's ability to supervise a system and halt it in real time sufficient? To this we add the fear that, in the future, an argument might be raised that this is a departure from accepted practice and can be deemed medical negligence.

In addition, from the non-obligatory guidance published in the AI field, some principles emerge that might eventually become mandatory regulation. These are the main ones:

Transparency - it is important to document the database, as well as the method whereby the AI formulates its outputs, in order to identify and prevent biases. The following questions must be addressed: What is the source of the data, and how are the data processed? How was the algorithm developed, and how was it validated? How do the algorithm's elements work together? How does the algorithm produce its output?
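The documentation the transparency principle calls for can also be kept in machine-readable form. The following is a minimal sketch, loosely inspired by the "model card" practice; all field names and values are hypothetical illustrations, not regulatory requirements.

```python
# Hypothetical, machine-readable documentation answering the transparency
# questions above: data source and processing, development, validation.
import json

model_card = {
    "data_source": "retinal images, Hospital X, 2018-2020 (hypothetical)",
    "preprocessing": ["de-identification", "resize to 512x512",
                      "contrast normalization"],
    "development": {"algorithm": "convolutional neural network",
                    "training_split": 0.8},
    "validation": {"method": "held-out test set",
                   "sensitivity": 0.91, "specificity": 0.94},
    "limitations": ["not validated on pediatric patients"],
}

# Serializing the record makes it easy to attach to a regulatory submission.
print(json.dumps(model_card, indent=2))
```

Keeping such a record alongside the product, and updating it whenever the algorithm "learns" and changes, makes it easier to answer the regulator's questions at any point in the product's lifecycle.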

Explainability - an explanation must be provided of how the product works: its functioning, the type of information it processes, that information's significance, its limitations, and its "decision-making" process. The explanation must enable the physician to understand the AI product's process and results and to explain them to the patient.

Fairness - it must be ascertained that the product is robust and valid, to prevent discrimination; one must verify both the information on which the algorithm was developed and continues to change, and its output, i.e. its recommendation, in order to prevent discrimination against vulnerable populations.
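One concrete way to verify the output side of the fairness principle is to compare a performance metric across population groups. The following is a minimal sketch, with hypothetical group names and data, comparing the rate at which a diagnostic model correctly flags patients who truly have the condition (its sensitivity) per group.

```python
# Minimal sketch of a subgroup fairness check: compare the true-positive
# rate (sensitivity) of a diagnostic model across population groups.
# Group labels and data below are hypothetical illustrations.
from collections import defaultdict

def true_positive_rate_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples,
    where 1 means the condition is present / flagged."""
    tp = defaultdict(int)   # correctly flagged positives per group
    pos = defaultdict(int)  # actual positives per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos}

records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]
rates = true_positive_rate_by_group(records)
# A large gap in sensitivity between groups is a red flag for human review.
gap = max(rates.values()) - min(rates.values())
print(rates, gap)
```

A check like this, run both during development and periodically as a "learning" product changes, gives the supervision mechanisms described below something measurable to act on.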

Precise, reliable and well-protected technology - the system must be precise, reliable and secure, so that its outcomes can be replicated, errors can be rectified or warned about, and manipulation or harm to the patient prevented.

Mechanisms for supervision and control, including human oversight - it must be ascertained that decision-making regarding medical treatment remains with the healthcare provider, as noted above, and that supervision and control mechanisms have been established to identify and correct errors in real time.

Protection of patients' rights - all the guidelines address the importance of respecting human rights and the impact on a person's bodily integrity and right to privacy and autonomy.

These are general principles; at a later date we will present a proposal that is based on these principles and provides practical tools for developing and utilizing AI products in the medical field.


Each new technology challenges existing regulation. As AI becomes an integral part of medical practice, it is important to consider the issues it raises as early as the product's planning and development stage, and to follow developments that may influence the product's approval and use. Proper advance planning will help overcome potential obstacles later, and lead to a safer, better product for the patient's medical care.


1 For an in-depth review of the legal aspects of AI in medicine, see: Roy Keidar and Tamar Tavory, "Legal and Regulatory Aspects of AI in Medicine," in EMERGING TECHNOLOGIES: THE ISRAELI PERSPECTIVE (Lior Zemer, Dov Greenbaum and Aviv Gaon, eds., Nevo, 2021, Hebrew).

2 See Heidi Ledford, "Millions Affected by Racial Bias in Health-Care Algorithm," NATURE, 31 Oct. 2019.

3 Examples of such guidance include the FTC guidance on Using Artificial Intelligence and Algorithms (April 2020), and the Council of Europe, Ad Hoc Committee on Artificial Intelligence, "Towards Regulation of AI Systems: Global Perspectives on the Development of a Legal Framework on Artificial Intelligence Systems Based on the Council of Europe's Standards on Human Rights, Democracy and the Rule of Law" (Dec. 2020). The latter report also refers to the report of Israel's Committee on AI Ethics and Regulation (Nov. 2019), headed by Prof. Karine Nahon.
