ARTICLE
10 April 2025

AI-Enabled Medical Devices: Transformation And Regulation

McCarthy Tétrault LLP

The use of artificial intelligence (“AI”) in healthcare creates transformational opportunities to improve systems and care, while also testing existing regulatory pathways and prompting novel approaches for the approval of AI-enabled medical devices. Although the regulatory approval pathways for machine learning medical devices (“MLMD”) and AI-enabled medical devices in Canada and the U.S. have different intricacies, the approaches taken by Health Canada and the U.S. Food & Drug Administration (the “FDA”) share certain similarities, and both regulatory agencies have issued overlapping guidance. As such, medical device manufacturers and medical software developers can expect to prepare and submit similar information across both their FDA and Health Canada applications.

Regulatory Pathways for AI-Enabled Medical Devices

The United States' healthcare system and market are often considerations in the research and development of new medical devices in Canada. It is not uncommon for Canadian companies to look to the United States for regulatory approval of their devices, even before seeking similar approvals in Canada. Accordingly, we set out below general information about the FDA's regulatory pathway before discussing the Health Canada process.

U.S. Food & Drug Administration

As of its latest update on March 25, 2025, the FDA had authorized 1,016 AI/ML-enabled medical devices.1

When navigating the U.S. regulatory regime, it is first important to determine whether a product is classified as a “medical device” under the Federal Food, Drug, and Cosmetic Act (“FD&C Act”).

For instance, software may be considered a medical device. One common type of software that may fall into this category, and which is primed for AI integration, is Clinical Decision Support (“CDS”) software. The FDA has issued guidance on interpreting whether CDS software meets the criteria in section 520(o)(1)(E) of the FD&C Act; the guidance sets out those criteria, along with non-exhaustive examples, for determining whether a given CDS software function would be considered a Device or Non-Device CDS.2

If a software function meets each of the criteria, it is excluded from the definition of a device under s. 520(o)(1)(E) of the FD&C Act. If any of the criteria are not met, the CDS software function is a device under the FD&C Act and will be regulated accordingly. As such, careful attention is needed at the research and development stage to adequately plan for the correct approval pathway.

Once an AI-enabled or machine learning product is determined to be a medical device, the applicable premarket pathway must be selected. The FDA provides three different premarket pathways for medical devices, some of which are only available for select classes of devices:

Premarket notification (510(k))3

A 510(k) is a premarket submission made to the FDA to demonstrate that the device to be marketed is as safe and effective as, or “substantially equivalent” to, a pre-existing legally marketed device.4 Manufacturers of Class I, II, and III devices intended for human use, for which a Premarket Approval application (“PMA”) is not required, must submit a 510(k) to the FDA unless the device is exempt from 510(k) requirements. Prior to marketing a device in the U.S., each submitter must first receive an order from the FDA finding the device to be substantially equivalent, thereby clearing the device for commercial distribution.5

Any legally marketed device may be used as a predicate (including a device that was itself cleared under the 510(k) pathway), as long as the predicate is not currently in violation of the FD&C Act. Substantial equivalence is determined by comparing the device to the predicate and establishing that:

  • it has the same intended use as the predicate; and
  • it has the same technological characteristics as the predicate; or
  • it has different technological characteristics but does not raise different questions of safety and effectiveness, and the information submitted to the FDA demonstrates that the device is as safe and effective as the legally marketed device.

While the widespread growth of AI capacity and technology is still relatively recent, one peer-reviewed paper notes that an increasing number of AI/ML-based medical devices are being approved through the 510(k) pathway.6 This suggests that many existing companies are leveraging AI to repurpose existing software and other intellectual property, using the 510(k) premarket clearance pathway to clear AI/ML-based medical devices and advance their commercial interests.7

De Novo Classification

Where a device manufacturer either determines that there is no appropriate predicate device or receives a “not substantially equivalent” (“NSE”) determination from the FDA in response to a 510(k) submission, the De Novo classification pathway may be used for novel devices of low to moderate risk (class I or class II).8 The De Novo request provides a marketing pathway to classify novel medical devices for which general controls, or general and special controls, provide reasonable assurance of safety and effectiveness for the intended use.

If the data and information demonstrate that the general and any special controls are adequate to provide reasonable assurance of safety and effectiveness, and that the probable benefits outweigh the probable risks of the device, the FDA will authorize the device to be marketed in compliance with applicable regulatory controls and will establish a new classification regulation for the new device type.9 Once granted, the new device may also serve as a predicate device for future 510(k) submissions.

Premarket Approval (s. 515)

Class III devices are those that support or sustain human life, are of substantial importance in preventing impairment of human health, or present a potential, unreasonable risk of illness or injury.10 Due to the higher level of risk associated with Class III devices, in addition to general and special controls, these devices require premarket approval (“PMA”) under section 515 of the FD&C Act, the most stringent pathway required by the FDA. As such, manufacturers and software developers planning to introduce a truly novel AI-enabled Class III medical device that lacks an appropriate predicate should be prepared for the financial resources, timeline, and risk of rejection involved in applying for a PMA.

A PMA application consists of scientific and regulatory documentation submitted to the FDA to demonstrate the safety and effectiveness of a Class III device. The technical sections of the PMA application include data and information from clinical and non-clinical laboratory studies substantiating the application, as well as reliability testing of algorithms, particularly for adaptive models. Multiple device-specific FDA guidance documents describe specific data requirements for PMA applications, and study protocols should include all applicable elements described in the device-specific guidance documents.11

AI and MLMD Specific Guidance

At the beginning of 2021, the FDA released an AI/ML-Based Software as a Medical Device (SaMD) Action Plan, which describes the FDA's foundational principles for premarket review and for oversight of performance monitoring and evaluation of AI-enabled medical devices and MLMDs.12 Consistent with the Action Plan, the FDA has since published additional documents and guidance, together with Health Canada, the UK Medicines and Healthcare products Regulatory Agency (“MHRA”), and the International Medical Device Regulators Forum (“IMDRF”), which are discussed further below.

In June 2024, the FDA issued its Final Guidance: Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions.13 As described by the FDA, the recommendations in this guidance are intended to support iterative improvement through modifications to AI-enabled devices while continuing to provide a reasonable assurance of device safety and effectiveness through the use of a Predetermined Change Control Plan (“PCCP”). The FDA reviews a manufacturer's PCCP as part of a marketing submission for an AI-enabled device to ensure the continued safety and effectiveness of the device, without requiring a new marketing submission for each modification described in the PCCP.

On January 6, 2025, the FDA published the Draft Guidance: Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.14 This draft guidance proposes both lifecycle considerations and specific recommendations to support marketing submissions for AI-enabled medical devices. The draft guidance highlights recommendations from other FDA guidance in order to assist manufacturers with applying those recommendations to AI-enabled devices, and provides additional recommendations on topics of specific relevance to AI. When final, the guidance is intended to be used in addition to other applicable FDA guidance for a given device.

Health Canada

Health Canada's regulatory authority in relation to medical devices is derived from the Food and Drugs Act (“F&DA”) and accompanying Medical Devices Regulations (“MDR”).

Similar to the FDA pathway, determining what constitutes a medical device under Canada's MDR is an important first step. A medical device is defined as “an instrument, apparatus, contrivance or other similar article, or an in vitro reagent, including a component, part or accessory of any of them, that is manufactured, sold or represented for use in: (a) diagnosing, treating, mitigating or preventing a disease, disorder or abnormal physical state, or any of their symptoms, in human beings or animals.”15

For Health Canada, MLMDs are classified from Class I-IV and include standalone software, medical devices that incorporate software, and both in vitro and non-in vitro diagnostic devices. When applying to Health Canada, manufacturers should be clear in their cover letter about:

  • stating that their device uses ML, for all Class II, III, and IV MLMD applications;
  • noting that their device includes a PCCP, if a PCCP is being introduced or required; and
  • providing a justification for the proposed medical device classification applied to the MLMD, with reference to the rules outlined in Schedule 1 of the MDR.16

Applications must demonstrate that the MLMD (and any accompanying PCCP) meets and will continue to meet applicable safety and effectiveness requirements, and that its therapeutic benefit outweighs an acceptable level of risk. All medical devices must meet the applicable requirements of sections 10-20 of the MDR, while Class II-IV applications must also include the information listed in section 32.17 Manufacturers should provide appropriate clinical evidence to support the safe and effective clinical use of Class III and IV MLMDs. Applicants should always be prepared to provide more information during review of an application or after a device has been licensed.

Health Canada takes a product lifecycle approach in examining the safety and effectiveness of an MLMD, and appropriate attention should be paid to Health Canada's ‘Pre-market guidance for machine learning-enabled medical devices'18 and ‘Good Machine Learning Practice for Medical Device Development: Guiding Principles'19 in meeting those standards. This guidance addresses key considerations such as risk management (including bias, specifically calling out the need for sex- and gender-based analysis plus (SGBA Plus)), data set quality, training methods, performance testing, transparency (including labelling considerations), and what should be included within the scope of a PCCP. Even following approval, manufacturers should engage in meaningful post-market performance monitoring, and should consider including a description of the processes and risk mitigations in place to respond to potential changes in the inputs to the ML model, and of how changes to the ML outputs are handled by compatible products. Manufacturers should also provide copies of the directions and instructions for use for the device, which must comply with the labelling requirements in sections 21-23 of the MDR.

Per Health Canada, a PCCP is the documentation that characterizes a device, its bounds, the intended changes to the ML system, and the protocol for change management.20 Detailed guidance on PCCPs can be found in Health Canada's ‘Pre-market guidance for machine learning-enabled medical devices'. PCCPs should be risk-based and informed by evidence, with the aim of substantiating that the device will continue to operate within its intended use even as it evolves over time.
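To picture what that documentation covers, here is a minimal, hypothetical sketch (the field names, device, and thresholds are invented for illustration and are not a Health Canada template) expressing the elements described above (the device, its bounds, the intended changes to the ML system, and the change management protocol) as a structured record:

```python
# Illustrative sketch only: the *kinds* of elements a PCCP documents,
# expressed as a structured record. All names and values are hypothetical.
pccp = {
    "device": "hypothetical sepsis-risk MLMD",
    "bounds": {
        "intended_use": "adult inpatient sepsis risk scoring",
        "inputs": ["vital signs", "lab results"],   # input types stay fixed
        "population": "adults admitted to hospital",
    },
    "intended_changes": [
        {
            "change": "periodic retraining on newly collected site data",
            "protocol": {
                "data_management": "curated, labelled, representative sampling",
                "validation": "sensitivity/specificity on a held-out test set",
                "acceptance": "no subgroup performance drop beyond a set margin",
                "rollback": "revert to the last validated model on failure",
            },
        },
    ],
}

# A reviewer-facing summary of the change management protocol.
print(pccp["intended_changes"][0]["protocol"]["acceptance"])
```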

Health Canada's guidance also notes that data used by manufacturers should adequately represent Canada's population and clinical practice, with regard to biological differences across sexes and skin pigmentation, for example.21 Similar to the FDA, Health Canada has also published a guidance document on specific clinical evidence requirements for medical devices, which provides information on when clinical data and evidence is required and how to appropriately compare devices.22 A guidance document on ‘Pre-market Requirements for Medical Device Cybersecurity' is also available, advocating for early consideration around cybersecurity in the product life cycle, device-specific risk management, and robust validation testing procedures.23

Global Perspectives

The FDA, Health Canada, the MHRA, and the IMDRF have also issued joint guidance in this area. While documents like the IMDRF Quality Management System for Software as a Medical Device framework24 are not authoritative regulations, they provide harmonized quality management principles that regulators can adopt in a way that fits their own regulatory frameworks. There is also a clear willingness for jurisdictions to learn from each other and adopt globally recognized best practices and principles.

The FDA, Health Canada, and the MHRA have jointly identified ten guiding principles with the IMDRF to inform the development of Good Machine Learning Practice (“GMLP”) and address the unique considerations of iterative MLMDs, which should be considered early during the product life cycle.25 Building upon the GMLP principles, the FDA, Health Canada, and MHRA released additional documents on guiding principles for PCCPs26 and promoting appropriate transparency for MLMDs.27

While PCCPs may be developed and implemented in various ways across jurisdictions, one of the key objectives of the guiding principles for PCCPs is to provide foundational considerations that highlight characteristics of robust PCCPs, with a specific emphasis on encouraging international harmonization.28

In this context, “transparency” describes the degree to which appropriate information about an MLMD is clearly communicated to relevant audiences. Effective transparency relies on ensuring that information that can impact patient outcomes is communicated, and communicated in a way that is context-appropriate and grounded in a holistic understanding of users and environments. Taking a human-centered design approach that considers information needs throughout each stage of the product lifecycle is recommended, and timely notifications and software updates should be issued when limitations are discovered.

Providers seeking to market MLMDs in the EU will be subject to the EU AI Act, and will find familiar principles from the American and Canadian regimes. MLMDs fall within the highest risk classification for permitted uses of AI (“high-risk” AI systems), requiring compliance with the obligations outlined in Article 8. The Act likewise emphasizes implementing appropriate risk management systems that can adequately identify, evaluate, and mitigate the reasonably foreseeable risks of MLMDs to health, safety, and privacy, in addition to mandating that providers implement an AI quality management system to ensure compliance.29 Data governance obligations also extend beyond patient data to AI-specific requirements for training, validation, and testing data sets.30 Providers from outside the EU also need to appoint an EU-authorized representative.

Considerations

Healthcare providers who are uncertain about how AI works may be reluctant to trust it and unwilling to adopt its use, leading to underutilization of potentially beneficial technology.31 At the same time, the risk of automation bias should not be ignored: the tendency for humans to be overconfident in machine-based recommendations, relying reflexively on an AI tool without independent deliberation and decision making.32 Some experts justifiably worry that physicians may even feel compelled to rely on AI against their professional judgment if AI use becomes the standard of care and not using AI results in liability.33 Locking in AI tools to prevent them from learning iteratively helps to maximize safety and effectiveness guarantees following market entrance and to reduce oversight costs, but this approach also undercuts the escalating benefits to public health and cost containment potentially offered by ML software.34

The following considerations should be kept front of mind during the regulatory approval process to ensure sufficient systems and controls are in place for AI-enabled medical devices and MLMD:

Black Box Dilemma

Software's limited capacity to justify to an outside observer (even to an AI's own developers35) how and why it reached a decision has been referred to as AI's “black box” dilemma, prompting calls for increased “explainability”, or interpretability, in software design.36 Explainability is an aspect of transparency, and was defined in the combined Transparency Guiding Principles for MLMDs issued by the FDA, Health Canada, and the MHRA as the degree to which an output, result, or basis for a decision or action can be logically explained to a person.37 Being able to understand a model's decision-making is imperative to ensuring that an adaptive algorithm does not begin learning from, and incorporating, novel faulty assumptions into subsequent diagnoses.

Even explainable AI, however, may not resolve these issues, as “explanations” often provide a proxy for how AI works rather than a direct understanding, and certain tools may be safe and effective despite not being explainable.38 Not only does this introduce potential uncertainty into AI's outcomes, but reliance on this information may create challenges when apportioning liability among health care professionals, hospitals, and software manufacturers in the event of negligence arising from reliance on a faulty output.
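To make the “proxy” point concrete, the following is a minimal sketch, assuming an entirely hypothetical model and synthetic data, of a LIME-style local surrogate explanation: a simple linear model is fitted to a black-box classifier's behaviour around a single record, and its coefficients are read as the “explanation”. The surrogate only approximates the black box locally; it does not expose the model's actual internal reasoning.

```python
# Illustrative sketch only: a toy, LIME-style local surrogate explanation.
# The model, data, and neighbourhood scale are hypothetical choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A stand-in "black box": a random forest trained on synthetic data.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Explain one prediction by fitting a simple linear model to the black box's
# behaviour in a small neighbourhood of that single record.
x0 = X[0]
rng = np.random.default_rng(0)
neighbours = x0 + rng.normal(scale=0.3, size=(500, x0.size))
p = black_box.predict_proba(neighbours)[:, 1]          # black-box outputs
surrogate = Ridge(alpha=1.0).fit(neighbours - x0, p)   # local linear proxy

# The coefficients are the "explanation": a proxy for the model's behaviour
# near this record, not a window into its internal reasoning.
for i, w in enumerate(surrogate.coef_):
    print(f"feature {i}: local weight {w:+.3f}")
```

Because the surrogate is only locally faithful, two nearby records can yield different “explanations” from the same underlying model.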

Algorithmic and Data Bias

The term “algorithmic bias” has been defined as an unwarranted skewing of outputs stemming from a problem in the algorithmic design.39 Developers have seen positive feedback loops whereby an adaptive ML application becomes increasingly optimized for the data it is exposed to at greater frequencies.40 If the training datasets used by AI programmers systematically under-represent particular groups, such as BIPOC individuals and women, then the data the algorithm is built upon is skewed accordingly.41

Unfortunately, despite innovators' good intentions, many accessible data sets currently used to train AI applications in healthcare are relatively racially, geographically, and/or socio-economically homogeneous, and therefore fail to adequately represent many vulnerable and marginalized patient populations.42 For example, MIMIC-III is a large, public database in the US that has been used for the development of over 1,300 applications in adult critical care.43 However, this data is from a large tertiary care hospital in Boston, MA; as such, an AI application derived from MIMIC-III data could be biased in favour of socioeconomically advantaged individuals, and against patient subpopulations that lack access to such care (especially in the US).

These issues have not been ignored by lawmakers. The federal government's Directive on Automated Decision-Making (“ADM Directive”) indicates that Canadian regulatory frameworks will likely require that any health-focused AI technology ensure that all data are tested for bias and non-discrimination, with human oversight, before care decisions are made.44 Certain risk-mitigating requirements within the ADM Directive are proportionally targeted to the varying impact of the automated decision, one of which is utilizing adequately representative training data sets. However, solely validating training data is insufficient to mitigate the risk of algorithmic bias, as ML software can itself “learn” to be biased over time.
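As a minimal sketch of what an output-level bias audit might look like (the groups, features, and distribution shift below are entirely synthetic and hypothetical), a model trained on data that under-represents one group can be checked for per-group performance gaps on fresh data, a gap that validating the training data alone would not reveal:

```python
# Illustrative sketch only: auditing a model's *outputs* per subgroup.
# The groups, features, and distribution shift are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)

def simulate(n, shift):
    """Synthetic patients: binary condition plus a group-specific feature shift."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 4)) + y[:, None] + shift
    return X, y

# Training data under-represents group B (10% of records), and group B's
# measurements are systematically shifted relative to group A's.
Xa, ya = simulate(9000, shift=0.0)
Xb, yb = simulate(1000, shift=0.8)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Output-level audit on fresh data: sensitivity and specificity per group.
for name, shift in [("group A", 0.0), ("group B", 0.8)]:
    Xt, yt = simulate(5000, shift)
    pred = model.predict(Xt)
    sens = recall_score(yt, pred)                 # true positive rate
    spec = recall_score(yt, pred, pos_label=0)    # true negative rate
    print(f"{name}: sensitivity={sens:.3f}, specificity={spec:.3f}")
```

A disparity like the one this audit surfaces would be invisible to checks that look only at the composition of the training set.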

Privacy Concerns and Other Applicable Legislation

Jurisdiction over privacy, in the health context and more broadly, is shared between the provincial and federal governments. The Personal Information Protection and Electronic Documents Act (“PIPEDA”) applies to privacy protections in the commercial medical device context. Provincial privacy laws and health-specific privacy laws also govern information about an individual's health or information associated with the provision of health care to an individual. Moreover, adequate consideration must be given to international and interprovincial data sharing requirements, and to limiting the risk of AI tools reidentifying previously deidentified data.45 Especially in the context of adaptive MLMDs, patient autonomy also needs to be safeguarded by expressly requesting free and informed consent for the collection of patients' health information for purposes beyond strictly diagnostic decision making (i.e., for training algorithms), while ensuring that patients who decline such additional uses are not denied access to the clinical tools.
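The reidentification risk is easy to underestimate. As a minimal, hypothetical sketch (all records below are fabricated), a handful of quasi-identifiers left in a “deidentified” data set can be enough to restore identities through a simple linkage join against a public source:

```python
# Illustrative sketch only: a toy linkage attack on "deidentified" records.
# All records are fabricated; the quasi-identifiers are hypothetical choices.
import pandas as pd

deidentified = pd.DataFrame({
    "postal_prefix": ["M5V", "K1A", "H2X"],
    "birth_year":    [1984, 1990, 1984],
    "sex":           ["F", "M", "F"],
    "diagnosis":     ["diabetes", "asthma", "hypertension"],
})

public_roster = pd.DataFrame({
    "name":          ["A. Patel", "B. Tremblay"],
    "postal_prefix": ["M5V", "H2X"],
    "birth_year":    [1984, 1984],
    "sex":           ["F", "F"],
})

# Where the quasi-identifier combination is unique, the join restores identity.
relinked = deidentified.merge(
    public_roster, on=["postal_prefix", "birth_year", "sex"]
)
print(relinked[["name", "diagnosis"]])
```

Mitigations such as aggregation, suppression of rare quasi-identifier combinations, and formal privacy techniques are aimed precisely at this kind of linkage.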

The federal government previously introduced Bill C-27, which included the proposed Artificial Intelligence and Data Act (“AIDA”) to regulate international and interprovincial trade and commerce in AI systems. However, AIDA died on the Order Paper when Parliament was prorogued on January 6, 2025. If AIDA had received royal assent in its proposed form, it would have established common requirements for the design, development, and use of AI systems, including measures to mitigate risks of harm and biased output. Note, however, that s. 3(1) of AIDA indicated the Act would not have applied to a “government institution” as defined in section 3 of the Privacy Act, which includes the Department of Health and the Public Health Agency of Canada.

In Ontario, the Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024 recently received royal assent, although a coming-into-force date has not yet been set. This legislation enacts the Enhancing Digital Security and Trust Act, 2024 and amends the Freedom of Information and Protection of Privacy Act. The Enhancing Digital Security and Trust Act, 2024 includes significant new obligations regarding the use of AI by prescribed public sector entities, which could include hospitals and provincial health agencies. Public sector entities identified in yet-to-be-released regulations will be required to do the following with respect to the use of artificial intelligence systems:

  • provide information to the public about their use of such systems;
  • develop and implement accountability frameworks and take steps respecting risk management;
  • in certain circumstances, disclose information and ensure an individual provides oversight of the use of an artificial intelligence system; and
  • comply with technical standards respecting artificial intelligence systems.

Each of these requirements will be prescribed in more detail in yet-to-be-released regulations. Institutions considering the implementation of AI-enabled medical devices or MLMDs will want to keep an eye on the coming into force of this legislation and the publication of the corresponding regulations.

Conclusion

The potential for AI-enabled medical devices and MLMDs to revolutionize healthcare is well known, but it is not without risk. Developers and manufacturers of medical devices who are exploring this technology should expect regulators to proceed with caution, and regulatory requirements to continue to evolve as we learn more about how these devices are deployed in the healthcare context.

As has long been the case, we expect the FDA and Health Canada will continue to communicate and collaborate, and demonstrate a general alignment in regulatory approach. That said, the FDA appears to have taken the lead in this space, and we will continue to monitor Health Canada's next steps.

Footnotes

1. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices

2. https://www.fda.gov/media/109618/download

3. Summary: https://www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/premarket-notification-510k; full: https://www.fda.gov/media/82395/download

4. S. 513(i)(1)(A), FD&C Act.

5. https://www.fda.gov/media/99812/download

6. https://www.thelancet.com/journals/landig/article/PIIS2589-7500(23)00126-7/fulltext

7. The paper found that from 2019-2021, more than a third of AI/ML-based devices originated from predicate non-AI/ML counterparts.

8. https://www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/de-novo-classification-request

9. https://www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/de-novo-classification-request

10. https://www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/premarket-approval-pma

11. https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance/guidance-documents-medical-devices-and-radiation-emitting-products

12. https://www.fda.gov/news-events/press-announcements/fda-releases-artificial-intelligencemachine-learning-action-plan

13. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/marketing-submission-recommendations-predetermined-change-control-plan-artificial-intelligence

14. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/artificial-intelligence-enabled-device-software-functions-lifecycle-management-and-marketing

15. Section 2, MDR.

16. https://laws-lois.justice.gc.ca/eng/regulations/sor-98-282/page-12.html#h-1022100

17. https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/application-information/guidance-documents/pre-market-guidance-machine-learning-enabled-medical-devices.html

18. https://www.canada.ca/content/dam/hc-sc/documents/services/drugs-health-products/medical-devices/application-information/guidance-documents/pre-market-guidance-machine-learning-enabled-medical-devices/pre-market-guidance-machine-learning-enabled-medical-devices.pdf

19. https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/good-machine-learning-practice-medical-device-development.html

20. https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/application-information/guidance-documents/pre-market-guidance-machine-learning-enabled-medical-devices.html

21. https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/application-information/guidance-documents/pre-market-guidance-machine-learning-enabled-medical-devices.html

22. https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/application-information/guidance-documents/clinical-evidence-requirements-medical-devices.html

23. https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/application-information/guidance-documents/cybersecurity/document.html#a2.1

24. https://www.imdrf.org/sites/default/files/docs/imdrf/final/technical/imdrf-tech-151002-samd-qms.pdf

25. https://www.fda.gov/media/153486/download

26. https://www.fda.gov/media/173206/download?attachment

27. https://www.fda.gov/media/179269/download?attachment

28. https://www.fda.gov/medical-devices/software-medical-device-samd/predetermined-change-control-plans-machine-learning-enabled-medical-devices-guiding-principles

29. https://www.nature.com/articles/s41746-024-01232-3

30. AI Act, Article 10.

31. Risk Analytica, “The Case for Investing in Patient Safety in Canada” (Canadian Patient Safety Institute, 2017).

32. David Lyell & Enrico Coiera, “Automation Bias and Verification Complexity: A Systematic Review” (2017) 24:2 Journal of the American Medical Informatics Association 423.

33. A Michael Froomkin, Ian Kerr, & Joelle Pineau. “When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning” (2019) 61 Ariz L Rev 3.

34. Walter G Johnson, “Flexible regulation for dynamic products? The case of applying principles-based regulation to medical products using artificial intelligence” (2022) 14:2 Law, Innovation and Technology 210.

35. Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Boston: Harvard UP, 2015).

36. Tim Miller, “Explanation in Artificial Intelligence: Insights from the Social Sciences” (2019) 267 Artificial Intelligence 1.

37. https://www.fda.gov/media/179269/download?attachment

38. Boris Babic et al, “Beware Explanations from AI in Health Care” (2021) 373(6552) Science 284.

39. Bradley Henderson, Colleen M Flood & Teresa Scassa, “Artificial Intelligence in Canadian Healthcare: Will the Law Protect Us from Algorithmic Bias Resulting in Discrimination?” (2022) 19:2 CJLT 475.

40. James Zou & Londa Schiebinger, “AI Can Be Sexist and Racist — It's Time to Make It Fair” (2018) 559 Nature 324.

41. Ziad Obermeyer et al, “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations” (2019) 366 Science 447.

42. Bradley Henderson, Colleen M Flood & Teresa Scassa, “Artificial Intelligence in Canadian Healthcare: Will the Law Protect Us from Algorithmic Bias Resulting in Discrimination?” (2022) 19:2 CJLT 475.

43. Sujay Nagaraj et al, “From Clinic to Computer and Back Again: Practical Considerations When Designing and Implementing Machine Learning Solutions for Pediatrics” (2020) 6 Current Treatment Options in Pediatrics 336.

44. Government of Canada, “Directive on Automated Decision-Making” (28 June 2021) [ADM Directive].

45. I Glenn Cohen & Michelle M Mello, “HIPAA and Protecting Health Information in the 21st Century” (2018) 320:3 JAMA 231.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
