
The HIPAA Journal is the leading provider of HIPAA training, news, regulatory updates, and independent compliance advice.

When AI Technology and HIPAA Collide

Minefields HIPAA Covered Entities and Business Associates Should Avoid

HIPAA Covered Entities beware! Your vendors are probably implementing artificial intelligence (“AI”) technology within their service offerings.

Today, an all-too-common scenario involves an email message or telephone call from your trusted third-party vendor indicating that they are going to integrate AI technology into their service offerings that will involve the use of your patients’ Protected Health Information (PHI). They claim that by using AI technology, they can provide their deliverables in less time, generate useful insights more rapidly, interpret medical imaging, improve the delivery of diagnosis and treatment, or perform accurate predictive analytics.

However, lurking surreptitiously behind the potential benefits of using PHI in AI technology lies a murky mix of risks that could negatively impact you, your vendors, and even your patients, especially when HIPAA compliance and patient PHI are involved. So how should a Covered Entity respond to its Business Associates’ use of patient PHI in AI technology?

AI Technology

When assessing a Covered Entity’s or Business Associate’s use of PHI in AI technology, it is helpful to have a basic understanding of what is meant by AI technology. In layman’s terms, AI is a machine’s ability to perform tasks that are typically performed by humans, or which require human intelligence.

One goal of AI is to create applications that are self-reliant and can think and act like humans, such as an application that performs tasks through learning and problem solving. AI technology involves the use of computer algorithms and analytics to build models that can solve problems. But in order for these models to be useful, the algorithms require enormous amounts of data to learn from.

Regulating Data When Using AI Technology

When AI technology is trained on large data sets for its intended purposes, that data is likely to include personal data, health data, or even PHI. Because the AI technology is using these types of data, data privacy laws and regulations will impact the AI technology’s uses of that data. In other words, if the AI technology is using large amounts of PHI, HIPAA will apply to the PHI, whether the AI technology is used by a Covered Entity or by a Covered Entity’s Business Associates.

Consider the following scenario: a Covered Entity hires a data aggregator as its Business Associate to input large amounts of PHI from providers’ electronic health records in order to train and use AI technology to identify diverse candidates for clinical trials. HIPAA will continue to apply to the PHI that is ingested by the AI technology because PHI is being collected from providers and is being used by the Business Associate’s AI technology to provide services on behalf of the Covered Entity.

Issues When Using PHI with AI Technology

The most common issues to be aware of when using PHI in AI technology arise from the application of HIPAA’s rules to the use of PHI with regard to the AI technology. Some of the issues may seem obvious – and that’s partly the point. The application of HIPAA’s rules will not vary, but there are many uses of PHI in AI technology, so the challenge is to understand how HIPAA’s rules apply to the various uses of PHI by the AI technology. Here are a few examples of how HIPAA’s rules can impact the uses of PHI in AI technology:

Authorization to Use PHI in AI Technology

The first issue to address is whether a Covered Entity or its Business Associates have the appropriate authority to use PHI in AI technology. Under the HIPAA Privacy Rule, there are explicit requirements regarding the access, collection, use, and disclosure of PHI.

Is the use for Treatment, Payment, or Healthcare Operations (“TPO”)? Research? Marketing? Under the direction of a valid HIPAA authorization from the patients? Does it fall under any of the other approved uses under HIPAA that do not require an authorization (public interest, law enforcement, etc.)? These uses are all governed by the HIPAA Privacy Rule.

Where the use of PHI is not for TPO or another use permitted without authorization, a valid HIPAA authorization must be obtained, such as for research, marketing, or any other use of the PHI not otherwise permitted by the Privacy Rule.

Training AI technology may not be considered TPO, so if a Covered Entity or its Business Associates are interested in using large amounts of PHI for training purposes, they will first need to obtain an appropriate HIPAA authorization to do so from each patient. However, obtaining HIPAA authorizations from large numbers of individuals will be challenging, and this process could hinder the ability to input large amounts of PHI in AI technology.

Data Minimization and Purpose Limitation

An important limitation on the use of PHI under the HIPAA Privacy Rule is that a Covered Entity and its Business Associates must only use the minimum amount of PHI necessary for the intended purpose. There are a few exceptions to this rule, such as when one Covered Entity is sharing the PHI of a patient with another Covered Entity for treatment purposes, or when disclosing PHI directly to the patient. However, in most instances, only the minimum necessary amount of PHI may be used for the intended purpose.

So how would a Covered Entity or its Business Associates address this requirement when using AI technology? If large amounts of PHI must be ingested by AI technology to train it, how much PHI is enough? Who would decide how much is enough? And is the PHI being used for its intended purposes? Is someone going to oversee the use of PHI to ensure that the use does not violate the HIPAA Privacy Rule?

Another major concern about using PHI in AI technology is the ease with which the AI technology can access and use more data than is necessary for the intended purposes, i.e., data overreach. If a Covered Entity’s Business Associates are going to use large amounts of PHI to train AI technology, it will be challenging to ensure that the Minimum Necessary Standard and Purpose Limitation are met while safeguarding against data overreach.
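One practical way to reduce data overreach is to apply a minimum-necessary filter before any PHI reaches an AI pipeline. The sketch below illustrates this idea; the field names, purposes, and allow-list are hypothetical assumptions, not requirements drawn from HIPAA itself.

```python
# Minimal sketch of a "minimum necessary" filter applied before PHI reaches
# an AI pipeline. Field names and purposes are illustrative assumptions.

# Allow-list mapping each approved purpose to the PHI fields it may use.
ALLOWED_FIELDS = {
    "clinical_trial_matching": {"age", "diagnosis_codes", "zip3"},
    "treatment_support": {"age", "diagnosis_codes", "medications"},
}

def minimum_necessary(record: dict, purpose: str) -> dict:
    """Return only the fields approved for the stated purpose."""
    if purpose not in ALLOWED_FIELDS:
        raise ValueError(f"Unapproved purpose: {purpose}")
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "name": "Jane Doe",        # direct identifier, not needed for matching
    "age": 54,
    "diagnosis_codes": ["E11.9"],
    "zip3": "191",
    "ssn": "000-00-0000",      # direct identifier, never needed here
}
filtered = minimum_necessary(record, "clinical_trial_matching")
# Direct identifiers are stripped before data is handed to the model.
```

A filter like this also gives the oversight questions above a concrete answer: someone must own and review the allow-list, which makes the "who decides how much is enough" decision explicit and auditable.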

Role-Based Access to PHI When Using AI Technology

Under the HIPAA Security Rule, only those employees who have a need to access and use PHI as part of their roles should be given access to it. Thus, a Covered Entity and its Business Associates are required to have role-based access controls in place to ensure that only those employees who need to have access to PHI are able to do so.

This poses another challenge when working with AI technology as only the employees who have met the access control requirements should be working with PHI. Will this requirement change which roles are able to work with PHI and AI technology?

For smaller entities, it may be difficult to assign roles and rights to access and use PHI because employees may be required to perform several different functions within their job duties. For example, a start-up company that has lean engineering and data science teams may have employees that have multiple job functions that require access to various forms of data including both PHI and de-identified data.

Because of the smaller number of employees, it will be challenging to grant PHI access roles to employees who work with AI technology but who typically do not have access to PHI, while also keeping that access separate from access to de-identified data. This separation matters because employees who work with de-identified data should not also work with PHI, and vice versa, in order to avoid instances where de-identified data could be re-identified by an employee who also works with PHI.
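The separation-of-duties rule described above can be enforced in software rather than left to policy alone. The sketch below shows one way to do that; the role names are hypothetical examples, not terms defined by the Security Rule.

```python
# Minimal sketch of role-based access control that enforces separation
# between PHI-facing roles and de-identified-data roles, so no single
# employee holds both. Role names are illustrative assumptions.

PHI_ROLES = {"clinical_analyst", "phi_engineer"}
DEID_ROLES = {"ml_researcher", "deid_data_scientist"}

def assign_role(current_roles: set, new_role: str) -> set:
    """Add a role, refusing any combination that mixes PHI and
    de-identified data access for the same employee."""
    proposed = current_roles | {new_role}
    if proposed & PHI_ROLES and proposed & DEID_ROLES:
        raise PermissionError(
            "Employee cannot hold both PHI and de-identified data roles"
        )
    return proposed

def can_access_phi(roles: set) -> bool:
    """True only if the employee holds at least one PHI-facing role."""
    return bool(roles & PHI_ROLES)

roles = assign_role(set(), "ml_researcher")
# assign_role(roles, "phi_engineer") would raise PermissionError,
# blocking the re-identification risk described above.
```

For a small team, a check like this turns an informal staffing constraint into a rule the access-provisioning system can enforce automatically.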

Data Integrity and Confidentiality

Another HIPAA Security Rule requirement is that a Covered Entity and its Business Associates must ensure the integrity, confidentiality, and availability of PHI. Therefore, when using PHI in AI technology, strict security measures must be in place to adequately protect the integrity, confidentiality, and availability of the PHI. Such measures should include at a minimum: access controls, encryption, firewalls, and continuous monitoring and oversight of the use of PHI by the AI technology to prevent unauthorized access to or use of the PHI.

However, implementing and verifying appropriate security controls becomes more difficult when the AI technology is pulling in and using data from multiple sources, and when the AI technology is being accessed by multiple parties.

Practical Steps to Avoid HIPAA Non-Compliance

With the risks and challenges of using PHI in AI technology in mind, here are several suggestions that Covered Entities and Business Associates can follow to help minimize the risk of non-compliance with HIPAA’s rules when using PHI in AI technology:

Develop and Implement Policies and Procedures

Determine whether existing policies and procedures regarding the collection, handling, distribution, and use of PHI adequately cover the uses of PHI with AI technology. If not, then new policies should be developed and implemented that specifically address approved use cases of PHI in AI technology. For example, implement a policy that restricts employee use of PHI in unapproved AI technology for personal use, while granting the limited use of PHI in approved AI technology as part of the Covered Entity’s or Business Associate’s services.

  • AI Governance – Determine whether an existing privacy and security governance team can adequately address the uses of PHI in AI technology. If not, consider creating a separate AI Governance team to provide continual oversight over the uses of AI technology.
  • Update Contracts – Review and update contract templates and Business Associate Agreement templates and include additional language to address the risks associated with using PHI in AI technology.
  • Training and Awareness – Update training to include uses of PHI in AI technology and the risks of HIPAA non-compliance when using AI technology.
  • Code of Conduct – Develop a code of conduct with respect to the uses of PHI in AI technology and share the code of conduct with other Covered Entities and Business Associates whose data will be used by the AI technology.
  • Transparency – Covered Entities should include the uses of PHI in their Notice of Privacy Practices, and Business Associates should develop materials to share with Covered Entities that outline their uses of PHI in AI technology.
  • Risk Assessments – Conduct HIPAA risk assessments to identify risks to the integrity, confidentiality, and availability of PHI when used in AI technology. Assessments should be conducted regularly, especially when there are changes to existing processes or technology or with the development of new processes or technology.
  • Expert Support – Seek out the support of experienced data privacy and security professionals to help understand the risks of using PHI in AI technology, and implement best practices to minimize the risk of HIPAA non-compliance when using PHI in AI technology.

With the development and use of AI technology in healthcare, it is hard to imagine Covered Entities and Business Associates not adopting and using AI technology because of the potential benefits. However, there are several risks to HIPAA compliance that can impact the use of PHI in AI technology. Establishing a strong set of policies, protocols, governance, and monitoring processes will help Covered Entities and Business Associates safely minimize the risks involved with using PHI in AI technology.

Update for January 2023: NIST AI Risk Management Framework

The NIST AI Risk Management Framework (AI RMF) provides healthcare organizations with a structured way to evaluate and manage the risks of artificial intelligence while aligning with HIPAA’s privacy and security standards. Unlike HIPAA, which sets baseline requirements for safeguarding Protected Health Information (PHI), the NIST framework focuses on principles such as validity, reliability, safety, security, explainability, privacy, and fairness in AI systems. For covered entities and business associates, this means that when deploying AI tools to process or analyze healthcare data, the AI RMF can be used alongside HIPAA to ensure not only regulatory compliance but also trustworthy and ethical AI practices. The framework is available as an overview on the NIST website.

Update for March 2025: HHS OCR Proposed Security Rule Update Has Impact on AI and PHI

On January 6, 2025, the HHS Office for Civil Rights (OCR) proposed the first major update to the HIPAA Security Rule in 20 years, citing the rise in ransomware and the need for stronger cybersecurity. For organizations deploying artificial intelligence in healthcare, these changes are especially significant, as they remove the distinction between required and addressable safeguards and introduce stricter expectations for risk management, encryption, and resilience. AI systems that process Protected Health Information (PHI) will be subject to these enhanced standards, meaning vendors and covered entities must reassess their security controls and ensure compliance before integrating AI into clinical or administrative workflows.

Author: Todd L. Mayover, CIPP E/US, is an experienced in-house attorney, consultant and data privacy compliance expert with more than two decades of experience working on compliance, legal, privacy, and regulatory affairs issues for companies in the digital health, healthcare, life sciences and pharmaceutical sectors. At Privacy Aviator LLC, Todd provides guidance to early-stage and multinational companies on complex AI and data privacy compliance programs, focusing on AI, GDPR, HIPAA, HITECH, PIPL, and various other U.S. state and international AI and privacy laws and regulations. You can connect with Todd directly via LinkedIn.
