Healthcare organizations are under enormous pressure to adopt AI. The efficiency gains in clinical documentation, prior authorization, diagnostic support, and revenue cycle management are real. The vendors are persistent. The board wants to see an AI strategy.
What often gets skipped in that pressure is a careful analysis of what HIPAA actually requires when AI enters the picture. That gap creates real liability — not theoretical future risk, but the kind of violation that results in OCR investigations, settlement agreements, and corrective action plans.
This is not an argument against AI adoption. It is an argument for doing it correctly.
How AI Changes Your HIPAA Surface Area
HIPAA's requirements have not changed because of AI. The Privacy Rule, the Security Rule, and the Breach Notification Rule still apply exactly as they did before you started using AI tools. What has changed is the number of ways PHI can flow through your environment — and many of those new flows are happening without adequate oversight.
The core problem is this: AI systems that handle PHI are subject to HIPAA, and the people deploying those systems often do not know that PHI is being processed until after something goes wrong.
A physician using a commercial AI transcription tool to document patient encounters is routing PHI through that vendor's infrastructure. A billing team using an LLM-based tool to help draft appeals letters is feeding claims data — which is PHI — into that system. A care management team using a commercial AI tool to summarize patient records is sending those records to a third party. In each case, the question is the same: is there a Business Associate Agreement in place, and does that BAA adequately cover AI processing?
In most organizations I assess, the answer is no — either the BAA does not exist or it predates the AI functionality and does not address how PHI is used in model training or inference.
PHI in AI Prompts: The Underappreciated Risk
One of the most common HIPAA risks I see in healthcare organizations is the uncontrolled use of PHI in AI prompts.
Employees who want to use AI tools to help with their work will use them. If you have not provided sanctioned AI tools with appropriate PHI protections, your staff will use unsanctioned ones. That might mean a nurse copying clinical notes into a commercial chatbot to generate a care summary. It might mean an administrator feeding patient account information into an AI tool to draft a collections letter.
Each of those actions is a potential HIPAA violation. PHI entered into a commercial AI service that does not have a BAA with your organization has been disclosed to an unauthorized recipient — regardless of whether that data was ever actually viewed by a human at the vendor.
The behavioral root cause is not malice. It is efficiency pressure combined with inadequate tooling. The compliance answer has two parts: provide sanctioned AI tools that are properly governed, and train staff on what PHI is and why it cannot be used in unsanctioned tools.
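To make the tooling half of that answer concrete, here is a minimal sketch of one guardrail a sanctioned gateway might apply: scan outbound prompts for obvious PHI markers and refuse anything bound for a tool without a BAA. The patterns and function names are hypothetical, and a handful of regexes is nowhere near a real PHI detector; the sketch only shows where the control point sits.

```python
import re

# Hypothetical, deliberately incomplete patterns for obvious PHI markers.
# A real gateway would use a dedicated PHI-detection service, not a short
# regex list; this only illustrates where the control point sits.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of PHI patterns found in the prompt (empty if none)."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(prompt)]


def gate_prompt(prompt: str, tool_has_baa: bool) -> str:
    """Refuse to forward prompts that appear to contain PHI to a non-BAA tool."""
    findings = screen_prompt(prompt)
    if findings and not tool_has_baa:
        # Block and explain rather than silently redact, so staff learn the rule.
        raise PermissionError(
            f"Prompt blocked: possible PHI detected ({', '.join(findings)}); "
            "this tool is not covered by a BAA."
        )
    return prompt  # cleared to forward to the sanctioned vendor API


try:
    gate_prompt("Care summary for patient, SSN 123-45-6789", tool_has_baa=False)
except PermissionError as err:
    print(err)
```

In practice a governed gateway would usually pair a check like this with redaction or routing to a BAA-covered endpoint rather than a hard block, so the efficiency pressure has a sanctioned outlet.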
The Security Rule's workforce training requirements apply here. AI use policies and regular training on those policies are not optional. They are required safeguards.
Business Associate Agreement Requirements for AI Vendors
Under HIPAA, any vendor that creates, receives, maintains, or transmits PHI on your behalf is a Business Associate and requires a BAA. This applies to AI vendors whose tools touch PHI.
That sounds straightforward, but it creates complications in the AI context.
First, many commercial AI vendors will not sign BAAs. Consumer-grade AI tools — including many popular LLM-based tools — explicitly state in their terms of service that they are not for use with PHI. If your staff is using those tools with patient data, you have a compliance problem regardless of whether you are aware of it.
Second, even vendors willing to sign BAAs may offer BAAs that do not adequately address AI-specific risks. A BAA written before the vendor offered AI functionality may not address whether PHI is used to train models, how long data is retained for inference, whether output data is logged, or what happens to PHI in the event of a model incident. You need to review BAA language specifically for these provisions.
Third, the question of whether a vendor qualifies as a Business Associate is not always obvious with AI. If a vendor processes PHI to improve their own models, that use is likely beyond the scope of your service relationship and requires careful evaluation.
The practical guidance: for any AI tool that might touch PHI, conduct a vendor assessment before deployment. Obtain a BAA if required. If the vendor will not sign one, the tool cannot be used with PHI. Document the outcome either way.
The Minimum Necessary Standard and AI
HIPAA's minimum necessary standard requires covered entities to make reasonable efforts to limit uses and disclosures of PHI to the minimum necessary to accomplish the intended purpose.
AI tools often create tension with this standard because they work best with more context. An AI clinical documentation tool performs better when it has access to the patient's full record. An AI coding tool is more accurate when it can see complete encounter notes.
The compliance question is whether you are providing more PHI than is necessary for the specific AI function. This requires a deliberate analysis for each AI use case: what PHI does this AI function actually need, and are we providing more than that?
Document this analysis. If OCR reviews an incident involving your AI tools, the ability to show that you assessed and addressed the minimum necessary standard will matter.
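One way to make that analysis operational, and self-documenting, is to encode it as a per-use-case allowlist of record fields, so each AI function receives only the fields the analysis approved. The use-case and field names in this sketch are hypothetical; the allowlist itself becomes the documented artifact.

```python
# Hypothetical allowlists produced by a per-use-case minimum necessary
# analysis. Use-case and field names are illustrative; the allowlist
# itself doubles as the documentation described above.
MINIMUM_NECESSARY = {
    "clinical_documentation": {"encounter_notes", "problem_list", "medications"},
    "coding_assistance": {"encounter_notes", "procedure_codes"},
    "collections_letter": {"account_balance", "guarantor_name"},
}


def filter_record(record: dict, use_case: str) -> dict:
    """Pass through only the fields approved for this AI use case."""
    if use_case not in MINIMUM_NECESSARY:
        raise ValueError(f"No minimum necessary analysis on file for {use_case!r}")
    allowed = MINIMUM_NECESSARY[use_case]
    return {field: value for field, value in record.items() if field in allowed}


full_record = {
    "encounter_notes": "...",
    "procedure_codes": ["99213"],
    "ssn": "123-45-6789",  # present in the chart, never needed for coding
}

# Only the approved fields leave for the AI coding tool.
print(filter_record(full_record, "coding_assistance"))
```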
Practical Compliance Steps
Building HIPAA-compliant AI adoption in a healthcare organization requires systematic work, not ad hoc decisions. Here is a practical framework.
AI inventory and risk analysis. You cannot manage what you do not know about. Inventory all AI tools in use across your organization, including those deployed by individual departments without central IT involvement. For each tool, assess whether PHI is or could be involved. Update your risk analysis to include AI-specific risks.
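As a rough illustration, an inventory entry might capture something like the following. The schema is an assumption, not a regulatory requirement; capture whatever your risk analysis actually needs.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AIToolRecord:
    """One inventory entry; the fields are illustrative, not a mandated schema."""
    tool_name: str
    vendor: str
    owning_department: str
    phi_involved: bool        # is PHI processed, or could it be?
    baa_in_place: bool
    baa_covers_ai_use: bool   # training, retention, and output-logging provisions
    last_risk_review: date | None = None


inventory = [
    AIToolRecord(
        tool_name="Ambient transcription",  # hypothetical entry
        vendor="ExampleScribe Inc.",
        owning_department="Primary Care",
        phi_involved=True,
        baa_in_place=True,
        baa_covers_ai_use=False,  # BAA predates the AI feature: remediate
    ),
]

# Surface the gaps the BAA remediation step below is meant to close.
for tool in inventory:
    if tool.phi_involved and not (tool.baa_in_place and tool.baa_covers_ai_use):
        print(f"Remediation needed: {tool.tool_name} ({tool.vendor})")
```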
BAA remediation. For every AI vendor whose tools touch PHI, verify BAA status. Obtain BAAs where they do not exist. Review existing BAAs for AI-specific provisions and renegotiate where needed. Terminate use of tools that handle PHI where a BAA cannot be obtained.
AI use policies. Develop and enforce a policy governing AI tool use that specifically addresses PHI. The policy should define what tools are sanctioned for use with PHI, what tools may not be used with PHI under any circumstances, and the process for requesting approval of new AI tools.
Access controls and logging. HIPAA's Security Rule requires access controls and audit logs for systems containing PHI. Those requirements extend to AI systems. Ensure your AI deployments support audit logging at the level of granularity required by the Security Rule.
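As a sketch of the kind of granularity in question, each AI call that touches PHI might produce a structured entry like the one below. The field set is an assumption on my part; who made the call, with which tool, for what approved purpose, and against which records is the general shape audit controls need to capture.

```python
import json
import logging
from datetime import datetime, timezone

# Structured audit logging for AI calls that touch PHI. The field set is an
# assumption; your own risk analysis determines what your audit controls
# must capture and how long entries are retained.
audit_log = logging.getLogger("ai_phi_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_phi_audit.log"))


def log_ai_access(user_id: str, tool: str, use_case: str, patient_ids: list[str]) -> None:
    """Write one structured audit entry per AI call involving PHI."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,          # who initiated the call
        "tool": tool,                # which sanctioned AI tool
        "use_case": use_case,        # the approved purpose
        "patient_ids": patient_ids,  # which records were involved
    }))


log_ai_access("jdoe", "example-scribe", "clinical_documentation", ["MRN-0012345"])
```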
Training. Incorporate AI-specific HIPAA content into your workforce training program. Staff need to understand what PHI is, what happens when PHI is entered into an unsanctioned AI tool, and where to go if they are unsure whether a tool is approved.
Incident response procedures. Update your breach assessment and notification procedures to address AI-specific incidents. If an AI system processes PHI in an unauthorized way — including sending PHI to a vendor without a BAA — that is a potential reportable breach. Your incident response team needs to know how to assess those situations.
The Regulatory Environment Is Tightening
HHS has been active in signaling enforcement priorities around AI and data privacy in healthcare. Organizations that build a strong compliance foundation now — proper BAA coverage, documented risk analysis, enforced use policies — are better positioned regardless of how guidance develops.
The organizations that will face OCR scrutiny are those that treated AI adoption as a technology project and forgot it was also a compliance project.
At Ayliea, our healthcare security assessments specifically evaluate AI-related HIPAA compliance gaps alongside your broader security posture. If you are not certain your AI adoption is compliant, a structured assessment is a more efficient path than finding out through an incident.
