AI in Telehealth: Do HIPAA Rules Still Apply?
Telehealth platforms increasingly use AI for scheduling, transcription, and clinical support. But remote care with AI assistance creates new HIPAA compliance challenges that many providers don't fully understand.
HIPAA applies everywhere PHI goes
Location doesn't change HIPAA obligations. Whether care happens in-person, over video, or through AI-assisted platforms, the same rules apply to Protected Health Information (PHI).
The key question isn't where care happens, but who handles PHI and how it's protected during AI-enhanced telehealth encounters.
Common telehealth AI scenarios
AI medical scribes
What it does: Transcribes and summarizes patient-provider conversations during video visits.
HIPAA implications: The AI scribe vendor becomes a Business Associate handling PHI, which requires a Business Associate Agreement (BAA) and specific security controls.
Gray areas: Real-time processing, cloud storage, model training on transcripts.
Symptom checkers and triage
What it does: Patients input symptoms, AI provides preliminary assessment or routing recommendations.
HIPAA implications: Patient-entered health information becomes PHI when collected by covered entities.
Gray areas: Direct-to-consumer vs. provider-sponsored tools, data retention policies.
Clinical decision support
What it does: AI analyzes patient data to suggest diagnoses, treatments, or care protocols during telehealth visits.
HIPAA implications: Full PHI access requires comprehensive BAAs and security controls.
Gray areas: Real-time analysis, cloud processing, integration with EHR systems.
The Business Associate challenge
Most telehealth AI vendors qualify as Business Associates under HIPAA because they create, receive, maintain, or transmit PHI on behalf of covered entities.
BAA requirements for AI vendors
- Permitted uses → Limit AI processing to providing contracted services only
- Data minimization → Access only the PHI necessary for the specific AI function (see the sketch after this list)
- Safeguards → Administrative, physical, and technical protections for PHI
- Subcontractor management → BAAs with any AI model providers or cloud services
- Breach notification → Prompt reporting of any PHI exposure or unauthorized access
- Data return/destruction → Clear procedures for PHI handling after contract termination
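To make the data-minimization requirement concrete, here is a minimal sketch in Python of stripping obvious identifiers from a visit transcript before it crosses the vendor boundary to a hypothetical AI scribe service. The field names and regex patterns are illustrative assumptions, not a complete de-identification method (HIPAA's Safe Harbor standard lists 18 identifier categories); a real deployment needs a documented minimum-necessary analysis and usually a dedicated de-identification or tokenization step.

```python
import re

# Illustrative only: these patterns are assumptions, not a complete
# de-identification scheme (Safe Harbor covers 18 identifier categories).
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def minimize_for_scribe(visit_record: dict) -> dict:
    """Return only the fields the AI scribe needs, with obvious
    identifiers scrubbed from the free-text transcript."""
    text = visit_record["transcript"]
    text = PHONE.sub("[REDACTED-PHONE]", text)
    text = EMAIL.sub("[REDACTED-EMAIL]", text)
    return {
        # Internal visit ID only -- no name, MRN, or date of birth.
        "visit_id": visit_record["visit_id"],
        "transcript": text,
    }

if __name__ == "__main__":
    record = {
        "visit_id": "v-1001",
        "patient_name": "Jane Doe",   # never leaves our environment
        "mrn": "MRN-44821",           # never leaves our environment
        "transcript": "Call me at 555-123-4567 or jane@example.com.",
    }
    print(minimize_for_scribe(record))
```

The point is not the regexes; it is that only the fields the AI function actually needs ever leave your environment, and the transfer that does happen is governed by the BAA.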
Common BAA gaps with AI
- Model training restrictions → Many vendors want to use customer data to improve AI models
- Subprocessor disclosure → AI vendors often use multiple cloud services and model providers
- Data residency → PHI may be processed across multiple geographic locations
- Audit rights → Limited visibility into AI processing and security controls
Platform-specific compliance considerations
Video conferencing with AI features
Compliance focus: Recording, transcription, and cloud storage of patient encounters.
- Ensure AI transcription vendors have BAAs
- Control where recordings and transcripts are stored
- Manage access to AI-generated summaries and notes
- Verify encryption for AI processing of video/audio data
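Much of the list above comes down to how the recording or transcription job is configured rather than to new technology. The sketch below shows the kinds of settings worth pinning in code and validating before any job runs; the setting names are hypothetical, since every vendor exposes storage, retention, and training controls differently.

```python
# Hypothetical configuration: the key names are illustrative, not a real
# vendor API. The point is to pin storage, retention, and training
# settings explicitly instead of accepting vendor defaults.
TRANSCRIPTION_JOB_SETTINGS = {
    "storage_region": "us-east-1",      # keep recordings/transcripts in-region
    "retention_days": 30,               # delete vendor-side copies on schedule
    "allow_model_training": False,      # barred unless the BAA permits it
    "encryption_in_transit": "TLS1.2+",
    "encryption_at_rest": "AES-256",
}

def validate_settings(settings: dict) -> None:
    """Fail fast if a job would violate our BAA terms."""
    if settings.get("allow_model_training"):
        raise ValueError("Model training on PHI is not permitted under our BAA")
    if settings.get("retention_days", 0) > 90:
        raise ValueError("Vendor retention exceeds the 90-day policy limit")

validate_settings(TRANSCRIPTION_JOB_SETTINGS)
print("Job settings comply with BAA policy")
```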
AI-powered patient portals
Compliance focus: Chatbots, automated responses, and patient data analysis.
- Limit AI access to necessary PHI only
- Ensure patient authentication before AI interactions
- Monitor AI responses for accuracy and appropriateness
- Maintain audit logs of AI-patient interactions
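As a minimal sketch of the authentication and audit-logging points above: verify the patient's session before any chatbot exchange, and write an audit entry for every interaction. The in-memory session store and the canned response stand in for a real identity provider and AI backend; note that the audit entry records metadata rather than the question text itself, to avoid logging PHI unnecessarily.

```python
import logging
from datetime import datetime, timezone

# Audit log of AI-patient interactions (in production this would go to a
# tamper-evident store, not stderr).
logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("ai_portal_audit")

# Assumed session store for illustration: token -> authenticated patient ID.
ACTIVE_SESSIONS = {"token-abc123": "patient-789"}

def ask_chatbot(session_token: str, question: str) -> str:
    patient_id = ACTIVE_SESSIONS.get(session_token)
    if patient_id is None:
        raise PermissionError("Patient must authenticate before AI interaction")

    # Placeholder for the actual AI call.
    answer = "Please contact your care team for personalized advice."

    audit.info(
        "ai_interaction patient=%s time=%s question_len=%d",
        patient_id,
        datetime.now(timezone.utc).isoformat(),
        len(question),   # metadata only, not the question text
    )
    return answer

print(ask_chatbot("token-abc123", "Can I take ibuprofen with my medication?"))
```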
Remote monitoring with AI
Compliance focus: Wearable data, home monitoring devices, and AI analysis.
- Treat device data as PHI when it is health-related and tied to an identifiable patient
- Secure transmission from devices to AI processing systems
- Control AI vendor access to continuous monitoring data
- Manage patient consent for AI analysis of monitoring data
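The consent and transmission points above can be enforced at the hand-off where device readings are forwarded for AI analysis. A minimal sketch, assuming a consent registry and a hypothetical HTTPS vendor endpoint (the URL and payload shape are illustrative):

```python
import json
import urllib.request

# Assumed consent registry for illustration: patient ID -> consent to AI analysis.
AI_ANALYSIS_CONSENT = {"patient-789": True, "patient-790": False}

# Hypothetical vendor endpoint; HTTPS keeps readings encrypted in transit.
ANALYSIS_ENDPOINT = "https://ai-vendor.example.com/v1/analyze"

def forward_reading(patient_id: str, reading: dict) -> None:
    """Forward a device reading for AI analysis, enforcing consent and
    transport encryption. Readings tied to an identifiable patient are PHI."""
    if not AI_ANALYSIS_CONSENT.get(patient_id, False):
        raise PermissionError("No patient consent on file for AI analysis")
    if not ANALYSIS_ENDPOINT.startswith("https://"):
        raise RuntimeError("Refusing to send PHI over an unencrypted channel")

    body = json.dumps({"patient_id": patient_id, "reading": reading}).encode()
    request = urllib.request.Request(
        ANALYSIS_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=10)  # hypothetical endpoint

if __name__ == "__main__":
    try:
        forward_reading("patient-790", {"heart_rate_bpm": 88})
    except PermissionError as err:
        print(err)   # -> No patient consent on file for AI analysis
```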
State telehealth laws and AI
State regulations add complexity beyond federal HIPAA requirements:
Licensing and AI assistance
- Provider responsibility → Licensed clinicians remain responsible for AI-assisted diagnoses
- Standard of care → AI tools must meet state medical practice standards
- Documentation requirements → Some states require disclosure of AI tool use in patient records
Patient consent for AI
- Informed consent → Patients may need specific disclosure about AI involvement in their care
- Opt-out rights → Some jurisdictions allow patients to refuse AI-assisted care
- Data sharing → Additional consent may be required for AI processing of health data
Technical safeguards for telehealth AI
Encryption and transmission
- End-to-end encryption → Protect PHI during transmission to AI processing systems
- At-rest encryption → Secure storage of AI-processed health data
- Key management → Control your own encryption keys rather than relying solely on vendor key management (see the sketch after this list)
- Secure APIs → Authenticated and encrypted connections between telehealth and AI systems
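As one way to act on the at-rest-encryption and key-management bullets, the sketch below encrypts an AI-generated visit summary under a key the covered entity controls, using the `cryptography` package's Fernet recipe (assumed to be installed). It is a minimal sketch: production systems typically keep keys in a KMS or HSM and use envelope encryption rather than a key held in application memory.

```python
from cryptography.fernet import Fernet

# Key generated and held by the covered entity, not the AI vendor.
# In production this would live in a KMS/HSM, not in application memory.
local_key = Fernet.generate_key()
cipher = Fernet(local_key)

ai_summary = "Patient reports improved sleep; continue current plan."  # PHI

# Encrypt before the summary is written to any storage the vendor can read.
ciphertext = cipher.encrypt(ai_summary.encode("utf-8"))

# Decryption requires our key, so vendor-side storage alone cannot expose PHI.
assert cipher.decrypt(ciphertext).decode("utf-8") == ai_summary
```

Holding the key yourself means a breach of vendor-side storage does not, by itself, expose readable PHI.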
Access controls and monitoring
- Role-based access → Limit AI system access based on job functions and need-to-know
- Audit logging → Track all AI access to PHI with timestamps and user identification
- Session management → Automatic logout and session termination for AI-assisted encounters
- Anomaly detection → Monitor for unusual AI access patterns or data requests
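A minimal sketch of the role-based access, audit-logging, and session-timeout points above, assuming a simple in-memory role map and session table; in practice these checks would be backed by your identity provider and EHR access policies.

```python
import logging
from datetime import datetime, timedelta, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("phi_ai_audit")

SESSION_TIMEOUT = timedelta(minutes=15)

# Assumed for illustration: which roles may invoke which AI functions,
# and when each user's telehealth session last showed activity.
ROLE_PERMISSIONS = {
    "physician": {"clinical_decision_support", "ai_scribe"},
    "scheduler": {"scheduling_assistant"},
}
LAST_ACTIVITY = {"u-42": datetime.now(timezone.utc)}

def invoke_ai_tool(user_id: str, role: str, tool: str, patient_id: str) -> None:
    """Gate AI access to PHI by role and session freshness, auditing every attempt."""
    now = datetime.now(timezone.utc)
    last_seen = LAST_ACTIVITY.get(user_id, datetime.min.replace(tzinfo=timezone.utc))
    session_ok = now - last_seen < SESSION_TIMEOUT
    allowed = session_ok and tool in ROLE_PERMISSIONS.get(role, set())

    # Audit entry: who touched which AI tool for which patient, and when.
    audit_log.info("user=%s role=%s tool=%s patient=%s allowed=%s time=%s",
                   user_id, role, tool, patient_id, allowed, now.isoformat())

    if not session_ok:
        raise PermissionError("Session expired; re-authenticate before AI access")
    if not allowed:
        raise PermissionError(f"Role '{role}' may not use '{tool}'")
    # ... hand off to the AI system here ...

invoke_ai_tool("u-42", "physician", "ai_scribe", "patient-789")
```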
Vendor management for telehealth AI
Managing AI vendors in telehealth requires extra diligence due to PHI sensitivity:
Due diligence checklist
- HIPAA experience → Verify vendor has healthcare clients and understands compliance requirements
- Security certifications → SOC 2 Type II, HITRUST, or FedRAMP authorization
- Subprocessor mapping → Full disclosure of AI model providers, cloud services, and data processors
- Incident history → Review any past security incidents or compliance violations
- Insurance coverage → Adequate cyber liability and professional liability coverage
Use our comprehensive vendor due diligence guide for additional evaluation criteria.
Insurance considerations
Telehealth AI creates unique insurance challenges:
- Professional liability → Coverage for AI-assisted diagnoses and treatment recommendations
- Cyber liability → Protection for PHI breaches during AI processing
- Technology E&O → Coverage for AI system failures or errors
- Business interruption → Protection when AI vendor outages disrupt telehealth services
Review our cyber vs. AI insurance analysis and questions for your insurer.
Patient rights and AI transparency
Patients have rights regarding AI use in their healthcare:
Disclosure requirements
- AI involvement → Inform patients when AI assists in their diagnosis or treatment
- Data usage → Explain how their health data will be processed by AI systems
- Vendor relationships → Disclose third-party AI services that will access their PHI
- Limitations → Clearly communicate AI tool limitations and need for clinical oversight
Patient control
- Consent management → Allow patients to opt out of AI-assisted care where possible
- Data portability → Ensure patients can access AI-generated summaries and analyses
- Correction rights → Process for patients to correct AI-generated information in their records
Compliance best practices
- Risk assessment → Evaluate each AI tool's PHI access and processing requirements
- BAA negotiation → Ensure comprehensive Business Associate Agreements with all AI vendors
- Staff training → Educate providers on HIPAA requirements for AI-assisted telehealth
- Patient communication → Develop clear disclosure processes for AI involvement in care
- Incident response → Plan for AI-related PHI breaches or system failures. See our crisis response guide.
- Regular audits → Monitor AI vendor compliance and security controls
- Documentation → Maintain records of AI decision-making for compliance reviews
Emerging regulatory considerations
Beyond HIPAA, telehealth AI faces evolving oversight:
- FDA regulation → Some AI diagnostic tools require FDA clearance or approval
- State medical boards → Licensing requirements for AI-assisted remote care
- FTC oversight → Consumer protection and advertising standards for AI health claims
- State privacy laws → California, Virginia, and other state privacy regulations may apply
Questions to ask yourself
- Do all our AI vendors have comprehensive Business Associate Agreements?
- Are we properly disclosing AI involvement to patients during telehealth encounters?
- Do we understand which AI subprocessors have access to our patients' PHI?
- Does our professional liability insurance cover AI-assisted remote diagnoses? (Our general HIPAA AI guide covers similar considerations.)
- Do we have incident response plans for AI-related PHI breaches during telehealth sessions?
Navigate telehealth AI compliance with confidence
Start with our free 10-minute AI preflight check to assess your telehealth compliance risks, then get the complete AI Risk Playbook for healthcare-specific frameworks and vendor evaluation tools. No email required; a direct download is available.