AI and PHI: 5 Questions Every Healthcare Vendor Should Answer
Choosing AI vendors for healthcare requires more than feature comparisons. When Protected Health Information (PHI) is involved, the wrong vendor choice can create HIPAA violations, security breaches, and liability exposure. Here are the essential questions that separate compliant vendors from compliance risks.
Why these questions matter
Healthcare AI vendors often focus on clinical benefits and cost savings. But administrators need to evaluate HIPAA compliance, security controls, and liability protection before any PHI touches vendor systems.
Many vendors provide generic security answers that don't address healthcare-specific requirements. These 5 questions cut through marketing language to reveal actual compliance capabilities.
Question 1: How do you handle PHI in your AI processing?
What you're really asking
Does the vendor understand HIPAA's definition of PHI and have specific controls for health information processing?
Red flag answers
- "We treat all data the same way" → No healthcare-specific controls
- "Our platform is HIPAA-compliant" → Vague claim without specifics
- "We don't store any data" → May still process PHI in memory or temporary storage
- "We anonymize everything" → HIPAA de-identification is a formal process (Safe Harbor or Expert Determination), and AI raises re-identification risk
Good answers include
- Specific PHI identification and classification procedures
- Separate processing environments for PHI vs. non-PHI data
- Clear data flow documentation showing PHI handling
- Encryption and access controls specific to PHI processing
- Data retention and deletion policies aligned with HIPAA requirements
Follow-up questions
- Can you provide a data flow diagram showing how PHI moves through your AI systems?
- What specific safeguards protect PHI during AI model inference?
- How do you ensure PHI isn't used for model training or improvement?
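One way to pressure-test a vendor's answer is to ask how they identify PHI before it ever reaches an AI endpoint. As a minimal sketch of what that first line of defense might look like, here is a hypothetical pre-send scan for obvious identifiers. The patterns and function names are illustrative only; real PHI detection must cover far more than regexes (names, dates, free-text context).

```python
import re

# Hypothetical pre-send check: scan outbound text for obvious PHI markers
# before it reaches a vendor's AI endpoint. Patterns are illustrative only.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def flag_possible_phi(text: str) -> list[str]:
    """Return the names of any PHI patterns found in the text."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

# Example: a note containing an SSN and an MRN is flagged before transmission.
note = "Patient MRN: 12345678, SSN 123-45-6789, follow-up in 2 weeks."
print(flag_possible_phi(note))  # → ['ssn', 'mrn']
```

A vendor with real PHI classification should be able to describe something far more robust than this, which is exactly the point of the question: vague answers here usually mean no dedicated controls exist.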
Question 2: What subprocessors and AI models access our PHI?
What you're really asking
Who else will have access to your patients' health information, and do they all have appropriate HIPAA protections?
Red flag answers
- "We handle everything in-house" → Likely untrue for most AI vendors
- "Our cloud provider is secure" → Doesn't address HIPAA Business Associate requirements
- "We use industry-standard AI models" → May mean PHI goes to third-party model providers
- "That's proprietary information" → HIPAA requires BAA obligations to flow down to subcontractors; you can't verify that without subprocessor transparency
Good answers include
- Complete list of subprocessors that may access PHI
- Business Associate Agreements with all subprocessors
- Specific AI model providers and their HIPAA compliance status
- Geographic locations where PHI processing occurs
- Notification procedures for subprocessor changes
Follow-up questions
- Can you provide BAAs for all subprocessors that will access our PHI?
- How do you notify customers when subprocessors change?
- What happens if a subprocessor has a security incident involving our PHI?
Question 3: What are your audit and monitoring capabilities?
What you're really asking
Can you prove HIPAA compliance and detect security incidents involving PHI?
Red flag answers
- "We monitor everything" → Too vague, no specific PHI protections
- "Our logs are comprehensive" → Doesn't address HIPAA-specific audit requirements
- "We'll provide reports if needed" → Reactive rather than proactive monitoring
- "Trust us, we're secure" → No verifiable audit capabilities
Good answers include
- Real-time monitoring of PHI access and processing
- Detailed audit logs with user identification and timestamps
- Automated alerts for unusual PHI access patterns
- Regular compliance reporting and dashboards
- Third-party security audits and certifications (SOC 2, HITRUST)
Follow-up questions
- Can you provide sample audit reports showing PHI access tracking?
- How quickly can you detect and report potential PHI breaches?
- What compliance certifications do you maintain, and can you share the certificates?
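To make the audit-log question concrete, it helps to know what a minimally useful PHI access record looks like. The sketch below shows a hypothetical event shape (field names are illustrative, not a standard schema) plus a deliberately crude anomaly check; a real vendor's monitoring should use per-role baselines, time-of-day patterns, and more.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical shape for a PHI access audit record: who touched what,
# when, and why. Field names are illustrative, not a standard schema.
@dataclass
class PhiAccessEvent:
    user_id: str
    patient_record_id: str
    action: str   # e.g. "read", "inference", "export"
    purpose: str  # documented reason for access
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def unusual_access(events: list[PhiAccessEvent], threshold: int = 50) -> set[str]:
    """Flag users whose access volume exceeds a simple threshold.
    Real monitoring would baseline per role, shift, and department."""
    counts: dict[str, int] = {}
    for event in events:
        counts[event.user_id] = counts.get(event.user_id, 0) + 1
    return {user for user, n in counts.items() if n > threshold}

# Example: one user reads 60 records while another reads 3.
events = [PhiAccessEvent("u-billing", f"rec-{i}", "read", "claims") for i in range(60)]
events += [PhiAccessEvent("u-nurse", f"rec-{i}", "read", "treatment") for i in range(3)]
print(unusual_access(events))  # → {'u-billing'}
```

If a vendor cannot show you records at least this granular, with user identity, timestamps, and a documented purpose for each access, their "comprehensive logs" claim deserves skepticism.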
Question 4: How do you handle AI errors and liability?
What you're really asking
When AI makes mistakes with patient data, who's responsible and how is liability allocated?
Red flag answers
- "AI errors are the user's responsibility" → Vendor disclaims all liability
- "Our AI is very accurate" → Doesn't address error handling procedures
- "We limit liability to subscription fees" → Inadequate for healthcare risks
- "That's covered in our terms of service" → Liability terms buried in fine print
Good answers include
- Clear error reporting and investigation procedures
- Appropriate liability insurance coverage for healthcare applications
- Defined liability allocation between vendor and customer
- Indemnification for vendor errors or security failures
- Procedures for AI model updates and error correction
Follow-up questions
- Can you provide certificates of insurance showing adequate liability coverage?
- What's your process for investigating and correcting AI errors?
- How do you handle liability when AI errors affect patient care?
Question 5: What's your incident response and breach notification process?
What you're really asking
Can the vendor notify you fast enough to let you meet HIPAA's 60-day breach notification deadline, and will they support your incident response?
Red flag answers
- "We've never had a breach" → Unrealistic and doesn't address procedures
- "We'll notify you if something happens" → No specific timeline or process
- "Our security team handles incidents" → Doesn't address HIPAA notification requirements
- "Check our privacy policy" → Generic response, not healthcare-specific
Good answers include
- Specific breach notification timeline (within 24-48 hours to customers)
- Detailed incident response procedures and escalation paths
- Support for customer HIPAA breach notification requirements
- Forensic investigation capabilities and reporting
- Business continuity and disaster recovery plans
Follow-up questions
- Can you provide your incident response playbook and notification procedures?
- How do you support customers in meeting HIPAA's 60-day breach notification deadline?
- What forensic and investigation support do you provide during security incidents?
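The arithmetic behind the 60-day rule is worth making explicit: under the Breach Notification Rule, covered entities must notify affected individuals no later than 60 calendar days after discovery, so every day a vendor sits on an incident eats into your window. A minimal sketch of the deadline math (function names are illustrative):

```python
from datetime import date, timedelta

# HIPAA's Breach Notification Rule: notification to affected individuals
# no later than 60 calendar days after discovery. A vendor's 24-48 hour
# notice to you is what preserves your runway to meet it.
HIPAA_NOTIFICATION_WINDOW = timedelta(days=60)

def notification_deadline(discovery: date) -> date:
    """Latest permissible individual-notification date."""
    return discovery + HIPAA_NOTIFICATION_WINDOW

def days_remaining(discovery: date, today: date) -> int:
    """Days left before the deadline as of a given date."""
    return (notification_deadline(discovery) - today).days

# Example: breach discovered March 1; vendor tells you on March 11.
discovered = date(2024, 3, 1)
print(notification_deadline(discovered))              # → 2024-04-30
print(days_remaining(discovered, date(2024, 3, 11)))  # → 50
```

This is also why vague "we'll notify you if something happens" answers are disqualifying: a vendor that takes 45 days to tell you leaves almost no time for forensics, legal review, and individual notices.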
Evaluating vendor responses
Documentation requirements
Compliant vendors should provide:
- Business Associate Agreement → Comprehensive BAA covering all HIPAA requirements
- Security documentation → Policies, procedures, and technical safeguards
- Audit reports → SOC 2, HITRUST, or similar third-party assessments
- Insurance certificates → Proof of adequate cyber and professional liability coverage
- Incident response plan → Detailed procedures for security incidents and breach notification
Warning signs to avoid
- Reluctance to provide specific documentation
- Generic security responses not tailored to healthcare
- Inability to explain PHI handling in detail
- No experience with healthcare customers or HIPAA compliance
- Inadequate liability insurance or unwillingness to share certificates
Beyond the 5 questions: additional considerations
Technical evaluation
- Integration security → How does the AI tool connect to your EHR or other systems?
- Data residency → Where is PHI stored and processed geographically?
- Encryption standards → Both in transit and at rest protection for PHI
- Access controls → Role-based access and authentication mechanisms
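For the access-controls item above, the principle to look for is deny-by-default role-based access around PHI. A hypothetical sketch (role names and permissions are illustrative, not any vendor's actual model):

```python
# Hypothetical role-based access check of the kind a vendor should
# enforce around PHI. Roles and permissions are illustrative only.
ROLE_PERMISSIONS = {
    "clinician": {"read_phi", "write_phi"},
    "billing": {"read_phi"},
    "analyst": set(),  # sees only de-identified data
}

def can_access(role: str, permission: str) -> bool:
    """Deny by default: unknown roles receive no permissions."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can_access("billing", "read_phi"))  # → True
print(can_access("analyst", "read_phi"))  # → False
print(can_access("intern", "read_phi"))   # → False (unknown role)
```

Ask vendors to walk you through their equivalent: who defines roles, how permissions map to PHI operations, and what happens when a role is undefined.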
Operational considerations
- Staff training → Vendor support for HIPAA-compliant AI tool usage
- Workflow integration → How AI fits into existing clinical and administrative processes
- Performance monitoring → Ongoing assessment of AI accuracy and clinical outcomes
- Update management → How AI model changes are tested and deployed
Contract negotiation priorities
Use vendor responses to negotiate stronger contract terms:
- Comprehensive BAA → Include all subprocessors and specific PHI handling requirements
- Liability allocation → Appropriate vendor responsibility for security failures and AI errors
- Audit rights → Access to vendor security controls and compliance documentation
- Data ownership → Clear customer ownership of PHI and AI-generated insights
- Termination procedures → Guaranteed PHI return or destruction upon contract end
Use our comprehensive vendor evaluation guide for additional contract considerations.
Implementation best practices
Once you've selected a compliant AI vendor:
- Pilot testing → Start with limited PHI exposure to test security and compliance
- Staff training → Educate users on HIPAA requirements for AI tool usage
- Monitoring setup → Implement ongoing oversight of vendor compliance and AI performance
- Incident procedures → Establish clear escalation paths for AI-related security issues
- Regular reviews → Periodic assessment of vendor compliance and contract performance
Insurance and risk management
AI vendor relationships create new insurance considerations:
- Cyber liability → Ensure coverage includes third-party vendor incidents
- Professional liability → Verify coverage for AI-assisted clinical decisions
- Technology E&O → Protection for AI system failures or vendor errors
- Business interruption → Coverage for operational disruption from vendor incidents
Review our insurance coverage analysis and questions for your insurer.
Regulatory compliance beyond HIPAA
Healthcare AI vendors must also address:
- FDA regulations → Medical device requirements for diagnostic AI tools
- State privacy laws → California, Virginia, and other state health data protections
- International compliance → GDPR for EU patients, other international health data laws
- Accreditation standards → Joint Commission, NCQA, and other healthcare quality requirements
Questions to ask yourself
- Have we asked all 5 critical questions to every AI vendor we're considering?
- Do we have comprehensive documentation from vendors about PHI handling and security?
- Are we comfortable with the vendor's subprocessor list and their HIPAA compliance?
- Does our insurance adequately cover risks from AI vendor relationships? (See the related considerations in our general HIPAA AI guide.)
- Do we have clear procedures for monitoring vendor compliance after implementation?
Master healthcare AI vendor evaluation
Start with our free 10-minute AI preflight check to assess your current vendor risks, then get the complete AI Risk Playbook for healthcare-specific vendor evaluation frameworks and contract templates.