The Risks of Over-Relying on Large Language Models in Healthcare
By Daniel Hoffman, CISSP
Fortiva IT, LLC
The adoption of AI in U.S. healthcare is accelerating at an incredible pace. According to a 2025 survey by the American Medical Association, 66% of physicians now use some form of AI, up from just 38% the year prior. From ambient clinical documentation tools to automated coding assistants and diagnostic support, healthcare organizations are increasingly integrating large language models (LLMs) into day-to-day operations. While the potential benefits are significant (improved efficiency, reduced administrative burden, and faster decision-making), this rapid integration raises equally serious concerns about accuracy, patient safety, data security, and compliance.
Cybersecurity in Shared Data Models: A Growing Attack Surface
While large hospitals and insurance companies can afford to build private models, most private practices, billing services, and laboratories use third-party AI tools that operate on shared cloud infrastructure. Even if vendors advertise HIPAA compliance, risks remain:
- Data Commingling – In multi-tenant models, improperly segmented data may be exposed to other customers.
- Leakage via Model Inversion or Retention – LLMs have been shown to regurgitate sensitive prompt or training content when improperly configured (a redaction sketch follows this list).
- Supply Chain Risk – Open-source dependencies or subcontracted services could become backdoors for attack.
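One practical mitigation is to scrub obvious identifiers from prompts before they ever leave your network for a shared-infrastructure model. The Python sketch below is a minimal illustration only; the pattern names and regexes are assumptions, and regex alone is nowhere near a HIPAA-grade de-identification pipeline, which requires a vetted tool and a documented risk analysis.

```python
import re

# Hypothetical pattern-based scrubber: an illustration, not a compliance tool.
# Real PHI de-identification needs a vetted solution reviewed under your
# HIPAA risk analysis; regex alone will miss names, addresses, and free text.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:# ]?\d{6,10}\b", re.IGNORECASE),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DOB": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with typed placeholders before the
    prompt leaves your network for a third-party model."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}-REDACTED]", text)
    return text

note = "Pt DOB 04/12/1958, MRN 00482917, callback 555-867-5309. Chest pain resolved."
print(scrub(note))
# Pt DOB [DOB-REDACTED], [MRN-REDACTED], callback [PHONE-REDACTED]. Chest pain resolved.
```

The point is architectural: redaction happens on your side of the trust boundary, so the vendor never receives the raw identifiers in the first place.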
Hallucinations in Patient Care and Billing: Not Just a Bug, a Liability
LLMs are probabilistic, meaning they can “hallucinate” plausible-sounding but factually incorrect responses.
🏥 Patient Safety Risks
- Misattributing or fabricating diagnoses
- Incorrect medication summaries
- Falsely inferring symptoms or test results
💵 Billing & Compliance Risks
- Overcoding can trigger audits and payer clawbacks
- Undercoding reduces revenue and skews reporting
- AI may apply incorrect modifiers or time-based codes, leading to fraud exposure (a guardrail sketch follows this list)
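One way to keep hallucinated codes off a claim is to treat AI output as a suggestion that must pass a guardrail before submission. The Python sketch below is a hypothetical triage step, assuming a practice-approved allowlist and a human review queue; the CPT codes shown are illustrative examples, not coding guidance.

```python
# Hypothetical guardrail: hold AI-suggested codes for human review unless
# they appear on a practice-approved allowlist. Code values are examples.
APPROVED_CODES = {"99213", "99214", "36415"}   # billable without extra review
REVIEW_REQUIRED = {"99417", "99358"}           # time-based codes: always human-verified

def triage_ai_codes(suggested: list[str]) -> dict[str, list[str]]:
    """Split AI output into auto-accepted, review-queue, and rejected codes."""
    result = {"accept": [], "review": [], "reject": []}
    for code in suggested:
        if code in REVIEW_REQUIRED:
            result["review"].append(code)
        elif code in APPROVED_CODES:
            result["accept"].append(code)
        else:
            result["reject"].append(code)   # unknown codes never reach the claim
    return result

print(triage_ai_codes(["99214", "99417", "99999"]))
# {'accept': ['99214'], 'review': ['99417'], 'reject': ['99999']}
```

The design choice matters more than the code: the AI can propose, but only a human or an explicitly approved list can dispose.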
Benchmarking AI vs. Human Accuracy
As the chart shows, AI performs well on structured, repetitive tasks, but accuracy drops in high-context work such as diagnosis and clinical summarization.
[Chart: AI vs. human accuracy by task type]
The Business Risk: Over-Reliance Without Oversight
Automation can’t replace accountability. The risks of over-reliance include:
- Skill Atrophy – Clinicians lose documentation and diagnostic sharpness if they defer to AI.
- Hidden Dependencies – Relying on opaque AI APIs creates vendor lock-in and unknown operational fragility.
- Non-Compliance – Data may cross regulatory boundaries without proper governance.
- Brand Damage – Patients trust their doctors—not the AI SaaS tool advertised online.
“Trust but verify” isn’t enough. In AI governance, it’s “Monitor, Measure, Mitigate.”
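What “Measure” can look like in practice: log every AI-assisted output alongside the clinician’s final version and track the override rate per task. The Python sketch below is a minimal illustration; the field names and the alert threshold are assumptions, not a reference implementation.

```python
from dataclasses import dataclass, field

# Minimal "measure" sketch: track how often clinicians override the AI.
# The 10% alert threshold is an assumption chosen for illustration.
@dataclass
class AiAuditLog:
    events: list = field(default_factory=list)

    def record(self, task: str, ai_output: str, human_output: str) -> None:
        """Log whether the clinician's final text differs from the AI draft."""
        self.events.append({
            "task": task,
            "overridden": ai_output.strip() != human_output.strip(),
        })

    def override_rate(self, task: str) -> float:
        hits = [e for e in self.events if e["task"] == task]
        return sum(e["overridden"] for e in hits) / len(hits) if hits else 0.0

log = AiAuditLog()
log.record("discharge_summary", "no known allergies", "allergy: penicillin")
log.record("discharge_summary", "follow up in 2 weeks", "follow up in 2 weeks")
if log.override_rate("discharge_summary") > 0.10:   # threshold is an assumption
    print("Override rate above threshold: review the model or the workflow")
```

A rising override rate is an early warning that the model, the prompt, or the workflow has drifted, and it gives governance committees a number to act on rather than anecdotes.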
Final Thought: AI Needs a Human Backstop
LLMs offer immense potential, but they don’t understand ethics, liability, or nuance. Healthcare organizations must approach AI with humility, integrity, cybersecurity discipline, and strong governance.
Discussions
Would you be interested in joining a conversation on this topic?
References
1. Stanford Study on AI-Generated Clinical Documentation: https://www.nature.com/articles/s41746-025-01670-7
2. AMA: 66% of Physicians Now Use AI: https://www.ama-assn.org/practice-management/digital-health/2-3-physicians-are-using-health-ai-78-2023
3. Google Med-PaLM Accuracy Benchmark: https://arxiv.org/abs/2303.13375
4. HHS HIPAA Guidance on AI and ePHI: https://www.hhs.gov/hipaa/for-professionals/privacy/guidance/index.html
5. Medical Economics: Healthcare Data Leakage via AI Tools: https://www.medicaleconomics.com/view/health-care-workers-are-leaking-patient-data-through-ai-tools-cloud-apps
