by Donna Vanderpool, MBA, JD
Ms. Vanderpool is Director of Risk Management at Professional Risk Management Services (PRMS).
Funding: No funding was provided for the preparation of this article.
Disclosures: The author is an employee of PRMS. PRMS manages a professional liability insurance program for psychiatrists.
Innov Clin Neurosci. 2024;21(10–12):48–49.
This ongoing column is dedicated to providing information to our readers on managing legal risks associated with medical practice. We invite questions from our readers. The answers are provided by PRMS (www.prms.com), a manager of medical professional liability insurance programs with services that include risk management consultation and other resources offered to health care providers to help improve patient outcomes and reduce professional liability risk. The answers published in this column represent those of only one risk management consulting company. Other risk management consulting companies or insurance carriers might provide different advice, and readers should take this into consideration. The information in this column does not constitute legal advice. For legal advice, contact your personal attorney. Note: The information and recommendations in this article are applicable to physicians and other health care professionals, so “clinician” is used to indicate all treatment team members.
Question
I’ve read a lot about artificial intelligence (AI) in healthcare, and I’m ready to incorporate it into my practice. What is the current risk management thinking on this topic?
Answer
AI refers broadly to the use of computers to perform tasks that usually require human intelligence, and it is rapidly expanding into all aspects of our lives. However, most AI applications are not yet ready to be incorporated into clinical practice. The few that are ready carry significant risks, and they must be used in conjunction with, not instead of, a clinician’s own decision-making.
General problems with AI in healthcare include but are not limited to the following:
- Gathering information: AI models require enormous amounts of patient information for training, but patient data is subject to federal and state confidentiality protections.
- Security issues in training data: Training data is vulnerable to attacks such as data poisoning (the intentional contamination of data to sabotage the AI).
- Hallucinations: AI is well known to hallucinate (i.e., present false or misleading information as fact), including about medications, which is a significant patient safety issue. AI also cannot distinguish between accurate and false information.
- Bias: Bias in training data, such as sexism and racism, produces biased AI results. This is a significant issue because training data can include everything on the internet, and medical data reflect well-documented inequities in healthcare.
Professional Guidelines
The American Medical Association has developed “Principles for Augmented Intelligence Development, Deployment, and Use,”1 which includes the following topics:
- Oversight of Health Care Augmented Intelligence
- When to Disclose: Transparency in Use of Augmented Intelligence-Enabled Systems and Technologies
- What to Disclose: Required Disclosures by Health Care Augmented Intelligence-Enabled Systems and Technologies
- Generative Augmented Intelligence
- Physician Liability for Use of Augmented Intelligence-Enabled Technologies
- Data Privacy and Augmented Intelligence
- Augmented Intelligence Cybersecurity
- Payor Use of Augmented Intelligence and Automated Decision-Making Systems
Similarly, the American Psychiatric Association has issued “The Basics of Augmented Intelligence: Some Factors Psychiatrists Need to Know Now,”2 speaking to the following topics:
- Effectiveness and Safety
- Risk of Bias and Discrimination
- Transparency
- Protecting Patient Privacy
The Federation of State Medical Boards released “Navigating the Responsible and Ethical Incorporation of Artificial Intelligence into Clinical Practice”3 with guidelines on the following:
- Education
- Accountability
- Informed Consent and Data Privacy
- Equity and Bias
- AI Governance
AI and Professional Liability Risks
AI can contribute to privacy breaches, ethical lapses, and medical errors. It can fabricate facts, leading to incorrect diagnoses, and deliver inaccurate information to clinicians and patients. The biggest liability concern at this point is clinicians’ overreliance on AI output. Clinicians will likely be held liable for acting on bad AI-generated advice that harms a patient.
Takeaways
- Do not rely on AI alone. AI should only supplement the clinician’s decision-making, not replace it.
- Remember that AI hallucinates.
- Be transparent when using AI so that its role is clear. Documentation created by AI should indicate that it was AI-generated and reviewed by the clinician.
- Patients need to consent to the use of AI in their treatment.
- Do not give patient information to a generative AI system, such as ChatGPT, without obtaining a business associate agreement from the AI platform vendor.
References
- American Medical Association. Principles for augmented intelligence development, deployment, and use. 14 Nov 2023. Accessed 25 Nov 2024. https://www.ama-assn.org/system/files/ama-ai-principles.pdf
- American Psychiatric Association. The basics of augmented intelligence: some factors psychiatrists need to know now. 29 Jun 2023. Accessed 25 Nov 2024. https://www.psychiatry.org/News-room/APA-Blogs/The-Basics-of-Augmented-Intelligence
- Federation of State Medical Boards. Navigating the responsible and ethical incorporation of artificial intelligence into clinical practice. Apr 2024. Accessed 25 Nov 2024. https://www.fsmb.org/siteassets/advocacy/policies/incorporation-of-ai-into-practice.pdf