AI-Augmented Approval Workflows: A Dual-Authority Framework for Clinical Decision Making
DOI:
https://doi.org/10.63282/3050-922X.IJERET-V7I1P132

Keywords:
Artificial Intelligence, Clinical Decision Support, Human-AI Collaboration, Medical Liability, Approval Workflows, Patient Safety, Healthcare Governance, Cryptographic Verification, Risk-Based Delegation

Abstract
High-stakes clinical decisions are increasingly influenced by artificial intelligence systems, yet no formal framework exists to specify when AI may approve or recommend medical interventions. Existing advisory-only mechanisms are prone to alert fatigue, with override rates exceeding 90% for drug-drug interaction warnings in commercial systems. This paper addresses these challenges by proposing a dual-authority framework that establishes explicit approval responsibilities for both human clinicians and AI systems. The framework includes risk-based authority delegation models, formal disagreement resolution protocols, and liability allocation mechanisms. A lightweight dual-signature protocol enables cryptographic verification of both parties' sign-offs on critical decisions. The system architecture integrates with established EHR systems and achieves sub-500 ms response times, supporting practical deployment. The proposed evaluation targets sensitivity above 90% for safety-critical decisions while reducing override rates to below 20%. By codifying shared accountability, the framework satisfies regulatory requirements such as California's 2024 Physicians Make Decisions Act while enabling meaningful AI participation in clinical decision-making.
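The abstract does not specify the dual-signature protocol's construction. As one illustrative sketch only: the core idea of requiring independently verifiable sign-offs from both the clinician and the AI system over the same decision record can be modeled with two keyed MACs over a canonical encoding. The key material, field names (`clinician_sig`, `ai_sig`), and use of HMAC-SHA256 here are assumptions for illustration, not the paper's actual scheme (a production design would likely use asymmetric signatures and a key-management infrastructure, e.g. per NIST SP 800-63B):

```python
import hashlib
import hmac
import json

def _sign(key: bytes, decision: dict) -> str:
    """Sign a canonical JSON encoding of a decision record with one party's key."""
    payload = json.dumps(decision, sort_keys=True).encode("utf-8")
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def dual_sign(clinician_key: bytes, ai_key: bytes, decision: dict) -> dict:
    """Produce the pair of sign-offs required before a decision takes effect."""
    return {
        "clinician_sig": _sign(clinician_key, decision),
        "ai_sig": _sign(ai_key, decision),
    }

def verify_dual(clinician_key: bytes, ai_key: bytes,
                decision: dict, sigs: dict) -> bool:
    """A decision is valid only if BOTH signatures verify against the record.

    compare_digest is used to avoid timing side channels on the comparison.
    """
    return (hmac.compare_digest(sigs["clinician_sig"], _sign(clinician_key, decision))
            and hmac.compare_digest(sigs["ai_sig"], _sign(ai_key, decision)))
```

Under this sketch, tampering with any field of the decision record after sign-off (say, a dose change) invalidates both signatures, so neither party's approval can be silently reused for a modified order.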
References
[1] A. D. Bryant, G. S. Fletcher, and T. H. Payne, “Drug interaction alert override rates in the Meaningful Use era,” Applied Clinical Informatics, vol. 5, no. 3, pp. 802–813, 2014, doi: https://doi.org/10.4338/aci-2013-12-ra-0103.
[2] M. Felisberto et al., “Override rate of drug-drug interaction alerts in clinical decision support systems: A brief systematic review and meta-analysis,” Health Informatics Journal, vol. 30, no. 2, Apr. 2024, doi: https://doi.org/10.1177/14604582241263242.
[3] J. F. Choukroun, K. Lee, and A. Rey, “Creating Meaningful Alerts and Reducing Alert Fatigue: Strategies Implemented by Informatics Pharmacists to Optimize Dose Range Checking Alerts in a Multihospital Health System,” Journal of Pharmacy Technology, vol. 38, no. 6, pp. 319–325, Aug. 2022, doi: https://doi.org/10.1177/87551225221117152.
[4] J. Becker, “Governor signs Physicians Make Decisions Act, keeping medical decisions between patients and doctors–not AI,” Senator Josh Becker, Sep. 30, 2024. https://sd13.senate.ca.gov/news/press-release/september-30-2024/governor-signs-physicians-make-decisions-act-keeping-medical.
[5] L. T. Kohn, J. M. Corrigan, and M. S. Donaldson, Eds., To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press, 2000. https://pubmed.ncbi.nlm.nih.gov/25077248/.
[6] B. Shneiderman, “Human-Centered Artificial Intelligence: Three Fresh Ideas,” AIS Transactions on Human-Computer Interaction, vol. 12, no. 3, pp. 109–124, 2020, doi: https://doi.org/10.17705/1thci.00131.
[7] R. Parasuraman, T. B. Sheridan, and C. D. Wickens, “A model for types and levels of human interaction with automation,” IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 30, no. 3, pp. 286–297, May 2000, doi: https://doi.org/10.1109/3468.844354.
[8] R. Challen, J. Denny, M. Pitt, L. Gompels, T. Edwards, and K. Tsaneva-Atanasova, “Artificial intelligence, bias and clinical safety,” BMJ Quality & Safety, vol. 28, no. 3, pp. 231–237, Jan. 2019, doi: https://doi.org/10.1136/bmjqs-2018-008370.
[9] W. N. Price, S. Gerke, and I. G. Cohen, “Potential Liability for Physicians Using Artificial Intelligence,” JAMA, vol. 322, no. 18, pp. 1765–1766, Oct. 2019, doi: https://doi.org/10.1001/jama.2019.15064.
[10] C. Cestonaro, A. Delicati, B. Marcante, L. Caenazzo, and P. Tozzo, “Defining medical liability when artificial intelligence is applied on diagnostic algorithms: a systematic review,” Frontiers in Medicine, vol. 10, no. 1305756, Nov. 2023, doi: https://doi.org/10.3389/fmed.2023.1305756.
[11] Drug Enforcement Administration, “Electronic prescriptions for controlled substances,” Federal Register, 2010.
[12] A. Ault, “New State Law Will Restrict AI in Prior Authorization, Coverage Decisions,” Medscape, Oct. 14, 2024. https://www.medscape.com/viewarticle/new-state-law-will-restrict-ai-prior-authorization-coverage-2024a1000krq.
[13] Center for Devices and Radiological Health, “Clinical Decision Support Software - Draft Guidance,” U.S. Food and Drug Administration, 2019. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/clinical-decision-support-software.
[14] NIST, “AI Risk Management Framework,” Artificial Intelligence Risk Management Framework (AI RMF 1.0), vol. 1, no. 1, Jan. 2023, doi: https://doi.org/10.6028/nist.ai.100-1.
[15] P. A. Grassi et al., “Digital identity guidelines: authentication and lifecycle management,” NIST Special Publication 800-63B, Jun. 2017, doi: https://doi.org/10.6028/nist.sp.800-63b.
[16] K. R. Saverno et al., “Ability of pharmacy clinical decision-support software to alert users about clinically important drug-drug interactions,” Journal of the American Medical Informatics Association, vol. 18, no. 1, pp. 32–37, Jan. 2011, doi: https://doi.org/10.1136/jamia.2010.007609.
[17] D. W. Bates, “Incidence of Adverse Drug Events and Potential Adverse Drug Events,” JAMA, vol. 274, no. 1, p. 29, Jul. 1995, doi: https://doi.org/10.1001/jama.1995.03530010043033.
Disclaimer: The views expressed in this work are those of the author and do not necessarily reflect the views of any current or former employers.