Human-Centered Ethical AI in Healthcare Contact Centers

Authors

  • Suresh Padala, Independent Researcher, USA

DOI:

https://doi.org/10.63282/3050-922X.IJERET-V1I2P110

Keywords:

Healthcare AI Ethics, Algorithmic Bias Mitigation, Explainable AI Healthcare, Clinical Decision Support Transparency, AI Governance Framework

Abstract

The use of AI in healthcare contact centers presents both opportunities and challenges for triage, care routing, and clinical decision support, particularly with respect to algorithmic amplification of bias, opacity in decision-making, and health inequity. This article proposes a framework for the human-centered, ethically grounded design, development, and deployment of AI in healthcare contact centers, built on five interdependent principles: fairness, transparency, explainability, accountability, and patient autonomy. The framework specifies technical requirements for bias auditing, explainable AI methods that improve the transparency of clinical tools, and governance structures anchored in clinical and regulatory oversight bodies. Evidence suggests that greater transparency and explainability improve operator trust, and that AI tools with interpretable outputs improve response time and accuracy. The article characterizes bias present in healthcare data, including bias related to socio-economic status, race and ethnicity, geographic access, and insurance status, and discusses mitigation methods such as rebalancing training data, adjusting algorithmic weights, and adopting fairness constraints. A maturity-based governance model allows organizations with varying resource levels to adopt HIPAA-compliant AI incrementally, iterating from existing oversight structures toward more mature ones. Positioning trust as a prerequisite for adoption, rather than an outcome of it, enables healthcare organizations to operate efficiently, protect patient rights, advance health equity, and lead responsible healthcare technology innovation.
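The abstract names rebalancing training data and adjusting algorithmic weights among the bias-mitigation methods, without specifying an algorithm. As an illustration only, the sketch below implements one standard instance of this family, reweighing (Kamiran & Calders, 2012), which assigns each training record a weight so that a protected attribute (e.g. insurance status) becomes statistically independent of the outcome label; the function name `reweighing` and the toy data are this sketch's own, not the article's.

```python
from collections import Counter

def reweighing(groups, labels):
    """Instance weights that make group membership independent of the
    outcome label: weight(g, y) = P(g) * P(y) / P(g, y).
    Over-represented (group, label) pairs are down-weighted; rare
    pairs are up-weighted, so the weighted joint distribution
    factorizes into its marginals.
    """
    n = len(labels)
    p_g = Counter(groups)            # counts per protected group
    p_y = Counter(labels)            # counts per outcome label
    p_gy = Counter(zip(groups, labels))  # counts per (group, label) pair
    return [
        (p_g[g] / n) * (p_y[y] / n) / (p_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy example: group "A" is over-represented among positive labels,
# so (A, 1) records are down-weighted and the rare (B, 1) record is
# up-weighted before model training.
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 1, 1, 0, 0]
weights = reweighing(groups, labels)
```

The weights would then be passed to a learner that accepts per-sample weights (most common classifiers do); the fairness constraints the abstract also mentions are a complementary, in-training approach rather than this pre-processing one.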


Published

2020-06-30

Section

Articles

How to Cite

Padala S. Human-Centered Ethical AI in Healthcare Contact Centers. IJERET [Internet]. 2020 Jun. 30 [cited 2026 Mar. 12];1(2):79-84. Available from: https://ijeret.org/index.php/ijeret/article/view/490