Ethical Implications of AI in Critical Care Medicine

Ethical Implications of New Technologies in the Care and Decision-Making of Critically Ill Patients

Dr. Manuel Palomo Navarro1; Dr. Carolina Ferrando Sánchez1; Dr. Elena Parreño Rodríguez1; Dr. Aída González Díaz1

  1. Specialists in Intensive Care Medicine, Department of Intensive Care Medicine, Hospital de Sagunto (Valencia, Spain)

OPEN ACCESS

PUBLISHED: 30 November 2025

CITATION: Palomo Navarro, M., Ferrando Sánchez, C., et al., 2025. Ethical Implications of New Technologies in the Care and Decision-Making of Critically Ill Patients. Medical Research Archives, [online] 13(11). https://doi.org/10.18103/mra.v13i11.7097

COPYRIGHT: © 2025 European Society of Medicine. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

DOI https://doi.org/10.18103/mra.v13i11.7097

ISSN 2375-1924

ABSTRACT

The integration of artificial intelligence, big data, and telemedicine into intensive care is transforming the way clinicians make decisions and care for critically ill patients. In daily ICU practice, these tools can support diagnostic precision, guide ventilation strategies, and help anticipate clinical deterioration. However, their adoption also demands a careful ethical approach. Issues such as transparency, equity, and patient autonomy must remain at the center of implementation to ensure that technological progress truly translates into safer and more humanized care. It is essential for clinicians to understand how these algorithms work, to validate their results, and to recognize when automated recommendations may not fit the clinical context.

At the same time, predictive models and tele-ICU systems have opened new possibilities for early detection and continuous monitoring, particularly in settings with limited resources. Yet, as these systems rely on large datasets, they also expose challenges related to privacy, data protection, and algorithmic bias. Future development must prioritize interpretability, data security, and equitable access across institutions. For AI to become a reliable ally in critical care, it must remain under human supervision, with informed consent processes adapted to this new context. Ultimately, technological innovation should not replace clinical judgment but rather enhance it—allowing intensivists to make faster, safer, and more ethically sound decisions.

Keywords:

Artificial intelligence; Intensive care; Bioethics; Telemedicine; Clinical decision support systems; Patient autonomy

Introduction

The rapid incorporation of artificial intelligence (AI), big data, and telemedicine into intensive care medicine is redefining not only clinical practice but also the ethical foundations of patient care. These technologies offer unprecedented possibilities for early detection, diagnostic precision, and personalized treatment. Yet, their growing influence on clinical decision-making compels us to reconsider long-standing bioethical principles such as autonomy, beneficence, justice, and non-maleficence. In environments as complex and vulnerable as the intensive care unit, the intersection between human judgment and algorithmic reasoning raises profound questions about accountability, consent, and equity.

From a bioethical standpoint, the challenge lies not merely in validating the accuracy of these systems, but in ensuring that their use respects human dignity and preserves the clinician–patient relationship. The opacity of AI algorithms, the risk of biased data, and the asymmetry of knowledge between developers, clinicians, and patients demand transparency and shared responsibility. Moreover, the traditional concept of informed consent must evolve to address decision-making mediated by automated recommendations and remote interfaces. Thus, ethical foresight must guide the integration of digital tools in critical care, ensuring that innovation remains a means to reinforce — rather than replace — compassionate, just, and patient-centered medicine.

Materials and Methods

A literature review was conducted using PubMed with MeSH terms to identify relevant studies on the integration of clinical decision support systems (CDSS) in critical care settings. The search strategy included the following queries:

  • ((“Decision Support Systems, Clinical/ethics”[Majr:NoExp] OR “Decision Support Systems, Clinical/legislation and jurisprudence”[Majr:NoExp] OR “Decision Support Systems, Clinical/trends”[Majr:NoExp]) AND (“Critical Care/ethics”[Majr:NoExp] OR “Critical Care/legislation and jurisprudence”[Majr:NoExp] OR “Critical Care/trends”[Majr:NoExp]))

This search yielded three relevant articles.

  • (“Decision Support Systems, Clinical”[Majr:NoExp]) AND (“Critical Care”[Majr:NoExp])

This search retrieved 92 articles.

  • (“Artificial Intelligence/ethics”[Mesh]) AND “Critical Care”[Mesh]

This search yielded two relevant articles.

  • “Diagnosis, Computer-Assisted/ethics”[Mesh]

This search yielded 14 articles.

  • “Artificial Intelligence”[Mesh] AND “Informed Consent”[Mesh]
  • “Machine Learning”[Mesh] AND “Ethics”[Mesh]

The selected articles were assessed for relevance, methodological rigor, and contribution to understanding the integration of new technologies in Intensive Care Units (ICUs). Key themes were identified, including decision-making enhancements, patient safety improvements, and communication dynamics among healthcare professionals.
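For reproducibility, searches like those above can also be issued programmatically against PubMed through the NCBI E-utilities `esearch` endpoint. The sketch below is illustrative only: it builds the request URL (without sending it), using the public E-utilities interface, for one of the queries listed above.

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(term: str, retmax: int = 100) -> str:
    """Encode a PubMed query (MeSH syntax allowed) as an esearch URL."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return f"{EUTILS}?{urlencode(params)}"

query = ('("Decision Support Systems, Clinical"[Majr:NoExp]) '
         'AND ("Critical Care"[Majr:NoExp])')
url = build_esearch_url(query)
print(url)
```

Fetching the resulting URL (for example with `urllib.request.urlopen`) returns a JSON payload whose `esearchresult.idlist` field holds the matching PMIDs.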

Results

ARTIFICIAL INTELLIGENCE IN DECISION-MAKING

The introduction of AI in medical decision-making in ICUs represents a major advancement in resource optimization and diagnostic accuracy. However, its implementation is not without ethical challenges that must be addressed to ensure safe, equitable, and humanized medical care. Aspects such as patient autonomy, equity in access, medical responsibility, and transparency are fundamental in clinical practice.

Various organizations are developing machine learning and deep learning systems that will support, or in certain contexts even replace, clinical judgment. Although clinicians remain legally accountable, it is urgent to define clear legal and ethical frameworks that delineate these responsibilities. Future physician training should cover how to validate these tools, so that clinicians can accept or correct atypical results and contribute to algorithm improvement; an algorithm’s ability to distinguish normal from atypical cases depends on the dataset used for training.

Another key aspect is equity in healthcare. While AI has the potential to reduce disparities, its use may also reinforce existing gaps. An algorithm trained on limited or biased data could generate discriminatory recommendations, affecting certain patient groups based on gender, social class, or ethnicity. Furthermore, the implementation of these technologies is not uniform across all hospitals, which could widen the gap between institutions with more and fewer resources. To prevent these issues, it is essential to develop inclusive and equitable AI systems, subjecting them to constant audits to detect and correct potential biases.

Patient autonomy is a fundamental bioethical principle that guarantees the right to make informed decisions about one’s medical care. However, in the ICU, this autonomy is often limited by the critical condition of the patient, which frequently prevents active participation in decision-making. AI in the ICU can analyze clinical data in real time and generate recommendations on treatments such as mechanical ventilation adjustments or drug administration. Yet many of these decisions are difficult to explain to both physicians and relatives, posing a dilemma on how to obtain truly informed consent. If the patient cannot participate in the decision and the relatives or physicians do not fully understand the algorithm’s reasoning, the patient’s autonomy may be compromised. Moreover, to implement such tools, patients should also sign informed consent for the use of their data.

The use of AI in the ICU also involves handling large volumes of sensitive medical data, posing challenges in terms of privacy and security. These data must be properly stored and protected, in compliance with strict regulations such as the General Data Protection Regulation (GDPR) in Europe or HIPAA in the United States. In addition, robust cybersecurity measures must be implemented to prevent attacks that could compromise patients’ medical information.

Another factor to consider is the variability in the data used to train algorithms. An AI system may be affected by what is known as “domain shift,” i.e., differences in population characteristics, technological advances, or changes in data collection that can affect its performance. To ensure the accuracy and relevance of these systems, it is essential to conduct continuous audits and monitor their performance across different clinical settings.
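As a concrete illustration of how such monitoring might be automated, the sketch below computes a Population Stability Index (PSI), a common drift statistic, between a variable’s distribution at training time and at deployment. The binning scheme, the 0.2 threshold, and the example heart-rate values are illustrative assumptions, not a validated audit procedure.

```python
from collections import Counter
import math

def psi(expected: list, observed: list, bins: int = 4) -> float:
    """Population Stability Index between a training-time sample and a
    deployment-time sample of the same clinical variable."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bin_fractions(sample):
        # clamp every value into one of the training-time bins
        counts = Counter(min(max(int((x - lo) / width), 0), bins - 1)
                         for x in sample)
        n = len(sample)
        # small floor avoids log(0) when a bin is empty
        return [max(counts.get(b, 0) / n, 1e-4) for b in range(bins)]

    e, o = bin_fractions(expected), bin_fractions(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

train_hr = [70, 72, 75, 80, 85, 90, 95, 100]           # heart rates at training time
deployed_hr = [95, 100, 105, 110, 115, 120, 125, 130]  # a shifted population
print(psi(train_hr, train_hr), psi(train_hr, deployed_hr))
```

A PSI near zero indicates the deployed population resembles the training data; values above roughly 0.2 are often taken, heuristically, as a trigger to re-audit the model.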

ARTIFICIAL INTELLIGENCE IN PREDICTIVE ANALYTICS

The integration of AI in predictive analytics has revolutionized critical care by enabling earlier identification of patient deterioration, optimizing mechanical ventilation strategies, and enhancing personalized treatment plans. AI-driven models utilize vast datasets to detect subtle physiological changes, allowing clinicians to intervene proactively. This approach not only improves patient outcomes but also reduces ICU mortality rates and the length of hospital stays.

THE ROLE OF AI IN PREDICTIVE ANALYTICS

Predictive analytics in intensive care relies on machine learning algorithms that analyze patient data in real time to identify high-risk individuals and recommend timely interventions. Meiring et al. demonstrated how machine learning techniques enhance ICU outcome prediction over time, improving clinical decision-making through adaptive algorithms. These models continuously refine their predictions by incorporating new patient data, ensuring a dynamic and responsive approach to patient care.

A key application of AI in predictive analytics is optimizing mechanical ventilation. AI-driven models, as explored by Jansson et al., facilitate precise ventilation management by predicting optimal settings based on patient-specific variables. This reduces the risk of ventilator-induced lung injuries and improves oxygenation strategies in critically ill patients. Additionally, real-time interpretation of EEG signals for status epilepticus patients, as discussed by Waller et al., showcases AI’s potential in neuromonitoring and rapid therapeutic adjustments.

ENHANCING EARLY DETECTION OF CRITICAL CONDITIONS

One of the most significant benefits of AI in predictive analytics is its ability to detect early signs of clinical deterioration. Jung et al. examined a real-time automated bedside dashboard for sepsis care, demonstrating improved detection and earlier administration of life-saving interventions. By analyzing vital signs, laboratory data, and hemodynamic parameters, AI models can recognize sepsis patterns before clinicians detect them, leading to improved response times and reduced mortality.
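The underlying logic of such screening can be conveyed with a deliberately simplified, rule-based sketch using the published qSOFA criteria. This is not the model of the dashboard studied by Jung et al., which integrates far richer data streams.

```python
def qsofa(resp_rate: int, systolic_bp: int, gcs: int) -> int:
    """Quick SOFA: one point each for respiratory rate >= 22/min,
    systolic blood pressure <= 100 mmHg, and altered mentation (GCS < 15)."""
    return (resp_rate >= 22) + (systolic_bp <= 100) + (gcs < 15)

def sepsis_alert(vitals: dict) -> bool:
    """Raise a bedside alert when qSOFA >= 2 (a screening heuristic,
    not a diagnosis)."""
    return qsofa(vitals["rr"], vitals["sbp"], vitals["gcs"]) >= 2

print(sepsis_alert({"rr": 24, "sbp": 95, "gcs": 15}))  # two of three criteria met -> True
```

AI-based systems replace such fixed thresholds with models learned from vital-sign, laboratory, and hemodynamic trends, which is precisely what allows them to flag deterioration earlier than rule-based scores.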

AI has also proven beneficial in glucose management for critically ill diabetic patients. Blaha et al. studied the Space Glucose Control system, which uses AI to regulate blood glucose levels in ICU patients. The system ensures more stable glucose management, minimizing hyperglycemic episodes and reducing complications associated with glycemic variability.
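To make the idea of automated glucose management concrete, the sketch below implements a toy proportional adjustment of an insulin infusion. It is an illustrative assumption only, with hypothetical gain and target values, and bears no relation to the proprietary, model-based algorithm of the Space GlucoseControl system studied by Blaha et al.

```python
def insulin_rate(glucose_mgdl: float, current_rate: float,
                 target: float = 140.0, gain: float = 0.01,
                 max_rate: float = 10.0) -> float:
    """Toy proportional adjustment of an insulin infusion (units/h).
    Glucose above target raises the rate; below target lowers it.
    The result is clamped to [0, max_rate] as a crude safety bound."""
    adjusted = current_rate + gain * (glucose_mgdl - target)
    return max(0.0, min(max_rate, adjusted))

print(insulin_rate(240, 2.0))  # hyperglycemia: rate increases
print(insulin_rate(90, 2.0))   # below target: rate decreases
```

Even in a toy controller, the clamp to a maximum rate reflects a broader point: automated recommendations need hard safety bounds and human oversight.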

CHALLENGES IN AI-DRIVEN PREDICTIVE ANALYTICS

Despite its benefits, implementing AI-driven predictive analytics in critical care poses several challenges. One of the primary concerns is data quality and integration. AI models require extensive and high-quality datasets for accurate predictions. However, inconsistencies in electronic health records (EHRs) and variations in clinical documentation can hinder model performance.

Kasparick et al. emphasized the need for standardized data collection and interoperability between hospital systems to maximize AI’s predictive potential. Without seamless integration, AI-driven recommendations may be incomplete or less reliable, potentially impacting clinical decisions.

ETHICAL DILEMMAS

  • Obtaining Results: The use of AI tools has met some resistance from clinicians who are unaware of the process by which results are obtained. The AI Act (Regulation (EU) 2024/1689) heavily regulates the use of AI systems in the EU, allowing only approved systems; beyond the EU, the use of FDA-approved AI systems is recommended. The AI Act classifies AI systems used for biometric categorization as high-risk, and since the AI systems employed in critical care rely on biometric parameters, they may be subject to strict obligations before they can be placed on the market.
  • Informed Consent: The patient must be informed about the use of AI in their diagnostic and therapeutic process. Although critically ill patients may be unable to provide consent themselves, informed consent procedures should still be implemented in the ICU, and these must describe the use of AI systems as well as the possible biases that may occur.
  • Overreliance: While distrust of AI tools can be addressed through the use of approved systems, overuse and loss of confidence in one’s own judgment pose another dilemma in clinical practice. Decision support and data analysis, however helpful, must not erode the clinician’s own decision-making capacity, since decisions for critically ill patients demand immediacy and speed.
  • Supervised Learning: Deep learning models can introduce biases, and training them on data from critically ill patients, which are unstable and highly variable, can amplify those biases. Human-supervised learning therefore seems especially important in critical care.
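The human-supervision requirement above can be expressed as a simple design pattern: any recommendation whose confidence falls below a threshold is withheld from automatic action and queued for clinician review. The sketch below is a generic illustration; the class, threshold, and mock model are hypothetical and not drawn from any cited system.

```python
from dataclasses import dataclass, field

@dataclass
class SupervisedRecommender:
    """Wraps any model so that low-confidence recommendations are
    deferred to a clinician instead of being acted on automatically.
    `model` is any callable returning a (recommendation, confidence) pair."""
    model: callable
    threshold: float = 0.8
    review_queue: list = field(default_factory=list)

    def recommend(self, patient_data):
        rec, confidence = self.model(patient_data)
        if confidence < self.threshold:
            self.review_queue.append((patient_data, rec, confidence))
            return None  # no automatic action: human review required
        return rec

# toy stand-in model: confident only when the vitals are unambiguous
mock_model = lambda d: ("wean ventilation", 0.95) if d["spo2"] > 97 else ("hold", 0.55)

ai = SupervisedRecommender(mock_model)
print(ai.recommend({"spo2": 99}))  # prints: wean ventilation
print(ai.recommend({"spo2": 91}))  # prints: None (queued for clinician review)
```

The essential point is architectural, not algorithmic: the system is built so that uncertainty routes to a human by default.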

Ethical Aspects of Telemedicine in the Critically Ill Patient

The use of telemedicine in critical care settings, such as virtual intensive care units (Tele-ICUs), has significantly expanded in response to challenges like the shortage of specialists, the need for continuous care, and geographical disparities in access to specialized services. While this technological innovation offers significant clinical benefits, it also raises ethical concerns related to patient autonomy, privacy, consent, and equity in healthcare. This analysis explores the main ethical dilemmas associated with remote critical care, with an emphasis on respecting the rights and dignity of patients in a state of extreme vulnerability.

VULNERABILITY AND AUTONOMY OF THE CRITICALLY ILL PATIENT

Critically ill patients are often unconscious, intubated, or cognitively impaired, which hinders their ability to participate in clinical decisions. In this context, telemedicine adds a further layer of physical and emotional distance that can further limit patient autonomy. This demands rigorous ethical protocols to ensure that decisions are made in the patient’s best interest, including clear communication with family members or legal representatives.

INFORMED CONSENT IN CRITICAL SITUATIONS

Informed consent within Tele-ICU environments presents an added challenge. Medical urgency may leave no time for discussions about the use of digital platforms, and patients are often unable to provide explicit consent. It is therefore essential that protocols include advance directives or mechanisms that honor the patient’s previously expressed wishes.

PRIVACY AND DATA SECURITY IN THE VIRTUAL ICU

Virtual intensive care units continuously collect biomedical data (vital signs, ventilation parameters, alarms). The large volume of data shared across multiple medical sites increases the risk of privacy breaches, particularly in community hospitals with limited technological infrastructure. Cybersecurity must be ensured so that data is not accessible to unauthorized individuals.

PHYSICIAN–FAMILY RELATIONSHIP IN CRITICAL CARE

In intensive care, interactions between healthcare professionals and the patient’s family are essential for shared decision-making. In the Tele-ICU model, intensivists may not be physically present, which can hinder empathy and trust during critical moments. The ethical challenge lies in preserving the human aspect of care through compassionate and well-structured virtual interactions.

JUSTICE AND EQUITY IN ACCESS TO CRITICAL CARE

The Tele-ICU model aims to improve access to care in rural or underserved areas. However, if not implemented equitably, it may worsen existing disparities. For example, hospitals with fewer resources may lack adequate connectivity or trained personnel to work with virtual systems. This raises an ethical dilemma regarding distributive justice.

INTEGRATION OF BIG DATA AND AUTOMATED MONITORING IN CRITICAL CARE

The integration of big data analytics and automated monitoring systems has transformed ICU management by enhancing real-time decision-making, optimizing patient monitoring, and improving clinical workflows. By combining hemodynamic monitoring, laboratory results, and imaging data, intelligent systems provide a more comprehensive and timely approach to patient care.

THE ROLE OF BIG DATA IN ICU MANAGEMENT

The vast amount of data generated in ICUs requires advanced computational tools for real-time interpretation and clinical decision support. Jung et al. explored the use of an automated bedside dashboard for sepsis detection, demonstrating that integrating multiple data sources improved early identification of septic patients and led to more timely interventions. Similarly, Wulff et al. examined an interoperable clinical decision-support system designed for pediatric ICUs, which utilized big data analytics to enhance the early detection of systemic inflammatory response syndrome (SIRS). The study underscored the importance of real-time data integration in identifying high-risk patients before they clinically deteriorate.

ENHANCING ICU DECISION-MAKING WITH AUTOMATED MONITORING

Automated monitoring systems play a crucial role in reducing human error and streamlining patient assessment. Pflanzl-Knizacek et al. developed a clinical decision support system that integrates patient monitoring with predictive analytics, significantly improving adherence to treatment protocols. These systems provide physicians with actionable insights while minimizing the cognitive burden associated with managing large volumes of patient data.
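One simple mechanism for reducing that cognitive burden is to rank alerts by clinical severity before they reach the dashboard. The sketch below is a hypothetical illustration of the idea, not the system developed by Pflanzl-Knizacek et al.; the severity tiers and example alerts are assumptions.

```python
import heapq

# lower number = higher priority (heapq is a min-heap)
SEVERITY = {"life-threatening": 0, "urgent": 1, "advisory": 2}

def prioritize(alerts):
    """Return alert messages ordered by severity, then by time raised,
    so the dashboard surfaces the most critical items first."""
    heap = [(SEVERITY[a["severity"]], a["t"], a["msg"]) for a in alerts]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

alerts = [
    {"severity": "advisory",         "t": 1, "msg": "lab result available"},
    {"severity": "life-threatening", "t": 3, "msg": "ventilator disconnect"},
    {"severity": "urgent",           "t": 2, "msg": "MAP below 60 mmHg"},
]
print(prioritize(alerts))  # most critical first
```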

Furthermore, Van Scoy et al. introduced an online decision-support tool aimed at assisting families of critically ill patients. By aggregating patient data and presenting it in an accessible format, the system enhanced communication between clinicians and families, promoting shared decision-making in the ICU setting.

ETHICAL CONSIDERATIONS IN BIG DATA AND AUTOMATED MONITORING

The integration of big data in critical care introduces significant ethical considerations related to the principles of bioethics—beneficence, non-maleficence, autonomy, and justice—as well as the protection of patient data.

  • Beneficence and Non-Maleficence: While big data and AI-driven decision-support tools aim to improve patient outcomes (beneficence), they must be carefully validated to avoid erroneous recommendations that could lead to harm (non-maleficence). Fortis et al. emphasized that excessive data without proper contextualization could result in cognitive overload for clinicians, potentially increasing the risk of medical errors.
  • Autonomy: Patient autonomy is challenged by automated decision-support systems, as they may influence clinical decisions without clear transparency regarding how data-driven conclusions are reached. Shared decision-making models, such as those proposed by Van Scoy et al., can help ensure that patients and their families remain engaged in the decision-making process.
  • Justice: The accessibility of advanced monitoring and data-driven interventions raises concerns about equity in healthcare. Herasevich et al. highlighted the need for interoperability and standardization to ensure that all healthcare institutions, regardless of resources, can benefit from these technologies. A lack of standardization could create disparities where only well-funded hospitals have access to advanced data-driven care.

DATA PROTECTION AND PRIVACY CHALLENGES

A major challenge in the integration of big data analytics in ICUs is ensuring the protection of patient data. Ethical concerns surrounding privacy and data security are paramount, as large-scale data collection and real-time monitoring systems increase the risk of breaches and misuse.

Future Directions

The future of AI in predictive analytics for critical care lies in adaptive learning models that continuously improve based on real-world clinical applications. Future research should focus on:

  • Enhancing explainability: Developing interpretable AI models to increase clinician trust and acceptance.
  • Expanding patient-centered AI solutions: Incorporating real-time feedback to refine predictive algorithms.
  • Improving data security: Addressing privacy concerns and ensuring ethical AI implementation.
  • Personalized medicine: Utilizing AI to tailor treatment plans based on individual patient responses.
  • Informed consent: Implementing consent processes adapted to analyses performed with AI.
  • Supervised learning: Ensuring that machine learning processes remain under human supervision.
  • Developing adaptive AI models: Implementing machine learning algorithms that filter and prioritize alerts based on clinical relevance.
  • Improving interoperability: Establishing universal data standards to facilitate seamless integration across healthcare systems.
  • Enhancing user interfaces: Designing more intuitive dashboards that simplify complex data visualization for clinicians.
  • Addressing ethical concerns: Ensuring that patient data privacy and security protocols align with regulatory frameworks.
  • Data protection: Ensuring compliance with applicable regulations and data protection laws.
  • Anonymization and encryption: Advanced encryption methods and anonymization techniques should be implemented to protect sensitive patient information while maintaining the integrity of data-driven insights.
  • Patient consent and transparency: Patients should be informed about how their data will be used, and explicit consent should be obtained, especially when AI-based decision-support tools are employed in clinical settings.
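As a minimal sketch of the anonymization point above, pseudonymization with a keyed hash removes direct identifiers while preserving record linkage. Note that this is pseudonymization rather than true anonymization: with the key, re-identification remains possible, so the GDPR still applies, and real deployments would also need encryption at rest and in transit plus careful key management. The key and record below are illustrative only.

```python
import hmac, hashlib

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The same patient always maps to the same pseudonym, preserving
    linkage across records, while the key stays with the data controller."""
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()

key = b"example-key-kept-in-a-hardware-security-module"  # illustrative only
record = {"patient": pseudonymize("MRN-0042", key), "lactate_mmol_l": 3.1}
print(record["patient"][:16], "...")
```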

Conclusions

The incorporation of emerging technologies such as artificial intelligence, big data, and telemedicine is reshaping the landscape of intensive care, transforming both clinical decision-making processes and the delivery of care itself. When guided by robust bioethical principles, artificial intelligence can become a valuable ally in improving safety, diagnostic precision, and the personalization of treatment. However, responsible implementation requires algorithmic transparency, fairness in application, a clear definition of professional responsibilities, and strict protection of personal data.

References

  1. Celi LA, Csete M, Stone D. Optimal data systems: The future of clinical predictions and decision support. Curr Opin Crit Care. 2014;20(5):573–580. doi:10.1097/MCC.0000000000000137
  2. Jansson M, Rubio J, Gavaldà R, Rello J. Artificial Intelligence for clinical decision support in critical care, required and accelerated by COVID-19. Anaesth Crit Care Pain Med. 2020;39(6):691–693. doi:10.1016/j.accpm.2020.09.010
  3. Montomoli J, Bitondo MM, Cascella M, Rezoagli E, Romeo L, Bellini V, et al. Algor-ethics: Charting the ethical path for AI in critical care. J Clin Monit Comput. 2024;38(4):931–939. doi:10.1007/s10877-024-01157-y
  4. Chua IS, Jackson VA, Kamdar M. Ethical issues in the development of tele-ICUs. Chest. 2021;160(4):1392–1398. doi:10.1016/j.chest.2021.05.047
  5. World Medical Association. WMA statement on the ethics of telemedicine. Published April 24, 2025. Accessed April 24, 2025. https://www.wma.net/policies-post/wma-statement-on-the-ethics-of-telemedicine/
  6. Institute for Healthcare Improvement. How to protect patient privacy during telemedicine visits. Published April 24, 2025. Accessed April 24, 2025. https://www.ihi.org/resources/Pages/ImprovementStories/How-to-Protect-Patient-Privacy-During-Telemedicine-Visits.aspx
  7. American Medical Association. Telemedicine’s potential ethical pitfalls. AMA J Ethics. Published December 2020. Accessed April 24, 2025. https://journalofethics.ama-assn.org/article/telemedicines-potential-ethical-pitfalls/2020-12
  8. Kaur D, Panos RJ, Badawi O, Bapat SS, Wang L, Gupta A. Evaluation of clinician interaction with alerts to enhance performance of the tele-critical care medical environment. Int J Med Inform. 2020;139:104165. doi:10.1016/j.ijmedinf.2020.104165
  9. Subramanian S, Pamplin JC, Hravnak M, Hielsberg C, Riker R, Rincon F, et al. Tele-Critical Care: An update from the Society of Critical Care Medicine Tele-ICU Committee. Crit Care Med. 2020;48(4):553–561. doi:10.1097/CCM.0000000000004190
  10. Iserson KV. Informed consent for artificial intelligence in emergency medicine: A practical guide. Am J Emerg Med. 2024;76:225–230. doi:10.1016/j.ajem.2023.11.022. PMID:38128163.
  11. Bakker T, Klopotowska JE, Dongelmans DA, Eslami S, Vermeijden WJ, Hendriks S, et al. The effect of computerised decision support alerts tailored to intensive care on the administration of high-risk drug combinations, and their monitoring: A cluster randomised stepped-wedge trial. Lancet. 2024;403(10425):439–449. doi:10.1016/S0140-6736(23)02465-0
  12. McDougall RJ. Computer knows best? The need for value-flexibility in medical AI. J Med Ethics. 2018;0:1–5. doi:10.1136/medethics-2018-105118
  13. Yu KH, Healey E, Leong TY, Kohane IS, Manrai AK. Medical artificial intelligence and human values. N Engl J Med. 2024;390(20):1895–1904. doi:10.1056/NEJMra2214183
  14. Kuntze MF, et al. Ethical codes and values in a virtual world. Cyberpsychol Behav. 2002;5(3):203–209.
  15. Watson DS, et al. Clinical applications of machine learning algorithms: Beyond the black box. BMJ. 2019;364:l886.
  16. Gonzalez FA, et al. Is artificial intelligence prepared for the 24-h shifts in the ICU? Anaesth Crit Care Pain Med. 2024;43:101431.
  17. Prabhu SP. Ethical challenges of machine learning and deep learning algorithms. Lancet Oncol. 2019;20:621–630.
  18. Meiring C, et al. Optimal intensive care outcome prediction over time using machine learning. PLoS One. 2018. PMID:30427913.
  19. Waller RG, et al. Novel displays of patient information in critical care settings: A systematic review. J Am Med Inform Assoc. 2019. PMID:30865769.
  20. Jung J, et al. A real-time bedside dashboard for sepsis care: Early detection and management. [Details missing].
  21. Blaha J, et al. Space GlucoseControl system for blood glucose control in intensive care patients. BMC Anesthesiol. 2016. PMID:26801983.
  22. Kasparick M, et al. Enabling artificial intelligence in high acuity medical environments. Minim Invasive Ther Allied Technol. 2019. PMID:30950665.
  23. Lyell D, et al. How machine learning is embedded to support clinician decision making: An analysis of FDA-approved medical devices. BMJ Health Care Inform. 2021;28:e100301. doi:10.1136/bmjhci-2020-100301
  24. Gibson BR, Rogers TT, Zhu X. Human semi-supervised learning. Top Cogn Sci. 2013;5:132–172.
  25. Jung AD, et al. Sooner is better: Use of a real-time automated bedside dashboard improves sepsis care. J Surg Res. 2018. PMID:30278956.
  26. Wulff A, et al. An interoperable clinical decision-support system for early detection of SIRS in pediatric intensive care using openEHR. Artif Intell Med. 2018. PMID:29753616.
  27. Pflanzl-Knizacek L, et al. Development of a clinical decision support system in intensive care. Stud Health Technol Inform. 2018. PMID:29726444.
  28. Van Scoy LJ, et al. Development and initial evaluation of an online decision support tool for families of patients with critical illness. J Crit Care. 2017. PMID:28131020.
  29. Herasevich V, et al. Informatics infrastructure for syndrome surveillance, decision support, reporting, and modeling of critical illness. Mayo Clin Proc. 2010. PMID:20194152.
  30. Fortis S, et al. Strategies for effective use of clinical informatics in critical care settings. Crit Care Med. 2014. PMID:25072775.
  31. Data Protection Regulation. Legal frameworks for healthcare data security and patient privacy. J Med Ethics. 2020. PMID:32587612.