
Viewpoint Article Open Access
Volume 4 | Issue 1

Patient safety in the era of artificial intelligence: A point-of-care nurses’ practical guide

  • 1Edson College of Nursing and Health Innovation, Arizona State University, USA
  • 2College of Adult and Professional Studies, Indiana Wesleyan University, USA
  • 3Daphne Cockwell School of Nursing, Toronto Metropolitan University, Canada
  • 4Daphne Nursing Research, Premier Health, USA

*Corresponding Author

Lihong Ou, lihongou@asu.edu

Received Date: December 10, 2025

Accepted Date: January 07, 2026

Abstract

Artificial intelligence (AI) is increasingly integrated into healthcare to support diagnosis, documentation, and predictive analytics. However, its use also introduces risks, including data bias, limited transparency in decision processes, privacy concerns, and overreliance on automated outputs. Many point-of-care nurses report uncertainty about how these systems function within clinical workflows and how to use them safely in practice. This article provides an evidence-informed, practical guide for point-of-care nurses on the safe and responsible use of artificial intelligence in patient care. It outlines common clinical applications, major categories of risk relevant to bedside practice, and key strategies for maintaining patient safety. Recommended practices include strengthening digital and AI literacy, understanding how systems operate within local workflows, engaging in organizational planning and oversight, staying current with emerging literature, and applying critical judgment when interpreting AI-generated outputs. By addressing AI as a supportive tool rather than a replacement for clinical decision making, this guide aims to help nurses mitigate risk, protect patient privacy, and promote ethical, transparent implementation in everyday clinical settings.

Keywords

Artificial intelligence, Patient safety, Clinical decision support, Predictive analytics, Bias in healthcare, Nursing practice 

Background

Artificial intelligence (AI) can diagnose disease, predict outcomes, and personalize treatments. It has become one of the most transformative technologies in healthcare, offering tools that can improve patient care and outcomes. However, its implementation comes with risks, including bias in datasets, inequities in care delivery, lack of transparency, security and privacy concerns, and over-reliance on AI-generated diagnoses and treatment recommendations. While AI offers significant benefits, its integration into healthcare demands vigilance due to the high stakes involved. This practical guide provides strategies for nurses to uphold patient safety and ensure AI’s responsible and appropriate use in clinical practice. The guidance presented is evidence-informed, drawing on peer-reviewed literature, regulatory guidance, and professional reports relevant to point-of-care nursing practice; it is not a formal systematic review.

Recommendations were synthesized through narrative integration of recurring safety themes across these sources, with emphasis on risks most likely to affect point-of-care nursing workflows and patient safety.

What is artificial intelligence?

Defining artificial intelligence can be challenging due to rapid technological evolution. In general terms, AI refers to computer systems that model intelligent behavior with minimal human intervention [1,2]. More precise definitions include those provided by federal agencies, such as “systems that think and act like humans or are capable of unsupervised learning” [3].

Applications of AI in nursing and healthcare

The term artificial intelligence was introduced in 1955, but AI was first applied in healthcare in the 1970s through systems such as the MYCIN expert clinical consultation system, named after the “–mycin” suffix used for antibiotics, and Causal Associational Network (CASNET) models, which assisted in medical diagnosis [4]. AI adoption in healthcare has outpaced documentation in the scientific literature [5,6]. AI has established a reliable track record in medical imaging, sometimes identifying pathology with greater accuracy than trained radiologists [7]. AI is also used to analyze large datasets from clinical trials, accelerate drug development, and optimize personalized treatment protocols [8].

AI-powered wearables further exemplify AI’s potential for proactive healthcare. Verily, a San Francisco–based research company, has explored AI-driven wearables that monitor physiological measures, such as estimating blood glucose by analyzing glucose in tears absorbed into contact lenses [9]. These capabilities allow processing of large datasets to develop solutions to clinical problems and improve patient care.

Despite these advancements, nurses remain cautious. A 2024 survey by National Nurses United found that 60% of respondents did not trust their employers to prioritize patient safety when implementing AI [10]. One of AI's most impactful contributions is predictive analytics. By leveraging large datasets from electronic health records (EHRs) and other sources, predictive analytics enables clinicians to forecast clinical outcomes and address patient needs proactively.

Predictive Analytics in Healthcare

Predictive analytics uses mathematical computations to analyze data from multiple sources to predict future events. EHRs are the largest data source in healthcare, and AI can synthesize this information to identify patterns not previously recognized. For example, predictive models could identify fall risk or provide early warning signs for sepsis. The FDA recently approved the Sepsis ImmunoScore system, a deep learning–based tool to identify patients at risk for sepsis [11]. Notably, such regulatory approval reflects validation within specific clinical contexts and does not imply generalizability across all patient populations or care settings. While FDA-approved tools demonstrate validated clinical use, many AI applications remain under active evaluation and require careful oversight in practice. Deep learning mimics neural networks in the human brain, enabling AI to uncover patterns in large datasets [12]. However, if trained on biased data, these algorithms can perpetuate disparities, leading to inaccurate predictions and suboptimal treatment.
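For readers curious how such a risk score is produced, the Python sketch below illustrates the logistic form that many predictive models share: patient inputs are combined with learned weights into a single probability. The weights, intercept, and variable names here are invented for illustration only; they do not represent the Sepsis ImmunoScore or any validated clinical model, whose parameters are learned from large EHR datasets.

```python
import math

# Hypothetical, illustrative weights only -- NOT a validated clinical model.
# A real predictive-analytics tool learns weights like these from large EHR datasets.
WEIGHTS = {"heart_rate": 0.03, "temp_c": 0.9, "resp_rate": 0.08, "wbc": 0.05}
BIAS = -40.0  # intercept chosen so typical vital signs yield a low score

def sepsis_risk_score(vitals: dict) -> float:
    """Return a 0-1 risk score from a logistic combination of vital signs."""
    z = BIAS + sum(WEIGHTS[k] * vitals[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

stable = {"heart_rate": 72, "temp_c": 36.8, "resp_rate": 14, "wbc": 7.0}
deteriorating = {"heart_rate": 128, "temp_c": 39.4, "resp_rate": 28, "wbc": 18.0}

print(f"stable patient:        {sepsis_risk_score(stable):.3f}")
print(f"deteriorating patient: {sepsis_risk_score(deteriorating):.3f}")
```

The point for bedside practice is that the output is a probability driven entirely by the inputs and the learned weights: if the inputs are stale or incomplete, or the weights were learned on a dissimilar population, the score can be confidently wrong.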

AI is increasingly integrated into clinical workflows. Some health systems adopt ambient AI scribing technologies that “listen in” during visits and summarize conversations into clinical documentation. While these systems still require oversight, they could reduce administrative burden and improve documentation quality. AI can also enhance nursing practice by improving diagnostic accuracy (e.g., early warning alert systems), optimizing care plans (e.g., clinical decision support for personalized treatments), and improving resource allocation (e.g., deep learning models that align patient characteristics with nurse staffing levels). The future of AI includes preventive care systems, tools supporting dementia care, and mental health applications such as diagnosis and cognitive behavioral therapy for depression and anxiety [13].

Risks of AI in Healthcare

From a nursing perspective, AI-related risks most often emerge at the point of care, where algorithm-generated alerts, recommendations, and documentation intersect with routine clinical workflows. To clarify how these risks affect patient safety, we conceptualize AI-related risks according to where and how they surface within nursing activities, including clinical decision support, patient monitoring, medication management, and documentation. When AI systems are embedded in these workflows, errors may propagate directly to bedside decision-making, increasing the potential for patient harm.

Within this workflow-based framework, patient safety risks arise not from AI in isolation, but from mismatches between algorithmic outputs and real-world clinical contexts. For example, opaque or biased recommendations may influence clinical judgment, poorly calibrated alerts may contribute to alarm fatigue or missed deterioration, and automated documentation may obscure accountability. Understanding how AI-related risks manifest during everyday nursing tasks is therefore essential to safeguarding patient safety.

Some articles have described alarming AI-related scenarios ranging from robotic surgical errors to patient injury resulting from mismanaged medication algorithms [14]. Others have characterized AI as an “existential threat to humanity” [15]. While such language may overstate certain risks, it reflects growing concern about the consequences of deploying powerful technologies in complex clinical environments. In testimony to the U.S. Congress, Sam Altman, CEO of OpenAI, cautioned, “I think if this technology goes wrong, it can go quite wrong” [16].

A European Parliament report identified several key risks of AI in healthcare: patient harm, misuse of medical tools, bias and inequity, lack of transparency, privacy and security, gaps in accountability, and implementation barriers [17]. AI systems developed by non-clinical scientists may introduce errors when caregivers are unfamiliar with technology limitations [18]. The World Health Organization noted that “precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, and erode trust in AI [19].” In response, the European Union passed the AI Act, banning systems believed to pose an “unacceptable risk” to safety, security, or fundamental rights [20]. These include AI applications that manipulate human behavior, exploit vulnerabilities, or threaten well-being. Such legislation signals a global push for oversight, reinforcing the need for nurses to critically evaluate AI tools before integrating them into practice.

Dataset shift is another risk, where unexpected variations between training data and real-world patient populations could lead to misclassification or misdiagnosis. At the bedside, this may appear as alerts or risk scores that do not align with a nurse’s clinical assessment, leading to either missed deterioration (false negatives) or unnecessary interventions (false positives) [21]. External validation has shown that imaging AI performance can drop substantially when models are applied in settings with different acquisition protocols and equipment (e.g., the area under the curve for detecting pediatric pneumonia on chest radiographs decreased from 0.93 internally to 0.65 externally), underscoring deployment risks from dataset and device shift [22]. These findings highlight a recurring limitation in the current evidence base, where strong internal performance does not reliably translate to real-world clinical settings.

Consistent with this finding, systematic reviews of radiology AI report that models with strong internal performance frequently underperform on external data from different hospitals, scanners, or imaging protocols, with clinically meaningful declines in specificity and overall accuracy [23]. Separately, studies have shown racial, gender, and socioeconomic biases in some AI algorithms, generating disparities in diagnosis and treatment [24–26].
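To make dataset shift concrete, the toy Python simulation below (all numbers invented for illustration) calibrates a decision threshold on one site’s score distribution and then applies it unchanged at a second site whose patients score systematically higher, reproducing in miniature the kind of accuracy drop reported in external validations.

```python
import random

random.seed(0)

def simulate_site(mean_healthy, mean_sick, n=1000):
    """Generate (score, label) pairs for a site; label 1 = disease present."""
    data = [(random.gauss(mean_healthy, 1.0), 0) for _ in range(n)]
    data += [(random.gauss(mean_sick, 1.0), 1) for _ in range(n)]
    return data

def accuracy(data, threshold):
    """Fraction of patients classified correctly at a fixed alert threshold."""
    return sum((score > threshold) == bool(label) for score, label in data) / len(data)

THRESHOLD = 1.5  # calibrated at the training site (midpoint of its two groups)

training_site = simulate_site(mean_healthy=0.0, mean_sick=3.0)  # model's home data
shifted_site = simulate_site(mean_healthy=1.5, mean_sick=4.5)   # different scanners/population

print(f"training-site accuracy: {accuracy(training_site, THRESHOLD):.2f}")
print(f"shifted-site accuracy:  {accuracy(shifted_site, THRESHOLD):.2f}")
```

The model itself is unchanged between the two sites; only the patients differ. This is why an alert threshold that performed well in validation studies may over- or under-fire locally, and why local monitoring after deployment matters.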

Many AI systems lack transparency, commonly referred to as “black box AI,” where complex computations are not visible or understandable to clinicians. Lack of transparency impedes error tracing and accountability. Significant medical errors can occur if clinicians depend on AI recommendations without understanding model limitations. For nurses, black-box AI may present as a recommendation or alert without clear clinical rationale, limiting the ability to explain decisions to patients, escalate concerns, or identify potential errors. This black-box nature also increases cybersecurity vulnerability. Since 2019, peer-reviewed literature has documented cybersecurity vulnerabilities in connected insulin pump technologies, where unauthorized access or compromised device integrity could lead to incorrect insulin delivery and pose serious risks to patient safety [27].

Tools are only as good as their users, and AI algorithms are only as reliable as their training data. Nurses’ understanding of AI systems directly affects assessment, diagnosis, and treatment. AI can perpetuate treatment inequity if trained on imbalanced datasets lacking demographic diversity. This may occur through historical bias, underrepresentation, and compounding effects over time.

Empowering Nurses to Navigate AI Safely

Nurses have long been at the forefront of data collection and analysis, a practice dating back to Florence Nightingale who used data to advance patient care and advocate for reforms [28]. Achieving proficiency in AI requires understanding its applications, applying knowledge effectively, and maintaining critical thinking skills. The following recommendations support nurses in safely integrating AI into practice. The bedside checklist below provides a point-of-care decision-support aid, while the subsequent recommendations focus on building the knowledge, skills, and organizational engagement needed to use AI safely over time.

Bedside AI safety checklist for nurses

When encountering AI-generated alerts, recommendations, or documentation at the point of care, nurses may consider the following quick checks:

  • Verify the data source: Are the patient inputs (vital signs, labs, demographics) complete and current?
  • Assess clinical alignment: Does the AI output align with your bedside assessment and the patient’s clinical context?
  • Watch for red flags: Is the recommendation unexpected, overly confident, or inconsistent with recent patient changes?
  • Pause before acting: If uncertainty exists, delay action and seek clarification rather than relying solely on the AI output.
  • Escalate appropriately: Use established clinical or safety escalation pathways when AI recommendations conflict with clinical judgment.
  • Document and report concerns: Note discrepancies or system issues and report them to informatics, quality, or patient safety teams.

Foundational competencies for safe AI use in nursing practice

  • Digital and AI Literacy - Explore how AI is used in healthcare and its impact on clinical practice. Engage in continuing education on AI and its applications. Begin with foundational concepts such as machine learning, deep learning, and algorithm development, then progress to interpreting AI outputs and integrating findings into care plans. Advocate for training programs that build AI and data science knowledge among nursing staff.
  • Understand AI in Your Workplace - Learn how AI tools function at your facility, such as predictive analytics, ambient AI scribes, clinical decision support, or early warning systems. Consult technology superusers, informatics teams, or digital health specialists to clarify system functions. Attend training sessions and review organizational policies regarding patient safety, data privacy, and accountability.
  • Get Involved with AI Initiatives - Participate in AI projects, committees, and technology review meetings. Nurses on the front line provide critical insight into how AI affects patient flow and safety. Engage with nursing informatics associations or attend conferences discussing AI innovations.
  • Stay Current with AI and Nursing Research - Monitor scientific journals, clinical guidelines, and reputable sources such as Becker’s Hospital Review and PubMed. Explore studies evaluating AI’s impact on nursing workflows, patient outcomes, and safety. Share findings with colleagues to foster collective learning.
  • Critical Thinking Still Reigns - AI should assist rather than replace professional judgment. Maintain independent evaluation when using AI recommendations, particularly when outputs conflict with clinical assessment, patient history, or contextual cues. In such situations, nurses should pause before acting, verify data inputs, seek clarification from informatics or clinical leaders, and address concerns using established safety pathways.
  • AI is a Tool, Not a Replacement for Clinical Judgment - AI should be used as a supportive clinical tool rather than a replacement for professional judgment. Nurses remain ultimately accountable for patient safety and must integrate AI outputs with ethical reasoning, clinical expertise, and patient preferences.

In this era of rapid technological advancement, nurses play a pivotal role in integrating AI into patient care while maintaining clinical judgment. Staying informed about emerging AI technologies is increasingly fundamental to nursing practice. By applying critical thinking and honoring ethical principles, especially the commitment to do no harm, nurses help protect patient safety and support thoughtful, transparent, and equitable use of AI in healthcare. Although access to AI technologies varies across healthcare systems and regions, the patient safety principles outlined in this guide are broadly applicable and could be adapted to diverse clinical and resource contexts.

AI-Related Terms

  1. Artificial Intelligence (AI): The use of computers to perform tasks that typically require human intelligence, such as decision-making, language understanding, and visual perception.
  2. Bias: The skewing of a system’s output caused by imbalanced or unrepresentative input data.
  3. Big Data: The massive data sets that generative AI programs are trained on before they are asked to perform work.
  4. Hallucinations: Fabricated or false outputs produced when a system “makes up” information.
  5. Machine Learning (ML): A subset of AI that involves training algorithms to learn from and make predictions based on data. It's used in healthcare to predict patient outcomes and identify disease patterns.
  6. Deep Learning: A type of machine learning that uses neural networks with many layers (hence "deep") to analyze complex data. It's particularly useful in image and speech recognition.
  7. Neural Networks: Computational models, inspired by the human brain, that consist of interconnected nodes (neurons) processing information in layers. They are fundamental to deep learning.
  8. Natural Language Processing (NLP): A branch of AI that focuses on the interaction between computers and humans through natural language. In healthcare, it's used for tasks like analyzing clinical notes and patient records.
  9. Predictive Analytics: The use of data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes based on historical data. This can help in predicting patient deterioration or readmission.
  10. Clinical Decision Support Systems (CDSS): AI-powered systems that provide healthcare professionals with knowledge and patient-specific information to enhance decision-making. They can offer reminders, alerts, and diagnostic assistance.
  11. Augmented Intelligence: A term often used to emphasize the supportive role of AI in enhancing human intelligence rather than replacing it. This is particularly relevant in nursing, where AI tools assist in clinical decision-making.
  12. Ambient AI: Technology that operates in the background, analyzing encounters and delivering insights with minimal user involvement. In healthcare, Ambient Clinical Intelligence (ACI), also known as Ambient Clinical Documentation, leverages AI and NLP to capture and analyze clinical encounters in real time without the use of a keyboard.

Term definitions are adapted from Artificial Intelligence (AI) Terms Defined for Healthcare Providers (Health-Notes, 2024).


References

1. Hamet P, Tremblay J. Artificial intelligence in medicine. Metabolism. 2017 Apr;69S:S36–S40.

2. Malek L, Murtha FL, Song K. Government oversight in managing risks of AI in health care Reuters. 12 Jan 2022. Available from: https://www.reuters.com/legal/litigation/government-oversight-managing-risks-ai-health-care-2022-01-12/.

3. Krazit T. Updated: Washington’s Sen. Cantwell prepping bill calling for AI committee. GeekWire. 10 Jul 2017. Available from: https://www.geekwire.com/2017/washingtons-sen-cantwell-reportedly-prepping-bill-calling-ai-committee/.

4. Stafie CS, Sufaru IG, Ghiciuc CM, Stafie II, Sufaru EC, Solomon SM, et al. Exploring the Intersection of Artificial Intelligence and Clinical Healthcare: A Multidisciplinary Review. Diagnostics (Basel). 2023 Jun 7;13(12):1995.

5. Lomis K, Jeffries P, Palatta A, Sage M, Sheikh J, Sheperis C, Whelan A. Artificial Intelligence for Health Professions Educators. NAM Perspect. 2021 Sep 8;2021:10.31478/202109a.

6. Kelkar G. Top Challenges of AI in Healthcare: What Businesses Need to Resolve. Berkeley Executive Education. 9 Jan 2023. Available from: https://emeritus.org/blog/healthcare-challenges-of-ai-in-healthcare/.

7. McNemar ME. How Can Artificial Intelligence Change Medical Imaging? HealthITAnalytics. 25 Jan 2022.

8. Harris S. Transforming Healthcare: Tailored Treatment Strategies through AI. Stepofweb.com. Dec 2025. Available from: https://stepofweb.com/personalized-treatment-plans-ai-health/.

9. Verily. Research, care and health financing | Alphabet precision health company. Verily.com. 2023 [cited 2025 Feb 3]. Available from: https://verily.com/.

10. Twenter P. To build trust in AI, involve nurses early, leaders say. Becker’s Hospital Review. 14 Jan 2025. Available from: https://www.beckershospitalreview.com/nursing/to-build-trust-in-ai-involve-nurses-early-leaders-say.html.

11. Bhargava A, López-Espina C, Schmalz L, Khan S, Watson GL, Urdiales D, et al. FDA-authorized AI/ML tool for sepsis prediction: development and validation. NEJM AI. 2024 Nov 27;1(12):AIoa2400867.

12. Holdsworth J. Deep learning. IBM.com. 17 Jun 2024. Available from: https://www.ibm.com/think/topics/deep-learning.

13. Eckhardt J. AI chatbots—with careful guardrails—can help treat depression and anxiety. Forbes. 10 Oct 2023. Available from: https://www.forbes.com/sites/juergeneckhardt/2023/10/10/ai-chatbots-with-careful-guardrails-can-help-treat-your-depression-and-anxiety/.

14. Brozak S. AI in healthcare and other scary stories. Forbes. 7 Jul 2023. Available from: https://www.forbes.com/sites/stephenbrozak/2023/07/07/ai-in-healthcare-and-other-scary-stories/.

15. Federspiel F, Mitchell R, Asokan A, Umana C, McCoy D. Threats by artificial intelligence to human health and human existence. BMJ Glob Health. 2023 May;8(5):e010435.

16. Kang C. OpenAI’s Sam Altman urges AI regulation in Senate hearing. The New York Times. 2023 May 16. Available from: https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html.

17. Lekadir K, Quaglio G, Garmendia AT, Gallin C. Artificial intelligence in healthcare—Applications, risks, and ethical and societal impacts. European Parliamentary Research Service (EPRS) [Internet]. 2022. Available from: https://data.europa.eu/doi/10.2861/568473.

18. Giebel GD, Raszke P, Nowak H, Palmowski L, Adamzik M, Heinz P, et al. Problems and Barriers Related to the Use of AI-Based Clinical Decision Support Systems: Interview Study. J Med Internet Res. 2025 Feb 3;27:e63377.

19. World Health Organization. WHO calls for safe and ethical AI for health. Geneva (CH): WHO; 16 May 2023. Available from: https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health.

20. Wiggers K. AI systems with ‘unacceptable risk’ are now banned in the EU. TechCrunch. 2 Feb 2025. Available from: https://techcrunch.com/2025/02/02/ai-systems-with-unacceptable-risk-are-now-banned-in-the-eu/.

21. Subasri V, Krishnan A, Kore A, Dhalla A, Pandya D, Wang B, et al. Detecting and Remediating Harmful Data Shifts for the Responsible Deployment of Clinical AI Models. JAMA Netw Open. 2025 Jun 2;8(6):e2513685.

22. Togunwa TO, Babatunde AO, Fatade OE, Olatunji R, Ogbole G, Falade A. Detection of pneumonia in children through chest radiographs using artificial intelligence in a low-resource setting: A pilot study. PLOS Digit Health. 2025 Sep 24;4(9):e0000713.

23. Suleman MU, Mursaleen M, Khalil U, Saboor A, Bilal M, Khan SA, et al. Assessing the generalizability of artificial intelligence in radiology: a systematic review of performance across different clinical settings. Ann Med Surg (Lond). 2025 Oct 22;87(12):8803–11.

24. Yang Y, Zhang H, Gichoya JW, Katabi D, Ghassemi M. The limits of fair medical imaging AI in real-world generalization. Nat Med. 2024 Oct;30(10):2838–48.

25. Koçak B, Ponsiglione A, Stanzione A, Bluethgen C, Santinha J, Ugga L, et al. Bias in artificial intelligence for medical imaging: fundamentals, detection, avoidance, mitigation, challenges, ethics, and prospects. Diagn Interv Radiol. 2025 Mar 3;31(2):75–88.

26. Colacci M, Huang YQ, Postill G, Zhelnov P, Fennelly O, Verma A, et al. Sociodemographic bias in clinical machine learning models: a scoping review of algorithmic bias instances and mechanisms. J Clin Epidemiol. 2025 Feb;178:111606.

27. Ho CN, Ayers AT, Aaron RE, Tian T, Sum CS, Klonoff DC. Importance of Cybersecurity/The Relevance of Cybersecurity to Diabetes Devices: An Update from Diabetes Technology Society. J Diabetes Sci Technol. 2025 Mar;19(2):470–4.

28. Nashwan AJ, Cabrega JA, Othman MI, Khedr MA, Osman YM, El-Ashry AM, et al. The evolving role of nursing informatics in the era of artificial intelligence. Int Nurs Rev. 2025 Mar;72(1):e13084.
