However you picture a robot-filled future (more Jetsons or Terminator?), it’s clear artificial intelligence (AI) is grabbing the interest of professionals across industries.
While AI's potential positive impact is recognized, there is growing concern about the use of emerging AI technology. To safeguard public health, the World Health Organization (WHO) recently issued a call for caution in the use of AI-based large language model (LLM) tools. LLMs generate natural-language text by drawing on large (and growing) datasets and include platforms such as ChatGPT, Bard, and other AI-based natural language-generating technology. As the use of this form of AI rises in both healthcare settings and the general population, nursing professional development (NPD) practitioners must be aware of how it's being used, as well as its risks and benefits.
In this article, ANPD members from different backgrounds share their perspectives on LLMs, namely ChatGPT, which OpenAI launched in December 2022 to significant attention and discussion. The contributors evaluate how these LLMs could streamline certain processes for NPD practitioners while simultaneously posing challenges the industry must address.
Navigating Large Language Models in Academic Settings
Contributed by Patsy Maloney, EdD, RN, NPD-BC, NEA-BC, and Susan Bindon, DNP, RN, NPD-BC, CNE, CNE-cl, FAAN
The use of AI tools in the academic setting has come to the forefront with the launch of ChatGPT. “Chat” refers to this language model’s ability to “talk with” the user, and GPT stands for generative pre-trained transformer. AI presents academic education with opportunity and challenge.
ChatGPT has the capacity to imitate both written and oral human conversation. It is trained on websites, books, and other data sources and uses machine learning to identify patterns and generate responses based on users' prompts. Although its responses are logical and conversational, they depend on the specificity of the prompts and on the accuracy and availability of the texts ChatGPT accesses. So, as intelligent as the tool's responses may sound, they cannot be trusted without some level of scrutiny.
AI’s ability to create logical and conversational information by analyzing numerous sources creates amazing opportunities for both teaching and learning. Yet, AI cannot discriminate between accurate and inaccurate information and even tends to fabricate it. This, combined with its free availability to students who may be desperate to complete an assignment, presents challenges for faculty, who must learn to take advantage of the opportunities while dealing with the challenges.
Concerns in Academia
Several challenges arise when students and faculty have access to ChatGPT and other artificial language-generating tools. We’ll highlight just a few in this short piece.
As mentioned, the accuracy and reliability of the information produced by the tool are inconsistent. While the writing may look well-organized and sound professional, the information may be biased based on the sites used for “training.” Information may be inaccurate or entirely false.
Next, the temptation for students to copy and paste information is high, given time pressures and the combined burden of school, work, and home responsibilities. Plagiarism, therefore, is a real risk, and faculty and administrators must put policies and tools in place to combat this threat to academic integrity.

Of great concern to faculty (and to patient safety downstream) is the risk that students will not truly learn the material, bypassing the productive struggle of learning for the gratification of completing an assignment. It takes time, practice, and feedback to develop critical thinking skills and clinical judgment, and skipping this critical step can be costly to all involved. There is also some concern that ChatGPT-generated text may breach copyright and/or confidentiality best practices, and students may inadvertently face legal or ethical questions.
Finally, increased reliance on technology could further reduce the human interaction necessary for learning and the development of professional identity. To combat these challenges, ChatGPT and similar tools should be used as supplements to, not replacements for, students’ primary sources of information and study practices.
Could AI Facilitate Learning?
Despite the challenges presented by ChatGPT, there are opportunities to facilitate learning. Knowing how to frame a question is critical for research in school and life. A well-framed prompt is necessary for ChatGPT to write a useful response.
Once ChatGPT has generated a response, that response must be evaluated for accuracy, validity, and freedom from copyright violations, and any necessary corrections must be made; this first iteration is the starting point of a writing assignment. From there, the student continues with their own review of the evidence, then adds the implications, applications, and conclusions.
ChatGPT serves as a co-writer or helpful friend, getting the student past writer’s block. The student must still frame a prompt, evaluate the response, review the evidence, and apply concepts to real life. All of these are important skills to learn.
Moving forward, we must embrace the learning opportunities while addressing the challenges through policies that reduce risk and position ChatGPT as the acknowledged starting point, not the endpoint, of a writing assignment.
Harnessing Large Language Models in Healthcare Settings
Contributions from Lillian Jensen, MN, RN, CNL, NPD-BC; Lauren Kalember, DNP, MSN, RN, NPD-BC, CCRN-K; Rachel Kelter, MSN, RN, NPD-BC, CPC; and Stephanie Zidek, MSN, RN, AGCNS-BC, NEA-BC, NPD-BC
LLMs can be incorporated into the healthcare setting, from diagnosis to discharge, while also serving as a valuable tool for administrative tasks commonly performed by NPD practitioners. There is great potential to improve efficiency and quality within the hospital setting when LLMs are used thoughtfully, but these models bring both opportunities and challenges.
Challenges in Healthcare Settings
There are several challenges to address when utilizing LLMs in a healthcare setting; top of mind is the temptation to rely on them in place of medical experts. ChatGPT warns that it should not be used for healthcare advice, but it still allows users to ask health-related questions. Patients, and even healthcare professionals, may use it without first consulting subject matter experts to validate its information.
Data privacy and security are another major concern. LLMs require large amounts of data to train and operate, but this data can be sensitive and confidential, so it must be protected from unauthorized access; as of yet, there is no comprehensive federal regulation addressing AI in the U.S. In addition, if LLMs are trained on biased data, the resulting system will be biased as well, leading to unfair or inaccurate decisions. Ethical considerations come into play on top of this: LLMs can be used to make decisions that affect patients, and it’s important to ensure those decisions are ethical and aligned with the ANA Code of Ethics for Nurses.
Relying on AI also has the potential to erode human abilities and interaction. For example, could over-reliance on ChatGPT weaken an NPD practitioner’s writing skills? And could regular use reduce human interaction, leading to further disconnection, lapses in empathy, and diminished critical thinking and clinical judgment? To avoid these pitfalls, it will be imperative for educational institutions to prioritize teaching and role-modeling human skills (a.k.a. soft skills).
Uses and Benefits in Healthcare Settings
Although the challenges of utilizing LLMs in a healthcare setting are considerable, they can enhance nursing education when used in a safe and purposeful way. Nurses with a wide range of credentials can find value in them, especially as the experience-complexity gap widens and staffing challenges persist. We must think differently, especially around care delivery, and this is where LLMs can be beneficial.
For example, LLMs could streamline tasks to help alleviate the workload of healthcare professionals, such as acting as a scribe to document patient care in the electronic health record or providing instant, on-demand answers to questions, supporting education needs. If these answers are taken with a grain of salt, they can be a helpful jumping-off point for professionals.
LLMs are being leveraged already to enhance the patient experience. Some healthcare organizations are using them to power their chat-based portals and programs within healthcare (pre-op, post-op, hospital stay). The Cleveland Clinic is using an AI language model as a virtual patient assistant that helps patients with chronic diseases manage their care by answering questions about their condition, providing educational materials, and even helping schedule appointments.
Utilizing LLMs in a patient’s room certainly carries ethical considerations, but some uses for NPD practitioners may be less ethically fraught. Automation of tasks, for example, could be a game-changer. In her role as a project manager and nurse planner, Rachel has used ChatGPT for project and meeting management: summarizing articles, scanning literature, outlining steps for interprofessional continuing education (IPCE), and assessing different nursing care delivery models.
She has also turned to the model when stuck on how to create staff survey questions that aligned with a learning outcome for an activity she was planning, and it provided her with useful ideas and examples. She took it a step further, generating ideas for long-term outcome data to measure in alignment with Magnet standards. It offered some ideas that hadn’t crossed her mind, and she prompted it to specify which Magnet standards the education and outcomes addressed to ensure accuracy.
Lillian has used ChatGPT in her search for DNP programs that met specific criteria, and it provided her with a comprehensive list of programs that fulfilled her prerequisites. She also used it for grant proposal brainstorming: it helped her structure ideas and organize her thoughts, enabling her to create a proposal outline that incorporated a summary of the project, a description of the target audience, a justification, and a budget.
Using an LLM to create more realistic and interactive simulations for nursing students and residents is of particular interest to NPD practitioners. These simulations can strengthen clinical judgment, critical thinking, and decision-making in the practice setting. Lauren Kalember, an NPD manager at Seattle Children’s Hospital, has showcased ChatGPT’s ability to produce patient case scenarios that could be helpful for NPD events.
As LLMs continue to develop, it is likely we will see even more ways to use AI to improve nursing care and enhance the role of the NPD practitioner. But while LLMs hold immense potential for streamlining processes and enhancing learning, we must exercise caution in their usage to ensure accuracy, maintain academic integrity, protect data privacy, and preserve the essential human skills and interactions that are central to nursing and healthcare.
By navigating these challenges wisely, we can harness the power of AI while minimizing potential risks, ultimately shaping a future that benefits both patients and healthcare professionals.