Medicine and Health
Prompt Engineering as an Important Emerging Skill for Medical Professionals: Tutorial
B. Meskó
With the emergence of large language models (LLMs), the most popular being ChatGPT, which attracted over 100 million users in only 2 months, artificial intelligence (AI), especially generative AI, has become accessible to the masses. This is an unprecedented paradigm shift, not only because the use of AI is becoming more widespread but also because of the possible implications of LLMs in health care. Numerous studies have shown which medical tasks and health care processes LLMs can contribute to in order to ease the burden on medical professionals, increase efficiency, and decrease costs. Health care institutions have started investing in generative AI, medical companies have begun integrating LLMs into their businesses, medical associations have released guidelines on the use of these models, and medical curricula have started covering this novel technology. Thus, a new, essential skill has emerged: prompt engineering.
Prompt engineering is a relatively new field of research that refers to the practice of designing, refining, and implementing prompts or instructions that guide the output of LLMs to help in various tasks. It is essentially the practice of effectively interacting with AI systems to optimize their benefits.
In the context of medical professionals and health care in general, this could encompass the following:
- Decision support: medical professionals can use prompt engineering to optimize AI systems to aid in decision-making processes, such as diagnosis, treatment selection, or risk assessment.
- Administrative assistance: prompts can be engineered to facilitate administrative tasks, such as patient scheduling, record keeping, or billing, thereby increasing efficiency.
- Patient engagement: prompt engineering can be used to improve communication between health care providers and patients (eg, medication reminders, appointment scheduling, lifestyle advice).
- Research and development: in research scenarios, prompts can be crafted to assist in tasks such as literature reviews, data analysis, and generating hypotheses.
- Training and education: prompts can be engineered to facilitate the education of medical professionals, including ongoing training in the latest treatments and procedures.
- Public health: on a larger scale, prompt engineering can assist in public health initiatives by helping analyze population health data, predict disease trends, or educate the public.
Prompt engineering, therefore, has the potential to improve the efficiency, accuracy, and effectiveness of health care delivery, making it an increasingly important skill for medical professionals. This paper summarizes the current state of research on prompt engineering and provides practical recommendations for health care professionals to improve their interactions with LLMs.
The State of Prompt Engineering
The use of LLMs, especially ChatGPT, comes with major limitations and risks. Since ChatGPT is not updated in real time and its training data only include information up to November 2021, it may lack crucial, up-to-date medical research or changes in clinical guidelines. It cannot access or process individual user data or context, limiting personalization and increasing the risk of misinterpretation. Users must verify responses with qualified health care professionals; the model may produce inaccurate or unsafe answers, lacks empathy, and poses privacy risks, potentially violating regulations such as HIPAA. Despite these risks, the potential benefits motivate improving prompt design.
Prior efforts include: (1) a catalog of prompt engineering techniques presented in pattern form to address common issues when conversing with LLMs; (2) summaries of advances in prompt engineering targeted to NLP researchers in the medical domain and academic writers; and (3) an empirical study demonstrating AI-generated health awareness messages via prompt engineering. While research exists, there has been no comprehensive, practical guide specifically for medical professionals—a gap this tutorial aims to fill.
This tutorial adopts a narrative, practice-oriented approach. It synthesizes current literature on prompt engineering and compiles concrete, actionable recommendations tailored for health care professionals. The paper outlines general strategies for skill development—understanding AI/ML fundamentals without requiring coding expertise, familiarizing oneself with specific LLM capabilities and limitations, and engaging in regular practice with iterative refinement. It then provides specific prompt-design recommendations with practical, health care–relevant examples (eg, being specific, providing context, experimenting with styles, defining goals, role-based prompting, leveraging threads, iterative refinement, asking open-ended questions, requesting examples, temporal awareness, setting realistic expectations, and one-shot/few-shot prompting). The tutorial also enumerates major limitations of ChatGPT and lists commonly used plugins relevant to health care (eg, ScholarAI, AskYourPDF, Wolfram). The author notes using GPT-4 during ideation to ensure comprehensive coverage and testing recommendations through imaginary scenarios.
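The thread-based, iterative-refinement pattern mentioned above can be sketched in code. The snippet below keeps a conversation as a list of role-tagged messages, the convention used by most chat-style LLM APIs; note that `call_llm` is a hypothetical stand-in for a real model call, and the clinical prompts are illustrative only, not part of the original tutorial.

```python
# Minimal sketch of thread-based, iterative prompting with a chat-style LLM.
# The role/content message format follows the common chat-API convention;
# call_llm is a hypothetical placeholder, not a real API binding.

def call_llm(messages):
    """Hypothetical model call; a real implementation would invoke an LLM API."""
    return f"[model reply to: {messages[-1]['content']}]"

def start_thread(system_instruction):
    """Open a conversation thread with a role-defining system message."""
    return [{"role": "system", "content": system_instruction}]

def ask(thread, prompt):
    """Append a user prompt, obtain a reply, and keep both in the thread."""
    thread.append({"role": "user", "content": prompt})
    reply = call_llm(thread)
    thread.append({"role": "assistant", "content": reply})
    return reply

# Role-based prompting plus iterative refinement within one thread:
thread = start_thread("You are a clinical pharmacist. Answer concisely.")
ask(thread, "List common drug interactions with warfarin.")
ask(thread, "Now rewrite that list in plain language for a patient handout.")
```

Because every prior turn stays in `thread`, the second prompt refines the first answer rather than starting from scratch, which is the essence of the "use threads" and "iterate and refine" recommendations.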
- Generative AI, particularly LLMs like ChatGPT, has rapidly become accessible to the public (ChatGPT reached over 100 million users in 2 months), creating new opportunities and challenges in health care.
- Prompt engineering emerges as an essential skill for medical professionals to optimize interactions with LLMs across decision support, administration, patient engagement, research, education, and public health.
- Major limitations and risks of ChatGPT/LLMs in health care include: training cutoff (only up to November 2021), lack of real-time updates, inability to personalize based on individual patient data or context, potential inaccuracies and hallucinations, limited empathy, and privacy/confidentiality risks (eg, HIPAA concerns). All outputs require verification by qualified professionals.
- General strategies to improve prompt engineering: understand basic AI/ML principles without needing coding; learn system-specific capabilities/limits; practice regularly and iterate in real-world scenarios.
- Specific prompt recommendations with examples:
- Be as specific as possible.
- Describe your setting and provide context.
- Experiment with different prompt styles (direct question, list, summary, process).
- Identify the overall goal of your prompt.
- Ask the model to play roles (eg, data scientist, nutritionist).
- Use threads to build on prior conversation.
- Iterate and refine; ask the model to modify outputs based on prior responses.
- Ask open-ended questions.
- Request specific examples.
- Set realistic expectations.
- Additional techniques: incorporate temporal awareness in prompts; use one-shot and few-shot prompting; “prompting for prompts” (ask the model to suggest better prompts).
- Practical tooling: plugins relevant to health care tasks include ScholarAI, AskYourPDF, Show Me Diagrams, Wolfram, Shorties, and Video Summary.
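To make the one-shot/few-shot technique from the list above concrete, the sketch below prepends worked input/output pairs to a prompt so the model can infer the desired format. The triage task, the example messages, and the `build_few_shot_prompt` helper are illustrative assumptions, not content from the original tutorial.

```python
# Sketch of few-shot prompting: worked examples are placed before the new
# query so the model infers the expected output format. Task is illustrative.

def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, example input/output pairs, and a new query
    into a single few-shot prompt string."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Classify each patient message as URGENT or ROUTINE.",
    [
        ("I have crushing chest pain and shortness of breath.", "URGENT"),
        ("Can I take my vitamin with breakfast instead of dinner?", "ROUTINE"),
    ],
    "My surgical wound is red, swollen, and leaking pus.",
)
```

With two labeled examples this is few-shot prompting; dropping to a single example pair gives the one-shot variant described in the tutorial.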
The tutorial addresses the need for practical, health care–oriented guidance on interacting with LLMs by consolidating evidence, articulating risks, and providing step-by-step, example-driven prompt strategies. By focusing on specificity, context, iterative refinement, role-based prompting, temporal framing, and exemplar-driven methods (one-shot/few-shot), clinicians and other health care professionals can elicit more accurate, relevant, and useful outputs from LLMs while acknowledging model limitations and regulatory constraints. Implementing these practices can enhance decision support, streamline administrative workflows, improve patient education and engagement, and support research and training. Ultimately, improved prompt engineering helps maximize benefits and mitigate risks when deploying LLMs in health care settings.
As the skill of prompt engineering has gained significant interest worldwide, especially in the health care setting, it is important to incorporate the practical methods described in this paper into medical curricula and postgraduate education. While the technical details and background of generative AI will likely be included in future curricula, medical students would benefit from learning the most practical tips for using LLMs even before that happens. The general message for every LLM user should be to use such AI tools to expand their knowledge, capabilities, and ideas rather than to replace their own work. Ideally, this approach and mindset would stem from trained medical professionals who could share it with their patients. In summary, as more patients and medical professionals use AI-based tools, with LLMs being the most popular, it seems inevitable to address the challenge of improving prompt engineering skills. Because doing so does not require technical knowledge or programming expertise, prompt engineering can be considered an essential emerging skill for leveraging the full potential of AI in medicine and health care.