Founder & Medical Director at New York Cosmetic Skin & Laser Surgery Center
Answered 4 months ago
Dermatologists should keep their own clinical reasoning at the center and use generative AI only as a support tool. I remind trainees to form their assessment and plan before consulting AI, so their judgment stays strong. Always verify AI-generated suggestions against evidence-based guidelines and vetted clinical resources. I personally follow this code of conduct: https://nam.edu/our-work/programs/leadership-consortium/health-care-artificial-intelligence-code-of-conduct/. We must protect patient data, avoid unsecured platforms, stay alert to bias in model outputs, and be transparent with patients about how these tools contribute to their care.
Relying heavily on GenAI risks creating a dangerous educational dependency that hampers the development of crucial clinical pattern recognition and physical diagnostic skills among residents and fellows. To counteract this, physicians who direct training programs should utilize AI strictly for generating comprehensive differential diagnostic lists. Trainees must then be required to justify their final management plan using objective evidence gathered exclusively through a traditional history, physical examination, and independent critical thought.
Healthcare practitioners should treat AI as a supportive tool rather than a decision-making authority, maintaining human oversight at every stage of patient care. Research has shown the serious consequences of AI operating without proper human judgment, such as a Stanford study where a chatbot responded to a patient expressing suicidal thoughts by providing information about bridge heights instead of appropriate crisis intervention. Practitioners must continue to apply their clinical reasoning and training to validate AI-generated suggestions before implementing them in patient care. This approach allows healthcare providers to benefit from AI's capabilities while protecting against its limitations in empathy, context understanding, and bias.
Vice President and Lead Clinical Educator at Texas Academy of Medical Aesthetics
Answered 4 months ago
As the Vice President and Lead Clinical Educator at Texas Academy of Medical Aesthetics, I understand how generative AI can be useful to healthcare professionals. AI can sort through data quickly and offer recommendations, but it cannot replace the evaluation, experience, and practical skills gained over years of working with patients. Over-reliance on AI can erode those capabilities, and the data it draws on can be biased. The better approach is to view AI as a helper rather than a decision-making guide. Practitioners must weigh AI recommendations against established clinical guidelines and the personal needs of each patient. Auditing outcomes periodically helps identify potential bias and keeps care safe and appropriate. With human judgment at the core, lifelong training and on-the-job experience remain front and center. AI cannot replace professional training; used thoughtfully, it can make care more efficient and personalized.
Child, Adolescent & Adult Psychiatrist | Founder at ACES Psychiatry, Winter Garden, Florida
Answered 4 months ago
Treat generative AI like a brilliant but unlicensed medical student. It can recite textbook data perfectly, but it lacks the gut check that comes from years of face-to-face practice. It does not know the person sitting in front of you. To avoid losing our critical thinking skills, we must use these tools only after we form our own clinical opinion. In my practice, I write the core of my assessment before opening any software assistance. I use technology to format the paperwork or check for drug interactions, but never to derive the diagnosis itself. We also have to remember that algorithms are trained on old data, which often leaves out diverse groups. If we blindly trust the output, we risk missing symptoms that don't fit the "standard" model. The doctor must always be the final filter, checking the computer's work against the patient's reality.
The key to successfully incorporating AI tools in health care is a shared decision-making process. That means using AI as an aid to decisions, not the primary decision-maker, so it assists rather than replaces human judgment. Encourage interdisciplinary cooperation among healthcare workers when interpreting AI insights; when recommendations are worked through as a team, they get checked against both effective and efficient care. Leverage AI as a second opinion, supplementing years of clinical experience, to review patient cases in depth. Teams should also review AI outputs in regular meetings so there is a continuing conversation about ethical use. Create feedback loops that let the team give input on AI performance and drive continuous improvement. Train all staff on AI tools and collaborative practices; wonderful things happen when human instinct meets machinery.
The clinics we found most effective use AI tools within decision support systems rather than depending on them to make decisions. The team we worked with implemented a rule requiring senior clinicians to review and annotate all AI-generated content before it could be saved or shared with patients. This approach preserved human thinking as the central function, rather than letting AI replace it. Bias and equity issues are harder to detect when no tracking systems are in place. Our team created standard operating procedures (SOPs) that required staff to explain their reasons for accepting or rejecting AI recommendations. This structure upheld medical accountability while giving leadership the visibility to spot patterns of overconfidence, bias, or deviations in system use. Healthcare organizations get the best results with AI when they embed it in systems designed to protect and prioritize human judgment over automation.
I have always viewed AI's role as enhancing diagnostic abilities, never replacing a surgeon's hands-on experience. In my surgical practice, AI-generated assessments are merely the beginning of a clinician's due diligence. Clinical verification is the true safeguard, and it can only be achieved by maintaining competence in traditional examination methods while also using new technologies. When evaluating patients before procedures such as LASIK or cataract surgery, I always compare the AI imaging results to the clinical data collected directly from each patient. That dual method of data collection has spared me numerous adverse surgical outcomes that AI imaging errors could have caused. Teaching young surgeons to critically evaluate AI output begins with an emphasis on a complete physical examination before reviewing any automated analysis. Residents should first form a diagnosis from the direct patient encounter and manual tests. Building clinical reasoning this way ultimately yields the greatest benefit from AI systems, because the physician understands when an algorithm's suggested diagnosis corresponds to, or contradicts, what they observe directly in the patient.
The most important safeguard against overreliance on generative AI is preserving clinicians' independent diagnostic reasoning. AI should never become the first filter through which information passes; it must be the last. When practitioners begin with their own clinical assessment—history, examination, differential formulation—they maintain the cognitive discipline that AI cannot replace. Only then should generative tools be used to validate assumptions, surface overlooked considerations, or accelerate documentation. Healthcare teams can also limit risk by building "structured friction" into their workflows. This means requiring clinicians to articulate their clinical rationale before viewing AI suggestions, and documenting why an AI-generated recommendation was accepted or rejected. These small steps keep critical thinking active rather than passive. Finally, practitioners should recognize that generative AI models inherit the biases of the data they are trained on. The way to counter this is through local calibration: regularly comparing AI outputs against real patient outcomes, diverse population data, and institutional equity goals. In practice, the safest use of AI in healthcare is not as an authority, but as a second reader—useful, fast, and sometimes insightful, but always subordinate to clinical judgment.
As healthcare professionals, we must scrutinize AI-generated output and properly assess its accuracy and applicability to our patient population before integrating it into clinical decision-making. We should treat the convenience and speed of AI solutions as an opportunity to take a step back, critically evaluate the results, and validate them by cross-referencing the AI's analysis against our own clinical knowledge and understanding of the patient's unique needs and circumstances. While AI is a powerful tool, it is always secondary to the critical thinking and professional judgment of the healthcare provider. By remaining vigilant and not relying solely on AI to inform decisions about patient care, we can ensure that the recommendations and insights we provide to patients are individually relevant and prioritized to meet their specific needs. This approach helps prevent pitfalls and bias from the AI system itself and reinforces our commitment to delivering high-quality, patient-focused care.
At A S Medication Solution, we treat generative AI as a tool that supports clinical work rather than a shortcut that replaces the thinking behind it. The warning about erosion of judgment rings true because habits form quietly. A clinician who lets an AI summary shape every decision can drift from the discipline of checking sources, questioning patterns, and noticing the small irregularities that often reveal the real problem. We encourage teams to slow the process at key moments and pair any AI output with a verification step. A pharmacist reviewing a complex medication profile might use AI to surface interactions, then manually confirm the two or three highest risk ones against primary references. That simple pause keeps the skill set active and lowers the risk of repeating bias that may appear in training data. Rotation exercises also help. Asking staff to walk through a case without AI guidance once a week reveals how much they still rely on their own clinical memory. The combination creates a healthier balance. AI speeds the work, and humans keep the judgment sharp, which preserves the level of care our patients expect.
Executive President at Interdisciplinary Dental Education Academy (IDEA)
Answered 4 months ago
The most effective use of AI in a healthcare environment requires clinicians to engage both their eyes and hands to verify any digital result. Practitioners should perform manual checks on their patients, such as moving the patient's jaw in a controlled manner or checking for muscle tightness through palpation, so that they base their clinical decision-making on findings they observe directly. This keeps clinicians' "gut feelings" active and gives them an opportunity to recognize changes in function that AI may have misinterpreted. A systematic way of reviewing AI recommendations adds a further layer of protection for the practitioner. Research teams compare AI recommendations to their own independent investigations and assess where AI draws on biased databases. That system of checks provides some assurance that the AI's reasoning matches the practitioner's own when developing a treatment plan for a specific patient.
If you're in healthcare, the safest way to bring generative AI into your workflow is to treat it like a second opinion, not a final answer. The danger isn't the tool itself, it's the temptation to let it replace the thinking. AI is great at summarizing charts, drafting notes, or highlighting patterns you might've missed, but the moment you let it make the call for you, your clinical instincts start to dull. The best approach I've seen is building intentional checkpoints into the workflow. For example, some practitioners run AI-generated summaries or recommendations through a quick "clinical gut check" before they act on anything. Others pair junior clinicians with senior mentors to review AI-assisted decisions together — the AI speeds up the work, but the teaching still happens. That combination keeps the critical thinking muscle alive. And honestly, transparency matters. When a tool is used, say it. Patients don't expect you to be a machine — they expect you to use every resource available and still apply your judgment to their specific situation. AI should make clinicians sharper, not more passive. The trick is staying in the driver's seat and letting the tech ride shotgun.
I've been working in IT and cybersecurity for over 17 years, with the last decade focused on protecting healthcare organizations through HIPAA compliance and security frameworks. At Sundance Networks, we've seen how medical practices steer technology adoption--and where they stumble. The biggest protection against AI overreliance is maintaining what we call "human checkpoints" in clinical workflows. When we help healthcare clients implement AI tools, we insist on building in mandatory review steps where practitioners must document their independent reasoning before viewing the AI suggestion. One dental practice we work with uses AI for diagnostic assistance but requires doctors to record their preliminary assessment first--this keeps their clinical skills sharp while still benefiting from the technology. For addressing bias and equity issues, the answer is data auditing and diverse testing. We require our healthcare clients to regularly review AI outputs against actual patient outcomes, specifically tracking performance across different demographic groups. When one medical consultant we support noticed their AI scheduling tool was deprioritizing certain zip codes, we caught it because they were actually checking the patterns--not just trusting the automation. The practical step any practitioner can take today is treating AI like a junior colleague, not an oracle. You wouldn't let a first-year resident make decisions without supervision, so don't let AI do it either. Document your thinking process separately, review AI recommendations with skepticism, and track where the tools succeed or fail with your actual patient population.
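That demographic audit is easy to describe and easy to skip, so here is a minimal sketch of what it could look like in practice, assuming the practice keeps a simple log of AI outputs and actual outcomes. The column names, sample values, and the 0.15 threshold are illustrative assumptions, not details from this answer.

```python
# Minimal sketch: audit AI outputs against actual patient outcomes, broken
# down by demographic group. Column names and threshold are hypothetical.
import pandas as pd

audit = pd.DataFrame({
    "demographic_group": ["group_a", "group_a", "group_b", "group_b", "group_c", "group_c"],
    "ai_output":         ["flag", "clear", "flag", "flag", "clear", "clear"],
    "actual_outcome":    ["flag", "clear", "clear", "flag", "flag", "flag"],
})

# Accuracy per group: a group where the tool is noticeably less accurate is
# exactly the kind of pattern this answer describes catching by checking.
audit["correct"] = audit["ai_output"] == audit["actual_outcome"]
accuracy_by_group = audit.groupby("demographic_group")["correct"].mean()
print(accuracy_by_group)

# Flag groups whose accuracy falls well below the overall rate so a human
# reviews the pattern instead of trusting the automation.
overall = audit["correct"].mean()
needs_review = accuracy_by_group[accuracy_by_group < overall - 0.15]
print("Groups needing manual review:", list(needs_review.index))
```

Even a spreadsheet version of this works; the point is that the comparison happens routinely and visibly, not ad hoc.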
At RGV Direct Care we have learned that AI settles into clinical work cleanly only when the guardrails stay visible. The simplest protection comes from slowing down the first mile. A clinician who runs every AI-generated note, summary or recommendation through a brief manual check catches small shifts in wording or emphasis that could change a patient's understanding. That check takes under a minute once the habit forms. Another safeguard is keeping AI away from final decisions and using it instead to surface patterns that deserve human judgment. A diabetic patient's data trend may look stable on an algorithmic chart, yet the clinician notices that the patient's stress levels rose during the same period. That context changes the plan entirely. Transparency matters too. Patients feel safer when they know which part of their visit involved AI and which part came from the clinician's own assessment. It reduces the risk of misplaced trust and keeps communication steady. The strongest outcomes appear when AI supports the flow of care while the practitioner stays accountable for every conclusion that touches the patient's health.
I run a landscaping company, not a medical practice, but we've dealt with the exact same trap when new equipment started coming with "smart" features and diagnostic AI. Our newer crew members would just trust whatever the machine's computer said about soil conditions or irrigation needs without getting their hands dirty first. What fixed it for us was mandatory field validation before trusting any tech reading. Before anyone adjusts a system based on what a smart controller says, they physically check moisture levels, inspect the plants, and write down what they observe. We log both the AI recommendation and the actual field reality--caught our "smart" irrigation system recommending 40% more water for an area that was already oversaturated because the sensor couldn't distinguish between recent rain and actual soil needs. The key is making the technology prove itself against real-world results in YOUR specific environment. We review our tech recommendations against actual plant health monthly, and we've found our systems are wrong about 15-20% of the time because they're trained on different climates or soil types than what we see in Massachusetts. If doctors did the same--tracked AI suggestions against actual patient outcomes in their specific population--they'd quickly see where the algorithms don't match their reality. Train people to trust their observations first, then use AI as a second opinion to catch what they might have missed. Never the other way around.
I've spent 20+ years in event production where split-second decisions affect thousands of attendees, and we've been integrating AI tools into our workflow at EMRG Media for the past two years. The pattern I've seen mirrors what healthcare is facing--automation can make you lazy if you're not intentional about it. What works for us is the "explain it out loud" rule. Before our team accepts any AI recommendation--whether it's attendee segmentation or budget allocation--they have to verbally explain to another team member why that suggestion makes sense for our specific event. We caught a major issue last year when AI suggested cutting transportation services for an event because "most attendees drove themselves historically," but talking it through revealed those attendees were coming from a different demographic than our current audience. Build deliberate friction into your AI workflows. At our Event Planner Expo with 2,500+ attendees, we use AI for initial vendor matching, but our team must physically visit top venue options and meet vendors face-to-face before finalizing. It takes longer, but that real-world validation catches what algorithms miss--like a "highly-rated" caterer whose kitchen couldn't handle our volume or accessibility issues the data didn't flag. The key is treating AI outputs as rough drafts, not final answers. I train my team to ask "what is this missing?" rather than "is this correct?" That mindset shift keeps critical thinking active while still capturing efficiency gains.
I'm not a healthcare practitioner, but I've spent years building AI systems that directly impact business decisions--and I've seen how dangerously easy it is to treat AI output as gospel instead of a starting point. The biggest fix we implemented: forced documentation of the *why* behind every decision, not just the result. When our AI agents flag a hosting issue or recommend a site change, my team has to log their own assessment first--what they see, what they think is happening, and whether the AI's read makes sense in context. We caught cases where our monitoring AI flagged "suspicious traffic" that was actually a legitimate product launch driving 340% more visitors than normal. If we'd trusted the system blindly, we would've throttled real customers. For healthcare, I'd apply the same structure: require practitioners to document their clinical reasoning *before* and *after* consulting AI. What do the symptoms suggest to you? What does the AI say? Where do they align or conflict, and why might that gap exist? That process keeps the human brain engaged and creates a feedback loop that exposes when the AI is operating outside its training data--like when our systems were wrong 18% of the time on mobile layout predictions because they'd never seen our specific user demographic. The second piece is variance tracking. We log every AI recommendation against actual outcomes and flag patterns where the tool consistently misreads our environment. In medicine, that'd mean tracking AI suggestions against patient outcomes segmented by demographics, comorbidities, and local population characteristics--then using those gaps to retrain your own clinical instincts, not just the algorithm.
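To make the before-and-after documentation plus variance tracking concrete, here is a rough sketch of the kind of record and flagging logic that could support it. Every field name, the minimum case count, and the error-rate threshold are invented for illustration; they are not taken from the answer.

```python
# Sketch: log the human read, the AI suggestion, and the eventual outcome for
# each case, then flag segments where the AI is consistently wrong.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    case_id: str
    own_assessment: str       # what the human concluded before consulting the AI
    ai_suggestion: str        # what the AI recommended
    rationale: str            # why the two align or conflict
    segment: str              # e.g. demographic band, comorbidity group, site type
    outcome_matched_ai: bool  # did the real outcome support the AI's read?

def flag_misread_segments(records, min_cases=10, max_error_rate=0.2):
    """Return segments where the AI's suggestions were wrong too often."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r.segment] += 1
        if not r.outcome_matched_ai:
            errors[r.segment] += 1
    return [
        seg for seg, n in totals.items()
        if n >= min_cases and errors[seg] / n > max_error_rate
    ]
```

The value is less in the code than in the habit: the human's own reasoning is written down before the AI's, and the flagged segments show where the tool is operating outside its training data.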
I'm not in healthcare, but I run AI implementations for small businesses daily and co-founded JustStartAI specifically to help people use these tools without losing their judgment. The pattern I see causing the most damage isn't bad AI--it's when someone stops asking "does this make sense for *my* situation?" We built a rule into our agency workflows: no one executes an AI recommendation without documenting one conflicting data point first. When our AI suggested cutting 40% of a contractor's service pages because they had "low traffic," my team had to find evidence that contradicted that before proceeding. Turns out those pages converted at 3x the rate of higher-traffic ones--the AI missed conversion value entirely because it was only trained on volume metrics. For healthcare practitioners, I'd enforce the same discipline but flip it: before you even prompt the AI, write down your top differential and your confidence level. Then compare. When they match, you've got confirmation. When they don't, you've got the most valuable learning moment--either the AI caught something you missed, or you're seeing context the model can't access. Both scenarios keep your diagnostic muscles active instead of atrophied. The bias issue gets worse when you can't see the training data, so track your own disagreement rate by patient demographic. If you're overriding the AI 40% of the time for patients over 65 but only 10% for patients under 40, that's your early warning system that the tool wasn't built for your actual population mix.
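As a minimal illustration of the disagreement-rate tracking described above, here is a short sketch. The age bands, column names, and the alert gap are illustrative assumptions, not figures from the answer beyond the 40% vs 10% example it gives.

```python
# Sketch: compute how often the clinician overrides the AI, per age band, and
# raise an early warning when the rates diverge sharply across bands.
import pandas as pd

reviews = pd.DataFrame({
    "age_band":    ["under_40", "under_40", "40_to_64", "65_plus", "65_plus", "65_plus"],
    "overrode_ai": [False, False, True, True, True, False],
})

override_rate = reviews.groupby("age_band")["overrode_ai"].mean()
print(override_rate)

# A large gap (e.g. ~40% overrides for patients over 65 vs ~10% under 40, as
# in the answer) suggests the tool was not built for this population mix.
if override_rate.max() - override_rate.min() > 0.25:
    print("Override rates diverge sharply across age bands; audit the tool for this population.")
```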
Shamsa Kanwal, M.D., is a board-certified Dermatologist with over 10 years of clinical experience. She currently practices as a Consultant Dermatologist at https://www.myhsteam.com/ (profile: https://www.myhsteam.com/writers/6841af58b9dc999e3d0d99e7).
In my view, the safest approach is to treat generative AI as a clinical assistant, not a decision maker, and make it a rule that any AI suggestion must be checked against primary sources, guidelines, and the clinician's own reasoning before it reaches the patient. For trainees, I ask them to present their assessment and plan first, then compare it with any AI output so they keep exercising diagnostic and critical thinking skills. At a system level, we can build protocols that require human sign-off, track when AI is used, and regularly audit outputs for patterns of bias, especially around race, gender, and access to care. Transparency with patients about when AI is used, plus ongoing education about its limits, helps keep the clinician, not the algorithm, at the center of care.