The most surprising thing wasn't AI helping me diagnose—it was AI removing the barrier that was making me a worse diagnostician in the first place. We implemented Cleo, an AI scribe in the emergency room that listens to patient encounters and generates documentation in real time. I expected it to save time. What I didn't expect was how much it would improve my clinical thinking. Before Cleo, I was mentally multitasking during every patient encounter—listening to the patient while simultaneously thinking about how I'd document this, what boxes I needed to check, what phrases would satisfy billing requirements. That cognitive load is invisible until it's gone. You don't realize how much bandwidth documentation is stealing from actual medicine. Now I walk into a room, sit down, make eye contact, and just listen. I ask better follow-up questions because I'm not mentally composing a note. I catch subtle details—the hesitation before answering, the symptom they mention offhand—that I might have missed when half my brain was focused on the EHR. The AI isn't diagnosing for me. It's giving me back the cognitive space to diagnose better myself. The other shift: I'm more thorough in my verbal assessment because I know it's being captured. I narrate my reasoning out loud—"given your symptoms and risk factors, I'm concerned about X, so we're going to rule that out with Y." The documentation becomes a byproduct of good medicine rather than a separate task competing with it. The surprise was realizing that the bottleneck in my clinical practice wasn't knowledge or skill—it was administrative burden fragmenting my attention.
The least expected part was not accuracy. It was the way AI transformed the process of thinking. More than once, an AI tool surfaced a subtle discrepancy at a very early stage, connected to a pattern across symptoms that are typically considered individually. Nothing big or unusual, just the kind of detail that is easy to overlook when visits are short. It was not a substitute for clinical judgment, but it prompted the team to ask a better follow up question sooner. That change made the team proactive rather than reactive. Rather than chasing symptoms across several visits, the dialogue shifted towards ruling things out or in more intentionally. Documentation also became more straightforward, since the rationale was laid out step by step rather than reconstructed later. At RGV Direct Care, the value of AI has been in playing the role of a second set of eyes, not a decision maker. It also strengthened the habit of slowing down enough to consider other options without losing the patient's narrative. The largest change was in confidence. Not blind faith in a tool, but trust that nothing obvious is being overlooked, while the care stays personal and human.
The most surprising experience I had was realising how useful AI could be at surfacing patterns I'd grown used to normalising. When I trialled AI-supported analysis on recurrent blister cases, it highlighted links between footwear changes, activity spikes, and skin response timing that I'd seen anecdotally but hadn't consistently connected across patients. In one case, it prompted me to ask a different follow up question about training load rather than shoe fit, which changed the prevention plan completely and stopped repeat breakdown. What changed my approach was not relying on it for diagnosis, but using it to challenge my assumptions. My view is that AI works best as a second set of eyes, not a decision maker. The practical takeaway is to use diagnostic support tools to broaden your questioning, not narrow it. When AI nudges you to look again, clinical judgement becomes sharper, not replaced.
One of the most surprising experiences I have had with AI for diagnostic support came not from a single dramatic diagnosis, but from seeing how consistently it surfaced patterns that humans tended to underweight in early assessment. In one pilot I observed closely, clinicians were using an AI tool as a second reader for imaging and structured clinical data. In several cases involving early stage inflammatory and autoimmune conditions, the AI repeatedly flagged combinations of mild abnormalities that clinicians initially considered nonspecific. Individually, none of the signals were alarming. Taken together, they pointed to a narrower differential much earlier than usual. What stood out was not that the AI was "right" every time, but that it was disciplined. It did not get distracted by anchoring or by the most obvious explanation. That forced clinicians to slow down, revisit assumptions, and document why they agreed or disagreed with the AI suggestion. In a few cases, that meant ordering a targeted follow up test sooner, which shortened the time to a confirmed diagnosis. The experience changed how the teams approached that class of conditions. AI was no longer treated as a diagnostic oracle, but as a structured challenger. It encouraged earlier hypothesis testing, clearer documentation of reasoning, and more deliberate use of confirmatory tests. The biggest shift was cultural. Instead of asking "Is the AI correct?", the better question became "What is it seeing that I might be discounting?" That mindset turned AI into a safety and learning tool rather than a replacement for clinical judgment.
My most surprising experience using AI for diagnostic support came during a collaboration with a local clinic that serves our PuroClean community. The AI tool flagged subtle risk factors in patient data that a rushed manual review might miss. In one case, it highlighted patterns linked to early respiratory decline. The provider ordered follow up imaging sooner than planned. That early action led to faster intervention and a better outcome. It showed me that AI works best as a second set of eyes, not a replacement for judgment. I now approach complex cases with a blend of human review and AI cross checks. Balanced use improves confidence and patient safety.