The most surprising thing wasn't AI helping me diagnose—it was AI removing the barrier that was making me a worse diagnostician in the first place. We implemented Cleo, an AI scribe in the emergency room that listens to patient encounters and generates documentation in real time. I expected it to save time. What I didn't expect was how much it would improve my clinical thinking.

Before Cleo, I was mentally multitasking during every patient encounter—listening to the patient while simultaneously thinking about how I'd document this, what boxes I needed to check, what phrases would satisfy billing requirements. That cognitive load is invisible until it's gone. You don't realize how much bandwidth documentation is stealing from actual medicine.

Now I walk into a room, sit down, make eye contact, and just listen. I ask better follow-up questions because I'm not mentally composing a note. I catch subtle details—the hesitation before answering, the symptom they mention offhand—that I might have missed when half my brain was focused on the EHR. The AI isn't diagnosing for me. It's giving me back the cognitive space to diagnose better myself.

The other shift: I'm more thorough in my verbal assessment because I know it's being captured. I narrate my reasoning out loud—"given your symptoms and risk factors, I'm concerned about X, so we're going to rule that out with Y." The documentation becomes a byproduct of good medicine rather than a separate task competing with it.

The surprise was realizing that the bottleneck in my clinical practice wasn't knowledge or skill—it was administrative burden fragmenting my attention.
The least expected part was not accuracy. It was the way AI transformed the process of thinking. In one case, an AI tool surfaced a subtle discrepancy at a very early stage, connecting a pattern across symptoms that are typically considered individually. Nothing big or unusual—just the kind of thing that is easy to overlook when visits are short. It was not a substitute for clinical judgment, but it prompted the team to ask a better follow-up question sooner.

That shift made the team proactive rather than reactive. Instead of chasing symptoms across several visits, the conversation moved toward ruling things in or out more intentionally. It also made documentation more straightforward, since the rationale was laid out step by step as it happened rather than reconstructed later.

The value at RGV Direct Care has come from AI acting as a second set of eyes, not a decision maker. It also reinforced the habit of slowing down enough to consider other possibilities without losing the patient's narrative.

The biggest change was in confidence. Not blind faith in a tool, but trust that nothing obvious is being overlooked—while the care stays personal and human.