When building AI systems, it's easy to focus on technical precision. We get caught up in accuracy scores and performance metrics, and the conversation about diversity often becomes about fixing biased data. While that's crucial, it frames the problem as a technical bug to be patched, assuming the fundamental goal we set for the system was correct in the first place. True inclusion isn't just about adding more varied data points; it's about questioning whether you're even trying to solve the right problem.

The most profound insight I gained came from a project where we were building a tool to help managers identify employees who might be disengaged or at risk of leaving. Our team, composed mostly of engineers and data scientists, defined the problem as a prediction task. We looked for proxies in the data—things like decreased activity in shared documents or fewer messages in team channels. We were proud of our model's predictive power.

But when we brought in a few experienced HR leaders and industrial psychologists to review our approach, they fundamentally challenged our goal. They pointed out that our model was selecting for a specific personality type: the highly visible, extroverted collaborator. An introverted but deeply engaged engineer who preferred to work quietly and think deeply before communicating would be flagged as a flight risk. A working parent who logged off promptly at 5 p.m. to be with their family might look "disengaged" next to a recent grad who was online late into the evening. Our tool wasn't measuring disengagement; it was measuring conformity to a narrow, neurotypical ideal of what a "good employee" looks like.

The unexpected insight wasn't that our data was biased, but that our entire definition of the problem was. We ended up building a tool that gave managers insights into team collaboration patterns, not one that put red flags on individuals. It taught me that diverse perspectives don't just help you find better answers; they force you to ask better questions.
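To make the pitfall concrete, here is a minimal sketch of how a proxy-based "disengagement" score ends up measuring visibility rather than engagement. The feature names, weights, and numbers are hypothetical illustrations, not our actual model:

```python
from dataclasses import dataclass

@dataclass
class ActivitySnapshot:
    """Weekly counts pulled from collaboration tools (hypothetical proxies)."""
    doc_edits: int         # edits in shared documents
    channel_messages: int  # messages in team channels
    evening_logins: int    # sessions after typical working hours

def disengagement_score(a: ActivitySnapshot) -> float:
    # Naive proxy model: less visible activity -> higher "flight risk".
    # Note what it actually rewards: being loud and online late,
    # not doing good work.
    visibility = a.doc_edits + 2 * a.channel_messages + a.evening_logins
    return 1.0 / (1.0 + visibility)  # higher = "more disengaged"

# A deeply engaged introvert who works quietly and logs off at 5 p.m. ...
quiet_engineer = ActivitySnapshot(doc_edits=3, channel_messages=2, evening_logins=0)
# ... versus a recent grad who is highly visible in chat every evening.
visible_grad = ActivitySnapshot(doc_edits=8, channel_messages=40, evening_logins=12)

print(disengagement_score(quiet_engineer))  # ~0.125 -> flagged as a flight risk
print(disengagement_score(visible_grad))    # ~0.010 -> looks "engaged"
```

No bug fix inside the model would repair this; the bias lives in the choice of features, which is exactly what the outside reviewers caught.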
Industry Leader in Insurance and AI Technologies at PricewaterhouseCoopers (PwC)
During an AI project for an insurance client, we brought together underwriters, claims adjusters, compliance officers, and data scientists to design a claims triage model. At first, the technical teams focused on accuracy, but the adjusters pointed out a real-world problem: a model that seems perfectly "optimized" might flag sensitive cases, such as workplace injuries or fatalities, without considering their emotional or regulatory dimensions. By including the adjusters' input, we created a workflow that pairs predictive scoring with ethical checks and human review for sensitive claims. The unexpected insight: diversity improves responsibility, not just innovation. Accuracy alone is not enough without empathy. Real progress in AI happens when systems are shaped by both data and human experience, focusing on what is responsible as well as what is correct.
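A simplified sketch of that routing logic, with illustrative claim categories and thresholds rather than the client's actual rules, might look like this:

```python
from dataclasses import dataclass

# Categories the adjusters flagged as requiring human handling no matter
# how confident the model is (illustrative list).
SENSITIVE_CATEGORIES = {"workplace_injury", "fatality"}

@dataclass
class Claim:
    claim_id: str
    category: str
    model_score: float  # triage model's predicted priority, 0 to 1

def route_claim(claim: Claim) -> str:
    # Ethical check first: sensitive claims never go straight through,
    # because they need emotional and regulatory judgment.
    if claim.category in SENSITIVE_CATEGORIES:
        return "human_review"
    # Otherwise, use the predictive score as intended.
    if claim.model_score >= 0.8:
        return "fast_track"
    return "standard_queue"

print(route_claim(Claim("C-101", "auto_glass", 0.92)))        # fast_track
print(route_claim(Claim("C-102", "workplace_injury", 0.95)))  # human_review
```

The design point is the ordering: the human-review gate sits in front of the score, so no level of model confidence can bypass it.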
At Enable Healthcare, building AI has been about valuing and learning from different viewpoints. Designing our AI offerings with a cross-disciplinary focus meant integrating the competencies of data science, healthcare, and ethics with the voice of the patient. For instance, building our AI-driven automated care management tool showed us early on the importance of including both patients and clinicians from diverse demographics. Patients from different backgrounds revealed gaps and biases in how the tool assessed and prioritized healthcare needs. Models designed before patient input overlooked social determinants, such as lack of transportation and cultural differences, that can hinder access to care. We revised our initial risk estimates so that social determinants were weighted into more equitable risk models. Looking past the numbers has immense value, and so does clinical experience: the confluence of strategy and lived experience reveals the patient-centered challenges that actually need solving. In the end, working with different points of view allowed us to create an AI that more effectively supports tailored care and reduces unforeseen gaps. It emphasized that successful AI development revolves not just around the technology, but around continuously incorporating different human perspectives. This inclusive approach has become central to Enable Healthcare's innovation strategy.
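As a rough illustration of that revision, here is a minimal sketch, with hypothetical feature names and hand-set weights, of how a risk score shifts once social determinants are weighted in:

```python
# Hypothetical weights for illustration; the real models were trained, not hand-set.
CLINICAL_WEIGHTS = {"chronic_conditions": 0.6, "recent_admissions": 0.4}
SDOH_WEIGHTS = {"no_transportation": 0.3, "language_barrier": 0.2}

def risk_score(patient: dict, include_sdoh: bool) -> float:
    # Base score from clinical factors only.
    score = sum(w * patient[k] for k, w in CLINICAL_WEIGHTS.items())
    if include_sdoh:
        # Equity revision: access barriers raise care-management priority.
        score += sum(w * patient[k] for k, w in SDOH_WEIGHTS.items())
    return score

# A patient with modest clinical risk but major access barriers.
patient = {"chronic_conditions": 1, "recent_admissions": 0,
           "no_transportation": 1, "language_barrier": 1}

print(risk_score(patient, include_sdoh=False))  # 0.6 -> deprioritized
print(risk_score(patient, include_sdoh=True))   # 1.1 -> prioritized for outreach
```

The clinical-only score misses exactly the patients our clinicians and patient advisors were worried about.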
I recently worked on a very interesting AI project focused on an important aspect of AI: conversational empathy. For that, I had a mixed team of people with different backgrounds: linguists, cultural researchers, neurodivergent testers, and UX designers, all from various parts of the world. Every member of the group offered a distinct viewpoint. Linguists helped interpret emotional nuances across dialects, while testers from the neurodivergent community shifted our perspective on how the AI system gauged tone and intent. The most surprising discovery was that when we phrased things in a culturally "neutral" manner, perceived warmth was often lower. Including regional idioms and inclusive design language in the training data greatly increased the AI's contextual awareness and emotional responsiveness.
I learned this lesson the hard way when we started testing AI sourcing assistants inside SourcingXpro. I had engineers framing prompts based only on supply chain logic, but I pulled in two Western brand owners and a Filipino VA who runs daily order ops to weigh in. They explained that language which looked "clear" to us in Shenzhen actually confused real buyers and slowed purchase decisions by almost 20 percent. So we rewrote the logic in simpler buyer phrasing. Honestly, that changed everything for conversion behavior. The unexpected insight was that diversity wasn't just a moral point; it was operational leverage. It literally saved us time and made the model smarter at no extra cost.
Efficiency, namely shorter wait times, fewer missed appointments, and cleaner records, was the main priority of our first team when we began implementing AI-driven patient communication and scheduling. It was only when we sought input from the nurses, medical assistants, and even patients that we realized how differently each group perceived the system. What we considered a good idea was impersonal or perplexing to them. Incorporating those voices changed the system completely. We trained language models to recognize the common wording patterns of patients across ages and cultural backgrounds. The result was more natural patient interactions, which raised engagement rates and reduced confusion around follow-ups. The surprising fact was that technical accuracy was worth little without emotional accuracy. The most effective AI in healthcare should reflect the population it serves, not just the data it works with.