The AI revolution has empowered workers but also put many roles at risk, shaking up the traditional corporate ladder. Like many talent acquisition pros, I see job postings flooded with candidates who either one-click apply or carefully craft custom resumes and cover letters. No matter how skilled we are at skimming and note-taking, we cannot come close to the speed of applicant tracking systems. While no one expects us to be superheroes, I insist on personally responding to every applicant who reaches out—whether it's for government-mandated accommodations (ADA), profile feedback before they apply, or follow-up questions when they are nervous about where they stand. Of course, I can't let these conversations compromise the fairness of the process, but I do my best to bring clarity and compassion—a human touch in a world of automated systems and resume black holes.
One cultural shift we emphasized was transparency around AI. Whenever new tools are introduced, people naturally question how they will affect their roles. We wanted to remove that uncertainty early. Instead of rolling out technology quietly, we talked openly about what AI can and cannot do. This gave employees a clear picture: it could make tasks faster, but judgment and decision-making would stay with people. That simple clarity reduced fear and built trust. In day-to-day work, this showed up in practical ways: managers explained when AI saved them time but reinforced that choices were still human; developers used AI suggestions as input, not as final answers; and in HR, we shared that AI helped screen resumes but never made hiring decisions. By being transparent, we made people feel included rather than left in the dark. It encouraged them to test tools without pressure. More importantly, it kept the focus on people as the core of our work, with AI in a supporting role.
One of the most important cultural shifts we embraced was encouraging teams to experiment without fear of failure — especially when it came to adopting AI. So, instead of pushing for flawless execution from the very start, we tried creating a safe space for teams to test, adapt, and even fail (and I want to believe we succeeded). Our main goal was to change the collective mindset from "what if this doesn't work out?" to "what can we learn from it?" Sounds simple, right? But over time, this approach started showing up in everyday work: teams felt much more comfortable trying AI-powered tools in real workflows, knowing that they wouldn't be penalized for needing some extra time to figure these new (and maybe even scary) things out. We celebrated every small win, openly discussed what didn't land, and kept iterating. I believe that type of learning culture made AI feel less like a disruption and more like a tool that we were sharing and exploring together.
The cultural shift I emphasized was understanding AI as a relationship-supporting tool instead of a relationship-replacing one. The behavioral health field faces a real risk that technology will create a transactional experience, so I directed our teams to treat AI as a relationship-enabling tool. Staff were instructed to use AI applications to reduce administrative burden and strengthen patient connection. This cultural shift showed up in daily practice through small but important changes. When we evaluated automated intake systems, we looked at processing speed, but we also measured whether clinicians gained more uninterrupted time with patients during their initial session. That measure became our key indicator of success. Staff developed a natural habit of assessing new tools by asking whether the tools helped them be more present with people. That simple habit strengthened our organization's culture of viewing technology through a human-centered lens, keeping AI a partner in care rather than an obstacle to it.
Involve the team in AI adoption from the outset, including initial software selection and mapping opportunities to existing tasks. This means you're not looking at blanket AI integration from day one; rather, you've actually involved your teams and know where it makes sense to utilise AI within existing systems.
Map out the requirements of AI systems and how they can work alongside the human strategies already in place, rather than going into AI adoption with the view that it will replace existing human systems. This means your teams are engaged in the adoption, and senior leadership can map AI requirements to small tasks and then scale usage accordingly.
I have assisted Estonian SMEs through a lot of change, and one value I keep returning to is personal responsibility in communication. As AI plays an increasingly important role in workflows, people are tempted to hide behind the tools. I encourage teams to take responsibility for how they communicate: tone, clarity, timing. AI can write things, code things, even choose things, but it can never own the impact of a message; only people can do that. We worked with a logistics company where internal updates began to arrive only as AI-compiled summaries. It saved time, but it caused confusion. So I introduced a DiSC-based framework that helped leadership understand how their teams prefer to be informed. We steered them back to human-to-human conversations, with AI as an assistant, not a substitute. Managers began presenting key updates themselves during team huddles. Clarity improved. Engagement increased. The AI remained, but people owned their communication again. That changed everything.
When we began integrating AI into HR processes at CultureShift HR, I was clear from the start: our values had to guide the technology, not the other way around. The purpose was never to replace the human element, but to give it more room to breathe. For us, a human-centric approach to AI means freeing up time for deeper connection, more creative thinking, and thoughtful decision-making. It is about amplifying what makes us human, not erasing it. One of the biggest cultural shifts we made was introducing our "human-first review" policy. It is simple but powerful. No AI-generated output, whether it is a draft policy, a candidate shortlist, or a piece of client content, goes out without a person reviewing it through the lens of our values, our context, and our lived experience. This ensures that every deliverable feels intentional and aligned with our standards. You can see this in our day-to-day work. If AI recommends a candidate solely on technical skills, we take the extra step to consider their adaptability, growth potential, and cultural alignment. If AI produces a content draft, we refine it with our own voice, personal insights, and a clear understanding of the audience we are speaking to. That human touch is what keeps the work authentic. AI can make us faster, but speed alone does not make the work better. The heart of our culture is reflected in the way we use technology as a partner, not a replacement. That balance allows us to stay efficient without losing sight of the relationships, trust, and purpose that define how we work.
When AI came onto the scene, I very intentionally prioritized dignity as a value. Without that, I believe the discussion of automation becomes quite cold very quickly. The human element in HR must remain present. In practical terms, dignity was communicated in the way we presented automation: AI was a tool to assist, not replace. For instance, we used AI to reduce the time it took to proof payroll from two hours to 15 minutes. However, the employee responsible for distributing paychecks was still required to confirm accuracy in final preparation. The emphasis was that the person is still on the hook; the tool is there to save labor. That reoriented people's relationship to the technology. It went from "the machine watching me" to "the machine giving me time back". Day to day, the value manifested in tangible ways. Employees were required to review all AI decisions prior to escalation. They were included in the decision-making workflow. They had the autonomy to overrule the system without seeking authorization. That autonomy was more valuable than any training program because it communicated to people that their judgment mattered. AI ran in the background; people remained in the foreground. The effect was positive. Productivity increased, but so did confidence, because people did not feel devalued by the software. They felt equipped with a more precise tool.
In the rush to scale AI operations, there's a tendency to treat data labelers, remote ops staff, or QA teams as interchangeable cogs. But the reality is, the quality of AI outcomes improves dramatically when the people behind the scenes understand the why, not just the what.

How this showed up: We worked with a client labeling nuanced behavioral data for an AI model. Instead of giving the offshore team rigid SOPs, we encouraged product managers to walk through use cases, explain the customer impact, and share what "wrong" predictions would look like in the real world. That small shift — giving context — changed everything.

* Labelers started proactively flagging edge cases and suggesting category adjustments
* QA errors dropped, and team engagement improved
* Most importantly, the team started seeing themselves as part of the AI learning loop, not just task executors

By respecting the human intelligence behind artificial intelligence, we created not just better data, but a more motivated, resilient workforce — especially across global regions like the Philippines and Latin America, where talent is often underutilized.
If you're going to work with a recruiter, think of it like choosing a trusted guide: the right one can get you to your destination faster, but the wrong one can send you in circles. I've seen candidates thrive when they partner with someone who truly understands their field and has the right network. Agencies, especially niche ones, often have long-term relationships with hiring managers and access to roles you'll never see advertised. Freelancers can be a good fit too, but in my experience, agency recruiters tend to have a wider reach and more resources behind them. Before you commit, do some digging. Read the agency's reviews, look at the kinds of roles they actually fill, and scroll through their LinkedIn. You'll get a quick feel for whether they're active, credible, and connected in your space. And once you start working together, treat it as a two-way street. Be upfront about your goals, share context on your experience, and keep communication quick and clear. When a recruiter knows exactly what you're looking for and trusts you to follow through, they can open doors you didn't even know existed.
At first, the conversations around AI in our workplace were filled with quiet unease. People rarely said it out loud, but I could sense an undercurrent of worry that the technology might strip away what made their roles meaningful. I realized the cultural shift needed wasn't about learning new systems; it was about reclaiming ownership of our work in the presence of AI. I began encouraging discussions where employees could name moments when their human judgment mattered most. One person shared how resolving a conflict between colleagues required more patience than any system could provide. That story became a grounding example, reminding us that AI could handle patterns, but not people. Gradually, this value showed up daily in the way tasks were assigned. Teams leaned on AI for data-heavy preparation, but final decisions were always accompanied by human input. It was a small but important shift: AI became the background support, while people remained at the forefront of meaningful action.
At HRDQ, I reinforced transparency as a key cultural value enabling a human-centric AI approach. I didn't want our teams merely to know what AI does, but why it's suggesting something and how it might improve human decision-making. Transparency fosters trust in HR, and that is especially important when technology touches people management. At the day-to-day level, this meant sharing AI results with teams rather than handing them down as directives. HRDQ practitioners applied AI analysis to identify learning and development gaps, but always explained to teams the reasons behind the recommendations. This helped managers and workers make informed collective choices, further reinforcing a culture of shared responsibility. The emphasis on transparency has reinforced HRDQ's core mission: developing skills and capabilities in a way that respects human judgment while embracing new technology. AI-based insights now sit alongside workshops, podcasts, and interactive learning sessions in our programs, so that employees feel guided and supported rather than dictated to. This has made our workplace more humane, empowering HR professionals to lead with understanding and clarity while still capturing the efficiency and intelligence AI offers.
One cultural shift we made was to treat AI as a servant, not a master. We adopted a value we call "people first, technology second." That means we evaluate any AI tool through the lens of how it will improve the human experience at work. Before implementing a system, we ask simple questions: will it save time, reduce frustration, or make it easier to serve our clients? We also make sure that a person reviews AI recommendations, especially in areas like hiring or performance feedback. Day to day, this value shows up in the way managers talk about technology. They encourage teams to experiment with AI but remind them that judgement and empathy cannot be automated. As a result, AI augments our work rather than replaces the human connection that makes culture strong.
Built and scaled two tech companies (TokenEx exit in 2021, now leading Agentech AI). Learned that successful AI adoption isn't about the tech--it's about preserving human dignity while solving real problems. At Agentech, we established "AI as coworker, not replacement" as our core value after early resistance from insurance adjusters who feared job loss. Instead of pushing automation, we spent hundreds of hours doing ethnographic research with claims adjusters to understand their actual daily frustrations. This showed up in our product design--our AI handles the tedious document processing they hate, freeing them for complex decision-making they're trained for. The cultural shift was immediate when adjusters saw 4x productivity gains without losing their expertise or client relationships. Our current pet insurance partner saw 67% cost reduction while adjusters reported higher job satisfaction. The key was involving adjusters in product development rather than building something *for* them without them. Day-to-day, this means our AI agents are trained to escalate complex cases to humans and provide full audit trails so adjusters maintain control and accountability. We track human override rates and false positives obsessively--not to eliminate human judgment, but to support it better.
A key cultural shift involved redefining the relationship between humans and AI from one of replacement to one of augmentation. The principle emphasized was: "Let AI handle the repetitive, so people can focus on the meaningful." That simple shift helped ease resistance, especially among mid-level managers who were initially skeptical about automation's role in decision-making. By openly discussing where AI adds value—and where human judgment must prevail—the fear of being replaced started giving way to curiosity and engagement. This mindset translated into specific behaviors. Teams began using AI tools to draft reports, analyze learner data, and personalize training paths—but final decisions always rested with people. In daily check-ins, it became routine to ask not just what AI recommended, but why the team agreed or disagreed. That small prompt encouraged critical thinking and reinforced the idea that AI should amplify human intelligence, not diminish it.
A key cultural shift was reinforcing that AI is only as human as the intent behind it. That meant re-centering human values—empathy, transparency, and accountability—at every stage of implementation. The aim wasn't just to deploy AI efficiently, but to ensure it supported people, not replaced them. This required reworking how teams interacted with AI systems, starting with training and design discussions that included diverse voices, not just technical leads. In daily operations, this shift showed up in how decisions were made. For example, when AI flagged performance issues, managers were encouraged to look beyond the data—to consider context, behavior patterns, and personal circumstances. Instead of deferring entirely to algorithmic outputs, people learned to ask better questions and engage in critical thinking. It created a balance where AI handled the scale, but the final call stayed human. That blend has made the organization more thoughtful, not just more efficient.
A pivotal shift was redefining AI's role from a productivity tool to a human enabler. The narrative changed from "what can AI replace?" to "how can AI help people do their best work?" That meant designing systems and processes where technology amplified uniquely human qualities—judgment, empathy, and creativity—rather than overshadowing them. This mindset not only reduced fears around job displacement but also encouraged teams to see AI as a resource for growth and innovation. This shift became visible in everyday operations. When implementing an AI-based learning analytics platform, the rollout began with transparent discussions on ethical boundaries, data privacy, and bias prevention. Teams were trained to treat AI-generated recommendations as conversation starters, not final verdicts. Managers paired insights with personal observations, ensuring that performance reviews, coaching, and development plans retained a human touch. Over time, this approach built trust, strengthened adoption, and kept workplace culture grounded in the belief that technology should serve humanity—not the other way around.
We prioritized shifting the culture from "AI as replacement" to "AI as augmentation" in our adoption effort. One thing we noticed during our AI integration was quiet anxiety among staff, particularly in HR and Creatives; they feared being replaced by automation. We chose not to ignore these concerns but to bring them to the surface and establish a principle: "Let AI do the mechanical so that humans can be more meaningful." To actualize this, we changed how we worked. For example, we use AI to identify interesting candidate traits and reduce inherent bias, but human recruiters make all final decisions. We also began "human-in-the-loop" training for every department that adopted AI tools, reinforcing that we don't just allow humans to make judgment calls. We require it. Over time, staff began to see AI as a team member rather than a threat. It became commonplace for staff to share openly how they were using tools like ChatGPT to generate ideas, present them as drafts in brainstorming meetings, and refine them together. That small behaviour change reflected an enormous cultural shift. Our guiding idea is that AI should be the pencil, not the artist.
We embraced curiosity as a core value. Rather than rushing adoption, we created space for thoughtful questions: What can AI truly support in our workplace? Where does it fall short? This approach gave our teams the freedom to explore, assess, and speak up when something did not align. It allowed us to move forward with greater clarity and confidence, leading to more deliberate and effective integration. In practice, this showed up in cross-functional workshops and collaborative Slack threads where teams openly shared experiments and insights. Our culture shifted from "here is the tool" to "let's explore this together." That mindset made our AI journey more intentional and collaborative. It brought a deeply human element to how we adopted emerging technology.