I run one of the largest product and software comparison platforms online and have hired analysts, engineers, and content operators whose job titles stayed the same while their workflows became AI-assisted. The biggest shift is that output speed matters less than judgment. Roles now require deciding when to trust AI, when to override it, and how to validate results. The signal that matters most is decision quality under ambiguity, not raw production. Teams that keep hiring for outdated versions of roles end up with candidates who over-rely on tools or blindly follow outputs. We updated interviews to include AI review exercises where candidates critique, correct, and explain model results. Hiring teams should stop measuring task execution and start measuring judgment, error detection, and systems thinking.

Albert Richer, Founder, WhatAreTheBest.com
I've seen this most clearly in design and production. The job title might still say "patternmaker" or "design assistant," but the day-to-day work has shifted. People are now reviewing AI-generated templates, cleaning up automated mockups, and deciding when a digital draft needs a human adjustment. The core craft is still there -- it's just layered with a constant stream of decisions about when to lean on the tech and when to step in.

We used to hire almost entirely on technical skill and portfolio strength. Now I pay closer attention to someone's eye and their judgment. Can they spot when an AI output looks polished but doesn't feel right? Can they course-correct when automation goes sideways? Visual taste and a bit of emotional intelligence have become far more important. AI can produce endless variations, but only a person with a good sense of direction can tell which one actually works.

When teams keep hiring for the old version of a role, you see it right away. Someone can handle the traditional tasks but gets thrown off by the new pace. They either slow everything down or lean too heavily on AI because they're not sure when to question it. It's less a lack of skill and more a lack of context.

To get ahead of that, we've added open-ended problem scenarios to interviews. I don't need perfect answers -- I want to see how someone navigates messy, half-formed situations. Do they trust their instincts? Do they get rattled when the tools misbehave? Watching their process tells me far more than any polished response.

And honestly, it's time to ease up on the checklist of software skills. What matters now is adaptability, taste, and sound judgment. The tools can generate the work; I need people who know when a design needs less precision and a bit more humanity.
AI has revolutionized these jobs, and it requires employees to become more AI-fluent. Today, AI can ideate and automate more than 50% of our routine tasks, and the core skills have shifted from domain expertise to judgment and critical thinking. At Stairhopper Movers, we've noticed that our operations/project coordinator roles are not the same as they were in 2023. Operations workers have gone from being glorified multitaskers to strategic planners supported by AI. To address this hiring gap, our team has reworked the skill categories we hire against.

Our old hiring rubric was:
- 40% domain expertise
- 30% process execution
- 20% communication
- 10% tech comfort

Our new hiring rubric is:
- 25% domain expertise
- 15% process execution
- 35% critical thinking and judgment
- 25% communication and relationship skills

The shift is clear: we have moved from weighting proven background and technical knowledge toward soft skills and relationships, the areas where AI still can't beat humans (at least for now!).
I've seen this transformation firsthand at Fulfill.com. Our warehouse operations manager role still has the same title it did three years ago, but the actual work is completely different. Previously, these managers spent 60% of their time on manual inventory tracking and coordination. Now, AI handles that, and they spend their time interpreting predictive analytics, making judgment calls on algorithm recommendations, and coaching teams on exception handling. The title stayed the same, but we're essentially hiring for a different job.

The biggest shift I've noticed is that technical aptitude now matters far more than technical expertise. When we hire for logistics coordination roles, I care less about whether someone knows our specific warehouse management system and more about whether they can quickly learn new tools, spot when AI outputs don't make sense, and improve prompts or parameters to get better results. We had a coordinator recently catch that our AI was routing shipments inefficiently because it wasn't accounting for a carrier's new regional hub. That kind of critical thinking and curiosity matters more than memorizing standard operating procedures.

Hiring for outdated versions of roles is expensive, and the cost shows up immediately. We made this mistake with a fulfillment analyst position last year. We hired someone with strong Excel skills and logistics experience, but they struggled because the role had evolved to require comfort with machine learning models and data interpretation. They could execute tasks but couldn't evaluate whether the AI recommendations made strategic sense. We ended up restructuring the role within four months.

Here's what we changed in our hiring process: we stopped asking candidates to demonstrate that they can do repetitive tasks and started testing how they approach problems they've never seen before. In interviews, we give candidates an AI-generated report with intentional errors or questionable recommendations and ask them to evaluate it. We want to see whether they blindly accept it or push back with reasoning. We also added questions about learning agility, specifically asking candidates to describe a time they had to quickly master a new technology that changed how they worked.

What hiring teams should stop measuring is task completion speed for routine work, because AI handles that now.
What hiring teams should measure is how candidates work with systems, not just how well they execute tasks in isolation. We're seeing roles where AI now handles the first draft, the routing, or the analysis, while the human owns judgment, quality control, and decision-making, even though the job title hasn't changed. The most important signals now are problem framing, ability to validate and improve AI outputs, comfort working across automated workflows, and judgment under ambiguity. When teams keep hiring for the old version of the role, they get people who are busy but ineffective because they optimise for manual effort instead of outcomes. The shift we've made is updating interviews and assessments to test how candidates think, adapt, and make tradeoffs with AI in the loop, and stopping the overreliance on credentials or tool-specific experience that goes stale quickly.
In my work, the job titles stayed the same, but the daily work flipped. A content writer is now an editor who can steer a model, spot weak claims, and tighten copy fast. My analysts use AI to draft audits and pull patterns from reporting, yet the real value is deciding what to trust and what to recheck by hand. The best people treat AI like a junior teammate. They ask better questions, keep a decision log, and protect brand voice. So I want hiring teams to score judgment, not tool fandom. Give candidates a short simulation: messy AI output, a client goal, and ten minutes. Measure how they verify facts, fix structure, and explain tradeoffs. Track time to a shippable result, error rate after review, and how well they set guardrails for privacy and quality. Stop rewarding keyword trivia and perfect resumes. Start rewarding clear thinking under uncertainty.