Request for Insight: Which “New AI Jobs” Will Actually Endure?
I'm working on a story for ISMG's AIToday.io examining which AI-created job categories from the past three years have real staying power -- and which turned out to be short-lived hype cycles. For example, early predictions about "AI prompt engineers" commanding $300k salaries have given way to more specialized roles such as prompt red-teamers and AI safety evaluators.
Below are a few questions to guide your response:
1. When you look at the last 2-3 years of AI hiring, which job titles turned out to be hype-driven (e.g., prompt engineers) and which roles have proven essential inside real organizations? What explains the divergence?
2. What “new” AI roles are gaining traction today that didn’t exist or weren’t mainstream three years ago -- such as model evaluators, red-teamers, AI governance specialists, or safety/assurance engineers? Which of these do you expect to survive a 5-10 year horizon?
3. Are companies genuinely retraining existing staff into these AI-adjacent roles, or are they hiring externally? What technical or domain skills are turning out to be the real differentiators for staying relevant in this new job market?
4. How has the shift from building models to operationalizing and governing them changed the nature of talent demand? Are we moving toward a stable class of AI “operators,” similar to SREs and DevSecOps, or will automation eventually cannibalize many of these new jobs?
5. What’s the biggest misconception you see in public narratives about AI job creation? Which roles are overstated, and which critical ones are being overlooked, particularly in security, compliance, safety, data quality and model evaluation?
Deadline: Dec 16th, 2025 06:30 PM (May close early)
Publisher: AI Today