Backend RPA still isn't getting the attention it deserves. Everyone's distracted by the more glamorous AI features, but the real, steady gains are coming from automating the dull, structured work that eats up time--moving data between old systems, kicking off processes when a system event fires, things nobody wants to touch manually. We recently built a setup for an enterprise client using Power Automate alongside Azure Functions, and it cut their weekly reporting workload by about 80% without us having to tinker with the ERP itself. The one thing I keep an eye on is how deeply these RPA tools plug into the larger enterprise stack. When platforms like Power Automate or UiPath can connect cleanly with APIs, SAP modules, and locked-down data sources, that's when the real compounding value shows up.
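To make the pattern concrete, here is a minimal sketch of the kind of event-triggered glue an Azure Function can provide: reshaping data when a system event fires and pushing it to a downstream reporting API. The event payload fields, the REPORTING_API_URL app setting, and the data shapes are hypothetical placeholders, not the client's actual setup, and the function.json binding configuration is omitted.

```python
# Sketch of an event-triggered Azure Function (Python v1 programming model)
# that reshapes exported ERP rows for a reporting system.
# Payload fields and REPORTING_API_URL are illustrative assumptions.
import json
import os
import urllib.request

import azure.functions as func


def main(event: func.EventGridEvent) -> None:
    payload = event.get_json()  # e.g. {"report_id": "...", "rows": [...]}

    # Flatten the hypothetical ERP rows into the shape the reporting API expects.
    rows = [
        {"cost_center": r.get("cc"), "amount": float(r.get("amt", 0))}
        for r in payload.get("rows", [])
    ]

    body = json.dumps({"report_id": payload.get("report_id"), "rows": rows}).encode()
    req = urllib.request.Request(
        os.environ["REPORTING_API_URL"],  # assumed app setting
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=30)
```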
I am in the data analytics consulting services industry. One emerging technology trend I believe will be transformational is the rise of generative Business Intelligence. Generative BI uses artificial intelligence to create analytics reports from text prompts using a company's internal data. Many people currently see generative BI as a novelty feature that helps build dashboards faster. In reality, its real impact is not speed, but who gets access to insights. Generative BI tools are shifting analytics from a specialist-only function to something business users can interact with directly through natural language. What's underhyped is how much this changes decision-making workflows. Instead of analysts acting as intermediaries for every question, non-technical users can explore data themselves, ask follow-up questions, and iterate in real time. This doesn't eliminate analysts—it changes their role toward data modeling, governance, and ensuring that AI-generated insights are actually correct and trusted. Organizations that ignore this shift risk building analytics teams that scale poorly as demand for insights grows. The single signal I monitor to track this trend is adoption depth, not feature announcements—specifically, how often business users (not analysts) are querying data through tools like Power BI Copilot and using those outputs in real decisions. When generative BI becomes part of weekly management meetings rather than a demo feature, that's when its impact becomes undeniable.
The most misunderstood shift in the next 5 years isn't the generative AI boom, but the rise of Agentic AI Workflows. The vast majority of organizations see AI as a fancy search tool or content generator. The real upside is agentic orchestration: AI systems that take action without our explicit command and act across the enterprise platform. We are moving from a world of AI-assisted work to AI-led work, becoming overseers of AI rather than executors of tasks. The single signal I watch for this development is the density of autonomous API calls on the Enterprise Service Bus: specifically, the volume of agent-initiated system-level calls versus human-initiated ones. When agent-to-system traffic begins to substantially outstrip human-initiated traffic across core company-offering functions like Procurement or Customer Service, it marks the moment AI has gone from novelty to the operating tissue of the firm. Gartner research backs this trajectory: it predicts that "by 2028, at least 15% of day-to-day work decisions will be made autonomously through agentic and other autonomous systems." This doesn't just mean a fancier chat interface, but a core redesign of the enterprise: if you're focused on the chat interface, you're missing the revolution happening under your nose, where the actual work is performed. The move to an agentic enterprise relies heavily on trusting your data and governance. It's easy to ride the hype of what the AI can say back to you. The winners will be those who actually make the leap and let the AI do the work.
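A rough way to operationalize that signal, assuming you can export integration-bus call logs with caller-type and business-function fields (the schema below is a hypothetical example, not a specific ESB product's format):

```python
# Sketch: compute the ratio of agent-initiated to human-initiated calls per
# business function from an integration-bus log export. The log schema
# ("function", "initiator") is an illustrative assumption.
from collections import Counter

def agent_to_human_ratio(calls: list[dict]) -> dict[str, float]:
    """calls: [{"function": "Procurement", "initiator": "agent" | "human"}, ...]"""
    counts: Counter = Counter((c["function"], c["initiator"]) for c in calls)
    functions = {c["function"] for c in calls}
    return {
        f: counts[(f, "agent")] / max(counts[(f, "human")], 1)
        for f in sorted(functions)
    }

# A ratio climbing well past 1.0 in core functions like Procurement or
# Customer Service is the threshold described above.
print(agent_to_human_ratio([
    {"function": "Procurement", "initiator": "agent"},
    {"function": "Procurement", "initiator": "agent"},
    {"function": "Procurement", "initiator": "human"},
]))  # {'Procurement': 2.0}
```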
Here's what everyone's missing while obsessing over AGI: decentralized digital identity. It's underhyped because it sounds boring—wallets, credentials, verification. But it's going to fundamentally rewrite how we interact with everything online. The numbers prove it. The decentralized identity market hit $3 billion in 2025 with a projected 70.8% CAGR through 2035. Digital wallets are set to double from 83 million to 169 million users in just one year. This isn't speculative crypto anymore. It's real infrastructure getting deployed. What makes it transformative? Instead of every company owning your data, you own your identity. One verified credential works everywhere. No more password hell. No more giving your personal info to every random app that asks. The signal I watch: enterprise adoption of self-sovereign identity standards. When major banks and governments start letting you bring your own identity instead of forcing you to create new accounts for every service, that's when this goes from experimental to inevitable. We're already seeing the early signs. By 2030, you'll wonder how we lived any other way.
Autonomous exception handling in operations networks gets far less attention than it deserves. Most people still associate AI in supply chains with chatbots or demand forecasting. That view misses what is actually changing daily work. The real shift comes from systems that spot problems, weigh options, and resolve issues before anyone notices a failure. This moves operations from reactive to preventative without adding headcount. We already see this at Togo through HarnessOS. When a shipment runs late or a vendor misses a milestone, the platform does more than raise an alert. It evaluates the business impact, pulls context from similar past situations, and takes action. Sometimes it resolves the issue automatically. Other times it routes the problem to the right person with a clear recommendation. Traditional automation cannot do this. Scripts only follow predefined steps and break when conditions change. What makes this overlooked is the current obsession with generative AI for content. Operational AI that makes decisions in messy, real-world environments creates far more leverage. One customer we work with once had three people monitoring shipments full time. Today, one person handles exceptions because the system identifies and resolves about 60 percent of issues on its own. That change directly affects cost, speed, and reliability. The metric that matters most is resolution without human involvement. When that rate climbs from 40% to 60% to 75%, the work itself changes. Roles shift from monitoring to oversight and improvement. Companies that track this metric build real advantages. They improve outcomes instead of deploying tools that only look impressive in presentations. This technology matters because supply chains fail in unpredictable ways. No team can script every scenario. Systems trained to operate under uncertainty can adapt at scale. That is the shift worth paying attention to.
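This is not HarnessOS's actual logic, but a minimal sketch of the triage pattern described above: estimate the impact of an exception, auto-resolve low-risk cases with precedent, and route the rest to a person with a recommendation. All field names, thresholds, and playbook names are assumptions for illustration.

```python
# Sketch of autonomous exception triage: estimate business impact,
# auto-resolve low-risk exceptions, otherwise route to a person with a
# recommendation. Thresholds and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ShipmentException:
    kind: str                         # e.g. "late_shipment", "missed_milestone"
    hours_late: float
    order_value: float
    similar_cases_auto_resolved: int  # context pulled from past incidents

def triage(ex: ShipmentException) -> dict:
    impact = ex.order_value * min(ex.hours_late / 24, 1.0)
    if impact < 5_000 and ex.similar_cases_auto_resolved >= 3:
        return {"action": "auto_resolve", "playbook": f"reroute_{ex.kind}"}
    return {
        "action": "route_to_human",
        "recommendation": f"Expedite carrier swap; estimated impact ${impact:,.0f}",
    }

# The resolution-without-human-involvement rate is then simply the share of
# exceptions triaged to "auto_resolve" that closed without escalation.
print(triage(ShipmentException("late_shipment", 6, 12_000, 5)))
```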
Agentic AI is the most underhyped trend that will reshape work over the next 2-5 years. Most people know chatbots. But agentic AI is different. These systems do not just answer questions. They reason, plan, and act. They complete multi-step tasks without constant human input. Here is why this matters: Gartner predicts 40% of enterprise apps will have AI agents by late 2026. That is up from just 5% in 2025. The market will grow from $7.8 billion today to over $52 billion by 2030. But the hype has not caught up to reality. Most coverage focuses on chatbots and image generators. The powerful shift happening in back offices stays hidden. AI agents now handle entire workflows - reconciling transactions, drafting proposals, managing supply chains. The signal I track: enterprise deployment rates. Right now only 11% of companies use agentic AI in production. But 93% of IT leaders plan to deploy agents within two years. That gap between intention and action tells the whole story. We are at the inflection point. I see this daily in my work. I use Claude Code to build systems that handle complex tasks - document processing, compliance checks, data analysis. The tool reasons through problems and writes code that works. Tasks that took hours now take minutes. The companies reporting 5x to 10x returns on agentic AI investments are not exaggerating. I have seen 66% productivity gains and 20-35% cost reductions firsthand. The breakthrough is quiet. It happens in spreadsheets and databases, not in headlines. But by 2028, Gartner says 33% of enterprise software will include agentic AI. That is when the world will notice what already changed.
An important trend that I believe will gain significance in the coming years is the creation of provenance infrastructure for digital content. Some people dismiss this trend as a niche issue or simply a form of watermarking, but in reality, provenance infrastructure addresses the lack of context around digital content created in an environment of cheap and indistinguishable generation. From my experience developing GPTZero, it is clear to me that even the most effective after-the-fact detection of digitally published content is not sufficient. Existing classifiers for digitally created content will not hold up as users and models evolve together. It is more effective to begin verification earlier, at the point of creation, allowing institutions to make determinations about the process of creation as well as the resulting content. This changes users' incentives by making the effort and authorship behind content visible again, and ultimately changes how users behave. The single indicator I track to determine whether provenance tools will become standard practice is whether they transition from optional accessories to default, foundational mechanisms built into existing platforms like LMSs, application systems, and document management systems. Ultimately, the true measure of provenance efforts will be their acceptance and use by the intermediary organizations shipping these tools into existing systems, rather than flashy demonstrations during the early stages of development.
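As a hedged sketch of what "verification earlier in the process" can look like: attach a signed provenance record (content hash, author, tool, timestamp) at creation time, so the claim can be checked later instead of inferred after the fact. The fields and the shared signing key below are illustrative only; a real deployment would use asymmetric signatures and an established standard such as C2PA.

```python
# Sketch: attach a signed provenance record to content at creation time and
# verify it later. Fields and the shared key are placeholders, not a standard.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"institution-held-secret"  # placeholder key

def provenance_record(content: bytes, author: str, tool: str) -> dict:
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "author": author,
        "tool": tool,  # e.g. "human-draft" or "llm-assisted"
        "created_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(content: bytes, record: dict) -> bool:
    claimed = dict(record)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and \
        claimed["sha256"] == hashlib.sha256(content).hexdigest()

rec = provenance_record(b"final essay text", author="student-42", tool="human-draft")
print(verify(b"final essay text", rec))   # True
print(verify(b"edited elsewhere", rec))   # False
```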
Personalized AI copilots for professional workflows. Not the flashy, catch-all chatbots, but the quiet helpers built around very specific tasks--legal reviews, compliance checks, onboarding flows, structuring work. People still treat them as minor UI conveniences, yet when they're trained properly and paired with human judgment, they cut down on context-switching, speed up decisions, and make knowledge-heavy roles a lot steadier. We've been experimenting with custom LLMs in-house for regulatory monitoring, profiling client entities, and helping junior staff quickly track down precedent documents. What's surprised me isn't how fast they are, but how consistently they surface details and how easy they make it to audit a line of reasoning. A solid copilot doesn't take the wheel; it just lays out the road with fewer blind spots. The signal I watch most closely is how quickly teams are feeding and maintaining their own proprietary knowledge inside these systems. Once an assistant understands your internal compliance logic--not just what's in the public rulebooks--you start to see real leverage without adding risk. That's the shift I expect: less talk about AI as a client-facing feature and more about AI becoming part of a company's internal governance backbone. Almost no one is focused on that yet, but that's where the lasting value will come from.
An underhyped trend is how cyber insurance requirements are becoming the next set of security standards. In strategic planning, when clients pursue coverage, we see insurers shaping which controls get funded, which will drive broad adoption over the next two to five years. The signal I track is which security measures are required for various levels of cyber insurance within underwriting questionnaires and coverage terms. Even if businesses believe they have enough security solutions in place to be protected, they may not be able to meet the requirements necessary to get cyber insurance.
One emerging technology trend I believe will genuinely matter over the next 2-5 years is AI agents integrated into core business workflows: not as standalone tools, but as decision-making layers across marketing, operations, and customer experience. It's often misunderstood as simple automation, when in reality it's about systems that can interpret context, act across platforms, and continuously optimize outcomes with minimal human intervention. The single signal I monitor is how many companies move AI agents from experimentation into revenue-impacting production use, especially in areas like personalization, lead qualification, and operational efficiency. Once AI starts owning measurable business outcomes, that's when its real impact becomes undeniable.
Energy storage beyond lithium is still flying under the radar. People are pouring their attention into bigger AI models, but none of that scales without a grid that can store huge amounts of power reliably and cheaply. I got a taste of the problem when I spent a week off-grid in Spain last summer. Solar panels were everywhere, yet the choke point was obvious: storage. Lithium works for phones and cars, but for the grid it's pricey, touchy, and tangled up in geopolitics. The real action is in sodium-ion, solid-state, and new thermal systems. Those are the technologies that will quietly reshape everything. The signal I watch is CATL's sodium-ion shipments and the early pilots running in China. Once you see consistent movement there, you know mass manufacturing is taking hold--and when that happens, Europe and the U.S. usually feel the shock wave a couple of years later.
Voice-first interfaces are going to shake up the whole way people interact with business software, but for now everyone is still stuck on chatbots and text prompts. I'm already getting clients asking if they can basically just talk to their website instead of having to log into WordPress. The trend I'm watching is how accurate voice recognition gets in really noisy office environments. Only when that accuracy jumps way up, I'm talking 95% or better in real-world office settings, will we start to see people really adopt it. At that point the tech won't be the thing holding them back, just how used they are to doing things a certain way. What's interesting is everyone's building these text-based AI tools and completely ignoring the fact that most people would far rather just say what they want than type it out. This transition is happening faster than the industry realizes, and most companies are building for the wrong interface.
To me, one of the most misunderstood technology trends of this decade is neuromorphic computing. While the world is focused on large, energy-hogging GPUs, neuromorphic chips use an architecture inspired by the brain, running spiking neural networks that process information only when required. This lets AI operate with extremely low power consumption, which is critical for the next generation of autonomous edge devices. The tracking signal I am keeping an eye on is the emergence of commercial "AI-at-the-edge" development kits from non-traditional chip manufacturers. A change will occur when developers can easily deploy low-power, real-time learning models to very small sensors with no need for a cloud connection: that change will indicate that we are moving away from centralized AI toward a truly distributed, locally generated intelligence.
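To make "process only when required" concrete, here is a toy leaky integrate-and-fire neuron, the basic unit of a spiking network: the unit accumulates input and only fires when a threshold is crossed, so no input activity means essentially no computation. The parameter values are arbitrary illustrations.

```python
# Toy leaky integrate-and-fire neuron: computation is event-driven, so the
# unit only "does work" (fires) when accumulated input crosses a threshold.
# Threshold, leak, and weight values are arbitrary illustration values.
def lif_neuron(input_spikes, threshold=1.0, leak=0.9, weight=0.4):
    potential, output = 0.0, []
    for spike in input_spikes:          # spike is 0 or 1 per timestep
        potential = potential * leak + weight * spike
        if potential >= threshold:      # fire and reset
            output.append(1)
            potential = 0.0
        else:
            output.append(0)
    return output

# Sparse input produces sparse output; quiet sensors cost almost nothing.
print(lif_neuron([1, 0, 0, 1, 1, 1, 0, 0, 1, 1]))
```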
In my opinion, we have always been early adopters when it comes to technology. We've built our own CRM, run our driver fleet through advanced planning systems, and integrated AI into route planning to cut carbon emissions and improve our delivery times. But something many people don't realize is that AI in logistics is not only about optimization. With AI-driven digital twin simulations of logistics networks, you can test far more: our vehicles, our routes, our customer bookings, and even our call-centre workflows can all run in a simulated environment. I think this is underhyped because it's seen as something only larger carriers can afford. But as more tools and cloud-based platforms mature, digital twins are becoming accessible to smaller and mid-sized operations. They allow us to explore "what-if" scenarios without impacting real-life deliveries. We can test things like sudden demand spikes and alternate routes. And because we're carbon-neutral, digital twins are also the best way to measure and reduce emissions. Once we see affordable solutions becoming mainstream for companies our size, we'll know the trend has truly picked up.
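A stripped-down sketch of the kind of "what-if" run a digital twin makes cheap: replay a demand scenario through two routing assumptions and compare van hours and emissions. Every figure and the cost model below are placeholder assumptions for illustration, not a production twin.

```python
# Sketch of a digital-twin "what-if": run simulated demand through two
# routing scenarios and compare delivery time and CO2. All figures are
# placeholder assumptions.
def simulate(bookings: int, km_per_job: float, kg_co2_per_km: float, vans: int):
    jobs_per_van = bookings / vans
    hours_per_van = jobs_per_van * 0.75          # assumed 45 minutes per job
    total_co2 = bookings * km_per_job * kg_co2_per_km
    return {"avg_van_hours": round(hours_per_van, 1), "kg_co2": round(total_co2, 1)}

baseline  = simulate(bookings=400, km_per_job=12.0, kg_co2_per_km=0.25, vans=20)
surge_alt = simulate(bookings=520, km_per_job=10.5, kg_co2_per_km=0.25, vans=22)

print("baseline:", baseline)
print("demand surge + alternate routing:", surge_alt)
```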
Natural Language Interfaces for business apps don't get enough attention, even in AI circles. At Roy Digital, we've been using conversational AI to handle workflow tasks. Our non-technical users now work with complex systems easily. Jobs that took hours now take minutes. I keep an eye on how many enterprise tools are adding open-source LLMs - that tells me how quickly this is spreading in the real world.
Most people are obsessed with the cloud, but they're sleeping on edge computing. Running code closer to users instead of in some massive data center is just faster and cheaper. At my company CLDY, moving some workloads to the edge shaved several seconds off customer site load times, and they noticed. I track the trend by watching acquisitions. When big companies start buying edge startups, you know it's getting real.
Programmable tokens of private-market investments built on blockchain technology will have dramatic effects on how private-market investing and its processes work; however, most people think only of crypto speculation or public decentralized finance (DeFi) tokens and miss the enormous potential for private markets. Tokenized, programmable private-market investments enable fractional ownership, instantaneous settlement, and increased transparency for historically opaque assets. They will also greatly reduce investment friction and increase accredited investors' access to these opportunities. The signals I monitor to evaluate acceptance of tokenized private-market investments are the use of tokenized structures by established institutional players, including family offices, private equity firms, and registered investment advisory (RIA) firms and platforms. When credible institutions are allocating capital to tokenized private-market investments and building compliant architecture and infrastructure to support them, it is strong evidence that the technology has transitioned from experimental to a significant operational tool. Because of this, I have learned that monitoring institutional acceptance signals is far more reliable than predicting how private markets might evolve based on hype cycles, and it will drive where I invest my time and resources over the next two to five years.
Ambient Intelligence (AmI) is an important trend that will be more valuable by 2030 than it is today. Unlike AI that requires a person to interact with a computer to guide it, AmI senses and responds to human behavior without direct interaction. While many people assume AmI is simply the expansion of "smart home" technology, its true utility is in workplaces that seamlessly adjust lighting, temperature, and digital processes based on a person's presence and real-time signals such as stress or emotion. I measure the progression of AmI by adoption of wireless sensing protocols (e.g., Wi-Fi Sensing standards) that turn standard routers into sensors capable of capturing motion, gestures, and environmental data, providing the essential hardware for 'invisible' intelligence without invasive cameras.
Decentralized identity (DID) will shape the future of trust in the workplace. Currently, a lot of hype surrounds this area, but the heavy jargon associated with blockchain and "Web3" technologies obscures it. In essence, DID gives individuals possession of their digital identity credentials and total control over them without relying on a centralized entity. With deepfakes and privacy breaches on the rise, presenting verifiable credentials will soon be necessary for teams to collaborate securely. I am currently monitoring the number of companies that integrate the W3C Verifiable Credentials standard into their digital wallets and browsers. When individuals can prove their employment history or academic degrees as easily as tapping a credit card on their phone, without any involvement from the issuing organizations, we will have created a secure digital community.
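For readers unfamiliar with the standard, a W3C Verifiable Credential is just a small signed document. The sketch below shows the general shape of an employment credential as a Python dict; the identifiers, dates, credential type name, and proof value are made-up placeholders.

```python
# General shape of a W3C Verifiable Credential for employment.
# DIDs, dates, the "EmploymentCredential" type, and the proof value are
# made-up placeholders for illustration.
employment_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "EmploymentCredential"],
    "issuer": "did:example:employer-123",
    "issuanceDate": "2025-06-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:employee-456",
        "jobTitle": "Operations Analyst",
        "employer": "Example Corp",
    },
    "proof": {
        "type": "Ed25519Signature2020",
        "created": "2025-06-01T00:00:00Z",
        "verificationMethod": "did:example:employer-123#key-1",
        "proofValue": "z3Fa...placeholder",
    },
}
```

A wallet holds credentials like this and presents them to a verifier, which checks the issuer's signature without ever contacting the issuing organization.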
Credential decay tracking is an overlooked but powerful trend in modern learning systems. Skills expire faster than most organizations admit, but credentials are still treated as permanent proof. Over the next few years, learning systems will track how competence fades without regular practice. This matters because outdated skills can create more risk than missing skills. The main signal I monitor is time-based confidence scoring tied to active use of skills. When credentials lose value without real-world use, learning becomes continuous by necessity. This trend is misunderstood because it challenges the comfort of static, permanent credentials. Systems that reflect skill decay will better support long-term workforce development.
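One way to implement time-based confidence scoring is simple exponential decay from the last verified use of a skill; the per-skill half-lives below are illustrative assumptions a real system would need to calibrate against actual performance data.

```python
# Sketch of time-based confidence scoring: a credential's confidence decays
# exponentially with time since the skill was last used, governed by a
# per-skill half-life. Half-life values are illustrative assumptions.
def confidence(days_since_last_use: float, half_life_days: float) -> float:
    return 0.5 ** (days_since_last_use / half_life_days)

# A fast-moving skill (e.g. a cloud platform, 180-day half-life) fades faster
# than a stable one (e.g. SQL, 720-day half-life).
print(round(confidence(365, 180), 2))  # ~0.25 -> flag for refresh
print(round(confidence(365, 720), 2))  # ~0.70 -> still trusted
```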