In my experience, the most significant threat we're witnessing right now is the evolution of social engineering attacks powered by AI and backed by human research. The novel aspect isn't just automation; it's the combination of AI-powered research with domain spoofing. Attackers use AI to scrape LinkedIn profiles, company websites, and public records to build detailed dossiers on decision makers. They then register nearly identical domains, changing just one letter, and craft communications that are contextually perfect because the AI has analyzed the company's communication style and internal relationships. Finally, the attackers send authentic-looking emails requesting a wire transfer. We had a couple of clients almost fall for this, and the only real prevention is training and awareness. The fact is, a lot of small businesses aren't trained and aren't aware.
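A cheap technical complement to that training is to automatically flag sender domains that sit one or two edits away from domains you trust. Here is a minimal sketch in Python; the trusted-domain list and the distance threshold are illustrative assumptions, not something from the contributor:

```python
# Sketch: flag lookalike sender domains one or two edits away from domains
# we trust. The trusted list and threshold below are hypothetical examples.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = {"examplecorp.com", "examplecorp-payroll.com"}  # hypothetical

def is_lookalike(sender_domain: str) -> bool:
    """True if the domain is a near-miss of a trusted one, not an exact match."""
    return any(
        0 < edit_distance(sender_domain, trusted) <= 2
        for trusted in TRUSTED_DOMAINS
    )

print(is_lookalike("examplec0rp.com"))  # True: one character swapped
print(is_lookalike("examplecorp.com"))  # False: exact match is fine
```

A check like this won't catch every spoof, but it costs almost nothing to run on inbound mail and catches exactly the one-letter swap described above.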
Beyond phishing attacks, one of the biggest growing threats at the intersection of generative AI and cybersecurity is agentic genAI's ability to create newer, stronger malware at a faster pace. It has the potential to essentially figure out what defenses it's up against and then learn from that in order to generate new malware designed to get past them. This type of technology is not only sophisticated but, due to its agentic nature, has the potential to run non-stop.
I build education software, and we recently ran into a problem: AI is making phishing requests super convincing, and anyone can generate them now. Our support team was suddenly flooded with fake tickets. We had to scramble to update our systems to catch them. It's better now, but it's a constant fight: you patch one thing, they find another way in.
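One inexpensive signal for that kind of flood is a burst of near-duplicate tickets, since generated spam tends to arrive in clusters of slight rewordings. A rough sketch, assuming tickets arrive as plain text; the similarity threshold, cluster size, and sample tickets are all illustrative:

```python
# Sketch: flag bursts of near-duplicate support tickets, one cheap signal
# for an AI-generated ticket flood. Thresholds and samples are hypothetical.

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough textual similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_flood(tickets: list[str], threshold: float = 0.85,
               min_cluster: int = 3) -> bool:
    """True if enough tickets are near-duplicates of one another."""
    for i, t in enumerate(tickets):
        cluster = sum(1 for u in tickets[i + 1:] if similarity(t, u) >= threshold)
        if cluster + 1 >= min_cluster:
            return True
    return False

incoming = [
    "I was charged twice, please refund to this new account",
    "I was charged twice - please refund to this new account!",
    "I was charged twice please refund to this new account",
]
print(flag_flood(incoming))  # True: three near-identical requests
```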
AI is making scams easier to run against everyone, even small e-commerce shops like ours. We caught an AI-powered customer service chat impersonating us, using our exact greetings and tone. We started watching more closely and mixing up our responses. My advice? Tell your team to watch for conversations that feel slightly off. The fakes are getting harder to spot.
AI is making phishing way trickier. At my company Backlinker AI, we've seen attackers using large language models to write fake outreach emails that look like they're from real journalists or brands. They're spooky good. My advice? Assume any email asking for sensitive info is machine-generated. Before you click or reply, double-check who actually sent it. It's the only way to be safe now.
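One way to act on that "double-check who actually sent it" advice is to look at the SPF/DKIM/DMARC verdicts your own mail server records in the Authentication-Results header (standardized in RFC 8601). A rough sketch using only the Python standard library; the sample message below is hypothetical:

```python
# Sketch: a crude check of the SPF/DKIM/DMARC verdicts recorded by the
# receiving mail server. The raw message here is a made-up example.

from email import message_from_string

RAW = """\
Authentication-Results: mx.example.com; spf=pass; dkim=fail; dmarc=fail
From: "Jane Reporter" <jane@newsroom-example.com>
Subject: Quick question for a story

Hi, can you confirm your billing details?
"""

msg = message_from_string(RAW)
results = (msg.get("Authentication-Results") or "").lower()

# Treat anything short of a clean pass on all three checks as suspect.
checks = {c: f"{c}=pass" in results for c in ("spf", "dkim", "dmarc")}
if not all(checks.values()):
    failed = [name for name, ok in checks.items() if not ok]
    print(f"Suspect sender {msg['From']}: failed {', '.join(failed)}")
```

Header checks won't expose every machine-written email, but they do catch the common case where the "journalist" is mailing from infrastructure that can't authenticate as the domain it claims.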
Here at Medix Dental IT, we're seeing a new problem. AI makes it easy for attackers to create fake appointment reminders that look just like the real thing. Your staff clicks, and their login info is gone. On a bigger scale, we're seeing AI generate false medical stories while dropping targeted ransomware. These attacks move fast, so dental practices really need constant staff training and more than one line of defense.
New classes of threats

Modern generative systems have introduced dangers that were rare in the past. One major issue is highly tailored social manipulation that shifts tone and content based on how a target responds. This style of interaction feels natural and can move across email, voice, and messaging without losing coherence. Another concern is rapid automated scouting. These systems read public material and point attackers toward weak spots with far less effort than before. False identities are becoming more convincing as well, with long histories, realistic voices, and images that pass simple checks. A slower but equally serious danger is the influence on training inputs that can change how a model behaves over time. Ideas like these once felt theoretical. Today they can be executed with speed and at low cost.

Lowering the barrier and strengthening advanced groups

New tools allow people with very little experience to create polished messages or visual material. This raises the number of basic attempts that reach organizations. Advanced groups gain even more value. They can explore targets with greater speed, shape believable narratives, and adjust campaigns across multiple platforms without delay. The most pressure today is seen in attempts to take control of accounts, efforts to fool staff into sharing access, and quiet probing of suppliers who may not have strong protections. The result is a mix of frequent low-skill attacks and fewer but sharper efforts by advanced actors.

Underestimated danger over the next two to three years

A specific danger that may not be receiving enough attention is the slow alteration of the knowledge resources that teams and automated helpers rely on. Public guides, learning notes, and online examples influence how people configure systems. If attackers introduce small but harmful variations into widely used sources, those variations can spread into company runbooks and into tools that learn from public material. Over time this can lead to unsafe settings across many projects. The risk grows quietly and is difficult to notice until the impact becomes broad. Many organizations secure their code and hardware carefully, but far fewer monitor the quality of the information that shapes daily decisions. Careful review of trusted sources, limits on what automated helpers ingest, and regular evaluation of internal guidance will be important steps to reduce this long-term risk.
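As one concrete reading of "limits on what automated helpers ingest", a minimal allowlist gate might look like the sketch below. The reviewed-source domains and URLs are hypothetical placeholders:

```python
# Sketch: a minimal allowlist gate controlling what an internal AI helper may
# ingest. Domains and example URLs are illustrative, not from the original.

from urllib.parse import urlparse

REVIEWED_SOURCES = {          # hypothetical list maintained by the security team
    "docs.example-internal.com",
    "wiki.example-internal.com",
}

def may_ingest(url: str) -> bool:
    """Allow only https URLs whose host is on the reviewed-source list."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in REVIEWED_SOURCES

for candidate in (
    "https://docs.example-internal.com/runbooks/tls.md",
    "https://random-blog.example.net/how-to-configure-tls",
):
    print(candidate, "->", "ingest" if may_ingest(candidate) else "reject")
```

The design choice worth noting is that the gate fails closed: anything not explicitly reviewed is rejected, which is what blunts the slow poisoning of public sources described above.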
"New types of threats arise from AI-generated synthetic networks where attackers create an entire network structure that mimics a real network infrastructure for a long enough time to confuse monitoring systems so as to be able to use these simulations to run attack rehearsals to an extremely high degree of precision, thereby eliminating one of the barriers to intrusion that previously limited such attacks to only those carried out by intelligence agencies. It is becoming easier for new entrants to become involved due to the availability of AI generated script to help inexperienced attackers at every phase of an intrusion process, with step-by-step guidance, recommendations for tools and real-time correction, much like a silent tutor. The advantage to highly capable attackers continues to increase with the ability to develop custom models using stolen telemetry and to predict the behavior of defenses to a degree of accuracy, up to 70%, that results in a significantly shorter operational timeline, from days/weeks to hours, than would have been possible in the past. There is also an increasing blindness to autonomous procurement fraud where AI creates believable representations of entire vendor ecosystems through believable invoices, voice-based communications and supply chain data. This type of fraud can result in the extraction of millions of dollars prior to being detected due to its combination of financial deception with real-time adaptive behaviors that mirror the behaviors exhibited during human negotiations."
I am observing three distinct developments that are significant for defenders.

First, new threat categories: generative AI is creating realistic synthetic persons (voice + video + writing) that make social engineering far more believable, and it is enabling automated vulnerability discovery - not by providing step-by-step exploits, but by rapidly hypothesizing, validating, and chaining small vulnerabilities into complex exploits - faster and cheaper than ever. This moves some attack vectors from "theoretically possible" to reality on a weekly basis. Model-targeted attacks - theft, poisoning, and supply-chain manipulation of ML assets - also push the boundaries.

Second, AI lowers the bar for amateurs by encoding complex skills (spear-phishing campaigns, reconnaissance, and basic exploit orchestration) into easy-to-use tools or prompts. At the other end of the spectrum, it expands opportunities for skilled adversaries: state-level actors can automate long-term campaigns, scale custom malware instances, and respond in near real time. Right now the greatest impact I see is high-quality, scalable social engineering and faster vulnerability reconnaissance.

Third - and this one is underrated on a two-to-three-year horizon - is automated exploit synthesis and chaining at scale: AI systems that can speed up finding and weaponizing complicated multi-stage defects across services, combined with synthetic identity networks to operationalize the resulting breaches. The potential is staggering. For those defending their organizations, provenance of models and data, strong authentication, attack-surface minimization, continuous red teaming, and telemetry-based detection should still cover the bases.
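As one concrete reading of "provenance of models and data", a team might pin a digest for every vetted artifact and refuse to load anything that drifts. A minimal sketch; the manifest entry, digest, and file path are hypothetical:

```python
# Sketch: pin a SHA-256 digest per vetted model/data artifact and fail closed
# on any mismatch. The manifest contents below are made-up examples.

import hashlib
from pathlib import Path

# Digests recorded at vetting time, e.g. in a signed manifest.
MANIFEST = {
    "models/classifier-v3.onnx":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify(path: str) -> bool:
    """Recompute the file's SHA-256 and compare against the pinned digest."""
    try:
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    except FileNotFoundError:
        return False  # a missing artifact fails closed
    return digest == MANIFEST.get(path)

if __name__ == "__main__":
    path = "models/classifier-v3.onnx"
    if not verify(path):
        raise SystemExit(f"Provenance check failed for {path}; refusing to load.")
```

Hash pinning is deliberately dumb: it cannot tell you *how* an artifact was altered, only that it no longer matches what was reviewed, which is exactly the property that defeats quiet poisoning or supply-chain swaps.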