Hi, I'm Amanda, PR Manager at TrustNet, and I'd like to pitch our CISO, Trevor Horwitz, as your subject expert for this project. He's previously shared his expertise on cybersecurity and compliance with Dark Reading, CSO Online, Authority Magazine, and other reputable publications. About Trevor: Trevor Horwitz is widely recognized as a cybersecurity leader with over two decades of experience. As the co-founder and CEO of two leading cybersecurity companies, TrustNet and iTrust, Trevor has pioneered innovative information security and data protection solutions. His expertise spans managing complex cybersecurity challenges, including regulatory compliance, privacy, and data governance. Trevor also leads each company's strategic direction, driving advancements in digital trust for a diverse global client base including Herbalife, CareerBuilder, TaxAct, Calendly, Grubhub, Northwestern University, and Goodwill. His contributions have been recognized globally, establishing him as a trusted advisor in the industry. His commitment to enhancing cybersecurity standards is reflected in his active participation in industry forums and his frequent contributions to thought leadership on emerging cyber threats and solutions. As for the rest of his background, Trevor previously served as President of InfraGard Atlanta in partnership with the FBI and has been a sought-after speaker at international security conferences including RSA Conference, SPIN, TAG, and ISACA. His qualifications include CISSP, PCI QSA, PCI PCIP, HITRUST CCSFP, CISA, and ISO 27001 Lead Auditor. If you're interested in having Trevor participate, kindly connect with me via amanda.arambulo@trustnetinc.com.
Vice President, Digital Forensics and Incident Response at Packetwatch
Answered 8 months ago
I am a subject matter expert for Business Email Compromise, which is when a threat actor gains access to a legitimate user's account and attempts to redirect funds from the user's company or their vendors. The avenue of access into these accounts is generally phishing, which has become increasingly sophisticated through the use of AI and automated tools. Once a user has been compromised, the threat actors sometimes remain in their accounts for months, waiting for an opportunity to redirect ACH payments, invoices, payroll, etc. We have seen businesses lose millions of dollars because they are paying the threat actor instead of their vendor. Often this goes on for months, and when it is discovered, both parties are impacted. Our company also deals with Ransomware investigations, cryptocurrency investigations, and a variety of other cybercrimes. AI has been highly leveraged by threat actors in each of these areas. Fortunately, it can also assist in our investigations, as it can help sift data and correlate indicators of compromise. Many of the new forensic tools we use are beginning to leverage AI, but so far the advantage is still to the attackers.
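The IOC-correlation step mentioned above can be sketched in a few lines: the same indicator appearing across independent log sources is a stronger signal of compromise than a single hit. The log entries and indicators below are invented for illustration, not real data.

```python
# Minimal sketch of correlating indicators of compromise (IOCs) across log
# sources. Log data and IOC list are hypothetical examples.
from collections import defaultdict

def correlate_iocs(logs, iocs):
    """Return a map of IOC -> list of log sources where it appears."""
    hits = defaultdict(list)
    for source, entries in logs.items():
        for entry in entries:
            for ioc in iocs:
                if ioc in entry and source not in hits[ioc]:
                    hits[ioc].append(source)
    return dict(hits)

logs = {
    "mail_gateway": ["login from 203.0.113.7", "attachment invoice.pdf"],
    "vpn": ["session start 203.0.113.7", "session start 198.51.100.2"],
    "edr": ["outbound beacon to evil.example.net"],
}
iocs = ["203.0.113.7", "evil.example.net"]

# The IP seen at both the mail gateway and the VPN is the one to chase first.
print(correlate_iocs(logs, iocs))
```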
Former FBI Special Agent, Cybersecurity & Safety Expert, Keynote Speaker, Author, and Consultant at FBI John
Answered 8 months ago
As a former FBI Special Agent and member of the Bureau's Cyber Division, I spent over 20 years investigating cybercrime, including ransomware, state-sponsored attacks, and emerging threats tied to AI. Today, I speak around the world on the intersection of cybersecurity and national security, advising Fortune 100 companies and government agencies on how to prepare for the next wave of AI-driven cyber threats. I've briefed at the White House, testified before Congress, and regularly appear on Fox News, CNN, and CNBC breaking down complex cyber risks in plain language. I'd be honored to contribute to The Futurist and offer firsthand insight into how ransomware, phishing, deepfakes, and AI-driven attacks are rapidly evolving - and what we must do now to stay ahead. I've authored multiple books on safety and cybercrime (including one with Simon & Schuster) and would be glad to help make this episode as sharp and impactful as possible. - John Iannarelli www.FBIJohn.com
I'm reaching out to express interest in participating in the upcoming Futurist episode on cybersecurity, ransomware, malware, and the impact of AI. I'm the founder and principal consultant at Input Output, a cybersecurity and infosec compliance firm operating since 2018. We specialize in offensive security engagements, including penetration testing, red team operations, and social engineering exercises across highly regulated industries. With AI reshaping both the threat landscape and the defense playbook, I can speak directly to: The current state of cybersecurity in the AI era Real-world risks and emerging AI-driven attack vectors The financial and operational consequences of cyber intrusions How ransomware, phishing, and malware are evolving in response to generative AI I'm actively engaged on the front lines of cyber offense and defense, which gives me a real-time view of what organizations are up against. Whether it's assessing how deepfake-enabled social engineering is maturing or confronting the limitations of current detection frameworks, I bring a perspective grounded in execution, not theory. I'd be glad to contribute in any format—on camera, behind the scenes, or providing technical consultation. Let me know what works best for your team. Looking forward to the opportunity. James F Bowers II CEO & Founder, Input Output JBowers@InputOutput.com 561.408.0028
I've been protecting Austin businesses since my days as IT Director at Chuy's/Krispy Kreme, and the AI threat landscape has completely flipped the script on traditional cybersecurity. The most dangerous shift I'm seeing isn't just smarter attacks—it's AI that weaponizes your own employee behavior patterns against you. Last quarter, I helped a manufacturing client whose AI-powered attacker had studied their email patterns for months before striking. The system perfectly mimicked their vendor communication style, invoice timing, and even payment request language. The attack bypassed every traditional email filter because it wasn't using known phishing templates—it was generating completely original, contextually perfect fraud attempts. The ransomware evolution to "Ransomware 3.0" is creating a perfect storm with AI acceleration. I'm seeing attackers use machine learning to identify the exact backup systems companies rely on, then encrypt those first before touching primary data. One client lost both their primary systems and their "secure" backups because the AI had mapped their entire recovery infrastructure during a three-week reconnaissance phase. What's keeping Austin businesses resilient is shifting from signature-based detection to behavioral anomaly monitoring. Instead of looking for known bad actors, we're watching for unusual data access patterns, abnormal login behaviors, and communication anomalies that signal AI-driven reconnaissance activities before they escalate to full attacks.
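The behavioral-anomaly idea described above can be illustrated with a deliberately tiny example: flag a login whose hour-of-day deviates sharply from a user's historical baseline. The baseline data and the z-score threshold are illustrative assumptions, not a production detector.

```python
# Toy sketch of behavioral anomaly monitoring: flag logins whose hour-of-day
# deviates sharply from a user's baseline. Data and threshold are illustrative.
import statistics

def is_anomalous(baseline_hours, login_hour, z_threshold=3.0):
    """Flag a login hour more than z_threshold std-devs from the baseline mean."""
    mean = statistics.mean(baseline_hours)
    stdev = statistics.pstdev(baseline_hours)
    if stdev == 0:
        return login_hour != mean
    return abs(login_hour - mean) / stdev > z_threshold

# A user who normally logs in around 9am:
baseline = [8, 9, 9, 10, 9, 8, 9]
print(is_anomalous(baseline, 9))   # typical hour -> False
print(is_anomalous(baseline, 3))   # 3am login stands out -> True
```

Real deployments model many signals at once (data access volume, source geography, process behavior), but the principle is the same: learn each entity's normal, then alert on deviation rather than on known-bad signatures.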
It's easy to talk about cybersecurity like it's a technical problem. Update your systems. Train your staff. Patch early. Patch often. But that framing is outdated. Cybersecurity today is economic warfare. And we're losing. AI has turned every script kiddie with a laptop into a nation-state-level threat. Generative tools can now write malware that morphs on the fly, auto-adjusting to evade detection. Deepfakes are no longer novelty clips of celebrity impersonations - they're being used to clone CFOs' voices, greenlight wire transfers, and dismantle supply chains. One of our clients found out after a $1.3M transfer that their "vendor" had been a synthetic deepfake operation built entirely from scraped LinkedIn data and Slack leaks. The narrative that AI will "augment defenders" is wishful thinking. The cost to attack has dropped to zero. The cost to defend has exploded. We're not in an arms race. We're in an asymmetric war where attackers rewrite the playbook every 48 hours. When we advise customers on loss prevention, we don't talk about firewalls. We talk about revenue protection. If a phishing campaign takes out your billing systems for 3 days during quarter-end, how do you explain that to investors? How does the CFO forecast earnings when ransomware encrypted half your cloud stack and the keys are held by a 16-year-old in Kyiv with a ChatGPT plugin? Even the best-managed companies fall for the basics. A large firm we worked with had every endpoint solution imaginable, but a fake Zoom invite link sent to a board member led to lateral movement and two months of silent exfiltration. The attacker didn't need to break in. He walked in, with full calendar access, thanks to a perfectly timed social engineering payload built by scraping board bios and podcast transcripts. The future of cybersecurity isn't defensive. It's transparent. Real-time evidence of your controls. Public attestations of your detection efficacy. A live pulse of your exposure map.
Anything less will be dismissed as theater. The lie that "compliance equals security" has cost the industry billions. The next wave of cybercrime will not look like the last. AI doesn't need to scale. It just needs one open port. And when it finds it, it will replicate, impersonate, obfuscate, and devastate - faster than any team can respond. Security isn't a moat. It's a currency. If you can't prove you're secure, your customers will pay someone who can.
I've seen AI-powered attacks evolve dramatically since founding Titan Technologies in 2008. The scariest trend isn't just smarter phishing—it's AI that learns your security patterns in real-time and adapts faster than traditional defenses can respond. Last month, we detected malware that was rewriting its own code every few hours to avoid our client's antivirus software. By the time their security team identified the threat signature, the malware had already morphed into something completely different. We had to implement behavioral analysis that watches for suspicious activities rather than known threat patterns. The financial impact hits small businesses hardest—53% of companies we assess have been breached in the past year, with 21% saying it nearly killed their business. One New Jersey client lost $180,000 when AI-generated deepfake audio convinced their CFO to wire money to what sounded exactly like their CEO requesting an urgent transfer. What keeps me up at night is quantum computing's timeline accelerating. I'm already pushing clients toward quantum-resistant encryption because when that dam breaks, every current security protocol becomes worthless overnight. The businesses preparing now will survive; the ones waiting will scramble to catch up after their data is already compromised.
Cybersecurity and digital progression make an interesting pair because as tech surges forward, it solves and creates new problems concurrently. Artificial intelligence is the perfect example of this: it allows cybersecurity businesses to monitor their attack surface more effectively and automatically, while also creating a whole host of new cyber threats. Even in something as basic as phishing, the wide availability of AI tools has made the once tried-and-true signs of poor spelling and grammar in an email a thing of the past. This balance has made this year in cybersecurity one that's extremely fast-paced and continually defined by innovation. We're seeing new technologies launch and embed themselves in tech stacks faster than ever before. Equally, we're seeing an unprecedented number of strange or novel attack vectors. Part of what's driving this turbulence is a lack of global compliance and regulation. If AI tools were more difficult to access, develop, and launch, it would be harder for threat actors to turn these new technologies against the companies they target. While there has been a major push (especially in China and Europe), the US is lagging behind. We need to move toward legislation that helps the cybersecurity industry protect both itself and its customers. The next 12 months are going to be pivotal in the cybersecurity space - and AI is both the culprit and the driving force.
Hi there, I'm Valerie Baccei, Head of PR at Huntress, a cybersecurity company founded by former NSA members. We would love the opportunity to be featured on The Futurist. Our executives (soon to be featured on Good Morning America) can speak to the state of cybersecurity and the role of AI in cybersecurity tools, as well as how it is enabling hackers to increase their attacks. Ransomware, phishing, malware, and AI-enabled attacks are our core areas of expertise. We have an elite team of threat hunters who find these types of attacks in our partners' and customers' environments every day. We are also experts on the risks and challenges of AI-based hacks. For example, we just posted this blog https://www.huntress.com/blog/inside-bluenoroff-web3-intrusion-analysis, which has received global press coverage. Honestly, we can talk about all the main discussion points you've shared. Below is a sample of commentary on one of the topics: - Loss prevention and financial implications of cyber hacks Loss prevention starts with protection, and technology plays an integral role. The financial implications range from small to devastating depending on the size of the business: the smaller the business, the more devastating ransomware can be. Enterprise companies have deeper pockets and can often pay. But it's not just the financial issues; it's the entire ecosystem that can break, as in healthcare with the Change Healthcare breach, which left doctors not getting paid on time and having to shut down their practices. Please email me directly at valerie.baccei@huntresslabs.com to discuss which of our cybersecurity leaders, ex-NSA founders, or cybersecurity analysts would be the best fit for your needs. Thank you for considering Huntress for this opportunity. I look forward to connecting directly via email. Best regards, Valerie Baccei Head of PR | Huntress
I'd be interested in participating. Through Vertriax, we've conducted security assessments across 70 countries and what I'm seeing is a critical blind spot—organizations are fortifying their digital perimeters while physical security systems become AI attack vectors. Last year during assessments in the pharmaceutical sector, we found that 40% of "smart" access control systems were vulnerable to AI-generated spoofing attacks. Attackers were using deepfake technology to bypass facial recognition systems at critical infrastructure points. One client's executive protection detail was compromised when threat actors used AI voice cloning to impersonate board members over internal communications. The financial impact is staggering in ways people aren't calculating. Beyond direct ransomware costs, we're tracking "cascade failures" where a single AI-driven breach of physical security leads to multi-million dollar operational shutdowns. A chemical plant client faced $12M in losses not from stolen data, but from AI-manipulated sensor readings that triggered false emergency protocols. What's coming next terrifies me more—AI systems learning to exploit the gap between physical and digital security. We're already seeing reconnaissance bots that map building layouts through publicly available IoT device data, then generate custom social engineering attacks targeting specific employees' daily routines. Traditional cybersecurity tools can't detect these hybrid threats.
I'm Rafay Baloch, CEO and Founder of REDSECLABS, a cybersecurity company specializing in security consulting, training, and other cybersecurity services. As a globally recognized cybersecurity expert and white-hat hacker, I have a proven track record of identifying critical zero-day vulnerabilities in web applications, products, and browsers. My work has helped protect millions of users worldwide, earning me accolades such as being named one of the "Top 5 Ethical Hackers of 2014" by Checkmarx and one of the "Top 25 Threat Seekers" by SC Magazine. AI has transformed cybersecurity into an exciting yet challenging domain. Attackers employ AI to develop sophisticated phishing emails, evasive ransomware, and convincing deepfakes that deceive even the most alert observers. At REDSECLABS, I use AI to build advanced security measures that detect threats quickly and defend businesses from major financial losses. Our tools tackle both fake videos and crypto scams through adaptive learning systems. What excites me about the future is that AI systems will start predicting cyberattacks before they occur, maintaining our defensive advantage against hackers. The key to success in this game is to combine human instinct with AI technology, because together they form an unbeatable combination.
The Futurist - crypto and cybercrime Crypto exchanges: security, threats and responses Cryptocurrency has transformed the way we think about money, representing a departure from traditional financial systems and offering a decentralised environment where transactions are verified through distributed consensus mechanisms. Since the birth of Bitcoin in 2009, the industry has exploded, with billions of dollars moving daily across decentralised systems, fueling everything from DeFi to NFTs, tokenised assets, and beyond. The decentralised nature of crypto empowers users with unprecedented ownership and control over their assets. However, that same decentralisation also introduces new risks. Limited traceability and a high degree of anonymity have made crypto particularly attractive to illicit actors and criminal activity. As attacks become more advanced, the ability to recover assets or prevent total loss is becoming a defining feature for platforms, custodians, and protocols alike. According to a 2024 report by Chainalysis, more than $2.2 billion was stolen from crypto platforms in a single year. One high-profile incident was the 2019 Binance hack, where attackers exploited API keys and used phishing and malware to steal over $40 million in Bitcoin. The fallout was significant; recovery was slow, costly, and in some instances, users never recouped their losses. More recently, the threat landscape has shifted toward ransomware and Distributed Denial of Service (DDoS) attacks that exploit vulnerabilities in centralised exchanges, blockchain systems and user interactions. Crypto exchanges are the most frequently and aggressively targeted entities in the digital asset space. With billions in user funds under management, 24/7 operations, and their role as the bridge between traditional finance and decentralised systems, exchanges are prime targets for both white-collar cybercriminals and highly sophisticated threat actors. 
What's particularly concerning is the evolution of these attack vectors. Advanced phishing campaigns, deepfake impersonations, and social engineering tactics are now common, making it harder to detect breaches and easier for attackers to manipulate internal systems or deceive employees.
Cybersecurity is growing more complex as digital transformation accelerates and technologies like AI, cloud computing, and smart infrastructure become widespread. These innovations bring great benefits but also widen the attack surface for cyber threats. Cybersecurity is no longer just a defensive measure; it's the cornerstone of national growth and global leadership. The path forward requires collective efforts from government agencies, private sector entities, and individuals, ensuring that cybersecurity becomes an intrinsic part of every digital initiative. At CPX, our work directly supports the UAE's vision of becoming a global innovation hub in cybersecurity. We heavily focus on building our strength and expertise, especially around AI-driven defense, threat intelligence sharing, and OT (Operational Technology) security. Our role goes far beyond providing cyber and physical security solutions and services; we are strategic partners in ensuring cyber resilience. Cybersecurity is a shared responsibility and can only function well if there is global awareness. A good example of international collaboration is one of our key contributions in shaping collective defense through platforms like Crystal Ball, where we work alongside the UAE Cyber Security Council and international partners to encourage intelligence sharing and strengthen collective response. Here is my profile for further details: https://drive.google.com/file/d/1QEZu5ctCOZAIg1k7_23UBhQdBTxbQnxz/view?usp=sharing For any further queries - media@cpx.net
As the first email provider globally that has implemented post-quantum cryptography - https://tuta.com/blog/post-quantum-cryptography - we could speak about the threats arising from quantum computing. Once quantum computers are a reality, all online communication is at risk as currently used encryption algorithms can be broken. Encrypted emails will then be as open as a postcard. Post-quantum cryptography fixes this, but quantum computing is making great progress at the moment (IBM Condor, Microsoft Majorana-1, Google Willow, Amazon Ocelot) while cryptography experts struggle to get funding to secure the current IT infrastructure - as many decision-makers still see quantum computing as a far-away threat. The race is on, but who will win?
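Hybrid post-quantum schemes like the one described above typically pair a classical key exchange with a post-quantum one, then combine both shared secrets through a key derivation function, so the session key stays safe as long as either exchange remains unbroken. A minimal sketch of that combining step, assuming both shared secrets are already established (the secret values here are placeholders, not real key material):

```python
# Sketch of combining a classical and a post-quantum shared secret via
# HKDF (RFC 5869 extract-and-expand). The two input secrets are placeholders;
# real ones would come from, e.g., an ECDH exchange and an ML-KEM (Kyber) KEM.
import hashlib, hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

classical_secret = b"\x01" * 32   # placeholder for an ECDH shared secret
pq_secret = b"\x02" * 32          # placeholder for a post-quantum KEM secret

# Concatenating both secrets means an attacker must break BOTH exchanges
# to recover the derived session key.
prk = hkdf_extract(salt=b"hybrid-kex", ikm=classical_secret + pq_secret)
session_key = hkdf_expand(prk, info=b"session key", length=32)
print(session_key.hex())
```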
My name is Alberto, and I am the Founder of notjustvpn, a startup specializing in AI-based image and pattern recognition and cybersecurity. Our work gives us a unique, ground-level perspective on the exact cybersecurity challenges your episode aims to explore, particularly concerning AI, deepfakes, and the human element. Your call for experts resonates deeply with our core mission. Here's how we can contribute compelling insights to your key discussion points: 1. The Current State of Cybersecurity & The "Human Bug": The biggest vulnerability isn't in the code; it's in our psychology. While companies build higher digital walls, attackers are simply learning to get an employee to open the door for them. We've moved from brute-force attacks to "brute-force persuasion." The battlefield has shifted from servers to the human mind, and AI is now the ultimate weapon in that battle. 2. Risks and Challenges of AI-Based Hacks The conversation around deepfakes often focuses on misinformation, but the immediate financial threat is identity fraud at scale. Our research shows how current generative AI models can already create startlingly realistic avatars, cloning voice, gestures, and appearance from just a handful of social media photos. We can provide concrete examples and discuss the disturbing ease with which a "digital ghost" of an anonymous person can be created for phishing or social engineering attacks. This isn't theoretical; it's happening now in niche online communities. 3. The Future of AI-Based Cybersecurity The future of cybersecurity isn't just about building better firewalls; it's about building better reality detectors. The arms race has shifted. It's no longer just about detecting malware signatures, but about authenticating reality itself. At notjustvpn, we are developing systems precisely for this purpose: to help systems and humans discern if a face, a voice, or a video is genuine or AI-generated. 
This is a critical new layer of defense, but it faces a major hurdle: the "walled gardens" of major AI companies who are reluctant to share model details, inadvertently creating blind spots that malicious actors can exploit. We believe our expertise in AI-driven image analysis and our frontline view of these emerging threats would provide your audience with a powerful, and perhaps unsettling, look into the true future of cybersecurity. We are available for an initial discussion at your earliest convenience. Sincerely, Alberto
I'd be interested in participating. Through tekRESCUE, we've been tracking how AI is fundamentally changing the attack landscape, and I speak to over 1000 people annually on these exact topics. The most dangerous shift I'm seeing is AI-powered attacks that don't target code vulnerabilities—they target human perception. We documented cases where attackers manipulated physical objects to fool AI systems, like making a stop sign appear as a green light to autonomous vehicles. This isn't theoretical; we're seeing adversarial examples deployed against business security cameras and access control systems right now. What keeps me up at night is the economic scale of what's coming. Cybercrime already costs between $600B-$1.5T globally, but our projections show this hitting $10.5T by 2025—that's one-eighth of the entire world economy. We're moving into an era where AI will launch constant, automated attacks without human intervention, forcing businesses to treat cybersecurity as an essential operational cost, not an optional expense. The solution isn't just better defense—it's treating AI systems like any other vulnerable software. At tekRESCUE, we've implemented continuous vulnerability disclosure programs with bounty rewards for AI system weaknesses. Military, law enforcement, and civil sectors need this most because they handle the most sensitive data, but every business storing employee or customer information is now a target for these AI-driven attack methods.
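Adversarial examples like the stop-sign case above are usually crafted by perturbing an input along the gradient of the model's score. A toy sketch of the fast gradient sign method (FGSM) on a linear classifier, with invented weights, input, and epsilon, shows the core mechanic in a few lines:

```python
# Toy FGSM (fast gradient sign method) sketch: nudge each input feature in
# the direction that most changes the model's score. The linear model,
# input, and epsilon are invented for illustration.
import numpy as np

w = np.array([0.8, -0.5, 1.2])   # toy linear classifier weights
x = np.array([1.0, 0.2, 0.5])    # benign input, scored positive (class 1)

def score(x):
    return float(w @ x)          # positive score => class 1

# For a linear model the gradient of the score w.r.t. x is just w,
# so the adversarial step is a signed, bounded nudge per feature.
eps = 0.6
x_adv = x - eps * np.sign(w)     # push the score downward

print(score(x))      # positive: classified as class 1
print(score(x_adv))  # negative: prediction flipped by a small perturbation
```

Real attacks on vision systems apply the same idea through a deep network's gradients, which is why a sticker-sized physical perturbation can flip a classifier's output.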
I run a genomics platform that processes millions of sensitive health records across global institutions, so cybersecurity isn't theoretical for me—it's mission-critical. When we built federated analysis systems for organizations like Genomics England and major pharma companies, we had to solve a fundamental problem: how do you analyze data without exposing it to breach risks? The biggest AI cybersecurity threat I see isn't the flashy deepfakes everyone talks about—it's AI-powered reconnaissance that maps your data infrastructure before attacking. We've seen attempted attacks where malicious actors used ML to identify patterns in our API responses, trying to reverse-engineer our security architecture. Our counter-approach uses AI-driven anomaly detection that flags unusual query patterns within milliseconds, often catching reconnaissance attempts before they escalate. Here's what most people miss about ransomware in healthcare: it's not just about locking files anymore. Modern attacks target the algorithms themselves—corrupting AI models used for drug discovery or patient matching. We've implemented cryptographic verification for all our ML pipelines, ensuring that even if someone breaches the perimeter, they can't poison the models that drive billion-dollar research decisions. The financial stakes are insane in our space. A single compromised genomic dataset can represent $100M+ in research investment, and pharma companies will abandon partnerships instantly if there's any security question. That's why we built our entire platform around zero-trust architecture from day one—every computation is isolated, every data movement is logged, and researchers never see raw sensitive data, just the analytical results they need.
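Cryptographic verification of ML artifacts, as described above, usually means recording an authenticated digest of each model file at training time and refusing to load anything whose digest no longer matches. A minimal sketch with a keyed HMAC (the key, the model bytes, and the workflow are hypothetical; production systems would use proper signing infrastructure and a secrets manager):

```python
# Sketch of tamper-detection for ML pipeline artifacts: HMAC each serialized
# model at training time, verify before loading. Key and model bytes are
# placeholders for illustration only.
import hashlib, hmac

SIGNING_KEY = b"example-key-from-a-secrets-manager"  # hypothetical key

def sign_artifact(data: bytes) -> str:
    """Authenticated digest recorded in the pipeline manifest at train time."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def verify_artifact(data: bytes, expected_mac: str) -> bool:
    """Constant-time check before the model is ever deserialized or served."""
    return hmac.compare_digest(sign_artifact(data), expected_mac)

model_bytes = b"\x00fake-serialized-model\x00"   # stand-in for a model file
manifest_mac = sign_artifact(model_bytes)        # stored alongside the model

print(verify_artifact(model_bytes, manifest_mac))              # True: intact
print(verify_artifact(model_bytes + b"poison", manifest_mac))  # False: rejected
```

A keyed MAC (rather than a bare hash) matters because an attacker who can rewrite the model file can usually rewrite an unauthenticated manifest next to it.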
I'm Roman Surikov, founder and CEO of Ronas IT, a custom software development firm specializing in cybersecurity and AI-driven technology solutions. The intersection of AI and cybersecurity is radically reshaping the threat landscape. AI-based attacks, including increasingly realistic phishing campaigns, sophisticated deepfake impersonations, and autonomous ransomware deployment, pose unprecedented challenges. The financial consequences of these cyber-attacks have surged, with the average cost now in the millions, underscoring the urgent need for proactive defense strategies. However, AI isn't just enabling attackers—it's essential for defenders. Emerging cybersecurity solutions leverage AI for anomaly detection, rapid threat response, and deepfake identification. Image recognition algorithms can pinpoint deepfakes, effectively combating misinformation attempts, while proactive behavior analysis and predictive security protocols help organizations preempt attacks before they occur. Yet, securing the future requires moving beyond purely reactive defenses. Businesses must adopt AI-powered security platforms that integrate seamlessly with employee training and incident response strategies. Cybercriminals constantly innovate, so human-AI cybersecurity collaboration, centered around proactive threat anticipation, becomes crucial. The future of cybersecurity will be defined by an arms race of AI competition: smart attackers versus AI-equipped defenders. Organizations should strategically embrace AI's capabilities, continuously improving tools for deepfake detection, advanced threat hunting, and real-time risk assessment. Managing these risks proactively today will safeguard our digital economy tomorrow.
AI has fundamentally altered the cybersecurity equation—not just in the tools being used, but in the mindset required to stay ahead. Cyber threats are no longer isolated incidents; they're becoming dynamic, continuous, and often AI-driven. Malicious actors are now using generative AI to craft phishing campaigns that are indistinguishable from legitimate communication, and deploying polymorphic malware that evolves with every iteration. Defenders can't rely on static rules or signature-based detection anymore. Intelligence must be adaptive, learning in real-time. What's even more concerning is how the financial impact of these breaches is compounding. A single ransomware attack can cascade into regulatory fines, contract losses, and long-term reputational damage. The cost isn't just monetary—it's trust. AI holds promise not just as a shield, but as a predictive engine—flagging vulnerabilities before they're exploited, detecting deepfakes before they spread, and identifying behavior anomalies long before a breach occurs. But this promise only materializes when cybersecurity is integrated at a strategic level, not treated as a reactive function. The next frontier of InfoSec is about who can think—and act—faster: the attacker or the algorithm.
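The contrast drawn above between signature-based detection and adaptive detection can be shown in miniature: a one-byte change to a polymorphic sample defeats a hash signature, while a behavioral rule keyed on what the sample does still fires. The samples, hashes, and behavior labels below are invented for illustration.

```python
# Toy contrast: signature (hash) matching vs. behavioral detection.
# Samples and behavior labels are invented examples.
import hashlib

KNOWN_BAD_HASHES = {hashlib.sha256(b"malware-v1").hexdigest()}
SUSPICIOUS_BEHAVIORS = {"encrypt_user_files", "delete_backups"}

def signature_detect(sample: bytes) -> bool:
    """Match the sample's hash against a known-bad signature list."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

def behavior_detect(observed_actions: set) -> bool:
    """Flag the sample if any observed action is on the suspicious list."""
    return bool(observed_actions & SUSPICIOUS_BEHAVIORS)

variant = b"malware-v2"  # polymorphic variant: new bytes, same behavior
actions = {"open_socket", "encrypt_user_files"}

print(signature_detect(variant))  # False: the hash no longer matches
print(behavior_detect(actions))   # True: the behavior still gives it away
```

Production systems replace the static behavior list with learned models over process, file, and network activity, but the asymmetry is the same: mutating bytes is cheap for an attacker, while hiding ransomware-like behavior is much harder.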
Ransomware attacks have a way of exposing every hidden vulnerability in an organization. I recall a hospital IT director who described the chaos after their systems were locked down: surgeons pacing the halls, unable to access patient records, and staff scrambling to revert to paper processes. The financial fallout was only part of the story; the real cost was the erosion of trust and the scramble to restore operations. AI has brought both promise and peril to cybersecurity. I have watched phishing emails evolve from clumsy scams to eerily convincing messages, crafted by AI to mimic internal communication styles. One security analyst told me he almost clicked a link himself, despite years of training. This shift underscores the need for constant vigilance and adaptive defenses, not just reliance on technical solutions. I see AI as both a shield and a sword. Deepfake detection and image recognition are advancing, but so are the tools used by attackers. The lesson I share with clients is simple: invest in your people, foster a culture of security, and be ready to pivot quickly. Cybersecurity is, at its heart, a human challenge as much as a technical one.