Vice President of Product Management: Platform, Mobile, Risk, and AI at VikingCloud
AI, used well, buys your team time. It filters the alert noise, stitches together context, and hands you the first draft of what's going on so you're not burning the first five minutes clicking through tabs. Think of it as the junior analyst who never sleeps—fast, consistent, and good at pattern work.

The real value shows up in triage. When something pops, you get the "who/what/where" in plain English: has this user logged in from here before, has this host talked to that domain, have we seen this sequence of events? Analysts start with a short brief instead of a blank page. Smaller teams feel this the most; after hours, AI can sort routine stuff and only wake a human when it matters.

Two realities to keep in view. First, attackers use the same tools. We're already seeing more convincing phishing, quicker credential abuse, and malware that changes its look mid-campaign. Second, patterns aren't judgment. A sales leader in a new city might be fine, or it might be the start of a problem. People still make the call.

Over-trust is a risk. A confident dashboard can still be wrong. If an explanation isn't there—why a user was flagged, why an action is recommended—you'll struggle to defend the decision to an auditor or to your execs. Treat explainability like any other control: if you can't show your work, you don't ship it.

The way to get this right is simple. Pair AI with humans. Let the system gather evidence and draft the timeline; let your people confirm, decide, and communicate. Close the loop every week by marking a few good catches and a few false alarms and feeding them back. Ask vendors three straight questions: what data do you ingest, how do you isolate my data from other customers, and can you walk me through a real decision end-to-end?

Start small. Pick one use case—phishing triage, login anomalies, or duplicate-alert clustering—and prove it moves the numbers you care about: time to triage, time to contain, earlier detection. Keep guardrails around anything touching production data: role-based access, logging, change control. Measure outcomes, not dashboards.

AI won't run your security program. It will make a good team faster and a stretched team more effective. Use it where it saves time now, keep humans in the sensitive loops, and hold it to the same standard you hold the rest of your controls: clear, explainable, and accountable.
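To make the "start small" advice concrete, here is a minimal sketch of one starter use case named above, duplicate-alert clustering. It is illustrative only: the dictionary fields (rule, host, user, ts) and the 30-minute window are assumptions, not any vendor's schema.

```python
# Group repeat alerts into one triage item: same rule, host, and user
# inside the same coarse time slot counts as a single incident.
from collections import defaultdict
from datetime import datetime

def bucket(alert, window_minutes=30):
    ts = alert["ts"]  # a datetime
    slot = (ts.hour * 60 + ts.minute) // window_minutes
    return (alert["rule"], alert["host"], alert["user"], ts.date(), slot)

def cluster_alerts(alerts):
    clusters = defaultdict(list)
    for alert in alerts:
        clusters[bucket(alert)].append(alert)
    return clusters

alerts = [
    {"rule": "brute-force", "host": "web01", "user": "svc_api",
     "ts": datetime(2025, 1, 6, 2, 14)},
    {"rule": "brute-force", "host": "web01", "user": "svc_api",
     "ts": datetime(2025, 1, 6, 2, 21)},
]
for key, group in cluster_alerts(alerts).items():
    print(key[0], "on", key[1], "->", len(group), "alert(s)")  # one item, not two
```

Even a crude grouping like this gives you a before/after number (alerts per analyst-hour) to judge whether the pilot moves the metrics that matter.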
Artificial Intelligence is no longer a futuristic concept in cybersecurity—it's the new frontline. At HEROIC, we see AI not just as a tool, but as a force multiplier that fundamentally reshapes how we detect, respond to, and predict cyber threats.

The benefits are massive. AI enables cybersecurity systems to process billions of data points in real time—from network traffic anomalies and behavioral patterns to dark web activity and leaked credentials. Instead of relying on slow, reactive models, AI allows us to detect threats before they become breaches.

We use AI at HEROIC to:
- Analyze vast volumes of dark web and criminal intelligence to identify exposed identities and emerging threats.
- Score and prioritize risks across enterprise environments using machine learning models trained on breach behavior.
- Automate response actions to limit damage and prevent lateral movement in compromised systems.
- Personalize security awareness—tailoring alerts and education to each user based on their unique risk profile.

But with that power comes challenge. AI models are only as good as the data they're trained on—and biased or incomplete data can lead to false positives, blind spots, or overconfidence. Attackers are also using AI to supercharge phishing, social engineering, and malware development, creating an arms race that requires constant innovation. And as AI decision-making grows, transparency and accountability become essential—especially in highly regulated industries.

Another risk: over-reliance. Some companies adopt AI expecting it to replace their security teams, when in reality, it should augment and empower human analysts, not replace them.

In short, AI is a game-changer—but not a silver bullet. The future of cybersecurity will be won by those who combine human intelligence with machine learning, automation with context, and defense with anticipation. At HEROIC, that's our mission: to harness AI to not only protect identities, but to predict and prevent the threats of tomorrow—before they strike.
As an entrepreneur in AI and computer vision, I see strong parallels between visual threat detection and cybersecurity threat monitoring. In both cases, the key advantage of AI is speed and scale: models can analyze thousands of data points in seconds, flag anomalies, and trigger alerts faster than any human team could.

In physical security, for example, computer vision models can detect unauthorized access, identify suspicious objects, or monitor restricted zones. In cybersecurity, anomaly detection algorithms monitor network traffic or user behavior patterns to spot potential breaches before they escalate. The benefits are clear: automation reduces response time, scales protection across large infrastructures, and frees human experts to focus on complex cases.

However, the challenges are equally real. AI systems are only as good as the data they're trained on. Poor-quality or biased training datasets can lead to false positives (alert fatigue) or false negatives (missed threats). Another challenge is explainability: in high-stakes security scenarios, stakeholders need to understand why an AI flagged something as a threat.

For organizations, the path forward is a hybrid approach, combining AI's pattern recognition power with human judgment. That means investing in high-quality, diverse datasets, regularly retraining models, and implementing review processes that ensure alerts are validated before action is taken. In my own work, I've found that transparency and robust quality control are as important as the algorithms themselves. For the moment, AI in cybersecurity isn't about replacing people; it's about augmenting them with tools that keep pace with today's evolving threats.
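As one concrete illustration of the anomaly-detection pattern described above, here is a minimal sketch using scikit-learn's IsolationForest. The traffic features and values are assumptions for demonstration; a real pipeline would engineer features from actual flow logs.

```python
# Train an unsupervised anomaly detector on "normal" per-host traffic
# features, then score new observations. -1 means flagged as anomalous.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns (illustrative): bytes sent, connection count, distinct
# destination ports, per host per hour.
normal = np.random.default_rng(0).normal(
    loc=[50_000, 120, 8], scale=[5_000, 15, 2], size=(500, 3))
suspicious = np.array([
    [900_000, 40, 3],   # exfiltration-sized transfer
    [55_000, 130, 60],  # port-scan-like fan-out
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # likely [-1 -1]: both flagged
```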
A year ago, we faced a ransomware attempt that slipped past traditional defenses. Our AI-powered anomaly detection flagged unusual file access within 14 minutes, before encryption could spread. Instead of a full system lockdown, we only had to isolate a single server.

That incident pushed us to integrate AI into every layer of our security stack. Post-implementation, we've seen a 60% reduction in false positives, incident response time cut from hours to minutes, and zero successful breaches in the past 18 months. More importantly, our security team spends less time chasing ghost alerts and more time strengthening defenses.

Don't start by trying to AI-ify your entire security system. Identify one pain point, like phishing detection, fraud prevention, or insider threat monitoring, and pilot AI there first. Train it on your data, because context is king in threat detection. Also, pair AI with human oversight; algorithms are fast, but people are better at spotting the subtle social engineering plays that machines miss.

AI isn't a silver bullet, but it is a force multiplier. The goal isn't to replace human security teams; it's to give them superhuman speed and vision. In the end, the smartest defense is a mix of machine precision and human intuition.
AI already helps a lot in my work — it's useful for log analysis, handling routine tasks, preparing documentation, and classifying security issues. But the challenge is that you must always pay attention to details — you need to re-check and correct AI's work, because it can add unnecessary wording or miss a classification. AI also works well for drafting security checklists and estimation plans, but estimating hours is not its strong point, so I always correct that part.

Another area where it's very effective is learning and knowledge summarization — this is one of AI's strongest features. Still, summaries should be checked against original sources, especially if they're for compliance or legal purposes.

For pentest scripting, AI can be a huge time saver — what previously took a day or two can often be done in about an hour, plus a few rounds of testing. The catch is that you must understand the script yourself so you can adjust or fix it if needed.

One interesting use case is when I need to check a standard — AI can show me the text from a particular section, so I can see if it applies to my documentation before buying the full text, or understand it in the context of the document. Sometimes, this small part is all you actually need. However, you shouldn't rely on AI for a full interpretation of a standard.
AI is becoming a powerful tool in cybersecurity — not because it's flawless, but because the volume and speed of threats today give teams very little room to breathe. In recent projects — especially in finance and SaaS — we've seen how machine learning helps reduce noise, surface the right patterns, and flag what truly matters. It doesn't replace people, but it gives them the space to think, prioritize, and act faster — and that alone can change the outcome.

But the challenges are real too. One of the first things we run into is what people often call the "black box" problem. AI might flag something — but if no one understands why, what do you do with that information? You still need people in the loop — not just to double-check, but to take responsibility when it counts.

And then there's the question of privacy. In AI-powered fraud detection, for example, models improve when fed behavioral data — but that requires access to sensitive workflows, logs, sometimes even client interactions. How much of that are you ready to open up just to make a system smarter?

That's why I believe AI in security needs to do more than detect. It has to fit into the way people already work — clearly, safely, and with enough transparency that trust isn't eroded in the process. The potential is clear. But to make it work, we need to design systems that support human judgment — not bypass it.
My perspective on the use of Artificial Intelligence (AI) in cybersecurity is that it's a double-edged sword: incredibly powerful for defense, but also a rapidly evolving tool for attackers.

Potential Benefits: AI excels at rapidly analyzing vast datasets, making it invaluable for threat detection and anomaly identification. It can spot patterns that human analysts might miss, improving the speed and accuracy of identifying malware, phishing attempts, and insider threats. AI-driven systems can also automate responses, like quarantining compromised systems or blocking suspicious traffic, leading to faster incident response times and reducing the window of vulnerability. Furthermore, AI can enhance predictive security, anticipating potential attack vectors before they materialize by analyzing global threat intelligence.

Challenges: The primary challenge lies in the AI arms race. As defenders leverage AI, attackers also employ it to create more sophisticated malware, highly convincing deepfakes for social engineering, and autonomous attack campaigns. This leads to a constant escalation of tactics. Another significant challenge is false positives, where AI flags legitimate activity as malicious, leading to alert fatigue for human teams. Conversely, AI can also produce false negatives, missing actual threats. Finally, the complexity of AI models can create a "black box" effect, making it difficult for security professionals to understand why an AI made a certain decision, which can hinder auditing and trust.

Ultimately, while AI is crucial for scaling cybersecurity defenses against modern threats, it requires continuous human oversight, ethical guidelines, and an understanding of its limitations to be truly effective. It augments human capabilities rather than replacing them entirely.
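One way to picture the automated response described above is a playbook that maps detections to containment actions, with a human approval gate for riskier steps. This is a hypothetical sketch: the action names stand in for real EDR or firewall API calls, and the detection types are invented.

```python
# Map detection types to (action, needs_human_approval). Destructive or
# ambiguous actions wait for an analyst; clear-cut ones run immediately.
PLAYBOOK = {
    "ransomware_behavior": ("isolate_host", False),  # act immediately
    "suspicious_outbound": ("block_ip", True),       # ask a human first
}

def respond(detection, approved_by_human=False):
    action, needs_approval = PLAYBOOK.get(detection["type"], (None, True))
    if action is None:
        return "escalate_to_analyst"      # unknown pattern: humans decide
    if needs_approval and not approved_by_human:
        return f"pending_approval:{action}"
    return f"executed:{action}({detection['target']})"

print(respond({"type": "ransomware_behavior", "target": "host-42"}))
# executed:isolate_host(host-42)
```

The approval flag is the human-oversight point the answer above insists on: automation shrinks the response window without removing accountability.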
AI is transforming cybersecurity in fascinating ways. On the positive side, it's like having a tireless security analyst who can spot patterns in millions of events that humans would simply miss. We're seeing AI handling the grunt work - automating those repetitive tasks that used to burn out our security professionals.

But here's the reality check - the bad guys have AI too. They're using it to craft smarter phishing emails, automate their attacks, and find vulnerabilities faster than ever. We're essentially in an AI arms race.

The biggest challenge I see day-to-day is trust. When an AI system blocks something critical or flags a legitimate user, the team needs to understand why. These aren't perfect systems - they can be fooled, they generate false alarms, and if the training data is flawed, they'll have blind spots that attackers can exploit.

The key thing to understand is that AI isn't replacing human security experts - it's amplifying what they can do. You still need human judgment, creativity, and intuition. AI gives us superhuman speed at processing data, but we provide the context and critical thinking. That partnership is where the real power lies in defending against modern cyber threats.
From my work at SAP and ServiceNow, two major platforms deeply embedded in global enterprise workflows, I've seen how AI in cybersecurity is both transformative and high-stakes. In large-scale ERP and workflow systems, the attack surface is vast: millions of transactions, APIs, and user interactions happen daily across distributed, hybrid, and regulated environments. AI offers the ability to detect, predict, and respond to threats at machine speed, something human teams alone cannot achieve.

At SAP, working on secure ERP integrations taught me that static, rules-based security is insufficient in today's dynamic threat landscape. AI-driven anomaly detection, fueled by behavioral analytics, can flag deviations in cost center transactions, payroll changes, or supply chain data that would otherwise go unnoticed. Similarly, at ServiceNow, developing Workflow Data Fabric and ERP Canvas with Zero Copy architectures reinforced the importance of minimizing data movement, reducing the attack vectors AI models must protect.

The potential benefits are substantial:
- Proactive Threat Detection: AI models can identify subtle, emerging threats across networks, APIs, and workflows before they escalate.
- Adaptive Defense: Models learn from evolving attack patterns, closing vulnerabilities faster.
- Automated Incident Response: AI can trigger workflow-based remediation in seconds, reducing downtime and loss.
- Supply Chain Security: AI-powered risk scoring for vendors and transactions helps prevent compromised third-party access.

However, challenges remain. Bias in AI models can lead to false positives or overlooked threats, straining security teams. Model explainability is critical: security decisions must be auditable for compliance (e.g., GDPR, FedRAMP). Data privacy risks emerge when training models on sensitive ERP or HR datasets, making privacy-preserving ML essential. And finally, adversarial AI, where attackers manipulate models, will demand equally adaptive defensive AI.

In my view, the key is Responsible AI in cybersecurity: embedding ethical design, access controls, and human-in-the-loop validation. AI should augment, not replace, human expertise, turning security teams into strategic responders rather than constant firefighters. With the right architecture and governance, AI can make enterprise systems not only more secure, but also more resilient, adaptive, and trusted.
Artificial Intelligence is transforming cybersecurity from a reactive function into a proactive, predictive capability. Instead of simply responding to threats after they occur, AI allows us to detect anomalies, analyze vast datasets in real time, and anticipate potential attack patterns before they cause harm. This fundamentally changes the game — enabling faster incident response, smarter threat hunting, and more accurate risk assessments.

The benefits are undeniable:
- Speed and Scale: AI can process and correlate millions of signals in seconds, far beyond human capacity.
- Predictive Insights: Machine learning models can spot subtle patterns that indicate an attack long before traditional systems would flag them.
- Automation: Routine security tasks can be automated, freeing skilled professionals to focus on complex problem-solving.

That said, there are challenges we must address:
- Bias and False Positives: Poorly trained models can generate noise or miss critical threats.
- Adversarial AI: Attackers are now using AI to craft more sophisticated, evasive threats.
- Human Oversight: AI should enhance — not replace — human expertise. The final judgment on critical security decisions must remain with trained professionals.

At its best, AI is not a silver bullet but a force multiplier. In cybersecurity, it works most effectively when paired with strong governance, skilled analysts, and a deep understanding of the evolving threat landscape. It's a partnership between human intelligence and machine intelligence — and that's where the real power lies.
Artificial intelligence is already playing a major role in cybersecurity. It is not a future concept; it is built into many tools we use today. Endpoint detection platforms rely on AI to identify suspicious behavior, while email filters use machine learning to catch phishing attacks that traditional rules might miss.

AI is especially useful for analyzing large volumes of data and spotting patterns quickly. It can detect unusual activity, automate parts of the response process, and reduce the time it takes to identify real threats. In under-resourced environments, this speed and automation can make a real difference.

However, current AI tools have limits. Many rely on historical data, making them less effective against novel attacks. Biases in the data can also lead to false positives or missed alerts. And while AI can flag anomalies, it often lacks the context to determine whether something is genuinely dangerous or simply unusual.

Looking forward, AI will only become more critical, especially as cybercriminals begin using AI themselves to automate attacks and evade detection. Defenders will need to improve AI's adaptability and better integrate it with human analysis. The future of cybersecurity will be shaped by how well we balance automation with human insight. AI is a powerful tool, but it works best when paired with experienced professionals who can interpret and act on what it finds.
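For a sense of how the ML-based email filtering mentioned above works under the hood, here is a minimal scikit-learn sketch. The four inline emails are purely illustrative; production filters train on large labeled corpora and use many signals beyond body text (headers, sender reputation, URLs).

```python
# Learn word patterns that separate phishing from legitimate mail,
# then score a new message. TF-IDF + logistic regression is one of
# the simplest text-classification baselines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password here immediately",
    "Urgent: wire transfer needed, reply with banking details",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

# Probability the new message is phishing (class 1).
print(clf.predict_proba(
    ["Please verify your password to avoid account suspension"])[0][1])
```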
I see artificial intelligence as a game-changer in cybersecurity, offering powerful tools to stay ahead of increasingly complex threats. One of the biggest advantages is scale. AI can analyze massive volumes of data in real time, helping identify patterns, detect anomalies, and flag suspicious activity much faster than human analysts alone. This makes it especially useful for threat detection, vulnerability scanning, and incident response.

AI also improves operational efficiency. By automating repetitive tasks like log analysis and alert triage, it reduces fatigue and frees up time for teams to focus on deeper investigation and strategic planning. Some AI systems even support predictive threat modeling, helping organizations take preventive action before damage occurs.

But there are real challenges. Attackers are also using AI to launch more sophisticated phishing campaigns, generate deepfakes, and identify system weaknesses at scale. The same tools that protect us can also be turned against us.

Another risk is overreliance. AI models can produce false positives or miss subtle threats, especially if they are trained on biased or incomplete data. When the model's decision-making is not explainable, it becomes hard to trust or verify its outputs. This can lead to either misplaced trust or missed opportunities to intervene.

That's why I believe AI should be used to support, not replace, human judgment. The best outcomes happen when AI is paired with skilled analysts who can provide context, question assumptions, and guide ethical use. This human-in-the-loop approach ensures we benefit from AI's speed and scale without losing sight of security fundamentals or accountability.

Responsible use of AI in cybersecurity means regularly validating models, keeping humans engaged in oversight, and creating a culture where speed does not come at the cost of accuracy or trust. When used thoughtfully, AI can help us shift from reacting to threats to proactively managing and reducing risk.
For the most part, AI will amplify existing workflows and data governance a company already has in place. Used effectively, it removes busywork and catches weak signals. Used badly, it will reduce your cybersecurity efforts to theater and create vulnerabilities.

Benefits:
- Speed: log summaries, correlating weak indicators, catching weak signals - giving analysts more time to focus on decisions.
- Just-in-time guardrails: AI-powered inline risk scoring on payments, access changes, or vendor edits (see the sketch after this answer).
- Safer defaults: automatic suggestions for privilege levels, token expiry, and FIDO2 enforcement.
- Realistic drills: generating safe, plausible pretexts to test workflows against vishing/voice-clone scenarios.

Challenges:
- Privacy creep: collecting content for the model can turn into monitoring your employees. Avoid eroding trust, or people will route around controls.
- Hallucinations and overconfidence: even with the right data, outputs can be hallucinated or confidently wrong. Human verification is essential.
- Explainability and accountability: if you can't explain why an AI-driven nudge happened, you can't defend its existence.
- Vendor and model drift: models change, risks shift, and your controls may silently degrade unless you monitor them.

Using AI in cybersecurity isn't inherently positive or negative - it really depends on the company's systems, processes, and data handling. It's similar to other SaaS in this sense, but the consequences of bad practice may be realized faster, or at a greater scale, due to AI's flexibility and broad applications.

The real "AI threat" is external - generative AI models are being developed rapidly, and threat actors are often among the first to leverage new technology. Expect more voice and video clones, targeted, customized phishing, and sophisticated social engineering as time goes on. These kinds of attacks will likely get consistently easier and cheaper to run over time.
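The "just-in-time guardrails" bullet above can be sketched as a scoring gate that runs before a payment executes. Everything here is an assumption for illustration: the feature names, weights, and threshold are a transparent stand-in for a trained model.

```python
# Score a payment before it goes out; route risky ones to a human.
ESCALATE_THRESHOLD = 0.7

def score_payment(payment):
    # Stand-in for a trained model: a transparent weighted score over
    # a few signals that commonly matter in payment fraud.
    score = 0.0
    if payment["new_beneficiary"]:
        score += 0.4
    if payment["amount"] > 10 * payment["sender_90d_avg"]:
        score += 0.4
    if payment["after_hours"]:
        score += 0.2
    return min(score, 1.0)

def guardrail(payment):
    risk = score_payment(payment)
    if risk >= ESCALATE_THRESHOLD:
        return "hold_for_review", risk  # a human decides, as urged above
    return "allow", risk

print(guardrail({"new_beneficiary": True, "amount": 120_000,
                 "sender_90d_avg": 4_000, "after_hours": True}))
# ('hold_for_review', 1.0)
```

Keeping the scoring explainable, as here, is exactly what the explainability bullet demands: you can defend every hold to an auditor.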
AI might be fueling a new wave of cyber threats, but it will also be the sharpest weapon we've got to fight back. We've heard how AI is giving rise to more sophisticated phishing scams, prompt injections, deepfakes and other security risks. But not everyone has considered how the technology might help us stay safe. For example, here are some ways AI might help:

- Detect phishing emails faster by analyzing email patterns, unusual metadata, or linguistic red flags.
- Analyze voice or facial micro-patterns to spot inconsistencies in deepfakes.
- Detect poisoned data by monitoring datasets for anomalies or outliers that suggest tampering.
- Run constant simulations and stress tests to predict how a model might be tricked and flag suspicious inputs.
- Detect network threats — AI excels at scanning huge volumes of data and logs, detecting patterns that indicate intrusions, malware, or unusual behavior.
- Behavioral anomaly detection — AI can build a baseline of normal user behavior and flag anything odd (a minimal sketch follows this answer).
- Model watermarking — AI can embed watermarks or fingerprints in models to track and identify stolen versions, even after they've been slightly modified.

The biggest challenge will be keeping up with fast-evolving tech. If you're trying to fight that with human-only defenses, you're bringing a knife to a gunfight. In this AI era, only AI can match AI.
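As referenced in the behavioral-anomaly bullet above, here is a minimal sketch of the idea: learn a user's normal daily volume of some action, then flag large deviations with a z-score. The data and threshold are illustrative assumptions; real systems baseline many behaviors at once.

```python
# Per-user baseline of daily action counts; flag days that deviate
# by more than z_threshold standard deviations.
import statistics

def build_baseline(daily_counts):
    """daily_counts: past per-day counts of one action for one user."""
    return statistics.mean(daily_counts), statistics.stdev(daily_counts)

def is_anomalous(today, baseline, z_threshold=3.0):
    mean, stdev = baseline
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_threshold

downloads = [12, 9, 14, 11, 10, 13, 12]  # files downloaded per day
baseline = build_baseline(downloads)
print(is_anomalous(400, baseline))  # True: possible bulk exfiltration
```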
AI is transforming cybersecurity from reactive to proactive. As a cybersecurity professional and founder, I see its biggest benefit in surfacing threats faster than humans ever could by spotting patterns in behavior, access, or anomalies that would otherwise go unnoticed. But the challenge is that the same tech is available to attackers. We're in an arms race where speed and context are everything, and AI without human oversight can just automate mistakes at scale.
As the founder of tekRESCUE, a company deeply involved in both AI consulting and cybersecurity, I've seen the dual nature of artificial intelligence. AI offers immense productivity benefits by automating tedious tasks and freeing up human talent, yet integrating it introduces unique and complex security challenges.

The primary challenge lies in the inherent vulnerability of AI systems to subversion. Adversarial examples can fool AI models into drawing incorrect inferences, like making a self-driving car perceive a stop sign as a green light, potentially leading to catastrophic real-world consequences. We anticipate an escalation of cybercrime costs, with future attacks increasingly initiated by AI controlled by human adversaries.

To address this, our perspective is that AI models must be treated with the same rigorous cybersecurity approach as any other critical software. This involves amending current cybersecurity initiatives to specifically cover AI vulnerabilities and establishing an updated, routine vulnerability disclosure process for AI systems, often through incentivized vulnerability discovery such as bug bounty programs.
Artificial Intelligence in cybersecurity is like hiring a 24/7 pattern-obsessed analyst onto your team. It speeds up the process, but it's not totally safe and needs constant monitoring.

A few years back, I was working with a financial technology business that needed to monitor any irregular network behavior in its systems. We integrated Darktrace's AI cybersecurity platform for this. A couple of days later, it alerted us to unusual API call patterns at 2 AM. At first we thought it was a data breach, but it turned out to be a misconfigured script from a junior developer. Despite being a false alarm from a security standpoint, the detection was still valuable: it prevented an outage that would have affected thousands of users.

This is the best advantage of AI in cybersecurity: it can detect and process irregularities far faster than any human team. Meanwhile, the challenges are twofold: false alerts can cause alert fatigue in the team, and there's the risk of over-reliance on the AI. AI is best at spotting patterns, but understanding intent still requires human context. AI in cybersecurity must always be paired with skilled analysts interpreting the data; without humans, it's like a plane flying on autopilot into a thunderstorm.
I'm Dario Ferrai, co-founder of LLMAPI.dev, a unified API platform where developers use one interface to power AI models like GPT-4o, Claude, or Mistral in response to text input.

The field of AI models includes both offensive and defensive applications. One of our cybersecurity-focused clients uses an AI model to simulate attacks: a red team engineer used open-source LLMs to build a system that mimics threats by generating realistic phishing emails and targeting soft victims based on staff biographies. On the defensive side, a team is using Claude for long-context intrusion analysis and Mistral for real-time anomaly detection. These AI models significantly enhance threat detection, enabling quick log-file analysis and automatic incident-report generation, ultimately improving threat monitoring.

What is remarkable is how quickly the offensive side moves, whereas defensive capabilities often struggle due to reliance on false-positive-prone detections or slow implementations. Attackers are innovating faster than SOC teams, which often rely on outdated software and workflows within restricted environments, or feel constrained by compliance requirements.
Artificial intelligence can be very useful in threat analysis because immense sets of data can be analyzed in real time to identify anomalous behaviors. Predictive functionality makes it possible to identify vulnerabilities in advance, before breaches occur. Automating mundane activities lets cybersecurity professionals work on more complicated challenges, and defenses keep getting tougher as machine learning models adjust to emerging threats. Deploying AI responsibly requires ethical consideration and sound oversight.

The advantages are faster threat detection, shorter response times, and greater accuracy in identifying vulnerabilities. Automating monotonous work increases productivity and frees teams to focus on strategic priorities, and the defenses themselves adapt as new threats emerge. Among the challenges are the possibility that AI systems can themselves be used to stage attacks, and the need to invest in technology and qualified specialists. Another barrier is striking a balance between innovation, data privacy, and ethical concerns.
AI's scanning ability allows cybersecurity personnel to analyze vast amounts of network traffic for irregularities, perform vulnerability assessments, and initiate quick responses. It scales in a way few other solutions can, particularly where human staff is limited. AI systems have two main weaknesses: they can misinterpret adversarial inputs, and they can be overwhelmed by excessive noise when not properly tuned. Teams that rely solely on AI can miss sophisticated threats that need human intuition to identify. AI is most effective when implemented as a tool that enhances human analysts' work, backed by scheduled audits, updated training data, and ongoing supervision to keep the system accurate and resilient against new threats.