Vice President of Product Management: Platform, Mobile, Risk, and AI at VikingCloud
Answered 7 months ago
AI, used well, buys your team time. It filters the alert noise, stitches together context, and hands you the first draft of what's going on so you're not burning the first five minutes clicking through tabs. Think of it as the junior analyst who never sleeps—fast, consistent, and good at pattern work.

The real value shows up in triage. When something pops, you get the "who/what/where" in plain English: has this user logged in from here before, has this host talked to that domain, have we seen this sequence of events? Analysts start with a short brief instead of a blank page. Smaller teams feel this the most; after hours, AI can sort routine stuff and only wake a human when it matters.

Two realities to keep in view. First, attackers use the same tools. We're already seeing more convincing phishing, quicker credential abuse, and malware that changes its look mid-campaign. Second, patterns aren't judgment. A sales leader in a new city might be fine, or it might be the start of a problem. People still make the call.

Over-trust is a risk. A confident dashboard can still be wrong. If an explanation isn't there—why a user was flagged, why an action is recommended—you'll struggle to defend the decision to an auditor or to your execs. Treat explainability like any other control: if you can't show your work, you don't ship it.

The way to get this right is simple. Pair AI with humans. Let the system gather evidence and draft the timeline; let your people confirm, decide, and communicate. Close the loop every week by marking a few good catches and a few false alarms and feeding them back. Ask vendors three straight questions: what data do you ingest, how do you isolate my data from other customers, and can you walk me through a real decision end-to-end? Start small. Pick one use case—phishing triage, login anomalies, or duplicate-alert clustering—and prove it moves the numbers you care about: time to triage, time to contain, earlier detection.
Keep guardrails around anything touching production data: role-based access, logging, change control. Measure outcomes, not dashboards. AI won't run your security program. It will make a good team faster and a stretched team more effective. Use it where it saves time now, keep humans in the sensitive loops, and hold it to the same standard you hold the rest of your controls: clear, explainable, and accountable.
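Of the pilot use cases suggested above, duplicate-alert clustering is the quickest to prototype. Below is a minimal Python sketch, assuming alerts arrive as dictionaries with hypothetical `rule`, `host`, and `user` fields; real tools use fuzzier similarity measures than the exact-match key shown here.

```python
from collections import defaultdict

def cluster_alerts(alerts):
    """Group alerts that share the same rule, host, and user,
    so analysts triage one cluster instead of N near-duplicates."""
    clusters = defaultdict(list)
    for alert in alerts:
        # Normalize the fields that usually vary only cosmetically.
        key = (
            alert["rule"].strip().lower(),
            alert["host"].strip().lower(),
            alert.get("user", "").strip().lower(),
        )
        clusters[key].append(alert)
    return clusters

alerts = [
    {"rule": "Suspicious Login", "host": "web-01", "user": "alice"},
    {"rule": "suspicious login ", "host": "WEB-01", "user": "Alice"},
    {"rule": "Malware Beacon", "host": "db-02", "user": "svc-backup"},
]

clusters = cluster_alerts(alerts)
for key, group in clusters.items():
    print(key, "->", len(group), "alert(s)")
```

Even this crude grouping collapses the first two cosmetically different alerts into one cluster, which is exactly the kind of "moves the numbers" result (time to triage) a small pilot can measure.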
Artificial Intelligence is no longer a futuristic concept in cybersecurity—it's the new frontline. At HEROIC, we see AI not just as a tool, but as a force multiplier that fundamentally reshapes how we detect, respond to, and predict cyber threats. The benefits are massive. AI enables cybersecurity systems to process billions of data points in real time—from network traffic anomalies and behavioral patterns to dark web activity and leaked credentials. Instead of relying on slow, reactive models, AI allows us to detect threats before they become breaches.

We use AI at HEROIC to:
- Analyze vast volumes of dark web and criminal intelligence to identify exposed identities and emerging threats.
- Score and prioritize risks across enterprise environments using machine learning models trained on breach behavior.
- Automate response actions to limit damage and prevent lateral movement in compromised systems.
- Personalize security awareness—tailoring alerts and education to each user based on their unique risk profile.

But with that power comes challenge. AI models are only as good as the data they're trained on—and biased or incomplete data can lead to false positives, blind spots, or overconfidence. Attackers are also using AI to supercharge phishing, social engineering, and malware development, creating an arms race that requires constant innovation. And as AI decision-making grows, transparency and accountability become essential—especially in highly regulated industries.

Another risk: over-reliance. Some companies adopt AI expecting it to replace their security teams, when in reality, it should augment and empower human analysts, not replace them. In short, AI is a game-changer—but not a silver bullet. The future of cybersecurity will be won by those who combine human intelligence with machine learning, automation with context, and defense with anticipation.
At HEROIC, that's our mission: to harness AI to not only protect identities, but to predict and prevent the threats of tomorrow—before they strike.
As an entrepreneur in AI and computer vision, I see strong parallels between visual threat detection and cybersecurity threat monitoring. In both cases, the key advantage of AI is speed and scale: models can analyze thousands of data points in seconds, flag anomalies, and trigger alerts faster than any human team could. In physical security, for example, computer vision models can detect unauthorized access, identify suspicious objects, or monitor restricted zones. In cybersecurity, anomaly detection algorithms monitor network traffic or user behavior patterns to spot potential breaches before they escalate.

The benefits are clear: automation reduces response time, scales protection across large infrastructures, and frees human experts to focus on complex cases. However, the challenges are equally real. AI systems are only as good as the data they're trained on. Poor-quality or biased training datasets can lead to false positives (alert fatigue) or false negatives (missed threats). Another challenge is explainability: in high-stakes security scenarios, stakeholders need to understand why an AI flagged something as a threat.

For organizations, the path forward is a hybrid approach, combining AI's pattern recognition power with human judgment. That means investing in high-quality, diverse datasets, regularly retraining models, and implementing review processes that ensure alerts are validated before action is taken. In my own work, I've found that transparency and robust quality control are as important as the algorithms themselves. For the moment, AI in cybersecurity isn't about replacing people, it's about augmenting them with tools that keep pace with today's evolving threats.
A year ago, we faced a ransomware attempt that slipped past traditional defenses. Our AI-powered anomaly detection flagged unusual file access within 14 minutes, before encryption could spread. Instead of a full system lockdown, we only had to isolate a single server. That incident pushed us to integrate AI into every layer of our security stack. Post-implementation, we've seen a 60% reduction in false positives, incident response time cut from hours to minutes, and zero successful breaches in the past 18 months. More importantly, our security team spends less time chasing ghost alerts and more time strengthening defenses.

Don't start by trying to AI-ify your entire security system. Identify one pain point, like phishing detection, fraud prevention, or insider threat monitoring, and pilot AI there first. Train it on your data, because context is king in threat detection. Also, pair AI with human oversight; algorithms are fast, but people are better at spotting the subtle social engineering plays that machines miss.

AI isn't a silver bullet, but it is a force multiplier. The goal isn't to replace human security teams; it's to give them superhuman speed and vision. In the end, the smartest defense is a mix of machine precision and human intuition.
AI already helps a lot in my work — it's useful for log analysis, handling routine tasks, preparing documentation, and classifying security issues. But the challenge is that you must always pay attention to details — you need to re-check and correct AI's work, because it can add unnecessary wording or miss a classification. AI also works well for drafting security checklists and estimation plans, but estimating hours is not its strong point, so I always correct that part. Another area where it's very effective is learning and knowledge summarization — this is one of AI's strongest features. Still, summaries should be checked against original sources, especially if they're for compliance or legal purposes. For pentest scripting, AI can be a huge time saver — what previously took a day or two can often be done in about an hour, plus a few rounds of testing. The catch is that you must understand the script yourself so you can adjust or fix it if needed. One interesting use case is when I need to check a standard — AI can show me the text from a particular section, so I can see if it applies to my documentation before buying the full text, or understand it in the context of the document. Sometimes, this small part is all you actually need. However, you shouldn't rely on AI for a full interpretation of a standard.
AI is becoming a powerful tool in cybersecurity — not because it's flawless, but because the volume and speed of threats today give teams very little room to breathe. In recent projects — especially in finance and SaaS — we've seen how machine learning helps reduce noise, surface the right patterns, and flag what truly matters. It doesn't replace people, but it gives them the space to think, prioritize, and act faster — and that alone can change the outcome. But the challenges are real too. One of the first things we run into is what people often call the "black box" problem. AI might flag something — but if no one understands why, what do you do with that information? You still need people in the loop — not just to double-check, but to take responsibility when it counts. And then there's the question of privacy. In AI-powered fraud detection, for example, models improve when fed behavioral data — but that requires access to sensitive workflows, logs, sometimes even client interactions. How much of that are you ready to open up just to make a system smarter? That's why I believe AI in security needs to do more than detect. It has to fit into the way people already work — clearly, safely, and with enough transparency that trust isn't eroded in the process. The potential is clear. But to make it work, we need to design systems that support human judgment — not bypass it.
My perspective on the use of Artificial Intelligence (AI) in cybersecurity is that it's a double-edged sword: incredibly powerful for defense, but also a rapidly evolving tool for attackers.

Potential Benefits: AI excels at rapidly analyzing vast datasets, making it invaluable for threat detection and anomaly identification. It can spot patterns that human analysts might miss, improving the speed and accuracy of identifying malware, phishing attempts, and insider threats. AI-driven systems can also automate responses, like quarantining compromised systems or blocking suspicious traffic, leading to faster incident response times and reducing the window of vulnerability. Furthermore, AI can enhance predictive security, anticipating potential attack vectors before they materialize by analyzing global threat intelligence.

Challenges: The primary challenge lies in the AI arms race. As defenders leverage AI, attackers also employ it to create more sophisticated malware, highly convincing deepfakes for social engineering, and autonomous attack campaigns. This leads to a constant escalation of tactics. Another significant challenge is false positives, where AI flags legitimate activity as malicious, leading to alert fatigue for human teams. Conversely, models can also produce false negatives, missing actual threats. Finally, the complexity of AI models can create a "black box" effect, making it difficult for security professionals to understand why an AI made a certain decision, which can hinder auditing and trust.

Ultimately, while AI is crucial for scaling cybersecurity defenses against modern threats, it requires continuous human oversight, ethical guidelines, and an understanding of its limitations to be truly effective. It augments human capabilities rather than replacing them entirely.
AI is transforming cybersecurity in fascinating ways. On the positive side, it's like having a tireless security analyst who can spot patterns in millions of events that humans would simply miss. We're seeing AI handling the grunt work - automating those repetitive tasks that used to burn out our security professionals.

But here's the reality check - the bad guys have AI too. They're using it to craft smarter phishing emails, automate their attacks, and find vulnerabilities faster than ever. We're essentially in an AI arms race.

The biggest challenge I see day-to-day is trust. When an AI system blocks something critical or flags a legitimate user, the team needs to understand why. These aren't perfect systems - they can be fooled, they generate false alarms, and if the training data is flawed, they'll have blind spots that attackers can exploit.

The key thing to understand is that AI isn't replacing human security experts - it's amplifying what they can do. You still need human judgment, creativity, and intuition. AI gives us superhuman speed at processing data, but we provide the context and critical thinking. That partnership is where the real power lies in defending against modern cyber threats.
From my work at SAP and ServiceNow, two major platforms deeply embedded in global enterprise workflows, I've seen how AI in cybersecurity is both transformative and high-stakes. In large-scale ERP and workflow systems, the attack surface is vast: millions of transactions, APIs, and user interactions happen daily across distributed, hybrid, and regulated environments. AI offers the ability to detect, predict, and respond to threats at machine speed, something human teams alone cannot achieve.

At SAP, working on secure ERP integrations taught me that static, rules-based security is insufficient in today's dynamic threat landscape. AI-driven anomaly detection, fueled by behavioral analytics, can flag deviations in cost center transactions, payroll changes, or supply chain data that would otherwise go unnoticed. Similarly, at ServiceNow, developing Workflow Data Fabric and ERP Canvas with Zero Copy architectures reinforced the importance of minimizing data movement—reducing the attack vectors AI models must protect.

The potential benefits are substantial:
- Proactive Threat Detection: AI models can identify subtle, emerging threats across networks, APIs, and workflows before they escalate.
- Adaptive Defense: Models learn from evolving attack patterns, closing vulnerabilities faster.
- Automated Incident Response: AI can trigger workflow-based remediation in seconds, reducing downtime and loss.
- Supply Chain Security: AI-powered risk scoring for vendors and transactions helps prevent compromised third-party access.

However, challenges remain. Bias in AI models can lead to false positives or overlooked threats, straining security teams. Model explainability is critical: security decisions must be auditable for compliance (e.g., GDPR, FedRAMP). Data privacy risks emerge when training models on sensitive ERP or HR datasets, making privacy-preserving ML essential. And finally, adversarial AI, where attackers manipulate models, will demand equally adaptive defensive AI.
In my view, the key is Responsible AI in cybersecurity: embedding ethical design, access controls, and human-in-the-loop validation. AI should augment, not replace, human expertise, turning security teams into strategic responders rather than constant firefighters. With the right architecture and governance, AI can make enterprise systems not only more secure, but also more resilient, adaptive, and trusted.
Artificial Intelligence is transforming cybersecurity from a reactive function into a proactive, predictive capability. Instead of simply responding to threats after they occur, AI allows us to detect anomalies, analyze vast datasets in real time, and anticipate potential attack patterns before they cause harm. This fundamentally changes the game — enabling faster incident response, smarter threat hunting, and more accurate risk assessments.

The benefits are undeniable:
- Speed and Scale: AI can process and correlate millions of signals in seconds, far beyond human capacity.
- Predictive Insights: Machine learning models can spot subtle patterns that indicate an attack long before traditional systems would flag them.
- Automation: Routine security tasks can be automated, freeing skilled professionals to focus on complex problem-solving.

That said, there are challenges we must address:
- Bias and False Positives: Poorly trained models can generate noise or miss critical threats.
- Adversarial AI: Attackers are now using AI to craft more sophisticated, evasive threats.
- Human Oversight: AI should enhance — not replace — human expertise. The final judgment on critical security decisions must remain with trained professionals.

At its best, AI is not a silver bullet but a force multiplier. In cybersecurity, it works most effectively when paired with strong governance, skilled analysts, and a deep understanding of the evolving threat landscape. It's a partnership between human intelligence and machine intelligence — and that's where the real power lies.
Artificial intelligence is already playing a major role in cybersecurity. It is not a future concept; it is built into many tools we use today. Endpoint detection platforms rely on AI to identify suspicious behavior, while email filters use machine learning to catch phishing attacks that traditional rules might miss.

AI is especially useful for analyzing large volumes of data and spotting patterns quickly. It can detect unusual activity, automate parts of the response process, and reduce the time it takes to identify real threats. In under-resourced environments, this speed and automation can make a real difference.

However, current AI tools have limits. Many rely on historical data, making them less effective against novel attacks. Biases in the data can also lead to false positives or missed alerts. And while AI can flag anomalies, it often lacks the context to determine whether something is genuinely dangerous or simply unusual.

Looking forward, AI will only become more critical, especially as cybercriminals begin using AI themselves to automate attacks and evade detection. Defenders will need to improve AI's adaptability and better integrate it with human analysis. The future of cybersecurity will be shaped by how well we balance automation with human insight. AI is a powerful tool, but it works best when paired with experienced professionals who can interpret and act on what it finds.
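As a toy illustration of the kind of scoring an ML-based mail filter performs, here is a hand-weighted sketch in Python. The feature names, weights, and threshold are all invented for illustration; a real filter learns its weights from labeled mail rather than hard-coding them.

```python
# Hypothetical feature weights standing in for a trained model.
WEIGHTS = {
    "sender_not_in_contacts": 1.5,
    "urgent_language": 1.0,
    "mismatched_link_domain": 2.5,
    "attachment_macro": 2.0,
}

QUARANTINE_THRESHOLD = 3.0  # illustrative cutoff, not a real tuning

def phishing_score(features):
    """Sum the weights of the suspicious features present in a message."""
    return sum(WEIGHTS[f] for f in features if f in WEIGHTS)

msg_features = ["sender_not_in_contacts", "mismatched_link_domain"]
score = phishing_score(msg_features)
print(score, "-> quarantine" if score > QUARANTINE_THRESHOLD else "-> deliver")
```

The point of the sketch is the shape of the decision, not the numbers: a learned model replaces the hand-picked weights, which is how it catches phishing that a fixed rule set would miss.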
For the most part, AI will amplify the workflows and data governance a company already has in place. Used effectively, it removes busywork and catches weak signals. Used badly, it will reduce your cybersecurity efforts to theater and create vulnerabilities.

Benefits:
- Speed: log summaries, correlating weak indicators, catching weak signals, giving analysts more time to focus on decisions.
- Just-in-time guardrails: AI-powered inline risk scoring on payments, access changes, or vendor edits.
- Safer defaults: automatic suggestions for privilege levels, token expiry, and FIDO2 enforcement.
- Realistic drills: generate safe, plausible pretexts to test workflows against vishing/voice-clone scenarios.

Challenges:
- Privacy creep: collecting content for the model can slide into monitoring your employees. Eroding trust encourages people to route around controls.
- Hallucinations and overconfidence: even with the right data, outputs can be hallucinated, or confident and wrong. Human verification is essential.
- Explainability and accountability: if you can't explain why an AI-driven nudge happened, you can't defend its existence.
- Vendor and model drift: models change, risks shift, and your controls may silently degrade unless you monitor them.

Using AI in cybersecurity isn't inherently positive or negative; it depends on the company's systems, processes, and data handling. It's similar to other SaaS in this sense, but the consequences of bad practice may be realized faster, or at greater scale, because of AI's flexibility and broad applications.

The real "AI threat" is external: generative AI models are being developed rapidly, and threat actors are often among the first to leverage new technology. Expect more voice and video clones, targeted and customized phishing, and sophisticated social engineering as time goes on. These kinds of attacks will likely get consistently easier and cheaper to run over time.
I see artificial intelligence as a game-changer in cybersecurity, offering powerful tools to stay ahead of increasingly complex threats. One of the biggest advantages is scale. AI can analyze massive volumes of data in real time, helping identify patterns, detect anomalies, and flag suspicious activity much faster than human analysts alone. This makes it especially useful for threat detection, vulnerability scanning, and incident response. AI also improves operational efficiency. By automating repetitive tasks like log analysis and alert triage, it reduces fatigue and frees up time for teams to focus on deeper investigation and strategic planning. Some AI systems even support predictive threat modeling, helping organizations take preventive action before damage occurs. But there are real challenges. Attackers are also using AI to launch more sophisticated phishing campaigns, generate deepfakes, and identify system weaknesses at scale. The same tools that protect us can also be turned against us. Another risk is overreliance. AI models can produce false positives or miss subtle threats, especially if they are trained on biased or incomplete data. When the model's decision-making is not explainable, it becomes hard to trust or verify its outputs. This can lead to either misplaced trust or missed opportunities to intervene. That's why I believe AI should be used to support, not replace, human judgment. The best outcomes happen when AI is paired with skilled analysts who can provide context, question assumptions, and guide ethical use. This human-in-the-loop approach ensures we benefit from AI's speed and scale without losing sight of security fundamentals or accountability. Responsible use of AI in cybersecurity means regularly validating models, keeping humans engaged in oversight, and creating a culture where speed does not come at the cost of accuracy or trust. 
When used thoughtfully, AI can help us shift from reacting to threats to proactively managing and reducing risk.
AI might be fueling a new wave of cyber threats, but it will also be the sharpest weapon we've got to fight back. We've heard how AI is giving rise to more sophisticated phishing scams, prompt injections, deepfakes and other security risks. But not everyone has considered how the technology might help us stay safe. For example, here are some ways AI might help:

- Detect phishing emails faster by analyzing email patterns, unusual metadata, or linguistic red flags.
- Analyze voice or facial micro-patterns to spot inconsistencies in deepfakes.
- Detect poisoned data by monitoring datasets for anomalies or outliers that suggest tampering.
- Run constant simulations and stress-tests to predict how a model might be tricked and flag suspicious inputs.
- Detect network threats — AI excels at scanning huge volumes of data and logs, detecting patterns that indicate intrusions, malware, or unusual behavior.
- Behavioral anomaly detection — AI can build a baseline of normal user behavior, and flag anything odd.
- Model watermarking — AI can embed watermarks or fingerprints in models to track and identify stolen versions, even after they've been slightly modified.

The biggest challenge will be keeping up with fast-evolving tech. If you're trying to fight that with human-only defenses, you're bringing a knife to a gunfight. In this AI era, only AI can match AI.
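The behavioral-baseline idea above can be sketched in a few lines of Python. This toy z-score check flags a day whose metric sits far outside the historical spread; production systems model many signals jointly, but the principle is the same.

```python
import statistics

def flag_anomaly(history, today, threshold=3.0):
    """Flag today's value if it is more than `threshold` standard
    deviations from the historical mean (a simple z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # No variation in history: anything different is anomalous.
        return today != mean
    return abs(today - mean) / stdev > threshold

# Baseline: failed logins per day for one user over two weeks (made-up data).
baseline = [2, 1, 3, 2, 0, 1, 2, 3, 1, 2, 2, 1, 0, 2]
print(flag_anomaly(baseline, 2))   # a typical day
print(flag_anomaly(baseline, 40))  # a credential-stuffing-like spike
```

A typical day passes quietly while the spike trips the threshold, which is the "flag anything odd" behavior the bullet describes, reduced to its simplest statistical form.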
AI is transforming cybersecurity from reactive to proactive. As a cybersecurity professional and founder, I see its biggest benefit in surfacing threats faster than humans ever could by spotting patterns in behavior, access, or anomalies that would otherwise go unnoticed. But the challenge is that the same tech is available to attackers. We're in an arms race where speed and context are everything, and AI without human oversight can just automate mistakes at scale.
As the founder of Titan Technologies, a company dedicated to cybersecurity in Central New Jersey, I see how artificial intelligence is reshaping the cyber landscape. My work and my company's mission are fundamentally about understanding and responding to these evolving threats. AI is undeniably a double-edged sword, making cyber threats smarter, faster, and harder to detect. We're witnessing a rise in AI-powered phishing scams that create eerily personalized messages, mimicking trusted contacts, and AI-driven malware that learns to evade traditional defenses. For example, deepfake social engineering using AI-generated voices can trick CFOs into making fraudulent wire transfers. However, AI is also crucial for defense. We advise businesses to invest in advanced detection tools that use machine learning to spot these smarter attacks in real-time. Equipping your business with an AI-driven defense is essential for staying ahead. Beyond technical solutions, the human element remains critical; deepfakes and AI-improved social media exploitation demand a culture of verification. Furthermore, emerging threats like quantum computing have the potential to render current encryption obsolete, requiring businesses to adapt their long-term cybersecurity strategies now.
The Role of AI in Cybersecurity: Offensive and Defensive Perspectives

From my vantage point as both a cybersecurity engineer and technical director, AI is not just a tool; it's rapidly becoming an active participant in the cyber battlefield. Its role can be viewed through two distinct lenses: offensive and defensive security.

On the offensive side, AI-powered agents are already capable of continuously probing and attacking targets — testing defenses, exploiting vulnerabilities, and evolving their tactics at machine speed. These systems won't replace human penetration testers or red teamers, but they will act as force multipliers. Skilled engineers will manage, tune, and interpret the outputs of these AI agents, focusing on higher-level strategy while letting automation handle the repetitive reconnaissance and exploitation attempts.

On the defensive side, AI promises transformative capabilities in code security. We're moving toward AI-driven code reviews that can detect vulnerabilities in real time, propose patches, and even auto-remediate certain classes of bugs. But just as with offensive AI, the human role remains crucial — engineers will still be needed to oversee, validate, and approve AI-generated actions to ensure security measures are accurate, context-aware, and aligned with the project's risk tolerance.

The benefits are clear: routine, time-consuming activities will be automated, allowing security teams to focus on higher-impact work. Automation also means broader and deeper coverage — AI systems can monitor and test at scales no human team could sustain. The challenges are equally real: AI doesn't eliminate the complexity of building secure systems. We still need to design tailored application security programs that fit each project's architecture, threat model, and business priorities. AI can accelerate the work, but it won't replace the need for rigorous planning, governance, and integration into a broader security framework.
In short, AI is changing cybersecurity from a purely human-versus-human contest into a human-and-machine-versus-human-and-machine landscape. The organizations that win will be the ones that master both the technology and the orchestration between AI capabilities and human expertise.
As the founder of Sundance Networks, which deeply integrates IT, AI, and cybersecurity, my perspective is that AI is increasingly indispensable for robust defense. We see it as a silent, powerful partner that allows businesses to focus on growth, not tech struggles. A key benefit is AI's ability to proactively identify and resolve potential issues before they impact operations, significantly reducing disruptions and enabling faster response times for critical functions. For instance, our network security services leverage AI protection in firewalls and gateways to improve real-time threat neutralization, moving beyond traditional reactive measures. Furthermore, AI-driven Endpoint Detection and Response (EDR) provides advanced threat detection and comprehensive visibility far beyond conventional antivirus, offering crucial incident response and forensic analysis capabilities. These intelligent solutions lead to improved data protection and deliver meaningful insights for better business decisions. The primary challenge we help clients navigate isn't just the technical complexity, but also selecting the right scalable AI solution that truly aligns with unique business goals and integrates seamlessly with existing infrastructure without excessive cost. It's about bridging the gap between cutting-edge technology and demonstrable results.
My role as CEO and Co-founder of Lifebit involves building platforms that secure incredibly sensitive biomedical and genomic data using AI and high-performance computing, making data security and privacy paramount. My background in computer science, AI, and developing secure Trusted Research Environments gives me a unique perspective on leveraging AI for cybersecurity. AI offers immense benefits, particularly in proactive threat detection. Just as AI in clinical trials detects subtle safety patterns before human review, it can identify anomalous network behaviors and emerging cyber threats in real-time. It also greatly improves compliance and governance, automating granular access controls and ensuring robust audit trails within complex data ecosystems. However, significant challenges exist. Training effective AI models requires continuous access to high-quality, unbiased data, mirroring the data preprocessing and harmonization we emphasize for our own AI applications. The rapidly evolving landscape of cyber threats, often involving adversarial AI, necessitates constant model updates and highly skilled human oversight to stay ahead. AI functions best as a powerful partner to human expertise, not a replacement. This collaboration is key to ensuring that sophisticated AI systems are effectively guided by expert interpretation for optimal security outcomes.
AI Is Exponentially Changing Cybersecurity
Why Defenders and Attackers Are Both Racing to Wield the Same Weapon

Artificial Intelligence has become a force multiplier in cybersecurity. It brings speed, scale, and intelligence to both threat detection and defense automation. But it also brings new risks—especially when attackers use AI too. Let's break it down.

The Big Wins

Real-Time Detection: AI models monitor system behavior, not just known threats. This makes them ideal for catching zero-day exploits, insider abuse, and anomalies that fly under the radar of traditional tools.

Speed and Scale: AI ingests thousands of logs per second, flagging patterns and alerting security teams before a human could blink. It's perfect for cloud, IoT, and enterprise environments.

Automated Defense: With SOAR platforms, AI can revoke credentials, isolate endpoints, or demand extra authentication the moment something suspicious occurs—no waiting for human approval.

Fraud and Risk Scoring: AI protects against account takeovers and financial fraud by recognizing subtle shifts in behavior. It also predicts risk, helping prioritize what to fix before it breaks.

The Catch

Hackers Use AI Too: Cybercriminals are using AI to write better phishing emails, create voice deepfakes, and find model weaknesses. The tech that guards us can be turned against us.

False Positives and Black Boxes: AI models can misfire. Poorly trained systems flood teams with noise or miss real threats. And when something is flagged, many models can't explain why.

Data Privacy Tensions: AI needs data, but laws like GDPR and HIPAA limit what you can use. Balancing visibility and compliance is tricky, especially without privacy-aware AI.

High Cost, Uneven Access: Enterprise AI security tools are expensive. Smaller businesses often can't afford them, leaving dangerous gaps in the global supply chain.

Where It's Going

The best systems pair human judgment with AI speed.
Red-teaming with AI, improving model transparency, and sharing threat intel across federated systems are all promising directions. But let's be honest—AI has changed the game. It's not just a tool. It's a player. Success in cybersecurity today means understanding how AI helps, how it can hurt, and how to stay ahead as the curve steepens.
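The score-to-action pattern behind the SOAR-style automated defense described above (revoke credentials, isolate endpoints, demand extra authentication) can be sketched as a simple mapping. The thresholds and action names here are hypothetical, not a real playbook:

```python
def respond(event):
    """Map a risk score to a SOAR-style automated action.
    Thresholds and action names are illustrative only; real playbooks
    are tuned per environment and reviewed by humans."""
    score = event["risk_score"]
    if score >= 90:
        return "isolate_endpoint"    # cut the host off the network
    if score >= 70:
        return "revoke_credentials"  # force re-authentication
    if score >= 40:
        return "require_mfa"         # demand extra authentication
    return "log_only"                # keep a record, take no action

print(respond({"risk_score": 95}))
print(respond({"risk_score": 55}))
print(respond({"risk_score": 10}))
```

The design point is that only the highest-confidence tiers act without a human; anything ambiguous should degrade to a softer action or a logged alert, which is how "no waiting for human approval" stays compatible with human oversight.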