Q1. AI code generation means going from being a logic writer to being a curator of intent. What's really changing is that developers now need to treat snippets from the AI as untrusted third-party libraries. We're trending toward producing code securely through prompt and review, not manual line-for-line syntax typing. Developers now have to secure the vibe by verifying that the AI's architectural assumptions are congruent with the organization's specific threat model.

Q2. Five years ago we were training developers on syntax-level issues like SQL injection and buffer overflows. Today it's AI output validation and the integrity of the supply chain of libraries the AI consumes. Developers need to know how LLMs can hallucinate insecure libraries and fall back on weak or outdated cryptographic patterns. The core fundamental we're teaching is less "how to do secure development" than "how to audit this automated output for structural vulnerabilities."

Q3. We're leaving the days of the annual slide-deck compliance course and moving quickly toward contextual, just-in-time training. With just-in-time security feedback, if an AI suggests a risky pattern, contextually relevant training content is delivered in a digestible way alongside the pull request, making security a continuous conversation rather than an event.

Q4. Cyber ranges are becoming more important now because they're the only safe spaces to train on how AI-generated code behaves under stress. Hands-on training is now much more about adversarial debugging: how engineers spot where an ML assistant introduced a subtle logic flaw that an automated scanner wouldn't necessarily catch. It builds muscle memory that questions the machine.

Q5. The AppSec team is becoming a team of policy architects.
Rather than simply chasing bugs, they're building the automated guardrails themselves, like custom linters and pre-commit hooks, around what the AI is allowed to do. Reinforcing training isn't just about building awareness; it's being codified into the platform engineering layer, so even if a developer is running on "vibes," their tools won't let an insecure change through to production.
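A guardrail like the custom pre-commit hooks described above might look like the following minimal sketch. The risky patterns and their names are illustrative assumptions, not any specific organization's policy:

```python
# Minimal pre-commit hook sketch: flag staged lines that match patterns
# an AppSec team has decided AI-generated code must not ship.
# RISKY_PATTERNS below is an illustrative example set, not a full policy.
import re
import subprocess

RISKY_PATTERNS = {
    r"\beval\(": "eval() on dynamic input",
    r"verify\s*=\s*False": "TLS verification disabled",
    r"(?i)(api_key|secret|password)\s*=\s*['\"][^'\"]+['\"]": "hardcoded credential",
}

def staged_diff() -> str:
    # Only inspect lines being added in this commit.
    out = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def find_violations(diff: str) -> list[str]:
    """Return human-readable reasons for every newly added risky line."""
    violations = []
    for line in diff.splitlines():
        # Skip context/removed lines and the "+++" file header.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for pattern, reason in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                violations.append(f"{reason}: {line[1:].strip()}")
    return violations
```

Wired into `.git/hooks/pre-commit` (exit nonzero when `find_violations(staged_diff())` is non-empty), a check like this blocks the commit locally, before a reviewer ever sees the diff.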
The speed of developer security is shaped by coding with AI assistance. As teams write more code at an increased rate, the speed at which potentially unsafe practices are written and distributed increases; this is also true for potentially dangerous suggestions made by the tool, particularly if those suggestions are accepted without first identifying and mitigating risk. In today's environment of rapid development, security awareness is no longer merely about remembering the rules but about developing the habit of verifying every piece of code written. To that end, treat AI-generated output like code from a junior developer: useful, but not trustworthy until it undergoes review, testing, and a thorough assessment of the security controls applied. Don't confuse "the code compiled" with "the code is secure." Before merging, validate the code added or modified by the AI tool, including the authentication and authorization limits placed on the application, how input is handled, whether the code exposes secrets, whether any dependencies were created or altered, and how logging is configured. While some fundamental principles of security remain the same (least privilege, safe defaults, and so on), how these principles are delivered has changed. The most successful training programs are shorter, more frequent, and more integrated into the workflow in which the developer works. For example, micro-training tied to pull requests is far more effective than traditional annual training, and the same is true for CI/CD checks that hold code to the same security standards as it is being written. Hands-on training will always play an important role in a developer's security education; however, it needs to be representative of the actual systems the developer works with daily.
The cyber ranges used for training provide the greatest benefit when they replicate the company's technology stack and let developers rapidly identify and remediate problems rather than simply score points in a generic laboratory setting. AppSec teams should concentrate on creating paved roads and guardrails for their developers: pre-approved templates, secure libraries, policy-as-code, and default pipelines that automatically deny code changes that could create risk, thus making the safest course of action the easiest one.
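One way to read the policy-as-code idea above is as a set of deny-by-default predicates evaluated over every proposed change. This is a minimal sketch; the `Change` fields and policy names are illustrative assumptions, not a real schema:

```python
# Sketch of a policy-as-code gate: each policy is a predicate over a
# proposed change, and any failing policy denies the change by default.
# The Change fields and policy wording are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Change:
    touches_auth: bool = False
    adds_plaintext_secret: bool = False
    has_security_review: bool = False

POLICIES: dict[str, Callable[[Change], bool]] = {
    "auth changes require a security review":
        lambda c: not c.touches_auth or c.has_security_review,
    "no plaintext secrets in the change":
        lambda c: not c.adds_plaintext_secret,
}

def evaluate(change: Change) -> list[str]:
    """Return the names of violated policies; an empty list means allowed."""
    return [name for name, ok in POLICIES.items() if not ok(change)]
```

In a default pipeline, a non-empty result from `evaluate` would fail the build, which is exactly what makes the safe path the easy path: the developer never has to remember the rule, only respond when it fires.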
The way developers approach "writing secure code" is changing, driven by the increased use of AI to generate code and the attack surface that comes with it. Today's developers do not simply write secure code; they must validate each generated code path and dependency produced by AI for potential threats. As such, developers are being forced to understand threat models, user permissions, and data exposure, regardless of whether they wrote every line of code themselves. Training five years ago was based primarily on OWASP lists. Today, training must teach developers how to review AI-generated code and predict the potential blast radius of their AI-generated applications. Hands-on simulations are imperative in training, as they give developers the best opportunity to understand how minor errors can rapidly escalate. AppSec teams will need to implement guardrails within the CI/CD pipeline so security is enforced automatically, rather than relying on manual oversight.
From the operator's perspective, AI-assisted development increases development speed; however, it also increases risk to the application, as developers may rely solely on the tools provided to them and stop questioning the underlying code. The primary change is that security awareness is no longer limited to writing secure code; it also includes knowing what to ask. Compared with five years ago, training will require a greater focus on fundamental concepts such as access control, API security, and environment isolation, while reducing reliance on strict syntax rules. Training programs will need to be shorter, incorporate scenario-based exercises, and embed security directly into workflows. To catch mistakes before they reach production, security teams will need to enforce platform-level rules that detect errors automatically.
President & CEO at Performance One Data Solutions (Division of Ross Group Inc)
Our AI coding tool caught security issues we missed in manual review, which was great. But we soon had to retrain developers not to trust the AI blindly. Now it's about combining the fundamentals, like input validation, with a real skepticism of automated suggestions. You can't just memorize security patterns anymore. You have to question what the machine gives you.
After 20 years in dental cybersecurity, I'm watching AI change how developers learn security. Hands-on cyber ranges are great, especially when you add automated checks to their workflow. If I were training a team now, I'd tell them to worry less about memorizing exploits and more about understanding secure design principles. AI tools can help, but you still need that gut instinct for what feels risky.
AI writes code fast. Too fast. And 45% of it has security holes. I build AI systems for banks. What I see scares me. GitHub Copilot has 15 million users. 90% of Fortune 100 companies use it. But here's the ugly truth: AI-generated code has 1.7x more critical bugs than human code. The old security playbook is dead.

Three problems destroy traditional AppSec:
1. AI Learns Bad Habits - Copilot studies your codebase. If your code has holes, Copilot copies them. One study found AI code is 2.74x more likely to have XSS flaws.
2. Traditional Scanners Miss It - SAST tools use rules and pattern matching. AI creates new bugs with novel patterns. 78% of AI vulnerabilities slip through standard scans.
3. Speed Beats Caution - Developers love Copilot because it's fast. PRs increase 113%. Nobody stops to review. Accept. Accept. Ship.

How training must change. The old model: teach developers OWASP Top 10, run annual compliance training. The new model: teach developers to distrust AI.

What we train now:
- Prompt Engineering for Security: "Write secure login code that validates input" produces better results than "Write login code."
- AI-Specific Threat Models: prompt injection, phantom dependencies, secret leakage. In 2025, researchers found 30+ vulnerabilities in AI coding tools.
- Human-in-the-Loop: AI writes. Humans verify. Always.

What actually works in banking:
- Layer 1: Immediate SAST at the IDE. Scan as the developer types.
- Layer 2: AI-native scanning with LLMs that understand context.
- Layer 3: Provenance tracking. Tag every AI contribution.
- Layer 4: Adversarial testing. Red team your AI before attackers do.

The teams that figure this out build secure software faster. The teams that don't will wonder why they got breached.
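The provenance-tracking layer above can be sketched very simply: record which lines an assistant produced so reviewers and scanners can prioritize them. The sidecar-ledger format here (a dict keyed by content hash) is an illustrative assumption, not a description of any specific tool:

```python
# Sketch of provenance tracking: tag each contributed line with its
# origin so AI-authored code can be flagged for closer review.
# The ledger format is an illustrative assumption.
import hashlib

def _digest(line: str) -> str:
    return hashlib.sha256(line.encode()).hexdigest()

def tag_contribution(ledger: dict, path: str, lines: list[str], origin: str) -> None:
    """Record origin ("ai" or "human") for each line of a file."""
    file_map = ledger.setdefault(path, {})
    for line in lines:
        file_map[_digest(line)] = origin

def ai_authored(ledger: dict, path: str, line: str) -> bool:
    """True if this exact line was recorded as AI-generated."""
    return ledger.get(path, {}).get(_digest(line)) == "ai"
```

A scanner in Layer 1 or 2 could then weight findings on `ai_authored` lines more heavily, or require an extra reviewer when a diff is mostly AI-tagged.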
AI-assisted coding is transforming how developers approach security, and organizations need to rethink training. At OnlineGames.io, we've seen that automation and AI tools handle repetitive checks, but they can't replace developer awareness. The fundamentals (threat modeling, secure coding practices, and understanding dependencies) remain critical, yet the focus has shifted toward integrating security into everyday workflows rather than separate modules. Hands-on training like cyber ranges now complements AI-driven toolchains, giving engineers practical experience while reinforcing guardrails set by security teams. The most effective approach combines AI automation, continuous learning, and human oversight to ensure developers write resilient code without losing agility. Cristian-Ovidiu Marin, CEO, OnlineGames.io
The shift you are describing forces a reset of what developer security awareness means. In an AI assisted environment, developers are no longer the sole authors of code. They are supervisors of systems that generate code at speed. Security moves away from syntax and toward judgment. The most fundamental change is where responsibility sits. AI tools optimise for plausibility and velocity, not threat models. I have seen generated code that looked correct, passed tests, and still introduced insecure defaults or risky dependencies. Developers now need to know what to question rather than how to write everything themselves. Inputs, permissions, data flow, and dependency chains matter more than line level implementation. That changes what fundamentals matter. Five years ago, training focused on specific flaws and patterns. Injection, sanitisation, and defensive coding techniques. Today, developers need a stronger understanding of system behaviour. How identity propagates across services. How secrets move through pipelines. Where automated checks apply and where they stop. The risk is not ignorance of known vulnerabilities. It is blind trust in automation. Training content and delivery have had to adapt quickly. Long, static courses do not hold up. The teams I see making progress use short, scenario driven training tied directly to real tooling. When a new platform or AI assistant is introduced, security context comes with it. What assumptions it makes. Where it can fail. What guardrails exist. Learning sticks because it is anchored in daily work. Hands on training still plays a role, but its purpose shifts. Cyber range style exercises are most effective when they expose failure modes rather than teach exploitation. Breaking a realistic pipeline once under controlled conditions teaches more than reviewing policies for months. Security teams also change role. They become designers of guardrails instead of reviewers of output. 
In automated environments, enforcement scales better than instruction. Training then explains why a control exists and how to respond when it triggers. From a leadership perspective, the risk is teaching yesterday's version of secure coding. The real skill now is knowing where human judgment must intervene in automated systems. Organizations that train for that reality move faster with less risk. Those that do not often feel compliant until something fails quietly.
The concept of being a secure developer has radically changed due to AI-assisted coding. Developers are no longer generating all the code; they are guiding, validating, and combining machine-generated output. This means security is no longer a developer's personal choice; it needs to be integrated at each level of the system: control over the use of models, validation of outputs, the introduction of dependencies, and enforcement of policy by pipelines. Another rule in this new paradigm is that any AI-generated code must be considered untrusted input, just like data obtained via an external API. The security basics developers require today are broader than they were five years ago. The old secure-coding skills (input validation, authentication, encryption, and so on) remain important, but now they are merely one factor. Engineers should also know software supply-chain security, secrets management in CI/CD, SBOMs, vulnerability scanning, and how to interpret automated security signals. Just five years ago, security training focused mostly on the ability to write safe code. Today it concerns how to work within a highly automated delivery system. This has changed how organizations conduct training. Rather than running security courses every year, teams are introducing enforcement into code and pipelines: warnings in IDEs, pull request validation, and pipeline blocking. Learning takes place at the moment of risk, not in a classroom. Hands-on simulation and cyber ranges are becoming more significant, not less, since developers need to know what happens when automated systems go wrong. These conditions build muscle memory for incident response, exploitation paths, and the real-world impact of insecure AI-generated code.
Security teams no longer merely read code; they build and sustain the guardrails that enable secure development to scale. They are expected to program security into platforms, enforce it automatically, and strengthen it through continuous feedback.
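The ladder of IDE warnings, pull request validation, and pipeline blocking mentioned above can be expressed as a tiny severity-to-action mapping. This is a sketch under assumed severity names and a hypothetical `Finding` shape:

```python
# Sketch of a warn-vs-block enforcement ladder: low-severity findings
# warn on the pull request, high-severity findings block the merge.
# The Finding shape and severity names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    severity: str  # "low", "medium", or "high"

def gate(findings: list[Finding]) -> str:
    """Map scanner findings to an enforcement action for the pipeline."""
    if any(f.severity == "high" for f in findings):
        return "block"
    return "warn" if findings else "pass"
```

The design choice matters: warning on low-severity findings keeps the feedback loop educational, while blocking only on high severity avoids training developers to bypass the gate.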
I'm Eric Lamanna, the VP of Sales at SEC.co, and my role puts me in constant conversation with CISOs, AppSec leaders, and engineering orgs who are actively adapting to AI-assisted development, not in theory, but in live production environments. AI-assisted coding has changed what "developer security awareness" even means. Developers are now shipping large volumes of code they didn't fully write, and often didn't fully reason through. Security awareness today isn't about memorizing vulnerability lists, it's about knowing how to question, validate, and constrain machine-generated code. The strongest teams are retraining developers to act as reviewers and risk evaluators rather than sole authors. That means understanding trust boundaries, permissions, dependencies, and failure modes, not just syntax-level issues. Security has to move earlier and deeper into pipelines, because AI doesn't slow down for post-hoc reviews. Five years ago, secure coding training focused heavily on common bugs like SQL injection or XSS. Those still matter, but today's risks are more systemic. Developers now need fluency in identity and access management, supply chain risk, cloud misconfigurations, and how automated security controls actually work and where they don't. The goal isn't to turn developers into security experts, but to help them recognize when speed introduces risk. Training delivery has shifted just as much as content. Long slide decks and annual compliance courses don't work in modern engineering orgs. The most effective programs use short, scenario-based training tied to real development workflows, with feedback delivered directly inside the tools developers already use. Hands-on training plays a critical role here. Cyber ranges and realistic simulations let developers see how small decisions lead to real incidents. That experience sticks, and it builds trust between engineering and security teams in a way theory never does. AppSec and security teams are no longer gatekeepers. 
They're building guardrails, defining policies, automating enforcement, and reinforcing training through real-world feedback. In AI-driven development environments, security works best when developers can move fast within clearly defined, well-enforced boundaries. Eric Lamanna, VP of Sales at SEC.co
1. AI removes part of the cognitive load from the developer, but at the same time increases the risk of blind trust in the generated code. Security thinking today is not so much writing secure code manually as the ability to evaluate, test, and contextualize what AI offers within pipelines and environments.
2. Today, the keys are threat modeling, dependency security, secrets management, and understanding automated controls. Five years ago, the emphasis was on OWASP checklists and manual code review; now it is on the ability to work with automated security systems and understand their limitations.
3. Content is increasingly focused not on "knowledge" but on decision-making in real scenarios, where AI, automation, and release speed create new types of risk. Learning has become more contextual and short-cycle. Instead of rare large trainings, companies are moving to in-flow training: tips, policy checks, and micro-learning right in dev tools.
4. The cyber range becomes critical because it allows developers to crack their own assumptions in a safe way. Developers better understand the consequences of their actions when they see how errors in AI-generated code are exploited in practice.
5. Their key role is to build security into the platform, not to demand it from each individual developer. Training, automation, and enforcement should work together, not as separate initiatives.
AI-assisted coding is reshaping developer security awareness by shifting emphasis from manual code writing to system oversight. As AI enables developers to increase output, it's essential that humans remain fully responsible for runtime behavior. Security effectiveness now depends less on writing "safe" code and more on understanding how systems function together, including trust boundaries and how automations propagate through environments and pipelines. Higher-level risks now dominate, such as identity and access misconfiguration, supply chain exposure, and infrastructure-as-code blast radius. Developers must understand how AI-assisted tools compose code, select libraries, and configure environments, as well as how wrong assumptions made by AI can scale into systemic risk. In terms of training delivery, periodic instruction just isn't sufficient to keep pace with today's AI-augmented workflows. We must now embed security learning into daily engineering activity through contextual feedback in code review, pipeline checks, policy-as-code enforcement, and just-in-time guidance. Training is continuous and situational. Hands-on training such as cyber ranges plays a critical supporting role in this model. Simulated environments allow developers to observe the downstream effects of misconfigurations, over-permissive automation, and dependency failures they did not explicitly design. This experiential exposure builds judgment and intuition. In parallel, AppSec teams must evolve from reviewers to platform architects. By defining secure defaults, codifying guardrails, and aligning tooling with training, they are able to enforce security automatically and continuously. In AI-assisted, hyper-automated toolchains, security becomes a property of the system design, not an after-the-fact intervention.
Organizations that align developer education, hands-on experience, and platform-enforced controls will be best positioned to sustain both speed and resilience as software creation accelerates.
Developers used to write code. Now they review code that AI wrote for them. That flip breaks most security training programs. Think about what we taught five years ago. Memorize OWASP. Follow secure patterns. Check inputs. Escape outputs. Developers built code brick by brick and learned security the same way. AI skips that process entirely. It generates hundreds of lines in seconds. The code looks clean. It often runs fine. But AI has no clue about your specific threat landscape. It doesn't know which data is sensitive or which endpoints face the internet. It produces confident-looking code with buried flaws. So what do developers actually need to learn now? How to question output they didn't create. How to trace data flows through unfamiliar logic. How to recognize when something feels off even if tests pass. Cyber ranges become more valuable here. Reading about SQL injection differs from exploiting one yourself. That direct experience builds instincts. Those instincts catch problems that AI introduces. Appsec teams should focus on building automated guardrails into the pipeline. Platform engineering gives you the enforcement point. But guardrails work better when developers grasp what they're catching and why. Training delivery needs rethinking too. Annual workshops don't stick. Contextual nudges inside the IDE do.
AI coding helpers changed how I coach employees. I treat the model like a fast junior dev who never owns the blast radius. The rule is simple. AI output is untrusted input. We prove safety with threat modeling, secrets handling, dependency hygiene, and tests that try to break things. Five years ago, OWASP Top 10 and a checklist got teams moving. Today I teach prompt hygiene and data boundaries, because context leaks and confident code lies. Delivery changed too. I do fewer big trainings and more small reps. A pull request becomes the classroom. We annotate the diff, read scanner findings, and ask what could go wrong in production. Cyber range time still pays off because engineers remember what hurts. AppSec and platform teams should ship guardrails as product features. Secure templates, policy checks in CI, signed artifacts, and preapproved libraries. Then we coach when the automation flags a pattern.
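The "preapproved libraries" guardrail above can be shipped as a tiny CI check that compares a requirements-style list against an allowlist AppSec maintains. The allowlist contents here are illustrative assumptions:

```python
# Sketch of a preapproved-libraries CI check: every requirement name
# must appear on the AppSec-maintained allowlist. APPROVED is an
# illustrative example set, not a recommendation.
APPROVED = {"requests", "cryptography", "pydantic"}

def unapproved(requirements_text: str) -> list[str]:
    """Return requirement names that are not on the allowlist."""
    offenders = []
    for raw in requirements_text.splitlines():
        line = raw.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # Strip common version pins, e.g. "flask==2.3" or "flask>=2".
        name = line
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            name = name.split(sep)[0]
        name = name.strip()
        if name not in APPROVED:
            offenders.append(name)
    return offenders
```

Run against the diff of a lockfile in CI, a non-empty result fails the build and becomes the coaching moment: the conversation happens when the automation flags the pattern, not in an annual training session.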
With AI-assisted coding, developers are shifting from being the primary author to being the senior editor (and security engineer) for all code created by AI. In this new model, engineers must examine every piece of AI-generated code through the lens of Zero Trust, because there is no guarantee it was written correctly. Developers must concentrate on safeguarding the prompt engineering behind the code and examining the automated toolchains through which AI-generated code will be integrated into their own. The focus has therefore moved from manually checking syntax to creating resilient environments that contain and validate AI-generated code (using extensive sandboxing and automation policies). Security fundamentals now center on architectural literacy and secure-by-design methodologies rather than simply memorizing the syntax errors listed in OWASP's Top 10. Five years ago, security-trained developers found bugs like buffer overflows; today, they must understand not only how data flows but also the boundaries of identity (who owns what) and the security of AI model supply chains. The developer must also learn to perform security triage: analyzing the volume of results produced by automated platform engineering tools. Today's engineer is both a security strategist and an engineer who understands how all the parts of a complex automated pipeline fit together.
In today's world of AI-powered development and vibe coding, the most important security knowledge developers must acquire (and are starting to) is threat modeling, secure architecture design, data-handling best practices, and an understanding of how vulnerabilities arise from interactions between system components rather than at the line level of code. An AI programming tool can spit out a line of code in a nanosecond, but it can't determine whether that line is suitable within a given security context. In contrast with how secure-coding training has developed over the past five years, today's programs no longer revolve around language-specific anti-patterns; they have shifted toward teaching developers how to analyze, scrutinize, and qualify AI-generated outputs and machine security findings. With this rapidly changing technology landscape and the increasing use of AI in coding, it is more important than ever that security training is incorporated into programmers' everyday practices and taught in a contextual setting with working examples drawn from the organization's actual code.