
AI-DLC: The Good, the Bad, and the Dangerous

AI coding assistants have moved from experimental to enterprise standard faster than any technology in recent memory. In a recent StackHawk survey of 250+ AppSec stakeholders, 87% of organizations reported adopting tools like GitHub Copilot, Cursor, or Claude Code. More than one-third have already scaled adoption or rolled it out fully.

The productivity gains are real. So are the security implications. But the discussion of AI coding risk remains stuck on AI “writing vulnerable code” – missing the profound shifts in how software is built and how it needs to be protected.

The Good

I think this one is obvious. Speed matters for product differentiation and innovation, and AI delivers. Developers are producing more code than they were six months ago. Features that used to take weeks now ship in days.

AI can also raise the baseline quality of a codebase. Assistants trained on millions of repositories have internalized common patterns, including secure ones. For standard tasks – input validation, common auth flows, typical API patterns – AI-generated code is often more consistent than what a junior developer would write from scratch. The “AI writes insecure code” narrative ignores the fact that human-written code was never the gold standard for security.

And boilerplate security comes by default. Parameterized queries, standard encryption patterns, OAuth scaffolding: this is exactly where AI assistants shine. Repetitive security chores that developers used to skip because they were boring are now generated correctly, automatically.
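As an illustration of that point, here is a minimal sketch (hypothetical names, not from the survey) of the parameterized-query pattern assistants reliably generate in place of string-built SQL:

```python
import sqlite3


def find_user(conn: sqlite3.Connection, email: str):
    # Parameterized query: the driver binds `email` as data,
    # so attacker-controlled input cannot change the SQL structure.
    cur = conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    )
    return cur.fetchone()


# The risky pattern this replaces (never do this):
#   conn.execute(f"SELECT id FROM users WHERE email = '{email}'")
```

A classic injection payload like `' OR '1'='1` is simply treated as a literal email string and matches nothing.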

The Bad

The context gap is real and growing. When you write code line by line, you develop an intuition for how it works, what it affects, and where the edge cases lie. When you review AI-generated code, you ask a different question: “Does this work?” Not “Is this secure?” Not “How does this interact with our authorization model?” Developers who adopt complete implementations without deeply understanding them present a very different risk profile than developers who build those implementations themselves.

Documentation and institutional knowledge suffer. AI-assisted development often means less time spent in the codebase. Developers understand features at a functional level but may not grasp the security implications. That knowledge gap compounds: six months later, no one remembers exactly why a particular API endpoint exists or what data it can access.

Manual processes cannot keep up. When development speed increases 5-10x, everything downstream breaks. Security reviews, approval workflows, asset documentation, attack surface tracking: any process that depends on people keeping pace is now permanently behind. Our survey found that “keeping up with rapid development speeds and AI-generated code” was the number one challenge cited by AppSec stakeholders.

The Dangerous

The risk isn’t the code; it’s the confidence. The real danger is not that AI writes vulnerable code (although it can). It’s that organizations are shipping fast while understanding little about what they are shipping. Tests pass, code reviews are mandated, features are implemented, but the AppSec team’s mental model diverges ever further from the reality of the AI-assisted codebase.

Shadow applications are multiplying faster than ever. That weekend proof-of-concept a developer spun up to “test something”? AI assistants make such projects easy to build, which means more of them get built and forgotten. Our survey found only 30% of AppSec stakeholders are “highly confident” they know 90%+ of their attack surface. AI-assisted development is making that number worse, not better.

Security teams triage; they don’t protect. If code volume increases but AppSec capacity does not, something has to give. Our data shows 50% of AppSec teams spend 40% or more of their time evaluating and prioritizing findings, deciding what’s real before they can even address what’s important. That ratio was already unsustainable. AI development speed is breaking it completely.

What This Means for Security Leaders

Organizations that get this right aren’t trying to slow down AI adoption; that ship has sailed. They are adapting their security programs to a world where:

  • Visibility is foundational. You can’t protect what you don’t know exists. Automated attack surface discovery from source code is no longer a nice-to-have when developers are shipping faster than documentation can track.
  • Runtime validation matters more than ever. When developers have little context about the code they’re deploying, they need tests that validate how applications actually behave, not just how the code looks statically.
  • Prioritization must scale with the volume. 5x more code cannot mean 5x more findings to triage. Smart prioritization that connects vulnerabilities to business risk lets limited AppSec resources focus on what really matters.
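To make the prioritization point concrete, here is a toy sketch (hypothetical fields and weights, not any vendor’s actual model) of scoring that weighs raw severity against reachability and business context:

```python
from dataclasses import dataclass


@dataclass
class Finding:
    name: str
    severity: float           # 0-10, e.g. a CVSS-style base score
    reachable: bool           # is the vulnerable code on a live route?
    asset_criticality: float  # 0-1, business weight of the affected app


def risk_score(f: Finding) -> float:
    # Findings on unreachable code are sharply deprioritized:
    # runtime context, not raw severity, drives the queue.
    reach = 1.0 if f.reachable else 0.2
    return f.severity * reach * f.asset_criticality


def triage(findings: list[Finding]) -> list[Finding]:
    # Highest business risk first, so a fixed-size AppSec team
    # spends review time where the exposure is real.
    return sorted(findings, key=risk_score, reverse=True)
```

Under this weighting, a reachable SQL injection in a billing API outranks a higher-severity XSS on a dead admin page, which is the inversion a severity-only queue gets wrong.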

AI coding assistants aren’t going away. The productivity gains are significant, and the adoption debate is settled. The question isn’t whether you accept them; it’s whether your security program is built for the world they’ve created.
