
OWASP Top 10 in AI-Generated Code: What We Found After 1,000+ Scans

The OWASP Top 10 is the industry standard for web application security risks. Here's how each vulnerability category shows up specifically in AI-generated code — and how to prevent them.

January 22, 2026 · 11 min read

The OWASP Top 10 is the most widely referenced list of web application security risks in the industry. Published by the Open Worldwide Application Security Project, it reflects the vulnerabilities that cause the most real-world damage. After scanning thousands of vibe-coded applications with Pentrust, we can say with confidence: AI-generated code hits almost every category.

This post breaks down each category, explains why AI tools are susceptible to it, and gives you concrete steps to check your own application.

A01: Broken Access Control — The #1 Risk

Broken access control moved to the top of the OWASP list in 2021 and has stayed there. It encompasses any situation where a user can perform actions or access data beyond what they're authorized for. This includes insecure direct object references (IDOR), privilege escalation, and missing authorization checks on sensitive endpoints.

In AI-generated code, broken access control is endemic. The pattern is consistent: AI models generate functional CRUD operations, then add authentication as a secondary concern. The result is API endpoints that verify a user is logged in but never verify the user is allowed to access the specific resource they're requesting.

Finding

In our scan data, broken access control was the most frequently exploited vulnerability class. The typical attack: change a numeric ID in a URL or API request body from your own resource ID to another user's resource ID. In over 70% of apps tested, this succeeds.
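
The fix is a per-resource ownership check that runs after the database fetch. Here's a minimal sketch in plain JavaScript — the resource and user shapes, and the `invoice` example, are illustrative rather than taken from any particular framework:

```javascript
// The missing check, isolated: authorization must compare the resource's
// owner to the authenticated user, not merely confirm a session exists.
function authorizeOwner(resource, user) {
  if (!resource) return 404;                    // resource doesn't exist
  if (resource.ownerId !== user.id) return 403; // logged in, but not the owner
  return 200;                                   // authenticated AND authorized
}

// In a route handler, this sits between the fetch and the response:
//   const invoice = await db.invoice.findUnique({ where: { id } });
//   const status = authorizeOwner(invoice, req.user);
//   if (status !== 200) return res.status(status).end();
//   return res.json(invoice);
```

The key property: the check happens on every user-specific query, not once at login. Whether you return 404 or 403 for other users' resources is a design choice — 404 leaks less information about which IDs exist.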

A02: Cryptographic Failures

Cryptographic failures include storing passwords in plaintext, using weak hashing algorithms, transmitting sensitive data over unencrypted connections, and misconfiguring encryption. AI-generated code typically handles password hashing correctly when using established libraries like bcrypt — but it stumbles in less obvious areas.

Common cryptographic failures in vibe apps include using Math.random() for security-sensitive tokens, committing JWT signing secrets to version control, signing JWTs with HS256 using a short, brute-forceable secret, and returning full credit card numbers or SSNs in API responses that don't need them.

A03: Injection

Injection attacks — SQL, NoSQL, LDAP, OS command, template injection — occur when user-supplied data is interpreted as code or a command. Most modern AI models know to use parameterized queries for basic SQL, but they create injection vulnerabilities in more subtle ways.

  • Dynamic ORDER BY and filter clauses: AI often interpolates sort parameters directly into SQL, creating SQL injection vectors in otherwise safe code.
  • Prisma $queryRaw without tagged template literals: Prisma's escape hatch for raw SQL is safe when used with tagged template literals, but AI sometimes generates string concatenation instead.
  • Server-side template injection: AI-generated email or PDF rendering code may pass user data directly to template engines.
  • MongoDB $where with JavaScript execution: rare but seen in AI-generated aggregation pipelines.
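
The dynamic ORDER BY case deserves a concrete sketch, since it's the one we see most often: map user input onto a fixed allowlist and never interpolate it into the query string. The column names below are illustrative:

```javascript
// Allowlist of user-facing sort keys -> real column names. A Map (rather than
// a plain object) avoids prototype-pollution surprises like sort="constructor".
const SORTABLE = new Map([
  ["created", "created_at"],
  ["name", "display_name"],
]);

function buildOrderBy(sortParam, dirParam) {
  // Unknown or malicious input falls back to a safe default — the raw
  // parameter never reaches the SQL string.
  const column = SORTABLE.get(sortParam) ?? "created_at";
  const dir = dirParam === "desc" ? "DESC" : "ASC"; // binary choice, never raw input
  return `ORDER BY ${column} ${dir}`;
}
```

The same allowlist pattern works for dynamic filter fields: the user picks from a menu you control; they never supply identifiers directly.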

A04: Insecure Design

Insecure design refers to flaws in the architecture and design of an application — problems that code-level fixes can't fully address because the fundamental approach is wrong. Examples include a password reset flow that relies on security questions, a multi-tenant system that doesn't properly isolate tenant data at the database level, or an admin panel accessible from the same origin as the public site.

AI coding assistants are particularly prone to insecure design because they're optimizing for functionality, not threat modeling. They'll build you a feature that works without considering how it could be abused at scale.

A05: Security Misconfiguration

Security misconfiguration is the broadest category: default credentials, overly permissive CORS headers, directory listings enabled, unnecessary features enabled, missing security headers (Content-Security-Policy, X-Frame-Options, HSTS), and verbose error messages that expose internal details.

AI-generated deployment configurations are routinely misconfigured. CORS policies are often set to wildcard (*) to make development easier. Security headers are rarely configured. Debug mode is left enabled. Environment variables are hardcoded in configuration files that end up in version control.
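
As a baseline, here's a sketch of the headers AI-generated configs usually omit, plus the CORS check that should replace the wildcard. The header values are a reasonable starting point, not a universal policy — CSP in particular needs tuning per app:

```javascript
// Security headers most vibe apps ship without. Tighten CSP for your assets.
function securityHeaders() {
  return {
    "Content-Security-Policy": "default-src 'self'",
    "X-Frame-Options": "DENY",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
  };
}

// CORS: echo back only an allowlisted origin. Never use "*" on endpoints
// that serve credentialed or user-specific responses.
function corsOrigin(requestOrigin, allowlist) {
  return allowlist.includes(requestOrigin) ? requestOrigin : null;
}
```

In Express-style frameworks these would be applied in middleware; in Next.js, via the headers() config. Either way, verify them in production with your browser's network tab — they're easy to set in dev and lose in deployment.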

A06: Vulnerable and Outdated Components

This one is straightforward: using dependencies with known vulnerabilities. AI tools often generate package.json files with specific dependency versions — and those versions get outdated quickly. A Cursor-generated app from six months ago might have multiple critical CVEs in its dependency tree that the developer has never thought to check.

A07: Identification and Authentication Failures

This category covers weak password policies, missing multi-factor authentication, insecure session management, and credential stuffing vulnerabilities. AI-generated authentication flows typically have no brute force protection — no rate limiting, no account lockout, no CAPTCHA. A login endpoint with no rate limiting can be attacked with credential stuffing lists containing billions of compromised credentials.
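
The rate-limiting gap is the cheapest to close. Here's a fixed-window limiter sketch — the limit and window are illustrative, and a real deployment would back this with Redis so it survives restarts and works across multiple instances:

```javascript
// Fixed-window rate limiter keyed by IP (or username, or both).
// Returns a function: allowed(key) -> true if the request may proceed.
function makeRateLimiter({ limit = 5, windowMs = 60_000 } = {}) {
  const hits = new Map(); // key -> { count, windowStart }
  return function allowed(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // start a fresh window
      return true;
    }
    entry.count += 1;
    return entry.count <= limit; // reject once the window's budget is spent
  };
}
```

Apply it to login, password reset, and signup endpoints. Rate-limiting by username as well as IP also blunts distributed credential stuffing, where each attempt comes from a different address.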

A08: Software and Data Integrity Failures

This includes insecure deserialization and CI/CD pipeline weaknesses. In the context of vibe apps, the most common manifestation is trusting data from cookies or localStorage without verification. For example, an AI-generated app might store the user's role in a cookie and read it back without verifying it against the database — allowing any user to escalate their own privileges by editing the cookie.

A09: Security Logging and Monitoring Failures

AI-generated apps almost never include security logging. There's no audit trail of who accessed what, no alerting when an account is brute-forced, and no way to detect an ongoing attack or investigate a breach after the fact. This doesn't create a vulnerability in the traditional sense, but it dramatically increases the impact of every other vulnerability.
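
Even a minimal structured event log is a big improvement over nothing. A sketch — the event names and fields are illustrative, not a standard schema:

```javascript
// Emit one JSON line per security-relevant event. In production these go to a
// log store you can search and alert on, not just stdout.
function securityEvent(type, fields) {
  return JSON.stringify({ ts: new Date().toISOString(), type, ...fields });
}

// Events worth recording from day one:
//   securityEvent("login.failed",    { user: "alice", ip: "203.0.113.7" })
//   securityEvent("login.succeeded", { user: "alice", ip: "203.0.113.7" })
//   securityEvent("access.denied",   { user: "bob", resource: "invoice/42" })
//   securityEvent("password.reset",  { user: "alice" })
```

With just failed logins and access-denied events logged, a brute-force attempt or an IDOR probe becomes visible as a spike instead of being invisible.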

A10: Server-Side Request Forgery (SSRF)

SSRF occurs when an attacker can cause the server to make HTTP requests to arbitrary URLs — including internal services, cloud metadata endpoints, and other infrastructure that should never be publicly accessible. AI-generated code that fetches external URLs based on user input (webhook validators, URL preview features, image proxies) is frequently vulnerable to SSRF.

On cloud platforms like AWS, GCP, and Azure, SSRF can be used to access the instance metadata service, which typically exposes IAM credentials. An SSRF vulnerability in an AWS-hosted app can lead to full cloud account compromise.

What to Do About It

Reading this list can be overwhelming. The important thing is to prioritize. Start with A01 (Broken Access Control) — it's the most common and often the easiest to exploit. Add ownership checks to every database query that fetches user-specific data. Then tackle A07 (Authentication Failures) by adding rate limiting to login and password reset endpoints.

For a comprehensive assessment of where your app stands across all ten categories, automated penetration testing gives you the fastest, most complete picture. Pentrust runs AI agents that actively attempt to exploit each of these vulnerability classes against your running application, then reports exactly what it found and how to fix it.

Ready to check your app?

Find your vulnerabilities before attackers do.

Pentrust runs AI agents that chain real exploits against your application and provides copy-paste fixes for every finding. Full pentest in under 30 minutes.

Run a free pentest