Security Basics · Vibe Coding

Why Vibe-Coded Apps Are a Security Nightmare (And How to Fix It)

AI coding tools like Cursor, Bolt, and Replit make it easy to ship fast — but the apps they generate are riddled with security holes. Here's why, and what you can do about it.

January 15, 2026 · 9 min read

You shipped your app in a weekend. Cursor wrote 90% of the code. It looks great, it works, and users are signing up. Then someone DMs you: "I can see every user's data if I change the ID in the URL." Congratulations — you've just discovered your first IDOR vulnerability.

This scenario plays out dozens of times a day across the vibe coding ecosystem. Developers using AI tools like Cursor, Bolt, Replit, Lovable, and v0 are shipping faster than ever — but the security debt they're accumulating is enormous. This isn't a knock on these tools. It's a structural problem with how AI models are trained and what they optimize for.

What Is Vibe Coding, Really?

Vibe coding is the practice of building software primarily by prompting AI assistants rather than writing code by hand. A developer describes what they want — "build me a SaaS app with Stripe billing, user authentication, and a dashboard" — and the AI generates the scaffolding, routes, database schema, and business logic.

The productivity gains are real and significant. A solo founder can now build in a week what used to take a team of four a month. But speed creates a specific kind of blind spot: the code works, the features work, and the developer never deeply reads what was generated. This means authorization checks, input validation, and secure defaults often get glossed over.

Why AI-Generated Code Has More Vulnerabilities

AI coding assistants are trained to be helpful and to produce working code. They are not trained to be paranoid. A human security engineer approaches every API endpoint with suspicion: who can call this? What parameters can be tampered with? What happens if an attacker sends unexpected input? AI models rarely ask these questions unprompted.

There are several specific failure modes we see repeatedly in vibe-coded apps:

  • Authorization is bolted on, not built in. AI generates the route first, then adds an auth check — but the check often verifies identity (are you logged in?) without verifying authorization (are you allowed to do this specific thing?).
  • Direct object references are everywhere. AI-generated CRUD apps routinely expose database IDs directly in URLs and API responses, then forget to validate ownership before serving data.
  • Input is trusted. SQL queries, shell commands, and HTML templates are often built by concatenating user input, opening the door to injection attacks.
  • Error messages are verbose. AI-generated error handling tends to return full stack traces, database errors, and internal paths to the client — a goldmine for attackers.
  • Rate limiting is missing. Authentication endpoints, password reset flows, and OTP checks often have no rate limiting, making brute-force attacks trivial.

The 5 Most Common Security Issues in Vibe Apps

1. Insecure Direct Object Reference (IDOR)

IDOR is the king of vibe app vulnerabilities. It occurs when an application uses a user-controllable identifier (like a database ID) to access a resource without verifying that the current user owns that resource. The fix sounds simple — add an ownership check — but AI models frequently forget to add it, especially in complex nested resource hierarchies.
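The ownership check is a one-line fix once you know to add it. A minimal sketch in TypeScript, using a hypothetical in-memory store and handler names in place of a real database and router:

```typescript
// Hypothetical in-memory store standing in for a database table.
type Doc = { id: string; ownerId: string; body: string };

const documents = new Map<string, Doc>([
  ["doc_1", { id: "doc_1", ownerId: "user_a", body: "a's notes" }],
  ["doc_2", { id: "doc_2", ownerId: "user_b", body: "b's notes" }],
]);

// Vulnerable version: trusts the client-supplied ID and serves whatever it finds.
function getDocumentInsecure(docId: string): Doc | undefined {
  return documents.get(docId);
}

// Fixed version: fetch the resource AND verify the caller owns it.
function getDocument(currentUserId: string, docId: string): Doc | undefined {
  const doc = documents.get(docId);
  // Return "not found" for both missing and forbidden resources,
  // so attackers can't probe which IDs exist.
  if (!doc || doc.ownerId !== currentUserId) return undefined;
  return doc;
}
```

Note the design choice in the fixed version: missing and forbidden resources produce the same result, so an attacker enumerating IDs learns nothing about which ones exist.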

2. Broken Authentication and Authorization

Authentication (who are you?) and authorization (what can you do?) are distinct concepts that AI models frequently conflate. An app might correctly verify that a user is logged in while completely failing to check whether that user has permission to view the requested resource. Admin endpoints are the most common victim: a route guarded only by checking that a JWT exists, not that the JWT belongs to an admin.
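The distinction is easy to see in code. A sketch with a hypothetical decoded-session shape (field names are illustrative, not from any specific JWT library):

```typescript
// Hypothetical decoded JWT payload; field names are illustrative.
type Session = { userId: string; role: "user" | "admin" };

// Authentication only: "is there a valid session?"
// This is the check AI-generated admin routes often stop at.
function isAuthenticated(session: Session | null): boolean {
  return session !== null;
}

// Authorization: "is THIS user allowed to do THIS thing?"
function canAccessAdminPanel(session: Session | null): boolean {
  return session !== null && session.role === "admin";
}
```

A route guarded only by `isAuthenticated` lets any logged-in user hit admin endpoints; the admin check belongs in the route guard, not just in the UI that hides the admin link.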

3. SQL and NoSQL Injection

Despite decades of awareness, injection attacks remain prevalent in AI-generated code. AI models sometimes generate raw SQL queries with string interpolation, especially when dealing with dynamic filter or sort parameters. Even when using ORMs, AI-generated code occasionally uses raw query escape hatches (like Prisma's $queryRaw or Supabase's rpc) without parameterization.
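Dynamic sort and filter parameters are the classic trap because column names can't be bound as query parameters — so the only safe option is an allowlist. A sketch (column names are hypothetical):

```typescript
// Column identifiers cannot be parameterized, so interpolating a raw
// user-supplied sort value (`ORDER BY ${sort}`) is injectable.
// Allowlist the value before it ever touches the query string.
const SORTABLE_COLUMNS = new Set(["created_at", "title", "price"]);

function buildOrderBy(userSort: string): string {
  if (!SORTABLE_COLUMNS.has(userSort)) {
    // Fall back to a safe default rather than erroring on unknown input.
    return "ORDER BY created_at";
  }
  return `ORDER BY ${userSort}`;
}
```

Values (as opposed to identifiers) should always go through your driver's or ORM's parameter binding, never string interpolation.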

4. Sensitive Data Exposure

AI-generated API responses are often over-permissive — they return the entire database row rather than selecting only the fields the client actually needs. This leads to password hashes, internal IDs, admin flags, and other sensitive fields leaking to frontend clients. In user listing endpoints, this can expose the personal information of every user in your system.
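The fix is to map rows to an explicit public shape instead of returning (or spreading) the whole record. A sketch with hypothetical field names:

```typescript
// Hypothetical full database row — more than any client should see.
type UserRow = {
  id: string;
  email: string;
  displayName: string;
  passwordHash: string;
  isAdmin: boolean;
};

// Public shape: pick the fields explicitly. New columns added to the
// table later stay private by default instead of leaking automatically.
type PublicUser = { id: string; displayName: string };

function toPublicUser(row: UserRow): PublicUser {
  return { id: row.id, displayName: row.displayName };
}
```

The same principle applies at the query layer: select only the columns you need rather than `SELECT *`, so sensitive fields never even reach application memory.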

5. Missing Rate Limiting and Brute Force Protection

Login endpoints, password reset flows, OTP verification, and invite code redemption all need rate limiting. Without it, an attacker can make thousands of guesses per second. AI-generated authentication flows almost never include this protection unless explicitly prompted.
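Even a small in-memory limiter raises the cost of brute force enormously. A minimal fixed-window sketch — a production app would back this with a shared store such as Redis so limits hold across server instances:

```typescript
// Minimal in-memory fixed-window rate limiter (sketch, not production-ready:
// state is per-process and resets on restart).
const WINDOW_MS = 60_000; // 1-minute window
const MAX_ATTEMPTS = 5;   // attempts allowed per window

const attempts = new Map<string, { count: number; windowStart: number }>();

// `key` would typically be an IP, account ID, or a combination of both.
function allowLoginAttempt(key: string, now: number = Date.now()): boolean {
  const entry = attempts.get(key);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // First attempt, or the previous window has expired: start fresh.
    attempts.set(key, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_ATTEMPTS;
}
```

Keying on both IP and target account is a common choice: IP-only limits are defeated by botnets, while account-only limits let an attacker lock victims out.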

Key insight

In our analysis of vibe-coded applications, over 85% had at least one IDOR vulnerability, and over 60% had at least one critical authentication flaw. These aren't edge cases — they're the default output of AI coding assistants.

How to Fix Security in Vibe-Coded Apps

The good news is that most of these issues are fixable once you know where they are. The challenge is finding them efficiently. There are three approaches:

  1. Manual code review: read every API route and ask 'who can call this and what can they do?' This is thorough but slow, and it requires security expertise most vibe coders don't have.
  2. Static analysis: tools like Semgrep can catch some injection patterns and obvious misconfigurations, but they can't verify runtime authorization logic or chain exploits together.
  3. Automated penetration testing: a purpose-built tool that actually attempts to exploit your application the way an attacker would. This is the fastest path from 'I think my app is secure' to 'I know what's broken and here's how to fix it.'

Pentrust is built specifically for this problem. It spins up AI agents that chain real exploits — attempting IDOR, auth bypass, injection, configuration leaks, and more — against your live application, then provides copy-paste fixes for every finding. A full pentest runs in under 30 minutes.

Ready to check your app?

Find your vulnerabilities before attackers do.

Pentrust runs AI agents that chain real exploits against your application and provides copy-paste fixes for every finding. Full pentest in under 30 minutes.

Run a free pentest