
5 Vulnerabilities I Found in Random Startups This Month (And How They Fixed Them)

Five real vulnerabilities pulled from this month's free audits — anonymized, explained, and with the exact fix the team shipped.


This month we ran 23 free audits. We found something in 19 of them. Here are the five most interesting findings — anonymized, explained, and with the exact fix the team shipped.

If you recognize your own app in any of these, you're not alone. These are common.

1. The "admin" route that wasn't admin

A YC-backed B2B SaaS, ~80 customers. Their app had an /admin route gated by a check that looked like this:

if (req.headers["x-admin-key"] === "letmein") { ... }

The key was hardcoded. Every employee knew it. The string had been in the codebase for 18 months. We found it in 90 seconds by grepping their public JS bundle, where a bundler config change six weeks earlier had accidentally inlined it.

Severity: Critical. Anyone who looked at the source could become admin.

The fix they shipped: Removed the hardcoded check. Moved admin to a separate subdomain protected by their actual auth provider, with a role check enforced server-side. Rotated all session tokens. Time to fix: 1 day.
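Their exact middleware isn't public, so here's a minimal sketch of the pattern, assuming an Express-style handler where the auth provider has already verified the session and attached it as req.session (both names hypothetical). The point is that the role lives in the server-side session, never in a client-supplied header:

```javascript
// Sketch of a server-side admin check. Assumes the auth provider has
// already validated the session and attached it as req.session.
function requireAdmin(req, res, next) {
  const session = req.session;
  if (!session || session.role !== "admin") {
    // 404 rather than 403, so the response doesn't confirm the route exists
    return res.status(404).json({ error: "Not found" });
  }
  next();
}
```

Nothing the client sends can flip the check, because the role is read from state the server created.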

The lesson: Anything you "ship as a placeholder" will become production. There is no temporary in code.

2. The IDOR in the file download endpoint

A document-collaboration tool, Series A. Their file download endpoint was:

GET /api/files/:fileId/download

It checked that you were authenticated. It did not check that the file belonged to your account. Changing the fileId in the URL would download anyone's file.

Severity: Critical. Cross-tenant data exposure.

The fix they shipped: Added an ownership check at the start of the handler:

const file = await db.file.findFirst({
  where: { id: fileId, accountId: session.accountId },
});
if (!file) return res.status(404).json({ error: "Not found" });

Backfilled the same pattern across 31 other endpoints. Wrote a single requireOwnedResource() helper to enforce it. Time to fix: 3 days.
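The helper itself wasn't shared, but its shape is easy to sketch. This assumes Express-style middleware; the lookup function is injected so each endpoint supplies its own model query, and the names below are hypothetical:

```javascript
// Sketch of a requireOwnedResource() factory: wraps any "find by id
// within this account" lookup and 404s when the caller doesn't own it.
function requireOwnedResource(lookup) {
  return async function (req, res, next) {
    const resource = await lookup(req.params.id, req.session.accountId);
    if (!resource) {
      // 404, not 403: don't confirm the resource exists for other tenants
      return res.status(404).json({ error: "Not found" });
    }
    req.resource = resource; // handler reads the already-authorized record
    next();
  };
}
```

A route would then read something like app.get("/api/files/:id/download", requireOwnedResource(findFileForAccount), downloadHandler) — with the ownership check hard to forget, because it sits in the route definition rather than inside each handler.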

The lesson: Authentication is not authorization. "Logged in" doesn't mean "allowed."

3. The exposed .git directory

A polished marketing site for a Series B startup. Going to /.git/config returned the contents. From there we cloned the entire repository. From the repo we got their staging database credentials. From staging we could read their production user table because someone had pointed staging at the production replica "temporarily."

Severity: Critical. Full repo + database access from one URL.

The fix they shipped: Three things, in this order. Removed .git from the deployed artifact (added to their build's exclude list). Rotated every credential the repo contained. Disconnected staging from the production replica and seeded it with synthetic data. Time to fix: 2 days, mostly the credential rotation.
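Beyond the build exclude list, a cheap second layer is to refuse to serve any dotfile path at all, so a future packaging regression can't re-expose .git or .env. A sketch, assuming an Express-style middleware (their actual stack wasn't disclosed):

```javascript
// Defense-in-depth sketch: 404 any request whose path contains a
// segment starting with "." (".git", ".env", ".DS_Store", ...).
function blockDotfiles(req, res, next) {
  const hidden = req.path.split("/").some((segment) => segment.startsWith("."));
  if (hidden) {
    return res.status(404).json({ error: "Not found" });
  }
  next();
}
```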

The lesson: Your deployed bundle is a public document. Treat it like one.

4. The password reset that didn't expire

A consumer SaaS, 12k DAU. Their password reset flow emailed a link containing a token. The token never expired, and requesting a new one did not invalidate previous tokens.

We requested 50 password resets for our test account, then used the *first* token, sent 30 minutes earlier, and successfully reset the password.

Severity: High. A reset token that never expires turns every old reset email into a permanent key to the account. If any one of those emails ever leaks, the account is compromised.

The fix they shipped: Set token expiration to 30 minutes. Invalidated all previous tokens for an account when a new reset was requested. Invalidated the token after one successful use. Added a "Your password was changed" notification email. Time to fix: 4 hours.

The lesson: Tokens have a lifecycle. Issued, used, expired. Skip any of those steps and you have a problem.

5. The CSV export that was a SQL query

A data-heavy SaaS, around $2M ARR. Their "Export to CSV" feature took a filter object from the client, JSON-stringified it, and concatenated it into a SQL query.

The filter was meant to look like { status: "active" }. We sent { status: "active') OR 1=1; --" }. The export downloaded their entire database for that table.

Severity: Critical. SQL injection in 2026 should not exist, and yet.

The fix they shipped: Switched to parameterized queries via their ORM (they were using Knex but had bypassed it for this one feature). Audited the codebase for other places that built SQL with string concatenation — found two more. Added a CI lint rule that flags raw SQL strings for review. Time to fix: 2 days for the immediate fix, two weeks for the broader audit.
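One hedged sketch of the safe shape: column names come from a whitelist, and values are bound as `?` parameters so they never touch the SQL string. The table and column names below are invented for illustration, not taken from their schema:

```javascript
// Allowed filter columns; anything else is rejected outright.
// (Names invented for this sketch; adapt to the real schema.)
const ALLOWED_COLUMNS = new Set(["status", "created_at", "owner_id"]);

function buildExportQuery(filter) {
  const clauses = [];
  const params = [];
  for (const [column, value] of Object.entries(filter)) {
    if (!ALLOWED_COLUMNS.has(column)) {
      throw new Error(`Unknown filter column: ${column}`);
    }
    clauses.push(`${column} = ?`); // column from whitelist, value as parameter
    params.push(value);
  }
  const where = clauses.length ? ` WHERE ${clauses.join(" AND ")}` : "";
  return { sql: `SELECT * FROM exports${where}`, params };
}
```

Feed the attack payload through this and you get `SELECT * FROM exports WHERE status = ?` with the payload riding along as an inert bound value; the driver (or Knex's raw-binding support) never interprets it as SQL.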

The lesson: SQL injection is solved. If you have it, it's because someone bypassed the solution. Find out where else they bypassed it.

What these have in common

None of these were exotic. Every one is in the OWASP Top 10. Every one was in a codebase being actively maintained by a competent team. Every one was found in under an hour of looking.

The pattern: someone made a defensible local choice that became indefensible globally. A hardcoded admin key for testing. An ownership check that was added to most endpoints but missed one. A staging database wired to production for a one-time backfill.

The fix is rarely the hard part. The hard part is having someone look from outside, with no skin in the local choice, and notice.

That's what we do. If you'd like a free read on your app, request one at /free-audit. One week turnaround. Three findings, ranked. No upsell unless you ask.
