Five failure modes of LLM-written auth middleware
LLMs ship code that looks like auth. The surface reads well: guards, JWTs, bcrypt, state parameters. It passes review because each line, in isolation, is plausible. The bugs live in ordering, scope, and side effects - the things no diff highlights.
Over the past year I have audited roughly forty AI-generated codebases. A consistent pattern: the auth middleware is the most confidently written file in the repository, and also the most broken. Here are the five classes of bug I now check for first.
1. The role check runs after the data fetch
The most common failure mode, and the easiest to miss in review. The LLM knows it needs an authorization check. It also knows the endpoint needs to return data. It writes both. It does not reliably write them in the right order.
// What ships
async function getInvoice(req, res) {
  const invoice = await db.invoices.findById(req.params.id);
  if (invoice.ownerId !== req.user.id) {
    return res.status(403).send('Forbidden');
  }
  return res.json(invoice);
}
This looks fine. It is not. The ownership check happens after the query has already executed. On a read endpoint the immediate problems are a wasted query and an unhandled TypeError when the row is missing (invoice is null before the check runs). On a write endpoint, the equivalent pattern - validate, fetch, check, mutate - reorders into validate, mutate, check, and now you have an IDOR that silently writes before it refuses.
The correct shape is a policy gate that runs before any data access, typically against a can(user, action, resource_id) predicate that only reads the minimum needed to decide. Anything less and you are depending on the LLM to respect an invariant that nothing in its training data enforces.
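A minimal sketch of that shape, assuming a hypothetical can predicate and an ownerOf lookup (the names are illustrative, not from any library). The point is that the decision reads only the owner column, and the full fetch happens strictly after the gate:

```javascript
// Hypothetical policy gate: decide on the resource identifier alone,
// reading only the minimum (the owner column), never the full resource.
async function can(user, action, resourceId, ownerOf) {
  const ownerId = await ownerOf(resourceId);
  if (ownerId === undefined) return false; // unknown resource: deny
  return ownerId === user.id;
}

// Route handler shape: gate first, fetch second. `db` is a stand-in
// for whatever data layer the app uses.
async function getInvoice(db, req, res) {
  const allowed = await can(req.user, 'read', req.params.id,
    (id) => db.invoiceOwner(id));
  if (!allowed) return res.status(403).send('Forbidden');
  const invoice = await db.invoices.findById(req.params.id);
  return res.json(invoice);
}
```

The denial path never touches the invoices table, so a reordering regression in the handler cannot turn into a silent write.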
How to audit it
Grep the repo for every findById, findOne, get* inside a route handler. For each one, trace upward: is there a guard that references req.user before this line? If the guard is below, or references the fetched object's fields, flag it.
2. Password reset has no rate limit
Password reset endpoints are where LLMs predictably cut corners, because the test data never has adversarial volume. The generated controller sends an email, writes a token, and returns 200. No per-IP throttle, no per-account throttle, no exponential backoff.
Two real consequences: email bombing (you become a spam vector against your own users) and token enumeration (an attacker who knows the token structure can race the user). I have seen both in production AI-built SaaS this year.
// What ships
app.post('/password/reset-request', async (req, res) => {
  const user = await User.findByEmail(req.body.email);
  if (user) {
    const token = crypto.randomBytes(32).toString('hex');
    await Token.create({ userId: user.id, token, expiresAt: Date.now() + 60 * 60 * 1000 });
    await mailer.send(user.email, resetTemplate(token));
  }
  res.json({ ok: true });
});
Missing: rate limit keyed on (ip, email), invalidation of prior unused tokens, and constant-time response whether or not the email exists. That last point matters more than people think - the if (user) branch is timing-distinguishable and lets attackers enumerate valid emails.
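The throttle itself is small. A sketch of a fixed-window limiter keyed on (ip, email) - the names and limits here are illustrative, and in production you would back the counter with Redis rather than process memory, but the shape is the same:

```javascript
// Fixed-window throttle keyed on (ip, email). `now` is injectable so
// the window logic is testable; defaults are illustrative.
function makeThrottle({ limit = 3, windowMs = 15 * 60 * 1000, now = Date.now } = {}) {
  const hits = new Map(); // key -> { count, windowStart }
  return function allow(ip, email) {
    const key = `${ip}:${email.toLowerCase()}`;
    const t = now();
    const entry = hits.get(key);
    if (!entry || t - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: t });
      return true;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}
```

Wire it in front of the reset handler and return the same generic 200 on rejection, so the limiter itself does not become an enumeration oracle.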
3. OAuth state parameter bound to nothing
If I had to name the single most load-bearing OAuth detail that LLMs get wrong, it is the state parameter. The spec is clear: state must be bound to the initiating session and verified on callback. What you typically get instead:
// What ships
app.get('/auth/google', (req, res) => {
  const state = crypto.randomBytes(16).toString('hex');
  res.redirect(googleAuthUrl({ state }));
});

app.get('/auth/google/callback', async (req, res) => {
  const { code, state } = req.query;
  // state is accepted without checking
  const tokens = await exchangeCode(code);
  // ...sign in user
});
The state is generated, sent out, and never referenced again. Any state string satisfies this callback. That is textbook CSRF on the login flow - an attacker links a victim to /auth/google/callback?code=attacker_code and the victim is now logged in as the attacker. Which is exactly how account-takeover chains start.
The fix is unremarkable: put state in a signed, short-lived cookie scoped to the callback path, and reject any callback whose state does not match. The reason LLMs skip it is that it takes two extra functions and a cookie config, and the happy-path test still passes without it.
4. Session fixation on login
The pattern: an anonymous user gets a session ID on first request. They log in. The session ID is reused with a userId stamped on it. That is session fixation - an attacker who planted that pre-login session ID (via a reflected link, a shared terminal, anything) is now signed in as the victim.
LLMs ship this because rotating session IDs on privilege transitions is one of those things you only learn by having it exploited. The fix is one line: regenerate the session on successful authentication, and on every privilege elevation (MFA, sudo mode, role change). Frameworks expose it - Express's req.session.regenerate(), Django's cycle_key(), Rails's reset_session - and most AI-generated login handlers forget to call it.
5. JWT verification accepts the wrong algorithm
The classic one, still alive and well in 2026. The LLM writes jwt.verify(token, secret) without pinning the algorithm. Some libraries default to "any algorithm in the header," which means an attacker can send a token signed with HS256 using the public key as the HMAC secret, and it verifies. Or send alg: none and skip the signature entirely.
// What ships
const payload = jwt.verify(token, config.jwtSecret);
// algorithm is whatever the header says
Correct shape: jwt.verify(token, key, { algorithms: ['RS256'] }). Always pin. Always whitelist. And if you are rotating keys, key selection must be driven by your own logic, not by kid claims you trust blindly - kid can and will be used to point at arbitrary files when the code does string concatenation on it.
What to do on Monday
If you ship AI-generated auth today, run these five checks before you do anything else:
- For every data read in a route handler, identify the authorization check and confirm it runs first, against the resource identifier, not the resource object.
- Reset endpoints: rate-limited per-IP and per-email, constant-time response, tokens invalidated on use and on new requests.
- OAuth callbacks: state bound to a cookie set at redirect time, verified on return, single-use.
- Login handler regenerates the session ID. MFA prompts regenerate again.
- JWT verify calls pass an explicit algorithms array. No exceptions.
None of these require rewriting the architecture. They require one engineer reading the auth layer with suspicion, for an hour. That is the job. LLMs do not do it for free, and they will not next year either.
Ship boring releases.
Book a 20-min call.