
> AI coders keep saying they review all the code they push

Those tides have shifted over the past six weeks. I'm increasingly seeing serious, experienced engineers who are using AI to write code and are not reviewing every line they push, because they've developed enough trust in the output of Opus 4.5 that line-by-line reviews no longer feel necessary.

(I'm hesitant to admit it but I'm starting to join their ranks.)

In the past week, I saw Opus 4.5 (being used by someone else) implement "JWT-based authentication" by appending the key to a (fake) header and body. When asked to fix this, it switched to hashing the key (and nothing else) and appending the hash instead. The "signature" still did not depend on the body, meaning an attacker could trivially forge an arbitrary body, allowing them to e.g. impersonate any user they wanted.

Do I think Opus 4.5 would always make that mistake? No. But it does indicate that the output of even SotA models needs careful review if the code actually matters.
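For contrast, here's a minimal sketch of what a real JWT signature (HS256, assuming a shared secret) has to look like: the MAC covers both the encoded header and the encoded payload, so tampering with either one invalidates the token. This is illustrative, not the code from the incident above.

    import base64
    import hashlib
    import hmac
    import json

    def b64url(data: bytes) -> str:
        # JWT uses unpadded URL-safe base64
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    def sign_jwt(payload: dict, secret: bytes) -> str:
        header = {"alg": "HS256", "typ": "JWT"}
        signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
        # The signature is an HMAC over header *and* payload, keyed by the secret,
        # so altering the payload without the secret cannot produce a valid signature.
        sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
        return signing_input + "." + b64url(sig)

    def verify_jwt(token: str, secret: bytes) -> dict:
        signing_input, _, sig = token.rpartition(".")
        expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(b64url(expected), sig):
            raise ValueError("invalid signature")
        payload_b64 = signing_input.split(".")[1]
        # restore base64 padding before decoding
        payload_b64 += "=" * (-len(payload_b64) % 4)
        return json.loads(base64.urlsafe_b64decode(payload_b64))

Hashing only the key, as in the "fix" described above, produces a constant string that any attacker can copy onto whatever body they like; the whole point of the signature is that it's a function of the message and the secret together.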



