
To my belief, there was never a goal to write good code. The goal was maintainability and keeping things simple, so that people understand. People come and go; you constantly see foreign code and have to do something with it.

Anyways, I see the maintainability hell coming for us. I still wonder how I'd organize this with AI. I definitely do not want to touch what is written by AI.



I think the industry-wide hope is that AI manages the AI-written code, but it’s unclear whether that’s actually going to work out in practice. Right now, my experience is that it's dicey. I’ve had AI mess up a codebase to the point where I threw it away and restarted. Maybe I was doing it wrong, though, in that I was looking at the code and was increasingly horrified by the slop. I get the feeling that in this new world, we’re supposed to ignore how the sausage is made and just focus on the final outcome.


IME AI-native engineering requires a lot of infrastructure to make it viable. Teams who are just opening up Cursor, putting it on "auto," and trying to one-shot features may get stuff that works, but it is indeed slop.

Since the beginning of the year, I've been spearheading a low-stakes AI-native project (an internal tool). No one's written a single line of code. And we've learned so much from this experience. The first rule was that our product manager, who is technical but isn't typically in the weeds, needs to be able to one-shot prompts with Cursor on auto. And so many rules stem from there: e2e tests to ensure he doesn't break stuff, custom linters to ensure that code lives in the right place, and architectural spec sheets so the LLM doesn't try to do raw DB queries from the client.
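To give a feel for what such a guardrail looks like, here's a minimal sketch of a custom lint rule in the spirit of the ones described above: fail the build if any file under a client directory imports the database layer directly. The directory name (`client/`) and module name (`db`) are hypothetical, not from the actual project.

```python
# Hypothetical lint rule: client code must never import the DB layer.
# In CI, any printed line would fail the build.
import pathlib
import re

# Matches "import db" or "from db import ..." at the start of a line.
FORBIDDEN = re.compile(r"^\s*(import db\b|from db\b)", re.MULTILINE)

def find_violations(root: str) -> list[str]:
    """Return paths of files under `root` that import the DB layer."""
    bad = []
    for path in pathlib.Path(root).rglob("*.py"):
        if FORBIDDEN.search(path.read_text()):
            bad.append(str(path))
    return sorted(bad)

if __name__ == "__main__":
    for v in find_violations("client"):
        print(f"{v}: client code must not query the DB directly")
```

The point isn't the specific rule; it's that each architectural decision gets encoded as a cheap, mechanical check the LLM can't talk its way around.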

We're still not there, but we're getting closer and learning and improving every day.

I think the folks who are vibe coding a lot either aren't working in a team, or they are omitting the fact that they have spent a long time building harnesses to ensure the LLM doesn't run amok.

And I think the people who hate vibe coding are likely just asking Claude Code to do X without using Skills that have opinionated ways to do X.
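For readers unfamiliar with Skills: in Claude Code they are markdown files with YAML frontmatter that get loaded when the task matches the description. A hypothetical sketch (the skill name, command, and file paths are made up for illustration):

```markdown
---
name: add-db-migration
description: Use when adding or changing database tables or columns
---

# Adding a DB migration

1. Never edit an existing migration file; always create a new one.
2. Generate it with `make new-migration NAME=<change>`.
3. Update docs/schema.md in the same commit.
```

The opinionated part is exactly what's missing when you just ask the agent to "do X" cold.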

All that said, I don't think we should ignore how the sausage is made at all. Part of what makes me able to move quickly in this project is knowing where stuff lives. I may not understand the line-by-line code, but if I know where to look to find out why I'm missing data that's in the DB, I can move a lot faster than if I have no idea what's going on in the codebase. Then when I find the problematic file or function, I can ask the LLM why it's like X and tell it it should be like Y.


Cool. Are you restricting the AI to be very focused on a function or an architectural block that you've envisioned, or are you giving it more freedom? I seem to get less slop when I really constrain things, but that takes a lot of work (e.g., specs) and dialogue with the AI (“focus on X, now let’s design block Y,” etc.).


I give it freedom, but with predefined restrictions. I use a plug-in called Obra Superpowers. Whenever I want to start on a block of work, whether it's a ticket or just tackling tech debt, I start with the brainstorm command. I say something vague like "implement X" or "last time i tried to vibe code Y, Z happened. I don't want that to happen again. Let's improve the harness."

It'll ask follow-up questions, which I answer, then generate specs that I manually review. If they look good, it'll generate a plan. If not, I'll give it feedback.

When the scope of work is well-defined (i.e., my boss says users should be able to do Y), this process is fairly seamless.

When it's not well-defined, it does take a bit longer and more dialogue, as you said. But because everything is documented and written down, we have a pretty good feedback loop (boss asks why it works like X; I can look at the generated spec/plan, or ask the AI to, to understand why).


Ok, so it’s constrained by specs, but you dialogue with the AI and have it create the specs. I should try that. I’ve been creating my own specs, having it work from those, and then iterating, but that’s not exactly quick, and I find myself thinking, “At this rate I could do it faster myself.”


Yeah, definitely agreed. I'm lucky that my boss is willing to invest in this little experiment, so the point isn't "can we do this faster manually," it's "how can we build our AI infrastructure such that it can actually be faster."

And also, I'm taking care of my infant daughter while working, so my workflow is often "launch an AI agent from my computer while she's asleep, review the plan on my phone while feeding or napping the little one, approve it and execute it." It's often running when I'm not really in a mental space to be thinking deeply.



