I feel like the comments are taking away the wrong message from this. He used it to do the part of the project where he has little knowledge, experience, or capacity. He needs a visualization, but the project isn't about visualization. He's spent a lifetime coding in C, not writing Python-based visualizations.
There's a big difference between vibe-coding an entire project and having an AI build a component that you lack competency for. That is what is happening here.
It's the same principle as a startup that builds its core functionality in-house and then uses off-the-shelf libraries for all the other uninteresting details.
>Add README and LICENSE file
I'm pushing it out to github not because it needs to be public, but
because of my policy of using the internet as my backups. And because
it makes it so much easier to just sync between machines.
Very cool to see a legend's side project. I'll check this out when I have time, even though I can't understand C well.
I think many people reading your comment and taking a cursory glance at what Linus wrote will conclude he's against AI in kernel development. He may very well be, but the post you link to is in response to this:
I'm just saying that we should highlight that:
1. LLMs _allow you to send patches end-to-end without expertise_.
2. As a result, even though the community (rightly) strongly disapproves of blanket dismissals of series, if we suspect AI slop [I think it's useful to actually use that term], maintainers can reject it out of hand.
Point 2 is absolutely a new thing in my view.
What Linus is saying in his response to the above is that if people use AI to produce slop and submit it, documentation saying "don't do that" won't discourage them, so there's no need to pollute the documentation with it.
What's truly "sad" is that it's ok if Google does it. It's apparently not ok when OpenAI, Musk, etc do literally the same thing.
This whole controversy reminds me of that Rhodesian-designed shotgun marketed as the "Street Sweeper" in the US. Making a tool is one thing. Brandishing its unsafe working end at potential customers in an attempt to impress them, alas, could lead to interesting situations...
Depends on the stage. Codex/Claude-generated code has type hints and docstrings, and the agents will gladly add tests for every change you make. (That's why vibe-coded projects have a gazillion tests.)
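To illustrate that style (a minimal Python sketch with invented names, not real model output): even a trivial helper tends to come back fully annotated, with a test attached.

    def moving_average(values: list[float], window: int) -> list[float]:
        """Return the simple moving average of `values` over `window` samples."""
        if window <= 0:
            raise ValueError("window must be positive")
        return [
            sum(values[i : i + window]) / window
            for i in range(len(values) - window + 1)
        ]

    def test_moving_average() -> None:
        # The kind of test the model adds unprompted.
        assert moving_average([1.0, 2.0, 3.0, 4.0], 2) == [1.5, 2.5, 3.5]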