
The “anti-AI hype” phrase oversimplifies what’s playing out at the moment. On the tech side, while things are still a bit rough around the edges, the tech is very useful and isn’t going away. I honestly don’t see much disagreement there.

The concern mostly comes from the business side… that for all the usefulness of the tech, there is no clearly viable path that financially supports everything that’s going on. It’s a nice set of useful features, but without products generating sufficient revenue to pay for it all.

That paints a picture of the tech sticking around but a general implosion of the startups and business models betting on making all this work.

The latter isn’t really “anti-AI hype” but more folks just calling out the reality that there’s not a lot of evidence and data to support the amount of money invested and committed. And if you’ve been around the tech and business scene a while, you’ve seen that movie before and know what comes next.

In 5 years time I expect to be using AI more than I do now. I also expect most of the AI companies and startups won’t exist anymore.



In the late 2000s I remember that "nobody is willing to pay for things on the Internet" was a common trope. I think it'll take a while, culturally, before businesses and people understand what they are willing to pay for. For example, if you are a large business and you pay xxxxx-xxxxxx per year per developer, but are only willing to pay xxx per year in AI tooling, something's out of proportion.


> For example if you are a large business and you pay xxxxx-xxxxxx per year per developer, but are only willing to pay xxx per year in AI tooling, something's out of proportion.

One is the time of a human (irreplaceable) and the other is a tool for some human to use, seems proportional to me.


> human (irreplaceable)

Everyone is replaceable. Software devs aren't special.


Domain knowledge is a real thing. Sure I could be replaced at my job but they'd have a pretty sketchy time until someone new can get up to speed.


Yes, with another human. I meant more that you cannot replace a human with a non-human, at least not yet, and not if you care about quality.


Perhaps you can replace multiple developers with a single developer and an AI tool in the near future.

In the same way that you could potentially replace multiple workers with handsaws with one guy wielding power tools.

There could be a lot of financial gain for businesses in this, even if you still need humans in the loop.


That may be, but I still think

> if you are a large business and you pay xxxxx-xxxxxx per year per developer, but are only willing to pay xxx per year in AI tooling, something's out of proportion.

Is way off base. Even if you replace multiple workers with one worker using a better tool, businesses still won't want to pay the "multiple worker salary" to the single worker just because they use a more effective tool.


Yes, I agree. But do they have to?

It would seem to me that tokens are only going to get more efficient and cheaper from here.

Demand is going to rise further as AI keeps improving.

Some argue there is a bubble, but with demand from the public for private use, business, education, military, cybersecurity, and intelligence, it just seems like there will be no lack of investment.


Late 1990s maybe. Not late 2000s.


People said the exact same thing about (numbers from memory, might be off):

- when Google paid $1 bil for YouTube

- when Facebook paid $1 bil for Instagram

- when Facebook paid $1 bil for WhatsApp

The same thing - that these three companies made no money, had no path to making money, and that the price paid was crazy and decoupled from any economics.

Yet now, in hindsight, they look like brilliant business decisions.



You listed only acquisitions that paid off and not the many, many more that didn't though.


I am not even clear how WhatsApp "paid off" for Facebook in any sense other than them being able to nip a potential competitor in the bud. I use WhatsApp but do not see a single advert there, nor do I pay a single penny for it, and I suspect my situation is pretty typical. Presumably some people see ads or pay for some services, but I've not, and I don't imagine there's that much money to be made in being the #1 platform for sharing "Good Morning" GIFs.


While many people thought Facebook/Google paid too much for these companies, you're making an apples-to-oranges comparison. The part about there being "no path to making money" is wrong: online advertising was a huge industry and only getting stronger, and while YT/Insta/WhatsApp may have struggled as standalone companies, it was clear they'd unlock an enormous amount of value as part of a bigger company that already had a strong foothold in online advertising.

It is not clear who, other than maybe someone like Microsoft, could actually acquire companies like OpenAI or Anthropic. They are orders of magnitude larger than the companies you mentioned in terms of what they are "worth" (haha) and even how much money they need just to keep the lights on, let alone turn any kind of profit.

Not to mention the logical fallacy at the core of your point - people said "the exact same[sic] thing" about YouTube, Instagram and Whatsapp ... therefore, what, it necessarily means these companies are the same? You realise that many of us talked like this about "the blockchain", and "the Metaverse" and about those stupid ape JPEGS and we were absolutely correct to do so.


> Not to mention the logical fallacy at the core of your point

Yes, it's a logical fallacy. Another one is saying "I don't see any viable business model, therefore there is no viable business model".

Blast from the past:

> YouTube is a content paradise though. There's tons of value there and you can sell ads against it or even charge for premium services.

> Where's the money in Instagram? The content is practically worthless and their only real value is in their userbase. Even though I use the Instagram client, most of the time I see photos, they come through Twitter. So that also reinforces for me that any value is in the users and not the actual content, which is mostly crap.

> I'm more convinced that we're in a 2nd bubble now more than ever.

https://news.ycombinator.com/item?id=3818037

Another one:

> Does anyone else think this valuation is insane? It's like $300/registered user. The company doesn't have a business model. No way the handful of employees are worth $1B. My mind is blown.

https://news.ycombinator.com/item?id=3817930


It sounds like you're really into this, and I hope for all of our sakes that you are correct to be so hyped up about AI. Because if you're not, and this is a horrific bubble that is going to burst, then we're all in big trouble.


yeah, and Zuckerberg said that everyone on planet Earth would buy his VR helmet, and renamed his whole company after a stupid game which I don't think even exists anymore. Being a contrarian doesn't mean you are right, and sometimes seemingly stupid money-losing things turn out... stupid.


There’s no comparison to what’s going on now vs those examples. Not even remotely similar.


> that for all the usefulness on the tech there is no clearly viable path that financially supports everything that’s going on

you lack imagination; human workers are paid over $10 trillion globally.


They were even saying this about Uber just a couple of years ago. Now Uber makes $15b a year.


Uber are doing something entirely different though - they took a market which was proven to exist, created a product which worked, then spent a decade being horribly unprofitable until they were the dominant player in that market. And even at their very worst they weren't losing as much money as OpenAI are. There's far too much hand-waving and dismissive "ah, it'll be OK because Uber exist" going on among those who have bought into the AI hype cycle.


We don't really know how much money Google sunk into YouTube before it became (presumably) profitable. It might have actually not been strongly coupled to economics.


Also, before buying YouTube they attempted their own competitor, called Google Video. It never got very popular.


The blog post title is a joke about the AI hype.


Well it completely misses the mark, because your whole article IS hyping up AI, and probably more than anything I've seen before honestly.

If it's all meant to be ironic, it's a huge failure, and people will use it to support their AI hype.


I was not clear enough. I wanted to write a PRO-AI blog post. The people against AI always say negative things, using as their central argument that "AI is hyped and overhyped". So I, for fun, consider the anti-AI movement a form of hype itself. It's a joke, but not in the sense that it doesn't mean what it says.


However, as you point out, anti-AI people are pushing back against hype, not indulging in hype themselves - not least as nobody is trying to sell 'not-AI'.

I for one look forward to the next AI winter, which I hope will be long, deep, and savage.


[flagged]


> Anyone claiming "Writing code is no longer needed for the most part" is not a serious software engineer.

You need to recalibrate. Six months ago I would have agreed with you, but Opus 4.5 and GPT-5.2 represent a very real change.

I would phrase this as "typing the code out by hand is no longer needed for the most part", which I think is what antirez was getting at here.


And I'm sure if you go back to the release of 3.5, you'll see the exact same comments.

And when 5 comes out, I'm sure I'll see you commenting "OK, I would have agreed 6 months ago, but now with Claude 5 Opus it's great".

It's really the weirdest type of goalpost moving.

I have used Opus 4.5 a lot lately and it's garbage, absolutely useless for anything beyond generating trivial shit for which I'd use a library anyway, or which is already integrated in the framework I use.

I think the real reason your opinion has changed in 6 months is that your skills have atrophied.

It's all as bad as it was 6 months ago, and even as bad as 2 years ago; you've just become worse.


> And I'm sure if you go back to the release of 3.5, you'll see the exact same comments.

Not from people whose opinions on that I respect.

Credible software developers I know were impressed by Claude 3.5 but none of them were saying "I don't type out code by hand any more". Now they are.

If you think LLMs today are "as bad as 2 years ago" then I don't respect your opinion. That's not a credible thing to say.


> Not from people whose opinions on that I respect.

Then you shouldn't respect Antirez's opinion, because he wrote articles saying just that 2 years ago.

> If you think LLMs today are "as bad as 2 years ago" then I don't respect your opinion. That's not a credible thing to say.

You are getting fooled by longer context windows and better tooling around the LLMs. The models themselves have definitely not gotten better. In fact it's easy to test: just give the exact same prompt to 3.5 and 4.5, and you'll receive the exact same answer.

The only difference is that when you used to copy-paste answers from the ChatGPT UI, you now have it integrated in your IDE (with the added bonus of it being able to empty your wallet much quicker). It's a faster process, not a better one. I'd even argue it's worse, since you spend less time reviewing the LLM's answer in this situation.

How do you explain that it's so easy to tell (in a bad way) when a PR is AI-generated if it's not necessary to code by hand anymore?


Claude 3.5 didn't have "reasoning" - Anthropic first added that in 3.7 less than a year ago.

The RL for code problems that supported reasoning modes has been the driving force behind most of the model improvements for code over 2025: https://simonwillison.net/2025/Dec/31/the-year-in-llms/#the-...

> Then you shouldn't respect Antirez's opinion, because he wrote articles saying just that 2 years ago.

Which articles? What did he say?

https://antirez.com/news/154 is one from six months ago where he says:

> Despite the large interest in agents that can code alone, right now you can maximize your impact as a software developer by using LLMs in an explicit way, staying in the loop.


>If you think LLMs today are "as bad as 2 years ago" then I don't respect your opinion. That's not a credible thing to say.

This exact comment started getting old a year ago.


I can't tell if you are agreeing or disagreeing with me here.


What's wrong with you? Let people express their experience without calling them mentally ill. Pull yourself together.


The comment was flagged and killed; the system works.

Please don't respond to personal attacks with personal attacks.


Personal attack for calling out the hostility? And btw it was not flagged nor killed at the moment when I wrote my comment.


Language like "What's wrong with you?" is a clear personal attack.


I couldn't care less.


There are too many people who see the absurd AI hype (especially absurd in terms of investment) and construct from it a counter-argument that AI is useless, overblown and just generally not good. And that's a fallacy. Two things can be true at the same time: coding agents are a step change and immensely useful, and the valuations and breathless AGI evangelizing are a smoke screen and pure hype.

Don't let hype deter you from getting your own hands dirty and trying shit.


> On the tech side, while things are a bit rough around the edges still the tech is very useful and isn’t going away. I honestly don’t see much disagreement there.

What? HN is absolutely packed with people complaining that LLMs are nothing more than net-useless creators of slop.

Granted, fewer than six months ago, which should tell people something...



