
I have to say, I'm not convinced by all of the AI hype, but it makes a bloody good search engine for a lot of the things I want to search (things where I am in a reasonable position to evaluate the truthfulness of an answer). It's only a matter of time before someone realises that and ruins it in the name of making money.


It’s far more 5D chess than that.

Search costs nothing to run relative to ad revenue. Microsoft makes each query require 100x the CPU usage because users expect an LLM answer with their results.

Microsoft’s share of search goes from 1% to 5%. Their costs go up, but their sale of ads increases and they get valuable IP.

Google loses 5% of market share, but its costs go up 100x.

Google can no longer finance its other bets like Cloud so effectively.

Microsoft meanwhile has a more compelling cloud offering.

Google starts to lose more ground on Cloud.

Amazon (not an AI company) loses ground to both.

Classic Art of War: if your opponent is strong, attack where they are weak.


Is your point that MS is diversified enough to absorb the losses of eating up Google's share in search even with the now higher costs?


Microsoft can afford to make a less profitable product for search than Google because if Google competes it’s a net win for Microsoft.

AI has many other profitable uses for Microsoft but specifically using it to compete with Google Search seems like a poison pill.


The point, as I understand it, is that for Microsoft the additional cost of returning search results via an LLM is pretty small compared to their overall business.

However, they will force Google's hand into moving their whole search load over to LLMs to avoid losing market share to Bing. This will cost Google a lot more, as search is a much bigger part of their business.

It's an interesting theory but I wouldn't know enough to evaluate how likely it is.


I will be devastated when (hopefully if!) this happens. If I use a standard search engine to look up a question, I typically get endless “okay” results with piles of moneygrabbing shit mixed in. If I ask ChatGPT, I get a contextually relevant, straightforward answer that is probably as accurate as anything else I could reasonably find. It’s like a breath of fresh air.


Sure, but only for now. ChatGPT results will also be enshittified, and since the cycle of everything is much faster now, it will take much less than 20 years.


Nah, the fruit hangs very low here; I bet that fine-tuning with ads is in MS's and Google's backlogs already. Try ad-blocking that.


OpenAI has some hiring reqs out for people with AdTech experience too.


edit: I totally misread the parent comment, move along

I really wouldn't recommend looking to LLMs to evaluate truthfulness; they aren't built for that and have no way of knowing what is true.

They're much better as a tool for summarizing information, with the usual caveat that any summary is incomplete and can only be as truthful as the original source(s) used.


I think you've misread my comment. I have found Copilot, for example, to be really useful for searching for things like how to pull part of a log out using common bash commands. I can generally evaluate what it comes up with pretty well, having used all of the commands before but just not on the daily basis that would make it easy for me to come up with the initial version on my own.

On the other hand asking it the population of Azerbaijan is problematic. I'd have to Google whatever it told me.


Looked again and I absolutely misread it, I somehow completely missed the "things where I am in a reasonable position to evaluate" part. Sorry about that!


Can't they technically be supplemented to report their confidence level along with their output? Afaict there's ways to do it.


But what does a confidence level even mean for an LLM? "Truthfulness" would be limited by the original training set, compounded by the fact that the LLM loses context when effectively compressing the original dataset down to the LLM's final model.

A confidence level could be helpful to know how much competing data was considered by the LLM when predicting each word, but it would say nothing of truthfulness or even accuracy relative to the original training data.
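The "competing data considered when predicting each word" idea above can be sketched concretely. One crude proxy (hypothetical here, not how any particular product does it) is to score each predicted token by how peaked the model's output distribution is, e.g. via normalized entropy over the candidate probabilities that some APIs expose as logprobs:

```python
import math

def token_confidence(probs):
    """Confidence proxy for one predicted token: 1 - normalized entropy.

    probs: the model's probability distribution over candidate next tokens.
    Returns 1.0 when all mass sits on a single token (no competing
    candidates) and approaches 0.0 when mass is spread evenly, i.e. when
    many alternatives "competed" for that position.
    """
    probs = [p for p in probs if p > 0]
    if len(probs) <= 1:
        return 1.0
    entropy = -sum(p * math.log(p) for p in probs)
    return 1.0 - entropy / math.log(len(probs))

# A peaked distribution (one clear winner) scores near 1,
# while an even split among candidates scores near 0.
print(token_confidence([0.97, 0.01, 0.01, 0.01]))
print(token_confidence([0.25, 0.25, 0.25, 0.25]))
```

As the comment notes, this only measures how decisive the model was, which says nothing about whether the decisive answer is actually true.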

