
It's all imperfect, for sure. For instance, see this old SO question [1], which does not specify a Python version. I pasted the text of the question and top answer into GPT-3, prefaced with the query "The following is programming advice. What is the language and version it is targeted at, and why?"

GPT-3's response:

> The language and version targeted here is Python 3, as indicated by the use of ThreadPoolExecutor from the concurrent.futures module. This is a module added in Python 3 and can be installed on earlier versions of Python via the backport in PyPi. The advice is tailored to Python 3 due to the use of this module.

That's imperfect, but I'm not trying to solve for Python specifically... just saying that the LLM itself holds the data a query engine needs to schematize a query correctly. We don't need ChatGPT to understand the significance of version numbers in some kind of sentient way; we just need it to surface that "for a question like X, here is the additional information you should specify to get a good answer". And THAT, I am pretty sure, it can do. No understanding required.
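A minimal sketch of the workflow being described, just wrapping a pasted Q&A in a classification prompt before sending it to a completion endpoint. The function name and prompt handling are illustrative assumptions, not the exact mechanics of the original experiment:

```python
def build_version_probe(question_and_answer: str) -> str:
    """Preface a pasted SO question + top answer with the kind of
    classification query described above. The resulting string would
    be sent to the LLM; no API call is made here."""
    preface = ("The following is programming advice. What is the "
               "language and version it is targeted at, and why?")
    return f"{preface}\n\n{question_and_answer}"

prompt = build_version_probe(
    "Q: How do I run tasks concurrently?\n"
    "A: Use ThreadPoolExecutor from concurrent.futures."
)
```

The point is only that the query engine, not the user, supplies the "what should I have specified?" framing.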

1. https://stackoverflow.com/questions/30812747/python-threadin...



I don't think the issue is whether current LLMs have sufficient data, but whether they will be able to use it sufficiently well to make an improvement.

The question you posed GPT-3 here is a rather leading one, unlikely to be asked except by an entity that already knows the version makes a significant difference in this context. I am wondering how you envisage this being integrated into Bing.

One way I can imagine is that if the user's query specified a python version, a response like that given by GPT-3 in this case might be used in ranking the candidate replies for relevance: reject it if the user asked about python 2, promote it if python 3 was asked for.
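That ranking step might look roughly like the sketch below, assuming the LLM's per-candidate classification (e.g. "python 2" / "python 3", or nothing when it can't tell) has already been extracted. The function name, tuple shape, and score values are all placeholders:

```python
def rank_candidates(candidates, requested_version):
    """Reject candidates the LLM tagged with a different version than
    the user asked for, and promote exact matches.

    `candidates` is a list of (answer_text, detected_version) pairs,
    where detected_version is a string like "python 3", or None when
    the LLM could not determine one."""
    kept = []
    for text, detected in candidates:
        if detected is None:
            kept.append((0, text))   # unknown version: keep, neutral score
        elif detected == requested_version:
            kept.append((1, text))   # match: promote
        # mismatch: reject (drop entirely)
    kept.sort(key=lambda pair: -pair[0])
    return [text for _, text in kept]

ranked = rank_candidates(
    [("use ThreadPoolExecutor", "python 3"),
     ("use thread.start_new_thread", "python 2"),
     ("use the threading module", None)],
    "python 3",
)
# → ["use ThreadPoolExecutor", "use the threading module"]
```

The real signal would of course be folded into a broader relevance score rather than a hard reject/promote, but the shape of the filter is the same.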

Another way I can imagine for Bing integration is that perhaps the LLM can be prompted with something like "what are the relevant issues in answering <this question> accurately?" in order to interact with the user to strengthen the query.

In either case, Bing's response to the user's query would be a link to some third-party work rather than an answer created by the LLM. That would address my biggest concern, being able to check its veracity, though its usefulness would still depend on the quality of the LLM's replies to those prompts.

On the other hand, the article says "Microsoft is betting that the more conversational and contextual replies to users’ queries will win over search users by supplying better-quality answers beyond links", apparently saying that they envision giving the user a response created by the LLM, which brings the question of verifiability back to center stage. Did you have some other form of Bing-LLM interaction in mind?



