Hacker News | algesten's comments

or being... punkted?

WebSockets over TCP is probably always going to cause problems for streaming media.

WebRTC over UDP is one choice for lossy situations. Media over QUIC might be another (is the future here?), and it might be more enterprise-firewall friendly since HTTP/3 runs over QUIC.


I wouldn't define it as Sans-IO if you take an IO argument and block/wait on reading/writing, whether that be via threads or an event loop.

With Sans-IO, the IO is _outside_ completely. No read/write at all.
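A minimal sketch of the distinction in Rust (all names hypothetical): the parser only accepts bytes the caller already read and hands back parsed events. The caller owns every read and write.

```rust
// Hypothetical sans-IO line parser: no socket, no file, no blocking.
// The caller feeds bytes in and polls events out.
pub struct LineParser {
    buf: Vec<u8>,
}

pub enum Event {
    Line(String),
    NeedMoreData,
}

impl LineParser {
    pub fn new() -> Self {
        LineParser { buf: Vec::new() }
    }

    /// Feed bytes the caller obtained however it likes
    /// (blocking socket, async runtime, test vector...).
    pub fn handle_input(&mut self, data: &[u8]) {
        self.buf.extend_from_slice(data);
    }

    /// Poll for the next complete line; no read() or write() happens here.
    pub fn poll_event(&mut self) -> Event {
        if let Some(pos) = self.buf.iter().position(|&b| b == b'\n') {
            let line: Vec<u8> = self.buf.drain(..=pos).collect();
            let s = String::from_utf8_lossy(&line[..line.len() - 1]).into_owned();
            Event::Line(s)
        } else {
            Event::NeedMoreData
        }
    }
}

fn main() {
    let mut p = LineParser::new();
    p.handle_input(b"hello\nwor");
    if let Event::Line(l) = p.poll_event() {
        println!("{}", l); // prints "hello"
    }
}
```

The same parser runs unchanged under threads, an event loop, or in a unit test, precisely because it never performs IO itself.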


Oof, you're completely right. I'm not sure where I got that wire crossed.


> ...allows it to carry current with zero resistance at 3.5 Kelvin (about -453 degrees Fahrenheit)

Seems to me this is a problem.


It's an interesting result, but yeah, not a room temperature superconductor.


For that matter, we've had superconductors for decades that work at much higher temperatures than this one.


It seems the breakthrough is that you could use familiar semiconductor manufacturing processes. However the temperature is still going to be a major issue. I don't want a computer that requires liquid helium cooling.


> I don't want a computer that requires liquid helium cooling.

True, but I /can/ see someone, such as Sandia National Labs, very much willing to install a liquid helium cooled computer if it provides a significant performance increase above their existing supercomputer installations.


> you could use familiar semiconductor manufacturing processes.

Unclear to me why that's helpful. Materials that superconduct at a higher temperature than this one aren't hard to come by, or obscure:

> In 1913, lead was found to superconduct at 7 K,


Probably because they don’t behave well with normal lithography techniques? The high-temp superconductors I know of are weird metamaterials, and good luck getting them to exist in chip form at all.


Isn’t that very close to the practical limit for cooling in a lab?


Not that hard. A dilution fridge, used for instance for cooling quantum computers, can go much lower:

https://en.wikipedia.org/wiki/Dilution_refrigerator


Quantum devices are already cooled to that temperature (at least for some technologies), so it's not a problem in that use case.


Thanks!

Was gonna be lazy and say… temp or it doesn't matter.


I assume with HTTP/1.1 this would be less useful, since each synchronized request would require another socket, thus hitting potential firewalls limiting SYN/SYN-ACK rate and/or concurrent connections from the same IP.

In some respects this is abusing the exact reason we got HTTP/3 to replace HTTP/2 – it's deliberate head-of-line (HoL) blocking.


You can pipeline requests on http/1.1. But most servers handle one request at a time, and don't read the next request until the current request's response is finished. (And mainstream browsers don't typically issue pipelined requests on http/1.1, IIRC)

If you have a connection per request, and you need 1000 requests to be 'simultaneous', you've got to get a 1000-packet burst to arrive closely packed, and that's a lot harder than this method (or a similar method suggested in comments: sending unfragmented TCP packets out of order, so when the first packet of the sequence is received, the rest of the packets are already there).
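For contrast, pipelining needs no packet tricks at all. A hypothetical sketch: queue two requests on one connection before reading either response.

```rust
use std::io::Write;

// Hypothetical sketch: pipeline two GETs on one keep-alive connection
// by serializing both requests back to back. The caller sends the
// resulting bytes as one burst, then reads two responses in order.
fn pipelined_requests(host: &str) -> Vec<u8> {
    let mut out = Vec::new();
    for path in ["/a", "/b"] {
        write!(
            out,
            "GET {} HTTP/1.1\r\nHost: {}\r\nConnection: keep-alive\r\n\r\n",
            path, host
        )
        .unwrap();
    }
    out
}

fn main() {
    let bytes = pipelined_requests("example.com");
    println!("{}", String::from_utf8_lossy(&bytes));
}
```

Whether the server actually responds to both, in order, is the gamble discussed below in the thread.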


Ok, pipelining as in using the fact that the socket is bidirectional, so you queue up the next request before the previous response has arrived?

Sounds a bit dodgy, since any response could potentially contain a Connection: Close. Maybe ok for some scenarios with idempotent methods.


It's less that the socket is bidirectional, and more that most requests have an unambiguous end. A pipeline-naive server with Connection: keep-alive is going to read the current request until the end, send a response, and then read from there. You don't have to wait for the response to send the next request; and you'll get better throughput if you don't.

Some servers do some really wacky stuff if you pipeline requests though. The RFC is clear: the server should respond to each request one at a time, in order. However, some servers choose not to --- I've seen out-of-order responses, interleaved responses, and server errors in response to pipelined requests. That's one of the reasons browsers don't tend to do it.

You also rightfully bring up the question of what to do if the connection is closed and your request has no response. IMHO, if you got Connection: Close in a response, that's an unambiguous case --- the server told you when serving response N that it won't send any more responses, and I think it's safe to resend any N+1 requests, as the server knows you won't get the responses and so it shouldn't process those requests. It's less clear when the connection is closed without explicit signalling --- the server may be processing the requests and you don't know. http/2 provides an explicit close that tells you the last request the server saw, which addresses this; on http/1.1, when the server closes unexpectedly, it's not clear. That often happens when the connection is idle.

An HTTP/1.1 server may send hints about how many requests until it closes the connection (which would be explicit), as well as the idle timeout (in seconds). But it's still not fun when you send a request and you receive a TCP close, and you have to guess if the server closed before it got the request (you should resend) or after (your request crashed the server, and you probably shouldn't resend).
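Those hints are presumably the Keep-Alive header (e.g. `Keep-Alive: timeout=5, max=100`). A rough parse might look like this (hypothetical helper, not from any particular library):

```rust
// Parse an HTTP/1.1 `Keep-Alive` header value such as
// "timeout=5, max=100". Returns (timeout_secs, max_requests);
// either parameter may be absent.
fn parse_keep_alive(value: &str) -> (Option<u64>, Option<u64>) {
    let mut timeout = None;
    let mut max = None;
    for part in value.split(',') {
        let mut kv = part.trim().splitn(2, '=');
        match (kv.next(), kv.next()) {
            (Some("timeout"), Some(v)) => timeout = v.trim().parse().ok(),
            (Some("max"), Some(v)) => max = v.trim().parse().ok(),
            _ => {}
        }
    }
    (timeout, max)
}

fn main() {
    let (t, m) = parse_keep_alive("timeout=5, max=100");
    println!("{:?} {:?}", t, m); // Some(5) Some(100)
}
```

With `max` in hand, a client at least knows how many pipelined requests it can risk before the server's explicit close.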


https://en.wikipedia.org/wiki/HTTP_pipelining

https://www-archive.mozilla.org/projects/netlib/http/pipelin...

https://kb.mozillazine.org/Network.http.pipelining

This has existed for years, and honestly it worked pretty well for me on most servers before HTTP2 came around, so long as you didn't abuse it. You could set up multiple connections too. I usually had mine set to "4".

Some servers didn't support it, most did though. Which was why when the first HTTP2 tech demos came out, I really couldn't see the enormous speedups people were trying to demo.


One thing I toyed with, but didn't get very far, was to encode the HTTP/1.1 protocol as a Sans-IO state machine with .await points for the IO, but rather than the IO registering Wakers with an async runtime, it relinquished control back to the user to perform the IO manually. One can think of it as .await releasing "up" instead of "down".

In the context of HTTP/1.1 the async code became a kind of "blueprint" for how the user wants the call to behave. At the time I was dead set on making it work in a no_std (no allocator) environment, and I gave up because I couldn't find a way around needing dynamic dispatch via Box<dyn X> (which requires an allocator).
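A rough, de-sugared version of the idea (hypothetical types, with the async machinery replaced by an explicit enum): each step returns an IO request "up" to the caller instead of performing it.

```rust
// What the caller is asked to do next, instead of the machine doing it.
enum Io {
    /// Caller should write these bytes, then call `resume` again.
    Write(Vec<u8>),
    /// Caller should read some bytes and pass them to `resume`.
    Read,
    /// The exchange is finished; here is the accumulated response.
    Done(Vec<u8>),
}

enum State {
    SendRequest,
    AwaitResponse,
    Finished,
}

struct Request {
    state: State,
    request: Vec<u8>,
    response: Vec<u8>,
}

impl Request {
    fn new(path: &str) -> Self {
        Request {
            state: State::SendRequest,
            request: format!("GET {} HTTP/1.1\r\n\r\n", path).into_bytes(),
            response: Vec::new(),
        }
    }

    /// Drive the state machine. `input` is whatever the caller read,
    /// if it was previously asked to read.
    fn resume(&mut self, input: &[u8]) -> Io {
        match self.state {
            State::SendRequest => {
                self.state = State::AwaitResponse;
                Io::Write(self.request.clone())
            }
            State::AwaitResponse => {
                self.response.extend_from_slice(input);
                // Toy completion check: end of response headers.
                if self.response.windows(4).any(|w| w == b"\r\n\r\n") {
                    self.state = State::Finished;
                    Io::Done(self.response.clone())
                } else {
                    Io::Read
                }
            }
            State::Finished => Io::Done(self.response.clone()),
        }
    }
}

fn main() {
    let mut r = Request::new("/");
    // First resume: the machine asks us to write the request.
    if let Io::Write(bytes) = r.resume(&[]) {
        assert!(bytes.starts_with(b"GET /"));
    }
    // Feed a (fake) response; the machine completes.
    if let Io::Done(resp) = r.resume(b"HTTP/1.1 200 OK\r\n\r\n") {
        println!("{} response bytes", resp.len());
    }
}
```

The caller loops on `resume`, performing each requested read or write however it likes; that is the ".await releasing 'up'" shape, without async or an allocator-hungry runtime.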


In fact, not being able to do some of these things might improve privacy.


"Semantic Index" sure is a better name than "Recall". Question is whether I can exfiltrate all my personal data in seconds?


I'm sure a simple Webkit vulnerability (there's none of those, ever, right?) will definitely not ensure that Semantic Index is featured in a future pwn2own competition.


I mean, I can already search my photos for “dog” or “burger” or words in text on photos. Adding an LLM to chat about it is just a new interface, is it not?


I think the important thing is that the semantic index tracks all you do through all your apps.


They are likely implemented very differently. I’m not certain but I imagine the current photos app uses an image model to detect and label objects which you can search against. I expect Semantic Index (by virtue of the name) to be a vector store of embeddings.
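If it is an embedding store, search reduces to nearest-neighbour lookup over vectors. A toy brute-force cosine-similarity sketch (the three-dimensional "embeddings" are made up, purely for illustration):

```rust
// Toy vector store: brute-force cosine similarity over stored embeddings.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// Return the label whose embedding is most similar to the query.
fn nearest<'a>(query: &[f32], items: &'a [(&'a str, Vec<f32>)]) -> &'a str {
    items
        .iter()
        .max_by(|a, b| {
            cosine(query, &a.1)
                .partial_cmp(&cosine(query, &b.1))
                .unwrap()
        })
        .map(|(label, _)| *label)
        .unwrap()
}

fn main() {
    // Made-up embeddings; a real index would use a model's output.
    let items = vec![
        ("dog", vec![0.9, 0.1, 0.0]),
        ("burger", vec![0.1, 0.9, 0.2]),
    ];
    println!("{}", nearest(&[0.8, 0.2, 0.1], &items)); // prints "dog"
}
```

A real semantic index adds an approximate-nearest-neighbour structure on top, but the query model is the same: embed the question, find the closest stored vectors.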


It's all in the "private cloud". "Trust me bro", it's like totally private, only us and a handful of governments can read it.


Yeah. It's going to be great. Selected experts are saying so.


The article says "the government has called the bill “fundamentally flawed”, but there may be sufficient House support to turn it into binding legislation"

Sounds like Trudeau is not the problem in this instance?


What are the chances this survives a court challenge?

Story: In a previous life I worked as a consultant in Sweden. My employer had a clause like this in the employment agreement. I quit and joined a company that technically was a client. My previous employer tried to make a legal thing out of it saying I owed them money, but backed down with a sternly worded letter from my lawyer.

My lawyer roughly told me that "they can't block me from earning a living without offering me the same amount they're trying to prevent me from earning". Sounds like a very fair law. Wonder if it's still like that? It was some time ago.


