
I wonder if your keyboard latency dominates?

In older games like CS 1.6 I believe I could feel the difference between say 10 and 40 ms ping. Maybe PS/2 keyboards were faster. I got the feeling older computers were way faster to respond ... but I might remember wrong.



> I got the feeling older computers were way faster to respond ... but I might remember wrong

I don't know the specific devices you've been using over time, but in general the measurements in the following article back up your memory: https://danluu.com/input-lag/


PS/2 keyboards are faster than USB keyboards that negotiate low speed with the host, but USB keyboards that negotiate high speed are on par[1]

[1] https://www.youtube.com/watch?v=wdgULBpRoXk&t=1766s
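The gap comes down to how input reaches the host: PS/2 raises an interrupt, while USB is polled, adding on average half a poll interval of delay. A back-of-the-envelope sketch (the poll rates are illustrative examples, not claims about any specific keyboard):

```python
# Extra latency added by USB polling: a keypress lands at a uniformly
# random point within the poll interval, so it waits half an interval
# on average (and up to a full interval in the worst case).

def usb_poll_latency_ms(poll_rate_hz: float) -> tuple[float, float]:
    """Return (average, worst-case) added latency in milliseconds."""
    interval_ms = 1000.0 / poll_rate_hz
    return interval_ms / 2, interval_ms

for rate in (125, 1000, 8000):  # common poll rates; 8 kHz on some gaming gear
    avg, worst = usb_poll_latency_ms(rate)
    print(f"{rate:>5} Hz: avg +{avg:.3f} ms, worst +{worst:.3f} ms")
```

At 125 Hz that is an average of 4 ms before the host even sees the event; at 1000 Hz it drops to 0.5 ms, which is why fast polling closes the gap with PS/2's interrupt path.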


That's the theory. At "full bandwidth", with frequent polling, it should be possible to get excellent latency.

In practice, if you actually measure time from keypress to event, you'll find that modern "gaming" keyboards are actually laggy[0].

0. https://danluu.com/keyboard-latency/


That article is a joke.

>if you actually measure time from keypress to event

Nobody measures keyboard latency this way, because when you do, keyboards with less key travel (like the Apple Magic) end up with lower "latency": clearly an absurd result, and not at all what you are actually trying to measure. It does not line up with anyone's definition of keyboard latency.

This article has been posted in response in the other threads: https://www.rtings.com/keyboard/tests/latency


>Nobody measures keyboard latency this way (keypress travel)

You'll find that most keyboards in your link, despite their full-rate USB and high poll rates, actually have higher keypress-to-event latency than what PS/2 trivially achieved.

Which, by the way, is consistent with the findings in Dan Luu's article ("a joke"): pressing two keys at once might not be as accurate as a mechanical arm, but it does not completely compromise the results.

Note that Dan Luu documents his methodology, and even says he would prefer a mechanical-arm setup.

Which is why saying...

>That article is a joke.

...is undeservedly unkind.


I think it's more about consistency. If delay is consistent, you get used to it, but if it's all over the place it's more noticeable.

I prefer to cap FPS in games for the same reason, when my PC can't deliver a consistent frame rate.
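For what it's worth, a frame cap is conceptually just sleeping out the rest of each frame's budget; a minimal hypothetical sketch (assumes the rendering itself stays under budget):

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds per frame at the cap

def frame_limited_loop(render_one_frame, n_frames: int) -> None:
    """Cap the frame rate by sleeping out the remainder of each budget."""
    for _ in range(n_frames):
        start = time.perf_counter()
        render_one_frame()          # do the actual work for this frame
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:  # fast frame: pad it so pacing stays even
            time.sleep(FRAME_BUDGET - elapsed)
```

Fast frames get padded up to the budget, so frame-to-frame pacing only varies when a frame actually blows the budget.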


>I prefer to cap FPS in games for the same reason, if my PC can't deliver consistent frame rate.

This jitter is minimized, although not eliminated, by technologies such as Freesync, variants of which are now part of the HDMI and DisplayPort specs.

This is worth consideration when selecting a screen today.


I have Freesync; it eliminates tearing without vsync, but it does nothing about the consistency of the frame rate itself. What I'm saying is that I'd rather have a narrow P99, even if it's just above 60, than experience 50-144 fps variations.


>but it does nothing about consistency of framerate.

It does quite a bit. e.g.:

If your refresh rate is 60Hz, that's some 16.6ms per frame.

If the frame takes 16.7ms and thus isn't ready by the deadline, you get either tearing (very visible, annoying, experience-breaking) or the frame is held until the next refresh at 33.3ms, which is a lot of jitter.

Jitter between 16.6ms and 33.3ms, or tearing, are both much worse than the 16.7ms you could get with Freesync, which leaves just some 100µs of jitter.

Even at 120Hz, the barely-missed-deadline case has half the jitter, but that's still 8ms vs 50µs. And the tearing alternative is every bit as horrible.
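The arithmetic above can be put in a toy model: fixed refresh plus vsync rounds a frame's presentation time up to the next refresh boundary, while VRR scans out (nearly) as soon as the frame is done. Figures are illustrative; real VRR has a supported range and small overheads that this ignores:

```python
import math

REFRESH_HZ = 60
VSYNC_MS = 1000 / REFRESH_HZ  # ~16.667 ms per refresh cycle

def present_fixed(render_ms: float) -> float:
    """Fixed refresh + vsync: presentation waits for the next boundary."""
    return math.ceil(render_ms / VSYNC_MS) * VSYNC_MS

def present_vrr(render_ms: float) -> float:
    """VRR (Freesync/G-Sync): the panel refreshes when the frame is ready."""
    return render_ms

for render in (16.6, 16.7, 20.0):
    print(f"render {render:5.1f} ms -> "
          f"vsync {present_fixed(render):6.2f} ms, "
          f"VRR {present_vrr(render):6.2f} ms")
```

A 16.7ms frame misses the 60Hz boundary by 0.1ms and gets held to 33.3ms under vsync; under VRR it shows at 16.7ms.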


Yes, it helps there, but I'm talking about frame rate constantly jumping around. I'm not talking about 16.6 vs 16.7 or even 16.9, I'm talking about 16.6 vs 40/60/90 and so on. VRR isn't going to fix it at all:

https://www.digitaltrends.com/wp-content/uploads/2022/10/got...

https://www.digitaltrends.com/wp-content/uploads/2022/10/got...

^^ I don't want this.

Yeah, the average is 60 and VRR will prevent tearing and jitter, but won't magically solve the issue I'm referring to. What I want is this:

https://www.digitaltrends.com/wp-content/uploads/2022/10/fra...

I want the same consistency for terminal input delay. The suckless terminal (st), for example, has higher latency, but its P99 is much closer to its average than Alacritty's or urxvt's. While on the subject of terminals: st stays mostly the same under additional load, while all the GPU-based terminals start slipping when I'm doing something that involves the GPU (a Discord voice call, screen sharing/recording, Spotify).
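The difference between a good average and a good tail can be sketched with two hypothetical latency traces sharing roughly the same mean (numbers invented purely for illustration):

```python
import statistics

def summarize(latencies_ms: list[float]) -> tuple[float, float]:
    """Return (mean, P99) of a list of latency samples in milliseconds."""
    mean = statistics.fmean(latencies_ms)
    p99 = statistics.quantiles(latencies_ms, n=100)[98]  # 99th percentile
    return mean, p99

# Two invented traces with nearly the same mean (~16.7-16.8 ms):
steady = [16.7] * 99 + [17.0]        # tight distribution: feels smooth
spiky  = [12.0] * 90 + [60.0] * 10   # similar mean, ugly tail: feels bad
print("steady: mean %.1f ms, P99 %.1f ms" % summarize(steady))
print("spiky : mean %.1f ms, P99 %.1f ms" % summarize(spiky))
```

Both traces average out to about the same number, but the spiky one has a P99 several times its mean, which is exactly the "all over the place" feel being described.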


>^^ I don't want this.

Yikes. That is bad. Maybe time to upgrade on the CPU side?

I'm using a 5800x3d and the cache sure helps with 0.1% lows.


You remember... right. The Apple IIe blows away modern computers (in terms of latency).

https://danluu.com/input-lag/


It's pointless to compare those toy computers against a cherry-picked modern desktop, which is the equivalent of an old supercomputer. Indeed, if you look at the numbers for the older workstations (SGI Indy, NeXTcube, Symbolics 3620), they are not that different from, and even worse than, the newer machines. 30ms is not remarkable these days; with a proper gaming keyboard, gaming monitor, config, etc. you can get 5ms or less for the whole chain.


old supercomputers sucked at latency too, and the modern desktop isn't cherry-picked

it's common for computers with less latency to also have less throughput; an arduino, the avr kind, can reliably respond to inputs in less than 200 nanoseconds

but these 'toy' computers are capable of running a word processor or spreadsheet, writing email, browsing the web (without js), compiling c, running an ide, etc. the zx spectrum even had a first-person 3-d shoot-em-up called elite. so i don't think it's pointless


>old supercomputers also sucked at latency too

So why would you expect it to be better if you shrunk them down to micro size? That is the whole point of my comment, that the observed latency is better explained as a function of hardware/software complexity and not "year of production". If you understand this, it should not surprise you at all that most contemporary desktops have the latency characteristics they do since we have seen it before in the old supers/workstations, their closest equivalents in terms of hw/sw complexity and capability. Of course it could be better and some are but you won't find them in that article.

>but these 'toy' computers are capable of running a word processor or spreadsheet, writing email, browsing the web (without js), compiling c, running an ide, etc. the zx spectrum even had a first-person 3-d shoot-em-up called elite. so i don't think it's pointless

Can it perform those tasks all at the same time, like we expect now, or like a mainframe/workstation user would have expected then? When you look at the entire landscape of computing hardware and software, it is difficult not to see the early micros in that light. An extreme example would be the Altair 8800; I don't see how you could describe that computer as anything other than a plaything for enthusiasts.

I made the analogy between modern desktops and old supers, which you seem to agree with; could you say with a straight face that an early micro is the rough equivalent of a 50s mainframe? It should be, but the analogy is harder to justify: 36-bit vs 8-bit words, FORTRAN vs BASIC, an FPU (on some) vs no FPU; at every level of the analysis there are compromises. It should be clear that the micro represents an extremely compromised version of general computing that would be alien not only to us but to the serious computer user of the 70s/80s, and yet its characteristics should be held up as a benchmark for computing devices in general? Outside the context of "the home computing experience" they make for a poor comparison.

Unfortunately there is great ignorance in this area due to the disproportionate focus on micros when discussing historical computing. Videos and articles on the Apple II series, awash with insufferable amounts of sentimentality, are a dime a dozen, but you will be lucky to find anything on the IBM 704, a much more interesting and equally significant machine.


Unless you can produce actual measurements, following the same methodology, for these mythical "non-cherry-picked" modern machines, I am going to call bullshit here.



