I was a "Citrix consultant" for about two decades.
I'd walk into customer sites for the first time, meet people, and within minutes they would start ranting about how bad Citrix is.
I suspect only dentists get this kind of feedback from customers before a procedure.
Having said that, 99% of the time the problem boils down to this:
The guy (and it is a guy) signing the cheques either doesn't use Citrix OR uses it from the head office with the 10 Gbps link.
The poor schmuck in the backwater rural branch office on a 512 Kbps link shared by two dozen staff gets no say in anything, especially not the WAN link capacity.
I've seen large distributed orgs that were 100% Citrix "upgrade" from 2 Mbps WAN links to 4 Mbps to "alleviate network congestion" in an era where 100 Mbps fibre-to-the-home is standard. With 2 Mbps you can watch PDF documents slooooowly draw across the screen, top-to-bottom, line by line. Reminds me of the 2400 baud days in the early 90s downloading the first digital porn, eagerly watching the pixels filling the screen.
Don't blame Citrix. Blame the bastard in the head office that doesn't give a f%@$ about anyone not him.
I agree in general but I do blame Citrix for some foot-guns. The Citrix admins at my employer have never figured out how to configure it to get keyboard latency below ~120ms (on a gigabit LAN), and the silly health meter always reports the connection as excellent. This is mostly on them - in classic enterprise IT thinking, if it’s not down your job is done - but I’m somewhat disappointed that it’s even possible to configure it to have latency twice that of a modem.
This is just flat out wrong. Any seasoned gamer can feel a difference of a few tens of milliseconds.
300ms would render most video games unplayable.
I see this claim a lot and it's making me want to build a website that gives you some common interactions (moving a mouse cursor, pressing a button) with adjustable latency so people can see just how big of an impact seemingly small amounts of lag have on how responsive something feels.
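A back-of-the-envelope way to see why seemingly small amounts of lag matter: at a steady typing speed, echo latency translates directly into characters that are "in flight" (typed but not yet on screen). A rough Python sketch of that calculation; the function name and all numbers here are illustrative, not from any real benchmark:

```python
# Hypothetical core of such a demo: how many characters are "typed but
# not yet echoed" at a given typing speed and echo latency? This backlog
# is what makes high-latency typing feel sluggish.

def chars_in_flight(wpm: float, latency_ms: float) -> float:
    """Average number of unechoed characters while typing steadily.

    Assumes the usual 5-characters-per-word convention for WPM.
    """
    chars_per_ms = wpm * 5 / 60_000  # words/min -> chars/ms
    return chars_per_ms * latency_ms

# A 60 WPM typist presses a key roughly every 200 ms:
print(chars_in_flight(60, 30))   # fast local terminal: ~0.15 chars behind
print(chars_in_flight(60, 120))  # Citrix-like latency: ~0.6 chars behind
print(chars_in_flight(60, 300))  # ~1.5 chars behind, you're typing blind
```

At 30 ms the backlog rounds to zero; at 300 ms you're perpetually a word fragment behind your own fingers, which matches the "BBS mode" experience described further down the thread.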
After using xterm for years, I don't like gnome-terminal anymore because its lag while typing has become noticeable. It's right around 30ms on this site, and xterm around 10-20ms.
Then have an estimation challenge mode, where it picks a random latency and you have to guess within 50ms what it is. Seriously though, that sounds both fun and useful.
Back when I played League of Legends, 300ms latency meant "your ISP is having problems today and you cannot play".
Anything above 70ms is considered very bad.
...for Massive Multiplayer Online Gaming (MMOG), real-time is a requirement.
As online gaming matures, players flock to games with more immersive and lifelike experiences. To satisfy this demand, developers now need to produce games with very realistic environments that have very strict data stream latency requirements:
above 300 ms: game is unplayable
above 150 ms: game play degraded
above 100 ms: player performance affected
below 50 ms: target performance
below 13 ms: lower detectable limit
"
But this is real-time gaming. Typing should be less demanding, I'd think.
Not really, unless you're the kind of guy working in Cobol who is used to typing with latency.
I've seen Cobol developers just ignoring the latency, keeping typing because they know what they've typed and it doesn't matter that it's slow to show up on screen.
Working with latency like that also requires the system to be predictable. If you're expecting autocomplete but not confident in what it'll show, you've got to wait; if you're not sure whether input will be dropped when you type ahead too much, you've got to wait. If you need to click on things, especially if the targets change, there's lots of waiting.
If the system works well, yeah, you can type all the stuff, then wait for it to show up and confirm. 'BBS mode' as someone mentioned.
> I've seen Cobol developers just ignoring the latency, keeping typing because they know what they've typed and it doesn't matter that it's slow to show up on screen.
I used to do that (not in COBOL), typing into a text editor in a terminal over a 2400-baud modem. Like the other commenter said, you get used to it, but it requires a certain predictability in your environment that you don't get in modern GUIs.
Generally I think of it in terms of number of frames @ 60 fps.
Anything below one frame (16.66ms) and whether or not any sort of real feedback is even received (let alone interpreted by the brain) becomes a probability density function. Each additional frame after that adds more and more kinesthetic friction, until you become completely divorced from the feedback around 15-20 frames.
That’s off by about an order of magnitude: highly skilled humans can see and react in less than 120ms. One thing which complicates discussion here is that there are several closely related things. How quickly you can see, understand, and react is slower than just seeing, which is slower than seeing a change in an ongoing trend (that’s why you notice stutter more than isolated motion). There are also differences based on the type of change (we see motion, contrast, orientation, and color at different latencies due to how signals are processed starting in the cortex and progressing through V1, V2, V3, V4, etc.) and on how focused you are on the action (e.g. watching to see a bird move is different from seeing the effect of something you’re directly controlling). Audio is generally lower latency than visual, too.
All of this means that the old figures are not useful as a rule of thumb unless your task is exactly what they studied. This paper notes how unhelpful that is, with ranges from 2-100ms! They found thresholds around 25ms for some tasks but as low as 6ms for others.
Keyboard latency is at the harder end of this spectrum: the users are focused, expecting a strong (high-contrast, new-signal) change in direct response to their action, and everything is highly trained to the point of being reflex.
When I’m typing text outside of games, I’m not consciously waiting after each keypress but rather expecting things like text to appear or the cursor to move as I type. A while back I tested this: VS Code’s ~15ms key-to-character latency was noticeably smoother compared to 80+ms (Atom, Sublime), and the Citrix system I tested at 120-150ms (versus ~15ms for local Notepad) was enough slower that it forced a different way of thinking about it (for me, that was “like a BBS” because I grew up in the 80s).
n.b. I’m not an expert in this but worked in a neuroscience lab for years supporting researchers who studied the visual system (including this specific issue) so I’m very confident that the overall message is “it’s complicated” even if I’m misremembering some of the details.
The parent comment may be talking only about the network or Citrix components in the critical path. You also have to wait for keyboard input (often 10s to many 10s of ms) and for double-buffering or composition (you might get updates and render during frame T, flip buffers to reach the OS compositor for frame T+1, and have the compositor take another frame to render that and send it to the screen for frame T+2; though this is a bad case for a compositor, you may be paying the double-buffering or flip latency twice). And it can take a while for modern LCD screens to process the inputs (changes towards the bottom of the screen take about a frame longer to display) and to physically switch the pixels.
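To make the point concrete, here's a rough tally in Python of the stages described above. Every figure is an illustrative assumption, not a measurement, and real pipelines vary a lot:

```python
# Rough latency budget for a local (no-Citrix) input-to-photon pipeline.
# All numbers below are illustrative assumptions, not measurements.

FRAME_MS = 1000 / 60  # one frame at 60 Hz, about 16.7 ms

stages_ms = {
    "keyboard scan + USB polling": 20,        # "often 10s of ms"
    "app renders during frame T": FRAME_MS,
    "compositor picks it up (frame T+1)": FRAME_MS,
    "scanout to screen (frame T+2)": FRAME_MS,
    "LCD input processing + pixel switching": 20,
}

total = sum(stages_ms.values())
print(f"local pipeline: ~{total:.0f} ms")               # ~90 ms
print(f"plus ~120 ms of Citrix: ~{total + 120:.0f} ms")
```

Even with these made-up but plausible numbers, the local pipeline alone lands near 90 ms before any remoting layer is added, which is why an extra 120 ms of Citrix latency pushes the whole experience over the perceptual cliff.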
120ms end-to-end without Citrix would be quite achievable with many modern systems (older systems, and programs written for them, were often not powerful enough to do some of the things that add latency to modern systems). So if Citrix adds 120ms we already get up to your ‘not immediate’ number.
But I think you’re also wrong in that, e.g., typing latency can be noticeable even if you don’t observe a pause between pressing a key and the character appearing. If I use google docs[1], for example, I feel like I am having to move my fingers through honey to type: the whole experience just feels sluggish.
[1] this is on a desktop. On the iPad app I had multiple-second key press-to-display latency when adding a suggestion in the middle of a medium-sized doc.
Dividing those figures by 10 might be closer to accurate. 120ms is quite noticeable. I know because I need to adjust latency out of Bluetooth headphones for recording. Recording with those latencies sounds like a disaster and is very much noticeable even with sound, let alone vision.
While my post was wrong, in fairness the context was specifically about keyboards, nothing to do with audio. I suppose I should have been explicit about that.
What I meant to say is that, in my experience, visual and tactile things like typing have even stricter timing tolerances. If an audio delay is noticeable at a given number of ms, a visual delay of the same size will be at least as noticeable, if not more so.
We aren't talking about website loading speeds. This is about how quickly your mouse cursor moves in response to mouse movements and that latency needs to be 16ms or less.
Personally I can get latency down to 200ms over the internet into a remote datacenter with WebRTC. The challenge in practice, however, is that running without a GPU will eventually starve the CPU, because it has to do intensive things like render 1080p video at 60fps, which isn't feasible on a CPU-only machine. This CPU load then slows down the video encoder and the overall responsiveness (no, responsiveness doesn't mean a mobile layout here) of the remote desktop.
I recently had a bit of a rant about security people and how 70% of the truly dumb decisions in our industry can be attributed to them.
Your description is exactly why. Security people wedge themselves into the halls of power and then start making decisions that don't actually negatively affect them all that much.
I've literally seen a CISO that insisted everyone worked in a way they themselves did not.
Sadly, the job of a CISO typically isn't "make the most pragmatic decisions possible to keep our infrastructure secure and running smoothly". In many industries, it's more like "join as many compliance programs as possible to expand the ability to capture revenue from regulated markets".
The CISO didn't make the decision to enforce password rotation; the compliance programs your sales team asked for did.
I'm the IT guy for a new non-profit. We aren't separated yet from the company that created us, but we're in the process of separating. I get to decide all this fun stuff.
I had a very brief talk with the IT team for the larger parent company when I started and explained this stupid password rotation thing, as I came from a security background, they wanted nothing of it. Set in their ways.
For the new non-profit that I'm helping spearhead, I'm not sure I'll get away from the password rotation entirely, but I can certainly set it to something more reasonable, like every 365 days, rather than every 60 days or whatever travesty most are dealing with. I'm pretty pleased about this.
> Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
This is a really useful thing to keep in mind because even if you aren't directly bound by a requirement to follow the NIST standards, being able to point your policy people at that is handy if you can shift the conversation to “bring our policy in line with NIST” where there's a question about whether they'll later look bad for _not_ having done so. Typically these conversations are driven by risk aversion and things like federal standards help balance that perspective.
Aside from password rotation being a very questionable practice, it actually can cause productivity loss. In a big organisation like mine it can take up to 48 hours for a password change to synchronise across all the internal services. There's also the issue where some endpoint software still uses the old password behind the scenes and fails to log in too many times - causing your account to be locked. I guess you can see my frustration coming through.
I had the joy of dealing with some endpoint software like this in an organization that had mandated password changes every 30 days. Very predictably, people set recurring "change your password" reminders for the 1st of the month and the organization lost an entire day of productivity each month as they locked themselves out of their accounts en masse. So the beginning of the month was always a panicked, all-hands-on-deck day for the help desk as people were waiting on hold for hours to get their account unlocked.
Our penetration testers suggested we add password rotation, and I had to quote them the latest NIST guidelines which state "Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically)."
If they don't know better, it's not surprising other companies don't either.
> To your point, password rotation is considered an insecure practice because it causes people to append 1, 2, 3, etc to the same password.
A good way to discourage this would be heuristics that make sure the new password isn't too similar to the old one, but doing that without having plaintext in there somewhere is pretty difficult.
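For illustration, one hedged sketch of such a heuristic in Python, run only at password-change time, when the user has just supplied both the old and the new password in plaintext (so nothing reversible needs to be stored, though it also can't check against anything older than the previous password). The `too_similar` name and the 0.7 threshold are made up for this example, not a recommendation:

```python
from difflib import SequenceMatcher

# At password-change time both old and new passwords are briefly
# available in plaintext, so a similarity check can happen then
# without storing anything reversible.

def too_similar(old: str, new: str, threshold: float = 0.7) -> bool:
    """Reject new passwords that are near-copies of the old one."""
    # Compare case-insensitively so Hunter2 -> hunter3 is still caught.
    ratio = SequenceMatcher(None, old.lower(), new.lower()).ratio()
    return ratio >= threshold

assert too_similar("hunter2", "hunter3")          # the classic increment
assert too_similar("Summer2023!", "Summer2024!")
assert not too_similar("hunter2", "correct horse battery staple")
```

`SequenceMatcher.ratio()` returns a 0-1 similarity score, so this catches the "append 1, 2, 3" pattern while still allowing genuinely new passwords through; a real deployment would tune the threshold and probably combine it with other checks.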
Another solution would be mandating that all passwords be randomly generated, but enforcing that would be difficult: anyone who isn't used to keeping 99% of their account credentials in KeePass databases with randomly generated passwords would probably find it too cumbersome to remain productive.
This seems like a people problem that makes being secure essentially impossible, due to how people use passwords (e.g. "I just use one password across X sites because remembering multiple ones is too difficult" or "I just add a number at the end of my current password").
And others also mentioned the productivity loss, for when people are slowed down by the need to change their passwords. You might easily rotate Let's Encrypt certificates thanks to automation but when it comes to people, things aren't so easy.
At that point, you might just stick with whatever passwords you have, do some dictionary checks in the future, maybe have infrequent password rotation and otherwise stack on more mechanisms, like TOTP through whatever application the user has available, or another means of 2FA, because relying just on passwords isn't feasible.
> causes people to append 1, 2, 3, etc to the same password
It’s either that or they write them down. Because people are going to forget a password that changes every month, especially a password that has to comply with the complexity rules.
Isn't that just a characteristic of how they're evaluated? Any security error is the CISO's fault, "heads must roll", etc
Given that, they're likely to give you what you are asking from them: a brick with no functionality which will do nothing. You can't do anything with Brick, but Brick has zero outstanding CVEs
It seems to me that the reason why so many bad enterprise solutions are bought is because the buyer is not the user. It’s such a funny thing to me that people would spend tons of money without firsthand experience or at least someone they trust using it.
I've never used Citrix but I remember when I had a T-1 (1.544 Mbps for the younglings) and I left a Remote Desktop session open on a laptop. Some days later I went back to the laptop and used it for an hour before I realized I was in a RDP session to a machine in another state. I wonder what Citrix screwed up to make their UX so different. Of course a decent T-1 back then probably had better latency than today's consumer HFC connection.
Yeah, the T1 easily had enough bandwidth to smoothly send the 800x600 16-bit color desktop you were probably running at the time (guessing the timeframe based on usage of a T1). Frame-to-frame diffs were probably much smaller as well, with fewer shadows and graphical effects than modern Windows or Linux DEs have.
I don’t doubt Citrix has gotten worse as well but the job it had to do back then was much easier.
> Don't blame Citrix. Blame the bastard in the head office that doesn't give a f%@$ about anyone not him.
> The guy (and it is a guy) signing the cheques either doesn't use Citrix OR uses it from the head office with the 10 Gbps link.
If you were sure about this, as the consultant you could have put that sentence, or this entire comment, on the first page of your PowerPoint/PDF (to make sure other HNers are happy!).
This _very_ much depended on where you are. I had symmetric 10Mbps at home in 1998, but when we moved to New Haven in 2008 Verizon couldn't deliver more than ISDN / T1 to large chunks of the city (we literally could have used a WiFi antenna to hit their regional headquarters, too). There's so much deferred maintenance around the world.
The last time I saw a place migrate its remote offices off a less than 10Mb/s network was around 2015. That same place replaced its mainframe in 2011 because of an enormous price hike.