
It's not that 5E is better, it's about the application. Either you need near-full-length runs of 6A for 10G, in which case it has to be carefully terminated (which is a bigger problem in a lot of installs than the cable itself), or you have short runs which will work just as well over just about anything that isn't completely screwed up.

Maybe another way to put it is this: a poor Cat 6 install (which, as this article points out, most are) isn't buying you anything over Cat 5E; you're spending money for nothing.



There are the 2.5 and 5 GbE standards (https://en.wikipedia.org/wiki/2.5GBASE-T_and_5GBASE-T), where 5 Gb is specified for Cat 6.

But yes, previously there wasn't much point to it. Cat 5e is good enough for 1GbE, and for 10GbE you need Cat 6A.
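For quick reference, a rough summary of the commonly quoted pairings (from memory, not quoted from the standards; marginal cases depend heavily on install quality):

    # Rough summary of the commonly quoted cable/speed pairings (from memory,
    # not quoted from the standards; marginal cases depend on install quality).
    MAX_RUN_METERS = {
        ("Cat 5e", "1GBASE-T"):    100,
        ("Cat 5e", "2.5GBASE-T"):  100,
        ("Cat 6",  "5GBASE-T"):    100,
        ("Cat 6",  "10GBASE-T"):    55,   # shorter runs only
        ("Cat 6A", "10GBASE-T"):   100,
    }

    for (cable, standard), meters in MAX_RUN_METERS.items():
        print(f"{standard:<11} over {cable:<6}: up to ~{meters} m")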


How long is "long" in this context? 5ft? 50ft?


10GBASE-T over Cat 5e works pretty well up to 100 ft, and 150 ft is doable if you route the run carefully, avoiding electrical noise sources (like a wall power outlet).

NBASE-T is, IMO, much more interesting for end users: USB adapters for it should be out by now, and I fully expect them to drop any week. 5 Gbit/s runs up to 300 ft on Cat 5E.

Also, while five gigabit is certainly doable for laptops with NVMe disks (the last three years, since Skylake), ten gigabit is not necessarily so if you have PCIe 3.0 x2 (or equivalent) bottlenecks here and there. Many Intel U-chip laptops run the on-chip bus between the PCH and the CPU at only a PCIe 3.0 x2 equivalent speed (i.e. 16 Gbit/s), and the U chips provide PCIe lanes only off the PCH. Most of the time your data stream needs to cross that On Package Interconnect, because system memory hangs off the CPU side of it and all the peripherals off the PCH side (northbridge vs. southbridge, to use older terms), so if you are burning 10 of those 16 Gbit/s just on Ethernet, not a lot is left. There's no such problem with 5 Gbit/s; you can even put twice that on a 2 GT/s OPI and still have bandwidth to spare.
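For what it's worth, a rough back-of-the-envelope sketch of that arithmetic (lane rates and encoding overhead approximated; none of these are measured numbers):

    # Back-of-the-envelope version of the OPI/DMI bottleneck argument.
    # PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so an
    # x2-equivalent link carries roughly 16 Gbit/s before protocol overhead.
    PCIE3_LANE_GBIT = 8 * 128 / 130          # ~7.88 Gbit/s usable per lane
    opi_gbit = 2 * PCIE3_LANE_GBIT           # x2 equivalent, ~15.75 Gbit/s

    for nic_gbit in (10, 5, 2.5):
        remaining = opi_gbit - nic_gbit
        print(f"{nic_gbit:>4} GbE NIC leaves ~{remaining:.1f} Gbit/s of uplink "
              f"for NVMe, USB and everything else behind the PCH")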

Another potential bandwidth choke point is the Thunderbolt controller, although that might have changed by now. There is no JHL 7240 or Titan Ridge LP; there was an Alpine Ridge LP, which used only a PCIe 3.0 x2 connection to the host, meaning that if you wanted a 5 Gbit/s USB-A port on your dock plus 10 Gbit/s Ethernet, you were already hitting the constraints. Once again, 5 Gbit/s eases this problem.

In about five years, when PCIe 4.0 and USB 3.2 are ubiquitous, well-established features in laptops, that's when we can begin to discuss whether ten gigabit makes sense at home or in a small office.


This is just more market segmentation. There isn't any reason that 2.5 or 5Gbit couldn't have been optional extensions to the 10Gbit standard other than to further segment the market.

Particularly as we are discussing cables, it should have just been a case of: oh, we see your 250 ft run isn't running without errors, let's drop it down to 5 Gbit; or, this link is running at 10% utilization, let's save some power and drop it back to 2.5 Gbit.
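Something like this hypothetical monitor is what I mean; the interface name, thresholds, and the assumption that the NIC will cleanly renegotiate on demand are all illustrative, not how any shipping standard behaves:

    #!/usr/bin/env python3
    """Hypothetical sketch of a 'drop the link rate when the cable is
    marginal' policy. Interface name and thresholds are made up; setting
    the speed shells out to ethtool, which the NIC may or may not honour."""
    import subprocess, time

    IFACE = "eth0"                       # illustrative interface name
    SPEEDS = [10000, 5000, 2500, 1000]   # Mbit/s, NBASE-T ladder

    def read_counter(name):
        with open(f"/sys/class/net/{IFACE}/statistics/{name}") as f:
            return int(f.read())

    def errors_per_minute():
        before = read_counter("rx_errors") + read_counter("rx_crc_errors")
        time.sleep(60)
        after = read_counter("rx_errors") + read_counter("rx_crc_errors")
        return after - before

    current = 0                          # index into SPEEDS
    while current < len(SPEEDS) - 1:
        if errors_per_minute() < 10:     # arbitrary "healthy" threshold
            break
        current += 1                     # link is marginal: step down
        subprocess.run(["ethtool", "-s", IFACE, "speed", str(SPEEDS[current]),
                        "duplex", "full", "autoneg", "off"])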


I think the reason is more historical than artificial market segmentation. Long ago, the development of Ethernet technologies targeted a factor-of-10 increase for each generation, i.e. 10M -> 100M -> 1G -> 10G. The need for 2.5 Gbit and 5 Gbit only came with 802.11ac Wave 2 devices, where a Wave 2 MIMO AP can saturate a multi-gig pipe after subtracting Wi-Fi overhead.
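Rough numbers behind that argument (the PHY rates and efficiency factor below are ballpark assumptions):

    # Rough arithmetic behind the Wave 2 argument. PHY rates and the MAC
    # efficiency factor are ballpark assumptions, not measured figures.
    radios_phy_mbit = {
        "5 GHz, 4x4 MIMO, 160 MHz": 3467,   # max VHT rate for that config
        "2.4 GHz, 4x4, 40 MHz":      800,
    }
    mac_efficiency = 0.65                   # rough share left after Wi-Fi overhead

    aggregate = sum(radios_phy_mbit.values()) * mac_efficiency
    print(f"~{aggregate / 1000:.1f} Gbit/s of aggregate goodput")
    # -> roughly 2.8 Gbit/s: more than a 1 GbE (or even 2.5 GbE) uplink can
    #    carry, but comfortably inside a 5 GbE pipe.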

Also, 2.5G/5G Ethernet actually did not start out as an IEEE specification, but was started as NBASE-T/MGBASE-T.

From what I remember, the 2.5G/5G/10G devices actually will negotiate to determine the maximum datarate that will work.


Many new chipsets (Aquantia comes to mind) are indeed multi-speed (10/5/2.5/1/0.1 Gbit), but I am not sure whether they are capable of such intelligent speed selection.


My experience is that you're probably safe at 50 ft and below, which is actually the maximum cable length tested in this article. For single switch->switch runs in a non-critical environment (say, my house), I would personally push it as far as I could get away with rather than replace a cable simply because it doesn't happen to be officially sanctioned.

What might have been more interesting is if they had plugged those failing cables into a few switches to see whether they were actually having problems anywhere. I'm betting they were mostly working fine. Although I have to wonder how many people are buying Cat 6/6A for "future proofing" rather than actually running 10G on it.


I posted this below, but I think it's also good here, because it's a fun anecdote. I have a 10Gig run on 200+' of Cat5E in production that works well with only a few hundred errors a year. Running fiber or Cat6A into that room would cost thousands. This is mostly a credit to the interface designers, who let us get away with stuff that we totally shouldn't.

The Cat5E run is nearly perfect, entirely away from power lines, and clean direct from one room to another.


If you're running direct port->port, you get a lot of additional margin, because the plug->jack interface is where a lot of the loss comes from, and the assumption in the spec is that a run looks like "port jack -> patch cable -> panel jack -> solid fixed run -> jack -> patch cable -> end port".

So if your cable run is "port -> long solid-conductor cable -> port", it's going to give you some extra feet.
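As a toy illustration of the connector-count argument (the per-connection loss value is made up, not a real dB budget):

    # Toy illustration only: count the mated connections in the spec's assumed
    # channel vs. a direct port-to-port run. The per-connection "loss" number
    # is made up for illustration and is NOT a real dB figure.
    LOSS_PER_CONNECTION = 1.0

    spec_channel = ["port jack", "panel jack", "far-end jack", "end port"]
    direct_run   = ["port jack", "end port"]

    for name, connections in (("spec channel", spec_channel),
                              ("direct port->port", direct_run)):
        print(f"{name}: {len(connections)} mated connections, "
              f"{len(connections) * LOSS_PER_CONNECTION:.0f} units of connector loss")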


Solving the random issues that result from going beyond the specs of the cabling is the worst. It works for months and then suddenly fails for no reason.


This hasn't been my experience with copper; it tends to work until someone breaks it. Optical SFPs, OTOH, were my bane for many years, as they frequently degrade, leaving a once perfectly functional link dropping a huge number of packets before anyone notices it has fallen to some 1980s level of speed. Port monitoring, when you're not a network provider or a big data center, seems to be a "hard" problem.
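A minimal sketch of the kind of check that would have caught this for me, assuming a Linux box and sysfs counters (the interface list, expected speeds, and threshold are illustrative):

    #!/usr/bin/env python3
    """Sketch of a minimal link-health check: compare the negotiated speed
    and error counters against expectations and complain loudly.
    Interface names, expected speeds and the threshold are illustrative."""
    from pathlib import Path

    EXPECTED_MBIT = {"eth0": 10000, "eth1": 1000}   # what each port should run at
    ERROR_THRESHOLD = 1000                          # arbitrary alert level

    for iface, expected in EXPECTED_MBIT.items():
        base = Path("/sys/class/net") / iface
        speed = int((base / "speed").read_text())   # -1 if the link is down
        errors = int((base / "statistics" / "rx_errors").read_text())
        if speed < expected:
            print(f"{iface}: negotiated {speed} Mbit/s, expected {expected}")
        if errors > ERROR_THRESHOLD:
            print(f"{iface}: {errors} rx errors, link may be degraded")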



