If I care about what it looks like and want to precisely arrange my cables, I always lace. It's time-consuming, finger-aching work, though, and I'm not as good at it as I was when I was younger[0].
In my late teens, I worked at a telecom in desktop support and ended up on a multi-day project installing hardware at a switch site in Cleveland. They'd taken on a new director who was furious at the condition of the cabling/cable management[1]. I remember a piece of paper with a sliced black zip-tie taped to it and the words "FIRED" scrawled in big letters. Teams of two men were working throughout the site.
They were ripping out/replacing a huge number of runs. During the week, a group was working on a bundle of what looked like a thousand Ethernet cables coming from the ceiling, all labeled, starting out as a perfectly laced 3x2 ft rectangle through the channel in the ceiling and then falling into a spaghetti pile covering most of the floor of the large room.
By the time I left, that channel was nearly done. The wires broke off at 40 or so points into different bundles of varying size, most of them inches thick/wide, and wires joined the bundle at various points along the metal channel, so the bundle that dropped to the floor below was only a little smaller than the one coming in. Every curve of every bundle was visibly the same radius.
After seeing what could be done, I decided to learn how to do it. :)
[0] I'm not that old.
[1] Some of the issues were straight-up code violations, but most were issues of good practice.
While I too love to see good cable layout, whenever you see a “thousand” Ethernet cables in one run, a foundational mistake has been made.
Instead of 1000x 1 Gbps links, the proper way is to consolidate those into 10x 100 Gbps links or whatever works out to be the cheapest given the length of the run, the cost of the switch ports, etc…
Any argument to the contrary has a solution. E.g.: even multi-tenant links with overlapping address spaces can be handled with nested VLANs (VXLANs). This is essentially how cloud providers and large metro network providers handle multiple customers.
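To make the consolidation arithmetic concrete, here's a back-of-the-envelope sketch in Python. The per-port and per-meter prices are made-up placeholders, not figures from anyone's price list; the real comparison depends on optics, cable type, port costs, and the length of the run.

```python
# Rough cost comparison of many small links vs. a few consolidated big links.
# All prices below are hypothetical placeholders for illustration only.

def run_cost(links: int, per_port_usd: float, per_meter_usd: float, length_m: float) -> float:
    """Total cost of a run: two switch ports per link plus the cable itself."""
    return links * (2 * per_port_usd + per_meter_usd * length_m)

length_m = 50  # hypothetical length of the run, in meters

# 1000 x 1 Gbps copper links vs. 10 x 100 Gbps fiber links (same 1 Tbps aggregate)
copper = run_cost(links=1000, per_port_usd=10,  per_meter_usd=0.30, length_m=length_m)
fiber  = run_cost(links=10,   per_port_usd=500, per_meter_usd=0.50, length_m=length_m)

print(f"1000 x 1G : ${copper:,.0f}")
print(f"  10 x 100G: ${fiber:,.0f}")
```

Plug in your own port and cable prices; the point is simply that the crossover is a calculation, not a matter of taste.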
The last time I saw cabling like this was at a government department with “network techs” that didn’t get these basic concepts. Instead of having “top of rack switches”, they had one giant pair of switches in the corner of the data centre with bundles of cables coming out the thickness of my torso! The same department had a long building with switches in one corner and bundles of cables in the ceiling going to desktops over 100m away.
In all cases this was done neatly.
They loved cable porn, but didn’t understand networking!
I like the old Softlayer / IBM Cloud cabling scheme: dual 10Gbps to TOR (red) and BOR (blue) and the 1Gbps green cabling into the IPMI. Here's a pic: https://newsroom.ibm.com/IBM-cloud?item=32774
Residential fiber networks can get messy very quickly, especially if it's all based on AON instead of GPON and requires a point-to-point connection between a customer's edge location and a site end location. Add (older) coaxial networks into the mix as well, and you end up with thousands of bundles of fiber for even small towns.
Also, keeping all that organised requires rigorous discipline and maintenance to keep it all tidy and neat.
You are correct and I should have clarified a few points.
Especially given the time (my memory is hazy but I want to say this would have been just before Y2K) -- almost all of the business and management network was ... token-ring[0]. I don't think these cables were even for that network (we usually ran those and they didn't come with much in the way of "rules"), but were for the SONET/voice network (memory, again).
[0] When I came on in ~1998 most of the (PS/2) machines were connected via token ring. A lot of the work I was doing at the time involved replacing PCs with newer models and getting them plugged into one of the managed 100 Mbps switches. :)
> If I care about what it looks like and want to precisely arrange my cables, I always lace. It's time-consuming, finger-aching work, though, and I'm not as good at it as I was when I was younger[0].
I just use zip-ties. What is the advantage of lacing?
In the space sector the mass savings over a whole spacecraft can really make a difference, and lacing doesn't introduce any hard edges that can lead to fraying or wear from vibrations.
Additionally, lacing cord typically can handle much higher temperatures than space-rated zip ties.
I'm not a pro at this at all, I've only done amateur small-scale cabling jobs.
If I understand correctly, the arguments against zip ties are that 1) it's easy to overtighten them, which can lead to damage to network cables 2) they're obviously irreversible so less flexibility in the future?
Am I overthinking this? There must be standard cable management practices (that are either pro/against zip ties)... can anyone recommend a guide?
Use cable ties to hold a group of cables together or to fasten cables to other components. Choose Velcro-based cable ties versus zip ties, as there is a tendency for users to over-tighten zip ties. Over-tightening can crush the cables and impact performance.
If you're using a lot of zip ties, you really need a zip tie gun. Not only do you get a consistent tension, but you also trim the excess at the same time.
Love those things, a lot of fun to use and always get perfect results. They have some very nice ones with spinning heads to reach ties in difficult locations too.
The person who comes behind you (or you) and cuts the ends of the zip ties off has never stuck their hands back in there, I bet. In tight spaces, the cut-off zip ties become nice little knives, and will cut your hands when the zip ties are large. We did this on industrial equipment panels, and had to stop due to techs coming in after us and cutting themselves up in tight spaces.
It's a "pros/cons" thing to me. The big disadvantage is that it's far more time-consuming.
The advantages are mostly aesthetic. For example, I have a 7.2.2 home theater setup with an AVR ... basically 14 cables going to the back of a device that is located in a place in my family room which makes it hard to obscure the cables.
I purchased white speaker cables to match my white wax lace and laced the speaker cables at ~1.5" intervals. The lace is coated in wax, so it will coerce the cables into the right position if you wrap it around an extra 2-3 times when needed. It's more invisible than zip ties -- there's no "nub". If you're lacing the way I was taught, there's no knot. Each cable in the bundle is individually tied, as well as the bundle, which means "it stays where you tied it" far better than zip ties do. This also means you have to do almost all lacing "in place" because you can't fix mistakes after the fact.
The result looks like one larger cable, made from 9 others in a horizontal row behind my receiver which makes a sharp turn directly into a port in the wall. I can remove my receiver and the 9 cables remain roughly in the same position because the lace holds.
I used black lace to cover the black power cables (separated from the speaker/signal wires).
It says the following, but I still don't get it. How does a hook-and-loop tie add that much more obstruction that it could be an issue?
"This old cable management technique, taught to generations of linemen, is still used in some modern applications since it does not create obstructions along the length of the cable."
If you try and pull a bundle of wires out (like through an opening in a floor tile), Velcro loops will snag on anything with an edge to it. Zip-ties will definitely snag on the head and sometimes on the body. Lacing cord is continuous and flat, so it won't snag.
I'm not saying Velcro and zip-ties don't have their place. If I'm securing a bundle to a wire support on a rack, I'll use Velcro because I know that sooner or later it'll get touched again.
When I was a grad student in Physics I worked at an accelerator lab where we had an old NASA high voltage engineering unit. The control panel was completely laced, every wire had a marker on it every foot or so, every end was perfectly terminated. Being an electronics tech, it was a dream to work on this thing... Wish I had pictures. But as others have noted, it takes a great deal of time to do that right when you are installing.
In the author's defense, they mentioned the NASA standard, and that guide, while legendary, isn't a "lacing" guide. Of the 50+ images, only 2 show lacing (acceptable vs. unacceptable); the rest is spot tying, zip tying, or other harnessing. Always nice to see the link, though.
> Some organizations have in-house standards to which cable lacing must conform, for example NASA specifies its cable lacing techniques in chapter 9 of NASA-STD-8739.4.
See also this guide, reproduced from a book "Workmanship and Design Practices for Electronic Equipment", published by the direction of the chief of the Bureau of Naval Weapons, December, 1962. Included are some illustrations of jigs for creating laced wiring harnesses.
It's also important during the building process to keep the rats' nest of wiring to a minimum; it can get fairly messy inside spacecraft. Then it's also much easier to ensure cables are kept apart from other signals that could interfere with them.
I was in telecom and IT for 35+ years (Bell Canada, NorTel) and worked with on-prem and CO-located step-by-step and crossbar switches at the tail end of their prevalence. It was a joy to be able to take the time to properly lace cable harnesses which were abundant in these installations. There were also PDP-11's, teletypes and such involved in network/switch monitoring which had their share of laced wiring harnesses (and wire wrap!). Also Datapac/Dataroute equipment cabinets and back-planes. It seems to me that the job title of "craftsman" fit in those days as there was often an almost artisan quality and craftsmanship to the work involved. I'm glad to have had the opportunity to experience it. Compared to my later datacenter installation experience (compute and power) it was night and day different.
In my experience, data center cabling is either a huge mess (random cables just hanging everywhere) or is tied down so much you can't do any repairs/replacements (velcro, tie wraps, etc). There is a fine line between cleanliness and usability.
For the folks that work with me, I explain the importance of air flow, proper cable length, and component serviceability. You definitely don't want "piano wire" stretched across the rack that prevents someone from replacing a PSU or NIC without affecting other gear in the rack.
And the latter (tied down) quite often devolve to the former over time exactly because it's so much effort to maintain that suddenly an urgent repair happens and ties get cut and cables torn out and something new just thrown in, and the effort of re-doing the original setup means it gets deferred.
I stopped tying cables together years ago for that reason. The more the perceived effort in getting what needs to be done out of the way + fixing the cabling afterwards, the less chance it gets done properly. Instead I'd aim to set an example of stretching the cables out of the way, labelling, and getting the length roughly right -- simple enough to work with in a rush and trivial to "fix up" afterwards.
In large environments where crises are more likely to involve routing traffic away from entire racks or data centres, you can afford to be a stickler for these things. Most places are not that large, and being too strict creates more problems than it solves.
A particularly frustrating variant of the 'piano wire' issue:
A redundant pair of devices, except cables have been run in such a way that if one device failed, you could not remove it from the rack without removing cables from the remaining device.
this is the reason why (ideally) you should have three racks in a redundant setup:
2 for redundant component placement (with redundant power obviously).
1 for interconnections between the racks, with prepared cabling and MPO panels installed to make interconnecting devices easy, without having to change anything in the equipment racks when the interconnections change, or vice versa.
Makes replacing hardware a breeze, and prevents a lot of issues with cable mess inside equipment racks because the cables to the MPO panels are very short.
one of the key things to do in a 'serious' datacenter is separate the fiber from the power.
this is why serious facilities have a dedicated channel of yellow plastic overhead fiber tray which contains ONLY 1.6mm or 2.0mm jacketed 9/125 singlemode fiber, with nice smooth corners and spigots on it coming down above racks.
and then the AC power is in conduit and long rectangular metal enclosures.
and the -48VDC cabling on steel ladder rack, often with waxed string lacing if it was done by people who are being serious about it.
on a per rack cabinet level it's also really important to pick one vertical side for your power cables, and data cables vertically up the OTHER side, and stick to it consistently.
what is also very common is having power run across different power rails under the floor of the DC (different feeds, obviously), which is then terminated upwards to a PDU in the customer's rack.
Fiber comes in from the top, with a MPO or patchpanel at the top of rack for customer termination.
I don't know of anyone building new raised floor datacenters. Cabinets and equipment on concrete slab and everything power and communications related overhead.
Are there any good resources for learning more about this? I have a lot of legacy AV equipment mixed with modern digital components and networking, and despite best efforts, I typically fall into either a rats' nest or something that's too constrained to repair/modify without undoing everything.
Not sure of any specific resources, just years of running cables :-)
Some of my lessons learned:
* Whatever you do, be very mindful of future events (repairs, replacements, etc). Spending an extra 5mins upfront when you are in a hurry will save you lots of time later.
* Label each end of the cable if possible. It is a PIA in the beginning but will be a real life-saver when you need to do maintenance
* Replace old (fat) cables with new, thinner cables of the proper length when possible. I typically add 1ft extra to any cable for a little slack.
* Think of airflow. Lots of cables bundled in the back of a rack can seriously cause airflow issues. Try to run all cables along the sides of the rack.
* NEVER run cables across the back of gear - especially for multi-node chassis that require rear access for maintenance
* Velcro is your friend.
* Stick with a cable color standard if possible. For me, I use white for IPMI, blue for data, red for external access
* Keep an updated diagram in the rack for the next guy/gal who needs to do work
* Always remove old/un-needed cables after a maintenance job.
Abandoned-in-place wiring is the bane of every tech's and installer's existence. It's impossible to have worked on legacy equipment and not spent some time reterminating abandoned cable :/ Not to mention having to weed through it when tracing problems.
We're in an ongoing battle with abandoned-in-place wiring in our current building. There's data wiring back to ARCnet (93 ohm coax), and power wiring going back to knob and tube days.
I understand that some landlords have it written into their leases that all network wiring will be removed when the lease ends because of this very reason.
One building we were in had Twinax above the drop-ceiling (used for System-36 and AS/400 terminals). Plus dozens of multi-pair phone cables that had been left there. I wish I had all that copper now that prices are much higher.
800A 120/208Y service entrance conductors live in the same raceway as 20A branch circuits. I got shocked off a lighting circuit with the main open once.
This is a former The Power Company building. I always heard it joked they could only use leftovers from customer jobs on their own stuff.
"Finger duct" is often the easiest way to get relatively clean results in a situation like AV where cabling is sort of poorly standardized (connections front and back, inline DC supplies, etc). It's fairly cheap and often run vertically on rack posts and horizontally above and below cable-dense areas like patch panels and switches, but in an AV situation you probably only need one vertical channel to get a big improvement in results. Finger duct can get pretty messy without planning but I think it's usually the best results to effort ratio.
I mean, honestly, there's not too much expertise involved; it's mostly common sense. It's just that it takes so much longer to do it right at the beginning to ensure it's maintainable and ordered, rather than just doing it the easy way and hoping that it's someone else's problem by the time someone actually needs to find the other end or replace a cable.
Making your own cables that are the exact length they need to be instead of using preterminated ones and just stuffing the slack into the side of the rack, routing them down the side and through the floor or the cabling trays on the ceiling, standardized naming schemes for labeling devices/ports on either end of cables, documented rack diagrams, etc.
I once had to make up about 100 Ethernet cables of varying lengths at once. Man, untwisting, sorting, and separating the individual wires to insert into the connector before crimping made for some very sore fingers.
I have been in some data centers over the years. I know one data center I visited multiple times where one blade chassis (their newest acquisition, I think) stood on an old (wobbly, IIRC) desk a meter or so from the closest rack, not near anything that could support it.
It was connected via a way-too-long fiber optic cable, most of which was placed on the end of a broomstick on the nearest server rack; the end of it was either wound around or tied to (I can't remember) the part of the broomstick that stuck out from the top of the rack towards the desk with the blade chassis, before it fell down to the back of the chassis.
My first real IT job was doing all the grunt work (and whatever sysadmin tasks I could talk the greybeards into trusting the new kid with) in our two DCs, one at the corporate office and the other at our manufacturing plant.
Our DC at the corporate office was OK aside from the patch panel, which was a fucking mess. That, and when they originally built the thing however long ago, they installed the switches at the top of the racks BACKWARDS, so the ports were at the FRONT of the rack instead of the back with the server ports, and they just ran with it instead of taking 20 minutes to flip them around. So instead of neat orderly cables going up and down between the ports, you had these giant bundles on either side of the servers going from front to back that I had to unbind and untangle any time one needed replacing >.> This[0] is what I managed to get our patch panels there reduced to (and slightly cleaned up) after removing all of these[1] cables that were still plugged into dead ports, which my predecessor just never bothered to take out after retiring or recabling the servers that had been on the other end. I'm 99% sure he was just running new cables and configuring a new port anytime someone called to complain about an unreachable server instead of trying to find the other end of the relevant cable.
Wish I still had some pictures of the plant DC though, because holy cow it was even worse: no patch panels anywhere, but they did have a drop floor so I could pull the tiles up and run cables down there if I was so inclined. My predecessor, not so much. He seemed to prefer running them between the tops of the racks, with random cables crisscrossing above your head, always drooping down and getting snagged on the step ladder as I carried it between racks, or getting caught in the doors constantly. And they would never let us have any downtime to clean up the cabling, nor could I talk them into just switching the network configs to use a different damn port on the actual servers so I could just run new fucking cables ;_; I had to settle for obsessively tracing and labeling each end of the cable with the port # and device that was on the other end, because otherwise every call about checking on a server turned into an hour of tracing cables: either under the floor, pulling tiles one by one to follow it to the other end, or a jungle safari sorting through the tangled mess of cables on top of the racks, following the one in question as it arced over multiple aisles to its final destination. Which of course I had to do anyway to get them all labeled, but at least this way I only had to do it once and it was done forever.
> long ago they installed the switches at the top of the racks BACKWARDS so the ports were at the FRONT of the rack instead of the back with the server ports and they just ran with it instead of taking 20 minutes to flip them around
There might be a good reason for this; it sounds like they messed up the switch order and got front-to-rear airflow switches. Normally you would buy rear-to-front airflow, as the front of the cabinet is the cold aisle and the rear is hot.
In a shared colocation like Equinix, they will make you conform to these standards and have you rack a switch that way so the airflow is correct. Incorrect airflow lowers the cooling efficiency of a DC.
Front-to-rear airflow switches still have their place though; you typically find them on two-post telco racks where the patch panels are.
They definitely just didn't care enough to mount them the correct way. I have very distinct memories of getting blasted in the face with hot air any time I was standing in front of the racks. I'm reasonably tall and have longer hair, and the top U of the racks was juuuuust high enough that my hair would be whipping around the entire time.
> In my experience, data center cabling is either a huge mess (random cables just hanging everywhere) or is tied down so much you can't do any repairs/replacements (velcro, tie wraps, etc). There is a fine line between cleanliness and usability.
We have 7 racks that went through some changes, and never in their history (12+ years) did we need to replace any broken cables. It just doesn't happen.
What did happen were re-wires when some hardware was replaced, but the smaller jobs were small enough that whether it was nicely bundled "permanently" or not didn't matter much, and the bigger ones were a "take everything out and recable" job anyway, so again, what was in the rack (and there were some spaghettified ones too) also didn't matter.
Yeah, routing it properly the first time helps, but making it "permanent" by tying it down isn't really a problem; just add another bundle for the next 5 servers.
Depends on the datacenter. I work in a large testing environment, so equipment is being moved and parts are being swapped out all the time. The cabling needs to be nice enough that you have good airflow, can see all of the components, and to make swapping parts easier. Perfection isn't necessary but cleanliness makes life easier.
> but professional installers know cable lacing lasts longer than cable ties.
Come on. Look I love this and it’s beautiful but I can’t think of a single scenario where you need cable lacing to last longer than cable ties. My 76 BMW (an almost 50 year old car) uses cable ties for its loom under the hood. I’m currently in the process of restoring it completely and every single one of those ties is still intact, in an environment far less hospitable than a sound stage.
The nice thing about this tie setup, it seems, is you get bundling it over a distance. Nowadays I think most people use nylon webbing for that but this is a cool alternative.
Like, we can love the craftsmanship behind this for its own sake. We don’t need to ship lies or pretend reasoning with the story to sell it.
Worked aircraft and nuclear system maintenance/operations for the last 20+ years in the US Navy and I can say that cable lacing isn't dead there. Some people suck at it, but if we aren't doing a full harness build with the sleeving machine then we lace everything.
I actually love the craftsmanship of building a proper harness up. There is a lot more skill that goes into it than most people expect, most of it regarding preventing edge cases (repair needs, chafing, EMI, signal contamination requiring separation...) during operation or maintenance. Lacing 400Hz AC into a harness with sensitive analog signals is a mistake that will lead to nothing but headaches...
The first time I saw laced cable, I am not ashamed, much, to say, was in 1978, when I had just finished my master's in EE and had just landed a job at Bell Labs. One of my first assignments was to develop a procedure to convert an old teletype system to the new "datapak" technology, which was a version of X.25 (a predecessor to the Internet, more or less) and was called BX.25, I guess, Bell X.25.
I went to an office that had the old teletype equipment, which reeked of PCBs: large coils with oil dripping from them. The circuit transmitted at 10 baud (no, that is not a typo, not 10 Mbps): 10 bps, or Baudot, back then. 300 volts to zero and back again! Travelling hundreds of miles on terrible copper cables through hills and valleys, carrying the news wire of the day, AP etc.
An old man with grey hair and a beard crept out of his cubicle to greet me, and I informed him that I was from HQ and here to help. Just kidding, but I am sure I said something stupid like that. He explained everything to me, which was fascinating. I was surprised to find out that he was only 25 years old, and the PCBs had aged him. Again, only kidding; I have no idea how old he was, but he assured me that he was going to retire just as soon as we completed the upgrade to the new technology.
He showed me around, the beauty of the equipment and the cabling astounding me; it was almost as if it would be a crime to touch any of it. The way it was tied so carefully. The way everything was put together. There was a beauty that was almost reminiscent of the Incan quipus.
I am sure I am waxing philosophical here, of course. But who is to say that ancient technology is any less amazing than old art or music?
Had to learn lacing during my Electronic Engineering apprenticeship in the 1980s, but the skill has been little used since.
I did, however, end up with a large spool of waxed, flat nylon lacing cord, to which I fixed a small metal nut. The cord was very light and I could throw the nut (and cord) quite large distances through roof spaces and ducts; it was a great method for pulling through data cables.
I don't see much that looks like true lacing there, mostly a metric shitton of kapton with some ties at intervals to keep things from shifting around. Which is totally a valid way to do it but not really lacing
I can't help but be reminded of how cuts of meat get tied up to prevent them from curling up when searing. Funny that chefs use this technique for beef, yet we use it for miles and miles of spaghetti :)
All kidding aside, it's amazing how a bit of string can be such a useful engineering tool in so many contexts.
Cable lacing seems to be alive and well in aviation. I'm currently building a homebuilt experimental airplane and as I've started researching the avionics build I've noticed a lot of cable lacing on the wiring harnesses all over people's planes. I think it's because lacing is lighter weight, uses less space and is easier on the wires in high vibration environments.
I do notice a lot more spot ties rather than running stitches. Not sure why.
Same here. I just finished up an RV build, and all my wire bundles are laced. I chose to use individual clove hitches instead of running stitches because it's easier to fix individual knots later and, well, I just never got good at the terminating knots.
When a single running stitch gets damaged, the entire bundle can come undone. Spot ties are failure independent.
Using lacing instead of zipties means there are no sharp cut-off ends sticking out to tear up your hands. Zipties can pinch and damage insulation, especially when installed with a tensioning gun.
Oh, this was really common in Soviet radio, military and especially aircraft and space industries.
Best part: the Russian military stockpiled so much amazingly durable PTFE-insulated cable that I could get miles of it for (relatively) cheap. Think anything from 36AWG to 12AWG.
And yes, I often use the leftovers of 26AWG for my projects. That thing was something my EU friends were envying endlessly :)
Sadly, with the EU RoHS regulations, I can no longer use this beautiful eutectic leaded solder... well, I can and do for personal use, but technically this is illegal :).
Many years ago, I started an ISP. I labelled every cable on both ends. My more technical partner came in one day and started ripping off the labels. His reasoning? "They're going to be eventually incorrect so let's just rip off the labels now and trace the connections every time."
It took a while but eventually we went our separate ways.
I feel obligated to say this, even though it should be extremely obvious:
As a manager of data center technicians, I would fire someone on the spot for this behavior. It is unconscionably stupid and entirely unprofessional and unproductive.
what works really well is building a system that generates a QR code with a URL embedded in it for the trace of the cable ID. (netbox can do this, not sure about other IPAM solutions).
then all your DC techs have to do is scan a qr, and boom, cable trace is visible.
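As a sketch of what that might look like (assuming Python and the common `qrcode` library; the NetBox-style URL path below is an illustrative guess, not something specified in this thread):

```python
# Generate a printable QR label that opens a cable's trace page when scanned.
# The URL layout below is a hypothetical NetBox-style path.
import qrcode  # pip install "qrcode[pil]"

def make_cable_label(base_url: str, cable_id: int, out_path: str) -> None:
    """Render a QR code image pointing at the cable's record/trace view."""
    url = f"{base_url}/dcim/cables/{cable_id}/trace/"  # assumed path format
    img = qrcode.make(url)  # returns a PIL image
    img.save(out_path)

make_cable_label("https://netbox.example.com", 1234, "cable-1234.png")
```

Print the image on an adhesive flag label, wrap it around each end of the cable, and the scan takes you straight to the documented trace instead of a manual hunt.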
There are still ample supplies of lacing cord, for example at McMaster-Carr [1], so there are likely still lots of sites using them.
The part I never quite understood is why lacing standards didn't specify tucking a tail of cord at the end under a single winding of two or more whippings so one can exercise the future option to expand the loop locks if necessary to insert more cabling instead of re-doing the entire lacing run from scratch. I've also yet to find proper waxed whipping thread that is a smaller diameter than the lacing cord so the loose end of the wound whipping fits inside the winding without a snag-prone bulge at the end of the lacing cord; I have to messily make my own from 80/3 linen (or if lacing outdoor cabling, UV-rated polyester) fine sewing thread and beeswax.
The company I work for develops geophysics instruments and we lace the wire bundles in our equipment. The feedback I've heard is that it isn't great in terms of ergonomics for the technicians who do the lacing - repetitive strain injuries are a problem.
When working on spacecraft cabling we use a combination of zip ties and lacing cable depending on the situation.
Lacing is used to hold a bundle together, fix shielding braid around cables and to connector backshells, and sometimes used to fix the cabling to the spacecraft structure.
Zip ties are used for more structural tie down points for both cabling and small components.
Lacing cables is one of the few things I miss about electronics, but I don't need more stuff so I have nothing to build and I have long since grown sick of building stuff for other people.
I used to run a few telco switching offices back in the circuit switched days... and I truly miss wiring up DSX panels. I would come in on a Sunday afternoon with music playing and that was my zen.
I've laced cables in the past. It takes a little longer but can look nicer if the cables are exposed. Also there's the advantage that it's easier to cut the lacing without damaging the cable insulation than it is a zip tie.
I much prefer velcro. If you do use zip ties don't cut the ends. If you do, make sure it's flush, had many many scratches over the years from non-flush cuts on plastic zipties.
Panduit zip tie guns are expensive, but once you experience the joy of having zip ties correctly tensioned and trimmed flush with no effort you'll never look back.
With a zip tie, use a pair of pliers. (I use a multitool.) Place them right behind the head, then pinch it on the sides and twist. It's important to grab the sides of the tie, not the top and bottom. The plastic will rip easily if you do it right. (This is also the recommended way to remove the tail end, it doesn't leave a razor sharp edge like scissors do.)
Zip tie guns are amazing. They pull to an adjustable tension and then cut off the tail. I ended up with a Panduit model that retails for $300, but harbor freight's $20 unit works very well.
In the large-scale video post-production facilities I worked in during the 80s and early 90s, almost all the cables — miles and miles — were artfully laced with black linen flat lacing cord. One chief engineer in particular would sometimes loudly criticize other engineers' lacing work and tear it apart and re-lace it himself. Great engineer but not well-liked.
> The reason is not just for esthetic, but professional installers know cable lacing lasts longer than cable ties.
As someone who knows nothing about this area, can somebody explain why? Cable ties really fail over time at a significant rate? And on a short enough time span that it makes a difference?
> Cable lacing is definitely old school but it's been the method of choice for major broadcast facilities, stage rigging, CATV installers, NASA engineers, ships and aircrafts for many years
I've worked for a major UK broadcaster for 20 years and have installed equipment in dozens of core equipment rooms. Early in my career I sat next to a Tag frame (which predated Krone blocks for jumpering audio cables); it was part of an old central apparatus room full of analog video cabling from the 1980s or before.
I've never seen such lacing. Maybe it's a US thing, but our Washington office was built in 2005 and doesn't have it either. Cable ties all the way - either plastic or velcro.
I've always wondered, from people more knowledgeable than I -- Any advice on bundling cable runs, re: crosstalk? Does it actually cause issues? For either electrical or data?
Know it's less of a concern with fully-shielded and -terminated (Cat6a+?), so this may be a somewhat historical question.
Interested in theory and/or real-world practice and results. Links welcome too!
On a decent quality cable you are unlikely to run into any scenarios in a typical residential or small business install where noise and interference causes a degradation in throughput.
The balanced signalling used in twisted pair communications is very robust these days, and then you have various protocols on top of that which can handle small errors natively (eg: TCP retransmissions).
It is highly recommended, and often required by code, to keep low voltage and high voltage (high voltage being 120V AC power lines, which really aren't that "high" of a voltage) separated physically. This can be air separation (keep them several inches apart) or conduit separation (don't ever run them in the same conduit). This is mostly a safety issue: in the case of insulation breakdown, it keeps any high voltage leakage from entering the low voltage cable and equipment.
Similarly, things like minimum bend radius and untwist amount at termination can often be wildly violated with no ill effects. But it's still best not to tempt fate if not needed. Even max cable length can often be exceeded by 20% or more.
It isn't the voltage that causes interference, it's the current. A flowing current creates a magnetic field which can induce a voltage in nearby wires that run parallel. It just happens to be that the wires that carry the highest current are your power lines at high voltage.
If you have a wire carrying a very high voltage but no current at all, you shouldn't get any interference from it. Likewise, you could have a low voltage cable carrying lots of current (though why, I'm not sure), and that would cause interference.
Your point about code and safety is separate, valid, and very sensible though.
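For reference, a sketch of the standard approximation behind the interference point above (treating the power conductor as a long straight wire carrying current I):

```latex
% Magnetic field at distance r from a long straight wire carrying current I:
B(r) = \frac{\mu_0 I}{2\pi r}
% Voltage induced in a nearby parallel circuit, via the mutual inductance M:
\varepsilon = -M\,\frac{dI}{dt}
% A steady DC current induces nothing; it's the changing current (AC, switching
% transients) times the coupling that shows up as interference.
```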
Low-voltage and high-voltage wires are not separated for interference but for safety.
OP's point is that if a 230V wire somehow got damaged it could end up touching the outside of an Ethernet cable, and if the Ethernet cable is not rated for that it could cause serious damage to the attached equipment.
You either need to keep Ethernet away from power, or make power 230V+-tolerant.
>It is highly recommended, and often required by code, to keep low voltage and high voltage (high voltage being 120V AC power lines, which really aren't that "high" of a voltage) separated physically.
It depends; there are types of data cables certified (insulation rated above 400 V) to be installed in the same conduit as mains wires:
Well, that can happen with almost anything. Having data and mains cables separated is a good idea anyway, but (with the correct cables) it is not (anymore) against code in many situations.
100% agree. My main point is that it is a "just because you can doesn't mean you should" sort of scenario. There are situations where it is allowable, and safe, of course, but it would still generally be considered bad practice unless absolutely necessary.
If doing long runs, just run fibre. Gigabit Ethernet is about as fast as copper can get - yes, 10Gb Ethernet exists, but it is power hungry (one reason 2.5 gigabit is starting to appear). You can send a lot more data with fibre, there are no crosstalk issues, and it is cheaper than copper. Transceivers are still expensive though, so copper is best for shorter runs (within-your-house distances), but everyone who might run some cable in the future should still learn about fibre and how to work with it.
10Gtek sells 10gbit multi-mode SFP+ transceivers for ~10->15 USD per, depending on how many you buy at once. I have ten on my LAN and have had no troubles with them. (Now we "just" need to do something about how absurdly expensive 10gbit NICs are...)
(And no, I had never heard of them before I bought a few. I figured that multi-mode SFP+ transceivers have been around for more than a decade, so there's no reason for them to be 50->80+ USD per... purchased from several of the cheapest manufacturers selling on Newegg, and found that 10Gtek's stuff worked fine.)
10Gtek has worked well for me. I have several transceivers and DAC cables from them. Fiber is awesome for distance and speed, but few devices can take SFP+, so I have a bunch of bulky Thunderbolt to SFP+ dongles.
Hard-earned suggestion: LABEL your fiber cable on both ends with the LENGTH (and id while you are at it). Nothing worse than running the fiber only to have it a few feet too short in the end.
That is one of the downsides of fiber: you need to figure out your distances before hand and order the right length. One more reason to use copper in the house most of the time. Maybe some ~2ft lengths of fiber for use on your server rack where you need speed between the server and the switch.
Only time will tell. For now my best guess is cat 5e is still good enough for everything, but since cat6a is no more expensive may as well use that instead.
God has not given me any indication on how the future will play out though, I could be wrong. Then again I might be right just because by the time you need more than cat6 there is a new cat 12 that you need. Only time will tell.
Cat6a can do 10 Gbps over 100 meters; cat8 can do 40 Gbps over 30 meters. Current consumer hardware is starting to support 2.5G over 100 meters of cat5, and we miiiight see 5G over 100 meters of cat6 become common in a few years.
If you're that worried about upgrade-ability, better stick to fiber.
Note that the specs are 100 meters in a dense conduit. You can probably manage higher than rated speeds in a household environment which likely doesn't have dense conduit nor 100 meter runs.
If you're wiring from scratch, sure, put in the best you can afford, but if you've got cat3 in your walls already, see what runs; your NICs aren't going to look at the label on the insulation. (For better or worse, Ethernet speed negotiation runs at 1 Mbps and the specs don't contemplate testing conditions and reducing speeds until you get to the multi-gig (2.5/5/10G) equipment. Some equipment will drop to 100M in drivers though.)
Fiber through the house is a PITA. The fiber cable is fragile, can't be bent, and can't be tested/resized without special equipment. I'm waiting for my current contract with the internet provider to end and will switch to a new one that has normal equipment that allows running Ethernet through the house.
Gigabit Ethernet doesn't care much as long as you have a good quality Cat5e+ cable. We've been doing this a lot in the data center for a long time. However, you don't want high-power AC cables running alongside them. Just carry them separately, or keep the power source local to the target.
Similarly, other data cables (InfiniBand, SAS, etc.) don't care either, but they're both short when used in copper form and they have ample shielding.
As a foil, people building point-to-point tube amplifiers are very concerned about "crosstalk" in the circuit itself and try to have signal carrying wires cross at right angles to each other rather than run parallel (for obvious electrical-theory reasons).
I have spent months of my life trying to diagnose issues with spacecraft hardware that ended up being caused by cross-talk from a noisy clock signal in AWG30 wires. Added some shielding and everything worked perfectly.
In the most recent case, the clock was interfering with a series of motors, causing an antenna pointing array to get lost and move erratically. We had had some issues with the motors in the past, so we assumed it was an issue there and performed hundreds of tests to try to understand it. When we realised it was crosstalk I was quite red in the face.
In spacecraft we take a lot of precautions to avoid crosstalk when laying out our harness. Shielding is not always an option.
Lacing cables together isn't worse than running them parallel using other means e.g. velcro loops unless you're lacing them tight enough to crush the cables.
From memories of electromag, field coupling from current in parallel cables decays with 2*pi*r, no?
So it seems there would be a fair amount of difference at extremely low separation distances. But we never worked through high frequency EE derivations, which I assume are more average-current- or capacitance-dominated in terms of effects?
The distance between cables doesn't vary much between lacing methods. The balanced nature of the signals also means the common mode rejection ratio is very high, so external EM interference (which almost entirely shows up as common mode) isn't a huge concern.
It can definitely cause issues. The risk factors are:
- High-current power cables
- Analog signal wires
- Low-voltage signal wires
In practice it is mostly a solved issue. Applications like Ethernet, HDMI, and professional audio have been using balanced signal pairs for decades and they cancel out most interference. We figured out how to do that in the 1880s.
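A one-line sketch of why balanced pairs reject coupled noise: the receiver takes the difference between the two conductors, so interference that couples onto both conductors equally (common mode) simply drops out:

```latex
V_{\mathrm{rx}} = (V_{+} + V_{n}) - (V_{-} + V_{n}) = V_{+} - V_{-}
```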
I remember having to wear nice shoes and nice pants for my first IT monkey job, then wearing out the toes and knees because I was always scurrying around on all fours chasing down problems. Like packets spilling out all over the floor because somebody forgot to terminate their 10BASE2.
The corrugated cable sleeves that you see under the hood of your car are certainly easier. I suppose it has the downside that it's usually opaque, so you can't see the cables inside, but there are ones made with braided material.
Cable Lacing is still mandated by AT&T and (I think) Verizon in their facilities. I think AT&T only requires it for power cabling, but I'm not 100% certain.
This is really helpful to me right now. I’m finishing up a large home recording studio and I’m looking for better ways than Velcro and zip ties to help dress wiring
I wonder how easy it is to unlace cable when something breaks or gets replaced, or new hardware gets added, or a musician changes their mind or has an idea that requires a novel signal chain.
Most studios are moving to digital. An Ethernet stagebox (normally Dante, but there are other options) goes where each performer is and connects to your network. Just run an Ethernet jack and power to various locations where you might want something and plug in what you want. You can get in-wall versions of the above to use in places where you will always have a lot of stuff connected.
You still have the mic cables on the floor, but those are going to be there all the time anyway, and they are the ones that change most often.
Why would someone place a toggle select for 'necessary cookies' on this page that cannot be deselected? It explains their justification for not being able to deselect (valid or not), but this does not explain removing or greying out the control.
I had to run some long speaker cable around my living room, and this guide really made it so much easier to keep things neat. I'm glad to see it reposted again.
It's kind of assumed by building inspectors that 'new' (post-1970s) lacing is fire retardant. Plenum-rated velcro is available but unfortunately both costs more and looks absolutely identical to the flammable stuff, so building inspectors sometimes look askance at it.
Lacing dates back to the beginning of electrical wiring (late 1800s, early 1900s)
a) "Zip ties" were not invented yet (nylon was invented in 1935)
b) Twisty ties are thin metal wires covered by paper (plastic covered ones were not practical until the late 1900s). The twisty wires were a threat to cut into the wire bundle. They also rust.
Even in modern days with zip ties, lacing has a significant advantage in that zip ties have sharp "tails" that stick out from the bundle and cut your hands and arms when you reach into a compartment.
Lacing lies flat against the bundle and has no sharp ends.
> zip ties have sharp "tails" that stick out from the bundle and cut your hands and arms when you reach into a compartment.
The guy that ran a computer shop I used to frequent was an ex-Bell tech, and taught me about using flush-cut snips exactly to avoid this. They've been a part of my toolbox ever since. They're also my go-to cutting tool when working on small electronics.
Interesting, I hadn't heard of those before! I had to go look up a video to see how it worked [0].
However, since it uses standard zip-ties, there's still the extra bump of the head on the finished tie, which can snag if you're pulling on the bundle.
Depending on the environment zip ties, wire ties, etc. may not be appropriate. For example, high heat and UV exposure can cause zip ties to fail rather quickly.
Lacing was widely used in (industrial) electric switchboards, at least here what is used nowadays in them is not so much zip ties but rather plastic spirals or similar continuous cable wraps, like these:
My guess is — this would be quicker. Having to stop and twist together a new tie every 6 inches vs. just continuing your daisy-chain of loops only having to expend any significant effort at the beginning and end of the chain.