I can conjure a scientific wild-ass guess on the spot:
"I wasted around 10 hours last week thanks to inadvertently pulling broken builds because we don't have a CI server. I spent 4 hours manually deploying things because we don't have a CI server."
"When can I move the new CI server I already setup on my workstation - because fuck wasting half my week to that nonsense, and I had nothing better to do while the devs who broke the build fixed it - to a proper server where everyone can benefit?"
Extrapolating, that's what - 4 months per year of potential savings?
Sure, 14 hours might not be enough time to automate your entire build process, but it should be enough to automate some of it, grab some low-hanging fruit, and start seeing immediate gains. Incrementally improve it for more gains while you're waiting for devs to fix breakage the CI server didn't catch.
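For concreteness, here's a minimal sketch of the kind of low-hanging fruit I mean - a "poor man's CI" script that builds the latest commit and records the last known-good revision, so nobody pulls a broken build again. All the specifics (the `make build` command, the `origin/main` branch, the filename) are placeholders; swap in whatever your project actually uses.

```python
#!/usr/bin/env python3
"""Hypothetical 'poor man's CI': build the latest commit and record
the last known-good revision so teammates pull that instead of a
possibly-broken HEAD. Build command and branch name are assumptions."""
import subprocess
import sys

def run(*cmd):
    return subprocess.run(cmd, capture_output=True, text=True)

def main():
    run("git", "fetch", "origin")
    run("git", "checkout", "origin/main")        # assumed branch name
    build = run("make", "build")                 # swap in your real build command
    rev = run("git", "rev-parse", "HEAD").stdout.strip()
    if build.returncode == 0:
        # Teammates check out this revision instead of blindly pulling HEAD.
        with open("last-known-good.txt", "w") as f:
            f.write(rev + "\n")
        print(f"build OK, last known good is now {rev}")
    else:
        print(f"build BROKEN at {rev}:\n{build.stderr}", file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    main()
```

Run it from cron or a post-receive hook and you've already killed the "inadvertently pulled a broken build" failure mode, without configuring a full CI product.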
Thank you for the insight. If deployment really takes hours and there's a high chance of pulling broken builds, then the time costs are very tangible. But if a manual deployment takes at most 10 to 20 minutes and is relatively straightforward, it might be less of a problem. How often you deploy also impacts this greatly. I guess in certain cases the ROI just isn't very high, which greatly reduces the appeal of such an investment.
No problem, hope it's useful :). You're right about deploy frequency - that 10-20 minute manual deploy done 3-6x a day already adds up to my 4 hours a week deploying things. 10% of the work week right there!
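Spelled out, the back-of-the-envelope math (using the numbers from this thread, not measurements):

```python
# Rough range check on the manual-deploy cost cited above.
low  = 10 * 3 * 5 / 60   # 10 min/deploy * 3 deploys/day * 5 days -> 2.5 h/week
high = 20 * 6 * 5 / 60   # 20 min/deploy * 6 deploys/day * 5 days -> 10 h/week
print(f"{low}-{high} hours/week lost to manual deploys")
print(f"4 h/week is {4 / 40:.0%} of a 40-hour week")  # the '10%' figure above
```

My 4 hours a week sits comfortably inside that 2.5-10 hour range, so the claim holds up even at the low end of deploy frequency.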
On the other end of the spectrum, a lot of my personal projects don't warrant even the small effort of configuring an existing build server. I'm the only contributor, nobody else will be breaking anything or blocking me, builds are super fast... even if "it will pay off in the long run", there are other higher impact things I could do that will pay off even better in the long run.
In the middle, I've put off automating some build steps for our occasional (~monthly) "package" builds for our publisher - especially the brittle parts that are easy to fix by hand and rightly suspected to be hard to automate. I was generally asked to summarize VCS history for QA/PMs anyways - can't automate that.
When we started doing daily package builds near a release, however, it ate up enough of my time that non-technical management actually noticed and prodded me before I thought to prod them. I started by offloading my checklist to our internal QA (an interesting alternative to fully automating things) and eventually automated all the parts QA would forget, not know how to handle, or waste too much attention on.
Even then, some steps remained manual - e.g. uploading to our publisher's FTP server. It tended to run out of quota, was occasionally full despite quota showing as available, was sometimes unreachable, or uploaded too slowly thanks to internet issues - at which point someone would have to transfer by sneakernet instead anyways. Not much point trying to make the CI server handle all that.
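If you did want the CI server to at least attempt the upload before handing off to a human, a sketch might look like this - retry a few times, then fail loudly so someone knows to fall back to sneakernet. The host and credentials are obviously placeholders, and the retry counts are arbitrary:

```python
#!/usr/bin/env python3
"""Hypothetical upload step: try the publisher's FTP a few times,
then give up loudly so a human knows to fall back to sneakernet."""
import ftplib
import os
import sys
import time

HOST, USER, PASSWORD = "ftp.publisher.example", "user", "secret"  # placeholders

def upload(path, attempts=3):
    for attempt in range(1, attempts + 1):
        try:
            with ftplib.FTP(HOST, USER, PASSWORD, timeout=60) as ftp:
                with open(path, "rb") as f:
                    ftp.storbinary(f"STOR {os.path.basename(path)}", f)
            return True
        except ftplib.all_errors as e:
            # Covers quota errors, unreachable hosts, dropped connections...
            print(f"attempt {attempt} failed: {e}", file=sys.stderr)
            time.sleep(30 * attempt)  # crude backoff
    return False

if __name__ == "__main__":
    if not upload(sys.argv[1]):
        print("FTP upload failed; fall back to sneakernet", file=sys.stderr)
        sys.exit(1)
```

Even so, no retry loop fixes "quota shows available but the server is full" - which is exactly why this step stayed manual for us.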
"I wasted around 10 hours last week thanks to inadvertently pulling broken builds because we don't have a CI server. I spent 4 hours manually deploying things because we don't have a CI server."
"When can I move the new CI server I already setup on my workstation - because fuck wasting half my week to that nonsense, and I had nothing better to do while the devs who broke the build fixed it - to a proper server where everyone can benefit?"
Extrapolating, that's what - 4 months per year of potential savings?
Sure, 14 hours might not be enough time to automate your entire build process, but it should be enough to automate some of it, get some low hanging fruit and start seeing immediate gains. Incrementally improve it for more gains when you're waiting for devs to fix the build for stuff the CI server didn't catch.