Hacker News

I agree with this article on all points. Currently we have a massive Jenkins pipeline sprawl that's difficult to maintain. It is also difficult to create new jobs in Jenkins itself, especially if you are using pipelines. My average is around 100 test builds before I can get a full pipeline success for anything of modest complexity.

If all you are doing is using Jenkins to run simple bash scripts, you may be able to get away with it. The problems start when you want to add some logic to the pipeline – which you are doing, otherwise why bother with a pipeline?

First things first: are you going to use the scripted pipeline, or the declarative pipeline? The declarative pipeline is a bit better, but it lacks examples, has lots of bugs (I've littered my code with references to JENKINS-XXXXX) and is very restrictive (arguably by design). Of course, you can have 'script' blocks inside your pipeline->stages->stage->steps blocks.
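For reference, that escape hatch looks roughly like this (a minimal sketch; the stage name and make target are invented):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Declarative steps only allow a restricted set of directives;
                // a 'script' block drops you back into scripted-pipeline Groovy.
                script {
                    def tag = env.BRANCH_NAME == 'main' ? 'latest' : env.BRANCH_NAME
                    sh "make build TAG=${tag}"
                }
            }
        }
    }
}
```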

Then you want to take advantage of parallelization or conditional steps, and to visualize that you want Blue Ocean. Problem is, not all plugins are compatible with Blue Ocean, and it doesn't have all the features either, so you drop down to the 'old' Jenkins UI often.
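For what it's worth, declarative parallel stages (the thing Blue Ocean visualizes as a fan-out) look like this; the stage names and make targets are placeholders:

```groovy
stage('Tests') {
    // Both inner stages run concurrently on available executors.
    parallel {
        stage('Unit')        { steps { sh 'make test-unit' } }
        stage('Integration') { steps { sh 'make test-integration' } }
    }
}
```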

People will want to have a whole bunch of tools with incompatible versions in their builders. Not all are supported natively, so you need to figure out your versioning.

Once you figure all that out, congratulations. The next person to automate something will either find a similar pipeline to copy from, or will endure all the pain again. At this point you may want to use Groovy.

Groovy was absolutely the wrong tool for the job. Yes, I get that it works with Java, which Jenkins is based on. Still, it is the wrong choice. You see, the kind of things you want to automate often involve passing commands around, be they bash, Ansible, SQL statements, what have you. Groovy's string escaping rules will ensure your life is pretty miserable (https://gist.github.com/Faheetah/e11bd0315c34ed32e681616e412...)
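A quick illustration of the sort of thing that gist catalogues (my own sketch, not taken from it): the same awk one-liner behaves differently depending on which Groovy string type wraps it, because double-quoted GStrings try to interpolate anything after a bare $:

```groovy
// Fails to compile: Groovy treats the $ in $1 as the start of an
// interpolation and rejects the string before the shell ever sees it.
sh "ps aux | awk '{print $1}'"

// Works: escape the dollar so it survives to the shell.
sh "ps aux | awk '{print \$1}'"

// Also works: single- and triple-single-quoted strings never interpolate.
sh '''ps aux | awk '{print $1}' '''
```

Now multiply that by every layer of bash, Ansible, or SQL quoting your command already carries.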

You could get around most of this by moving most of the logic into containers and then running those. There again you'll run into problems with declarative pipelines; random things won't work and you'll be scratching your head.

However, if you are going to do that anyway, you're better off using a more modern system for CI, any system. Drone was already mentioned, there's also Concourse and a bunch of others. For CD, you can use Spinnaker as well.

Or maybe keep Jenkins around but forget all the fancy stuff. Delegate all the 'thinking' to scripts and pretend the more recent developments never happened. You'll be saner that way.
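In practice that reduces most Jenkinsfiles to thin wrappers like this (a sketch; the script paths are hypothetical, and all the logic lives in version-controlled shell scripts you can test outside Jenkins):

```groovy
pipeline {
    agent any
    stages {
        // Each stage just shells out; Jenkins only provides
        // scheduling, credentials, and a green/red light.
        stage('Build')  { steps { sh './ci/build.sh' } }
        stage('Test')   { steps { sh './ci/test.sh' } }
        stage('Deploy') { steps { sh './ci/deploy.sh' } }
    }
}
```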



If it takes you 100 tries for a moderately complex Jenkins pipeline, you have other problems that are not Jenkins's fault.


I wish I could bring you here to see you do better.

Or do you mean systemic corporate problems? In that case, I agree.

It still doesn't change the fact that Jenkins does not make my job any easier. I'll spend a day worrying about Jenkins idiosyncrasies ("why can't I use a pipe in sh", "why did my bash escaping disappear completely", "why 'dir' doesn't work with a container build agent?! (JENKINS-33510)", "why this input plugin won't work with blue ocean", "why can't I use a for loop in this piece of code in particular but it works elsewhere" (JENKINS-27421)).

Whereas with Concourse or other newer build systems I can write a simple YAML description, which is modular and uses an existing standard, and test it in isolation. And then provide it as a building block for other tasks.
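To make that concrete: a Concourse task is a standalone YAML file you can run locally with `fly execute` before it's ever wired into a pipeline. The image, input name, and script path below are placeholders:

```yaml
# task.yml -- runnable on its own: fly execute -c task.yml -i repo=.
platform: linux
image_resource:
  type: registry-image
  source: {repository: alpine}
inputs:
  - name: repo
run:
  path: sh
  args: ["-c", "cd repo && ./ci/test.sh"]
```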


I feel you, but why are your jenkins pipelines so complicated? I feel like your workplace's deployable artifacts should follow a familiar pattern and there should not be much guessing/re-inventing the wheel with jenkins scripts. I feel like complicated builds are usually the result of an application that is not very well thought out in the first place.


> I feel like complicated builds are usually the result of an application that is not very well thought out in the first place.

Welcome to the world of enterprise Java or .NET programming. Loads upon loads of crap. Best served with multiple frontends (e.g. web + mobile) which need different npm versions to compile, all of it out of a single fucking pom.xml, which is a nightmare in itself!


If you're mixing npm with pom files you are asking for trouble. Jenkins's shortcomings have nothing to do with npm's crappy package management. (Not saying you said that, just pointing it out.)


> I feel you, but why are your jenkins pipelines so complicated?

You have an excellent point. Individual microservice containers are not complicated (then again, all they do is call a standardized script). The script builds an image from a Dockerfile and pushes it to the registry. I would classify it as a 'trivial' Jenkins job; pipelines aren't even used.

The pain starts when you want to do more than CI and try to get into CD. Or even worse, automate 'devops' tasks. That's where you run into all those warts.

A job could call Terraform, or spin up VMs, or run vacuum on a database, or any number of tasks. Or it may perform tasks on K8s to deploy a complex app. It may need to call APIs to figure out where to run things. And so on.

Since Jenkins is not only a CI/CD system, it can do anything, so people will try to make it do increasingly complicated stuff. And I'm arguing that this is wrong. If you have complex logic, it should be moved out of Jenkins so it can be more easily maintained and tested, and its dependencies isolated. One of the easiest ways to do that is with containers. At which point Jenkins loses most of its usefulness and other, newer tools shine.

Alternatively, use more specialized tools. If it is for CD, and Spinnaker works for you, please use that instead.


I agree with you that bash-heavy, complex things are best suited to something like Ansible.

However, deploying containers to environments like OpenShift and Kubernetes is extremely simple with Jenkins. I don't think that's complicated at all. As a rule of thumb, you should be able to hide all the complexity of your deployment in the Dockerfile. In addition, you can always use Jenkins' "build with container" functionality to build your application in a dedicated container on the fly. There are many ways to hide complexity with Jenkins.
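The "build with container" functionality referred to here is (I assume) the declarative docker agent, which needs the Docker Pipeline plugin; the image and command below are examples:

```groovy
pipeline {
    // Jenkins pulls the image and runs every step inside it,
    // so toolchain versions live in the image, not on the build agent.
    agent { docker { image 'maven:3.9-eclipse-temurin-17' } }
    stages {
        stage('Build') { steps { sh 'mvn -B package' } }
    }
}
```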

I do agree with you that Jenkins is abused because it is more than a CI/CD tool. I think you need some experience using it to know what works well and what doesn't. Unfortunately, in the new-age "sprint agile" world some random guy has to pigeonhole crap into Jenkins in two-week windows that shouldn't be there in the first place.

I also think that many devs underestimate what you can do running a local Jenkins from a war file on your MacBook. I like using Jenkins to automate tedious tasks for myself. As an example, it is trivial to write yourself a custom GitHub code scanner that will scan all files and folders in as many repos as you want. I like using Jenkins for outside-the-box things like that.





