Hacker News

I know I'm going to get downvoted to hell for this, but am I the only one who thinks of these people's efforts as a huge amount of wasted time? Why should anyone have to package software aside from the developer who made it? Why should it have to be done for so many different package repositories and packaging formats? Why are people ok with this?


Because the software is inconsistent (and thus buggy) in that regard. Also, barely any developer possesses the knowledge and infrastructure necessary to build their software for dozens of arcane architectures.

Packagers create the glue between the software (which is heterogeneous) and the distro (which is internally consistent). The world without this glue is a horrible mess and a huge waste of time; the old Slackware is a good example. It's thanks to the distro makers that you don't have to spend days collecting requirements and acquiring arcane knowledge just to install a piece of software.

So: yes, those people devote a huge amount of time - so the countless users around the world don't have to. The net time value is positive.


I think we need to distinguish between software that is used as a component in a larger system (e.g., the core OS, which is the core business of a distribution) and an application that is not part of that larger system but merely runs on top of it. For putting together a distribution (with tightly integrated components), the traditional packaging philosophy is probably very well suited. For add-on third-party software that is not an integral part of the distribution but merely wants to run on it, not so much.

An independent software author (e.g., Ultimaker or Prusa) just wants to reach all "Linux" users at once, without dealing with different distributions' policies and without being forced onto whatever version of, e.g., Qt happens to ship in a given distribution. And as a user of their software, I want it on my "Linux" system at the same moment Windows and macOS users can have it.


A lot of the strength of open source comes (much like in computation in general) from chopping tasks into smaller and smaller pieces. The upstream developer is unlikely to be skilled at packaging for a number of different platforms. If platform users can rely on others who are skilled at packaging, we all benefit.


The biggest reason, both historical and present, that we have many different package repositories and packaging formats is because of the concept of shared libraries and the desired benefits: security, stability, speed, freshness of volatile data and disk usage.
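The shared-library benefits listed above are easy to see on any typical Linux system. A quick sketch, assuming a glibc- or musl-based system where `ldd` is available:

```shell
# List the shared libraries a dynamically linked binary
# resolves at load time:
ldd /bin/ls
# Typical output includes a line such as:
#   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)
# Every dynamically linked program maps that same on-disk copy of
# libc, which is where the disk and memory savings come from, and
# why one security update to libc patches every program at once.
```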

And from what I have seen, there is only one primary alternative being proposed: containers. Ship a copy of all needed libraries in every package and leave it to the developer to patch and fix security issues. Occasionally you put a few things into the operating system, but then you have to hope that those parts don't change, or you have to make different packages for different versions of the operating system.
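The container alternative described here amounts to shipping the whole dependency closure with the application. A minimal sketch with hypothetical names (`myapp` and its dependency list are placeholders, not from any real project):

```dockerfile
# Hypothetical example: every library myapp needs is baked into
# the image, pinned at whatever versions the developer tested.
FROM debian:bookworm-slim

# The developer, not the distro, now owns patching these:
RUN apt-get update && apt-get install -y --no-install-recommends \
        libqt6core6 libssl3 \
    && rm -rf /var/lib/apt/lists/*

COPY myapp /usr/local/bin/myapp
CMD ["/usr/local/bin/myapp"]
```

Note the trade-off the comment points out: if a CVE lands in, say, OpenSSL, every such image must be rebuilt and republished by its own developer, whereas a distro package is fixed once, centrally, for everything that links against it.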


What we need is a clear separation between the Core OS a.k.a. base system which should be provided by every "Desktop Linux" distribution, and the rest.

Applications should only use those shared libraries that come with the Core OS a.k.a. base system, and either link statically to or bundle the rest.

Just as an iOS application can only consume what iOS provides, and must bundle any additional dependencies privately.
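One common way to implement the "bundle the rest" half of this on Linux is a private lib/ directory next to the executable, located via an $ORIGIN-relative rpath. A sketch with made-up names (`myapp` and `libfoo` are purely illustrative):

```shell
# Hypothetical bundled-application layout:
#   myapp/bin/myapp        <- the executable
#   myapp/lib/libfoo.so.1  <- a private, bundled dependency
mkdir -p myapp/bin myapp/lib

# At link time, embed an rpath relative to the executable itself,
# so the dynamic loader searches the private lib/ directory first:
#   cc ... -Wl,-rpath,'$ORIGIN/../lib' -o myapp/bin/myapp
#
# Core-OS libraries (libc, libm, ...) are still resolved from the
# system as usual; only the non-core dependencies travel with the app.
```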

The result would be a much simpler and more resilient system (at the expense of some storage and memory overhead, which is the lesser evil imho).


> The result would be a much simpler and more resilient system (at the expense of some storage and memory overhead, which is the lesser evil imho).

On balance, I agree with the conclusion. However, coming from a non-desktop viewpoint (server, not embedded, though I do sympathize with the latter), I don't think it's obvious that "some" overhead is worth it, nor has it historically been worth it.

At scale, size can matter, though, like I said, I think today, nobody would even notice.

It's tough to "fight" that history, though, so we go through the pain of even more overhead with full virtualization before cutting it back with OS-level virtualization (a.k.a. containers) and (re-)declaring victory.


I disagree that it's a waste of time. (But I upvoted, because I think it's an important question that adds to the conversation.)

> for so many different package repositories and packaging formats?

How many are there, though, really? In theory the number is unbounded, but in practice the number of distros is modest, the number of popular ones smaller, and the number of unique packaging formats smaller still.

Although a sibling comment alluded to it with distros being internally consistent, I wanted to unpack that a bit more.

Specifically, one benefit I've found as a "user" (sysadmin/devops) is that of well-defined dependencies. This isn't inherent to packaging, but it tends to be a feature of the more mature systems and distros.

The other benefit is that it provides a more universal mechanism of traceability and, thereby, at least a path to reproducibility of builds. This has implications for security, of course, but also for debugging.


I’m fairly sure that it often is the developers packaging it for different platforms...

Sometimes, though, other people do it... and it might even be automated.


Systems like the Open Build Service can ease the pain a bit by building for different distributions and versions, but it is a pain nevertheless. Luckily the Open Build Service instance at https://build.opensuse.org/ can also do AppImages, which run on most "Desktop Linux" systems.


Upvote from me, I fully agree with you. I'm sick and tired of people re-packaging python/npm/ruby/etc applications as distro packages. As a maintainer of a few high-traffic python libraries, it wastes my time. I am not ok with it.


Do you have some details about how this wastes your time?



