
Serious question, because I am admittedly ignorant of the pluses and minuses of the different package managers.

If you're going to swap it out, why not switch to apt? What does apt lack that DNF is going to provide? This seems like one of those low-hanging fruit cases where standardization across distributions would make sense.



It would have been better to switch to zypper. Both dnf and zypper use a SAT solver for very fast depsolving; yum and apt don't, and both have very slow depsolving. Zypper also directly supports rpm, and would have unified the package managers on Fedora, openSUSE and the enterprise Linux distros RHEL and SUSE.


As far as I can tell the Fedora guys wrote libhawkey and dnf to reach some sort of middle ground, as switching to zypper would have implied breaking everything (i.e. all the scripts admins use, rewriting parts of anaconda, etc.). The compatibility is not perfect, and in fact a fair number of people complained (see some discussion on Phoronix). As far as the parent post is concerned, switching to apt would imply switching to deb, which would be a massive change. RPM is simply too different, both in the positive (the possibility of changing the target directory, sketched below, and of rolling back) and in the negative (dependency specification is still rudimentary compared to .deb, see https://www.youtube.com/watch?v=FNwNF19oFqM). And as rwmj14 said, libsolv is better.
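For reference, the target-directory bit is RPM's relocation support; something like this (the package file and path are made up, and the package must have been built relocatable):

    rpm -ivh --prefix=/opt/stack foo-1.0-1.x86_64.rpm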


For the record, apt doesn't always imply deb: apt-rpm exists, and it's used by a few distros, including PCLinuxOS.


apt-rpm has been around for more than a decade, and was considered for Fedora back in the day (it lost out due to a combination of lacking multilib support at the time and concerns over complexity). Last time I used Fedora it was available in the repository and worked just fine - I always preferred it over yum, despite also preferring RPM over Deb.


I thought apt-rpm was basically dead; happy to read that it is still around. And yes, I reckon the second part of my comment was wrong: you can have apt on top of rpm.


Well, you may be right that it's basically dead - checking the repository, there appears to have been only one commit or so in the last two years. It may still work, but it's certainly not getting more features.

It's a shame, though - apt-rpm worked very well.


Is "very slow depsolving" even relevant? In my experience that part of installation and remove is a very small part of the overall installation or removal time and goes by so fast I've never really noticed it.

Even if dnf and zypper are 1000x faster than apt there, that alone wouldn't convince me to use them.


As well as being a lot faster than apt, the biggest advantage of the openSUSE package management stack, to me, is that it plays nicely with third-party vendors.

Packages have sticky vendors, so you only get updates from the same vendor unless you choose to switch, even if the update is in a different package repository. This means you don't get flip-flopping package vendors when the same package is available from two vendors and they release updates one after the other.
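Overriding the stickiness is explicit, roughly like this (option and config names from memory, so treat as a sketch):

    zypper dup --allow-vendor-change
    # or persistently, via solver.allowVendorChange in /etc/zypp/zypp.conf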

Another example is the tooling around package/repository signing: letting the user decide whom to trust without making them jump through hoops to manually import keys.

There are also a lot of things in the packaging itself designed to help support third-party vendors. Dependencies tend to be expressed as capabilities rather than package names, e.g. the kernel provides information about its binary API version, which third-party drivers can depend on. Third-party kernel modules can also supplement/enhance (soft dependencies, the inverse of recommends/suggests) specific PCI ids, so the solver will suggest installing them if available and compatible with your hardware.
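The PCI-id trick looks something like this in a kernel-module package's spec (the id below is illustrative, not a real device):

    # Offer this package automatically when matching hardware is present:
    Supplements: modalias(pci:v000010DEd00001C82sv*sd*bc*sc*i*)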

In my view, Debian package management "just works" because the set of packages is carefully curated and tested, there's a high standard of package quality, and everything is available in one place. PPAs can present problems.

The rpm distro ecosystem philosophy is somewhat different: it's impractical to expect a single organisation to package all the software you will ever need, so let's provide tools to help software from multiple vendors get along.


Apt pinning gives you the same functionality as sticky vendors, though it may be a bit more fiddly to implement (I'm unfamiliar with openSUSE's package managers).
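Something like this, I believe (the hostname is made up):

    # /etc/apt/preferences.d/pin-vendor
    Package: *
    Pin: origin repo.example.com
    Pin-Priority: 700   # beats the default of 500, so this vendor's updates win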


Not only that, but it sounds like someone could just add a solver to apt, whereas repackaging the entire Linux ecosystem to use a different format is going to be a bit more difficult.


Red Hat (the company) and RPM (the format) predate Debian, although only by a few months. Also RPM is a much better format than deb, and that's speaking as someone who regularly makes packages using both.

I would also say that dnf and zypper are better package managers than apt, inasmuch as they are faster and easier to automate. Compare the code here:

https://github.com/libguestfs/libguestfs/blob/1.29.43/custom...


For those of us uninformed, why is RPM a better format than deb?


Primarily the all-in-one spec file is a lot easier to read and write than the scattered files of deb. Also the build system is considerably simpler and more coherent -- you don't have the mess of dh vs cdbs vs flavour of the month. RPM has a nice language and macro system. It's not that deb is bad, just that when I have to package for both, I find the RPM one simpler and easier.
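For a sense of the all-in-one structure, here's a minimal, hypothetical spec skeleton (not a real package):

    Name:           hello
    Version:        1.0
    Release:        1%{?dist}
    Summary:        Example package
    License:        GPLv2+
    Source0:        %{name}-%{version}.tar.gz

    %description
    Metadata, build steps and the file list all live in this one file.

    %prep
    %setup -q

    %build
    %configure
    make %{?_smp_mflags}

    %install
    %make_install

    %files
    %{_bindir}/hello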

Here is a relatively simple package, done for both RPM and .deb. The RPM spec is 141 lines (excluding the changelog):

http://pkgs.fedoraproject.org/cgit/virt-top.git/tree/virt-to...

The .deb is actually shorter in this case, but split over several files, and uses cdbs which I find infuriating with its lack of documentation and multiple hidden implicit rules. If you have a Debian machine around, try reading the /usr/share/cdbs/1/ files some time. Remember also that for most Debian packages, the files come in a tarball or even a patch, which makes them hard to manipulate without obscure deb-* commands.

https://anonscm.debian.org/cgit/pkg-ocaml-maint/packages/vir...


What? How is this better? All this shows is apt making users say yes to everything, such as config changes. I'm totally fine with that. Sure, it's an extra line or two when automating, but I don't see it as bad.


I don't know about you, but I'd rather use the tool that I don't have to hack the training wheels off of first when I'm trying to update 100,000 machines to mitigate a critical security issue.
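For comparison, roughly (flags from memory, so treat as a sketch):

    # apt: several knobs to keep an unattended upgrade from stalling on prompts
    DEBIAN_FRONTEND=noninteractive apt-get -y \
        -o Dpkg::Options::="--force-confdef" \
        -o Dpkg::Options::="--force-confold" upgrade

    # dnf/yum: one flag
    dnf -y upgrade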


1. Security updates don't add debconf prompts.

2. Supporting large numbers of systems is exactly when a persistent local configuration database becomes useful for avoiding unanticipated regressions.


These distros have used RPM for decades (or at least more than one) now; for them, switching to apt would mean repackaging everything. There is an apt fork for RPM, but it hasn't been updated in years.


I guess it's true that one wouldn't be able to just install debs on Red Hat, for example, without modification. Alas.


There is apt-cudf, which allows using external solvers (such as aspcud) with apt.
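With aspcud and apt-cudf installed, usage is roughly this (option name from memory, so treat as a sketch):

    apt-get -o APT::Solver=aspcud install some-package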


> Is "very slow depsolving" even relevant?

Try upgrading between major versions and see what happens.

It's absolutely relevant.


As mentioned, in server land, servers tend to be rebuilt rather than upgraded. In workstation land, major version upgrades don't happen that often.

When I do an apt-based dist-upgrade, the long part of the upgrade is not the single-or-low-double-digit seconds it takes to solve the deps, but the download and installation of the new packages.


In server land, it's more common for machines to be rebuilt from scratch (or at least better to do so).

Granted, I understand some people want "servers as pets" and Linux desktop to be a thing; in industrial applications it's usually not an issue.


I've worked on a large website where all backend servers were running RHEL5. We had pretty good tooling, deployment, config management. We could launch and kill new physical and virtual servers with particular "roles". We did not, however, re-image servers for every deploy. Our deployment/configuration-management would make sure the correct versions of all things were installed, including stand-alone code, services under daemontools, crontab entries, system users, packages, and more.

In most cases, no-longer-specified items would be removed. BTW ansible sucks for this aspect (I use it extensively at a new place) ;)

My point is that the modern attitude of "you can't trust what's on a server unless you build it from scratch or apply an image" is not the only valid way to do things, and is somewhat defeatist, like we're no better than Windows, where you can't guarantee that anything can be cleanly uninstalled or replaced. Is your package manager that bad?

Yum being dog-slow was very annoying. Updating all server lists for every yum command was very annoying. I customized various modules and scripts to use "makecache" and "-C" where appropriate, but that was an annoying task. And yum was still slower than I'm accustomed to a Linux package manager being.

Finally, having a couple of years' experience with yum, I can confidently say that pacman (Arch Linux) and apt (Debian) are worlds better. Maybe zypper and dnf make rpm not suck; I probably won't find out for a few years.


> "Updating all server lists for every yum command was very annoying."

This isn't the case. It updates information when the cache expires, so this will only happen about every 30 minutes or so. It should also (usually) take way less than a minute.

> BTW ansible sucks for this aspect

Yep, it's not meant to understand how to remove resources it doesn't know about. That being said, it feels like I've created PHP at times, and I don't mind people not liking it. Many of the ways we have to automate Linux systems (due to lack of structure and API) are kludgy at times, but removal of packages not present in a master manifest seems dangerous to me for various reasons (group installs, tools self-bootstrapping, work happening out of band). Fair enough.

> is not the only valid way to do things

This was not the argument for immutable systems (though I like them), but rather that you need a good disaster recovery strategy and this is likely the best way to handle a major distribution version upgrade.

Minor versions? Continue to do what you do. Major upgrades between EL versions are full of all sorts of fun.

I've typically seen them done with upgrade kickstarts and the like, and you don't get good error reporting there at all when things go wrong. There have been some advances in pre-downloading and then doing upgrades, but... yeah... not a fan of doing them in an automated context.

> " I can confidently say "

Trying to drive it programmatically, yum seems much better engineered to me than apt. One recent example (and maybe there's a way in apt to do this) was being able to select just one repo to grab packages from during an update; I was missing --disablerepo='*' --enablerepo=X. There are a lot of things like that. Config files also seem a lot more capable. My opinion there too, of course.
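For the record, the yum side of that, plus my rough guess at the closest apt equivalent (repo names are made up):

    # yum/dnf: update using a single repo only
    yum --disablerepo='*' --enablerepo=myrepo update

    # apt: the closest I know of is a target release
    apt-get -t jessie-backports install some-package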


> This isn't the case. It updates information when the cache expires, so this will only happen about every 30 minutes or so.

It doesn't do full downloads of files, but (as of RHEL5) it still makes HEAD requests or something, and re-processes package lists (I guess apt does this too), I dunno. "-C" makes a significant difference. I want it to take less than a second to make a decision or spit out information; other package managers can do that.
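The "-C" usage, for reference (plus warming the cache on your own schedule):

    yum -C list updates   # answer from the local cache only; fails if the cache is cold
    yum makecache         # refresh metadata explicitly, e.g. from cron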

EDIT: also want to say, I appreciate ansible overall, and I appreciate your attitude towards it. I've been close to a project that was good for a couple of uses, got kinda popular, and then people wanted it to be good for all purposes...


Thank you.

Today's "continuous deployment" world, that just "tear down and put the latest version" once a week is crazy.

A stable foundation is important, and updating it without breaking everything is needed sometimes.


> "A stable foundation is important"

It all depends on what kind of apps you are deploying and supporting. While I wasn't even discussing immutable systems in this capacity, some workflows work better for .com style applications. In a typical bank environment where you have thousands of legacy applications floating around, you are more apt to not be able to control the architecture and need to push out security updates.

In-place updates here are fine; however, I still wouldn't want to do an in-place dist-upgrade across all of those systems and then find out which of those thousand applications had problems. In this case, it's better to redeploy those applications if they need a new OS and the old OS is no longer receiving security updates, and try to shift some of that burden onto those who maintain the application.

If you are just deploying a .com app though, you need a good backup/DR strategy, and it helps to be able to redeploy everything and take steps to not get attached to state on that machine.


Rather than forcing us all to install an outdated distro and do a major version upgrade, then trying to guess the particular element of the process you find disconcerting, could you maybe tell us?


dnf _is_ using openSUSE's sat-solver (libsolv). They reinvented the upper layers.


Ironically, package formats have already been standardized: it's RPM, as per the Linux Standard Base (LSB).

Naturally no one fully obeys it, because LSB is a mediocre standard in general.


Because switching just the package manager would yield no benefit at all. Having apt as the common package manager among Debian derivatives is meaningful because those distributions get most of their packages from Debian, so they share not only the same package manager, but also the same packaging format, the same file system hierarchy, and the same package names/package splitting. All Fedora derivatives, openSUSE and Mandriva(?) could agree on using dnf, but that would provide no significant benefit at all (other than reusing muscle memory).


Most of APT packages' superiority comes less from the packaging format than from the entire philosophy surrounding them, most especially in the case of Debian: Debian Policy. Quite simply, so long as I stay within the distro, the quality of Debian packages greatly exceeds that of RPM packages (long-standing, direct, multiple-instance observation of Debian, RHEL/Red Hat, CentOS, Fedora, and SUSE systems).

Some of the benefit also derives from the packaging format itself: the ability to unpack DEBs using nothing but shell tools (busybox within a Debian system's pre-boot ramfs is sufficient and has been successfully used by me). Red Hat loses in this instance by using a ramfs shell that's both 1) larger than Debian's dash and 2) non-interactive -- it's a scripting-only shell, FML.

Joey Hess at one time had a detailed comparison of various packaging systems. He's pulled it apparently due to political / fanboi bickering, which is a shame. He's author of 'alien', a tool for converting between packaging formats.

But Policy (and fucking enforcing the fucking hell out of it) trumps.

Source: 18 years' use of both distros and many other Linuxen. 30 years' experience on further Unixen.


For a user, DNF is mostly just a quicker YUM. It still deals with RPMs and the UI is similar (at least for basic operations).


Everyone should switch to Portage. It's like a superset of every package manager.


The portage ebuild system for describing package dependencies and build procedures is awesome. The portage program for solving dependencies and building packages is mediocre at best -- it's slow and too prone to not finding a solution, even when the constraints of source compatibility are looser than those of binary compatibility. The Gentoo portage repository of packages is clearly understaffed, and orphaned packages are all too easy to run across.

All three of those things have gotten better over the years, and I have little doubt that given the attention and effort that RedHat and Debian package managers get, portage could be a clear winner. But the portage we have now has too many pitfalls to be the best all-around choice.


Correct, the all-around best choice is Exherbo's package management. It is extremely similar to Gentoo (after all, many of us used to work on Gentoo), but I'd like to think it has fixed all the problems Gentoo had.

Portage is not used.


There's no way that Exherbo's package repos are anywhere near as well maintained and broad as the distros that people have actually heard of. There's no silver bullet for that problem; the only solution is manpower that they don't have.


Exherbo's package repos are incredibly well-maintained. What is provided generally always works, and things are kept very much up to date (KDE/GNOME/Chrome/Firefox updates within 24 hours, usually). Lack of public awareness doesn't always mean the system isn't as well maintained as a system like Gentoo (which often breaks!).

The broadness of the system isn't quite as vast as many distributions but running a desktop / dev workstation I have never encountered a package not available that I needed.


If you haven't encountered packages missing from their repo, then their search must be broken. In just a few minutes of searching, I found that they seem to be missing anything GIS-related, netperf, smokeping, targetcli, and any DAAP server. That's just stuff I've been using my Linux box for in the past month, but it seems like Exherbo would make me do at least as much work as something like MacPorts!


Perhaps my needs are just different from yours. What I did say was "I have never encountered a package not available that I needed". That is not contradicted by your example. Your needs are different, that's cool. What isn't cool is claiming that the search is broken because I haven't found the need to search for those packages.

Exherbo may not be for you. It values users who are willing to be developers as well, and augment the system with the packages they need. You want others to do the work for you, that's not what Exherbo is about.

Besides, the original discussion concerned what package management system was best, not if it had tool X, Y or Z that you claim is very often needed.


In reply to a comment that listed the quality of the package repos as one of three major areas of concern, you said that "the all-around best choice is Exherbo's package management" and that "I'd like to think it has fixed all the problems Gentoo had".

If you can't be honest about its shortcomings, you won't be able to convince anyone to try out your pet project. It doesn't matter how reliable and trouble-free it is at managing the core of the system if it immediately degrades to "build it yourself" anytime you want to use something that's not popular enough to make the cut for a live CD.


Perhaps I was unclear, and if that's the reason for any confusion I am sorry. I was referring to the majority of the comment which was about portage's shortcomings (though it is also true that ebuild quality is a major problem for gentoo). I specifically was comparing portage/the Gentoo package management infrastructure (NOT the package repos per se) with Exherbo's package management infrastructure (by which I mean the package manager, alternatives handling, repositories). This is what I meant by "Exherbo's package management"; that does not mean the breadth of the repositories.

I like to think my comments were honest: I admitted that the system while technically superior does not have the breadth that larger distributions do, but that for my purposes it was sufficient. You ignored that and found some packages not currently packaged in an attempt to disprove my experiences. Furthermore I admitted that the project may not be for you since you expect different things from a distribution than many of us do. What is dishonest about any of this? I have been incredibly frank.

Besides, one of the nice things about Exherbo is that it handles the nonexistence of a package rather seamlessly. You can compile it by hand, install to a tempdir, and then have the package manager merge it directly while giving you the ability to specify information about the package (metadata, dependencies, etc.). And then of course the package manager can uninstall it when you no longer want it. This makes the problem of "build it yourself" kinda moot.

I'm not going to bother responding to the "make the cut for a live CD" remark, since obviously there are far more packages than would fit on a live CD or live DVD.


If we're going to be one-upping each other, then let me suggest Nix, a superset of Portage.

It has many of the strengths of Portage, and even goes beyond them (it builds from source, and users may configure package dependencies, like USE flags on steroids -- the declarative language used to write packages is also used to configure them, so you can do more than just pass flags to a package). But it offers a substantial advantage, because builds are deterministic. The set of installed software (with all needed configuration) is determined from a config file (or many), and from this it's always possible to build the same system.

This means that upgrading always ends up in the same state as installing from scratch. It also means that common packages can be cached as binaries without risk of breakage: a binary package is only downloaded when building from source would produce exactly the same binary.

Nix also features atomic upgrades and rollbacks: it only touches the running environment as the very last step of the upgrade (setting up a symlink), and it stores the previous versions of packages until they are garbage collected, so an upgrade stopped in the middle can't break your system (the exception here being kernel upgrades). Indeed, if you interrupt an upgrade or install at any stage, just issue the command again. (Also, this architecture makes it incredibly concurrent.)
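The day-to-day commands, roughly (the package name is just an example):

    nix-env -iA nixpkgs.hello   # install into a new profile generation
    nix-env --rollback          # atomically switch back to the previous generation
    nix-collect-garbage -d      # delete old generations once you're sure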

NixOS is a distro that uses Nix. It can provide a GRUB menu to boot previous versions of the system. When you upgrade, you can have it affect the running system or only take effect after a reboot. Either way, when you reboot, GRUB will give you the option to boot the system you were using before the upgrade. On a technical level, Nix works a bit like a git repository: each package is addressed by the hash of its derivation (which says how to build it, and all its dependencies), and if more than one system version uses the same package, it gets stored only once.

Coupled with NixOps, a deployment tool, and Disnix, which does service-oriented deployments (like Docker), Nix can help build more repeatable systems for production servers too.

Some links:

https://nixos.org/

https://nixos.org/nix/

https://nixos.org/disnix/

http://blog.lastlog.de/posts/useflags_in_nixos/


Are there any guidelines/docs on running Nix in an existing distro? I'd love to play around with it without having to switch the whole world.


I found this blog really helpful:

http://lethalman.blogspot.com/2014/07/nix-pill-1-why-you-sho...

It's a series that walks you through setting up and using nix. I found it really simple and satisfying - I now use nix on Ubuntu.


Cool, thanks!


You mean the better system that inspired portage, BSD Ports, right?


No, it really isn't. Portage poorly solves, or neglects to solve, most of the yum problems that dnf is intended to fix, and is, on many fronts, inferior to yum as well.


If the portage/gentoo guys were not treating portage as if it were running on a gazillion-GHz CPU, they would have a shot at making portage compete with other systems. Don't say it is the best. Not by any stretch.

I feel ashamed that for the last ten years I could not find a kind word for the portage developers. For what it's worth, they don't care.


You mean Paludis, right (on Exherbo too, and not Gentoo, which is still stuck in the stone age)?

Paludis does correct dependency resolution while Portage just pretends to.


The lower the stakes, the stronger the feelings. Cf. text editing.


Given that the package manager is the backbone that makes the Linux distribution (in most cases), it's not low stakes at all.


As long as all of the alternatives work, and they do, it is pretty low stakes. Realistically, how much difference does the choice between yum/dnf/apt-get really make to my everyday life installing and upgrading packages? Very, very little.


Considering I spend probably half my time using a computer in an editor, I'd say the stakes couldn't be higher.



Apt is part of a toolchain that takes the Unix philosophy to an extreme. With this we benefit from backwards compatibility and get to keep using our muscle memory, but it also means that if, say, you want to find out which uninstalled package provides a specific file, you have to install another tool.


[deleted]


How so? I don't think you can do that with apt alone; you need apt-file (which is a separate tool).
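i.e., roughly ('bin/convert' is just an example path):

    # Debian: separate tool, separate cache
    apt-file update
    apt-file search bin/convert

    # Fedora/RHEL: built into yum/dnf
    dnf provides '*/bin/convert'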


Oh right. I misread your comment, sorry.


apt-file is pretty damn lightweight when you get to "another tool" stakes.


At some point *hat could do us all a favor and remember that we're just humans.



