If you're a developer, try to package your own things for the distros you care about, and accept patches that fix packaging issues. Think about how much easier your ops friends, continuous integration systems, and the like will have it if all they have to do is run an "apt-get" or "port install" to get a fully built version of your code.
Debian packaging has a long history of being insanely hard to get right. That was certainly true in debhelper v5 and v6 times - if you wanted to package ANY software, you needed to copy-paste from an existing debian/rules and then tweak all the dh_* calls until you got a working build. It was possible to get the packaging into a "working condition" by sheer stubbornness. (Trying to learn Debian packaging from scratch was like trying to learn auto* from scratch.)
With debhelper v9 and its lovely sequencer, things are a lot easier. The once-so-arcane debian/rules is now only a few lines, and as long as you set DH_VERBOSE=1, you can see exactly which internal helper gets called at any point of the packaging run. If you need to override a step, simply define an "override_dh_auto_FOO" target and that's it.
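For reference, the entire debian/rules under the v9 sequencer can be as small as this (the standard dh boilerplate, nothing project-specific):

```makefile
#!/usr/bin/make -f
# Show every dh_* helper the sequencer runs during the build.
export DH_VERBOSE=1

%:
	dh $@
```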
There's less boilerplate, and thus the packaging will be quite a bit easier to adapt to official Debian standards. So it will be (hopefully) less work for an official NM/DD to adapt your simple rules and get the package into a shape where it could be uploaded to experimental or unstable.
Back in my previous life I actually wrote two or three sets of instructions for random developers on how to get their software into a working Debian package. And then, when the customer's project manager wanted to try his hand at coding too, I came up with a bunch of reductionist scripts and templates that made it possible to automatically generate a sensible debian/* hierarchy. (That effort was later taken over by developers more capable than me - and made into a proper QtCreator plugin. That thing has offered an "easy Debian package" workflow for a couple of years now...)
I still can't produce packages that would pass the extremely high bar Debian sets for its packaging standards, but I sure can put together packaging that works and is relatively easy to polish.
Yes, I maintain Fedora and Debian packages for my upstream project (I'm not a Debian developer - I pass my package to a sponsor).
Debian is much harder to get right, though admittedly I have more experience maintaining Fedora systems. The Debian packaging process seems an order of magnitude more difficult, mainly because of the magic spread across various scattered documents and the many different build systems. I've yet to find a good document explaining how all this stuff interacts. rpmbuild is conceptually much easier for me to understand.
You also have more boilerplate documents (e.g. copyright) that Fedora doesn't care about. Fedora also has a nice system where you can get builds done on a server for various architectures and different releases.
One of the reasons rpm packaging is so much easier may be the way the .spec files specify the patches you need to apply against the pristine source.
You have a number of PatchXX definitions, and then in the %prep stage you run %patchXX -pN.
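A spec fragment illustrating that mechanism (the patch file names are made up):

```spec
# Patches applied against the pristine upstream tarball
Patch0: fix-build-on-arm.patch
Patch1: no-bundled-zlib.patch

%prep
%setup -q
%patch0 -p1
%patch1 -p1
```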
Compare with Debian: you have debian/source/format, which declares which patch format is expected. The old (and ugly) dpatch? Quilt? Native? Change the source format from one version to another and you need to respin the patch set too.
Whereas for rpm you simply do a "git format-patch -o $patchdir ..." and then copy-paste the patch sequence into the specfile. I've done both, and I can see the different benefits of the two systems. Debian's requirement that the name in debian/changelog match the upstream source name is a recurring hurdle; I still occasionally forget to get it right on the first go.
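That git-based flow can be sketched end to end (the repo, paths, and commit messages here are all made up for illustration):

```shell
#!/bin/sh
set -e
# Illustrative only: build a tiny git history, then export the downstream
# commits as a numbered patch series suitable for Patch0/Patch1 lines in a spec.
rm -rf /tmp/demo-src /tmp/patches
mkdir -p /tmp/demo-src
cd /tmp/demo-src
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "upstream 1.0 release"
echo "fix" > fix.txt
git add fix.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Fix build on demo arch"
# Everything after the release commit becomes 0001-*.patch, 0002-*.patch, ...
git format-patch -q -o /tmp/patches HEAD~1
ls /tmp/patches
```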
Then there's the split between build rules and package descriptions (debian/control vs. debian/rules). For rpm all of these are in the same file.
But then you get into packaging maintainability... For all the overengineering you feel in Debian packaging, updating it even across bigger upstream jumps is actually easier. When upstream changes things in a big way, catching up in the specfile takes quite a bit more work.
Having tried to do some Debian packaging, it still feels horribly arcane and hard to get right. I think part of the explanation lies in "If you need to override a step, simply define...". How do I know what steps there are, and what each one is doing when I don't override it?
The system saves on boilerplate, but it doesn't provide new abstractions for the packaging process. So you still have to know what is actually going on, and that's harder to discover when it's being done magically behind the scenes than when it's done by boilerplate in the rules file. So it's easier for experts now, but hard for novices to get beyond the most trivial cases.
Of course, the impression that it's difficult is reinforced by Debian's extremely strict standards for accepting packages.
> How do I know what steps there are, and what each one is doing when I don't override it?
That's what DH_VERBOSE=1 is for. Export it at the beginning of debian/rules and then do the usual "dpkg-buildpackage -rfakeroot -us -uc" dance.
You will see every dh_* command being executed. If it produced any output, you'll see that too. Most of them are no-ops for small projects anyway.
For example: the build output will show "dh_auto_configure", which under the hood calls ./configure with the stock Debian flags. Your project doesn't use autoconf? Put this in the rules file:
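A sketch of such an override (the helper in the v9 sequencer is dh_auto_configure; "./mysetup.sh" is a made-up stand-in for your project's own configure step):

```makefile
# Skip the stock configure step and run our own setup instead.
override_dh_auto_configure:
	./mysetup.sh --prefix=/usr
```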
Same for dh_test, or any other bit. The funky bit is that if your package needs some special treatment, you can cheat fairly easily. Just hijack a logically suitable no-op step and put the manual tweaks into the override. The verbose output shows you exactly which steps do nothing in your project. :)
I found it difficult to figure out .deb packaging for our in-house services when I had a crack at it last year. There are a dozen ways to do it, with no real agreement found by my google-fu, and if you don't have a makefile, it gets even harder to find documentation to help.
I saw a comment on a feature request in FPM by Jordan Sissel: "I'd think about implementing it, but there's so much silly ceremony around .deb packaging" (paraphrased). 'Silly ceremony' stuck with me. I'm sure it makes sense once you know, but there's a steep learning curve.
I'd like to give an honorable mention to Arch Linux here. Arguably, it has the easiest packaging out there.
To create a package you write a single PKGBUILD file with a few required fields (like pkgname, pkgver, etc) and a bash build script - that's it. Then you run "makepkg" and it's done.
Take a look for yourself. This is a build script for tmux:
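(The original script isn't reproduced here; the following is an abridged sketch of what a tmux-style PKGBUILD looks like - fields and sources are illustrative, not the official Arch recipe.)

```bash
# Illustrative PKGBUILD sketch, not the official Arch tmux recipe.
pkgname=tmux
pkgver=1.8
pkgrel=1
pkgdesc="A terminal multiplexer"
arch=('i686' 'x86_64')
url="http://tmux.sourceforge.net/"
license=('BSD')
depends=('ncurses' 'libevent')
source=("http://downloads.sourceforge.net/tmux/tmux-$pkgver.tar.gz")
md5sums=('SKIP')

build() {
  cd "$srcdir/$pkgname-$pkgver"
  ./configure --prefix=/usr
  make
}

package() {
  cd "$srcdir/$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Run "makepkg" in the directory containing the PKGBUILD and you get an installable package.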
This is exactly why I created Package Lab: to make Linux packaging easier for developers and ops folks. It's currently in private beta and I would love feedback. Please get in touch with me if you're interested (email in profile).
aaaand invite request sent. I'm a big believer in packaging, and $current_job is of the opinion that packages "aren't Cloudy/DevOps/new enough" to be worth investing in. If I could show them a RESTful API, that might just be what it takes for them to move beyond the pain of 'git pull' deploys!
Great! My experience has been that packages simplify automation and play nicely with DevOps tools like Puppet and Chef. It's dead simple to write a recipe to install a package, whereas things get complicated quickly when you try to write a recipe to orchestrate a build process, e.g., git pull, install compilers, ruby-build, install gems, compile assets, etc.
To help convince folks on the value of packages, you could try pointing them toward Michael Stahnke's presentation "The Balance of Packaging vs. Puppet Manifests":
Shouldn't packaging be left to each distributor's experts? As a developer of software you're an expert in whatever your software does--web server, calculator, compiler, whatever. There is no way you can be an expert in your software's functionality and in packaging, especially for more than one distribution.
I'm always wary of installing packages not provided by my distribution, just because I have more faith that my distribution's packagers know how to package and they are more likely to use QA processes that will catch packaging bugs. I'd rather compile from source than install third-party packages; I can make my own rudimentary package that would be woefully inadequate for general use but that is perfectly sufficient for my needs.
> There is no way you can be an expert in your software's functionality and in packaging, especially for more than one distribution.
The goal of packaging your own software shouldn't be to spin up a public repository of your own, but to get it into distribution repositories. You don't need to be an expert to make decent packages, nor do you have to toil on your own for perfection. Just have something to show for yourself when you get the package reviewed by the distribution maintainers.
> Shouldn't packaging be left to each distributor's experts?
I get where you're coming from - there are some apps that want to redefine how packaging or your OS works - but I think it's easier for software devs to learn things like the FHS and packaging than it is for distro maintainers to judge whether an arbitrary piece of software works or not.
+1 on making your own packages, though. More people should do that - if you're smart enough to compile from source, you can easily make an RPM or a deb.
> Shouldn't packaging be left to each distributor's experts?
My take is that any package is better than no package. At least then there's something for a sysadmin, distributor, or other ops person to work with, and it shows that you actively want your software to be packaged.
You're the expert on your program and what it needs to work, knowing what the dependencies are, etc - inject that knowledge into the spec/port file and then whatever upstream packager will have even more to work from.
> I can make my own rudimentary package that would be woefully inadequate for general use but that is perfectly sufficient for my needs.
Why not make that and share it? Having the rudiments around as a starting point to fix and improve is almost always better than having nothing.
Think of making packages as a collaboration, rather than something that someone else has to do for you.
I think that depends. Arch Linux packages are almost just bash scripts. People can request fixes. I am sure when you first started programming you weren't an expert, and the same logic applies to making packages. For multiple distributions, you may be right.
I suppose it's fair enough that you'd rather build third-party software from source, but with most package managers I would guess you could download the package and look at how it is built, or at least what files it contains. Using packages can make cleaning up after a bad package easier.
If there is no proper Debian package, you can use checkinstall or GNU Stow. These let you install from a source tarball and still cleanly remove the installation if something happens, good or bad (a proper Debian package shows up, or the compile/install was botched).
In an ideal world, software developers should merely prepare their software to make it easier for packagers to whip up a package based on a template. As a packager I know the pain of having to discover the magical way a piece of code is intended to be built, tracking down its dependencies, and stringing together the means of packaging the finished product based on how the developer assembled it.
Some of the problems you'll face when packaging random code:
* non-standard version numbers
* non-standard file paths
* non-standard configure/make/install routines (or none at all)
* custom or non-standard build/install tools
* lack of build/configure/install documentation
* lack of dependency documentation (including what versions of what dependencies might conflict)
* code that isn't portable past the desktop the developer wrote it on
* lack of best practices in setting up multi-user permissions
* lack of pre/post install scripts
* lack of simple init scripts
* lack of minimal configuration files
* lack of pidfile, log rotation, crontab, environment, and other necessary ops functions
* patching code just to get it to build and install on your distro
* bundled, highly patched, third-party software (!!!)
Learning to make packages is easy, but what's even easier is leaving your source in a state that's convenient for packagers of all distros to put together with little or no effort. Here are some suggestions on how to do this:
1. Make sure your code is portable. Please, try not to hardcode things; use a 'configure'-type process to allow someone to override things like file paths, usernames, and other variable data. Test it on multiple architectures or distributions to see what kind of dependencies you might need to document.
2. PLEASE use a standard configure/make/make install method for your language, and don't write one yourself or depend on some backwards antiquated build tool that nobody uses. Interpreted languages often have their own standard, and compiled languages often use autoconf/automake. Learn them and use them, or at the very least, make an incredibly basic Makefile. And please document the expected permissions and users (chowns/chgrps/chmods).
3. Make sure that you can run one command for each of the configure/make/make install steps. Make sure it can be passed options or environment variables to change the root directory to install into. Make sure your install procedure puts them all in Filesystem Hierarchy Standard paths. Make sure it works as a non-root user; the packager will add the permissions and ownership as metadata to the package.
4. Provide working samples of all the things your app will need. Crontabs, log rotation, pid file tracking for daemons, configuration files, documentation, init scripts, preinstall/postinstall/uninstall scripts.
5. For the love of god, use standard version numbers. Either X.XXXX, X.XX.XX, or X.X.X.X. Pick one and stick with it. Otherwise different distros might actually use different version schemes for their packages which confuses everyone. Please don't include any non-numeric characters (fuck you, openssl! hex doesn't belong in versions!!!)
6. Fully document your ChangeLogs. A packager needs to know that your new release now requires a newer dependency or breaks ABI compatibility, because now they have to modify the metadata of the package to specify what deps will or won't work. Remember that users often install and uninstall different versions of different software.
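Points 2 and 3 above can be as simple as a Makefile that honors PREFIX and DESTDIR (a minimal sketch; "myapp" is a placeholder name):

```makefile
# Minimal sketch: install to FHS paths, overridable via PREFIX and DESTDIR.
PREFIX ?= /usr/local
BINDIR  = $(PREFIX)/bin

all: myapp

myapp: myapp.c
	$(CC) $(CFLAGS) -o $@ $<

install: all
	# DESTDIR lets the packager stage the install into a build root as a
	# non-root user; ownership and permissions become package metadata.
	install -D -m 0755 myapp $(DESTDIR)$(BINDIR)/myapp
```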
"2. PLEASE use a standard configure/make/make install method for your language, and don't write one yourself or depend on some backwards antiquated build tool that nobody uses. Interpreted languages often have their own standard, and compiled languages often use autoconf/automake. Learn them and use them, or at the very least, make an incredibly basic Makefile. And please document the expected permissions and users (chowns/chgrps/chmods)."
This, a thousand times this.
A basic Makefile is always preferable to me finding a problem with your magical unicorn make system of choice (read: CMake).
The problem is that make and all make-based systems that do anything non-trivial (e.g. code generation) are terrible. They all compute the dependency graph prior to starting the build, which in the case of code generation plus dependency scanning is simply wrong. They all become painful with complexity. They all end up with false negatives, where you have to "make clean" to continue working reliably. They cannot pass parameters to sub-targets in order to build the same target in slightly different ways for different contexts (e.g. unit tests vs. system builds).
For some value of "worked", in which people generally have to "make clean" every now and then, and get false negatives that cause cryptic bugs now and then.
Not to mention auto-generated code being extremely difficult to work with.
Every company I've worked at had replaced "make" with something in-house, because make is simply horrible.
cmake is standard enough now that 'magical unicorn' is rather unfair. I've had more difficulty with bare make/automake systems than with cmake (not that cmake is at all stellar, but it can at least handle spaces in filenames).
However, CMake sucks in terms of installation paths. Ever tried installing a library into a libdir other than ${prefix}/lib? In the worst case, the developer tries to "fix" this by guessing the architecture and appending a magical 64. That works on some distributions, but of course not all of them.
I got sick and tired of this situation and wrote a small CMake module [1] that accepts standard configure options to determine the installation path.
I am not saying it is hard, I just wanted to point out that CMake does not provide something like this out of the box. Quite the contrary, now every developer has to re-invent this tiny wheel, and I've seen this going wrong several times.
Not sure where the cut off is or should be, but it doesn't feel standard to me.
On my anecdotal desktop install there are two packages that use cmake: awesome and taskwarrior.
Grep'ing the ebuilds on this Gentoo box shows me ~400 packages out of ~16000 that use cmake (excluding KDE, where it clearly is standard). A lot of the remaining ones seem to be very project-specific too (70 leechcraft, 11 fcitx, 10 opensync).
For me it's not really about difficulty, it's about extra time. Is it going to take me extra time to find, download, install, and learn how to use your unusual build tool? If so, that sucks. Using the most standard/common build tools saves packagers headaches.
Autoconf/Automake are more popular so it's going to be a lot easier on packagers if developers use them. But if they use them incorrectly, packagers will have to hack on the Makefiles when they could have run a bootstrapper like autoreconf.
This cannot be overstated. At conferences I have heard packagers being made fun of, with comments to the effect of "they are so cute, they don't understand our problems".
To me it seems some packagers are stuck between a rock and a hard place - distro requirements vs. application-specific (vendor) requirements.
So the vendor likes to bundle a specific library .so in the package, or compiles external modules statically. Distro requirements, for security reasons for example, disallow that. The packager is left in the middle trying to reconcile the differences between the two. I have seen people stop maintaining their packages over this.
Yeah sometimes there is nothing you can do and the latest coolest database or web browser or game just can't easily go into the main distro.
But I also can't tell you how happy I have been in the past when a library I wanted to use was in the repo and I could just do an apt-get or yum install - all thanks to someone putting the work into packaging it. Instead of packaging and building it myself, I saved a lot of time.
Just keep doing what you guys are doing (started using Arch after years of Debian, Fedora, Ubuntu, in order of preference). I am hooked. Thanks for finding the sweetspot between things I like with FreeBSD/Gentoo/Debian in terms of packaging and philosophy.
They deserve thanks times a million in my book. They and the rest of the open source community, but OSS didn't feel as great before as it does after I made the switch.
These packages are also in that group, but all have huge package sizes (mostly due to bundling image or sound data), so I imagine that contributes to their exclusion:
These are proprietary, or are only used by proprietary software, but to me they seem to fall into the same class as Steam / Flash / etc., in that they are all at least freeware.
binkplayer
btsync
btsync-autoconfig
desura
dropbox
hon
honpurple
minecraft
minecraft-server
spideroak
"Top" would be most of the kde stuff and all the foss id-tech engines for games. Though I imagine the reason none of them are in is due to some licensing reasons concerning the game data, though it is worth mentioning the demo data for most of those games is also in the AUR.
And simplescreenrecorder - the best desktop screencasting software I've ever used. I'm still wondering why it's in the AUR.
Thanks for the awesome work to all Arch packagers.
I really think isync deserves being brought into [community]. It's an awesome IMAP syncing tool, created by the same guy who coded mutt and used by a lot of people.
Moving it to [community] would get it a wider audience. IMHO it's much better than offlineimap.
I've come to depend on it heavily for managing all my media files (pictures, music, video, etc.).
I currently maintain the git-annex-bin package in the AUR[0], which uses the official binary, but that's compiled for Debian and has issues on Arch.
For people who want to use git-annex on Arch, the choices are either (1) use this buggy binary, or (2) install cabal and a bunch of Haskell stuff[1], just to compile from source.
[1] which requires adding a separate repository, since Haskell is no longer in the main repos, and also installing a bunch of Haskell libraries that aren't in the AUR.
My experience with Haskell Arch packages - the lack of forum feedback, plus my troubles with dependencies for pandoc and git-annex-bin[0] - suggested I should just go and cabal it.[1] That is just me.
I know that goes against the point of this whole post, but I wanted to learn more about Haskell and part of that for me was leaving the farm and running wild, so to speak.
This is my experience as well, and the person who maintains "ghc/cabal-install/and the other haskell packages in [extra]" had a similar sentiment in a reddit thread[1] eight months ago.
Cabal is able to give you enough headaches on its own. :-)
I do love pacman and the Arch package database in most other regards, however.
How about 'packjpg' for the AUR? It's an open-source lossless JPEG compressor that gives ~20-25% compression; the resulting compressed files are stored either as single (.pjg) files or as archives (.pja), which can then be decompressed back into the original JPEGs.
While I like Arch's philosophy of always being on the cutting edge and making the glfw package be GLFW 3.0, I think it would be good if GLFW 2.x stayed in the official repositories for a while as a separate package. The two packages don't interfere with each other. It's basically the same situation as with SDL 1.2 and SDL 2.
The arduino package has been sitting in the AUR for an oddly long time given its relative popularity. I've been waiting for it to go into [community] but it hasn't so far.
Or ask them to join Gittip! After they're signed up, others will be able to start giving them recurring weekly gifts that are totally anonymous and categorically no-strings-attached. It's like a DFTBA grant.
Disclaimer: I've taken 4 months off work to help work on the platform.
Debian does have an impressive list of packages, but I'd rather use Zypper than Apt any day. It's the only mainstream package manager that actually lets you choose what to do when a dependency can't be satisfied or there's a version mismatch.
I'm lazy when it comes to these things, so I use FPM. It's great if you're dealing with RPM and .deb packages (and a few others). I saw someone else (vacri) mention it in this thread, but it definitely deserves consideration if you want to side-step setting up a packaging environment.
To me, it's rather sad that we still have to rely on multiple third-party apps for dependency management - something that a modern OS should have baked into its core.
Could they have used a better example than MUMPS, though... yes, I know it works with its peculiar brand of duct tape, chewing gum, and baling wire, but still...
Things that help:
http://fedoraproject.org/wiki/How_to_create_an_RPM_package
https://wiki.debian.org/IntroDebianPackaging
http://guide.macports.org/chunked/development.creating-portf...
http://www.openbsd.org/faq/ports/guide.html
http://www.netbsd.org/docs/pkgsrc/creating.html