Tips for C libraries on GNU/Linux | Hacker News

Original source (news.ycombinator.com)
Tags: news.ycombinator.com
Clipped on: 2013-02-13

Most of those tips are great, but don't use autotools! The points they give for choosing autotools are exactly the same ones one would have read maybe a decade ago when people said that CVS was "good enough" and that it "just works." Subversion, git, and several others are much better alternatives and were developed because people weren't happy with CVS's quirks and crappiness. One example is waf (http://docs.waf.googlecode.com/git/book_16/single.html), arguably just as well documented as autotools but without all the cruft and generated-files madness autotools forces you to go through. CMake (http://www.cmake.org/) is another, used by KDE 4. SCons (http://www.scons.org/) is another nice build tool. These also have the advantage of working much better on Windows, a platform autotools is completely alien to.
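
For scale, a complete CMake build description for a small shared C library can be this short (a sketch in 2.8-era syntax, with made-up file names, not taken from any project mentioned here):

  # CMakeLists.txt
  cmake_minimum_required(VERSION 2.8)
  project(foo C)
  add_library(foo SHARED foo.c)
  set_target_properties(foo PROPERTIES VERSION 1.0.0 SOVERSION 1)
  install(TARGETS foo LIBRARY DESTINATION lib)
  install(FILES foo.h DESTINATION include)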

-----

Eh, if it's a C library, use autotools. Really, the original article says it best:

  - We are all used to autotools, it works, nobody cares.
...

  - And really, anything but autotools is really not an option. Just get over it.
    Everything else is an experiment, and it will come back
    to you sooner or later. Why? Think cross compilation, installation/
    uninstallation, build root integration, separate object trees,
    standard adherence, tarball handling, make distcheck, testing,
    portability between distros, ...

-----

Hoff 91 days ago | link

Have you ever tried to port a package that is using autotools to a platform that is not based on GNU/Linux?

Among those of us who have attempted or completed such a port, most will probably not look particularly favorably on autotools again.

One of the easiest approaches here involves reverse-engineering the autotools product from a test run performed on a GNU/Linux box, and then translating and building a platform-specific script.

From what I can tell of it, autotools makes substantial use of GNU/Linux features. Which is certainly fine for its intended platform and typical usage, but it's an approach that is Not Fun for porting that code.

-----

radarsat1 91 days ago | link

Are you kidding? Portability to other platforms is the main selling point of autotools. Why do you think it goes to such pains to avoid bashisms in its output, sticking to raw POSIX shell code, even to the point of avoiding functions and loops? This is specifically to support weird, esoteric *nixes.
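
To illustrate the kind of bashism being avoided (my example, not the commenter's):

  # bash-only: double-bracket test with pattern matching
  if [[ $CC == gcc* ]]; then echo "GNU compiler"; fi

  # the portable POSIX idiom configure output sticks to
  case "$CC" in
    gcc*) echo "GNU compiler" ;;
  esac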

Even on Windows, it works fairly well with MinGW. Most porting pain comes from bad package management.

Many POSIX-compatible libraries using autotools/libtool can be compiled on OS X without changes, and often with MinGW too. That isn't true for many other systems.

More importantly, when you do need to make a change, it's a simple edit of shell code and some pretty straightforward makefile variables. It could just be unfamiliarity, but I've tried customizing the output of CMake for some specific situations, and the complexity of its makefiles is pretty prohibitive. Trying to change the CMake source files is annoying and error-prone for someone unfamiliar with its "language" and variable-naming conventions.

-----

Hoff 91 days ago | link

Kidding? No, I'm not. To my point: try porting an autotools package to OpenVMS, for instance. No autotools. Weak bash. No CMake, either. And porting autotools itself has been an ongoing project; it's been no small effort, attempted by a number of folks, and (so far) without particular success.

-----

qznc 91 days ago | link

So what does work on OpenVMS?

-----

Hoff 91 days ago | link

gmake scripts are usually fairly portable to OpenVMS, and there is a decent — though not great — ability to invoke bash build scripts directly.

If I had my druthers here, I'd like a build tool that wasn't a layer atop GNU/Linux/Unix — and that's not to imply how autotools works here is at all bad or particularly wrong, it's just an approach that makes the tools a bear to port — and have that (portable) build tool then generate the platform-specific build script, build procedure, or build-whatever equivalent.

Conceptually: to move the existing builds from a procedural, interpreted approach into a higher-level and object-oriented approach.

-----

jff 91 days ago | link

Autotools will go to great lengths to check if I'm running BSD 4.2, or a specific version of Xenix from 1987, but just you try running it if your "ls" doesn't have a -i flag! Apparently, a filesystem not based around inodes is just inconceivable.

-----

ArbitraryLimits 91 days ago | link

> Have you ever tried to port a package that is using autotools to a platform that is not based on GNU/Linux?

I think you mean Unix, not GNU/Linux. Porting from one flavor of Unix to another is the raison d'etre of autoconf. It's really quite valuable when you're the one in charge of babysitting 20-year-old AIX or HPUX boxes.

It's not even meaningful to talk about using the autotools on a non-Unix platform, since autoconf emits shell scripts. It sounds like you've run into problems with automake generating Makefiles with shell code in them on a non-Unix platform?

-----

jff 91 days ago | link

The thing is, 20 years ago maybe you developed on a Sun box, then the guy down the hall tried to compile it on his HP machine and it crapped itself because of some Sun-related assumption you made in setting things up. Then you mail it to a colleague across the state and he compiles it on OSF/1, hurrah!

But now, people write Linux libraries, tested on a Linux box, for people running Linux, and only compiled on Linux. There may be OS X support. That's it. This is not the 1980s, when 10,000 Unixes bloomed. With 4 Makefiles, you could support Linux, OS X, Windows, and FreeBSD, and thus cover 99% of your audience. I'm sick of watching configure scripts take longer to run than the actual compilation.
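
The "4 Makefiles" approach might look like a thin dispatch wrapper; a sketch in GNU make syntax (the comment doesn't prescribe any particular layout):

  # Makefile: pick a per-platform include by host OS
  UNAME := $(shell uname -s)
  ifeq ($(UNAME),Linux)
    include Makefile.linux
  else ifeq ($(UNAME),Darwin)
    include Makefile.osx
  else ifeq ($(UNAME),FreeBSD)
    include Makefile.freebsd
  else
    # e.g. under MinGW, where uname prints MINGW32_NT-*
    include Makefile.windows
  endif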

-----

dlitz 90 days ago | link

The Linuxes still have different library locations, and things move around as new things get added (e.g. multiarch).

Seriously, get over it and just use autotools.

-----

jff 90 days ago | link

Get over it and just use your slide rule.

Get over it and use punched card decks, timesharing is a fad.

Get over it and use FORTRAN.

Get over it and just use Windows.

-----

jimwise 83 days ago | link

Except that before "just supporting Linux (and maybe MacOS)" was the thing to do, it was "just support Sun (and maybe HP-UX)". And before that, it was "just support BSD/Vax (and maybe SVR2)".

So, when you take a broader view than "just support what everyone uses", you're not just helping niche platforms -- you're future-proofing.

-----

wladimir 91 days ago | link

Having some experience porting software to embedded architectures, I can tell you that cross compilation can be hell in autotools projects just as well. In theory it's easy, but all the projects handle it slightly differently, or have hacked around autotools limitations in different ways that don't work with cross compilation, etc.
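
For reference, the canonical autotools cross-compile invocation is just two extra flags (the ARM triplet here is only an example); the hell usually starts when a project's hand-rolled checks ignore them:

  ./configure --build=x86_64-pc-linux-gnu --host=arm-linux-gnueabihf \
              --prefix=/usr
  make
  make install DESTDIR=/tmp/staging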

And separate object trees are nothing new; the newer systems don't even mention it as a feature since it's the default (for example, in CMake).

Standards adherence: you could say something for that, though CMake is standard in a pretty large number of projects these days too.

Installation/deinstallation: all of the tools support installation and setting an install prefix. And nah, half the time deinstallation doesn't work anyway. I think that's the job of a package manager (or, for experimental stuff, just install into /opt/XXX).

Their arguments are not very convincing, IMO. Unlike the rest of the article, which is pretty good, this is just bikeshedding.

-----

comex 91 days ago | link

I rather dislike autotools (every time I wait for a configure script to run a million compiler tests whose results could be stored in a global cache, it's wasting my time), but I wish some of the alternative build tools placed a higher priority on being compatible with it rather than inventing their own syntax. If I want to do something like cross-compile or even just add custom cflags (ugh, cmake...), I know which configure option will do it, and it's not a particularly bad interface either. From a user's perspective, build systems are usually boring; a consistent user interface makes the world a little simpler.
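
The consistency being praised: the same incantation works across unrelated autotools projects, while the rough CMake equivalent has to be looked up per project (a sketch, assuming default generators):

  # any autotools project
  ./configure --prefix=$HOME/opt CFLAGS="-O2 -g"
  make && make install

  # rough CMake equivalent
  cmake -DCMAKE_INSTALL_PREFIX=$HOME/opt -DCMAKE_C_FLAGS="-O2 -g" .
  make && make install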

-----

yorhel 91 days ago | link

Please do use autotools! You only need two files: configure.ac and Makefile.am, both at the top-level of your project. The autogenerated stuff can be ignored, you don't have to learn M4 to use autoconf. And (if you are a bit careful, but that's not too hard) you'll have many nice features such as out-of-source builds, proper feature checking, amazing portability, 'make distcheck', and acceptable cross-compiling (still hard to get right, but the alternatives tend to be even harder). Don't switch to another build system purely based on the idea that it's "more elegant".
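
For a small library, those two files really can be minimal; a sketch (libfoo and the file names are made up):

  # configure.ac
  AC_INIT([libfoo], [1.0])
  AM_INIT_AUTOMAKE([foreign])
  AC_PROG_CC
  LT_INIT
  AC_CONFIG_FILES([Makefile])
  AC_OUTPUT

  # Makefile.am
  lib_LTLIBRARIES = libfoo.la
  libfoo_la_SOURCES = foo.c foo.h
  include_HEADERS = foo.h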

-----

ambrop7 91 days ago | link

  $ tar xf autotools-using-package.tar.bz2
  $ cd autotools-using-package
Configure, compile, install. Oh, I found this little bug. Let's try to fix it ... edits configure.ac ....

  $ ./autogen.sh
  Error: possibly undefined macro AC_BLABLABA
Spend some hours figuring this out... Oh, I need to install an old version of auto*! How do I get the old one but keep the new one around? Spend another 30 minutes to figure that out.

  $ ./autogen.sh
  checking for build system type...
  ^C
No damn, I wanted to generate configure, not run it! How do I clean up the mess it made just now?

  $ make clean
  $ make distclean
  $ ./configure --prefix=$HOME/my_app ...
  $ make -j9 install
  ...
  install: no such file or directory blabla.la
WTF!?!?! Spend an hour or so googling this mess. Ah, it's a parallel-make bug.

  $ make install
HOLY SHIT, IT INSTALLED!!!

Let's submit this fix upstream. No problem, use diff.

  $ mkdir temp
  $ tar xf autotools-using-package.tar.bz2 -C temp
  $ mv temp/autotools-using-package autotools-using-package.orig
  $ diff -urN autotools-using-package.orig autotools-using-package
WTF IS ALL THIS MESS IN THE DIFF I NEVER TOUCHED?!!??!

I know you're going to say I should be using the VCS checkout in the first place, which would hopefully be configured to ignore the autogenerated files. But as a user, or distribution maintainer, most of the time the bug you find is with a specific, packaged version of the software, and it may be quite an effort to figure out how to get the exact same version from the VCS server.

-----

JoshTriplett 91 days ago | link

> $ ./autogen.sh
> checking for build system type...
> ^C
>
> No damn, I wanted to generate configure, not run it! How do I clean up the mess it made just now?

I always run "./autogen.sh --help" for exactly that reason; then if I see --help output from configure, I know that autogen.sh "helpfully" ran configure for me.

You can also usually just run "autoreconf -v -f -i" directly, unless the package has done something unusual that it needs to take extra steps in autogen for.

-----

J_Darnley 90 days ago | link

This just about sums up my experience with autotools except for the fact that I quit in frustration after step 2.

-----

bjourne 91 days ago | link

Practically, you need an autogen.sh script too, because no one remembers the switches and the order in which aclocal, autoconf, and automake must be called to generate the configure script and Makefile.in files. That's enough for a minimal project. For something larger, with subdirectories for src/, docs/, tests/, etc., you need one Makefile.am in each directory, tied together with the SUBDIRS variable. And when you need to do something slightly out of the ordinary (in autotools' world, "ordinary" means compiling C, generating man pages, and installing), like generating doxygen or javadoc documentation, offering an uninstall option, or building binaries that are not supposed to be installed, then you have to learn M4 and its stupid syntax.
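
A typical autogen.sh spells that ordering out once so nobody has to remember it (a sketch; autoreconf -v -f -i, mentioned elsewhere in the thread, wraps the same steps):

  #!/bin/sh
  set -e
  libtoolize --copy --force       # only if the project uses libtool
  aclocal                         # gather m4 macros into aclocal.m4
  autoheader                      # configure.ac -> config.h.in
  automake --add-missing --copy   # Makefile.am -> Makefile.in
  autoconf                        # configure.ac -> configure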

Also, autotools' amazing features aren't all that. Builds outside the source dir? Eh, this is 2012 and that doesn't impress anymore. Both waf and SCons can do it, no problem. Autotools projects, on the other hand, seem to always put the object files and linked library in the source directory. Sometimes the built library is put in a hidden directory for some reason I don't understand; possibly it has something to do with libtool, another kludge on top of autotools one would rather do without. Since modern build tools don't pollute the source directories, you basically get make distcheck for free. Waf has it built in, for example, and it can easily be extended to also upload your tarballs to your distribution server, send announcement emails, and what have you.

-----

i386 91 days ago | link

If anyone could tell me a way to do what you describe without copying 'mystery meat' autofoo and m4 files into my project and checking them in, I would.

-----

AngryParsley 91 days ago | link

I don't want to toot my own horn, but I only check in Makefile.am and configure.ac. Here's a simple example I made a while back: https://github.com/ggreer/fsevents-tools. To build it, just run autogen.sh.

For a "real" project, https://github.com/ggreer/the_silver_searcher, I have the same set-up: Makefile.am and configure.ac with no generated code in the repo. It works fine as long as you have pkg-config, and most people do. It builds and runs on Solaris, OS X, FreeBSD, and any Linux distro you like.

Although I use autotools the "right way", I'm not a fan of it at all. There are multiple levels of generated files. configure is generated from configure.ac (formerly named configure.in). Makefile is generated from configure and Makefile.in. Makefile.in is generated from Makefile.am. There are other files in the mix as well: config.h, config.h.in, aclocal.m4, and various symlinks to autotools helpers (compile, depcomp, install-sh, and missing) all get generated. It's insane.

There are other problems. Minor versions of autotools can behave completely differently. AM_SILENT_RULES was added between automake 1.10 and 1.11, so instead of printing out minimal text during a build, scripts using that macro simply crash under the older version. Another example: 1.9 requires AM_PROG_CC_C_O to compile anything useful, but 1.10 doesn't. What's crazy is that 1.9 actually spits out an error message telling you to add AM_PROG_CC_C_O to your configure.ac. It makes no sense.

A system this complicated can't be pruned without breaking backwards compatibility. For autotools, that's not feasible. There are too many projects that would need to be fixed. The next-best solution is to use something else for new projects and let autotools fade gently into history.

-----

yorhel 91 days ago | link

Start with the Autotools Mythbuster: http://www.flameeyes.eu/autotools-mythbuster/ The autoconf & automake documentation is generally quite good as well.

-----

Nursie 91 days ago | link

Yuck, SCons....

I know some folks like it in all its Python-y glory, but I found it opaque and horrible. Finding where things are done and how to change its behaviour was surprisingly hard work. Now, this may have been at least partly due to the way the project was set up, but... well, just give me a nice Makefile any day.

-----

antirez 91 days ago | link

+1. Also, don't use the GPL, but BSD-like licenses...

-----

stefantalpalaru 91 days ago | link

If you don't want to choose between extreme approaches take a look at the Mozilla Public License (MPL): http://www.mozilla.org/MPL/2.0/FAQ.html

"The MPL's 'file-level' copyleft is designed to encourage contributors to share modifications they make to your code, while still allowing them to combine your code with code under other licenses (open or proprietary) with minimal restrictions."

-----

JoshTriplett 91 days ago | link

Bad idea to choose a GPL-incompatible license for a library, though. If you like the MPL, use it in a dual-license with the GPL or LGPL, like cairo does. Better yet, just use the LGPL or GPL.

-----

stefantalpalaru 91 days ago | link

MPL-2.0 is not (L)GPL-incompatible. See http://www.mozilla.org/MPL/2.0/FAQ.html#mpl-and-lgpl

With some software (Go code that is always statically linked by the official toolchain comes to mind) you are shooting yourself in the foot with GPL and even LGPL.

-----

ksherlock 91 days ago | link

LGPL code can be statically linked. However, you need to provide your other object files (so the LGPL part can be replaced and re-linked).

-----

stefantalpalaru 91 days ago | link

It's complicated. To be able to modify, recompile, and relink the LGPL part, you would have to use the exact version of the Go toolchain with which the commercial part was compiled. Not to mention having to come up with a custom build process that puts the object files in the distributed archive...

-----

LukeShu 91 days ago | link

OK, there are a lot of autoconf scripts that are poorly written and therefore don't get the benefits. I've never heard of waf. I've never had to fight with SCons, but then, I've never used it for my own projects.

But, for the love of all that is holy, do not use CMake. It works fantastically... until you have to fix something. I tell you this as a distro packager. I've had to fight with build systems. Patching autoconf files, automake files, and ant files is all fairly comfortable for me. I dread the days when I have to figure out an issue with CMake.

-----

pdw 91 days ago | link

As someone who's done a bit of Debian development, I find autotools packages a lot easier to deal with than the alternatives. It might be a bit crufty, but if used properly it works very well.

-----

jahewson 91 days ago | link

CMake is a nightmare on OS X, especially if you have Homebrew installed. If you want to go down this route, gyp is a much better choice, though still rather new.

-----

iambvk 91 days ago | link

I find autotools a thousand times easier than CMake. Autotools just works. I can always ask it to get the hell out of my way!

Doing complex dependencies in CMake is a pain, whereas with Automake I can just go back to plain make in the same file.
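
That escape hatch works because automake passes any rule it doesn't recognize straight through to the generated Makefile; a sketch with a hypothetical target (the recipe line must start with a literal tab):

  lib_LTLIBRARIES = libfoo.la
  libfoo_la_SOURCES = foo.c

  # hand-written make rule, copied into the output verbatim
  foo.c: foo.c.in
  	sed 's/%VERSION%/1.0/' $< > $@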

-----

snogglethorpe 90 days ago | link

Yeah, I concur. I've had miserable experiences with cmake. The autotools have a butt-ugly implementation, but generally work pretty well, and are fairly simple to use, within their domain.

That's the issue, really ... each of these build systems (including the autotools) seems to have a sweet spot where it works pretty well, but can be a misery outside it.

None of them (that I've encountered) are even close to being universally good. Pick your poison wisely...

-----

qznc 91 days ago | link

For a counterpoint you could read http://sta.li/faq

Most prominently, Plan 9 and friends are opposed to dynamically linked libraries.

-----

emillon 91 days ago | link

It's also a security and maintenance nightmare. I totally respect the suckless people for trying this experiment, though.

-----

acqq 91 days ago | link

I believe the OP means "the library" as something that gives you functionality you add to your project in whatever way it can be added, not as something you must dynamically link.

I also agree that there are use cases where static linking solves some problems.

-----

mhd 91 days ago | link

If by "most prominently" you mean "only", sure. (Although there might be a slight overlap with the DJB fanboy crowd)

It also doesn't really work out if you're not buying that philosophy as a whole.

-----

emillon 91 days ago | link

Nice set of tips. I'd like to put an emphasis on symbol versioning, SONAMEs, and symbol visibility. That's what allows distributions to upgrade your library without recompiling everything, and too few developers are aware of this (simple) mechanism.
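
For anyone who hasn't seen the moving parts, they look roughly like this (a sketch; libfoo, the header, and the version-node name are all made up):

  /* foo.h: mark the public API, build with -fvisibility=hidden */
  #define FOO_EXPORT __attribute__((visibility("default")))
  FOO_EXPORT int foo_frobnicate(int x);

  # libfoo.map: linker version script listing the public symbols
  FOO_1.0 {
      global: foo_frobnicate;
      local:  *;
  };

  # link with an SONAME and the version script
  gcc -shared -fvisibility=hidden \
      -Wl,-soname,libfoo.so.1 -Wl,--version-script=libfoo.map \
      -o libfoo.so.1.0.0 foo.o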

-----

radarsat1 91 days ago | link

> Avoid callbacks in your API

This is good advice, especially if you want to write bindings for other languages; including callbacks (as I've found) adds a lot of difficulty at that stage.

However, two main projects I work on use callbacks extensively. The reason is that they are essentially event-based, and it would seem much less user-friendly to force the user to implement a huge switch statement, particularly when user-defined events are involved.

How else could callbacks be avoided? In some cases, they just seem like the most user-friendly option.

Although they make bindings more difficult, callbacks can be great when binding to higher-level languages, where the user can specify what should happen using a short lambda function (in Python, for example). A switch statement or whatever is much more obnoxious in those cases, so what other solutions are there?
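
One callback-free pattern is the pull-style event queue (SDL's SDL_PollEvent works this way): the library buffers events and the caller drains them, so the switch at least lives in the caller's own loop. A sketch with hypothetical names:

  /* library side */
  typedef struct foo_ctx foo_ctx;                  /* opaque */
  typedef struct { int type; void *data; } foo_event;
  int foo_poll_event(foo_ctx *ctx, foo_event *ev); /* 1 = filled, 0 = none */

  /* caller side */
  void drain(foo_ctx *ctx) {
      foo_event ev;
      while (foo_poll_event(ctx, &ev)) {
          switch (ev.type) {
          case 1: /* FOO_EV_DATA: handle payload */ break;
          case 2: /* FOO_EV_CLOSED: tear down */    break;
          }
      }
  }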

-----

justincormack 91 days ago | link

Interfaces like epoll let you pass in, and get back, a 64-bit value, which can be the address of a callback or a key to look up in a dispatch table. The user can choose. This is very flexible.
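
Concretely, epoll's data field is a union that can hold a pointer, so one common scheme is to point it at a little record pairing the fd with its handler (a sketch; error handling omitted):

  #include <sys/epoll.h>

  typedef void (*io_handler)(int fd, unsigned events, void *user);
  struct watcher { int fd; io_handler cb; void *user; };

  static void watch(int epfd, struct watcher *w, unsigned events) {
      struct epoll_event ev = { .events = events, .data.ptr = w };
      epoll_ctl(epfd, EPOLL_CTL_ADD, w->fd, &ev);
  }

  static void dispatch_once(int epfd) {
      struct epoll_event evs[64];
      int n = epoll_wait(epfd, evs, 64, -1);
      for (int i = 0; i < n; i++) {
          struct watcher *w = evs[i].data.ptr;
          w->cb(w->fd, evs[i].events, w->user);  /* or index a table instead */
      }
  }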

-----

ambrop7 90 days ago | link

A very nice feature of epoll is that an epoll file descriptor (which monitors a set of file descriptors) can itself be monitored via epoll/select/poll. This means that if epoll is available, a library can abstract all its I/O via a single file descriptor. Think of a complex operation that involves multiple sockets and timer events. All the user needs to do is monitor this one file descriptor in his event loop, and call a function in the library which determines which of its own file descriptors have become ready, and reacts appropriately, possibly by calling callbacks if the user needs to be notified.
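
In code, the trick is simply that an epoll fd is itself pollable (Linux-specific; a sketch, with lib_process() standing in for the library's "check my fds" entry point):

  #include <sys/epoll.h>

  /* library: one fd fronting all of its internal descriptors */
  int lib_make_pollfd(void) {
      int inner = epoll_create1(EPOLL_CLOEXEC);
      /* epoll_ctl(inner, EPOLL_CTL_ADD, <socket/timerfd/...>, ...) */
      return inner;  /* the application watches only this one fd */
  }

  /* application: register the library's fd in its own loop */
  void app_register(int app_epfd, int lib_fd) {
      struct epoll_event ev = { .events = EPOLLIN, .data.fd = lib_fd };
      epoll_ctl(app_epfd, EPOLL_CTL_ADD, lib_fd, &ev);
      /* when lib_fd reports EPOLLIN, call the library's lib_process(),
         which epoll_wait()s on the inner fd with a zero timeout */
  }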

Unfortunately, if you're trying to be portable, you can't do this, and instead have to implement a complete event notification abstraction (unless you know you will only ever be dealing with this one file descriptor). E.g. the user of the library needs to implement functions like addWait(fd, io_type, callback), delWait(wait_id), addTimeout(milliseconds, callback), delTimeout(timeout_id).
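
That portable fallback amounts to the library publishing a set of hooks for the host's event loop to implement; the signatures below are my guess from the names given:

  /* hooks the application implements and hands to the library */
  typedef void (*lib_cb)(void *user);

  struct lib_event_hooks {
      void *(*addWait)(int fd, int io_type, lib_cb cb, void *user);
      void  (*delWait)(void *wait_id);
      void *(*addTimeout)(int milliseconds, lib_cb cb, void *user);
      void  (*delTimeout)(void *timeout_id);
  };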

-----

jfaucett 91 days ago | link

"Don't write your own LISP interpreter and do not include it in your library. :)" - lol

-----

VMG 91 days ago | link

I think this is a reference to this little gem: http://svn.efixo.net/decodeur/media-libs/alsa-lib-1.0.14rc1/...

One of the authors of the linked piece is Lennart Poettering, creator of PulseAudio

-----

wrl 91 days ago | link

Especially with the authorship in consideration, I'd like to counter with "Don't embed an HTTP server or QR code encoder/decoder in your syslogd."

Glass houses, after all. ;)

-----

aerique 91 days ago | link

No, just use ECL: http://www.cliki.net/ecl :-)

-----

chj 91 days ago | link

I don't get it..

-----

irahul 91 days ago | link

http://en.wikipedia.org/wiki/Greenspuns_tenth_rule

Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

-----

qznc 91 days ago | link

As we are in GNU-land here, you should use the official GNU extension language: Guile (Scheme)

http://www.gnu.org/software/guile/

-----

justincormack 91 days ago | link

No one seems to live in that part of GNUland any more.

-----

noahl 91 days ago | link

A few of us do, but not many.

I work on Guile because I enjoy messing with compilers. I absolutely agree that very few projects use it (exceptions include Lilypond and GNUCash). However, that might change - Guile 2.0 switched from a simple interpreter to a virtual machine and a compiler to that VM. That means that Guile is competitive in speed with other scripting languages (and actually faster than some of them, I believe). It also means that it now supports multiple high-level languages. There is currently a good Emacs Lisp implementation and about half of an ECMAScript implementation.

I don't know if it'll become more widely used, but on the mailing lists you certainly see people writing libraries and contributing code, so I think there is a real chance of it. My sense is that Guile is now coming out of a period of stagnation. I don't know where it's going.

-----

fafner 90 days ago | link

Isn't GNU Make adding support for Guile scripting?

edit: http://lists.gnu.org/archive/html/guile-user/2012-01/msg0006...

-----

noahl 90 days ago | link

You appear to be right, but I don't know anything about it.

That's exciting, though - it would be really cool if Makefiles could do more computation. Maybe then we'd get an easier-to-use autoconfiguration system!

-----

fafner 89 days ago | link

On one hand I'm a bit afraid that this might add more complexity to an already complex tool. But on the other hand the current tools in Make are very ugly to use. So Guile could certainly be an improvement.

And an easier-to-use autotools? Bring it on!

-----

andrewcooke 91 days ago | link

http://udrepper.livejournal.com/20407.html explains the O_CLOEXEC issue.
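
The gist of the issue: setting close-on-exec in two steps leaves a window in which another thread's fork()+exec() inherits the descriptor, while O_CLOEXEC is atomic. A sketch (the file name is arbitrary):

  #include <fcntl.h>

  void demo(void) {
      /* racy in threaded programs: a fork()+exec() elsewhere can land
         between these two calls and leak the descriptor */
      int racy = open("state.db", O_RDONLY);
      fcntl(racy, F_SETFD, FD_CLOEXEC);

      /* atomic: close-on-exec from the instant the fd exists */
      int safe = open("state.db", O_RDONLY | O_CLOEXEC);
  }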

-----

ambrop7 91 days ago | link

The solution I went with in my software is to close all unneeded file descriptors right after fork() in the child [1].

It can be argued that this is not the best solution because every place where fork() is called needs to be patched, and this could be in libraries. But the same applies to O_CLOEXEC flags; every place where file descriptors are created needs to be patched. Further, there are probably many more places where fd's are created than where fork() is called.

So if you want your library to be super careful, you should do both. Yes, I know the article advises against calling fork() from libraries, but sometimes you really need it. It's not bad per se, just bad on *nix because of the broken design of the OS interfaces.

[1] http://code.google.com/p/badvpn/source/browse/trunk/system/B...
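
The approach in [1] boils down to a loop like this, run in the child between fork() and exec() (a simplified sketch, not the actual BadVPN source):

  #include <unistd.h>

  /* close everything except stdin/stdout/stderr before exec'ing */
  static void close_unneeded_fds(void) {
      long maxfd = sysconf(_SC_OPEN_MAX);
      for (long fd = 3; fd < maxfd; fd++)
          close((int)fd);
  }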

-----

alexlarsson 91 days ago | link

Yeah, and then some other thread opens a file descriptor while your loop is busy closing the last few file descriptors...

CLOEXEC approaches are the only race-free solutions.

-----

premchai21 91 days ago | link

fork() only leaves the one thread running in the child, and at that point the fd tables are no longer shared, so trying to detect and close unwanted descriptors in the child after fork is not racy by itself as a way of mitigating the possibility of uncontrollable non-CLOEXEC opens elsewhere in the process (though this doesn't preclude it being a bad idea for other reasons).

-----

alexlarsson 91 days ago | link

true, sorry.

-----

meaty 91 days ago | link

I came here to have a rant about autotools being mentioned and found a lot of people doing the same.

That's a sign something needs to be taken out in the yard and shot, if there ever was one.

-----

dlitz 90 days ago | link

No, seriously, please just use autotools. It's way better for users and packagers of your library.

If you're not going to do that, then use an alternative that provides the same command-line interface as autotools (so that things like "./configure --prefix=FOO && make -j4 && make install DESTDIR=BAR" still work). As far as I know, there's no such thing as of yet.

-----

olalonde 91 days ago | link

> - Make your library threads-aware, but not thread-safe!

Can anyone explain this piece of advice?

-----

nn2 91 days ago | link

Don't do your own locking. Let the caller pass in state. Push locking to the caller. Don't have your own global state that would need hidden locks, but instead let the caller handle it with arguments.

That is similar to how the STL does it, but not like stdio.

I'm not sure I fully agree:

- For simple libraries it's likely good advice.

- But it encourages big locks and poor scaling. It may be right for desktop apps, but not necessarily for server code that needs to scale. For some things that's fine, but you don't want it for the big tree or hash table your multi-threaded server is built around.

- It avoids the problem of locks being non-composable, i.e. that the caller may need to know the order in which locks must be taken to avoid deadlock. Actually, it doesn't avoid that; it just pushes it to someone else. However, if you make sure the library is always the leaf and never calls back, the library's locks will generally sit at the bottom of the lock hierarchy.
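
A sketch of what "let the caller pass in state" means for an API (hypothetical names): there are no hidden globals, so the locking policy, and its granularity, belongs entirely to the application:

  #include <pthread.h>

  /* library: all state lives in a caller-owned, opaque context */
  typedef struct foo_ctx foo_ctx;
  foo_ctx *foo_new(void);
  int      foo_insert(foo_ctx *c, const char *key, int value);
  void     foo_free(foo_ctx *c);

  /* caller: one context per thread needs no locks at all; a shared
     context gets whatever lock the application chooses */
  static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;

  void app_insert(foo_ctx *shared, const char *k, int v) {
      pthread_mutex_lock(&ctx_lock);
      foo_insert(shared, k, v);
      pthread_mutex_unlock(&ctx_lock);
  }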

-----

KMag 85 days ago | link

If this advice is taken the wrong way, then it "just pushes [the locking problem] to someone else", but often locking is a crutch. Sure, there are some programs that have a natural need for a lot of globally mutable state, but not many.

Let's be honest. Most multithreaded programs evolve from programs that are more or less single-threaded. Then threads are added in an attempt to improve performance, and high-contention locks are broken into finer-grained locks when profiling shows lock contention in the critical path. I would argue it's better to design for minimal mutable global state from the start. Failing that, it's often better to refactor the code when you start scaling up the number of threads, before you invest a lot of time in locking and in breaking your big locks down into finer and finer grained ones.

I'm sure you're not one of those programmers who often leans on mutexes/semaphores/etc. as a crutch to prop up poor design, but there are a lot of programmers who do.

-----

sigjuice 91 days ago | link

https://twitter.com/timmartin2/status/23365017839599616

-----




