
How are binaries painful? 1/n

I don't use a compiled language professionally. I've sometimes looked at raw lists of Python packages and am vaguely familiar with manylinux, cffi, and cmake. That said, it sounds like there are a few ways in which having to build binaries is painful:

  1. They're specific to an architecture-platform pair. You have to compile at least one binary per CPU instruction set/architecture per platform you want to support: x86-64 Windows, x86-64 Linux, and x86-64 Mac, as well as ARM64 for, say, Mac, Windows, Android, and iOS. That could be a lot of binaries.
  2. On Linux at least, compiled binaries also depend on a specific version of libc, so you may need to compile different binaries for different libc versions. Each architecture then explodes on Linux into \(N\) different binaries, one for each libc version you want to support; you can't just compile one x86-64 binary for Linux. At least, that's what I understand from the current state of the manylinux project, where wheels are tagged by the libc version they require (see the sketches after this list).
  3. Static linking means wasting memory, but dynamic linking means you have to make sure the binary gets linked against the right thing, and that version might conflict with the one other binaries need to link against.
  4. Binaries may need to be signed in a platform-dependent way.
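
To make the combinatorics in points 1 and 2 concrete, here's a rough sketch of the wheel matrix a single release might need. The package name, version, and the particular macOS/manylinux baselines are made up for illustration; the tag spellings follow the wheel platform-tag convention as I understand it:

```python
# Rough sketch of the build matrix implied by points 1 and 2: one wheel per
# architecture/platform pair, with Linux further multiplied by the glibc
# baseline encoded in the manylinux tag. "example_pkg" and the specific
# baselines are hypothetical.
platform_tags = [
    "win_amd64", "win_arm64",
    "macosx_10_9_x86_64", "macosx_11_0_arm64",
    # Linux: one tag per supported glibc baseline, per architecture.
    "manylinux_2_17_x86_64", "manylinux_2_28_x86_64",
    "manylinux_2_17_aarch64", "manylinux_2_28_aarch64",
]

for tag in platform_tags:
    # Wheel filename: {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl
    print(f"example_pkg-1.0-cp312-cp312-{tag}.whl")
```

That's already eight artifacts for two architectures across three operating systems, with Linux doubled for two glibc baselines, and it would multiply again per CPython version unless the extension sticks to the stable ABI.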
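
And for point 2 specifically, you can ask libc itself which glibc a given Linux machine provides, which is the floor a manylinux tag promises compatibility with. A minimal sketch, assuming a glibc-based Linux (it will fail on musl, macOS, or Windows):

```python
import ctypes

# Load the C library that compiled extensions on this machine dynamically
# link against (glibc-specific; raises OSError on musl, macOS, or Windows).
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p

# Prints e.g. "2.31" -- a wheel tagged manylinux_2_34 would not install here.
print(libc.gnu_get_libc_version().decode())
```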

This list seems too short. Surely compiling binaries is more painful than that. What's missing?