Unfortunately, just like with parser generators and more powerful parsing algorithms (in particular bottom-up LR and the like), practice proved very different from theory. Linear scan and its variants have become the norm for register allocation.
(There was much more talk recently of GCC's LRA than IRA, because completing the reload-to-LRA transition threatened the removal of some targets that still lacked LRA support.)
I've had a lot of success using chordal graph allocators. They provide plenty of extra dimensions of 'relaxation' to tune them, they're incremental (so they allow pinning), and they decay nicely when their constraints are violated. Because of their incremental nature and "niceness" of decay, they can be forced into a nice hierarchical form ("middle out" on the loops). The main driving algorithm (maximum cardinality search) is a little harebrained, but if you just relax and write the code, you'll find it's surprisingly short, robust, and highly amenable to unit testing.
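To give a feel for how short it really is, here's a minimal sketch of maximum cardinality search followed by greedy coloring (the function names and graph are mine, purely for illustration, not from any real allocator):

```python
def mcs_order(graph):
    """Return vertices in a maximum-cardinality-search order.

    Repeatedly pick the vertex with the most already-visited neighbors.
    On a chordal graph, greedy coloring along this order is optimal,
    because the reverse of the order is a perfect elimination ordering.
    """
    weight = {v: 0 for v in graph}
    order = []
    unvisited = set(graph)
    while unvisited:
        v = max(unvisited, key=lambda u: weight[u])
        order.append(v)
        unvisited.remove(v)
        for n in graph[v]:
            if n in unvisited:
                weight[n] += 1
    return order

def greedy_color(graph, order):
    """Assign each vertex the lowest color unused by colored neighbors."""
    color = {}
    for v in order:
        used = {color[n] for n in graph[v] if n in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Toy interference graph: a-b-c form a triangle (forces 3 registers),
# d interferes only with a. The graph is chordal, so MCS + greedy
# coloring uses exactly max-clique-size colors.
g = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
coloring = greedy_color(g, mcs_order(g))
```

The tie-breaking in `max` is arbitrary, which is fine: any valid MCS order works, and that slack is part of what gives you the tuning dimensions mentioned above.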
Spilling/filling is a bit exciting, since chordal coloring doesn't provide a lot of direction, but I've found that pressure heuristics fill the gap nicely. The whole thing relies on having a robust interference graph — which more than kind of sucks — but we don't get into compilers unless we've weaponized our bit-set data structures in the first place.
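By way of illustration, the "weaponized bit-set" interference graph can be as simple as one bit-row per virtual register; this toy version uses Python ints as bit-sets (the class and method names are made up for the sketch):

```python
class InterferenceGraph:
    """Toy interference graph: rows[i] has bit j set iff i interferes with j."""

    def __init__(self, n):
        self.n = n
        self.rows = [0] * n

    def add_edge(self, i, j):
        self.rows[i] |= 1 << j
        self.rows[j] |= 1 << i

    def interferes(self, i, j):
        return bool(self.rows[i] >> j & 1)

    def neighbors(self, i):
        """Iterate set bits of row i, lowest first."""
        row = self.rows[i]
        while row:
            low = row & -row          # isolate the lowest set bit
            yield low.bit_length() - 1
            row ^= low                # clear it

# Virtual registers 0 and 1 are live at the same time, as are 0 and 2.
g = InterferenceGraph(4)
g.add_edge(0, 1)
g.add_edge(0, 2)
```

In a real allocator you'd use fixed-width machine words rather than bignums, but the operations (or, test, iterate-set-bits) are the same.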
Is this true though? Last time I worked on a compiler (admittedly quite a few years ago), Briggs was the bare minimum; our compiler in particular used an improvement over Callahan's hierarchical register allocation (the basic idea being that you should prioritize allocation in innermost loops over a better "global" graph coloring, since spilling once in an inner loop costs far more than spilling several registers in the linear/setup part of the code).
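The intuition behind that prioritization is easy to see in a spill-cost formula that weights each use by loop depth; this is a generic sketch of the idea (the weighting of 10 per nesting level is a common textbook convention, not Callahan's actual formula):

```python
def spill_cost(uses):
    """Estimate the cost of spilling one live range.

    uses: list of (count, loop_depth) pairs, one per block where the
    range is used or defined. Each use is weighted by 10 ** depth, so
    a range hot in an inner loop is the last candidate for spilling.
    """
    return sum(count * 10 ** depth for count, depth in uses)

# A loop index used twice at nesting depth 2 beats a temporary
# used five times in straight-line setup code (depth 0).
inner_loop_var = spill_cost([(2, 2)])   # 2 * 100 = 200
setup_var = spill_cost([(5, 0)])        # 5 * 1 = 5
```

Allocating (or equivalently, refusing to spill) in decreasing order of this cost is what makes the hierarchical scheme pay off even when the global coloring is worse.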
I would expect that only compilers for immature languages (that don't care about optimization) use naive RA.
At least GCC appears to use a graph coloring algorithm. LLVM seems to have moved from linear scan to a custom algorithm in version 3; I have no idea what they're using nowadays.
LLVM now uses something they call the "Greedy Register Allocator". As far as I can tell, it's a variation on the linear-scan allocator with extra heuristics. Here's a presentation: https://www.youtube.com/watch?v=hf8kD-eAaxg
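For reference, the classic linear-scan baseline that greedy grew out of fits in a screenful; this is an illustrative toy in the style of Poletto & Sarkar, not LLVM's code (greedy adds live-range splitting, eviction, and allocation-order heuristics on top):

```python
def linear_scan(intervals, num_regs):
    """Allocate registers over live intervals in one pass.

    intervals: list of (start, end, name) live intervals.
    Returns {name: register_index or "spill"}.
    """
    intervals = sorted(intervals)            # by start point
    active = []                              # (end, name) of live intervals
    free = list(range(num_regs))
    assignment = {}
    for start, end, name in intervals:
        # Expire intervals that ended before this one starts.
        for aend, aname in list(active):
            if aend <= start:
                active.remove((aend, aname))
                free.append(assignment[aname])
        if free:
            assignment[name] = free.pop()
            active.append((end, name))
        else:
            # No register free: spill whichever live interval ends last.
            active.sort()
            send, sname = active[-1]
            if send > end:
                assignment[name] = assignment[sname]  # steal its register
                assignment[sname] = "spill"
                active[-1] = (end, name)
            else:
                assignment[name] = "spill"
    return assignment
```

With two registers and intervals `a=[0,10)`, `b=[1,4)`, `c=[2,6)`, the allocator spills `a` (it ends last) so `b` and `c` each keep a register, which is exactly the "spill the longest-lived range" heuristic from the paper.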