I have met many people who are generally bad at comparing constant factors and even worse at understanding what computers are actually fast at.
At school I had a friend implement a Fibonacci heap (which is O(1) or amortized O(1) in every step) for a simple task, and he didn't bother to benchmark it against a simple vector-based binary heap.
It was slower by an order of magnitude. Only twice have I had to use something other than a simple binary heap, both times due to cache issues.
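For reference, the vector-based baseline takes minutes to set up and benchmark: Python's heapq on a plain list is exactly such a binary heap (the input size and timing harness here are mine, just a sketch of the kind of test that was skipped):

```python
import heapq
import random
import time

def push_pop_all(items):
    """Push everything onto a plain-list binary heap, then pop it all back.

    heapq on a Python list is the "simple vector-based binary heap":
    contiguous storage, O(log n) push/pop, very small constant factors.
    """
    h = []
    for x in items:
        heapq.heappush(h, x)
    return [heapq.heappop(h) for _ in range(len(h))]

items = [random.random() for _ in range(100_000)]
t0 = time.perf_counter()
out = push_pop_all(items)
elapsed = time.perf_counter() - t0
print(f"100k push + 100k pop via heapq: {elapsed:.3f}s")
```

Running the same harness against the fancier heap would have shown the order-of-magnitude gap immediately.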
Small correction: a Fibonacci heap has O(log n) amortised delete-min; otherwise you'd break the Ω(n log n) lower bound on comparison-based sorting (roughly: insert all n elements, then delete the minimum n times, et voilà, you've sorted the input using only pairwise comparisons, so insert or delete-min, or both, must take amortised logarithmic time).
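That reduction, spelled out with a stock binary heap standing in for any comparison-based heap:

```python
import heapq
import random

def heap_sort(xs):
    # Insert everything, then delete-min n times. If both operations were
    # amortized O(1), this comparison sort would run in O(n) total time,
    # contradicting the Omega(n log n) lower bound. So insert or delete-min
    # (or both) must be amortized Omega(log n) in any comparison-based heap.
    h = []
    for x in xs:
        heapq.heappush(h, x)
    return [heapq.heappop(h) for _ in range(len(h))]

data = [random.randint(0, 1000) for _ in range(500)]
assert heap_sort(data) == sorted(data)
```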
But yeah, they're one of the classical examples of "faster on paper, slower in practice" algorithms. Just use a binary (or d-ary) heap.
I think of Fibonacci heaps as essentially an academic exercise to lower the big-O complexity of Dijkstra's algorithm, with very little practical use. They always seemed tailor-made for that.
Like, Fibonacci heaps guarantee an amortized O(1) decrease-key operation, which is a stupendously obscure priority queue operation. Except, of course, in Dijkstra's algorithm, where the best known asymptotic bound depends on it.
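And in practice even Dijkstra usually dodges decrease-key entirely: with a binary heap you just push a duplicate entry and skip stale ones when they surface. A minimal sketch (the toy graph and names are illustrative, not from the thread):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra with a binary heap and lazy deletion instead of decrease-key.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Rather than decreasing a key in place, we push a fresh (dist, node)
    entry and ignore outdated ones when they reach the top of the heap.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry: a shorter path to u was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

The heap can grow to O(E) entries this way, so it's asymptotically worse than the Fibonacci-heap version, yet it's almost always faster in practice for the usual constant-factor and cache reasons.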