We know the desired distribution of random cubic real polynomials (say, a product of uniform distributions over the coefficients). I believe we can work out the distribution of the subset whose roots are all real, and hence the fraction of random real cubics with only real roots. The distribution of polynomials with roots of type a, b, and conj(b) then comes from the (weighted) difference of these two, and leads to the joint distribution of (a, b) (since conj(b) is fixed given b).
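As a sanity check on this construction, here is a quick Monte Carlo sketch. It assumes iid Uniform[-1,1] coefficients (the comment above doesn't pin down which product of uniforms, so this is one choice), takes "largest" to mean largest modulus, and uses a helper name `sample_stats` of my own. It splits random cubics into the all-real-root subset and the a, b, conj(b) subset, and estimates how often the largest root is real:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_stats(n_trials=20_000, tol=1e-9):
    """Monte Carlo over cubics c0*x^3 + c1*x^2 + c2*x + c3 with iid
    Uniform[-1,1] coefficients (an assumed coefficient distribution)."""
    n_all_real = 0        # all three roots real
    n_largest_real = 0    # largest-modulus root is real
    for _ in range(n_trials):
        c = rng.uniform(-1.0, 1.0, size=4)
        r = np.roots(c)                      # the three roots of the cubic
        order = np.argsort(np.abs(r.imag))   # the real root has smallest |Im|
        if np.abs(r[order[-1]].imag) < tol:
            n_all_real += 1
            n_largest_real += 1              # all real -> largest trivially real
        else:
            a = r[order[0]].real             # the single real root
            b = r[order[-1]]                 # one of the conjugate pair
            if abs(a) > abs(b):
                n_largest_real += 1
    return n_all_real / n_trials, n_largest_real / n_trials

p_all, p_big = sample_stats()
print(f"P(all roots real) ~ {p_all:.3f}, "
      f"P(largest-modulus root real) ~ {p_big:.3f}")
```

The first estimate is the "fraction of real polynomials" weight in the mixture above; the second directly tests the intuition that the largest root tends to be real.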
I don’t claim that a full proof won’t be messy in the details; I just mean that this kind of construction leaves me unsurprised that the largest root is more likely to be real.
It feels like this problem must have been solved previously, probably in some physics context (maybe in areas related to applications of random matrix theory or dynamical systems).
It is possible that this second recursion doesn’t hold at N=3 (because the fraction of all-real polynomials already covers enough of the density) and only kicks in at higher degrees (it eventually becomes easier, because the degree N-2 polynomial has a higher chance of having a root larger than the new complex b). That certainly makes it more complicated to prove, but it still seems intuitively correct to me.
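To probe whether the effect only kicks in at higher degrees, the same experiment can be swept over N. As before, this assumes iid Uniform[-1,1] coefficients and reads "largest" as largest modulus; `p_largest_real` is a hypothetical helper of mine, not anything from the discussion above:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_largest_real(deg, n_trials=5_000, tol=1e-8):
    """Estimated probability that the largest-modulus root of a random
    degree-`deg` polynomial (iid Uniform[-1,1] coefficients, an assumed
    model) is real."""
    hits = 0
    for _ in range(n_trials):
        c = rng.uniform(-1.0, 1.0, size=deg + 1)
        r = np.roots(c)
        top = r[np.argmax(np.abs(r))]         # largest-modulus root
        # relative tolerance, since near-zero leading coefficients
        # can produce very large roots
        if abs(top.imag) < tol * max(1.0, abs(top)):
            hits += 1
    return hits / n_trials

for n in range(3, 8):
    print(f"N={n}: P(largest root real) ~ {p_largest_real(n):.3f}")
```

Comparing the estimates across degrees is a cheap way to see whether the trend at N=3 differs from the higher-degree behavior the recursion argument predicts.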