I'm not convinced we've even defined the problem space well enough to solve it. Like, what is the concrete measure (something to target) for intelligence? If we develop general intelligence, is it going to be human, dog, or fish?
Shane Legg (co-founder DeepMind) and Marcus Hutter (Schmidhuber pedigree) defined machine intelligence in this canonical paper from 2007: https://arxiv.org/abs/0712.3329
> Universal Intelligence: A Definition of Machine Intelligence
> A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
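For what it's worth, the central definition they arrive at (paraphrasing from memory, so check the paper for the exact notation) is the agent's expected reward across all computable environments, weighted by each environment's simplicity:

```latex
% Legg-Hutter "universal intelligence" of an agent \pi:
% sum over all reward-summable computable environments \mu in E
% of the agent's expected total reward V^\pi_\mu in that environment,
% weighted by 2^{-K(\mu)}, where K is Kolmogorov complexity
% (simpler environments count more).
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Intuitively: an agent is intelligent to the extent that it performs well across a wide range of environments, with simple environments weighted most heavily. Since Kolmogorov complexity is uncomputable, this is a target definition rather than a practical benchmark.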
General intelligence is usually meant in relation to humans, but you are correct in noting that it is a spectrum, not a binary.
I think this is the real answer. When we developed flight, the measure wasn't "can we fly like birds?" We still can't fly like birds even today, but we fly in previously unimagined, yet equally powerful, ways.
We seem to be looking at intelligence in humans and thinking we need to develop that, without first defining what intelligence actually is. We don't exist in isolation, and it's likely that the components of intelligence exist to varying degrees in other organisms. In the same way that birds, bats, gliders and insects all have wings that generate lift, what are the things that we have in common with other animals?
It seems like the difference between humans and dogs is substantially smaller than the difference between computers and dogs, so if we figure out dog-level intelligence, human-level intelligence is right around the corner. Also, the intelligence is likely to be of a different kind. Someone made an interesting point that training an ML system on a million pictures isn't like sending a million interns to each look at one picture; it's like sending one intern to look at a million pictures. When you do that, you can derive insights that are significantly different from what you'd get looking at 1 picture, or 10, or 100.