Where has the WSJ been for the last 30 years? "Soft" AI has been broadly commercially successful since the 1980s with the advent of Prolog and rule engines, and on the machine learning side there have been major commercial successes since the late '90s with Bayesian methods, Feed Forward Neural Nets, Random Forests, SVMs, and most recently with multi-layer Neural Nets.
AI research may not have reached the goal of Hard AI or AGI, but it has most definitely paid for itself several times over by now.
Amazon's (and later Netflix's) recommender systems, which most people run into all the time, can also be considered Soft AI. I guess you can even view Google's PageRank as a form of Soft AI, since it is just a useful result from the analysis of big data.
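To make the "just analysis of big data" point concrete: the core of PageRank is nothing more than power iteration over the link graph. Here's a toy sketch on a made-up four-page graph (my own illustration, not Google's actual implementation):

```python
# Hypothetical link graph: page -> pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
n = len(pages)
rank = {p: 1.0 / n for p in pages}  # start with a uniform rank
d = 0.85  # conventional damping factor

for _ in range(50):  # power iteration until ranks settle
    new = {p: (1 - d) / n for p in pages}
    for p, outs in links.items():
        # Each page splits its rank evenly among the pages it links to.
        share = d * rank[p] / len(outs)
        for q in outs:
            new[q] += share
    rank = new

# "c" is linked to by three of the four pages, so it ends up ranked highest.
best = max(rank, key=rank.get)
```

No agents, no reasoning, no "intelligence" anywhere in sight, which is exactly why nobody calls it AI anymore.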
Once we count Soft AI, there are tons and tons of examples of useful results. But what is the definition of Soft AI, and where does it stop being AI and become just number crunching? I guess that is a problem with defining AI in general.
The recurring problem with defining AI is referred to as the AI effect. Basically, the definition of AI constantly shifts as a technique that originated in AI research becomes more pervasive and well known. People stop thinking of the technique as AI because now it is just math or statistics or computer science or whatever. The AI effect basically states that nothing will ever be AI, because as soon as we understand a technique well enough to use it, nobody associates it with AI anymore.
AI researchers have been extremely successful at identifying and formalizing specific aspects of intelligence. What they haven't found is the "silver bullet" governing all forms of intelligence. For example, intelligence isn't just symbolic logic, as AI researchers of the 1950s reasoned. It isn't just relational logic or optimization, as those of the 1970s and 1980s reasoned. It isn't just statistical reasoning, as those of the '90s and '00s reasoned. And it isn't just the ability to process massive amounts of knowledge (data), as many reason today. A significant advance in any specific area of AI makes people think they have found a victory of Neats over Scruffies [2] (for example, the currently pervasive view that Deep Learning is the silver bullet that can actually get us all the way to AGI), but then an AI Winter forms [3], and all of a sudden the Scruffies were right all along.
My opinion is that the Scruffies have always been right. The only valid Neat view is that true intelligence is really just mathematics, but mathematics is the superset of all algorithms ever. If you want some real philosophy on the subject from one of the most intelligent men who ever lived, read up on Marvin Minsky.
This effect happens both in general with technology and individually if you study "AI". At first AI is imbued with magic, perhaps not unlike how we imagine CPUs work. After studying, you start seeing things like: oh, it's a tree search plus a heuristic function; oh, it's simulated annealing; oh, it's a bunch of neuron weights.
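Simulated annealing is a good example of how the magic evaporates: it's just a random walk that always accepts improvements and occasionally accepts worse moves, with the tolerance for worse moves shrinking as a "temperature" cools. A toy sketch minimizing a made-up one-dimensional objective (my own illustration):

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def f(x):
    return (x - 3) ** 2  # toy objective; minimum at x = 3

x = 0.0
best = x
temp = 1.0
for step in range(5000):
    candidate = x + random.gauss(0, 0.5)  # random local move
    delta = f(candidate) - f(x)
    # Always accept improvements; accept worse moves with probability
    # exp(-delta / temp), which shrinks as the temperature cools.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    if f(x) < f(best):
        best = x
    temp *= 0.999  # geometric cooling schedule
```

After a run, `best` sits close to the true minimum at 3. Once you've seen this, it's hard to keep calling it AI rather than randomized optimization.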
Maybe people have a shifting view of what AI is. But I think it's more that there is too much focus on the end result. There are many, many things that are hard for a human to do, and might require AI to do the human way, but can be bypassed with a far, far simpler mechanism.
Let's take chess as an example. The way a human plays chess is very difficult. But a computer can bypass all that and beat most people with a simple "check all possible moves to depth N and pick the one that leaves you with the most pieces".
The only remotely good threshold is the Turing test. For Soft AI there are no good tests, and the debate turns into a quagmire.
And gluing a vocoder on something sure as hell doesn't make it AI. People are too anthropocentric.
As far as I can tell, only AI researchers consider any of these techniques to be "AI" in the first place. What laypeople think of AI is specifically intelligence in the sense of autonomous, goal-driven agents (that is, "Intelligent A-Life"); anything else is just algorithmic optimizations of regular math problems that have manual/heuristic "good enough" solutions already.
People call things like IBM's Watson AI because it seems A-Life-y; they don't realize that it doesn't keep state in a way where they could have a decent conversation with it.
Recent trends seem to be that the AI Effect is happening in reverse. Processes that were considered normal computational, mathematical, and statistical functions are being described as AI.