‘Soft’ Artificial Intelligence Is Suddenly Everywhere (wsj.com)
124 points by jonbaer on Jan 17, 2015 | hide | past | favorite | 56 comments


Regarding weak vs. strong AI:

"Alan M. Turing thought about criteria to settle the question of whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim." -E.W. Dijkstra [0]

[0] http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EW...


Where has the WSJ been for the last 30 years? "Soft" AI has been broadly commercially successful since the 1980s with the advent of Prolog and rule engines, and on the machine learning side there have been major commercial successes since the late '90s with Bayesian methods, Feed-Forward Neural Nets, Random Forests, SVMs, and most recently with multi-layer Neural Nets.

AI research may not have reached the goal of Hard AI or AGI, but it has most definitely paid for itself several times over by now.


Amazon's (and later Netflix's) recommender systems, which most people run into all the time, can then be considered Soft AI. I guess you can even view Google's PageRank system as a form of Soft AI, as it is just a useful result from the analysis of big data.

Once we admit Soft AI, there are tons and tons of examples of useful results. But what is the definition of Soft AI, and where does it stop being AI and become just number crunching? I guess that is a problem with defining AI in general.


The recurring problem with the definition of AI is referred to as the AI effect [1]. Basically it refers to the constantly shifting definition of AI: as knowledge of a technique that originated in AI research becomes more pervasive and well known, people stop thinking of the technique as AI, because now it is just math or statistics or computer science or whatever. The AI effect basically states that nothing will ever be AI, because as soon as we understand something well enough to use it, nobody associates it with AI anymore.

AI researchers have been extremely successful at finding and proving out specific areas of intelligence. But what they haven't found is the "silver bullet" governing all forms of intelligence. For example, intelligence isn't just symbolic logic, as AI researchers of the 1950s reasoned. It isn't just relational logic or optimization, as those of the 1970s and 1980s reasoned. It isn't just statistical reasoning, as those of the '90s and '00s reasoned. And it isn't just the ability to process massive amounts of knowledge (data), as those today reason. A significant advance in any specific area of AI makes people think the Neats have finally beaten the Scruffies [2] (for example, the currently pervasive view that Deep Learning is the silver bullet that can actually get us all the way to AGI), but then an AI Winter forms [3], and all of a sudden the Scruffies were right all along.

My opinion is that the Scruffies have always been right. The only valid Neat view is that true intelligence is really just mathematics, but mathematics is the superset of every algorithm ever. If you want some real philosophy on the subject from one of the most intelligent men who ever lived, read up on Marvin Minsky.

[1] http://en.wikipedia.org/wiki/AI_effect

[2] http://en.wikipedia.org/wiki/Neats_vs._scruffies

[3] http://en.wikipedia.org/wiki/AI_winter


This effect happens in general with technology, but also individually if you study "AI". At first AI is imbued with magic, not unlike perhaps how we think CPUs work. After studying, you start seeing things like -- oh, it's a tree search + heuristic function; oh, it's simulated annealing; oh, it's a bunch of neuron weights.
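To make one of those concrete: here is a minimal, generic simulated-annealing sketch. The cost function, neighbor function, and cooling schedule below are arbitrary choices for illustration, not anyone's reference implementation.

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=10.0, cooling=0.995, steps=5000):
    """Accept a worse candidate with probability exp(-delta/T); T decays geometrically."""
    best = state
    t = t0
    for _ in range(steps):
        cand = neighbor(state)
        delta = cost(cand) - cost(state)
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = cand  # move, possibly uphill while T is still high
        if cost(state) < cost(best):
            best = state  # remember the best state seen so far
        t *= cooling
    return best

# Toy use: minimize (x - 3)^2 by random local steps.
random.seed(0)
x = simulated_annealing(lambda s: (s - 3) ** 2,
                        lambda s: s + random.uniform(-1, 1),
                        state=0.0)
```

Once the temperature is low this degenerates into plain hill climbing, which is the point: the "magic" is just a randomized search schedule.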


Maybe people have a shifting view of what AI is. But I think it's more that there is too much of a focus on the end result. There are many many things that are hard for a human to do, and might require AI to do with the human method, but can be bypassed with a far far simpler mechanism.

Let's take chess as an example. The way a human plays chess is very difficult. But a computer can bypass all that and beat most people with a simple "check all possible moves to depth N and pick the one where your decisions make you have the most pieces".
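That strategy is depth-limited minimax (in negamax form) with a material heuristic at the leaves. A toy sketch, using a hand-built game tree in place of chess and fixed leaf scores in place of actually counting pieces:

```python
def negamax(node, depth, children, score):
    """Search all moves to the given depth, then apply the static heuristic.
    Scores are from the point of view of the side to move, hence the negation."""
    kids = children(node)
    if depth == 0 or not kids:
        return score(node), None
    best_val, best_move = float("-inf"), None
    for move, child in kids:
        val = -negamax(child, depth - 1, children, score)[0]
        if val > best_val:
            best_val, best_move = val, move
    return best_val, best_move

# Hypothetical two-ply game: "start" offers moves a and b; the scores stand in
# for a material count at each leaf position.
tree = {"start": [("a", "A"), ("b", "B")],
        "A": [("a1", "A1"), ("a2", "A2")],
        "B": [("b1", "B1")]}
scores = {"A1": -1, "A2": 3, "B1": 0}
val, move = negamax("start", 2, lambda n: tree.get(n, []), lambda n: scores.get(n, 0))
# picks "b": after "a" the opponent's best reply (a1) leaves us down a piece
```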

The only remotely-good threshold is the Turing test. For soft AI there are no good tests and debate turns into a quagmire.

And gluing a vocoder on something sure as hell doesn't make it AI. People are too anthropocentric.


As far as I can tell, only AI researchers consider any of these techniques to be "AI" in the first place. What laypeople think of AI is specifically intelligence in the sense of autonomous, goal-driven agents (that is, "Intelligent A-Life"); anything else is just algorithmic optimizations of regular math problems that have manual/heuristic "good enough" solutions already.

People call things like IBM's Watson AI because it seems A-Life-y; they don't realize that it doesn't keep state in a way where they could have a decent conversation with it.


Recent trends seem to be that the AI Effect is happening in reverse: processes that were considered normal computational, mathematical and statistical functions are being described as AI.


Exactly, and various dumb devices are branded as "smart" devices, e.g. smart meters.


I also have a dumb question: Where is "AI" in our software development toolchains? Intellisense comes to mind. Compiler optimizations also come to mind. What more is coming or is already out there? Lately, I find myself thinking more and more about Prolog (or something similar) to help encode some thinking. We have seen functional programming make a comeback. Will we see more logic programming as well?


This is actually an excellent question.

Prolog is not AI, as the Japanese discovered [0].

The question of how to make good use of AI in software development has been open for almost forty years at least -- ever since Rich and Waters' Programmer's Apprentice project [1] produced rather unexciting results. (I don't recall in detail what they did, but it didn't set the world on fire.) The failure of Prolog -- not that it failed to be useful at all, but it failed to live up to the hopes -- is instructive. Prolog never delivered on the promise of declarative programming, because its rigid, unintelligent depth-first search strategy makes it necessary to understand the execution model and keep it in mind when writing programs. So it's only mostly declarative. Prolog achieved some popularity because there are times when brute-force depth-first search is all you need, particularly if it can be done very fast; but once you bump against the limitations of that paradigm, Prolog is of little help.

I think this failure is instructive. The key problem, I believe, is controlling search. Search is necessary for reasoning about programs; this is a consequence of Turing's famous Halting Problem proof. But searching -- particularly, searching the massive spaces of possible programs and of possible invariants of programs -- without getting lost is very difficult.

You can't understand a program without understanding its invariants. All interesting programs contain recursion (iteration being a special case of recursion), and understanding a recursive program requires knowing the postcondition of the recursion. That in turn requires coming up with the correct induction hypothesis. A simple example to clarify the meaning of the terms:

  int a[5];
  for (int i = 0; i < 5; i++) a[i] = 42;
The postcondition of this loop is:

  ∀i 0 ≤ i < 5 ⇒ a[i] = 42
which says that all 5 elements of the array are equal to 42. The induction hypothesis is:

  ∀i (∀j 0 ≤ j < i ⇒ a[j] = 42) ⇒ (∀j 0 ≤ j ≤ i ⇒ a'[j] = 42)
where a' is the value of a at the bottom of the loop; this says that if all the elements of a below i are 42 at the top of the loop, then all the elements through i (note the "j ≤ i") are 42 at the bottom. Once we have this induction hypothesis, it's trivial to prove it, and once we've proven it, it's trivial to prove the postcondition. In simple cases like this, human programmers do all this reasoning subconsciously; it's second nature to us. Not so easy for machines, though.
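Proving such a hypothesis requires search, but merely checking it on one concrete run is mechanical. A small Python sketch of that dynamic check (testing the invariant, not proving it):

```python
# Re-run the loop, asserting at each iteration that the induction hypothesis
# holds: if a[0..i-1] are all 42 at the top, then a[0..i] are all 42 at the bottom.
a = [None] * 5
for i in range(5):
    pre = all(a[j] == 42 for j in range(i))        # hypothesis at the top
    a[i] = 42                                      # the loop body
    post = all(a[j] == 42 for j in range(i + 1))   # claim at the bottom
    assert (not pre) or post                       # pre implies post

# The postcondition follows from the final iteration's claim.
assert all(x == 42 for x in a)
```

Dynamic checking like this is cheap; it is the static version, finding the hypothesis for an arbitrary loop, that runs into the search problem.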

The problem in a nutshell is that search is required to find the induction hypothesis; there's no algorithm that will produce it. (This is a clear consequence of the Halting Problem.) While it seems straightforward in this case, we all know that most programs aren't anywhere near this simple. And the search required for real-world cases is still beyond existing AI technology.

[0] http://en.wikipedia.org/wiki/Fifth_generation_computer

[1] http://dspace.mit.edu/handle/1721.1/6054


Seeing your example does make me wonder if an AI of that level is truly out of reach. It's well known that human reasoning is flawed by cheap heuristics thrown in by nature and/or selection.

Looking at your examples, you'd think that just throwing enough memory/speed/power at a simple brute-force/search approach would solve trivial examples.

Not to mention that when a programmer analyses code, he's actually executing it inside his head. Perhaps not all the loops, but looking at a loop he will follow the steps in the code. So in a sense he is executing a piece of code inside his head, thus not violating or disproving the halting problem(?)


"when a programmer analyses code he's actually executing it inside his head"

I have never thought about it that way before.


I'd say it's more like abstract interpretation.


While precondition generation is indeed a very hard problem, for programs such as your example we actually have methods that work. See for example these for sets of slides and the references therein

http://resources.mpi-inf.mpg.de/departments/rg1/conferences/...

http://resources.mpi-inf.mpg.de/departments/rg1/conferences/...

http://resources.mpi-inf.mpg.de/departments/rg1/conferences/...

http://resources.mpi-inf.mpg.de/departments/rg1/conferences/...


What do you think about the newer attempts at declarative programming (which are often embedded in Prolog) such as CLP + consistency techniques and CHR?

Regardless of whether Prolog (or CLP or CHR) is helpful towards "real" AI, I'd like to hear your thoughts about the kind of domains / problems for which these approaches are a good fit.


In my mind, the next big step will be the machine perception, specifically full scene understanding in vision.

Computer vision has advanced very rapidly recently in sub-tasks like object recognition, scene segmentation, 3d-modeling from videos, and others.

Now people are trying to put these elements together, along with text-based metadata and logic for physical interpretation of images (e.g. the coffee cup is on the table which is on the ground and abuts the wall; physical interpretations of spatial information).

Soon enough we'll be to the point where a drone can identify and track most objects in its line of sight and know their physical relationships to each other. This opens up tremendous possibilities in robotics.


> This opens up tremendous possibilities in robotics.

'Shoot the terrorist'.


Don't be so pessimistic. Drones have been used for search & rescue, storm chasing, and even precision farming (precision in the application of pesticides, etc.)


Unless we establish some artificial restrictions, that'll be inevitable.

The majority of people will want to defend themselves from threats -- but people are also less willing than before to expend lives. Anti-war protesters against the Iraq and Afghan wars didn't assail their own soldiers as in Vietnam, but rather couched it as "we don't want American soldiers' lives put in harm's way": we are for the lives of soldiers, so we are against war. So the armed forces used more drones to make war more palatable (fewer of our own lose lives). So, yes, I don't see why the Army etc. would not go in that direction, eventually.

Of course, antiwar protesters will have to find a new antiwar narrative -- which may necessarily have some anti-self (anti-American) component, which I think they have been trying to avoid since it's a harder "sell".

Given that we will have foes, for whatever reason, and that some of those will irrationally engage with us, I would certainly prefer we use drones rather than people to fight those enemies. On the other hand, since it will not incur the same moral penalty on our side, we should reconsider when and how we'd send robotic fighters -- in other words, try to ensure we don't run roughshod over perceived enemies rather than actual enemies who would bring destruction to us.


Which can also be used as 'Assassinate the senator'...


I was thinking about this. Are there any examples of open source Karpathy style computer vision applied to SLAM/point cloud data instead of 2D images?



The two algorithms I have been getting a ton of mileage out of lately are Bayesian Bandits and variations of the TrueSkill ranking algorithm.

Bayesian Bandits Explained:

https://www.chrisstucchio.com/blog/2013/bayesian_bandit.html

TrueSkill Explained:

http://www.moserware.com/2010/03/computing-your-skill.html
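For the curious, the core of the Bayesian Bandit approach (Thompson sampling for Bernoulli arms) fits in a few lines. The conversion rates below are made up for the sketch:

```python
import random

def thompson_pulls(true_rates, trials=10000, seed=0):
    """Keep a Beta(wins+1, losses+1) posterior per arm; each round, sample a
    plausible rate from every posterior and pull the arm with the largest draw."""
    rng = random.Random(seed)
    n = len(true_rates)
    wins, losses, pulls = [0] * n, [0] * n, [0] * n
    for _ in range(trials):
        draws = [rng.betavariate(wins[i] + 1, losses[i] + 1) for i in range(n)]
        arm = draws.index(max(draws))
        pulls[arm] += 1
        if rng.random() < true_rates[arm]:  # simulate the arm's true payout
            wins[arm] += 1
        else:
            losses[arm] += 1
    return pulls

pulls = thompson_pulls([0.05, 0.15])  # arm 1 converts three times as often
```

After a few hundred rounds nearly all pulls concentrate on the better arm; exploration falls out of the posterior sampling rather than a hand-tuned epsilon.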



Yes, AI is popping up all over. The amazing thing is that "deep learning" actually works. We've had neural nets since the 1960s, and by the 1980s, most people in AI were convinced that was a dead end. Yet now neural nets are doing useful work. They haven't even changed much. They're just bigger, and with slightly different weighting and initialization functions.

Now we have the problem that we don't really understand what they're doing when they do work.


It feels like they're just reclassifying heuristics as AI and then saying AI is everywhere. Also there is not a single solid example of what the author refers to as soft-AI. The author says things like

"... developing highly complex socio-technical systems in areas like health care, education, government and cities."

Like what? Give us a solid example. What code? Who is writing it? What does it do?


I agree with you. There seems to be a new marketing trend right now, where any product can be tagged with the term "artificial intelligence" if it uses any of the algorithmic techniques developed in a field of AI research.

Some people are saying that "artificial intelligence" is the new "big data".

One example of this is thegrid.io, which as far as I can gather uses a constraint solver to help in positioning items on a page, and they call it "AI websites that design themselves".

Whether this is a good thing or a bad thing I don't know, but I think it is here to stay. Because it sounds good.


From experience, people used to translate "working on AI" as "building nasty robots". Now, they're just putting an AI label on everything they can.

So, there is some progress.


And yet no serious research is being put into hard AI. Just like five years ago. Or ten. Or twenty.

My childhood passed in reading computer books of the '60s, '70s, and '80s, where full, general AI seemed to be just around the corner. Obviously this problem is far harder than it seemed to be.

But the (almost universal) lack of trying is extremely disheartening.


Hypothesis: Because the solution to "hard AI" isn't actually "general".

I.e., what humans view as "intelligence" isn't actually an emergent property of the right set of rules. Rather, it's a massive and hopelessly self-intersecting set of ad-hoc solutions and special-casing, all jumbled together over millions of years of evolution, producing something that looks general, but only because we're looking at it from within itself. An easy example: have a computer identify all images of a thing called a "cup". That's an arbitrary category with no definition reducible to actual physical properties. What makes something a "cup"? A human saying it is one.

For something to look like "full, general AI" to humans, we'd need to either: Build something that replicates all that specialized hyper-meta-spaghetti-code mental processing of ours to a level of fidelity that's beyond both our current understanding of our brain's structure and the level of complexity tenable by human computer science. Or leave out those evolutionarily-discovered "optimization circuits" and require far, far more processing power than we'll have access to for a good, long time.


"What makes something a "cup"? A human saying it is one"

And the human got their definition of cup by seeing a lot of things others called a cup and extrapolating rules, something that's already obtainable for some problems with solutions like neural networks.
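A toy version of that "extrapolating rules" step: a perceptron trained on invented features (has_handle, holds_liquid, open_top) with cup/not-cup labels. Both the features and the examples are made up purely for the sketch.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge the weights toward each misclassified example."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# (has_handle, holds_liquid, open_top) -> is it a cup?
examples = [((1, 1, 1), 1),   # mug
            ((0, 1, 1), 1),   # tumbler
            ((1, 0, 0), 0),   # suitcase: has a handle, not a cup
            ((0, 0, 1), 0)]   # sieve: open on top, but leaks
w, b = train_perceptron(examples)
classify = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

On this tiny set the learned rule ends up approximating "holds liquid" -- a definition nobody wrote down explicitly.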


So you're saying we can't have proper AI until we've had some semi-decent AI? That explains everything: what constitutes decent AI, versus good AI, versus superlative AI?

The problem is: nobody knows. It's a bit like two cavemen banging rocks together until one says "hey, let's use these rocks to kill something" and the other one goes "eh? How?" and the first one says "I don't know, keep bashing until something dies."


I think, if you take Epenthesis' statement and shift up a meta-level, you're going to lose a lot of the clarity in what you're saying. A cup happens to have a fairly stable, consistent representation in a variety of relevant timescales. But if you want a computer to identify something like "an isomorphism," or "a useless activity," or "something to help a thirsty person not dehydrate," it is not so clear what the rules are anymore.


True, but that doesn't really negate the hypothesis. What are those rules? Are they the same as the ones generated by a neural network? Or are they qualitatively different?

(Diagnostic experiment: Is there a significant subset of "cups" that a naive neural network would always fail to include or only include when also including a significant subset of non-"cups"?)


I believe one problem is that computers right now are terrible at considering things up to some equivalence relation, probably mainly because most equivalence relations are hard to express as computations.

For example, it is "easy" for a human to "see" that a coffee cup and a donut are topologically the same, whereas the same task for a computer, i.e. providing it with enough data to recognize that something has exactly one "hole", seems extremely hard. Another example would be recognizing that, up to rotation, two things are roughly the same. Clearly humans do not solve such problems case by case; they have some built-in classifier system that solves them by a general principle.
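The discrete, noiseless version of "the same up to rotation" is at least easy to state in code; the hard part is the continuous, noisy version. A sketch for small binary grids, checking equality modulo 90-degree rotations:

```python
def rot90(grid):
    """Rotate a rectangular grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def same_up_to_rotation(a, b):
    """Quotient by the four-element rotation group: compare a to each rotation of b."""
    for _ in range(4):
        if a == b:
            return True
        b = rot90(b)
    return False

bar_v = [[0, 1], [0, 1]]
bar_h = [[1, 1], [0, 0]]
diag  = [[1, 0], [0, 1]]
# bar_v and bar_h are the same bar, rotated; the diagonal is genuinely different.
```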


This contradicts our present-day knowledge of cognitive science: probmods.org


> This contradicts our present-day knowledge of cognitive science: probmods.org

But nobody was arguing about the infeasibility of approximations of specific human cognitive abilities. Wasn't the conversation about general human cognitive ability?


>And yet no serious research is being put into hard AI

Google is doing some research. One quote:

"One of my favourite things is artificial intelligence, but it has gotten a very bad rap, but my prediction is that when AI happens it's going to be a lot of computation and not so much clever algorithms but just a lot of computation. My theory is that if you look at your programming, your DNA, it's about 600 megabytes compressed, so it's smaller than any modern operating system, smaller than Linux or Windows or anything like that, your whole operating system, that includes booting up your brain. So your program algorithms probably aren't that complicated, it's probably more about the overall computation. We have some people at Google who are trying to build artificial intelligence and to do it on a large scale to make search better. Very few [other] people are working on this, and I don't think it's as far off as people think." - Larry Page, February 2007

http://www.abc.net.au/radionational/programs/scienceshow/cha...


We already have hard AI. Sort of. It is wetware and it exists as networks of real neurons on silicon. It is sort of cheating, but I think it is probably going to generate a self-aware machine before pure silicon does. It arguably has already produced an aware one.

http://en.wikipedia.org/wiki/Hybrot

edit - as it gets more sophisticated it is going to generate some fairly strange debates. What rights do you give to a bathtub of human neurons that you are trying to use to predict the stock market?


It's definitely not for the lack of trying. It mostly is because it really is a hard problem.


No, hard problem means aligning elements with CSS or using a database. Anyone on Hacker News can tell you that.


Are people not getting the sarcasm? HN posters abuse the term "hard problem" daily.


It's not that hard a problem. It's a problem that looks hard because we were looking in the wrong directions based on "philosophy of mind" equating intelligence with consciousness.


Making something that does a good job on the turing test is hard regardless which direction you look philosophically.


It's difficult, but it's not a Hard Problem in the sense of requiring a complete paradigm shift. It's a mere research problem.


Any problem can be solved by creative redefinition.


My whole point is that the AI problem has always been different from the consciousness problem. Why does AI need to be conscious to do any job I might give it?


From what I understand (layman here), general AI research is roughly divided into two schools of thought. One school believes GAI will be achieved by incrementally working on the different components (vision, language, learning, etc.) separately and eventually gluing those components together. The other school believes we should try to understand the human brain at a biological level and simulate it; it holds that general intelligence will emerge from a relatively simple algorithm and that we should concentrate our efforts on discovering this algorithm. The former school seems to be a lot more popular these days, although research is being done on both fronts.


And the third school believes that general intelligence is normatively-correct reasoning in some sufficiently rich (where Turing-complete is a lower bound on sufficiency) domain/class of possible worlds/hypotheses.


I (another layman) don't think AI research is divided that way. The most visible area of research is "machine learning", where the goal is not AGI but to find better algorithms to solve well-defined statistical engineering problems. In contrast to that, there is a wide area of research that doesn't focus on well-defined problems. On the philosophical side, it's about understanding what intelligence even is. Or about understanding what tasks the brain is solving, or why we make some mistakes but not others. On the more technical side are the embodied people who focus a lot more on grounding of an agent's perception and on interaction with the environment, rather than thinking about intelligence as a passive task that only involves the brain.


Hard AI is a result of a lot of soft AI approaches just like we are a result of many sub-processes.


Creating a GAI which can learn how to act human, on its own, in virtual worlds and in robots with realistic bodies isn't serious research?

http://youtu.be/EfGD2qveGdQ?t=2m30s


How are "cheap parallel computing" and "big data" two things and not one? On top of that, those are hardly breakthroughs, they are the natural thing to happen when the CPU speed hike ends. If we can't build faster, we'll build more. Statistics (aka "soft AI") is the only application that can still grow because it is so much easier to scale than anything else.


I like seeing (soft) AI as the buttress that allows us to see farther, like Newton standing on the shoulders of giants. I guess the sudden presence of AI is due to the sudden amount of available data (and thus, a potential source of useful information).


I may have some dumb questions here. But I'll take a stab at one or two of them anyhow, at the risk of looking even more dumb in this thread.

Is there a possibility, or likelihood, that hard AI could diverge into at least two major dichotomies depending on how those systems form and interact? In how they attain, analyze and process information? In how it is shared? In deciding what should be shared? In how information is used and grown? In how to manage ethical questions that are often very two-sided, and difficult to model with tools like mathematics and logic?

Things like that. And what kind of outcomes could that bring about? AI debates? AI "wars?" AI manipulating other AI?

I guess I'm simply confused and ignorant as to how the playing field is shaped, generally and specifically.

I'm probably not even qualified to put these questions out there, because I have so little knowledge of AI, but the pattern in human thought that strikes me over and over in life is that a "yin/yang" of opposing perception (resulting in oppositional thinking) almost always occurs in naturally intelligent beings. Further, those inevitable disagreements often end up generating new knowledge, which, in turn, often splits into two (or more) "camps" yet again... ad infinitum.

First, is this a valid question and perception of intelligence? And secondly, is it fair to assume this might apply to other forms of intelligent systems? Or maybe I am missing a big part of the discussion in AI, which may already be addressing this (or is disregarding as simply academic or even dead-wrong.)

To me, at least, it seems counter-intuitive that AI would push in one general direction (we can debate what we mean by "direction", too). My sensibilities hazily point to a more dichotomic outcome, perhaps.

Or maybe I'm the one with the intelligence issue! But I'm very interested in these concepts, and even more interested in what we may be blind in seeing as this technology continues to evolve and take on new meanings in both our biological minds, and non-biological minds alike.

Maybe someone can help me out? I think I'm missing something, here. Machines may identify with a certain idea of "certitude", but I have trouble with that, myself.

And if AI scientists have trouble with that, themselves, because I would hope they recognize and practice humility in their thinking and interpretation of meaning, what does that mean for the scientists working to build such a powerful and mysterious type of existence?

Sometimes finding the questions to ask, and learning how to ask them, is harder than teasing out the "solutions", so to speak.

Time for a stiff drink ;)

Edit: Looks like I need to read this, among other texts: http://www.theatlantic.com/technology/archive/2014/05/the-mi...



