Hacker News

My favorite NLP/Computer Language problem:

  Moe: Hey, is Curly back from vacation yet?

  Larry: I saw a Red Lamborghini in the parking lot.

  Moe: Cool
We all assume Curly now drives a red Lamborghini, but most computer language systems would be lost at the inference here. Children learn this trick early; computers are very challenged by these sorts of language problems.


I didn't make that inference; I figured Larry was avoiding the question.

NLP is hard


I actually made the inference, but then rejected the hypothesis. It seemed unrealistic to me that Curly would own a Red Lamborghini.

An AI could form _a number of_ inferences and then use a knowledge base to choose a likely candidate. In fact, I feel like IBM's Watson gave us a little preview of what an AI's "thought process" might be like.


I figured there was some kind of temporal timey-wimey thing going on, because the Three Stooges pre-date red Lamborghinis.


I bet it'd make more sense with vocal inflection included ;)


I don't think the main challenge is to teach this "trick" to computers. The problem is probably to give them the proper context.

If the system knows Curly drives a red lambo, and that it is a rare car in this area, and that Curly usually only parks in the parking lot when he's around, and etc. etc. etc.


> "If the system knows [...]"

He was pointing out the difference: humans do not need the context.

I do not know who Curly is, and I still got what car he drives just by reading the three-line dialog.


Except that you already know what a red Lambo is, and that it's safe to assume it's a rare car almost anywhere, and so on. You know this from experience, as do I and most other people. I think that was the parent's point.


>I still got what car he drives just by reading the three-line dialog.

I inferred that too, but it doesn't mean that it is correct.


Are you by any chance channelling Doug Lenat?


The point is that humans don't need the context to understand the dialogue.


There's still a rule here though, and one that can be learnt: when someone references an object when asked about a person, the inference is the object has something to do with the person. It doesn't directly answer the question, but is a clue that can be used to predict the answer.


Yeah, it doesn't seem all that hard to form an association between the car mentioned and Curly. From that association and the association between the car and the parking lot (space/time locality), it seems reasonable to add some probability to an association between "Curly" and "here/now" -- and from there be able to answer some questions about Curly (Does Curly have a car? Which car? Is Curly back from vacation?) -- all from pretty straightforward parsing based on nouns and proximity.

Not saying that parsing natural language is easy, just not sure this is such a terribly hard example (for a system that's prepared to cheat and/or appear stupid/gullible).

E.g., parse both the above and the below correctly with the same parser:

    Ann: Have you seen my dragon?
    Dad: I think he is playing with the bear in your room.
    Ann: Ok.
(Personally I'd be happy if a system thought Ann had a dragon, but CIA analysts might be less than enthusiastic)
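To make the "nouns and proximity" idea concrete, here's a minimal toy sketch. Everything in it is an illustrative assumption, not a real NLP system: nouns are crudely approximated as capitalized non-initial words, a question line marks which entities are being asked about, and the very next reply's nouns get associated with them. (The dialogue is lightly normalized so "vacation" is lowercase.)

```python
# Toy sketch of the noun/proximity association heuristic described above.
# The extraction rule and scoring are deliberately crude assumptions.
import re
from collections import defaultdict

def extract_nouns(line):
    # Crude stand-in for a real parser: treat capitalized words
    # (excluding the sentence-initial word) as nouns/names.
    words = re.findall(r"[A-Za-z]+", line)
    return [w for w in words[1:] if w[0].isupper()]

def build_associations(dialogue):
    # When a reply immediately follows a question about some entity,
    # associate the reply's nouns with that entity (space/time locality).
    assoc = defaultdict(set)
    asked_about = []
    for speaker, line in dialogue:
        nouns = extract_nouns(line)
        if "?" in line:
            asked_about = nouns  # entities the question is about
        elif asked_about:
            for person in asked_about:
                assoc[person].update(n for n in nouns if n != person)
            asked_about = []
    return dict(assoc)

dialogue = [
    ("Moe", "Hey, is Curly back from vacation yet?"),
    ("Larry", "I saw a Red Lamborghini in the parking lot."),
    ("Moe", "Cool"),
]
print(build_associations(dialogue))  # {'Curly': {'Red', 'Lamborghini'}}
```

This happily produces the "gullible" association the parent mentions -- and it would fail outright on the dragon example, since "dragon" isn't capitalized, which is exactly why the crude noun extractor is only a placeholder for a real parser.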



