Moe: Hey, is Curly back from vacation yet?
Larry: I saw a Red Lamborghini in the parking lot.
Moe: Cool
We all assume Curly now drives a red Lamborghini, yet most computer language systems would be lost at the inference here. Children learn this trick early; computers find these sorts of language puzzles very challenging.
I actually made the inference, but then rejected the hypothesis. It seemed unrealistic to me that Curly would own a red Lamborghini.
An AI could form _a number of_ inferences and then use a knowledge base to choose a likely candidate. In fact, I feel like IBM's Watson gave us a little preview of what an AI's "thought process" might be like.
I don't think the main challenge is to teach this "trick" to computers. The problem is probably to give them the proper context.
If the system knows Curly drives a red lambo, and that it is a rare car in this area, and that Curly usually only parks in the parking lot when he's around, and etc. etc. etc.
Except that you already know what a red lambo is, and that it's safe to assume it's a rare car almost anywhere, and so on. You know this from experience, as do I and most other people. I think that was the parent's point.
There's still a rule here though, and one that can be learnt: when someone references an object when asked about a person, the inference is that the object has something to do with the person. It doesn't directly answer the question, but it is a clue that can be used to predict the answer.
Yeah, it doesn't seem all that hard to form an association between the car mentioned and Curly. From that association, and from the association between the car and the parking lot (space/time locality), it seems reasonable to add some probability to an association between "Curly" and "here/now" -- and from there to be able to answer some questions about Curly (Does Curly have a car? Which car? Is Curly back from vacation?) -- all from pretty straightforward parsing based on nouns and proximity.
Not saying that parsing natural language is easy, just not sure this is such a terribly hard example (for a system that's prepared to cheat and/or appear stupid/gullible).
E.g., parse both the above and the below correctly with the same parser:
Ann: Have you seen my dragon?
Dad: I think he is playing with the bear in your room.
Ann: Ok.
(Personally I'd be happy if a system thought Ann had a dragon, but CIA analysts might be less than enthusiastic)
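The noun-proximity rule described above can be sketched as a toy heuristic. This is a minimal sketch, not a real NLP system: the `OBJECT_LEXICON` and the set of known people are assumptions standing in for the knowledge base the thread talks about, and possessives ("my dragon") would need extra handling.

```python
# Toy association builder: when a question about a person is answered by
# mentioning objects, link those objects to the person (noun proximity).

# Assumed mini knowledge base; a real system would need far more context.
OBJECT_LEXICON = {"lamborghini", "car", "dragon", "bear", "room"}

def associate(question, reply, known_people):
    """Infer (person, objects) from a question/reply exchange.

    person  -- a known person named in the question, if any
    objects -- lexicon words mentioned in the reply
    """
    person = next((p for p in known_people if p in question), None)
    words = [w.strip(".,?!").lower() for w in reply.split()]
    objects = [w for w in words if w in OBJECT_LEXICON]
    return person, objects

person, objects = associate(
    "Hey, is Curly back from vacation yet?",
    "I saw a Red Lamborghini in the parking lot.",
    {"Curly", "Ann"},
)
print(person, objects)  # Curly ['lamborghini']
```

From the resulting ("Curly", "lamborghini") link plus the "parking lot" locality, a system could (gullibly) raise its estimate that Curly is here now -- exactly the cheap-but-plausible inference discussed above.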