Nope. We disagree fundamentally on the aim of the Turing test. Sure, we agree things were thrown out the window, and we agree it's a practical test. You think its aim is to provide a fair way of judging whether machines are conscious.
I think it's a thought experiment to show that machines will eventually be able to fool an average human for some small length of time. That's a lot less grandiose.
A mistaken interaction with an IRC bot or an automated phone service has the same net effect.
Hm? "I think this was a huge step in the right direction."
I guess what I'm arguing is that if there's any fair test, it has to be practical and not metaphysical; that's why I think the Turing Test is a step in the right direction. Do I claim that the Turing Test as-is is a fair way of determining "consciousness" in any reasonable and practical manner? No.
If I'm reading you correctly, you're saying that the Turing Test doesn't say anything about consciousness, but rather is just about the probability of a machine being able to fool a human. I'm saying that the reason Turing even ponders the question is that there's no good way of definitively answering whether a machine (or another human, for that matter) is conscious. That leaves the ability to fool (or convince) other thinking machines that one is intelligent (or conscious) as the only viable metric.
I feel like most of this is mainly semantics. I, too, don't believe that one can actually determine the consciousness of anyone but oneself. However, we do convince ourselves that other humans are in fact conscious, so it's still interesting to try to figure out what it would take for us to do that, and then apply the same standards to machines.