If you believe that what we describe as "consciousness" is emergent from the ideas a material brain develops about itself, then it's in fact not logically possible to have a world that is physically identical to ours yet does not contain consciousness. So indeed, premise 2 sneaks in the conclusion.
To illustrate this point, here's an argument with the same structure that would similarly "prove" that gravity doesn't cause things to fall down:
1. In our world, there is gravity and things fall down.
2. There is a logically possible world where there is gravity yet things do not fall down.
3. Therefore, things falling down is a further fact about our world, over and above gravity.
4. So, gravity causing things to fall down is false.
I don't think your point 2 is directly analogous to his point 2.
Because a world where things do not fall down is not physically identical to a world in which they do.
I think the point of arguing about p-zombies is this: do you believe it's possible for a human being to exhibit all the external characteristics of consciousness without an internal conscious experience? If you believe that's possible, then you can posit a world in which consciousness simply does not exist, yet which is physically indistinguishable from our world by any experiment, because, as far as I know, there is no test that can prove an individual has an internal consciousness rather than merely mimicking one. Most arguments that p-zombies can't exist sort of rely on the internal conscious experience of the person making them, which no one else has access to -- "I have an internal conscious experience of the world; other people are similar to me, so they must also have those experiences."
That is _not_ true about a world in which gravity does not exist, for obvious reasons. That universe would be very different from ours and easily distinguishable through experiment.
I think his point does hinge on whether it's possible for p-zombies to exist, but it's not as silly as you all are making it out to be, and it is not begging the question.
I actually think his weakest points are parts 3 and 4. The larger problem is that we don't really have good definitions of consciousness and related concepts, let alone a complete physical explanation of their origins. His whole argument hinges on the fact that we currently have no way to test for internal conscious experience, but that might not always be true.
To elaborate on your statement, we all think in very different ways. Recently there was an academic test posted here that evaluated “how” a person thinks (internal monologue, use of images, how memories are recalled, etc.). After my girlfriend and I both took the test and I saw how differently we both think, I was shocked. Had we not taken this quiz I’d have assumed the inside of her mind fundamentally worked the same way as mine does. But that is seemingly very far from the truth.
Whereas I can visualize, use an internal monologue, vividly recall my memories, etc. at will, by default I do none of the above, and my thoughts are opaque to me. She, by contrast, almost exclusively uses her internal monologue when thinking, and her entire thought process is consciously visible to her. It’s entirely conceivable that other people might not have an experience of “consciousness” resembling anything like what my idea of consciousness is.
> Do you believe it's possible for a human being to exhibit all the external characteristics of consciousness without an internal conscious experience?
Nobody knows whether conscious experience is required in order to "exhibit all of the external characteristics".
It's possible that the only way to get from state N to state N+1 is to include the consciousness function as part of that calculation.
A counter to this would be that a lookup table of states would produce the same external characteristics without consciousness.
A counter to that counter would be that the consciousness function is required to produce state N+1 from state N. The creation of the lookup table must have invoked the consciousness function to arrive at and store state N+1.
The thing we just don't really know is whether state N+1 can be derived from state N without the consciousness function being invoked.
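The argument above can be made concrete with a toy sketch. Here `conscious_step` is a hypothetical stand-in for the "consciousness function", and the states are just integers; nothing here is a model of a real brain:

```python
def conscious_step(state):
    # Stand-in for the "consciousness function": the only defined
    # way to derive state N+1 from state N.
    return state + 1

# Building the lookup table necessarily invokes conscious_step
# once per transition -- this is the "counter to the counter".
table = {n: conscious_step(n) for n in range(100)}

def zombie_step(state):
    # Replays stored transitions without invoking conscious_step,
    # producing identical external behavior.
    return table[state]

# Externally indistinguishable on every precomputed state:
assert all(zombie_step(n) == conscious_step(n) for n in range(100))
```

The open question in the comment maps to whether `zombie_step`-style replay could exist at all without `conscious_step` ever having been invoked somewhere.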
Is a video game a conscious experience for a computer?
Now imagine an internal video game in a computer system that is being generated from the real world inputs of what it sees/hears/feels around it. You take outside input, simulate it, record some information on it, and output feedback into the real world.
Many people would say this isn't consciousness, but I personally disagree. You have input, processing, introspection, and output. The loops that occur in the human brain are more complex, but the same things are occurring. There is electrical processing and there are chemical reactions occurring in the human mind; just because we don't understand their exact workings doesn't mean they are unrelated to consciousness. Moreover, we can turn this consciousness off with drugs and stop said electrical processing.
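The loop being described (input, processing, introspection, output) can be sketched in a few lines. Every name and number here is illustrative only, not a claim about how brains or real systems work:

```python
def perceive(world):
    # Input: sample the outside world's signals.
    return {"light": world["light"], "sound": world["sound"]}

def process(percept, memory):
    # Processing: record some information and build a simple model.
    memory.append(percept)
    return {"bright": percept["light"] > 0.5}

def introspect(memory):
    # Introspection: the system inspecting its own recorded state.
    return len(memory)

def act(model):
    # Output: feedback into the real world.
    return "squint" if model["bright"] else "relax"

memory = []
world = {"light": 0.9, "sound": 0.2}
model = process(perceive(world), memory)
print(act(model), introspect(memory))  # prints: squint 1
```

The disagreement in the thread is then whether running such a loop, at any level of complexity, constitutes consciousness or merely resembles it.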
Midazolam/Versed sedation seems pretty close to a p-zombie. You can have someone who seems completely awake, walking around and interacting normally, but if you ask them later they were completely unconscious from their own perspective. So self-reported consciousness isn't always accurate. And it also seems that consciousness is very closely tied to memory.
(I'm not arguing a particular position, but trying to figure out what to make of this. Also, this is based on what I've read, not personal experience.)
> You can have someone who seems completely awake, walking around and interacting normally, but if you ask them later they were completely unconscious from their own perspective
Were they unconscious, or are they now unable to remember what they did? I.e. amnesiac.
If you ask a deceased person about their life, you will find that they also will offer no evidence of a previous conscious experience. Life, apparently, resembles unconsciousness.
You experience your own consciousness - your own model of your self, time, and the world as perceived through your physical sensory apparatus. This could give you a probability of 100% certainty of your own consciousness. You're a good skeptic, though, and after much consideration, you decide that, despite absence of evidence to the contrary, you'll allow for 2% uncertainty, since you might be a simulation specifically designed to "feel" conscious, or some other bizarre circumstance.
Knowing this, you compare your own experience with reports by others, and find that, despite some startling variety in social and cultural practice, humans all more or less go through life experiencing the world in a way that more or less maps to your own experiences. You find that even Helen Keller, despite her tragic disability, wrote about experiences which you can simulate for yourself. You conclude that if you somehow swapped places with her, she would be able to map the sensory input of your physical sensors to her own experience of the world, and vice versa.
This leads you to think that our physical brains are performing a process that models the world, and it does so consistently. After reading up on people's experiences, you also learn that our subjective experiences are constructed, moment by moment, by a combination of these world models, real-time stimulation, and abstract feedback loops of conscious and unconscious thinking.
The more you read, the more evidence you have of this strange loop being the default case for every instance of a human having a brain and being alive.
The Bayesian probability that you are conscious, because of your brain, given all available evidence, approaches 100% certainty. You conclude your brain is more or less the same as anyone else's brain, broadly speaking, and this is supported by the evidence provided by a vast majority of accounts from other similarly brained individuals through all of human history.
Since your brain doesn't have a particular difference upon which to pin your experience of consciousness, and the evidence doesn't speak to any need for explanation, Occam's razor leads you to the conclusion that the simplest explanation is also the best. The living human brain is necessary and sufficient for consciousness, and consciousness is the default case for any living human brain.
The posterior probability that any given human (with a "normal" living brain) is conscious approaches 100% certainty, unless you can specifically provide evidence to the contrary. Saying "but what if p-zombies exist" makes for a diverting thought experiment, but it's rationally equivalent to saying "but what if little invisible unicorns are the ones actually experiencing things" or "what if we're all in the Matrix and it's a simulation" or "what if we're just an oddly persistent Boltzmann brain in an energetic nebula somewhere in the universe."
Without evidence, p-zombies are a plot device, not a legitimate rational launching off point for theorizing about anything serious.
Humans are conscious. We have neural correlates, endless recorded evidence, all sorts of second hand reporting which can compare and contrast our own first hand experiences and arrive at rational conclusions. Insisting on some arbitrary threshold of double blind, first hand, objective replicable evidence is not necessary, and even a bit shortsighted and silly, since the thing we are talking about cannot be directly shared or communicated. At some point, we'll be able to map engrams and share conscious experience directly between brains using BCI, and the translation layer between individuals will be studied, and we'll have chains of double blinded, replicable experiments that give us visibility into the algorithms of conscious experience.
Without direct interfaces to the brain and a robust knowledge of neural operation, we're left with tools of abstract reasoning. There's no good reason for p-zombies - they cannot exist, given the evidence, so we'd be better served by thinking about things that are real.
>since you might be a simulation specifically designed to "feel" conscious
I would argue this is actually consciousness also. If (and yea, it's a big if) consciousness is an internal model/simulation of how we experience reality, then a simulation of a simulation is still a simulation.
I agree - once you've settled your math on consciousness, you can go back and modify the priors based on new evidence. One of the crazier suppositions that actually makes a dent in the posterior probability is the simulation hypothesis.
If all civilizations that develop computation and simulation capabilities converge to the development of high fidelity simulations, then it's highly likely that they would create simulations of interesting periods of history, such as the period of time when computers, the internet, AI, and other technologies were developed. We just so happen to be living through that - I still put my odds of living in base reality somewhere above 98%, but there is a distinct possibility that we're all being simulated so that this period of history can be iterated and perturbed and studied, or some such scenario.
Maybe someone ought to start studying the science of universal adversarial simulation attacks, to elicit some glitches in the matrix. That'd be one hell of a paper.
Taking this as true, wouldn't that mean that a lack of published papers on this topic is light evidence of being in a simulation? Also that it would be fairly dangerous to bring the subject to the public's attention.
> If you believe that what we describe as "consciousness" is emergent from the ideas a material brain develops about itself, then it's in fact not logically possible to have a world that is physically identical to ours yet does not contain consciousness.
This sneaks in an implicit axiom: that the brain is not only necessary, but is also sufficient, necessarily, for consciousness (implicitly ruling out some unknown outside, non-materialistic force(s)).
What is Chalmers saying, then? As I understand it, he is saying that there can be a world where consciousness does not exist, but no possible physical experiment can distinguish between that world and our world. But that simply means the consciousness he is looking for has absolutely no consequence, and therefore his point has no value...
How is this related to the 2nd point in OP's comment?
The whole Chinese room argument is based merely on a misinterpretation of the computationalist/physicalist argument.
Of course the computer or person who executes the program does not understand Chinese; they are just performing arithmetic operations. The entity that understands Chinese is the program itself, not the medium on which the program runs.
But Chalmers doesn't think that approach works, nor any other physicalist attempt to explain consciousness. The problem with what you stated is that you're substituting ideas about consciousness for sensations. And those aren't the same thing. We experience sensations as part of being embodied organisms, and then we think about those sensations.
It's quite clear if you approach these things logically that Chalmers doesn't do a lot of thinking before coming up with these arguments. All of his arguments boil down to "if we assume that consciousness is different from everything else, then it's different from everything else". He gets way, way too much attention for someone who is sub mediocre in his reasoning.
He also doesn't understand what computation is, even though he often makes confident statements about it. He thinks computation is a subjective process, that something only counts as a computation if someone interprets it as such, which is simply wrong, not a debatable topic. And this is the core of one of his other arguments about why consciousness can't be a computational process.
There is not. It's by far the most likely explanation, and even if you don't agree with that, it is at least completely consistent with everything we know about computation.
For one, you would have to determine whether physical laws are computational processes. Stephen Wolfram is trying this, but it requires some incredible assumptions.
The laws of physics we know right now are either computable or stochastic with computable probabilities. This was of course the only possible outcome if physics made any sense at all, since the purpose of physics is to compute outcomes, so a physical theory that was not computable would never have been invented.
Still, the laws of physics could be anything and it wouldn't matter for this question. The only relevant question is whether our brains are computers, regardless of how the physics work at the lowest level. After all, we have clear proof that you can make computers on the existing laws of physics (I'm typing this reply on one!), so all we need to know is whether our brains are bio-chemical computers. Neuroscience is nowhere near a level where it can answer this, but it at least remains a plausible explanation. After all, we humans can't compute any non-computable function, or at least none that we know of (the Church-Turing thesis).