Hacker News

If I understand correctly, you're referring to the "easy" problem of consciousness, i.e. the "mechanistic" explanation of how a self could be constructed by the brain. That's an interesting question and I think your take is a coherent and plausible one (from a materialistic perspective). However, I still think this doesn't get around the hard problem of why these interactions actually feel like anything. I've never heard a satisfying materialistic explanation of that. Do you believe the interactions could in principle be implemented on any Turing machine? Or are they substrate dependent?


Actually I do; but... 1) such a Turing machine may have to be a lot more powerful than anything we currently have. With or without invoking QM mechanisms, there is reason to believe that every single neuron does a lot more computation than our simplistic models in current ML neural nets. 2) It may not be possible to "program" a machine to be conscious in the way we feel conscious; we'd probably have to literally evolve it, i.e. in a rich simulated environment, starting with simple artificial "organisms" that "feel" this environment and then getting progressively more complex.

But I do believe in Wolfram's "principle of computational equivalence", and thus that anything that can implement a Turing machine can also implement any other complex system, including consciousness.
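To make that substrate-independence claim concrete, here's a minimal sketch (in Python; the `run_tm` helper and the binary-increment rule table are my own illustrative choices, not from the thread) of a Turing machine as nothing but a transition table. The point is that the computation is fully specified abstractly: any substrate that can realize this table, whether silicon or a sufficiently elaborate water pipeline, performs the same computation.

```python
# A minimal Turing machine that increments a binary number.
# The machine itself is just a transition table; the substrate
# running it (this Python interpreter, relays, pipes...) is irrelevant
# to what is computed.

def run_tm(tape, head, state, rules, blank="_"):
    """Run until the halting state 'H' is reached; return the final tape."""
    tape = dict(enumerate(tape))  # sparse tape: cell index -> symbol
    while state != "H":
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1, "S": 0}[move]
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in cells).strip(blank)

# Transition table for binary increment; the head starts on the
# least-significant (rightmost) bit.
INC_RULES = {
    ("inc", "1"): ("0", "L", "inc"),  # carry: 1 -> 0, keep moving left
    ("inc", "0"): ("1", "S", "H"),    # absorb the carry and halt
    ("inc", "_"): ("1", "S", "H"),    # overflow past the MSB: new digit
}

print(run_tm("1011", head=3, state="inc", rules=INC_RULES))  # -> 1100
```

Of course this only illustrates that the *computation* is substrate independent; whether the same holds for consciousness is exactly the question under debate.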


I guess this is where we differ. I don't see a sufficient reason to believe that increasing computational capacity/complexity alone gives rise to consciousness. Moreover, I think there are common sense reasons to believe that consciousness is not substrate independent. Therefore, I don't see it as obvious that Turing completeness is sufficient for consciousness. For example, as someone else on this post has pointed out, a sufficiently complex water pipeline can implement a Turing machine. However, I doubt it would ever be conscious, no matter how large we make it. I think representing and processing information is orthogonal to experiencing.


I think we do agree... "complexity alone" certainly will not give rise to consciousness. Consciousness begins with feeling and separating the "I" from the "other"; I feel hunger, and that there is food. That's why I said we'd have to evolve it in a simulated environment, one in which there are things for a nascent consciousness to feel. So in that sense yes, it depends on the substrate, but the substrate could be virtual, simulated on a powerful enough Turing machine.


You haven't explained why you think they shouldn't "feel like anything". How do you distinguish "feeling like anything" from anything else you experience?


Well, as far as I'm aware, notions of feeling or experiencing are not accounted for in our current physical models. Does an electron feel anything? On the one hand, if it does, it seems to me that physical models have to be extended to include some primitive form of consciousness. This would be something like panpsychism. On the other hand, if single electrons do not have consciousness, why do large collections of them in specific structures have it? Note that, to me, it seems insufficient to say that collections of electrons can be used to model things or perform computations, because that raises the question of why this modeling has a feeling tone (qualia) to it.

Finally, I don't know if I can make a meaningful distinction between feeling and experiencing. I believe a feeling is an experience.



