
I don’t think this is true. Software often operates on external stimuli and behaves according to its programming, but in unanticipated ways. Neural networks are also learning systems: they learn highly non-linear responses to complex inputs and can, as a result, behave in ways outside their training. The learned function they represent doesn’t have to coincide with the training data, or even interpolate it; that depends on how the loss optimization was defined. Nonetheless, the software is not programmed as such: it merely evaluates the network architecture, with its weights and activation functions, given a stimulus. The output is a highly complex interplay of those weights, functions, and inputs, and it cannot reasonably be intended or reasoned about; you can’t specifically tell it what to do. It’s not even necessarily deterministic, since random seeding plays a role in most architectures.
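A minimal sketch of the point about evaluation vs. programming (illustrative only, not from the comment above): the code below implements only the generic forward-pass rule of a tiny feed-forward network, while the actual input-output behavior comes entirely from the weight values, which here are just random stand-ins for learned parameters. The network shape (2-3-1) and the `make_network` helper are hypothetical choices for the example.

```python
import random

def make_network(seed):
    """Build a tiny 2-3-1 feed-forward net with random weights
    (a stand-in for 'learned' weights). The evaluation code is
    generic; the behavior is determined by the numbers, not by
    any explicit programming of what the function should do."""
    rng = random.Random(seed)
    W1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
    b1 = [rng.uniform(-1, 1) for _ in range(3)]
    W2 = [rng.uniform(-1, 1) for _ in range(3)]
    b2 = rng.uniform(-1, 1)

    def relu(v):
        return max(0.0, v)

    def forward(x):
        # Hidden layer: weighted sum, bias, non-linearity.
        h = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        # Output layer: weighted sum of hidden activations.
        return sum(w * hi for w, hi in zip(W2, h)) + b2

    return forward

# Identical evaluation code, different seeds: two different
# learned functions, mirroring the role of random seeding.
f_a = make_network(seed=0)
f_b = make_network(seed=1)
x = [0.5, -1.0]
print(f_a(x), f_b(x))
```

The same few lines compute wildly different functions depending only on the seed, which is the sense in which the output is an interplay of weights and inputs rather than something the programmer specified.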

Whether software can be sentient remains to be seen. But we don’t understand what induces or constitutes sentience in general, so it seems hard to assert that software can’t do it without understanding what “it” even is.


