My question [1] (because I thought some people here might want to weigh in on this too):
There are many different possible interpretations of "more human", e.g.:
* More like humans in all respects
* More beneficial to (all) humans (for some value of "beneficial")
* Having more of those characteristics of (some but not all) humans that we consider "positive", like kindness and compassion
Of course, there are other possibilities. This list was meant to be illustrative, not exhaustive.
Depending on which definition of "more human" one chooses, the results can be radically different. Under the first definition, we would end up with AI that can fall in love, or get angry, jealous, or depressed. Under the second, we could end up with AIs that decide that what is "beneficial" for us is different from what we ourselves think is beneficial (much as a parent might make a different decision for a child than the child itself would make). Under the third, we have the burden of defining exactly what those positive characteristics are. For example, does "patriotism" make the cut?
So my question to you is: what exactly is it that you really mean when you say you want AI technology to be "more human"?
I like the idea of the two-part AMA. The way these things always go for me is that I see a link to one I'm interested in, but it's four hours too late and I've missed being a part of it.
---
[1] https://www.reddit.com/r/science/comments/3eret9/science_ama...