Quote:

I think the underlying question here is: "Do you believe in the soul?"

If you do, then there is likely no question of whether machines can be alive.
If you do not, then machines undoubtedly can, since they would not be much different from us.




This has nothing to do with souls, in my opinion. In fact, what do you mean by "soul" in this context anyway?

Behaviour is our reaction to our world. Whether it's the soul that makes our reactions unique (I think it's quite safe to say that no one reacts 100% the same, biological activities like breathing included), or whether it's a preset genetic code defining our character, is pretty irrelevant.
A machine will never have a soul (the spiritual kind), and even if we could give it one, it would always be a recreation, since we cannot give souls, if there even is such a thing as a soul. And unless we develop biological bots, some sort of semi-human clones able to reproduce and self-repair, I don't think robots will come close to humans in their behaviour.

Self-awareness can easily be faked, or at least faking it is far from impossible, but would a perfect fake mean the robot has actually become self-aware, or would it still be a fake? My calculator can give me some very clever answers, if and only if I 'ask' the right questions. I don't think we should expect a robot to develop new answers to questions not yet asked, to situations not yet experienced, or to objects not yet added to its library of knowledge.

I think we are a long, long way from robots that can really learn, and a long way from a perfect imitation that can't be distinguished from a human, or at least from how a human would act. The latter would have to be achieved with no 'pre-defined behaviour' to act by, not even some sort of 'procedural behaviour', because wouldn't that still be an imitation rather than the real learning we supposedly do?

Cheers


PHeMoX, Innervision Software (c) 1995-2008

For more info visit: Innervision Software