I think you're misunderstanding neural networks here: you don't tell the computer what it can or can't do, you just give it feedback channels. Actions it shouldn't take get punished with negative feedback, and actions it should take get rewarded with positive feedback.
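
To make that concrete, here's a toy sketch of what "feedback channels" means in code. Everything in it is made up for illustration (the Environment class, the action names, the reward values); the point is that the trainer never lists what's allowed, it only hands back a number.

```python
import random

class Environment:
    """Toy scenario: the agent is under fire and should learn to take cover."""

    ACTIONS = ["stand", "crouch", "shoot"]  # the raw action space

    def step(self, action: str) -> float:
        """Return only a scalar reward; no rules, no explanations."""
        if action == "stand":
            return -1.0  # negative feedback: standing in the open gets you killed
        if action == "crouch":
            return +1.0  # positive feedback: taking cover keeps you alive
        return 0.1       # shooting back is mildly useful in this toy setup

env = Environment()
print(env.step(random.choice(Environment.ACTIONS)))
```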

You could teach a child like that as well: put it in an unknown environment with an electroshock collar for the negative feedback and something that injects a drug for the positive feedback. Ta-da, you're in prison now! But you might be able to teach the kid some new tricks that way before you go there.

But seriously, this is how you train humans and pets. Less extreme, but you give positive and negative feedback. Just like a pet or a human, the AI will try to receive positive feedback and will keep adjusting what it's doing to maximize exactly that. You don't need to tell the AI that it can crouch; it will figure that out on its own and associate it with the positive feedback of not dying immediately. Of course, if you can pass some knowledge on to it to begin with, it might die a couple of million fewer times before picking up moving, taking cover, shooting back, etc.
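
Here's a minimal sketch of that trial-and-error loop, reusing the same made-up rewards as above (inlined so the snippet runs on its own). It's plain epsilon-greedy action-value learning, not any particular library's API: the agent tries actions, tracks the average feedback per action, and drifts toward whatever got rewarded. Nobody tells it that crouching is a strategy; "crouch" is just one entry in its table.

```python
import random

ACTIONS = ["stand", "crouch", "shoot"]
values = {a: 0.0 for a in ACTIONS}  # estimated average feedback per action
counts = {a: 0 for a in ACTIONS}    # how often each action was tried
EPSILON = 0.1                       # how often to explore at random

def feedback(action: str) -> float:
    # Hypothetical feedback channel, same toy rewards as the sketch above.
    return {"stand": -1.0, "crouch": 1.0, "shoot": 0.1}[action]

for episode in range(1000):
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)        # explore: "die a few times"
    else:
        action = max(ACTIONS, key=values.get)  # exploit what worked so far
    r = feedback(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # running mean

print(max(ACTIONS, key=values.get))  # after training this should print "crouch"
```

Seeding `values` with sensible priors instead of zeros is the code version of "passing on some knowledge to begin with": the agent starts out exploring near behaviors that already work instead of dying millions of times first.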

