All of this is true, but I'm also proposing it needs to judge the feedback on its own, whether it's positive or negative... It has to weigh the outcome of its actions against the goal it needs to reach, and understand on its own whether that outcome is the desired one or not. It needs to make the judgement itself...
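
Roughly what I mean, as a minimal Python sketch (everything in it, the `Outcome` type and the distance-to-goal idea, is just my own illustrative assumption, not anything from a real bot API): the bot only ever receives the raw outcome of an action, and it alone turns that into positive or negative feedback by comparing the outcome against the goal it is chasing.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Raw result of one action; nothing labels it good or bad."""
    distance_to_goal: float   # how far the bot still is from whatever it is trying to reach

class SelfJudgingBot:
    def __init__(self):
        self.prev_distance = None   # what the last outcome looked like

    def judge(self, outcome: Outcome) -> float:
        """The bot's own verdict: positive if this outcome moved it closer
        to its goal than the last one, negative if it moved it further away."""
        if self.prev_distance is None:
            self.prev_distance = outcome.distance_to_goal
            return 0.0
        verdict = self.prev_distance - outcome.distance_to_goal
        self.prev_distance = outcome.distance_to_goal
        return verdict   # > 0: "I judge this as desired", < 0: undesired

bot = SelfJudgingBot()
print(bot.judge(Outcome(10.0)), bot.judge(Outcome(7.0)))   # 0.0, then +3.0: the bot calls it good
```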

Following your example grin the child should determine on its own if what it is doing is positive or negative. Otherwise you'll never get firestarters and parent-killers laugh

In your first line, if you reward a bot for shooting and punish it for running away, you'll get rambo-like behavior over and over. But following my first paragraph, you'll get a rambo for the first 40 deaths grin then he'll become a camping chicken, without you instructing any of this in the programming or the feedback... And after the first 20 camp-kills he'll jump rambo again and go in a cycle until he finds the best mix of rambo and camping chicken...
Just like a weak player in any MP server grin
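
Here's a toy sketch of that cycle, under big assumptions: just two hard-coded styles ('rambo' and 'camper') and a made-up `simulate_life` standing in for one life on the server. Nobody tells the bot which style is right; it keeps its own kills-per-life score for each style from the outcomes it judges, and it drifts between them until the averages settle.

```python
import random

STYLES = ["rambo", "camper"]

def simulate_life(style: str) -> int:
    """Hypothetical stand-in for one life on the server: kills scored before dying."""
    if style == "rambo":
        return random.choice([0, 0, 1, 3])   # flashy, but dies a lot
    return random.choice([1, 1, 2, 2])       # camping: steadier kills

def run(lives: int = 200) -> None:
    score = {s: 0.0 for s in STYLES}   # total kills the bot credited to each style
    plays = {s: 1 for s in STYLES}     # lives spent in each style
    for _ in range(lives):
        # mostly follow whichever style has judged itself best so far,
        # but sometimes flip back -- this is what produces the rambo/camper cycle
        if random.random() < 0.2:
            style = random.choice(STYLES)
        else:
            style = max(STYLES, key=lambda s: score[s] / plays[s])
        kills = simulate_life(style)
        score[style] += kills          # the bot's own feedback: kills per life
        plays[style] += 1
    print({s: round(score[s] / plays[s], 2) for s in STYLES})

if __name__ == "__main__":
    run()
```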

Isn't this behavior more advanced? It learns from the feedback it gives itself, judging based on the input and the outcome... And the feedback it judges changes its input and outcome, making it a self-sustaining mechanism (no human interaction or judgement at all).

That's what I meant...

It might be next to impossible, but another thought is to not provide a goal at all and leave the A.I. to set its own goal. That requires a richer environment though, one where the A.I. can die and has 'food' or something more, but again, without instructing it that food is good or death is bad. I can't even imagine how this could be done... I'm sleepy laugh
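
A half-asleep guess at one way it might be done, as a rough sketch under made-up assumptions (a tiny hypothetical 1-D world with food cells and a death cell, all names invented): the A.I. is never told food is good or death is bad; its only built-in drive is novelty, preferring cells it has seen least, and anything goal-like has to emerge from the fact that eating happens to keep it alive long enough to keep exploring.

```python
from collections import defaultdict

class TinyWorld:
    """Hypothetical 1-D world: a couple of cells hold food, one cell kills.
    The agent is never told which is which."""
    SIZE, FOOD, DEATH = 10, {2, 7}, 5

    def __init__(self):
        self.pos, self.energy = 0, 5

    def step(self, move: int):
        self.pos = max(0, min(self.SIZE - 1, self.pos + move))
        self.energy += 1 if self.pos in self.FOOD else -1
        dead = self.pos == self.DEATH or self.energy <= 0
        return self.pos, dead

def clamp(p: int) -> int:
    return max(0, min(TinyWorld.SIZE - 1, p))

def novelty_agent(episodes: int = 50) -> None:
    visits = defaultdict(int)   # the agent's only memory: how often it has seen each cell
    for _ in range(episodes):
        world, dead = TinyWorld(), False
        while not dead:
            # curiosity is the sole built-in drive: step toward the cell
            # it has visited least; no food/death labels exist anywhere
            move = min((-1, 1), key=lambda m: visits[clamp(world.pos + m)])
            pos, dead = world.step(move)
            visits[pos] += 1

novelty_agent()
```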

