Originally Posted By: Kartoffel
Well, I agree. But you need to somehow define wrong and right. No one can do this on their own. Children get taught
about good and bad by their parents all the time. Feelings and emotions also contribute to this feedback. If it hurts,
you have most likely done something wrong (punching the wall or whatever), and if you're feeling happy, the opposite
might be the case.
This way you also get taught about wrong and right. But you never learned that pain is a bad thing and happiness a good
one. You just know that one is something you should avoid and the other is worth seeking.

The AI needs at least a starting point. Based on this, it can later learn to judge the effects of its own actions in more
complex ways and scenarios.
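To make that "starting point" concrete, here's a minimal sketch (the action names, rewards, and numbers are all invented for illustration, not taken from any real AI library): the agent is born knowing only one rule, "seek positive feedback, avoid negative", and has to learn on its own which actions are which.

Code:
import random

# The agent's only innate rule: prefer actions with higher learned value.
actions = ["punch_wall", "eat", "sleep", "explore"]
value = {a: 0.0 for a in actions}  # learned judgement, starts neutral
ALPHA = 0.1                        # learning rate
EPSILON = 0.2                      # chance to try something new (curiosity)

def feedback(action):
    """Hypothetical environment: pain hurts, food feels good."""
    return {"punch_wall": -1.0, "eat": +1.0}.get(action, 0.0)

for step in range(1000):
    if random.random() < EPSILON:
        a = random.choice(actions)       # explore
    else:
        a = max(actions, key=value.get)  # exploit what it has learned
    r = feedback(a)
    value[a] += ALPHA * (r - value[a])   # update its judgement of that action

print(value)  # "eat" ends up near +1, "punch_wall" near -1

The agent never "knows" that pain is bad; it just ends up with a low value for the action that caused it, which is exactly the avoid/seek distinction you describe.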

Quote:
Sorry for ranting again grin , but in this case I'm sure the AI will still just behave randomly, simply because it wouldn't
give a shit about food or death; those have no meaning to it.


No, no. What I mean is that you can make it judge good or bad on its own (relative to itself, because good or bad is always relative to the viewpoint); the only question is how to accomplish this...
You can define that food gives energy, and you can define lack of energy as dying. The only thing missing is the sense of self-preservation; every organism on the planet has one. It needs one if it wants to survive...

IMHO, this has to be the only defined behavior, or more like a thought: the organism has to protect its own integrity. This is the foundation of every other behavior, even the simplest ones. (Leave out love, or people who cut themselves... we're looking for something more like dog behavior here grin )
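Here's roughly what I mean, as a toy sketch (the Agent class and all the numbers are made up): nothing is labelled good or bad up front; the single built-in "thought" is that the reward for anything that happens is the change in the agent's own energy, i.e. its integrity.

Code:
class Agent:
    def __init__(self):
        self.energy = 100.0  # the integrity the agent has to protect

    def reward(self, energy_before):
        # The one innate rule: whatever preserves or raises energy is "good".
        return self.energy - energy_before

agent = Agent()
before = agent.energy
agent.energy -= 5.0          # e.g. the cost of wandering around
agent.energy += 20.0         # e.g. found food (food == energy, as defined above)
print(agent.reward(before))  # +15.0: that sequence of actions was "good"

Every other behavior, food-seeking, fleeing, whatever, could then be learned on top of this one signal.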

Quote:

However... you could program the AI to pick something totally random for it to seek. You're still setting a goal, but the
outcome might be interesting (or just boring, I don't know).

Unfortunately, this would result in a regular AI, because it will only pick random goals out of a fixed set of goals. It won't be able to construct new goals on its own, and the goals will be tailored to this exact environment.
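Something like this is all you'd get, which is exactly the problem (goal names invented for the example):

Code:
import random

GOALS = ["find_food", "patrol", "hide"]  # fixed set, hard-coded by the programmer
goal = random.choice(GOALS)              # random, yes, but never a *new* goal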
Quote:

But I guess you had something more basic in mind... being able to walk around, some food lying here and there, and
eventual starvation if you don't find enough to eat.

That's my point, but basic in results and complex in its behavior. If you construct an A.I. to preserve itself in an environment where survival is key, it'll just sit around in an environment where it can do all sorts of things but where its survival is out of the context, like analyzing network traffic or a customer database. But I guess those aren't really providing any feedback at all. Maybe this A.I. would work if you plugged it into a car navigation system or an RC helicopter...
A true A.I. (even a stupid one) will be able to operate outside of its originally defined context, its 'work' environment.
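For the basic scenario you describe, even something this small would do as a first environment (all numbers are made up, and the agent only wanders randomly, no learning hooked up yet):

Code:
import random

food = {random.randint(0, 50) for _ in range(10)}  # food scattered in a 1-D world
pos, energy, steps = 0, 30.0, 0

while energy > 0:                   # death == running out of energy
    pos += random.choice([-1, 1])   # walk around
    energy -= 1.0                   # living costs energy
    if pos in food:
        food.discard(pos)
        energy += 10.0              # eating == gaining energy
    steps += 1

print("starved after", steps, "steps")

Plug the self-preservation reward from above into the action choice and you'd have the whole experiment.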

But I guess everything in real life can die and needs energy... unlike the Google bot grin Maybe that's the key to a real A.I.: "self-preservation" in its most original form?


Extensive Multiplayer tutorial:
http://mesetts.com/index.php?page=201