True A.I. - Personal Theories #453007
07/04/15 19:53
EpsiloN (OP) - User - Joined: Jan 2006 - Posts: 968

I don't know if anyone will ever read this, but here are my thoughts on the subject from the past 15 years.

15 years ago I came up with this:
Quote:
Analysis of a situation, and a change in the method of analysis depending on the outcome of the analysis of that situation.

This is my definition of a true A.I. This is what a machine needs to do in order to perform "thinking".

The difference with today's A.I. is that it just reacts to a particular situation depending on its programming. If you add a little learning it might appear smart, but it still reacts according to its programming.

To become a true A.I., today's A.I. needs to change its behavior depending on the situation, which, put simply, means acting differently each time it encounters the same situation. This is essentially learning: doing the same task better and better every time.

In order to accomplish this, it is necessary (I think) for it to re-program itself, meaning to change its 'conditional statements' on the fly, to change the way it reacts to a situation in such a way that it gets the most out of its capabilities.

And here is another thought I've been having for years, but started thinking about more in depth today.

I've always been wondering why everyone forgets stuff. But it's not exactly forgetting, because something lingers afterwards: a feeling that you have forgotten to do something... Or the feeling when you have somebody's name on the tip of your tongue, but it doesn't come out...
I came up with a theory about why this is happening.

The brain is large. Scientists believe it's divided into sections which... I don't know what the scientists are thinking, but I believe they just store memory in the neurons. (Imagine having a separate HDD with information for each section of a Coca-Cola producing machine... one for making bottles, one for pouring liquid, one for labeling, one for packaging...) But the capacity to store that information is limited, something around 1300 g of matter. What would happen if you wanted to store 40 years of video feed with an extremely large pixel count? Add to that audio and tactile input... it gets absurd to even think about this in terms of bytes.
And what about new data? If you've filled up your brain by the time you're 25 (when the brain stops growing), what happens after that? No new neurons are produced...
Recently Google developed a way to store more information in less space: WebP images. It predicts the next segment and saves only the difference between the actual data and the prediction. Minimalistic...
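
To make the prediction-plus-difference idea concrete, here is a minimal sketch (Python, with a deliberately trivial predictor; it is not how WebP is actually implemented, just the principle):
Code:
# Minimal sketch of prediction-plus-residual storage. The predictor here is
# deliberately trivial ("the next value equals the previous one"); real codecs
# use much smarter predictors, but the principle is the same.

def encode(samples):
    """Store only the difference between each sample and its prediction."""
    residuals = []
    prev = 0
    for s in samples:
        prediction = prev              # naive predictor: "same as last time"
        residuals.append(s - prediction)
        prev = s
    return residuals

def decode(residuals):
    """Rebuild the original samples from the stored residuals."""
    samples = []
    prev = 0
    for r in residuals:
        s = prev + r                   # prediction + stored difference
        samples.append(s)
        prev = s
    return samples

data = [100, 101, 103, 103, 104, 110]
enc = encode(data)                     # [100, 1, 2, 0, 1, 6] - mostly small numbers
assert decode(enc) == data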

So, my further thought on that subject is: what if the brain actually saves data in such a way that one neuron holds memory for more than one 'byte', or for more than one memory...
Depending on which neuron comes before it (if you know anything about neural networks), it might produce different results... And I'm not talking just about different stimulation and on/off switches. I'm talking about saving mixed-up memories in the same set of neurons, depending on where that information came from (from which other group of neurons or sensors). This way you get a much more efficient system for memory storage, but, again, it produces those 'tip of the tongue' experiences, or feelings that you're forgetting something, because only a fraction of it remains, notifying the rest of the 'memory block' that something was there. I don't know what happens after that, the actual process of recalling this memory, but it sounds reasonable for our small brains to act this way in order to save that much data.

I hope I've given you at least something to think about while sitting on the toilet grin

I can't wait to see a real A.I. ... it's exactly what the world needs... now...

PS: Oh, and by the way, these thoughts give a slight hint as to why humans are using 5% of their brain at any moment... You're remembering something particular, no matter if it's 3 times 2, what you did on September 6th last year, or calculating the wind speed for a sniper shot (which is essentially just like remembering...). You're using the exact piece of your brain that you need at this moment, and nothing more. The rest might be a hug from your mother, or you pissing your bed at 7 laugh Plus, it consumes precious energy from your organism... energy that you might need for calculating the gravity pull on your sniper bullet after the wind...

Anyone have any thoughts?

Feel free to 'express' yourself grin

Last edited by EpsiloN; 07/04/15 20:34.

Extensive Multiplayer tutorial:
http://mesetts.com/index.php?page=201
Re: True A.I. - Personal Theories [Re: EpsiloN] #453009
07/04/15 22:47
Kartoffel - Expert - Joined: Jun 2009 - Posts: 2,210 - Bavaria, Germany

Quote:
To become a true A.I., today's A.I. needs to change its behavior depending on the situation, which, put simply, means acting differently each time it encounters the same situation.

I don't think it is that simple, to be honest. For instance, the AI needs some kind of feedback about how "right" or "wrong" the actions it performs are. By having it seek "right" behaviour and avoid "wrong" actions, it can at least develop in some direction. If it doesn't have any feedback like this, it will just continue to act randomly without being able to form any kind of distinct behaviour.

Edit: to put this into practice:
If you want an AI-controlled actor to follow another actor, you provide it with different things it can output (movement keys, in this example) as well as inputs (the positions of both actors should be enough). If it gets closer to the target, tell the AI that its actions are right; if it's getting further away, tell it that it's behaving in the wrong way.

However... for this to work, the AI needs to be capable of understanding how the other actor behaves and how its outputs ("pressing movement keys") affect the inputs (the actor positions).
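
To make that concrete, here is a toy sketch of that follow-the-actor feedback loop (Python, one-dimensional world, made-up numbers; a sketch of the principle, not Gamestudio code):
Code:
import random

# Toy 1D version of the "follow the other actor" idea: the only teaching signal
# is whether the last action reduced the distance to the target.

actions = [-1, 0, +1]                    # the "movement keys": left, stay, right
value = {a: 0.0 for a in actions}        # how promising each action currently looks

agent_pos, target_pos = 0.0, 200.0       # target far enough away that it is never reached here

for step in range(100):
    # mostly pick the action that has looked best so far, sometimes explore
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: value[x])

    old_dist = abs(target_pos - agent_pos)
    agent_pos += a
    new_dist = abs(target_pos - agent_pos)

    feedback = 1.0 if new_dist < old_dist else -1.0   # "right" / "wrong", nothing more
    value[a] += 0.1 * (feedback - value[a])           # nudge the estimate toward the feedback

print(value)   # the "move toward the target" action ends up rated highest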

Last edited by Kartoffel; 07/04/15 22:59.

POTATO-MAN saves the day! - Random
Re: True A.I. - Personal Theories [Re: Kartoffel] #453035
07/05/15 20:29
EpsiloN (OP) - User - Joined: Jan 2006 - Posts: 968

That's the problem I'm currently facing...

You can't provide it with feedback about its behavior, or it won't be a true A.I. You must provide it with feedback ONLY about its own behavior changing the situation... It must be able to 'decipher' the change in the situation and judge whether its action was in the right or wrong direction...

So the feedback can be constant; whether it's a whole picture (a video feed) or just numbers doesn't matter, and it needs a way to judge whether the change in the situation brings it closer to the goal at hand while performing its current action. (Plus, as always, it needs a note of randomness to experiment with new actions.)

Let's try a little example, based on your post. And let's consider in this example a Counter-Strike bot (we're all familiar with those).
You can begin by building a simple AI that can move and shoot, and give it the goal of killing the enemy bot.
Even if you don't give it the knowledge of what movement is, it can experiment with its input channels and, based on the distance, judge whether it's getting closer to its goal.
This is the simplest form of 'learning' AI, but it only learns how to move.
People gave the advanced bots programming to hide or jump out from behind cover, but this has to be left outside of the programming; the A.I. needs to come to those things on its own.
Imagine that's all the bot learns and knows from its first round. Now it also knows where it met the enemy and how well that encounter went.
It can take the last encounter into account when it meets the bot again, perhaps in the same spot, and judge whether it got there quicker or slower and what the difference in its health is compared to the last encounter.
And it can somehow take this new information into account in the second round and perform better or worse, and make an even bigger assessment in the third round based on the first two, learning the patterns in the behavior of the other bot, like its most visited place, speed of shooting and speed of movement to that place, and basically draw on all this information when it needs to and judge how it should perform.

Lol, this sounds a lot more complicated than what I meant to describe laugh

I was thinking about Neural Networks today. Instead of if-else statements for behavior, Neural Networks play exactly the same role, but they can be modified easily, meaning they can be changed, unlike hard-coded conditions... The 'action decision' part of the A.I. brain could be sculpted with a Neural Network acting like a group of if-else conditions to decide between possible actions, but it still needs a separate module for modifying the Neural Network based on the input and output (the difference in the situation) of the system.

In this module lies the difficulty: making it judge how to modify the NN to perform better. It's like building an A.I. to guide another A.I., I guess grin And it'll probably take even more A.I. modules than those two to have something at least as smart as a bug or a bird.
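
One crude way to picture that guiding module, purely as a sketch under the assumption that "modifying the NN" can be as simple as random mutation plus keep-if-better (hill climbing):
Code:
import random

# Tiny fixed-size "policy network": a weight matrix maps inputs to action scores.
# The outer loop plays the role of the module that reshapes the network:
# mutate the weights at random, keep the change only if the measured outcome improves.

def make_net(n_in, n_out):
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def act(net, inputs):
    scores = [sum(w * x for w, x in zip(row, inputs)) for row in net]
    return scores.index(max(scores))          # index of the chosen action

def mutate(net, strength=0.2):
    return [[w + random.gauss(0, strength) for w in row] for row in net]

def fitness(net):
    # Stand-in for "play a round and measure the outcome". Here: reward nets that
    # pick action 1 whenever the first input is larger than the second one.
    score = 0
    for _ in range(50):
        a, b = random.random(), random.random()
        if act(net, [a, b]) == (1 if a > b else 0):
            score += 1
    return score

net = make_net(2, 2)
best = fitness(net)
for generation in range(300):
    candidate = mutate(net)
    f = fitness(candidate)
    if f >= best:                             # the "judging" part: did the change help?
        net, best = candidate, f

print("best score:", best, "out of 50")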

Any thoughts on that? How could an A.I. be built that modifies an N.N. which decides the actions to be performed?

PS: I don't have proof, but I believe the key to a true A.I. (at least a game A.I.) in all this lies in the input towards the A.I. Maybe giving it just a little info about positions and other object statuses is not enough. Perhaps it needs a full video feed and an object detection system in order to really 'feel' its environment, to have a correct interpretation of the distance to every point in its surroundings and of the possible paths (where it can and cannot go at every given time). But this would require an even greater amount of A.I. to detect, recognize and remember objects and forms...

And, by the way, this thing I'm talking about in THIS reply (not a really smart A.I., but an A.I. going in the right direction) will be more universal, not just acting as a CS bot, but working for other games or even programs with goals other than "kill that guy"... This will be a smart A.I. compared to the current A.I.s, which are really built for a specific task and cannot operate outside of their given environment (meaning you cannot take a Google bot and plug it into Battlefield or vice versa...). Could that be a foundation of true A.I.?


Extensive Multiplayer tutorial:
http://mesetts.com/index.php?page=201
Re: True A.I. - Personal Theories [Re: EpsiloN] #453036
07/05/15 20:40
WretchedSid - Expert - Joined: Apr 2007 - Posts: 3,751 - Canada

We already have neural networks that can be taught and which learn based on their input and the result. You can pretty much teach a robot to pick up a cup of water without having the water fly all over the test room.

If we are talking specifically about games, though, a learning AI isn't desirable. First of all because you need to spend a huge amount of time training your AI to be what you want it to be, but also because it becomes a nightmare to debug. A state machine can look and behave cleverly in a very deterministic way. A learning AI is definitely cool, but what happens if it picks up wrong habits and then starts to fall apart?

So, for a little bit of extra "wow" factor (that is, until your AI starts going rogue), you have to invest a huge amount of time and resources to even get it to perform the task you want. A true AI has to learn, much like a human. You can't put the true Google AI into Battlefield and expect it not to die a ton of times before the negative feedback of dying gets it into a position where it can take cover and eliminate threats.

Edit: Of course you also have to satisfy the performance requirements that a neural network and machine learning need. That might also be a problem for making this a reality.

But if you want some fun, you can certainly teach a computer how to play Battlefield today! You need to wire negative feedback to dying and positive feedback to killing and you are pretty much good to go. Over time the AI will pick up that shooting from cover gets it killed less often than standing in the open and shooting at the sky.
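
In practice that wiring is just a reward function layered over the game's events; a rough sketch (all event names and weights are made up for illustration):
Code:
# Rough sketch of "wire negative feedback to dying and positive feedback to killing".
# All event names and weights here are made up for illustration.

REWARDS = {
    "kill":         +1.0,
    "death":        -1.0,
    "dealt_damage": +0.1,
    "took_damage":  -0.1,
}

def reward_from_events(events):
    """Turn the game events of one time step into a single feedback number."""
    return sum(REWARDS.get(e, 0.0) for e in events)

# A learning loop would then look roughly like:
#   obs = game.observe()
#   action = agent.act(obs)
#   events = game.step(action)
#   agent.learn(obs, action, reward_from_events(events))

print(reward_from_events(["dealt_damage", "kill"]))   # 1.1
print(reward_from_events(["took_damage", "death"]))   # -1.1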

Last edited by WretchedSid; 07/05/15 20:43.

Shitlord by trade and passion. Graphics programmer at Laminar Research.
I write blog posts at feresignum.com
Re: True A.I. - Personal Theories [Re: EpsiloN] #453037
07/05/15 21:01
Kartoffel - Expert - Joined: Jun 2009 - Posts: 2,210 - Bavaria, Germany

Quote:
You can't provide it with feedback about its behavior, or it won't be a true A.I.

I still don't get how this could work. Humans get this kind of feedback all the time.

And like I've said, I can't think of a way that an AI like this could develop a distinct behaviour without any clues. It doesn't matter which information you feed it with: images, sounds, position values... it cannot connect the dots on its own (at least not consistently), because it would never have a clue which interpretation or reaction is right.

Just imagine a game where you have to touch something with your cursor, but on your screen there's just random noise, because you don't know how to interpret the information you're given. There are countless ways to interpret what you're seeing and you cannot know which one is right. Actually, you wouldn't even know that touching the thing is your objective. You also wouldn't know what a mouse, a cursor or movement itself is.

...so you don't know if you're getting closer to or further away from the object. You also wouldn't recognize if you've won the game. The only thing you could do is continue to act randomly without ever knowing what's going on.

Last edited by Kartoffel; 07/05/15 21:42. Reason: typos

POTATO-MAN saves the day! - Random
Re: True A.I. - Personal Theories [Re: WretchedSid] #453038
07/05/15 21:51
EpsiloN (OP) - User - Joined: Jan 2006 - Posts: 968

I can't even describe what I mean laugh

I'm not talking specifically about producing an A.I. capable of outsmarting you in a game. I'm speaking of even a stupid AI, but one that taught itself what to do and how to do it.
The difference is that you give it just an input channel and a goal. For example, the same AI could be plugged into Battlefield or a trading platform, and you just give it a way in (WASD or a "Buy" button), the current status of the given environment (player position or USD to BGN) and a goal to reach (a kill or +$0.5/h). It has to come up with the methods of reaching this given goal on its own, perhaps by analysing its environment over different periods of time, its own actions, and whether they move it in the right direction... (For example, this won't be expressed in the programming at all, but it has to have a way of building its own list of micro-goals, like "To kill a bot I must see it, to see it I must meet it, to meet it I must walk... and start walking", without you defining any of these things in the A.I.'s programming; you just provide it with a method for building its own logic plus input/output/goal. Meaning, you must provide an NN that doesn't represent ANYTHING and give the modelling A.I. the capability of modifying the NN so it 'represents' the micro-goals, and eventually it'll reach a state where every neuron serves the purpose of reaching one of the micro-goals... I hope this makes sense... It has to be vague. And it'll probably need to use a dynamic number of neurons, depending on the complexity of the micro-goal list, because every excessive neuron breaks this list.)
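
The "same A.I., different environment" idea boils down to a fixed contract between the agent and the world; here is a sketch of what that contract could look like (all names and the toy environment are hypothetical):
Code:
import random
from typing import List, Protocol

# Sketch of the "plug the same A.I. into Battlefield or a trading platform" idea:
# the agent only ever sees an observation, a list of allowed actions and a single
# progress number. Everything game- or market-specific stays behind this interface.

class Environment(Protocol):
    def observe(self) -> List[float]: ...      # current status (positions, prices, ...)
    def actions(self) -> List[str]: ...        # the "way in" (WASD keys, a "Buy" button, ...)
    def step(self, action: str) -> float: ...  # apply the action, return progress toward the goal

class GoalSeekingAgent:
    """Knows nothing about the environment except the interface above."""
    def __init__(self):
        self.scores = {}                       # how promising each action has looked so far

    def choose(self, env: Environment) -> str:
        acts = env.actions()
        if random.random() < 0.2:              # keep experimenting now and then
            return random.choice(acts)
        return max(acts, key=lambda a: self.scores.get(a, 0.0))

    def run(self, env: Environment, steps: int) -> None:
        for _ in range(steps):
            a = self.choose(env)
            progress = env.step(a)             # did this move us toward the goal?
            old = self.scores.get(a, 0.0)
            self.scores[a] = old + 0.1 * (progress - old)

# Tiny fake environment, just to show the same agent running somewhere;
# a real one would wrap a game, a broker API, or anything else.
class CookieClicker:
    def __init__(self):
        self.cookies = 0.0
    def observe(self):
        return [self.cookies]
    def actions(self):
        return ["click", "wait"]
    def step(self, action):
        gained = 1.0 if action == "click" else 0.0
        self.cookies += gained
        return gained

agent = GoalSeekingAgent()
agent.run(CookieClicker(), steps=100)
print(agent.scores)   # "click" ends up with a clearly higher score than "wait"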

I'm not at all concerned with budget, time or tech, because I'm just trying to reach an understanding of how such a machine could be built, how it could work... (And, by the way, you could eliminate the training time with a high-speed simulation, saving you some time.) If you know how to build something, it's easy to consider what's needed, but if you don't know what you're building, you can never guess what you'll need laugh

But what I'm really talking about is, like in your example, if you put a Google AI into Battlefield, it won't understand the environment it's in at all unless you specify that it CAN take cover. A true A.I. has to understand on its own that it can take cover and that it actually helps, what cover means to it in its own 'language'. It has to understand its environment on its own, with its own meaning... without you defining what a corner means or how it can be exploited... I don't think this can be done solely with NNs, or at least not with a single NN. A single NN for this would have to be more complex than what a human can sculpt, even over a lifetime. You can copy an entire brain if you like, but it'll never work, because you can't provide it with the stimuli a real human body has. It has a ton of feedback/input and output; that's the true problem of the most complex NNs built so far.

The NNs that we currently have model themselves, but that modelling is fixed; it is scripted by a programmer towards a particular goal and in a fixed environment. A true A.I. must have a mechanism for judging on its own whether it is getting towards its goal... A pattern-mimicking NN (probably the worst example ever) has its modelling behavior sculpted into it... It doesn't need to understand how to get to the desired goal, it just does...

I feel I'm going in circles laugh I feel I can't really express what I mean. I hope you're understanding all this.

PS: A long post and a lot of thoughts. If it seems random, sorry, I'm jumping from section to section while thinking laugh but I have the feeling I'm getting somewhere, even if the goal is on the other side of the universe.
* This has to be my longest post ever! grin


So, after all this, any thoughts on realising a judgemental A.I. mechanism for modelling an NN based on meaningless input/output and a goal? grin

Last edited by EpsiloN; 07/05/15 21:51.

Extensive Multiplayer tutorial:
http://mesetts.com/index.php?page=201
Re: True A.I. - Personal Theories [Re: EpsiloN] #453039
07/05/15 22:16
WretchedSid - Expert - Joined: Apr 2007 - Posts: 3,751 - Canada

I think you are misunderstanding neural networks here; you don't tell the computer what it can or can't do, you just give it feedback channels. Actions that it shouldn't do are punished with negative feedback and actions that it should do are rewarded with positive feedback.

You can teach a child like that as well: put it in an unknown environment with an electroshock collar for the negative feedback and something that injects a drug into it for the positive feedback. Tada, you are in prison now! But you might be able to teach the kid some new tricks that way before you go there laugh

But seriously, this is how you train humans and pets. Less extreme, but you give positive and negative feedback. Just like a pet or a human, the AI will try to receive positive feedback and mutate what it's doing to maximize just that. You don't need to tell the AI that it can crouch; it will figure that out on its own and associate it with the positive feedback of not dying immediately. Of course, if you can pass on some knowledge to it to begin with, it might die a couple of million times less before picking up the traits of moving, taking cover, shooting back, etc.

Last edited by WretchedSid; 07/05/15 22:18.

Shitlord by trade and passion. Graphics programmer at Laminar Research.
I write blog posts at feresignum.com
Re: True A.I. - Personal Theories [Re: WretchedSid] #453040
07/05/15 22:39
EpsiloN (OP) - User - Joined: Jan 2006 - Posts: 968

All of this is true, but I'm also proposing that it needs to judge the feedback on its own, whether it's positive or negative... to judge the outcome of its actions against the goal it needs to reach, and understand on its own whether this outcome of its actions is the desired outcome or the undesired one. It needs to make the judgement itself...

Following your example grin the child should determine on its own whether what it is doing is positive or negative. Otherwise you'll never get firestarters and parent-killers laugh

In your first line, if you reward a bot for shooting and punish it for running away, this will produce rambo-like behavior over and over. But following my first paragraph, you'll get a rambo for the first 40 deaths grin then he'll become a camping chicken, without you instructing any of this in the programming or feedback... And after the first 20 camp-kills, he'll go rambo again, cycling until he finds the best mix of rambo and camping chicken...
Just like a weak player on any MP server grin

Isn't this behavior more advanced? It learns from the feedback it gives itself, judging based on the input and the outcome... and the feedback it judges changes its input and outcome, making a self-sustaining mechanism (no human interaction or judgement at all).
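
One way to read "feedback it gives itself": the signal is not handed over by a designer each step; the agent derives it from the change it observes in its own situation. A toy sketch (the judging rule itself still has to come from somewhere, which is exactly the open question):
Code:
# Toy sketch of "self-judged" feedback: nobody labels individual actions; the agent
# compares its situation before and after and derives its own signal from the change.

def self_feedback(before, after):
    """Judge the outcome from the observed state change, not from an external teacher."""
    health_change = after["health"] - before["health"]
    kill_change   = after["kills"]  - before["kills"]
    return kill_change * 1.0 + health_change * 0.01

before       = {"health": 100, "kills": 0}
after_rambo  = {"health": 35,  "kills": 2}   # aggressive round: two kills, badly hurt
after_camper = {"health": 100, "kills": 1}   # cautious round: one kill, unhurt

print(self_feedback(before, after_rambo))    # 1.35
print(self_feedback(before, after_camper))   # 1.0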

That's what I meant...

It might be next to impossible, but another thought is to not provide a goal at all and leave the A.I. to set its own goal. But this requires a richer environment, one where the A.I. can die and has 'food' or something more, while again not instructing it that food is good or death is bad. I can't even imagine how this could be done... I'm sleepy laugh


Extensive Multiplayer tutorial:
http://mesetts.com/index.php?page=201
Re: True A.I. - Personal Theories [Re: EpsiloN] #453041
07/05/15 23:02
Kartoffel - Expert - Joined: Jun 2009 - Posts: 2,210 - Bavaria, Germany

Quote:
All of this is true, but I'm also proposing that it needs to judge the feedback on its own, whether it's positive or negative... to judge the outcome of its actions against the goal it needs to reach, and understand on its own whether this outcome of its actions is the desired outcome or the undesired one. It needs to make the judgement itself...

Well, I agree. But you need to somehow define wrong and right. No one can do this on their own. Children get taught about good and bad by their parents all the time. Feelings and emotions also contribute to this feedback: if it hurts, you most likely have done something wrong (punching the wall or whatever), and if you're feeling happy, the opposite might be the case.
This way you also get taught about wrong and right. But you never learned that pain is a bad thing and happiness a good thing. You just know that one is something you should avoid and the other is worth seeking.

The AI needs at least a starting point. Based on this, it can later learn to judge the effects of its own actions in more complex ways and scenarios.


Edit:
Quote:
It might be next to impossible, but another thought is to not provide a goal at all and leave the A.I. to set its own goal. But this requires a richer environment, one where the A.I. can die and has 'food' or something more, while again not instructing it that food is good or death is bad. I can't even imagine how this could be done... I'm sleepy laugh

Sorry for ranting again grin but in this case I'm sure the AI will still just behave randomly, simply because it wouldn't give a shit about food or death; those don't have a meaning. It would do whatever it likes to do - but since there's no goal, there aren't any things the AI would prefer doing (like not dying). Over time it would helplessly perform random actions until the game mechanics decide that the AI starved to death. ( sad end. frown )
(Or it will keep doing random stuff forever, if there's enough supply of food lying around everywhere which it somehow manages to eat.)

Edit 2: Totally forgot: the AI could also kill itself (if the game mechanics allow this). Even if the AI knew that this means no more helpless wandering around doing random things, it just wouldn't care, because there's simply no meaning to it.

But I guess you had something more basic in mind... being able to walk around, some food lying here and there, and eventual starvation if you don't find enough to eat.

(end of edit 2)

However... you could program the AI to pick something totally random for it to seek. You're still setting a goal, but the outcome might be interesting. (Or just boring, I don't know.)

Last edited by Kartoffel; 07/05/15 23:50. Reason: too many edits.

POTATO-MAN saves the day! - Random
Re: True A.I. - Personal Theories [Re: Kartoffel] #453047
07/06/15 07:05
EpsiloN (OP) - User - Joined: Jan 2006 - Posts: 968

Originally Posted By: Kartoffel
Well, I agree. But you need to somehow define wrong and right. No one can do this on their own. Children get taught about good and bad by their parents all the time. Feelings and emotions also contribute to this feedback: if it hurts, you most likely have done something wrong (punching the wall or whatever), and if you're feeling happy, the opposite might be the case.
This way you also get taught about wrong and right. But you never learned that pain is a bad thing and happiness a good thing. You just know that one is something you should avoid and the other is worth seeking.

The AI needs at least a starting point. Based on this, it can later learn to judge the effects of its own actions in more complex ways and scenarios.

Quote:
Sorry for ranting again grin but in this case I'm sure the AI will still just behave randomly, simply because it wouldn't give a shit about food or death; those don't have a meaning.


No, no. What I mean is, you can make it judge good or bad on its own (relative to itself, because good or bad is always relative to the viewpoint); the only question is how to accomplish this...
You can define that food gives energy, and you can define lack of energy as dying. The only thing missing is the sense of self-preservation; every organism on the planet has one. It needs to, if it wants to survive...

IMHO, this has to be the only defined behaviour, or more like a built-in thought: the organism has to protect its own integrity. This is the foundation of every other behavior, even the simplest ones. (Leave out love, or people who cut themselves... we're searching for something more like dog behaviour here grin )
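
A tiny sketch of "self-preservation as the only built-in drive": the agent's whole intrinsic feedback is whether its internal energy level improved; what "food" is would have to emerge from that (the action names and their effects are assumptions for illustration, not a worked-out design):
Code:
import random

# Sketch: the only built-in drive is keeping an internal "energy" level up.
# The agent has no idea what "food" means; it only notices what each action
# does to its energy, and prefers whatever keeps the energy from dropping.

actions = ["wander", "eat", "rest"]
effect  = {"wander": -2.0, "eat": +5.0, "rest": -0.5}   # hidden rules of the world

value = {a: 0.0 for a in actions}    # how each action has affected the energy so far
energy = 50.0

for step in range(500):
    a = random.choice(actions) if random.random() < 0.1 else max(actions, key=value.get)
    new_energy = max(0.0, min(100.0, energy + effect[a]))
    drive = new_energy - energy      # intrinsic feedback: did my energy improve?
    value[a] += 0.05 * (drive - value[a])
    energy = new_energy

print(value)   # "eat" ends up as the preferred action without ever being labelled "good"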

Quote:
However... you could program the AI to pick something totally random for it to seek. You're still setting a goal, but the outcome might be interesting. (Or just boring, I don't know.)

Unfortunately this would result in a regular AI, because it would only pick random goals out of a fixed set of goals. It wouldn't be able to construct new goals on its own, and the goals would be for this exact environment.
Quote:
But I guess you had something more basic in mind... being able to walk around, some food lying here and there, and eventual starvation if you don't find enough to eat.

That's my point, but basic in results and complex in behavior. If you construct an A.I. to preserve itself in an environment where survival is key, it'll just sit around in an environment where it can do all sorts of things but its survival is out of context, like analysing network traffic or a customer database. But I guess those aren't really providing any feedback at all. Maybe this A.I. would work if you plugged it into a car navigation system or an RC helicopter...
A true A.I. (even a stupid one) will be able to operate outside of its originally defined context, its 'work' environment.

But I guess everything in real life can die and needs energy... unlike the Google bot grin Maybe that's the key to a real A.I.: "self-preservation" in its most original form?


Extensive Multiplayer tutorial:
http://mesetts.com/index.php?page=201