We already have neural networks that can be trained and that learn from their input and the resulting feedback. You can pretty much teach a robot to pick up a cup of water without the water flying all over the test room.

If we are talking specifically about games though, a learning AI isn't actually desirable. First, you need to spend a huge amount of time training your AI to be what you want it to be, and second, it becomes a nightmare to debug. A state machine can look and behave cleverly in a completely deterministic way. A learning AI is definitely cool, but what happens when it picks up bad habits and its behavior starts to fall apart?
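To show why a state machine is so much easier to debug, here is a minimal sketch of one. The states, events, and transitions are hypothetical, not taken from any real game; the point is that every behavior change is an explicit, inspectable entry in a table.

```python
# Minimal sketch of a deterministic game-AI state machine.
# Every transition is explicit, so misbehavior is trivially traceable.
class SoldierAI:
    # (current state, event) -> next state; unlisted pairs keep the state.
    TRANSITIONS = {
        ("idle", "enemy_spotted"): "attack",
        ("attack", "under_fire"): "take_cover",
        ("take_cover", "enemy_down"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

ai = SoldierAI()
print(ai.handle("enemy_spotted"))  # attack
print(ai.handle("under_fire"))     # take_cover
print(ai.handle("enemy_down"))     # idle
```

If the soldier ever does something wrong, you know exactly which table entry to look at, which is the debugging advantage the paragraph above is getting at.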

So, for a little bit of extra "wow" factor (that is, until your AI starts going rogue), you have to invest a huge amount of time and resources just to get it to perform the task you want. A true AI has to learn, much like a human. You can't drop the actual Google AI into Battlefield and expect it not to die a ton of times before the negative feedback from dying gets it to a point where it can take cover and eliminate threats.

Edit: Of course you also have to satisfy the performance requirements that a neural network and machine learning bring with them. That might also be a problem for making this a reality.

But if you want some fun, you can certainly teach a computer how to play Battlefield today! You wire up negative feedback for dying and positive feedback for killing, and you are pretty much good to go. Over time the AI will pick up that shooting from cover gets it killed less often than standing in the open shooting at the sky.
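The reward wiring described above is essentially reinforcement learning. Here is a toy tabular Q-learning sketch under heavily simplified assumptions: two made-up states ("open", "cover"), two made-up actions, dying worth -1 and a kill worth +1. Nothing here reflects a real game; it only demonstrates that the feedback signal alone is enough for the agent to discover "take cover, then shoot".

```python
import random

# Toy Q-learning sketch: dying = -1, killing = +1, everything else neutral.
STATES = ["open", "cover"]
ACTIONS = ["shoot", "take_cover"]

def step(state, action):
    """Hypothetical environment: returns (next_state, reward)."""
    if action == "take_cover":
        return "cover", 0.0   # moving to cover is neutral
    if state == "open":
        return "open", -1.0   # shooting in the open gets you killed
    return "cover", 1.0       # shooting from cover scores a kill

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
rng = random.Random(0)

state = "open"
for _ in range(500):
    if rng.random() < epsilon:  # explore occasionally
        action = rng.choice(ACTIONS)
    else:                       # otherwise exploit current estimates
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    nxt, reward = step(state, action)
    best_next = max(q[(nxt, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = nxt

# The learned policy: get out of the open, shoot once in cover.
print(max(ACTIONS, key=lambda a: q[("open", a)]))   # take_cover
print(max(ACTIONS, key=lambda a: q[("cover", a)]))  # shoot
```

The agent is never told what cover is for; it just accumulates negative feedback for shooting in the open and positive feedback for shooting from cover, exactly the wiring the paragraph describes.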

Last edited by WretchedSid; 07/05/15 20:43.

Shitlord by trade and passion. Graphics programmer at Laminar Research.
I write blog posts at feresignum.com