

Alternate Neural Network Training Method: Real Life!

PeterOMalley
OK, so my title was a bit provocative, but here's what I'm going to do for my project (and hopefully it will work). (When we went around on Wednesday and explained our projects, I said basically this, but now I'd like to elaborate on it.) Training neural networks to do ANDs and ORs is all fine and good, but I feel that it misses the point, at least in terms of emergence. Neural networks show great potential for solving computing and AI problems, but I'd like to go somewhere different: I want to write a simulated world where the creatures are run by neural networks.

The inputs to the neural networks will be the "senses". Vision, for example, could be represented by two parameters: one for the distance of the nearest object in the line of sight, and another for its "color", where food would have one color, other creatures a second, and obstacles a third. (The distance and color would have to be normalized to numbers between zero and one, of course.) The outputs, then, could be actions: one output for whether to move forward or not, another for whether to turn left, right, or not at all, and maybe another to change the creature's own color.

How could such a network work? Well, start off with random weights, like normal, and let Darwin take over. Allow either asexual reproduction (clone yourself when your energy > X) or sexual reproduction (find a (willing?) mate and cross over genes, when you both have energy > X), and give some chance of mutation (in weights, links, or whatever; even allow new nodes for more processing!). Eventually, only successful creatures are left. I have reason to believe that this can work: two of my friends in high school did something similar, but they had to write their own network code, which I don't have to.
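To make the "let Darwin take over" part concrete, here's a minimal sketch of what mutation and crossover could look like if the network's weights are treated as one flat list. These function names and parameters are my own invention for illustration, not anything from Pyrobot:

```python
import random

def mutate_weights(weights, rate=0.1, scale=0.5):
    """Return a mutated copy of a flat list of weights: each weight has
    probability `rate` of being perturbed by Gaussian noise."""
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in weights]

def crossover(mom, dad):
    """Single-point crossover: take the first part of one parent's
    weights and the rest from the other."""
    point = random.randrange(1, len(mom))
    return mom[:point] + dad[point:]
```

Cloning would just be `mutate_weights` applied to a copy of the parent; sexual reproduction would be `crossover` followed by `mutate_weights`. Mutating the network's structure (adding links or nodes) is trickier and would need more than this.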
I do, however, have to learn the code, which shouldn't be too hard, and work out mutation algorithms, which may or may not be hard to write, as well as write the whole simulation itself! With graphics, of course, so that you can see the little buggers going around. I hope I'm not biting off more than I can chew, but I was talking to Josh Carp and I know he's interested in working on it too. We're not sure what the deal is with partnerships on this final project, but that's something else to figure out. So y'all let me know what you think, especially if you see any problems or have any ideas!


PeterOMalley

For anyone who's interested, I thought I'd give an update on the programming specifics of the project. (This assumes an understanding of neural nets comparable to what we covered in class.)

First, the "Brain" class. We're using the neural network implementation in Pyrobot for the brains of the critters. To start, the brains are given an input layer of 3 neurons, an output layer of 3 neurons, and a random number of intermediate layers with a random number of nodes in each. (Currently the number of layers is capped at 15, as are the nodes per layer, but that's just a number I picked out of the air.) The inputs are as follows: one is the health of the creature (divided by the max health to give a number from 0 to 1), and the other two are for vision. Vision is computed like so: the creature looks directly ahead of itself and receives two numbers. One corresponds to the distance of the nearest object, divided by the vision range (12 squares), or 1.0 if no object is seen; the other corresponds to the "color" of the object seen. Empty space is color 0, walls are color 1, food is 2, and other creatures are 3. The color is then divided by 3 to normalize the input. There are 3 outputs: one for going forward, one for turning left, and one for turning right. Since each output is a number from 0 to 1, the action taken corresponds to the highest of the three.

Next, the "Creature" class. The creature class is really simple: it has a brain and a health. It has a "cycle" method that is called once per turn; it requests the vision input from the World object (see below), feeds all the inputs to the brain, and receives its outputs. It then decides on an action based on the outputs and asks the World object to perform that action.

Finally, the "World" class. A World object holds a two-dimensional array of all space. Each entry in the array is either a 0 for empty, a 1 for a wall, a 2 for food, or a Creature object.
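As a concrete illustration, here's how the grid convention and the color normalization could look. This is a sketch with made-up names, not the actual project code:

```python
EMPTY, WALL, FOOD = 0, 1, 2

class Creature(object):
    pass  # stand-in; the real class holds a brain and a health

# a tiny 5x5 world: plain integers for terrain and food,
# Creature instances for the critters themselves
grid = [[EMPTY] * 5 for _ in range(5)]
grid[0][0] = WALL
grid[2][3] = FOOD
grid[4][4] = Creature()

def cell_color(cell):
    """Map a grid cell to the normalized 'color' fed to the vision
    input: 0 empty, 1 wall, 2 food, 3 creature, then divided by 3."""
    raw = 3 if isinstance(cell, Creature) else cell
    return raw / 3.0
```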
(Thanks to Python, the entries in the array don't all have to be the same type.) The world is in charge of giving sensory input to creatures on request--it has a vision function. It is also in charge of the execution of the program: it has a cycle function that calls the cycle function of every creature. (It also maintains an array of just the creatures for this purpose. This is redundant with the 2D array, but I prefer to waste memory and gain speed, since I think speed will be the limiting factor.) Finally, the world is also in charge of graphics, but you'll have to ask Josh about that.

What's next? Implement a reproduction algorithm! This is where the actual "training" comes in. To start, I was thinking of doing simple cloning: whenever a creature has energy > X, it clones itself, with mutations. This will be pretty straightforward, except for the mutation, which I hope to get done (well, a good portion of it) today in class. Speaking of which, time to go!
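P.S. For the curious, here's a quick sketch of how the cycle loop and the pick-the-highest-output rule described above fit together. All names are hypothetical and the brain is stubbed out with fixed outputs; the real thing would feed the senses through a Pyrobot network:

```python
class Creature(object):
    ACTIONS = ("forward", "left", "right")

    def __init__(self, brain=None):
        self.brain = brain   # would be a Pyrobot network in the real code
        self.health = 1.0

    def choose_action(self, outputs):
        # the action taken corresponds to the highest of the 3 outputs
        return self.ACTIONS[outputs.index(max(outputs))]

    def cycle(self, world):
        # sense -> think -> act; the brain is stubbed with fixed outputs
        outputs = [0.2, 0.7, 0.1]
        action = self.choose_action(outputs)
        self.health -= 0.01  # living costs a little energy each turn
        return action

class World(object):
    def __init__(self, creatures):
        self.creatures = creatures  # flat list kept alongside the 2D grid

    def cycle(self):
        # one turn: every creature senses, thinks, and acts once
        return [c.cycle(self) for c in list(self.creatures)]
```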