Simulating and Studying Ecosystems, Genetics & Neural Networks (Part 2)

Following on from Part 1 of this series!

So, the first step along this journey was to build a system that lets neural networks drive the creature behaviour in my little ecosystem simulator!

What follows are my findings from building a neural net myself; they might be inaccurate, here be dragons, etc.!

So what, fundamentally, is a Neural Network? Put simply, it’s a simulation/abstraction of how we think brain cell structures communicate. Brains are made up of large groups of cells called neurons, which communicate with each other via long connecting strands called axons. Neurons receive impulses from other neurons by “listening” with strands called dendrites. A Neural Network is a collection of neurons interconnected in this way.
So! Onto the business of simulating them! Let’s see what the internet has to say abou-

https://upload.wikimedia.org/wikipedia/commons/6/60/ArtificialNeuronModel_english.png

http://www.bogotobogo.com/python/scikit-learn/images/NeuralNetwork1/NN-with-components-w11-etc.png

https://www.intechopen.com/source/html/759/media/image11.jpeg

help


Okay, how about we approach this from what we know: what a Neural Network is, and what it’s trying to accomplish? Enter that illustrious, ingenious piece of software that is a hero to all: MSPaint.

So we have our creature, right?  And its brain.

The creature’s brain is made up of neurons.

And the neurons are connected by axons. Note how not all neurons are connected together! Some are only connected to a couple of others, and some to many.
Great, but we’ve already established all this beforehand. How do we simulate a neuron, or an axon?

Let’s say a neuron has an activity level, or an excitement value. A neuron’s excitement is how much energy is currently within that specific neuron at one instant of time. We can easily represent this in code with a floating point value ranging from 0.0f to 1.0f, and we’ll use a frame of execution as our time-frame.
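To make that concrete, here’s roughly what I mean (a minimal sketch – the names are just illustrative, and a neuron’s entire state really is one float):

```cpp
// A neuron's entire state: one "excitement" value in the 0.0f..1.0f
// range, updated once per frame.
struct Neuron
{
    float energy = 0.0f; // 0.0f = completely idle, 1.0f = fully excited
};
```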

Now, each neuron does something with its energy, each frame (or instant of time). This is the neuron’s behaviour, and there’s probably a formula or something that tells us exactly what this is, but I’m going to go ahead and ignore that for now in favour of doing something a bit more transparent!

Let’s start with something really basic: an accumulator. Each frame, the neuron gradually increases its energy level. When it gets all the way to 1.0f, it switches back off, dropping to 0.0f. Not hugely exciting, but it becomes more important once we start thinking about axons again.
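Using the Neuron from above, the accumulator might look something like this (RATE is an arbitrary charge speed I’ve made up for illustration):

```cpp
const float RATE = 0.01f; // energy gained per frame (arbitrary value)

// Charge up a little each frame; on reaching the top, switch back off.
void UpdateAccumulator(Neuron& n)
{
    n.energy += RATE;
    if (n.energy >= 1.0f)
        n.energy = 0.0f;
}
```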

Let’s link a few neurons together in a line. And now, instead of simply increasing its energy every frame, a neuron will only increase its energy when the neuron linked to its left has an energy level at or above 0.5f. Oh, and for kicks, the leftmost neuron gets its excitement from some arbitrary outside value. This is our input neuron, so it represents incoming information – like an eye cell, transmitting the overall light level of the world to the rest of the brain! For now, we’ll make it a constant value of 1.0f.
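Here’s a sketch of that three-in-a-line chain, reusing the Neuron and RATE from before (it updates left-to-right in a single pass, which is good enough for illustration):

```cpp
#include <vector>

void UpdateChain(std::vector<Neuron>& chain)
{
    chain[0].energy = 1.0f; // the "eye": pinned to a constant value for now

    // Every other neuron only charges while its left-hand neighbour
    // is at or above the 0.5f threshold.
    for (std::size_t i = 1; i < chain.size(); ++i)
    {
        if (chain[i - 1].energy >= 0.5f)
        {
            chain[i].energy += RATE;
            if (chain[i].energy >= 1.0f)
                chain[i].energy = 0.0f;
        }
    }
}
```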

Already, their behaviour together is more complex. The neuron at the far right will fill up twice as slowly as it did before, pausing at around 0.5f while it waits for the middle neuron to loop back round and climb above 0.5f again.

So, imagine then, what would happen if the neurons were arranged like this, where we have two “eye” neurons, two “thinking” neurons, and two “output” neurons? Conventionally, these distinct groups of neurons are called layers, and these are the input, hidden, and output layers.
Take note of how the axons are positioned here. You can see that the bottom hidden neuron is connected to both input neurons, but the top one is only connected to one of the inputs. Note the similar situation for the output neurons too. This kind of non-uniform connection is something found in nature – we’re not trying to build the best possible brain here, we’re trying to build something sort of organic! Additionally, non-uniform structure is how we end up with more interesting behaviours; connecting every neuron to every other would simply blur everything into one big mish-mash.
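One simple way to represent this kind of non-uniform wiring is to give each neuron a list of the neurons it listens to. Here’s a sketch matching the diagram’s layout (the exact indices and connections are illustrative):

```cpp
struct WiredNeuron
{
    float energy = 0.0f;
    std::vector<int> inputs; // indices of the neurons this one listens to
};

// Two inputs (0, 1), two hidden (2, 3), two outputs (4, 5).
std::vector<WiredNeuron> brain = {
    { 0.0f, {} },       // 0: input ("eye")
    { 0.0f, {} },       // 1: input ("eye")
    { 0.0f, { 0 } },    // 2: hidden, hears only input 0
    { 0.0f, { 0, 1 } }, // 3: hidden, hears both inputs
    { 0.0f, { 2 } },    // 4: output, hears only hidden 2
    { 0.0f, { 2, 3 } }, // 5: output, hears both hidden neurons
};
```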

So, a neuron connected with two axons? What does our humble accumulator do? Well, we could take the average value of the two input neurons and test that against our threshold, or we could accumulate once per input that’s over the threshold. It’s really quite open for us to experiment with. Again, there’s probably a “correct” way to do this, but I’m opting for the latter approach.
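In code, the “accumulate once per input over the threshold” option might look like this:

```cpp
// One accumulation per excited input: a neuron hearing two excited
// neighbours charges twice as fast as one hearing a single neighbour.
void UpdateWired(std::vector<WiredNeuron>& brain, int index)
{
    WiredNeuron& n = brain[index];

    for (int src : n.inputs)
    {
        if (brain[src].energy >= 0.5f)
            n.energy += RATE;
    }

    if (n.energy >= 1.0f)
        n.energy = 0.0f;
}
```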

Now, imagine if we expanded the capabilities from basic accumulators to things like addition, subtraction, multiplication… even trigonometry? Then make sure we have a random assortment of those behaviours on a load of connected neurons, and bam! We’d get some very interesting neuron behaviours indeed!
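As a rough sketch, each neuron could be assigned one of a handful of operations applied to the energy of each excited input (the exact operations here are my own stand-ins for the sort of thing I mean):

```cpp
#include <cmath>

enum class Behaviour { Accumulate, Add, Subtract, Multiply, Sine };

// Combine a neuron's current energy with one excited input's energy.
// Results still get wrapped back to 0.0f on reaching 1.0f, as before
// (clamping at the bottom is left out for brevity).
float Apply(Behaviour b, float energy, float input)
{
    switch (b)
    {
        case Behaviour::Accumulate: return energy + RATE;
        case Behaviour::Add:        return energy + input * RATE;
        case Behaviour::Subtract:   return energy - input * RATE;
        case Behaviour::Multiply:   return energy + energy * input * RATE;
        case Behaviour::Sine:       return 0.5f + 0.5f * std::sin(energy + input);
    }
    return energy;
}
```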
And that’s what I did today!

Here are some gifs of some simple neural nets in my engine. I’ve hooked the mouse X and Y coordinates up to input neurons in a couple of them, so you can see the mouse affecting the way the neural network responds.

In this last gif, the top four output neurons push the green circle in one of four directions. Already, we can see organic-looking behaviour emerging from randomly assembled networks!
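For the curious, here’s a sketch of how that last setup might be wired up (the indices and SPEED value are made up for illustration):

```cpp
// Four output neurons nudge the circle in the four cardinal directions,
// proportional to how excited each one is.
void PushCircle(const std::vector<WiredNeuron>& brain, float& x, float& y)
{
    const float SPEED = 2.0f;     // movement per frame at full excitement
    y -= brain[4].energy * SPEED; // output 4: up
    y += brain[5].energy * SPEED; // output 5: down
    x -= brain[6].energy * SPEED; // output 6: left
    x += brain[7].energy * SPEED; // output 7: right
}
```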