GML Neural Network Tutorial

Prologue:

In this tutorial series we are going to create a very simple neural network. (In a later tutorial series I will show you how to use it to do some interesting things.)

(WARNING: an advanced understanding of GML is recommended.) Creating a neural network in GameMaker Studio can be a little tricky if you're not familiar with the language or the concepts behind neural networks. But don't worry, we will walk through the process step by step in this tutorial. We will start by defining a few helper functions and constructors before putting them together to create our network. So let's get started! (Download the source code "Asset Pack"; it is free to use as you wish.)

Part 1: Layer Object

The first thing we need to define is our layer object. A layer is a group of neurons, which we will store in an array. The activate function calculates the output of each neuron in the layer for a given input.

function _layer() constructor{
    self._neurons = [];
    self.activate = function(input) {
        var result = [];
        // For all neurons, ...
        for(var i = 0, len = array_length(self._neurons); i < len; i++) {
            // Push a result value to an output array.
            result[i] = self._neurons[i].activate(input);
        }
        return result;
    }
}

Part 2: Sigmoid Activation Function

This function is used because it has a clean derivative and squashes the output into the range (0, 1). When plotted it forms an S-shaped curve: inputs approaching -inf and +inf are mapped toward 0 and 1, respectively. Conveniently, its derivative can be written in terms of its own output, sigmoid(x) * (1 - sigmoid(x)), which is exactly the output * (1 - output) term we will use later during backpropagation. This is pretty standard for most simple neural networks.

function sigmoid(input) {
    var _output = ( 1.0 / (1.0 + exp(-1.0 * input)) );
    return _output;
}
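
As a quick sanity check (the printed values are approximate), the function behaves like this:

show_debug_message(sigmoid(0));   // 0.5, the midpoint of the S curve
show_debug_message(sigmoid(10));  // ~0.99995, saturating toward 1
show_debug_message(sigmoid(-10)); // ~0.00005, saturating toward 0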

Part 3: Neuron Object

Each neuron in our network has a number of weights, a bias, an input, an output, and variables for backpropagation. The `activate` function calculates the output of the neuron for a given input.

function neuron() constructor{
    // Weights array.
    self.weights = [];
    self.bias = 1;
    // Variables for backpropagation.
    self.input = [];
    self.output = 0;
    self.deltas = [];
    self.previousDeltas = [];
    self.gradient = 0;
    self.momentum = 0.7;
    self.activate = function(input) {
        var sum = 0;
        // Cycle through each input and multiply it by a weight value.
        // bias + sigma(input * weight)
        var _len = array_length(input);
        // Lazily initialize the weights with random values in [-1, 1]
        // the first time the neuron sees an input of this size.
        if (array_length(self.weights) < _len) {
            for (var i = 0; i < _len; ++i) {
                self.weights[i] = random_range(-1, 1);
            }
        }
        for(var i = 0; i < _len; i++) {
            // Sum up the weights.
            sum += input[i] * self.weights[i];
        }
        // Add the bias.
        sum += self.bias;
        self.input = sum;
        // Sigmoid activation function.
        self.output = sigmoid(sum);
        return self.output;
    }
}
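
As a small, hypothetical usage sketch (the variable names here are illustrative, not part of the tutorial code), a lone neuron can be activated directly:

// Create a neuron and feed it two inputs; the weights are
// randomized on the first call to activate().
var n = new neuron();
var out = n.activate([1, 0]);
show_debug_message(out); // some value in (0, 1)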

Part 4: Setting up the Network

The first step is to create a constructor for the network. Here is the code for the network constructor.

function network(neurons, options = -1) constructor {
    randomize();
    self._layers = [];
    
    // Set default options.
    self.options = {
        iterations : 20000,
        learningRate : 0.1,
        momentum : 0.9,
        epsilon : 0.001
    };
    if (options != -1) {
        self.options = options;
    }
    if(neurons == undefined) neurons = [2, 1]; 
    
    /// more code will go here...
}

The constructor sets up a few defaults (number of iterations, learning rate, momentum, and epsilon) and initializes an empty array for the layers. If an options struct is passed in, it replaces the defaults wholesale, so supply every field; if no neuron layout is given, a default of [2, 1] is used.

Part 5: Initializing the Network

The following methods are all defined inside the network constructor, in place of the "more code will go here" comment.

self.initialize = function(neurons) {
    try {
        if(!is_array(neurons)) neurons = [neurons];
        self._layers = array_create(array_length(neurons));
        for(var i = 0, len = array_length(neurons); i < len; i++) {
            self._layers[i] = new _layer();
            for(var j = 0; j < neurons[i]; j++) {
                array_push(self._layers[i]._neurons, new neuron());
            }
        }
    } catch(e) {
        show_debug_message("Error initialize :{0}", e);
    }
};
self.initialize(neurons);

This function initializes the network by creating layers and neurons. The number of layers and the number of neurons in each layer are defined by the input array 'neurons'. For instance, if neurons = [2, 3, 1], the network would consist of three layers: the first layer with two neurons, the second layer with three neurons, and the third layer with one neuron.
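
For example, creating a network with that layout might look like this (net is a hypothetical variable name):

var net = new network([2, 3, 1]);
// Three layers were created, with 2, 3, and 1 neurons respectively.
show_debug_message(array_length(net._layers));              // 3
show_debug_message(array_length(net._layers[1]._neurons));  // 3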

Part 6: The Input Method

Next, we'll create an 'input' method for the network. This method will pass inputs through the layers of the network, effectively implementing forward propagation.

self.input = function(input) {
    try {
        if(!is_array(input)) input = [input];
        var result = [];
        array_copy(result, 0, input, 0, array_length(input));
        for(var i = 0, len = array_length(self._layers); i < len; i++) {
            result = self._layers[i].activate(result);
        }
        return result;
    } catch(e) {
        show_debug_message("Error input :{0}", e);
    }
};

The 'input' method runs each input through the network layers. It starts from the first layer and uses the output of the current layer as input for the next layer. It does so until the final layer, whose output it then returns as the network's output.
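
A minimal forward-pass sketch, assuming the 2-3-1 layout from above (net is a hypothetical variable name):

var net = new network([2, 3, 1]);
var result = net.input([0, 1]);
// result is an array with one element (one output neuron);
// until the network is trained, the value is arbitrary.
show_debug_message(result);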

Part 7: The Train Method

Now, we need to create a 'train' method. This method will train the network using backpropagation.

self.train = function(inputs, ideals) {
    var err = 1, index;
    var counter = 0;
    // Cycle through the inputs and ideal values in a shuffled order
    // to avoid the problem of "catastrophic forgetting".
    var qty = array_length(inputs) - 1;
    var avgerr = 1;
    var choices = [];
    for (var i = 0; i < qty + 1; ++i) {
        choices[i] = i;
    }
    var s = 0;

    // Keep training until the moving average of the error falls below
    // epsilon (with a minimum of 10 iterations), or the cap is reached.
    while (avgerr > self.options.epsilon || counter < 10) {
        index = (s++) % (qty + 1);
        var point = choices[index];
        err = self._iteration(inputs[point], ideals[point]);
        // Exponential moving average of the error.
        avgerr = avgerr * 0.90 + err * 0.10;
        // Reshuffle the training order at the end of each pass.
        if (index == qty) choices = array_shuffle(choices);
        if ((counter++) > self.options.iterations) break;
    }
    show_debug_message($"Error {err}, avg error: {avgerr}, iterations {counter}");
};

This method loops over the training data one example per iteration, adjusting the weights of the neurons based on the calculated error between the network's output and the desired output. Training stops once the moving average of the error drops below epsilon (after at least 10 iterations), or when the iteration cap is reached.
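
As a sketch of how training might be called, here is the classic XOR problem (the variable names are illustrative, not part of the tutorial code):

var net = new network([2, 3, 1]);
var inputs = [[0, 0], [0, 1], [1, 0], [1, 1]];
var ideals = [[0], [1], [1], [0]];
net.train(inputs, ideals);
// After training, the output should approach [1] for this input.
show_debug_message(net.input([1, 0]));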

Part 8: Backpropagation and Weight Adjustment

Finally, the 'train' method calls another method '_iteration', which performs one iteration of backpropagation and adjusts the weights.

self._iteration = function(input, ideal) {
    try {
        var i, j, k, error, sigErr;
        var _neuron, output;
        // Run the network to populate output values.
        self.input(input);
        // Begin backpropagation.
        sigErr = 0.0;
        // Starting from the last layer and working backward, calculate gradients.
        var len = array_length(self._layers);
        for (i = len - 1; i >= 0; i--) {
            /// we'll define this in a minute
        }
        // Once all gradients are calculated, work forward and calculate
        // the new weights. w = w + (lr * df/de * in)
        for (i = 0; i < array_length(self._layers); i++) {
            // we'll define this in a minute
        }
        return sigErr;
    } catch(e) {
        show_debug_message("Error iteration:{0}", e);
    }
};
}

This section of the code implements the backpropagation algorithm, which calculates gradients and updates the weights based on the calculated error. Let's break down the first for loop:

Here, len stores the total number of layers in the network. Then, a loop is initiated that iterates from the last layer (output layer) to the first layer (input layer) of the network. This is the standard direction for backpropagation, as we first calculate the error at the output and then propagate this information back to the preceding layers.

Within this loop, two main scenarios are handled:

if(i == len-1) {
    for(j = 0; j < array_length(self._layers[i]._neurons); j++) {
        _neuron = self._layers[i]._neurons[j];
        output = _neuron.output;
        _neuron.gradient = output * (1.0 - output) * (ideal[j] - output);
        sigErr += power((ideal[j] - output), 2);
    }
}

In this case, the gradients for the neurons in the output layer are computed. The gradient for an output neuron is the derivative of the activation function (output * (1.0 - output)) multiplied by the difference between the target value (ideal[j]) and the neuron's actual output. This is a direct application of the chain rule from calculus. For example, if a neuron outputs 0.8 when the target is 1.0, its gradient is 0.8 * 0.2 * 0.2 = 0.032. The total error for the network (sigErr) also accumulates the squared error of each output neuron, 0.04 in this example.

else {
    for(j = 0; j < array_length(self._layers[i]._neurons); j++) {
        _neuron = self._layers[i]._neurons[j];
        output = _neuron.output;
        error = 0.0;
        for(var k = 0; k < array_length(self._layers[i+1]._neurons); k++) {
            error += self._layers[i+1]._neurons[k].weights[j] * self._layers[i+1]._neurons[k].gradient;
        }
        _neuron.gradient = output * (1 - output) * error;
    }
}

For the hidden layers (the else branch), a neuron has no target value of its own. Instead, its error is the sum of the gradients of the neurons in the next layer, each weighted by the connection to this neuron; that weighted error takes the place of (ideal[j] - output) in the gradient formula. Now let's break down the second for loop.

This section of the code handles the update of the weights and biases in the network. This is the second main step in the backpropagation algorithm, where the calculated gradients are used to adjust the weights and biases in order to minimize the error in the network's output.

for(i = 0; i < array_length(self._layers); i++) {
    for(j = 0; j < array_length(self._layers[i]._neurons); j++) {
        _neuron = self._layers[i]._neurons[j];
        _neuron.bias += self.options.learningRate * _neuron.gradient;
        // ...
    }
}

Here, the code is iterating over all layers and neurons in the network. For each neuron, it updates the neuron's bias. This is done by adding a value proportional to the gradient. The proportionality factor, self.options.learningRate, is a hyperparameter that determines the size of the steps taken during the optimization process.

Next, the weights of each neuron are updated:

for(k = 0; k < array_length(_neuron.weights); k++) {
    _neuron.deltas[k] = self.options.learningRate * _neuron.gradient * (i > 0 ? self._layers[i-1]._neurons[k].output : input[k]);
    _neuron.weights[k] += _neuron.deltas[k];
    if(array_length(_neuron.previousDeltas) <= k) {
        _neuron.previousDeltas[k] = 0;
    }
    _neuron.weights[k] += self.options.momentum * _neuron.previousDeltas[k];
}

The value to be added to each weight is calculated as the product of the learning rate, the gradient of the neuron, and the output of the neuron connected by the weight. The logic (i > 0 ? self._layers[i-1]._neurons[k].output : input[k]) is used to handle the case of the first layer, where the inputs to the network are used instead of the output of a previous layer.

The code also includes a mechanism for momentum, which is another hyperparameter. Momentum helps accelerate gradient descent in the relevant direction and dampens oscillations, speeding up the training process.

Finally, the current deltas (weight changes) are stored as previous deltas for the next iteration:

_neuron.previousDeltas = [];
array_copy(_neuron.previousDeltas, 0, _neuron.deltas, 0, array_length(_neuron.deltas));

This is part of the momentum implementation, where the previous weight changes influence the current changes.

The calculated gradients are then used in the update step to adjust the weights of the network. This process is typically done multiple times, iterating over the whole dataset, until the network's performance (as measured by the total error sigErr) is satisfactory.

Finally, let's create a helper function for printing the output that rounds it enough to be legible.

function get_input_pretty(net, value){
    var _out = net.input(value);
    _out = array_map(_out, function(each_element){
        //if (each_element >= 0.9) return 1;
        //if (each_element <= 0.1) return 0;
        return round(each_element * 10) / 10;
    });
    return _out;
}
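
Assuming net is an already-trained network instance, usage looks like this:

var pretty = get_input_pretty(net, [0, 1]);
show_debug_message(pretty); // each output rounded to one decimal place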

Conclusion:

With these methods, you have a complete neural network that can be initialized, trained, and can make predictions. The training process involves several rounds of forward and backward passes through the network. This process gradually adjusts the weights of the neurons, allowing the network to make increasingly accurate predictions.

You now have a simple neural network in GameMaker Studio. There's a lot more to learn about neural networks and machine learning in general, but this should give you a good starting point.

Download

Download Now (name your own price)

Click download now to get access to the following files:

my_neuron.zip 60 kB

Comments

What is the license?

CC-BY-SA 3.0

Why not 4.0?

All the CC licenses are compatible with later versions of themselves, so you should add the license to the metadata for the asset pack(s):

"Edit asset pack->Metadata->Release Info->License for assets". It will increase the size of the audience that will be able to find your assets.


You also have code... So what license does it have? MIT? The metadata for the code license is in the same place...

I see, I will be modifying the metadata to have MIT for the code and CC-BY-SA 4.0 for the assets. Thank you.

Nice, can't wait to try it out!