Genetic Algorithms (Includes Implementation)


What is it?

Just as biological neurons provided the inspiration for their artificial counterparts, biological evolution has inspired the creation of genetic algorithms. Evolutionary computing provides an alternative to traditional search techniques and can find a solution to a problem “by selecting from a number of potential solutions…realising new solutions not previously considered”.

In biological evolution, a population of individuals in a species represents a generation; members of this population mate to produce a new generation with characteristics of both of its parents. This process allows the species to survive and adapt to changes in the environment, albeit very slowly.

Evolutionary computing is based on this process. Using this technique, it is possible to create software that adapts itself over time to find increasingly suitable solutions to a problem. The potential solutions represent the population of individuals. The solutions are crossed over to produce new solutions, which are then tested, with each new generation generally representing an improvement over the last.

Genetic Algorithms (GAs) are a specific type of evolutionary computing paradigm common in artificial life simulations. Solutions to problems are encoded in a structure called a chromosome (or genome). The solution can be represented as binary digits (bits) or in any other computer-readable format (e.g. the floating-point weights of a neural network). As in biological evolution, new generations are created by mating two chromosomes and (possibly) applying mutation to produce a new offspring, and consequently a new solution.

Crossover and Mutation.

Chromosomes A and B are mated to produce an offspring. The crossover point can be dynamically selected or pre-determined; in the diagram, the first block of A is crossed over with the second and third blocks of B. This produces a new offspring with characteristics of both parents. Mutation is not usually performed after every crossover; rather, it is applied based on a mutation rate. During mutation, one or more digits are selected and changed in some manner, e.g. if the chromosome is encoded as binary digits, the selected bits are flipped.
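To make this concrete, a minimal sketch of single-point crossover and bit-flip mutation for binary chromosomes might look like the following C#. The class and method names here are my own illustration, not code from the project linked above:

using System;

public static class BinaryGa
{
    private static readonly Random Rng = new Random();

    // Single-point crossover: the offspring takes the head of parent A
    // and the tail of parent B.
    public static int[] Crossover(int[] parentA, int[] parentB)
    {
        int point = Rng.Next(1, parentA.Length); // random crossover point
        int[] child = new int[parentA.Length];
        for (int i = 0; i < parentA.Length; i++)
        {
            child[i] = i < point ? parentA[i] : parentB[i];
        }
        return child;
    }

    // Bit-flip mutation: each bit is flipped with probability mutationRate.
    public static void Mutate(int[] chromosome, double mutationRate)
    {
        for (int i = 0; i < chromosome.Length; i++)
        {
            if (Rng.NextDouble() < mutationRate)
            {
                chromosome[i] = 1 - chromosome[i];
            }
        }
    }
}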

A general process for a GA is shown in the diagram below.

A general genetic algorithm process uses the following steps: initialization, evaluation, selection, crossover, and mutation.
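To tie these steps together, below is a small, self-contained toy example (again my own illustrative C#, not project code) that runs this loop to maximise the number of 1-bits in a chromosome, reusing the BinaryGa helpers sketched above. Its Select method implements the fitness proportionate selection discussed next:

using System;
using System.Collections.Generic;
using System.Linq;

public static class OneMaxDemo
{
    private static readonly Random Rng = new Random();

    // Evaluation: fitness is the number of 1-bits in the chromosome.
    private static int Fitness(int[] chromosome) => chromosome.Sum();

    // Fitness proportionate (roulette wheel) selection.
    private static int[] Select(List<int[]> population, int totalFitness)
    {
        int slice = Rng.Next(totalFitness + 1);
        int running = 0;
        foreach (int[] individual in population)
        {
            running += Fitness(individual);
            if (running > slice)
            {
                return individual;
            }
        }
        return population[population.Count - 1];
    }

    public static void Main()
    {
        const int popSize = 50, geneCount = 32, generations = 100;

        // Initialisation: a random population of bit strings.
        var population = new List<int[]>();
        for (int p = 0; p < popSize; p++)
        {
            int[] chromosome = new int[geneCount];
            for (int g = 0; g < geneCount; g++)
            {
                chromosome[g] = Rng.Next(2);
            }
            population.Add(chromosome);
        }

        for (int gen = 0; gen < generations; gen++)
        {
            int totalFitness = population.Sum(Fitness);
            var nextGeneration = new List<int[]>();
            while (nextGeneration.Count < popSize)
            {
                // Selection, crossover and mutation produce each offspring.
                int[] child = BinaryGa.Crossover(Select(population, totalFitness),
                                                 Select(population, totalFitness));
                BinaryGa.Mutate(child, 0.01);
                nextGeneration.Add(child);
            }
            population = nextGeneration;
        }

        Console.WriteLine("Best fitness: " + population.Max(Fitness));
    }
}

Note that this toy example replaces each generation wholesale, as in the diagram; the predator/prey code later in this post instead replaces individuals one at a time as they die, a steady-state variant of the same idea.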

There are a number of methods for selecting individuals for crossover and mutation; however, fitness proportionate selection (also known as roulette wheel selection) is commonly used in artificial life simulations due to its parallels with nature. Using this method, the fitter an individual, the higher its probability of being selected; even so, the fittest individual is not guaranteed to be chosen.
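For example, if three individuals have fitness scores of 5, 3 and 2, their selection probabilities are 5/10 = 0.5, 3/10 = 0.3 and 2/10 = 0.2: the fittest individual is merely the most likely to be picked, not a certainty.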

The problem being solved by a GA is defined by its fitness function. Individuals are evolved to have the best fitness values possible. The fitness function evaluates the relative performance of candidate solutions and normally returns a single numerical value, which allows the rest of the algorithm to remain problem-independent. There is no general fitness function; it must be tailored to each application.
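In the predator/prey simulation below, for example, a predator's fitness is simply the length of time it managed to stay alive.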


Part of the main genetic algorithm loop is included below. This loop was responsible for evolving the neural networks of the predators in a predator/prey environment. The full project can be downloaded from the GitHub link at the top of the page (unfortunately the code was written years ago and little effort went into making it readable or extendable).

public void Update(GameTime gameTime, ref List<Prey> prey, ref List<Predator> pred)
{
    // Loop through all entities that have recently been killed/removed from the simulation.
    for (int i = 0; i < pred.Count; i++)
    {
        if (!pred[i].IsAlive)
        {
            // Add the predator's fitness score (time the predator was alive) to the
            // overall fitness score for this genetic pool.
            CalculateTotalFitness();

            // Select two parents. Parents with a higher fitness score are more likely
            // to be selected.
            MovingAgent parentOne = FitnessProportionateSelection();
            MovingAgent parentTwo = FitnessProportionateSelection();

            // Combine the two parents' neural networks to create an offspring.
            CreateAgentFromCrossover(parentOne, parentTwo, pred, i);
        }
    }
}

private void CalculateTotalFitness()
{
    totalFitnessScore = 0;
    for (int c = 0; c < predPool.Count; c++)
    {
        totalFitnessScore += predPool[c].TimeAlive;
    }
}

// Return the selected entity. A higher fitness score means a higher chance of selection.
private MovingAgent FitnessProportionateSelection()
{
    float randomSlice = Utilities.RandomMinMax(0, totalFitnessScore);
    MovingAgent chosenAgent = null;
    float fitnessTotal = 0;
    for (int i = 0; i < predPool.Count; i++)
    {
        fitnessTotal += predPool[i].TimeAlive;
        if (fitnessTotal > randomSlice)
        {
            chosenAgent = predPool[i];
            break;
        }
    }
    return chosenAgent;
}

private void CreateAgentFromCrossover(MovingAgent parentOne, MovingAgent parentTwo,
    List<Predator> pred, int i)
{
    NeuralNet neuralNetwork = CrossOver(parentOne, parentTwo);
    pred[i] = new Predator(predTexture, neuralNetwork);

    // Apply mutation sometimes, based on the mutation chance.
    if (Utilities.RandomMinMax(0, 1) < Utilities.MutationChance)
    {
        pred[i].Mutate();
    }
}

// Combine the parents' neural networks to produce an offspring.
private NeuralNet CrossOver(MovingAgent parentOne, MovingAgent parentTwo)
{
    NeuralNet neuralNet = new NeuralNet(false);
    List<float> newWeights = new List<float>();
    List<float> parentOneWeights = parentOne.GetNetworkWeights();
    List<float> parentTwoWeights = parentTwo.GetNetworkWeights();

    // Get a random crossover point (between 0 and the number of weights).
    int crossOverPoint = (int)Utilities.RandomMinMax(0, parentOneWeights.Count);

    // Generate the new neural network using a combination of both parent nets:
    // weights before the crossover point come from parent one, the rest from parent two.
    for (int i = 0; i < crossOverPoint; i++)
    {
        newWeights.Add(parentOneWeights[i]);
    }
    for (int i = crossOverPoint; i < parentOneWeights.Count; i++)
    {
        newWeights.Add(parentTwoWeights[i]);
    }

    neuralNet.SetWeights(ref newWeights);
    return neuralNet;
}

Each agent in the simulation has a neural network that is mutated like so:

public void Mutate()
{
    // Create a copy of the current neural network's weights.
    List<float> weights = GetNetworkWeights();

    // Loop through all weights associated with the network.
    for (int i = 0; i < weights.Count; ++i)
    {
        // Occasionally add a small random offset to a weight.
        if (Utilities.RandomMinMax(0, 1) < Utilities.MutationRate)
        {
            weights[i] += Utilities.RandomMinMax(-1, 1) * Utilities.MaxPerturbation;
        }
    }
    neuralNet.SetWeights(ref weights);
}
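Here Utilities.MaxPerturbation caps how far a single mutation can shift any one weight, so a mutated network stays close to its parent rather than jumping to a random point in the search space.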
