Extending a Genetic Algorithm into a search problem

Create a good selection using a search problem



Last week I wrote a post about using Genetic Algorithms to select troops for a Warhammer 40K game. That post was part of a series about the best-selection problem (you can read them here, here and here). I wanted to shift the focus from that problem to search algorithms and how important they are for Artificial Intelligence. But a friend of mine pointed out that Genetic Algorithms can also be transformed into a search problem. In this post I will explain a bit about local search, and how we can transform the Genetic Algorithm we wrote last week into a search problem.

Local Search

The genetic algorithm works by crossing different selections and incorporating mutations at random, that is, not every crossover generates a mutation. But can we modify the algorithm to force these mutations? Sure! There is a parameter controlling the frequency of mutations. But is it possible to enhance the power of the algorithm to obtain better selections? Well, if we start with a random initial selection and perform a number of mutations on it, we obtain different selections, and we can improve the fitness if we keep mutating only the best selection on each iteration. This new algorithm is called the Hill Climbing algorithm. Why this name? Soon you will understand.

Hill climbing algorithm using the genetic functions Mutate and Fitness.

How will the algorithm work? It will perform different mutations and select the fittest descendant until either of these conditions applies:

  1. A target fitness value is reached: that is, we reached an acceptable value and the algorithm can stop.
  2. After a series of mutations, the value does not increase because we reached a fixed point.
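The loop described above can be sketched in a few lines. This is a minimal illustration, not the code from the original gist: the mutate and fitness functions below are hypothetical stand-ins (the real ones from the Genetic Algorithm post operate on W40K army selections; here a selection is just a list of numbers).

```python
import random

# Hypothetical stand-ins for the Mutate and Fitness functions from the
# genetic algorithm post; here a "selection" is just a list of numbers.
def fitness(selection):
    return sum(selection)

def mutate(selection):
    child = selection[:]
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

def hill_climbing(initial, n_mutations=10, n_iterations=10, target=None):
    current = initial
    for _ in range(n_iterations):
        # Generate several mutated children and keep only the fittest.
        children = [mutate(current) for _ in range(n_mutations)]
        best = max(children, key=fitness)
        if fitness(best) <= fitness(current):
            return current  # fixed point: no child improves on the parent
        current = best
        if target is not None and fitness(current) >= target:
            return current  # acceptable fitness reached, stop here
    return current
```

Note that the fitness of the returned selection can never be worse than the initial one, because a child only replaces its parent when it is strictly fitter.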

This algorithm belongs to a family of algorithms called Local Search Algorithms. Local search is a short-sighted search strategy: the algorithm sees only the current selection and its descendants, that is, its mutated child selections. It does not consider the path followed nor the mutations applied. It is short-sighted because it only considers the immediate value of its descendants, instead of the cumulative value over the visited path as other searches do. In other words, it only cares about selecting the fittest value, regardless of the number of iterations.

Fitness value for 10 different selections over 10 iterations.

Notice the illustration above: it represents the fitness values for 100 different combinations of a W40K army, obtained by performing 10 iterations of 10 mutations each. Sometimes a mutation yields an invalid selection according to the selection rules of W40K, which is why its value is 0. It looks like a mountain range, doesn't it? There are peaks and valleys, representing local maxima and minima, and plateaus, regions where the fitness value stays the same. A local search algorithm such as Hill Climbing will try to climb to the highest peak available in each iteration, hence its name, by looking for the fittest selection in the iteration. However, it may get stuck at a local maximum and return it without considering that there may be a better value, i.e. a global maximum, some iterations away, because it never considers a selection whose fitness is worse than the current one.

Because of this, Hill Climbing is an incomplete algorithm. This drawback is mitigated by occasionally allowing worse variations in the selection, that is, we let the search consider selections with a smaller fitness, hoping to reach a better selection in future iterations. This modification is the Simulated Annealing algorithm:

Notice that even if the new selection has a smaller value, there is a probability p of it being accepted. This probability may be defined in different ways, and it usually decreases as we approach the end of the iterations.
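The acceptance rule can be sketched as follows. This is an illustration under stated assumptions, not the original gist: fitness and mutate are hypothetical placeholders, and the temperature schedule (1/t, so late iterations rarely accept worse selections) is just one common choice in the AIMA style.

```python
import math
import random

def simulated_annealing(initial, fitness, mutate, n_iterations=100):
    current = initial
    best = current
    for t in range(1, n_iterations + 1):
        # Temperature shrinks with the iteration number, so worse
        # selections are accepted less often near the end.
        temperature = 1.0 / t
        candidate = mutate(current)
        delta = fitness(candidate) - fitness(current)
        # Always accept an improvement; accept a worse selection
        # with probability exp(delta / temperature).
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate
        if fitness(current) > fitness(best):
            best = current
    return best
```

Tracking the best selection seen so far means a late downhill move never costs us the answer: we can wander into worse regions and still return the fittest selection visited.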


Fitness for 10 iterations of the Hill Climbing Algorithm

We have created a very simple Hill Climbing algorithm using the fitness function from the Genetic Algorithm, and a Simulated Annealing algorithm using the same functions. For the probability of accepting a worse result, we have used (as AIMA suggests) a probability dependent on the iteration number, so that final iterations are less likely to accept worse selections. Results for the Hill Climbing algorithm are shown above. Notice that the fitness is always climbing.

Now, compare it to the Simulated Annealing:

Fitness for 10 iterations of the Hill Climbing and Simulated Annealing Algorithm

Well, Hill Climbing only keeps climbing until it reaches a local maximum. But Simulated Annealing is able to pick a worse selection, only to improve on it later.


Today we have seen some Local Search algorithms. This family of algorithms is interesting when the path is irrelevant to the solution, as in this case: we do not care which mutations, that is, which operations performed on the original selection, led to the best army for W40K; we only care about the final army list.

The results shown above were difficult to obtain because of the probabilistic nature of these algorithms: every time I ran them the graphs could change, although in the end they show similar results.

This is the last post about the selection problem, at least temporarily. I wanted to start explaining search algorithms and how important they are for Artificial Intelligence. In the next posts we will continue to use Warhammer 40K as our example, and we'll build some small intelligent agents able to play Warhammer 40K.

Stay safe!

