Monday, 15 February 2016

Serious fractal optimizing

Guillem Duran, my friend and colleague from Twitter, is now working full-time with me on converting the "one way" fractal algorithm used in my latest AI into a "serious" general optimising algorithm in Python: given a function, find the global (as opposed to local) maximum -or minimum- value it can reach over its entire domain -the set of points in the state space where the function is defined.

Last week we finished the conversion, so we are just starting to benchmark it against other "state of the art" algorithms of the same kind, but to give you a first view of it, here is a preliminary video:

[Video: the new algorithm running on the 2D egg holder function]

In the video above you see the new algorithm dealing with a hard 2D function called the "egg holder". The function is a nightmare to optimise: it has a lot of local maxima, some very distant from each other, and all with very similar f(x) values, so it is quite easy to get stuck on a local maximum.
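
For reference, this is the standard Python definition of the egg holder (it is usually stated as a minimisation problem; the video searches for the maximum, so presumably the sign is flipped):

    import numpy as np

    def eggholder(x, y):
        # Egg holder benchmark on [-512, 512]^2: a landscape full of
        # deep, similar-looking local optima.
        # Known global minimum: f(512, 404.2319) ~= -959.6407.
        term1 = -(y + 47.0) * np.sin(np.sqrt(abs(x / 2.0 + y + 47.0)))
        term2 = -x * np.sin(np.sqrt(abs(x - (y + 47.0))))
        return term1 + term2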


As you can see, this hypnotic video is divided into five windows showing different information about the process.

Window #1 (upper-left)

This is exactly what the algorithm is doing. You see a dot at each point where the algorithm is reading the function value (in this example, it reads 200 values per tick or "epoch"). The lines mark the cases in which one of those 200 points decides to move to the location of another point (a clone), so you can see how points in one zone can jump to another zone when the algorithm judges it more interesting to search there.
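
The exact rule behind those jumps belongs to the fractal algorithm itself and is not spelled out here, but a minimal sketch of a clone step of this kind could look as follows (the companion choice and the jump probability are assumptions, not the real rule):

    import numpy as np

    def clone_step(positions, scores, rng=np.random):
        # Hypothetical sketch of the "clone" moves drawn as lines in window #1.
        # Each walker compares itself with one randomly chosen companion and
        # jumps to the companion's position with a probability that grows with
        # the score difference (scores assumed normalised to [0, 1]).
        n = len(positions)
        companions = rng.randint(0, n, size=n)           # one random partner each
        advantage = scores[companions] - scores          # > 0: partner looks better
        jump = rng.random_sample(n) < np.clip(advantage, 0.0, 1.0)
        positions[jump] = positions[companions[jump]]    # the "clone" jump
        return positions, jump                           # jump mask -> draw the lines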

Window #2 (upper right)

Here you see the function being optimised, interpolated from the function reads the algorithm has made so far. Those points are not used by the algorithm; we keep them just to make this nice drawing.
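
A surface like this can be rebuilt from the scattered reads with an off-the-shelf interpolator; here is a sketch using scipy's griddata (the actual plotting code may differ):

    import numpy as np
    from scipy.interpolate import griddata

    def interpolated_surface(points, values, resolution=200):
        # points: (n, 2) locations where f was read; values: (n,) f values.
        xs = np.linspace(-512, 512, resolution)
        ys = np.linspace(-512, 512, resolution)
        grid_x, grid_y = np.meshgrid(xs, ys)
        # Cubic interpolation; NaN outside the convex hull of the reads.
        return griddata(points, values, (grid_x, grid_y), method='cubic')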

Window #3 (centre)

Here you see the locations where the function has been read by the algorithm (coloured according to their f(x) value) and the points being read right now (in black), so you can watch how the search for the best point proceeds.

A red star in the upper-right corner marks the optimum point; notice how it sits on the frontier of the domain, which makes it even more difficult to find.
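
This window is essentially a coloured scatter plot; a sketch with matplotlib (the variable names here are assumptions):

    import matplotlib.pyplot as plt

    def plot_reads(history_points, history_values, current_points, optimum):
        # Past reads coloured by f(x), current reads in black,
        # and a red star on the known optimum.
        sc = plt.scatter(history_points[:, 0], history_points[:, 1],
                         c=history_values, cmap='viridis', s=4)
        plt.scatter(current_points[:, 0], current_points[:, 1],
                    c='black', s=8)
        plt.plot(optimum[0], optimum[1], 'r*', markersize=14)
        plt.colorbar(sc, label='f(x)')
        plt.show()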

Window #4 (bottom left)

A plot of the absolute error (the distance between the best function value found so far and the known optimum value).
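
Since the egg holder's optimum value is known (about -959.64 at (512, 404.2319) in its usual minimisation form), the error curve is simply:

    KNOWN_OPTIMUM = -959.6407   # f(512, 404.2319), minimisation form

    def absolute_error(best_so_far):
        # Distance to the known optimum value, as plotted in window #4.
        return abs(best_so_far - KNOWN_OPTIMUM)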

Window #5 (bottom right)

The number of ticks ("epoch" in the video) since the last time the best-so-far candidate was updated. We try to keep this number below 50 by automatically adjusting some internal parameters.
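
The post does not detail which parameters get adjusted, so the following is only a hypothetical sketch of such a stagnation monitor, with an assumed "exploration radius" as the knob:

    PATIENCE = 50  # target: stay below ~50 ticks without improvement

    class StagnationMonitor:
        # Window #5: ticks since the best-so-far candidate last improved.
        # When the counter exceeds PATIENCE, widen some exploration
        # parameter to escape the stall (the knob here is an assumption).
        def __init__(self):
            self.best = float('-inf')   # maximising in this example
            self.ticks_since_best = 0

        def update(self, value, params):
            if value > self.best:
                self.best = value
                self.ticks_since_best = 0
            else:
                self.ticks_since_best += 1
            if self.ticks_since_best > PATIENCE:
                params['exploration_radius'] *= 1.5   # assumed knob, not the real one
            return self.ticks_since_best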

From this point we plan to benchmark it against basin-hopping and simulated-annealing algorithms on some other hard 2D functions, and then jump into more difficult tasks, such as folding some easy proteins (minimising Lennard-Jones potentials) and finding Pareto frontiers for multi-objective optimisation.
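
Scipy already ships a basin-hopping implementation, so a minimal baseline for that comparison could look like this (the parameters are guesses, and the domain bounds are not enforced in this sketch):

    import numpy as np
    from scipy.optimize import basinhopping

    def eggholder_vec(p):
        x, y = p
        return (-(y + 47.0) * np.sin(np.sqrt(abs(x / 2.0 + y + 47.0)))
                - x * np.sin(np.sqrt(abs(x - (y + 47.0)))))

    # Basin hopping: random perturbations followed by local minimisations.
    result = basinhopping(eggholder_vec, x0=np.array([0.0, 0.0]),
                          niter=500, stepsize=50.0)
    print(result.x, result.fun)   # compare against f(512, 404.2319) ~= -959.64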

If the algorithm proves capable of solving those kinds of problems "easily", maybe we have something powerful on our hands.
