I have recorded a short video directly from my screen so you can watch the algorithm at work in slow motion. It was generated in real time on a very slow PC (no GPU, no CUDA, no parallelization, no optimising, just old and dirty standard code), so you can watch the fractal as it grows.
What you will see in the video are the "tips" of the branches of the fractal as it evolves in time (visit the post about "Fractal algorithm basics" for more info about what those "branches" are): a "front wave" of imaginary futures scanning all the possibilities in front of you.
It really acts like a flock of silly birds, but a very special one: all the birds are totally blind, and they change direction and velocity totally at random, as if a crazy monkey were pushing their joysticks. But when a bird crashes, it is cloned to a safer position: the position of one of the "best" birds in the current flock. This process of cloning and collapsing is what defines the "fractal growth" commented on in the last post, and it is what makes it possible for the futures to reach the exit of a complicated maze quite automagically.
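To make the clone-and-collapse idea concrete, here is a minimal sketch of it on a toy grid maze. Everything specific here is an assumption of mine, not the actual app's code: the maze layout, the scoring rule ("best" = closest to the exit by Manhattan distance), and the cloning policy are all illustrative placeholders.

```python
import random

# Toy maze: '#' = wall (crash), '.' = free, 'S' = start, 'E' = exit.
# Layout is a made-up example, not the maze from the video.
MAZE = [
    "#########",
    "#S..#...#",
    "##..#.#.#",
    "#...#.#E#",
    "#.###.#.#",
    "#.......#",
    "#########",
]
START = (1, 1)
EXIT = (3, 7)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # the four "joystick" directions


def alive(pos):
    """A future is alive as long as it has not crashed into a wall."""
    r, c = pos
    return MAZE[r][c] != "#"


def score(pos):
    """Lower is better: Manhattan distance to the exit (an assumed heuristic)."""
    return abs(pos[0] - EXIT[0]) + abs(pos[1] - EXIT[1])


def fractal_step(n_futures=1000, ticks=100):
    """One 'thinking step': grow the fractal, return the best surviving future."""
    futures = [{"pos": START, "first_move": None} for _ in range(n_futures)]
    for _ in range(ticks):
        # Every bird is blind: it just pushes the joystick at random.
        for f in futures:
            dr, dc = random.choice(MOVES)
            f["pos"] = (f["pos"][0] + dr, f["pos"][1] + dc)
            if f["first_move"] is None:
                f["first_move"] = (dr, dc)
        survivors = [f for f in futures if alive(f["pos"])]
        if not survivors:
            break  # the whole flock died; nothing left to clone from
        best = min(survivors, key=lambda f: score(f["pos"]))
        # Collapse-and-clone: crashed futures reincarnate onto a good survivor,
        # inheriting its position AND its recorded first move.
        for f in futures:
            if not alive(f["pos"]):
                donor = best if random.random() < 0.5 else random.choice(survivors)
                f["pos"], f["first_move"] = donor["pos"], donor["first_move"]
    return min(futures, key=lambda f: score(f["pos"]))
```

Note that a cloned future inherits the donor's recorded first move: that is what later lets you ask the best survivor which joystick push it started with.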
The video starts by showing the final fractal paths used to solve the maze; then I switched the app into "Slow motion debug" mode and ran a new step of the thinking process, so you can see the fractal forming. The params were 1000 futures and 100 seconds.
As you can see, the fastest futures sometimes crash in a turn, and all those dead futures then reincarnate onto slower futures (which are still alive), so a second try, or front wave, is reinforced into existence, travelling at a slower speed. As a result, the "flock" is capable of making all the turns and getting out of the maze.
The decision is taken once the fractal stops: you take the best of the remaining "birds", ask it "Which direction did you first push the joystick in?", and use the answer as your "intelligent decision". It is much simpler than in previous versions, and far more reliable.
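In code, that final step is tiny. A hypothetical sketch, assuming each future carries a score (lower is better) and its recorded first joystick push:

```python
# Made-up example futures left after the fractal stops; the scores and
# move names are placeholders, not real output from the app.
futures = [
    {"score": 5.0, "first_move": "left"},
    {"score": 1.5, "first_move": "up"},     # the best (lowest-score) survivor
    {"score": 3.2, "first_move": "right"},
]

best = min(futures, key=lambda f: f["score"])
decision = best["first_move"]
print(decision)  # -> "up"
```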
As an "unfair" comparison, I tried this same maze using the old, linear "Entropic AI", but even using 2000 futures, none of them managed to safely drive those 10 seconds... they all died. This is why the fractal growth was needed in the first place!
|The linear version failing to simulate even 10 seconds.|
I plan to have a "Two Way" version of the fractal ready for next week, as I have already worked out all of its fine details. Basically, it will resemble what I described some posts back when I wrote about "Using Feynman integrals": it will mimic both the Feynman-diagram paradigm and the idea behind the Feynman integrals, but in a totally revamped and fractal way.
So if I am not terribly mistaken, next week I will come back to show you the first "Feynman Fractal AI" video, the "Two Way" version of this fractal AI.
I also think this will eventually make for a much-improved deep learning algorithm, as it can basically do back-and-forth propagation learning in a continuous, self-reinforcing way... but I am not ready to make that big jump yet, just a -big- step closer.