I have recorded a short video directly from my screen so you can watch the algorithm at work in slow motion. It was generated in real time on a very slow PC (no GPU, no CUDA, no parallelization, no optimising, just quick and dirty standard code), so you can watch the fractal as it grows.
What you will see in the video are the "tips" of the fractal's branches as it evolves in time (visit the post about "Fractal algorithm basics" for more on what those "branches" are): a "front wave" of imaginary futures scanning all the possibilities in front of you.
It really behaves like a flock of silly birds, but a very special one: every bird is totally blind and changes direction and velocity at random, as if a crazy monkey were pushing its joystick. But when a bird crashes, it is cloned to a safer position: the position of one of the "best birds" in the current flock. This process of cloning and collapsing is what defines the "fractal growth" discussed in the last post, and it lets the futures reach the exit of a complicated maze quite automagically.
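In case a concrete picture helps, here is a minimal Python sketch of one tick of that clone-on-crash process. The grid maze, the `is_wall` test and the distance-to-exit `reward` score are simplifying assumptions of mine for illustration, not the actual code behind the video:

```python
import random

EXIT = (9, 9)  # hypothetical exit cell of a 10x10 grid maze

def reward(bird):
    # Crude "how good is this future" score: closeness to the exit
    # (purely illustrative; any problem-specific score would do).
    x, y = bird["pos"]
    return -(abs(x - EXIT[0]) + abs(y - EXIT[1]))

def step_flock(flock, is_wall):
    # One tick: every bird moves blindly at random ("crazy monkey on the
    # joystick"), then every crashed bird is cloned onto a surviving best bird.
    for bird in flock:
        dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        if bird["first_action"] is None:        # remember the first joystick push
            bird["first_action"] = (dx, dy)
        bird["pos"] = (bird["pos"][0] + dx, bird["pos"][1] + dy)

    alive = [b for b in flock if not is_wall(b["pos"])]
    if not alive:
        return                                  # the whole front wave died this tick

    best = sorted(alive, key=reward, reverse=True)[:max(1, len(alive) // 10)]
    for bird in flock:
        if is_wall(bird["pos"]):                # this future crashed...
            donor = random.choice(best)         # ...so reincarnate it on a "best bird"
            bird["pos"] = donor["pos"]
            bird["first_action"] = donor["first_action"]
```

Note that the only state a clone inherits is the donor's current position plus the donor's first action, which is all we will need later to take the decision.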
The video starts by showing the final fractal paths used to solve the maze; then I switched the app into "Slow motion debug" mode and ran a new step of the thinking process, so you can see the fractal forming. The params were 1000 futures and 100 seconds.
As you can see, the fastest futures sometimes crash in a turn, and all those dead futures then reincarnate on slower futures (which are still alive), so a second front wave is reinforced into existence, travelling at a slower speed. As a result, the "flock" is capable of making all the turns and getting out of the maze.
The decision is taken once the fractal stops: you take the best of the remaining "birds", ask it "Which direction did you push the joystick first?", and use the answer as your "intelligent decision". It is much simpler than in previous versions, and far more reliable.
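Continuing the sketch above (same caveats: illustrative names, not the app's real code), the whole decision then boils down to a few lines:

```python
def decide(start, is_wall, n_futures=1000, ticks=100):
    # Grow the fractal, then read the decision off the best surviving bird.
    flock = [{"pos": start, "first_action": None} for _ in range(n_futures)]
    for _ in range(ticks):
        step_flock(flock, is_wall)
    survivors = [b for b in flock if not is_wall(b["pos"])] or flock
    best = max(survivors, key=reward)
    return best["first_action"]   # "Which direction did you push the joystick first?"

# Toy usage: an empty 10x10 room bounded by walls (again, purely illustrative).
is_wall = lambda p: not (0 <= p[0] <= 9 and 0 <= p[1] <= 9)
print(decide(start=(0, 0), is_wall=is_wall))
```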
As an "unfair" comparison, I tried this same maze with the old, linear "Entropic AI", but even with 2000 futures, none of them managed to safely drive those 10 seconds... they all died. This is why the fractal growth was needed in the first place!
Linear version failing to simulate only 10 seconds.
I plan to have a "Two Way" version of the fractal ready next week, as I have already worked out all its fine details. Basically, it will resemble what I described a few posts back when I wrote about "Using Feynman integrals": it will mimic both the Feynman diagram paradigm and the idea behind Feynman integrals, but in a totally revamped, fractal way.
So if I am not terribly mistaken, next week I will come back to show you the first "Feynman Fractal AI" video, the "Two Way" version of this fractal AI.
I also think this will eventually make for a much improved deep learning algorithm, as it can basically do back-and-forth propagation learning in a continuous, self-reinforcing way... but I am still not ready to make this big jump, just a -big- step nearer.
cool stuff.
Thanks a lot for your comment, Jose; coming from you, it is very valuable to me.
I plan to adapt it to robotics in a very general way, so you could easily add "fractal common sense" to any kind of robot, along with a powerful, lightning-fast intelligence.
Up to now, most of the components are done and, as you can see, working nicely. A final part still needs to be added, where general goals like "collect honey" are automatically broken into simpler goals that are easy for the fractal AI to follow.
That will surely come after the two-way version is in production and later adapted to learning on deep neural networks. I am not familiar enough with robotics to make a clean jump by myself, so I cannot focus on this now. Too many projects for me!
Yes, very cool stuff. I'm very pleased to have found your site just now after searching on "fractal agents". I'm very new to both fractals and agent-based models, and have only done a smattering of AI learning so far. But I've just started my own blog, "Thinking in Models" (qyoom.github.io) where I will be doing my own research. I want to plunge into reading your blog and learn as much as I can from following you. Cheers!
Thanks Richard, I have already read the first two posts on your blog: I had never thought about using genetic algorithms without a bias for adaptation/survival, quite interesting!
Fractals are not so far from genetic algorithms; in fact, they are 100% genetic! Think of a tree deciding which branches it will let grow and occasionally bifurcate, and which will dry out to make room for new ones.
It is a competition between branches, and again, the number of branches is kept at a given level, as in real trees. Leonardo already noticed that a horizontal cut through a tree always shows the same total area of wood: one big trunk at the bottom equals many small branches higher up (for instance, a trunk of radius 10 splitting into four branches of radius 5 keeps the area, since 4 x 5^2 = 10^2), so the total cross-sectional area of the branches is a constant, like the population in your posts.
Basically, if you mix genetic algorithms with keeping future entropy high (meaning "do not focus only on one kind of population, even if it performs fantastically; also try to keep diversity high"), you will have a nice fractal algorithm.
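As a rough sketch of that mix (the `fitness` and `genome_distance` callables here are placeholders I made up for the example, not any particular library's API):

```python
import random

def rarity(member, population, genome_distance, sample=5):
    # How different this member is from a few random peers (higher = rarer).
    peers = random.sample(population, min(sample, len(population)))
    return sum(genome_distance(member, p) for p in peers) / len(peers)

def select_parents(population, fitness, genome_distance, k):
    # Weight selection by fitness AND rarity, so that minority variants
    # keep a foothold instead of being wiped out by the current champion.
    weights = [fitness(m) * (rarity(m, population, genome_distance) + 1e-9)
               for m in population]
    return random.choices(population, weights=weights, k=k)
```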
So, in your bar representation of population change (like in your first post), fractals would keep minority populations from vanishing from the scene, as you never know whether a not-so-adapted member could evolve into the best match after a couple of mutations.
Balancing the prevalence of the best-fitted against the loss of "biodiversity" is the key when using fractals.