Sunday, 31 May 2015

Pauli's Exclusion Principle

Quantum physics plays a big role in developing an artificial intelligence, a bigger one than many would think at first glance.

Back in the days when I was developing the Entropic Intelligence, I needed to discard similar-ending futures before evaluating the "Future Entropy" of a given option. This was because "entropy" always involves a given minimum distance, so futures that end closer to each other than this distance must be counted as one single future. If this reminded you of some quantum principle like exclusion, you were quite right.
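As an illustration only -the function name and the 2D endpoints below are my assumptions, not the original code- this grouping step could be sketched like so: any endpoint landing within the minimum distance of an already kept one is discarded, so a cluster of near-identical futures counts as a single future for the entropy:

    import math

    def prune_similar_endings(end_points, min_distance):
        """Keep only endpoints that are at least min_distance apart;
        near-duplicates are merged into the first one kept."""
        kept = []
        for x, y in end_points:
            if all(math.hypot(x - kx, y - ky) >= min_distance
                   for kx, ky in kept):
                kept.append((x, y))
        return kept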

Entropic Intelligence grouping future end points

When I started the conversion into a fractal intelligence, I first thought I could get rid of this ugly step of deleting some futures. This post is here to explain that I could NOT do it. I still need to perform this "pruning" of the fractal so its branches don't condense into a single small zone.

Let's see an example of a "good fractal" that doesn't condense into a single point; this is how all the fractals used by the AI should look:


As you can see, the fractal density never gets too high at any given point, and this is why the fractal can spread to almost half a lap ahead of the kart's position.

But without an exclusion principle, and as you will see in the video below, things can go terribly wrong with the fractal, making it condense in not-so-good zones and misleading the AI into easy-to-avoid traps:

The fractal condenses in one zone, making the AI blind to other, better options.
This is why I tried to simulate quantum physics using fractals before jumping into the fractal intelligence. And what I learned from this adventure has proven to be critical to applying fractal intelligence really optimally.

The bottom line is quite surprising: in the karts example of fractal AI -but also in any other form of fractal growth algorithm I have tried- agents need to check for collisions with the other karts on the track to detect and avoid those possibilities... but they also need to check for collisions with themselves!

Colliding with yourself will "automagically" dissolve any accumulation that may tend to appear, as the fractal growth laws dictate that a collided branch will move to the position of another randomly chosen branch -or future- (I may call a "future" a "branch" or a "spark" at times, depending on the mental picture I am using at the moment: a plant that grows by bifurcating its branches, or a lightning bolt formed by a myriad of electric sparks moving around). This simple fractal growth rule -there are some more- makes the whole problem vanish.
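Here is a minimal sketch of that rule, assuming 2D positions and a fixed exclusion radius (all the names and constants below are mine, not the actual implementation):

    import math
    import random

    MIN_DISTANCE = 1.0  # assumed exclusion radius, in track units

    class Branch:
        """One imagined future (a "branch" or "spark" of the fractal)."""
        def __init__(self, x, y):
            self.x, self.y = x, y

    def dist(a, b):
        return math.hypot(a.x - b.x, a.y - b.y)

    def enforce_exclusion(branches, min_distance=MIN_DISTANCE):
        """Any branch ending too close to a sibling is relocated to the
        position of another randomly chosen branch; the next random
        steps of the simulation will make the pair diverge again."""
        for i, b in enumerate(branches):
            collided = any(j != i and dist(b, o) < min_distance
                           for j, o in enumerate(branches))
            if collided:
                donor = random.choice(branches)
                b.x, b.y = donor.x, donor.y

Relocating onto a random sibling, instead of just deleting the branch, keeps the number of futures constant while still dissolving the dense clumps.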

But there is more fun to it: if you apply this self-collision detection right at the beginning, all the futures being imagined in a frame will collide with each other chaotically, as they all start from the same initial point.

This is a problem nature solved in quantum physics by using what Richard Feynman called "tiny little arrows", also known as the wave function's phase angle. This quantum "artifact" mandates that a couple of photons leaving an electron will not likely collide with each other, as their "tiny little arrows" point in almost the same direction, so the difference has an almost zero length.

The length of this combination of arrows is called the "amplitude" of the event. The probability of the event (the collision) is then this length squared, an even smaller number.
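To make the arrow arithmetic concrete, here is a toy calculation of my own (not Feynman's notation): each future carries a unit arrow e^(i*phase); for an exclusion-like event the two arrows are subtracted, and the squared length of the result, scaled into [0, 1], plays the role of the collision probability:

    import cmath

    def collision_probability(phase_a, phase_b):
        """Subtract two unit arrows; the squared length of the
        difference, divided by 4 so that opposite phases give exactly 1,
        acts as the probability of the collision event."""
        amplitude = abs(cmath.exp(1j * phase_a) - cmath.exp(1j * phase_b))
        return amplitude ** 2 / 4.0

    print(collision_probability(0.00, 0.01))     # almost in phase: ~2.5e-5
    print(collision_probability(0.0, cmath.pi))  # opposite phases: 1.0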

Again, to safely apply Pauli's exclusion principle to fractal intelligence, I am afraid I will ultimately need to account for some "future's phase angle", so that a couple of future positions become more likely to collide when they have raced slightly different lengths and their arrows are not "in phase" anymore.
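One way such a phase could be assigned -a pure assumption on my part, with WAVELENGTH as a hypothetical tuning constant- is to let it grow with the distance each future has raced, so two futures meeting at the same point after paths of different lengths arrive out of phase:

    import math

    WAVELENGTH = 5.0  # hypothetical tuning constant, in track units

    def phase_of(path_length):
        """A future's phase grows with the distance it has raced: equal
        path lengths stay in phase, different lengths drift apart."""
        return 2 * math.pi * path_length / WAVELENGTH

Feeding two such phases into collision_probability() above would then suppress collisions between futures that raced the same distance and favour them otherwise.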

I don't know about you, but I am really impressed by how similar quantum mechanics is to fractal AI, not to mention my other two fractal algorithms for maximizing/minimizing scalar functions. Yes, they are all slowly converging into the same quantum physics!

Let us now move on to the new fractal AI video. Compared to the last one, points 8 and 9 of the list are now coded into the AI but, as I commented, only to discover that I still need exclusion and maybe phase, so I keep it labeled as "beta".


In the first lap, debug thinking was off and everything seemed to go OK; not optimal, but quite OK. In the second lap, debug thinking is turned on, and we can see how the fractal sometimes concentrates in small regions, leading to bad decisions, while at other times it spreads nicely to half a lap ahead, as it was intended to do.

When trying to judge how optimal the AI is now, please keep in mind that the kart does not aim to win any race; its only goal is to drive "as fast as possible". This is why it usually prefers to open widely on curves: not because it is good for winning the race -it is not- but because it allows the kart to keep a high speed through the turns.

Finally, a word about the silly backwards-driving ending: after the bad decision halfway through the second lap, the kart starts driving backwards to un-trap itself -that was intelligent- but then it keeps going backwards until it finally crashes and breaks the toy. But why?

The answer is the same as before: the lack of an exclusion principle. When most of the futures go backwards, if you don't dissolve those accumulations with an exclusion principle, the AI will focus only on that possibility, being again blind to other less probable -but maybe more interesting- options; namely, driving forward again to speed up in the long term, something it was supposed to be able to do, as it was thinking 30 seconds ahead.
