Thursday, 6 July 2017

Retrocausality and AI

Retrocausality is about physical things in the present affecting things in the past. Wow, read it twice. If it sounds to you like it breaks the most basic rules of common sense and our most basic intuitions about how things work, you are right, but as weird as it sounds... somehow it makes perfect sense.

Today I found this inspiring article on phys.org about retrocausality. Basically it argues that, if the time symmetry found in all known physical laws is to be accepted as fundamental, as it actually is, then causality must go in both directions too, so, as unreal as it may sound to us macro-sized humans, it is more than plausible that retrocausality is in the very nature of our world.

Once accepted as a possibility, it solves many of the current issues with quantum theories: action at a distance, non-locality, Bell's theorem... and it is no more or less plausible than other alternatives, like broken time symmetry, many-worlds or even the Copenhagen interpretation, so by accepting one "counter-intuitive" possibility, the quantum world gets less intimidating. I buy it!

Reading it reminded me of one of the many variations of the Fractal AI I tried in the past. I wrote about it in the post about the Feynman fractal AI, a model where signals travelled back and forth in time. Here is a naive drawing of it:


The idea was nice and it was as smart as the "standard" fractal AI, but it could not improve on it at all; it was just another way of doing the same stuff, only more complicated, so I finally dropped the idea into the bag of the almost-good ones.



Here is a nice video of it. The different colours are just a trail of visited positions; the real action is on the black spots:


Would it be possible to build neural networks that rely on this concept to learn as you use them, NNs without a separate learning phase, where signals arriving at different time shifts could interact? Well, it makes sense to me: when a signal ends up with the wrong answer, we are actually penalising its past actions by reducing the weights of the previously visited neuronal paths.

I am aware LSTM NNs basically do that, but I am thinking of a model that uses it at the most basic level, where learning is not based on any gradient but on the past actions of wrongly processed signals.
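To make the idea a bit more concrete, here is a minimal toy sketch of the kind of thing I have in mind (all the names, layer sizes and penalty values are made up for illustration, this is not the actual Fractal AI code): every signal remembers the connections it travels through, and when its final answer turns out to be wrong, the weights along that remembered path are reduced, with no gradient anywhere.

```python
import random

class PathPenaltyNet:
    """Toy network with no separate training phase: each signal leaves a
    trail of visited connections, and when its answer turns out wrong,
    the weights along that trail are reduced. No gradients involved."""

    def __init__(self, layer_sizes, seed=0):
        rng = random.Random(seed)
        # weights[l][i][j]: connection from unit i in layer l to unit j in layer l+1
        self.weights = [
            [[rng.uniform(0.4, 0.6) for _ in range(layer_sizes[l + 1])]
             for _ in range(layer_sizes[l])]
            for l in range(len(layer_sizes) - 1)
        ]

    def forward(self, start_unit):
        """Route a signal through the net, remembering the path it takes."""
        path = []          # list of (layer, from_unit, to_unit)
        unit = start_unit
        for l, layer in enumerate(self.weights):
            outgoing = layer[unit]
            # the signal simply follows the currently strongest connection
            next_unit = max(range(len(outgoing)), key=lambda j: outgoing[j])
            path.append((l, unit, next_unit))
            unit = next_unit
        return unit, path  # the final unit acts as the "answer"

    def feedback(self, path, correct, penalty=0.1, reward=0.02):
        """Act 'backwards in time': punish (or slightly reinforce)
        the connections this signal has already used."""
        for l, i, j in path:
            if correct:
                self.weights[l][i][j] += reward
            else:
                self.weights[l][i][j] -= penalty

if __name__ == "__main__":
    # Hypothetical task: signals entering at unit 0 should end at output unit 1.
    net = PathPenaltyNet([3, 4, 2])
    for step in range(20):
        answer, path = net.forward(start_unit=0)
        net.feedback(path, correct=(answer == 1))
    print("final answer for unit 0:", net.forward(0)[0])
```

In spirit this is close to an eligibility trace: the feedback acts on decisions the signal already took, as if the "wrongness" of the answer reached back to its past.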

I just find it the natural way to go... but if and only if the universe has T-symmetry!
