Thursday, 6 July 2017

Retrocausality and AI

Retrocausality is the idea that physical events in the present can affect events in the past. Wow, read it twice. If that sounds to you like a violation of the most basic rules of common sense and our most basic intuitions about how things work, you are right, but as weird as it sounds... somehow it makes perfect sense.

Today I found an inspiring article on phys.org about retrocausality. Basically, it argues that if the time symmetry found in all known physical laws is to be accepted as fundamental, as it actually is, then causality must run in both directions too. So, as unreal as it may sound to us macro-sized humans, it is more than plausible that retrocausality is in the very nature of our world.

Once accepted as a possibility, it solves many of the current issues with quantum theories: action at a distance, non-locality, the Bell theorem... and it is no more or less plausible than the alternatives, like giving up time symmetry, many-worlds or even the Copenhagen interpretation. So, by accepting one "counter-intuitive" possibility, the quantum world gets less intimidating. I buy it!

Reading it reminded me of one of the many variations of the Fractal AI I tried in the past. I wrote about it in the post about the Feynman fractal AI: a model where signals travelled back and forth in time. Here you have a naive drawing of it:


The idea was nice and it was as smart as the "standard" fractal AI, but it could not improve on it at all. It was just another way of doing the same thing, only more complicated, so I finally dropped this idea into the bag of the almost-good ones.

Wednesday, 28 June 2017

Solved atari games

The list of OpenAI environments we have played so far is steadily growing, so I had to make a list to keep track of them. Here we keep and share this growing list, along with its scorings and how they compare with the "second-best" algorithm in the OpenAI gym.

For those interested: we base all our work on "Causal Entropic Forces" by Alexander Wissner-Gross and apply the ideas outlined in the G.A.S. algorithm. We are not actually learning in any way, so all games are independent from each other and the first-ever played game is as good as the 100th.

First, we list the already finished environments. Each one includes 100 games played and an official scoring in the OpenAI gym, computed as the average of those 100 games:

1. Atari - MsPacman-ram-v0: average score of 11.5k vs 9.1k (x1.2)

This was the first environment to be finished and uploaded, so it represents our first official record. We decided to use the "ram" version (instead of the image version) because the choice is irrelevant for our algorithm but not for a more standard approach, so we got an extra punch.

The main issue here was that a dead Pac-Man takes about 15 frames to become noticeable on screen (there is a short animation), so you need to look ahead at least those 15 frames (ticks) in order to detect death.
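That look-ahead can be sketched as a small helper: clone the state, advance it a fixed horizon of ticks, and report whether a terminal flag appears. The names and the toy step function below are mine for illustration, not the actual Fractal AI code.

```python
import copy

def is_doomed(state, step_fn, horizon=15):
    """Roll a deep copy of `state` forward `horizon` ticks and report
    whether a terminal (death) flag shows up inside that window."""
    s = copy.deepcopy(state)
    for _ in range(horizon):
        s, done = step_fn(s)
        if done:
            return True
    return False

# Toy simulator: the agent "dies" once an internal counter reaches 10.
step = lambda s: (s + 1, s + 1 >= 10)
print(is_doomed(0, step, horizon=15))  # True: death is 10 ticks away
print(is_doomed(0, step, horizon=5))   # False: the horizon is too short to see it
```

With a 15-tick horizon the death animation falls inside the window, so the walker can be discarded before the score ever reflects it.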


Sunday, 18 June 2017

OpenAI first record!

We have just submitted our first official scoring on the OpenAI gym for the Atari game "MsPacman-ram-v0", based on RAM (so you do not see the screen image; instead you "see" a 128-byte RAM dump).

Our just-submitted "Fractal AI" algorithm played 100 consecutive games -the minimum allowed for an official scoring- and got an average score for the best 10 games of 11543 +/- 492, well above the previous record of 9106 +/- 143, so we are actually #1 on this particular Atari game:
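For reference, a summary like 11543 +/- 492 can be reproduced from the list of episode scores as the mean of the best 10 games plus a spread. The exact spread measure used on the leaderboard is not stated here, so the standard error below is my assumption.

```python
import numpy as np

def best_k_summary(episode_scores, k=10):
    """Mean and standard error of the best `k` episode scores
    out of the (at least 100) games played."""
    best = np.sort(np.asarray(episode_scores, dtype=float))[-k:]
    return best.mean(), best.std(ddof=1) / np.sqrt(k)

mean, err = best_k_summary(range(100))  # toy scores 0..99 -> best 10 are 90..99
print(round(mean, 1), round(err, 2))    # 94.5 0.96
```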


Thursday, 15 June 2017

Fractal optimising, a first paper

The fractal "family" of algorithms actually started as a very naïve optimising algorithm: after all, intelligence is just about maximising a certain "utility function", so the two are quite related.

Once the fractal AI was done, the optimisation facet was revisited with much more promising results, only to be abandoned again later.

And finally, with the help of our friend José María Amigó from the Miguel Hernández University, we wrote an article about this fractal algorithm, which we named "GAS" (nominally for "General Algorithmic Search", but really for the initials of our names: Guillem, Amigó and Sergio), and compared it against other similar algorithms out there (basin hopping, differential evolution and cuckoo search).
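As a point of comparison, basin hopping is available off the shelf in SciPy. A minimal run on the Rastrigin function (my choice of benchmark here, not necessarily the one used in the paper) looks like this:

```python
import numpy as np
from scipy.optimize import basinhopping

def rastrigin(x):
    # Classic multimodal benchmark: global minimum of 0 at the origin,
    # surrounded by a grid of local minima that trap plain gradient descent.
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x))

# Start near a local minimum at (3, 3); the random hops try to escape towards 0.
result = basinhopping(rastrigin, x0=np.full(2, 3.0), niter=200, seed=0)
print(result.fun)
```

Any optimiser compared against GAS would be run this same way: many restarts/hops on a known multimodal function, reporting the best value found.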

Thursday, 8 June 2017

Fractal VS Pack-Man

Last week my friend Guillem adapted the fractal AI to the OpenAI Atari games (OpenAI Gym is a "gym" for AIs); in particular he focused on "Ms. Pac-Man", an environment labeled "unsolved" as I write this.

Yesterday the work was almost done and the first videos came out of the pipeline and, to be honest, the results astonished me: it worked far beyond my always-optimistic high expectations.

So here is the video that made me so happy yesterday:


Friday, 3 March 2017

Imperfect information

In the current incarnation of the fractal AI, we need to supply it with its exact state, all the interactions with the environment (the simulation), and a hand-crafted potential function to follow. This is known as having "perfect information".
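The three ingredients just listed can be written down as a tiny interface; the names are mine and only illustrate what "perfect information" means here:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class PerfectInfoProblem:
    state: Any                           # the system's exact state
    simulate: Callable[[Any, Any], Any]  # the environment: (state, action) -> next state
    potential: Callable[[Any], float]    # hand-crafted potential/utility of a state

# Toy 1-D example: the state is a position, actions shift it,
# and the potential rewards being close to the origin.
problem = PerfectInfoProblem(
    state=5.0,
    simulate=lambda s, a: s + a,
    potential=lambda s: -abs(s),
)
print(problem.potential(problem.simulate(problem.state, -2.0)))  # -3.0
```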

Having perfect information about any system is just not feasible, so my models are not usable in real environments, with real drones moving real motors, as all of that is unknown to us and we will only have some sensor outputs as our information.

This week I visited the Cognitive Sciences research team at the University of Zaragoza as a guest for a short but intense seminar about my fractal intelligence algorithms, in an effort to team up with them here and there, but they really emphasized the sensorial approach -the imperfect information case- in order to make our work compatible.

Thursday, 8 September 2016

Generating consciousness

One week ago I wrote this post about an insight into what "consciousness" could be like, and I imagined it as something not so hard to grasp as we always thought. Today I come back with a "pseudo-code" version of it in my mind.

Those new ideas have come along with an effort in our company to port the fractal algorithm into a distributed, highly scalable architecture. A work in progress that is already producing a great speed-up in our tests.

This new architecture allows me to play with big groups of fractals disseminated over a network of PCs, all doing the same decision work in parallel, later joining all their findings to take a "collegial" decision.
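A toy version of that joining step, assuming each PC simply reports its preferred action (the real aggregation is surely richer than a plain vote):

```python
from collections import Counter

def collegial_decision(worker_choices):
    """Majority vote over the actions chosen independently
    by each fractal worker on the network."""
    return Counter(worker_choices).most_common(1)[0][0]

print(collegial_decision(["left", "left", "up", "left", "right"]))  # left
```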

More interestingly, I can now "pack" some fractals to work as one big fractal and replicate it endlessly to build a tree of cooperating fractals as a nice way to distribute work over the PCs on the network.

But how to arrange them to distribute the work even more efficiently? By building a "fractal of fractals": a tree of fractals whose structure evolves dynamically as you use it, finally forming a nice tree-like fractal that adapts its form to live in the environment you gave it.
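A minimal sketch of such a tree, where a node is either a single worker's vote or a group of child nodes, and decisions bubble up by majority at every level (a stand-in for the real, dynamically evolving structure):

```python
def tree_decide(node):
    """A leaf is a vote; an inner node is a list of children.
    Each level takes the majority of its children's decisions."""
    if not isinstance(node, list):
        return node
    votes = [tree_decide(child) for child in node]
    return max(set(votes), key=votes.count)

# Two packs of fractals plus a lone one, arranged as a small tree.
tree = [["up", "up", "down"], ["down", "down", "up"], "up"]
print(tree_decide(tree))  # up
```

Because each pack resolves its own vote before passing it upward, only one decision per pack ever crosses the network, which is what makes the distribution cheap.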