Wednesday, 24 October 2018

Hacking Reinforcement Learning

My good friend and close colleague Guillem had a really busy year giving talks about Reinforcement Learning at several events such as Piter Py 2017 (Saint Petersburg, Russia), EuroPython 2018 (Edinburgh, UK), PyConES 2018 (Málaga, Spain) and PyData Mallorca (among others!), introducing Fractal Monte Carlo to a broad audience.

All the talks were about RL, but those held at EuroPython and PyConES (the latter, in Spanish, not yet online) were both about "hacking RL" by introducing the Fractal Monte Carlo (FMC) algorithm as a cheap and efficient way to generate lots of high-quality rollouts of the game/system being controlled.
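As a minimal illustration of what a rollout is, here is a sketch that samples plain random rollouts against the classic gym API; this is not FMC itself (its walker-cloning mechanics are out of scope here), and the environment name, rollout length and number of rollouts are arbitrary choices of mine:

```python
import gym

def random_rollout(env, max_steps=200):
    """Sample one rollout of random actions and return the visited observations."""
    observations = [env.reset()]
    for _ in range(max_steps):
        obs, reward, done, info = env.step(env.action_space.sample())
        observations.append(obs)
        if done:
            break
    return observations

# Hypothetical usage: collect a batch of cheap rollouts of an Atari game.
env = gym.make("MsPacman-ram-v0")
rollouts = [random_rollout(env) for _ in range(10)]
```

FMC replaces the random action choices with its walker-based sampling, which is what makes its rollouts high quality at a comparable cost.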

Tuesday, 16 October 2018

Graph entropy slides

After the series of six posts about Graph Entropy (starting here), I have prepared a short presentation about Graph Entropy, mainly to clarify the concepts for myself (and for anyone interested) and to present some real-world use cases.



One of the most interesting ideas introduced in this presentation is a method for, once you have defined the entropy of all the nodes in a static graph, easily updating all those entropy values as the graph evolves over time, whether by altering the conditional probability of some connections or by adding and removing connections, by treating nodes and connections as cellular automata that adjust their internal entropies asynchronously.
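As a rough sketch of that idea (my own illustration, not code from the slides, reusing the product-based entropy defined later in the series), each node can cache its local entropy and recompute it only when one of its own connections changes, so the graph never needs a global recomputation:

```python
import math

class Node:
    """A graph node that caches a local entropy value and refreshes it when its edges change."""

    def __init__(self, name):
        self.name = name
        self.edges = {}      # neighbour name -> conditional probability of the connection
        self.entropy = 0.0

    def set_edge(self, neighbour, prob):
        """Add or update a connection (or remove it with prob=None), then refresh the entropy."""
        if prob is None:
            self.edges.pop(neighbour, None)
        else:
            self.edges[neighbour] = prob
        self.update_entropy()

    def update_entropy(self):
        # Product form from the graph-entropy series: H2 = prod(2 - p**p), H3 = 1 + ln(H2).
        if not self.edges:
            self.entropy = 0.0
            return
        h2 = 1.0
        for p in self.edges.values():
            h2 *= 2.0 - p ** p
        self.entropy = 1.0 + math.log(h2)

# Asynchronous-style usage: only the node whose connections changed recomputes anything.
a = Node("A")
a.set_edge("B", 0.7)
a.set_edge("C", 0.3)
print(a.entropy)
```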

You can also jump to the original Google Slides version if you want to comment on a particular slide.

Update (24 Oct 2018): this post was referenced in the article "A Brief Review of Generalized Entropies", where the (c, d) exponents of these generalized entropies are calculated.


Saturday, 25 August 2018

Curiosity solving Atari games

Some days ago I read on Twitter about playing Atari games without having access to the reward, that is, without knowing your score at all. This is called "curiosity-driven" learning, as your only goal is to explore as much of the state space as possible, trying out new things regardless of the score they will add or take away. Finally, a neural network learns from those examples how to move around in the game while simply avoiding its end.



Our FMC algorithm is a planning algorithm: it doesn't learn from past experiences but decides after sampling a number of possible future outcomes of taking different actions. Still, it can scan the future without any reward.
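A minimal sketch of reward-free planning in that spirit (a heavy simplification of FMC, assuming a hypothetical environment object with clone() and step() methods): for each candidate action, sample a few random futures and prefer the action whose reached states are most spread out.

```python
import random
import numpy as np

def novelty_score(states):
    """Spread of the reached states: mean pairwise distance (no reward involved at all)."""
    states = [np.asarray(s, dtype=float) for s in states]
    dists = [np.linalg.norm(a - b) for i, a in enumerate(states) for b in states[i + 1:]]
    return float(np.mean(dists)) if dists else 0.0

def choose_action(env, actions, n_rollouts=8, horizon=20):
    """Pick the action whose sampled futures cover the widest region of the state space."""
    best_action, best_score = None, -1.0
    for action in actions:
        reached = []
        for _ in range(n_rollouts):
            sim = env.clone()                 # hypothetical: copy of the current game state
            state = sim.step(action)          # hypothetical: step() returns only the next state
            for _ in range(horizon - 1):
                state = sim.step(random.choice(actions))
            reached.append(state)
        score = novelty_score(reached)
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```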

Friday, 3 August 2018

Roadmap to AGI

Artificial General Intelligence (AGI) is the holy grail of artificial intelligence and my personal goal since 2013, when this blog started. I seriously plan to build an AGI from scratch, with the help of my good friend Guillem Duran, and here is how I plan to do it: a plausible and doable roadmap to build an efficient AGI.

Please keep in mind that we both work on this in our spare time, so even if the roadmap is practically finished in its theoretical aspects, coding it is hard and time-consuming -we don't have access to any extra computing power beyond our personal laptops- so, at the current pace, don't expect anything spectacular in the near future.

That said, the whole thing is doable within a few years given some extra resources, so let's start now!

AGI structure

A general intelligence, be it artificial or not, is composed of only three modules, each with its own purpose, able to do its job both autonomously and in cooperation with the other modules.

It is only when they work together that we can speak of "intelligence" in the same sense we consider ourselves intelligent. Their internal dynamics, algorithms and physical substrate may not be the same, or even close, but the idea of the three subsystems and their roles is the same in both cases; they are just solved with different implementations.

In this initial post I just enumerate the modules, the state of their development, and their basic functions. In the next posts I will get deeper into the details of each one. Interactions between modules will be covered later, once the different modules are properly introduced.

Wednesday, 18 July 2018

Graph entropy 6: Separability

In the standard Gibbs-Shannon entropy, the 3rd Shannon-Khinchin axiom, about separability, says that, given two independent distributions P and Q, the entropy of the combined distribution PxQ is:

H(PxQ) = H(P) + H(Q)

When P and Q are not independent, this formula becomes an inequality:
H(PxQ) ≤ H(P) + H(Q)
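Both statements are easy to check numerically with the Gibbs formula; here is a quick sketch of the independent case (my own example, with arbitrarily chosen P and Q), where the two sides match exactly:

```python
import math

def gibbs_entropy(dist):
    """H(P) = -sum(p * ln(p)) over the non-zero probabilities."""
    return -sum(p * math.log(p) for p in dist if p > 0)

P = [0.7, 0.1, 0.2]
Q = [0.5, 0.5]

# Joint distribution of two independent variables: each entry is p_i * q_j.
PxQ = [p * q for p in P for q in Q]

print(gibbs_entropy(PxQ))                    # ~1.495
print(gibbs_entropy(P) + gibbs_entropy(Q))   # ~1.495, the same value
```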

Graph entropy, being applied to graphs instead of distributions, allows for more ways of combining two distributions, giving not one but at least three interesting inequalities:

Wednesday, 13 June 2018

Graph entropy 5: Relations

After some introductory posts (which you should have read first, starting here) we face the main task of defining the entropy of a graph, something looking like this:


Relations

We will start by dividing the graph into a collection of "Relations": a Relation is a minimal graph where a pair of nodes A and B are connected by an edge representing the conditional probability P(A|B):
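A minimal sketch of how such a decomposition could be represented (my own illustration, with made-up node names and probabilities): the graph becomes a list of (A, B, P(A|B)) triples, one per Relation.

```python
from collections import namedtuple

# One minimal sub-graph: two nodes joined by an edge carrying the conditional probability.
Relation = namedtuple("Relation", ["a", "b", "p_a_given_b"])

def to_relations(graph):
    """Flatten a {node: {neighbour: P(node|neighbour)}} dict into a list of Relations."""
    return [Relation(a, b, p) for a, edges in graph.items() for b, p in edges.items()]

# Hypothetical example graph with made-up probabilities.
graph = {"rain": {"clouds": 0.8}, "traffic": {"rain": 0.6}}
print(to_relations(graph))
```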


Tuesday, 12 June 2018

Graph entropy 4: Distribution vs Graph

In previous posts, after complaining about Gibbs cross-entropy and failing to find an easy fix, I presented a new product-based formula for the entropy of a probability distribution; now I plan to generalise it to graphs.

Why is it so great to have an entropy for graphs? Because distributions are special cases of graphs, but many real-world cases are not distributions, so the standard entropy cannot be applied correctly to them.

Graph vs distribution

Let's take a simple but relevant example: there is a parking lot with 500 cars and we want to collect information about the kind of engines they use (gas and/or electric) so that we can finally present a measurement of how much information we have.

We will assume that 350 of them are gas-only cars, 50 are pure electric and 100 are hybrids (but we don't know this in advance).

Using distributions

If we were limited to probability distributions -as in Gibbs entropy- we would say there are three disjoint subgroups of cars ('Only gas', 'Only electric', 'Hybrid') and that the probabilities of a random car belonging to each subgroup are P = {p1 = 350/500 = 0.7, p2 = 50/500 = 0.1, p3 = 100/500 = 0.2}, so the result of the experiment of inspecting the engines of those cars has a Gibbs entropy of:

HG(P) = -𝚺(pᵢ × log(pᵢ)) = 0.2497 + 0.2303 + 0.3219 ≈ 0.8018

If we use the new H2 and H3 formulas, we get a different result, but the difference is just a matter of scale:

H2(P) = ∏(2 - pᵢ^pᵢ) = 1.2209 × 1.2752 × 1.2056 = 1.8771

H3(P) = 1 + log(1.8771) = 1.6297
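These values are easy to reproduce; here is a short check of the three quantities above using natural logarithms (the last digit may differ slightly depending on rounding):

```python
import math

P = [350 / 500, 50 / 500, 100 / 500]   # 0.7, 0.1, 0.2

HG = -sum(p * math.log(p) for p in P)        # Gibbs entropy
H2 = math.prod(2 - p ** p for p in P)        # product-based entropy
H3 = 1 + math.log(H2)

print(round(HG, 4), round(H2, 4), round(H3, 4))   # 0.8018 1.8772 1.6298
```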