Wednesday 13 June 2018

Graph entropy 5: Relations

After some introductory posts (that you should have read first, starting here) we face the main task of defining the entropy of a graph, something that looks like this:


Relations

We will start by dividing the graph into a collection of "Relations": minimal graphs where a pair of nodes A and B are connected by an edge representing the conditional probability of one event given the other, P(A|B):
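To fix the idea, here is a minimal Python sketch (the names Relation and split_into_relations, and the edge encoding, are placeholders of mine, not definitions from the theory) of how a weighted graph could be broken into one such piece per edge:

    from typing import NamedTuple

    class Relation(NamedTuple):
        """A minimal two-node graph: nodes a and b plus the edge weight P(a|b)."""
        a: str
        b: str
        p_a_given_b: float

    def split_into_relations(edges):
        """Break a weighted graph given as {(a, b): P(a|b)} into one Relation per edge."""
        return [Relation(a, b, p) for (a, b), p in edges.items()]

    # A toy graph with two conditional-probability edges.
    graph = {("A", "B"): 0.8, ("B", "C"): 0.25}
    print(split_into_relations(graph))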


Tuesday 12 June 2018

Graph entropy 4: Distribution vs Graph

In previous posts, after complaining about Gibbs cross-entropy and failing to find an easy fix, I presented a new product-based formula for the entropy of a probability distribution, but now I plan to generalise it to a graph.

Why is it so great to have an entropy for graphs? Because distributions are special cases of graphs, but many real-world cases are not distributions, so the standard entropy cannot be applied correctly to those cases.

Graph vs distribution

Let's take a simple but relevant example: there is a parking lot with 500 cars and we want to collect information about the kind of engine they use (gas and/or electric) and finally present a measurement of how much information we have.

We will assume that 350 of them are gas-only cars, 50 are pure electric and 100 are hybrids (but we don't know this in advance).

Using distributions

If we were limited to probability distributions -as in Gibbs entropy- we would say there are three disjoint subgroups of cars ('Only gas', 'Only electric', 'Hybrid') and that the probabilities of a random car belonging to each subgroup are P = {p_1 = 350/500 = 0.7, p_2 = 50/500 = 0.1, p_3 = 100/500 = 0.2}, so the experiment of inspecting the engines of those cars has a Gibbs entropy of:

H_G(P) = -Σ(p_i × log(p_i)) = 0.2497 + 0.3219 + 0.2303 = 0.8018

If we use the new H_2 and H_3 formulas, we get a different result, but the difference is just a matter of scale:

H_2(P) = ∏(2 - p_i^p_i) = 1.2209 × 1.2752 × 1.2056 = 1.8771

H_3(P) = 1 + log(1.8771) = 1.6297
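As a quick numeric check of the three figures above, here is a minimal Python sketch (assuming natural logarithms throughout, as the Gibbs value suggests):

    import math

    p = [0.7, 0.1, 0.2]   # only-gas, only-electric, hybrid

    # Gibbs entropy: H_G(P) = -sum(p_i * log(p_i))
    h_gibbs = -sum(pi * math.log(pi) for pi in p)

    # Product-based entropy: H_2(P) = prod(2 - p_i^p_i)  (math.prod needs Python 3.8+)
    h2 = math.prod(2 - pi ** pi for pi in p)

    # Logarithmic version: H_3(P) = 1 + log(H_2(P))
    h3 = 1 + math.log(h2)

    print(round(h_gibbs, 3), round(h2, 3), round(h3, 3))   # -> 0.802 1.877 1.63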



Monday 11 June 2018

Graph entropy 3: Changing the rules

After showing that the standard Gibbs cross-entropy was flawed and trying to fix it with an also-flawed initial formulation of a "free-of-logs" entropy, we faced the problem of finding a way to replace a sum with a product without breaking anything important. Here we go...

When you define an entropy as a sum, each of the terms is supposed to be "a little above zero": a small, positive ε ≥ 0, so that adding it can only slightly increase the entropy. Also, when you add a new probability term with (p=0) or (p=1), you need this new term to be 0 so it doesn't change the resulting entropy at all.

Conversely, when you want to define an entropy as a product of terms, they need to be "a little above 1" in the form (1+ε), and the terms associated with the extreme probabilities (p=0) and (p=1) cannot change the resulting entropy, so they need to be exactly 1.

In the previous entropy this ε term was defined as (1 - p_i^p_i), and now we need something like (1 + ε), so why not just try (2 - p_i^p_i)?
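Before going on, it is easy to check numerically that this candidate term behaves as required: it is exactly 1 at p = 0 and p = 1, and stays strictly between 1 and 2 in between. A minimal Python sketch:

    import math

    def term(p: float) -> float:
        """Multiplicative entropy term (2 - p^p), relying on the convention 0^0 = 1."""
        return 2 - p ** p   # Python evaluates 0 ** 0 as 1

    # Exactly 1 at the extremes, so it leaves a product unchanged; strictly between
    # 1 and 2 for 0 < p < 1, with its peak at p = 1/e.
    for p in (0.0, 0.2, 1 / math.e, 0.5, 0.9, 1.0):
        print(round(p, 3), round(term(p), 4))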

Let us be naive again and propose the following formulas for entropy and cross-entropy:

H_2(P) = ∏(2 - p_i^p_i)

H_2(Q|P) = ∏(2 - q_i^p_i)
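In code, these two formulas could look like this (a minimal Python sketch; the function names are mine):

    import math

    def h2(p):
        """H_2(P) = prod(2 - p_i^p_i)."""
        return math.prod(2 - pi ** pi for pi in p)

    def h2_cross(q, p):
        """H_2(Q|P) = prod(2 - q_i^p_i); well defined even when some q_i is 0."""
        return math.prod(2 - qi ** pi for qi, pi in zip(q, p))

    P = [0.7, 0.1, 0.2]
    Q = [0.5, 0.0, 0.5]   # q_2 = 0 is no problem: 0 ** 0.1 = 0, so that term is just 2
    print(round(h2(P), 4), round(h2_cross(Q, P), 4))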

Once again it looked too easy to be worth researching, but once again I did, and it proved (well, my friend José María Amigó actually proved it) to be a perfectly defined generalised entropy of a really weird class, with Hanel-Thurner exponents (0, 0), something never seen in the literature.

As you can see, this new cross-entropy formula is perfectly well defined for any combination of p_i and q_i (in this context, we are assuming 0^0 = 1) and, if you graphically compare both cross-entropy terms, you find that, for the Gibbs version, the term is unbounded (when q=0 its value goes up to infinity):

φ_G(p, q) = -(p × log(q))



In the new multiplicative form of entropy, this term is 'smoothed out' and nicely bounded between 1 and 2:

φ_2(p, q) = (2 - q^p)
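A quick numeric comparison of both terms makes the difference obvious (a minimal Python sketch):

    import math

    def gibbs_term(p, q):
        """Gibbs cross-entropy term -(p × log(q)): unbounded as q -> 0 when p > 0."""
        return -p * math.log(q)

    def mult_term(p, q):
        """Multiplicative term (2 - q^p): always stays between 1 and 2."""
        return 2 - q ** p

    p = 0.5
    for q in (0.5, 0.1, 0.01, 1e-6):
        print(q, round(gibbs_term(p, q), 4), round(mult_term(p, q), 4))
    # The Gibbs term grows without bound, while the multiplicative term only approaches 2.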


Sunday 10 June 2018

Graph Entropy 2: A first replacement

As I commented in a previous post, I found that there were cases where cross-entropy and KL-divergence were not well defined. Unluckily, in my theory those cases were the norm.

I had two options: not mentioning it at all, or trying to fix it. I opted for the first, as I had no idea how to fix it, but I felt I was sweeping a big issue with the theory under the carpet, so one random day I tried to find a fix.

Ideally, I thought, I would only need to replace the (p_i × log(p_i)) part with something like (p_i^p_i), but it was such a naive idea that I almost gave up before having a look. Still, I did: how different do those two functions look when plotted on their domain interval (0, 1)?

Wow! They were just mirror images of each other! In fact, you only need a small change to match them: (1 - p_i^p_i):
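You can reproduce that visual comparison numerically with a minimal Python sketch:

    import math

    # -p*log(p) is the (sign-flipped) Gibbs term; 1 - p^p is the proposed replacement.
    for p in (0.01, 0.1, 0.3, 1 / math.e, 0.7, 0.99):
        gibbs = -p * math.log(p)
        replacement = 1 - p ** p
        print(f"{p:.3f}  {gibbs:.4f}  {replacement:.4f}")
    # Both vanish towards p = 0 and at p = 1 and peak at p = 1/e, which is why the plots
    # look so alike: p^p = exp(p*log(p)) ≈ 1 + p*log(p) whenever p*log(p) is close to 0.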


Graph entropy 1: The problem


This post is the first in a series about a new form of entropy I came across some months ago while trying to formalise Fractal AI, its possible uses as an entropy of a graph, and how I plan to apply it to neural network learning and even to generating conscious behaviour in agents.

Failing to use Gibbs

The best formula so far for the entropy of a discrete probability distribution P = {p_i} is the so-called Gibbs-Boltzmann-Shannon entropy:

H(P) = -k × Σ(p_i × log(p_i))

In this famous formula, the set of all possible next states of the system is divided into a partition P with n disjoint subsets, with p_i representing the probability of the next state being in the i-th element of this partition. The constant k can be any positive value, so we will just assume k = 1 from now on.

Most of the time we will be interested in the cross-entropy between two distributions P = {p_i} and Q = {q_i}, or the entropy of Q given P, H(Q|P): a measure of how different they are or, in terms of information, how much new information there is in knowing Q if I already know P.

In that case, the Gibbs formulation for the cross-entropy goes like this:

H(Q|P) = -Σ(p_i × log(q_i))

Cross-entropy is the real formula defining an entropy, as the entropy of P can be defined as its own cross-entropy H(P|P), which has the property of being the minimal value of H(Q|P) over all possible distributions Q (this is Gibbs' inequality).

H(P) = H(P|P) ≤ H(Q|P)
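A quick numeric illustration of this inequality, using a made-up Q (a minimal Python sketch):

    import math

    def gibbs_cross(q, p):
        """H(Q|P) = -sum(p_i × log(q_i)); needs q_i > 0 wherever p_i > 0."""
        return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

    P = [0.7, 0.1, 0.2]
    Q = [0.4, 0.3, 0.3]          # any other distribution over the same partition
    print(gibbs_cross(P, P))     # ≈ 0.80, which is H(P)
    print(gibbs_cross(Q, P))     # ≈ 1.00, larger, as Gibbs' inequality guarantees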

As good and general as it may look, this formula hides a very important limitation I came across when trying to formalise the Fractal AI theory: if, for some index i, you have q_i = 0 while p_i is not zero, the above formula is simply not defined, as log(0) is as undefined as 1/0 is.
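In code, the breakdown is immediate (a minimal Python sketch of the failing case):

    import math

    P = [0.7, 0.1, 0.2]
    Q = [0.8, 0.0, 0.2]   # q_2 = 0 while p_2 = 0.1 > 0

    try:
        h = -sum(pi * math.log(qi) for pi, qi in zip(P, Q))
    except ValueError as err:     # math.log(0.0) raises a "math domain error"
        print("H(Q|P) is not defined here:", err)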