Looking back at levels 1 and 2 of our AI, you may notice that in both cases we scored each option using just a count of the different futures we were able to find.
But that is not a real definition of entropy! If a macrostate has N possible and equiprobable microstates, then its entropy is not just N; it is:
S = k·ln(N)
Since k is a constant, we can drop it, so instead of using N to score each option as in the first videos, we now use ln(N) for the orange kart:
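A minimal sketch of this scoring rule, assuming we already have the number of surviving futures for an option (the helper name `entropy_score` is mine, not from the videos):

```python
import math

def entropy_score(num_futures: int) -> float:
    """Score an option by the entropy of its N equiprobable futures.

    Boltzmann's formula is S = k * ln(N); k is just a constant scale
    factor, so we drop it and use ln(N) directly.
    """
    if num_futures == 0:
        # No surviving futures at all: worst possible score.
        return float("-inf")
    return math.log(num_futures)
```

Note that ln(N) preserves the ordering of options (more futures still means a higher score), so the kart's choices only change when the counts are combined or compared on different scales.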
Here we have three karts, each one using a different entropy formula:
The white kart uses "level 1": the count of futures that do not end in a crash. This is not a real entropy approximation, but it is kept for nostalgic reasons. As you can see, the kart sometimes freezes; that happens because so many futures have been discarded that too few are left to make sense of the situation.
The yellow kart has the "level 2" AI, so it uses N, the number of different futures found, to score its options. Again, not really an entropy formula, but quite close, and it works reasonably well.
The orange kart is on "level 3" and is the one doing it nearly right: it uses ln(N) to score each option. If all futures were really equally probable (they are not; long runs are harder to obtain than short ones), we would be using THE REAL ENTROPY to drive the kart.
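The three scoring rules above can be sketched side by side. This is an illustrative reconstruction, not the videos' actual code: I assume each simulated future is a dict with a `"crashed"` flag, which is my own invention for the example.

```python
import math

def score_level1(futures):
    # White kart: count only the futures that did not end in a crash.
    return sum(1 for f in futures if not f["crashed"])

def score_level2(futures):
    # Yellow kart: count every distinct future found, crashes included.
    return len(futures)

def score_level3(futures):
    # Orange kart: ln(N), the entropy of N equiprobable futures
    # (up to the dropped constant k).
    n = len(futures)
    return math.log(n) if n else float("-inf")
```

Levels 2 and 3 always rank options in the same order; the difference only matters once scores are combined with probabilities, which is where the next entry picks up.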
Only by facing the fact that the futures are not equally probable will we be able to make this AI better, much better, but that will come in another entry.