Entropic intelligence, or the natural tendency of intelligent beings to do whatever it takes to maximize entropy generation (measured not at the present moment but at some point in the future), not only generates intelligent behaviour, as the original paper's authors suggested: it is the missing piece we needed to push actual AI into real "human-like" intelligence.
It is now a year since my first contact with this idea, and for the first time I find myself prepared to name it correctly and give a definition of what this algorithm is really doing inside.
During all this time, the "intelligence" algorithm itself and the resulting behaviours have been named, in my thoughts, in the code and in the posts here, with many different words, basically because I didn't know what exactly was emerging in the simulations, just that it somehow seemed deeply related to the concept of intelligence.
Don't expect any kind of mathematical proof; the concepts we are dealing with are not properly defined in current science, so I can only rely on my intuition, and on the fact that intelligent behaviour does emerge, visually, in the videos.
I started calling it "entropic intelligence", but there are levels inside this idea and I needed different names or concepts for each part. Lately I have switched to "brute intelligence" as the name for the simplest algorithm, and "goals" for the different additions needed to push this basic intelligence up to something usable.
Common sense
In the simplest version of this algorithm, the one that scores each option with the log of the number of different possible futures found, what we get is basically a generic way to numerically represent and simulate the part of intelligence usually known as "common sense". So is "common sense" living its last days as the "least common" of all the senses? Yes, I really think so.
What exactly does this mean? Well, this simple algorithm can be applied to any system you can simulate, without any previous knowledge of the system and without any additional information or goals to drive the intelligence. It is a "neutral" way to add intelligence to anything we can think of.
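As a rough illustration, a minimal sketch of this brute layer could look like the Python below. Everything in it is invented for the example: `simulate` stands for whatever forward model of your system you have, and `discretize` is just one possible way to decide when two end states count as "different" futures.

```python
import math
import random

def discretize(state, resolution=1.0):
    """Bucket a continuous end state into a coarse cell, so that
    'different futures' means 'noticeably different end states'."""
    return tuple(round(x / resolution) for x in state)

def brute_intelligence(state, actions, simulate, n_rollouts=100, horizon=50):
    """Score each option with the log of the number of distinct futures
    reachable after taking it, then pick the best scored option.
    No goals, no prior knowledge of the system: just the forward model."""
    best_action, best_score = None, float("-inf")
    for action in actions:
        ends = set()
        for _ in range(n_rollouts):
            s = simulate(state, action)                  # take the option...
            for _ in range(horizon):
                s = simulate(s, random.choice(actions))  # ...then walk randomly
            ends.add(discretize(s))
        score = math.log(len(ends))                      # entropy-like score
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```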
Imagine you are going to let this AI drive a kart, or a helicopter, or manage a whole industrial process by moving joysticks, and you just ask it to do it carefully, with common sense, trying at any cost to have it up and running when you come back from your lunch break. This is what you are getting with this AI: a babysitter capable of taking care of ANY system you give it, but with no specific goals except "keep it up and running for me".
Witch "golden rule" could you think of to condense this idea? "Always move into situations with as many reachable futures as possible".
The extended version could be: avoid situations that only have a few possible ways of continuing; instead, choose situations with lots of different reachable futures, so that if one of those possible ways to continue gets "blocked" in the short term, you always have plenty of other choices to take.
Don't try to do fancy things with your RC helicopter, like making low passes over our heads at high speed: those courses of action have only a few possible ways to get through, maybe only one, and they can lead you to disaster if the only way to survive is to be lucky and rest on the assumption that nothing unexpected will block your narrow way. Instead, keep the helicopter high and moving, so whatever happens next, you always have plenty of different options to take and survive.
It is simple, universal, neutral, and it works impressively well. You can watch it working in this video, which shows a kart on a track provided only with this "common sense":
Artificial psychology
In the full version of the algorithm, what we represent and simulate is the full "psychological" part of our minds, the one that decides what exactly to do at any moment (and also what to avoid) based on the supplied simulation of reality, plus a "psychological" component that ultimately dictates what this intelligence will "like" or "dislike" doing.
In this process of "thinking", which we already showed in all the past posts, this kind of AI just needs to be fed two external things: a way to imagine what would happen if it does this or that (a simulator of the system, not necessarily very accurate), and a measure of how much better, or worse, each change it can simulate is.
This second part acts as the "distance" the system has to "walk" to go from point A (the initial state of the system) to the final state B (the simulated end position of the system after a given time). Both states of the system, A and B, are technically said to be in the "phase space" of the system, a way of saying they are two among all the possible states of the system over time.
By adding such a function to the brute intelligence of the entropic AI, we are technically defining a metric in the phase space of the system.
This could seem like just a set of mathematical and basic physics details to be solved and forgotten, but the kind of function we finally apply will, in fact, determine how the system behaves. You are defining the intelligence's personality.
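To make this concrete, here is one plausible shape such a scoring function could take. It is only a sketch: `goal_distance`, `end_states` and the two example personalities are invented for the illustration, not lifted from the application.

```python
import math

def score_option(start, end_states, goal_distance):
    """Aggregate the 'distance walked' from the start state A to every
    simulated end state B. Swap goal_distance and you swap the
    personality; the entropic machinery around it stays untouched."""
    total = sum(goal_distance(start, b) for b in end_states)
    return math.log(total) if total > 0 else float("-inf")

# Two example personalities over the same (x, y) phase space:
def fearless(a, b):
    # Only cares about how far it got, no matter how.
    return (b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2

def cautious(a, b):
    # Same goal, but diminishing returns damp the appetite for speed.
    return math.sqrt(fearless(a, b))
```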
I have made extensive tests with all kinds of functions that made any sense to me, and carefully watched the resulting behaviours in the videos produced afterwards.
Sometimes the karts behaved fearlessly, other formulas brought real fear to them, while others just made them cautious; but as a general rule, adding complexity to the formulas without much care about how we do it ends up in some kind of pathological personality. Bipolarity, schizophrenia, suicidal tendencies (a whole lot of them) will arise if you just play around with this distance formula. If you use a distance definition that allows negative distances, for instance, you will have to deal with fear: a fear that will make your intelligence panic when confronted with situations that give negative distances anywhere it decides to go, and freeze it.
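In toy form, and assuming that staying put scores a neutral zero, the freeze looks like this:

```python
# Toy illustration: when every move only leads to negative "distances",
# the least-bad option is to go nowhere, and the intelligence freezes.
scores = {"left": -3.2, "right": -1.7, "throttle": -5.0, "stay": 0.0}
print(max(scores, key=scores.get))   # -> "stay": the panic freeze
```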
So the question here is: which mathematical form should the goals I use to define the metric take in order to get a useful, trustworthy behaviour?
The short answer is: the ones that make a balanced personality. My wife is a fairly good psychologist and we have spent many late hours talking about all this, and we are both quite surprised at how perfectly it all matches.
But to be a little more concrete, I can offer you a recipe that only uses three kinds of simple goals:
1) Positive goals are the best ones: they just add an amount to the distance, and if you need to mix several of them, you just need to sum them all. They are the only ones that will build a real distance, and the only ones you need if you plan to get a perfect intelligence.
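In code, mixing positive goals is then plain addition. The goals below are invented for the example (kart states are (x, y, energy) tuples here), not the exact ones from the application:

```python
def raced_distance(a, b):
    # Straight-line distance covered between the two states.
    return ((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5

def energy_recovered(a, b):
    # Clamped at zero so the goal can only ever add, never subtract.
    return max(0.0, b[2] - a[2])

def combined_distance(a, b):
    # Mixing several positive goals is just summing them all.
    return raced_distance(a, b) ** 2 + energy_recovered(a, b)
```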
For instance, scoring each of the kart's futures with the square of the euclidean distance raced, the first goal-driven intelligence I simulated, gives you a near-optimal intelligence capable of rivaling any human. But it worked nicely only because I swept away any risky ending in the simulation phase: when the kart crashed into a wall during the simulation of a future, I just ended the future there, and so the "raced distance" was smaller.
You can visually compare the previous "common sense only" kart with another one given a "psychological" tendency to run as fast as possible, and judge for yourself:
Had I been more realistic in my simulation by calculating the bounces of the kart against the track limits, as I eventually did, then the futures would no longer stop after a crash, and the raced distance would also sum the part after the bounce, making crashing much more attractive to the intelligence. It would not fear breaking the kart at all, so eventually it would just crash and die.
It is a silly and 100% optimistic intelligence, and believe me, that is not a nice combination. The videos showing this behaviour were never recorded, but you can easily simulate it in the V1.0 application: just move the "Keep health" slider from the initial strength of 1 (rightmost position) down to 0, and watch your karts crash and break into pieces (well, the simulation won't make the kart burn in flames, but it will crash and stop moving).
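For what it's worth, here is my guess at what that slider amounts to, a hypothetical health weighting of each future's score, not the application's actual code:

```python
def health_weighted_score(raced_distance, health, keep_health=1.0):
    # health is the kart's remaining integrity: 1.0 intact, 0.0 destroyed.
    # keep_health = 1 (slider right): crashed futures are worth nothing.
    # keep_health = 0 (slider left): health is ignored and the silly
    # 100% optimist comes back, crashing happily.
    return (raced_distance ** 2) * (health ** keep_health)
```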
Basically we have two ways to go from here: try to take the "silly" part out of the combo, or the "100%" out of the optimistic part. Both are possible to simulate, but here we will just consider using our basic brute layer-1 intelligence and see how far we can go before we think about seriously simulating higher levels of intelligence, the multi-layered model I will hopefully have in production some months from now.
So if the intelligence you are using is not state of the art, if you are using one layer of this entropic intelligence as in the cases I am actually simulating, then you will need to add some sort of "negative goals".
I refused for a long time to even consider this option. After all, entropy is about always having positive growth; so if this distance has to mimic some kind of entropy-driven process, and if aesthetically using a "distance" that is not a real distance to build a metric seems an aberration to you, then maybe we should start by forbidding any kind of goal that is not strictly about adding a positive value.
But we need them, so I stopped complaining and eventually found a nice way to deal with it.
In the next of these "psychological" posts I will show you a couple of ways to add negative goals to the model without going too mad (the intelligence, not you), which will give us some nice videos showing more complex behaviours; but be warned: they will have odd consequences.