Thursday, 20 November 2014

A Talk on Emotional Intelligence

Yesterday I presented the "Entropic Emotional Intelligence" model in a long (2h!) talk at the Computer Science Faculty of Murcia University, as a way to add intelligence to video game players:

In about one week there will be an official video (in Spanish) on the university TV site, which I will upload to YouTube so I can subtitle it in English.

The slides, in Spanish and English, are available on the blog's download page.

In the meantime, the paper I am working on is slowly getting to the finish line. I can't really publish it in any journal, as they only accept articles of 30 pages or less, not 100+ pages, so I will publish it directly on arXiv.

Tuesday, 4 November 2014

Is it the Terminator AI?

Most of the people I talk to about this magic algorithm tend to bring up the Terminator, which used a powerful artificial intelligence and decided to destroy the humans. Scary.

It is something that has really worried me during the process; really, I don't want to help create a perfect weapon any more than you would!

Busy writing

I consider the algorithm of the "Entropic Emotional Intelligence" almost fully completed, not in this blog, but in my mind. I will still need some months to put all the ideas into code and test it in its finished form. I have great expectations, but I think it will take a lot of CPU!

In the process of building the general version of the algorithm, I am also writing a complete academic paper detailing the algorithm with a more technical approach than I can follow in this blog.

It will keep me busy for some months, but after that time, I promise to add it to arXiv immediately.

Ah! I also found the idea for my next algorithm: it is about consciousness...


If the algorithm is to be generally usable, it must be able to gracefully deal with uncertainty.

Wednesday, 29 October 2014

Gain feelings

This is the 6th -and last- post about "Emotional Intelligence"; please visit Introducing Emotional Intelligence, goal-less intelligence, enjoy feelings, fear feelings and mood feelings before you go on reading, for a proper introduction to the subject.


To end the enumeration of the three basic feelings associated with a goal, we need to deal with gains and losses, and their proper feeling representation.

Gain or loss feelings

When something you had in the initial state, like your health level, is higher or lower in the final state representing the end point of the future, it means you have gained or lost something.
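As a minimal sketch of this idea, a gain or loss feeling can be read as the signed difference of a tracked property between the two ends of an imagined future. All names here are hypothetical stand-ins, not the blog's actual code:

```python
# Hypothetical sketch: a gain/loss feeling as the signed difference of a
# tracked property between the initial state and the future's end point.

def gain_feeling(initial_state, final_state, prop="health", strength=1.0):
    """Positive result = gain feeling, negative result = loss feeling."""
    return strength * (final_state[prop] - initial_state[prop])

# A future where health drops from 100 to 80 produces a loss of -20
print(gain_feeling({"health": 100}, {"health": 80}))  # -20.0
```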

Emotions full power

In this post I will just show you a couple of new videos using the full "emotional" model for the goals, and also a new system to auto-adjust the joystick sensitivity (I will comment on this in a future post; it is far more important than it seems).

Asteroid field

The first video shows 6 rockets being stressed by an asteroid field (50 of them) randomly falling down, and how the current intelligence can deal with this without getting nervous at all (this is thanks to the new joystick model).

I created this simulation because I needed some visual way to judge how competent the agents are in hard/delicate/stressing situations. It was the third or fourth video of a series, as it was almost impossible to make a rocket be hit by a rock using 10, 20 or 30 asteroids at the same time, so finally I tried with 50, and even then only one rocket got hit!

We need the algorithm to be rock solid and stable, so this kind of test is of great interest to me.

It is really the most remarkable video I have produced so far.

Monday, 27 October 2014

Mood feelings

This is the 5th post about "Emotional Intelligence"; please visit Introducing Emotional Intelligence, goal-less intelligence, enjoy feelings and fear feelings before you go on reading, for a proper introduction to the subject.


After discussing the simplest feeling associated with a goal, the enjoy feeling, and its counterpart, the fear feeling, and the way they are added up to calculate the global "step enjoy" feeling after an agent's change of state -or step- we are now going to start dealing with the enjoy modulators.

We will start with the "mood feelings", the simplest and most evident form of enjoy changers, and then turn to the strangest ones, the gain and loss enjoy modulators.

Fear feelings

This is the 4th post about "Emotional Intelligence"; please visit Introducing Emotional Intelligence, goal-less intelligence and enjoy feelings before you go on reading, for a proper introduction to the subject.


As commented in the Introducing Emotional Intelligence post, the goals, when they are defined using "feelings" toy models, have only three scoring parameters, three kinds of emotional "outputs".

The first of them corresponds to things you "enjoy" experiencing, like speeding, where you enjoy velocity. Enjoy feelings are basically added together into a general "enjoy feeling" score after each movement the agent makes.

But all three components of a goal, each kind of basic feeling (enjoy, moods and gains), have a reverse, a negative counterpart you need to know and properly manage in your algorithm.

Fear feelings

Saturday, 25 October 2014

Enjoy feelings

This is the 3rd post about "Emotional Intelligence"; please visit Introducing Emotional Intelligence and goal-less intelligence before you go on reading, for a proper introduction to the subject.

Enjoy feelings

Once I had this simulation with the goal-less algorithm working, I wanted to go further. The kart was really driving quite nicely, but it clearly was not optimal.

Why? The idea was so simple and powerful that the problem was not clear at first glance.

Wednesday, 22 October 2014

Goal-less intelligence

This is the 2nd post about "Emotional Intelligence"; please visit Introducing Emotional Intelligence before you go on reading, for a proper introduction to the subject.

Goal-less intelligences

In my first post I already commented on the internals of the simplest entropic intelligence possible, one that scores all the futures as 1. If you haven't read it and want to know this case in more detail, you can visit the link first. Anyhow, I will try to summarize the basic workings of this model again.

Tuesday, 21 October 2014

First "emotional" video

My code is still not "fully emotional" at this point; some cases are still not used and others lack more testing, but I am ready to produce my first video where goals are considered in this new "emotional" way.

The video just shows the old test case of a set of agents -karts in this case- moving around, where they must collect drops and then deploy them in squared containers to get a big reward, but this time they are rockets inside a cavern and follow the goals in a fully "emotional" way.

The changes are not totally evident in this case; the task is too simple to make a great difference, so surely I need to find more challenging scenarios for the next videos. But you will still notice the big step in the small details: how actively they pursue their goals and how efficiently they do it.

Monday, 20 October 2014

Introducing Emotional Intelligence

In the actual entropic intelligence algorithm, the scoring you assign to each different future you imagine is the golden key, as it determines how the options you are considering will compare to each other, ultimately defining the resulting "psychology" of the agent that makes it behave one way or the other.

These future scorings are made by adding up the effects of a set of different "motivations" or goals the agent has, like in "I love speeding", "I care about energy" or "I care about health", measured over the future's path, step by step, like in a path integral approximation.

Being able to define the right set of motivations for an agent, along with a proper way to calculate the different effects those motivations could have on every step the agent takes, and to mix them together to get the correct future's score, is ultimately what I am looking for and the leitmotiv of this blog.
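To make the idea concrete, here is a minimal sketch of that step-by-step, path-integral-like scoring. The goal functions and state fields are hypothetical stand-ins, not the blog's actual code:

```python
# Hypothetical sketch: a future's score is the sum, step by step along its
# path, of the effect of every goal the agent has.

def score_future(path, goals):
    """path: list of states (dicts); goals: list of per-step effect functions."""
    score = 0.0
    for prev, curr in zip(path, path[1:]):
        for goal in goals:
            score += goal(prev, curr)
    return score

love_speed  = lambda prev, curr: curr["speed"] * 0.1               # "I love speeding"
care_energy = lambda prev, curr: curr["energy"] - prev["energy"]   # "I care about energy"

path = [{"speed": 0,  "energy": 100},
        {"speed": 10, "energy": 95},
        {"speed": 20, "energy": 92}]
print(score_future(path, [love_speed, care_energy]))  # 3.0 - 8.0 = -5.0
```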

Friday, 19 September 2014

Adding evolution to the mix

My second experiment was about using a simple evolutionary algorithm to fine-tune the goal strengths in order to get the best possible set of strengths given the environment you place the players in.

I added a "magic" check box to the app so you can switch on this evolution, then add new players and goals if desired, and let the population grow and the best adapted be selected over time.

Follow my orders!

After some months without working on this algorithm, I am back with some new ideas to code, but before that, I want to show you a couple of experiments I made before the summer break.

The first one, shown at the talk at Miguel Hernandez University in an early stage, is just a concept test: could this intelligence be used to drive a vehicle effortlessly and safely, while following your directions in real time?

Imagine a real car using this algorithm to drive you anywhere; you can let it drive for you as in a Google car, but with the added ability of a "semi-autonomous mode".

Tuesday, 1 July 2014

Why do reductive goals exist?

Ten minutes ago I discovered what exactly "reductive" goals represent, or so I think.

As you know (otherwise, read the old posts first), this entropic intelligence needs a simulation of a system to be able to work, but also, if you expect it to do some hard work for you, you need a set of "goals" that represent how much you earn when the system travels from point A to point B.

Those goals I already talked about could be categorised as "positive" goals, like earning points for the meters you run or the energy you pick up. Then we also needed "reductive" goals to make it work properly.

Sunday, 29 June 2014

On the media for the first time!

I want to share a recent article by Francis Villatoro (@emulenews), one of the most important science bloggers in Spanish, about the video of the seminar I held.

The original article is in plain Spanish, but Google does a nice job this time, so you can read it translated into English if you wish.

I can't be happier today!

Friday, 27 June 2014

Video: Seminar about entropic intelligence

Last May I held a little seminar (90 mins.) about this concept of "entropic intelligence" and how it could be used for optimizing and cooperative games, at Miguel Hernandez University in Elche, Spain (UMH).

It was a talk in Spanish, and YouTube doesn't allow me to edit any subtitles, so don't trust even the automatic Spanish subtitles; I had a look around and well, it was a big joke!

It is by far the best way to "catch up" with all the concepts presented on this blog!

Friday, 16 May 2014

Cooperating... or not.

Cooperating is quite easy in this framework: if you get 10 points in your score for a given future, then the other players will also get those 10 extra points. That simple.

So, if we are all cooperating on a goal, then we all share a common scoring for that goal (being the sum of the scorings for this goal of all the players), no matter who exactly got each point.

In the case of a reductive goal, it is the same: all the players reduce their scoring with the reductive goal a single kart gets, so again there is a single reductive coefficient (multiply the reductive coefs. of all the players to get it) that is shared by all the players.

This last point is not free of trouble: if a player dies, its reductive goal for health drops to zero, so my own scorings will all be... zero! So I lose the joy of living and let the rocket fall down to the ground and break... uh! Not so good to cooperate in those conditions!

The following video shows a group of players cooperating on all their goals. The effect is not very evident just because one layer of intelligence only simulates five seconds or so, and that is not long enough to really appreciate the difference. I hope the use of a second layer of AI (not implemented yet) will make it much more visible.
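A minimal sketch of this shared-scoring rule as I read it (hypothetical code, not the actual app): the cooperative points are the sum of everyone's points, and the shared reductive coefficient is the product of everyone's coefficients, so a single dead player zeroes the whole team:

```python
# Hypothetical sketch of cooperative scoring: summed points, multiplied
# reductive coefficients, shared by every player in the team.

def cooperative_score(individual_scores, reductive_coefs):
    shared_points = sum(individual_scores)
    shared_coef = 1.0
    for c in reductive_coefs:
        shared_coef *= c                    # one coefficient for the whole team
    return shared_points * shared_coef

# Three karts scoring 10, 5 and 0, with health coefficients 1.0, 0.5, 0.5
print(cooperative_score([10, 5, 0], [1.0, 0.5, 0.5]))  # 15 * 0.25 = 3.75

# If one player dies (its coefficient drops to 0), everybody scores zero
print(cooperative_score([10, 5, 0], [1.0, 0.5, 0.0]))  # 0.0
```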

Wednesday, 14 May 2014

A seminar on optimizing using entropic intelligence.

This past Monday I held a small seminar at the University Miguel Hernandez (UMH) of Elche about optimizing using this entropy-based AI; shortly there will be a nice video of it, but in Spanish, my friends (I will try to subtitle it in English if I have the rights to do so with the video, and the patience).

The note about the conference can be found here (again, the little abstract is in Spanish, and Google Translate didn't work for this URL, at least for me):

A Google translation of the abstract, not so bad... once I had fixed some odd wordings:


Entropy is a key concept in physics, with an amazing potential and a relatively simple definition, but it is so difficult to calculate in practice that, apart from being a great help in theoretical discussions, not much real use of it is possible.

Tuesday, 22 April 2014

Layers and layers of intelligence.

These days I have been busy ironing out the ideas about how different "levels" of entropic intelligence could be layered, one over the other, to make up our complex and sophisticated mind.

I have come up with a very simple -once you get the idea- way to arrange it like a bunch of layers of "common sense" placed one on top of the previous one.

There are at least two ways of explaining it: algorithmic (for programmers) or entropy laws (for physicists), so I will focus first on the algorithmic aspect, so you can make your own "multilayered common sense intelligence machine" if you are in need (I am already at work on it, but it is still far from done).

So let's go for it the easy way, using the good old "kart simulation" example.

To be clear about the problem we are facing, I will just go to the point: using 100 crazy blind monkeys to randomly drive the kart in the 100 futures I have to imagine, and then taking a "common sense" decision based on the things those crazy monkeys did, maybe, just maybe, was not such a clever idea after all.
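Those "crazy blind monkeys" are just random rollouts. Here is a toy sketch of the idea; the 1D simulation is a hypothetical stand-in, far simpler than the real kart:

```python
# Hypothetical sketch: each imagined future is a random rollout of
# joystick actions through the simulation.
import random

def random_future(state, step, actions, horizon):
    """Drive the simulation `horizon` steps picking actions at random."""
    for _ in range(horizon):
        state = step(state, random.choice(actions))
    return state

# Toy 1D "kart": the state is a position, the actions push it left or right
step = lambda pos, a: pos + a
endpoints = [random_future(0, step, [-5, +5], horizon=10) for _ in range(100)]
print(len(set(endpoints)))  # distinct end points the 100 monkeys found
```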

Negative goals

We have seen how "common sense" works and how to bend it to our liking by adding positive and reductive goals; the video clearly showed the benefit of the mix of goals used, but are they enough to avoid danger, or do we need something more... powerful?

Negative goals are quite natural for us: if the kart lowers its health by 10%, you can think of it as a mere "reduction" applied to the positive goals -distance raced squared in this case- or as something purely negative: a -10 in the final score.

If we try to get the same results as in the previous video but using some sort of negative goals, we will end up with something odd: the fear is OK in some really dangerous situations, as it helps you avoid them effectively, but too much fear, a big negative scoring arising at some moment, will make the "common sense" freeze. You have added a "phobia" to something.

Monday, 21 April 2014

"Reduction" goals

In the last post we described "common sense" and how to use it with positive goals, but I also commented on how badly we need to learn to deal with negativeness: as has always been said, a little fear is good.

In the physical layer of the algorithm, we always talk about entropy, and it is no different this time, so let's go down to the basics to understand how to think about negative goals the right way.

A living thing is a polar structure; it follows two apparently opposite laws of entropy at the same time.

First of all, a living thing is a physical thing, so all the physical laws of the macroscopic world we live in apply, and we know that means obeying the second law of thermodynamics: the instantaneous entropy always has to grow, and in the optimal possible way.

On top of this physical law sits the second one: keep your internal entropy low, as low as possible.

Robotic psychology?

Entropic intelligence, or the natural tendency of intelligent beings to do whatever it takes in order to maximize entropy generation (but measured not at the present moment, rather at some point in the future), not only generates intelligent behaviour, as the original paper's authors suggested; it is the missing part we needed to push current AI into real "human-like" intelligence.

It is now a year since my first contact with this idea, and for the first time I find myself prepared to name it correctly and give a definition of what this algorithm is really doing inside.

During all this time, the "intelligence" algorithm itself and the resulting behaviours have been named -in my thoughts, the code and the posts here- with many different words, basically because I didn't know what exactly was emerging in the simulations, just that it seemed deeply related to the intelligence concept somehow.

Monday, 31 March 2014

Dancing with the danger

As I mentioned in the last post, negative scoring was meant to be mandatory in the AI, so level 7 brought the possibility of using that negativeness for good.

Now we have here the first example of how good negative scoring can be: a new tendency to not go outside the track, even if a big drop at the edge of the track is attracting you to disaster.

Before the explanations, a first video with 3 karts:

Tuesday, 25 March 2014

The new intelligence level 7

For the 3rd or 4th time, I think the new improvement in the base intelligence algorithm could be the final one, so let's be cautious about this: maybe the new AI level 7 is the perfect one.

But let's start with the origin: I really needed negative scorings on the futures. I previously thought it was an abomination, and from a philosophical point of view it still is, but it is mandatory if you want to have a real AI, one that can compete to accomplish a goal, to beat an opponent or to get as high a score as possible: to have an intelligence that is useful for optimizing, for game theory, etc.

Why it turned out to be mandatory to have negative scoring will be covered in a future post; for now I will just introduce level 7, and we will start with a visual comparison against some previous intelligence levels on my good old "test circuit".

Friday, 21 March 2014

No more suicides

The last video was quite impressive, a rocket flying inside a cave at full speed, but for me it was really disappointing: it showed quite clearly a suicide tendency in the AI.

As you can see, the rocket leaves the track 2 or 3 times as it gets too fast to stop before crashing, but if you look closely, you will see that the crash WAS not impossible to avoid, not at all; somehow the rocket put its hands down, stopped fighting, gave up and let itself crash without even trying to escape. Why?

As I commented in the last post, I considered using a negative scoring tendency to "keep alive" the player: if you die before 2 seconds while imagining a future, then give it a negative scoring so the player will actively avoid it. It was a really desperate attempt, as score is internally an "entropy gain", and allowing negative values is like allowing entropy to decrease with time... it is a physical abomination, and I am really happy it didn't work out in my tests (in the end I didn't use negative scoring, but something as ugly as this).

Wednesday, 19 March 2014

Driving a rocket inside a cave!

Version 0.7 of the software came with a great generalization of all the parts that make up this AI (at the cost of a noticeable drop in performance), so it is now possible to use a base class of "Player2D" to create a brand new kind of vehicle quite easily, with a minimum amount of code: just define the vehicle params, its simulation code, the drawing stuff, and that's all.

It was then time to try a brand new creature and compare it with the old known kart. I decided to code a classical rocket that travels the circuit as if it were a vertical cave, with a gravity force downwards that only applies to rockets, and see how the AI deals with it.

I have to admit I made it way too powerful, and given that the AI will try to maximize entropy, it is sort of natural that it drives the rocket as fast as it can. Notice how much it likes to spin around as a way to hover around a place before it decides where to go: the quicker it rotates, the more future options it has, as it can leave the spin in any needed direction in a short time. That is why spinning is a nice option in its eyes.

So here you have a rocket and a kart trying to get sweet drops from the circuit; look carefully at the rocket as it enters the narrowest parts of the circuit, it is amazing how well the AI is managing such a difficult-to-master kind of ship:

Monday, 17 March 2014

Mixing goals

The next video shows 36 karts driving around, with 3 goals each:

1) Move fast.
2) Pick up the drops.
3) Store the drops.

The internals are easy:

1) When you drive N meters, you score N.
2) When you drive over a drop of size 5, if you have internal storage left (a kart has a capacity of 100), you "upload" the drop and get 5 points into your scoring.
3) When you drive over a "storage" rectangle, if it still has free space, your internal storage is "downloaded" into the storage and you receive scoring for it.

The result is something like a group of ants collecting food and accumulating it in some spots:
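The three rules above can be sketched in a few lines. This is a hypothetical toy version following the numbers in the post, not the real app's code:

```python
# Hypothetical sketch of the three goal rules on a toy kart state.

CAPACITY = 100

def drive(kart, meters):
    kart["score"] += meters                       # rule 1: N meters -> N points

def pick_drop(kart, drop_size=5):
    if kart["storage"] + drop_size <= CAPACITY:   # rule 2: upload if room left
        kart["storage"] += drop_size
        kart["score"] += drop_size

def deploy(kart, container):
    moved = min(kart["storage"], container["space"])  # rule 3: download
    kart["storage"] -= moved
    container["space"] -= moved
    kart["score"] += moved

kart = {"score": 0, "storage": 0}
drive(kart, 30); pick_drop(kart); deploy(kart, {"space": 50})
print(kart["score"])  # 30 + 5 + 5 = 40
```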

Wednesday, 12 March 2014

Garbage collectors

In the last post I showed you "motivations"; after it I renamed them, in the code and in my mind, to "goals", much shorter and more general!

But in this previous video there were only "personal goals".

Each player (or kart) had its own set of motivations, and that is why only the orange kart was able to eat the orange sweet drops on the track: only this kart was able to "see" them.

This time I have added "team goals", so a goal is shared by all of the players in the team. Now, if I add a "sweet drops" goal, all the karts will fight to get the sweet drops on the track:

Tuesday, 11 March 2014

Introducing "motivations"

The intelligence at level 5 is quite nice, almost perfect, or maybe perfect, but there is something we can't control: the goals.

We never know what the AI will decide to do to solve the puzzle we present to it, nor do we know which goal, if any, it will follow.

It is quite nice to have something like this, as it can react to unexpected scenarios as if it were used to them, but it would also be lovely to be able to "drive" the course of action towards some more mundane goals: domesticate the intelligence and make it follow our likings.

Redefining goals will make this AI suitable for optimizing -in any sense- any system you can define and simulate, in an "intelligent" way. Anything. Amazing.

Beyond entropy

Level 5 of intelligence seems to reflect the actual definition of entropy in the original paper, so before going any further, we will write it in pseudo-code and embed in it the example of the kart seen in the video 1 entry:

Friday, 7 March 2014

AI level 4 and 5: Entropy and energy

In this video, the yellow kart is using "level 3" intelligence, so it scores an option with N different futures as Ln(N), and that would be OK if all futures were equiprobable, but they aren't.

In the case where not all futures are equiprobable, you have to switch to another, more complex way of calculating entropy.

When you have N microstates but each one has a different probability of happening, call it P(i), for the instantaneous or "classical" entropy we use:

S = -Sum over all possible microstates of P(i)*Ln(P(i))
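In code, that classical (Gibbs) entropy is a one-liner; here is a direct translation, with the constant k dropped as elsewhere in the blog:

```python
# Classical entropy of a set of future probabilities: S = -sum p*ln(p)
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

# Four equiprobable futures recover the level-3 formula, Ln(4)
print(abs(entropy([0.25] * 4) - math.log(4)) < 1e-9)  # True

# Unequal probabilities always give less entropy than the uniform case
print(entropy([0.7, 0.1, 0.1, 0.1]) < math.log(4))  # True
```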

AI level 3: Using real entropy

Intelligence "Level 3"
Looking back at levels 1 and 2 of our AI, you can notice that in both cases we scored each option using just a count of the different futures we were able to find.

But this is not a real entropy definition!  If a macrostate has N possible and equiprobable microstates, then its entropy is not just N, it is:

S = k*Ln(N)

As k is constant, we can forget about it, and so instead of using N to score each option as in the first videos, we now use Ln(N) for the orange kart:
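As a sketch, the change from the early levels to level 3 is just one line, swapping the raw count N for Ln(N); the helper names here are hypothetical:

```python
# Hypothetical sketch: scoring one option from its imagined end points.
import math

def score_option_level1(endpoints):
    return len(set(endpoints))            # levels 1-2: the raw count N

def score_option_level3(endpoints):
    return math.log(len(set(endpoints)))  # level 3: Ln(N), with k dropped

endpoints = [3, 7, 7, 9, 12]              # 4 distinct end points
print(score_option_level1(endpoints))     # 4
print(round(score_option_level3(endpoints), 3))  # 1.386 (= Ln 4)
```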

Sunday, 2 March 2014

Intelligence Level 2

Video 4: Intelligence level 2

In video 1 we commented on the simplest way to implement the entropic intelligence on a kart: count how many different end points you have in the futures that start by choosing "+5", compare with the number you get for the choice "-5", then average and take this as your decision. We will call this "intelligence level 1".
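One possible reading of that level-1 rule as a toy sketch (the 1D simulation and all names are hypothetical, not the blog's code): count the distinct end points reachable behind each first choice, then favour the option with more of them:

```python
# Hypothetical level-1 sketch: fix the first action, drive the rest of
# each future at random, and count the distinct end points found.
import random

def count_endpoints(pos, first_action, horizon=8, n_futures=50):
    ends = set()
    for _ in range(n_futures):
        p = pos + first_action                # the option under evaluation
        for _ in range(horizon - 1):
            p += random.choice([-5, +5])      # blind random driving
        ends.add(p)
    return len(ends)

scores = {a: count_endpoints(0, a) for a in (+5, -5)}
decision = max(scores, key=scores.get)        # prefer the richer option
print(decision in (+5, -5))  # True
```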

But as simple as it seems, almost every aspect of the AI explained in video 1 can be redefined so that the driving of the AI "looks" more natural, as if the driver were a real driver doing his best.

By the way, choosing a kart simulation as a test-bed for the algorithm has proven to be a really good choice, as it is very easy to just observe two karts driving side by side, each one with a different version of the AI, and tell which one of them is doing a better job. It wouldn't have been that easy with another simulation.

So, stepping over videos 2 and 3, which just show intelligence level 1 solving different circuits -you can watch them via the "YouTube" link above- we jump to video 4, the first one to really level up the intelligence to level 2:


Saturday, 1 March 2014

The entropy

Before going any further on the algorithm itself, we will stop for a moment on the real meaning of those "causal entropic forces" the algorithm is based on.

This is a little technical -but quite interesting- and I will try my best to be easy to follow, but feel free to skip this and focus on the algorithmic-only articles if you want. You will get as much knowledge of the AI as you need to apply it, but be warned: when it comes to defining your own system, adjusting the params of the AI and polishing the way you measure how much better a new situation is compared to a previous one, understanding the underlying physics will give you extra insight into the process and will help you pinpoint the weak points of your implementation.

Disclaimer: I am not a physicist, just a rusty mathematician and a programmer who loves reading about the subject, so please be benevolent when commenting! I just intended for anyone to get a clear picture of the concept itself and of the extreme power under the nice-sounding word "entropy".


Entropy is a very powerful physical concept; it is behind almost all the laws of classical physics. It is quite simple in its definition, but almost impossible to use directly in any real-world calculation.

Thursday, 27 February 2014

Video 1 - The basics.

Before this blog I used to publish videos in a YouTube playlist about how the AI was better at this or that; it gave me a way to get comments on the subject.

Now that the blog is up and working, I would like to start by reviewing all those "old videos" and explaining how the algorithm was working at the time, how good it was and which things needed to be changed.

So today we will comment on the first video. It is the most important one to understand in order to get the whole idea of the algorithm, so read carefully and comment on any aspect you feel is not made clear in this post; I will try my best to help.

So start by watching the video:

Wednesday, 26 February 2014

Welcome to the Entropic AI blog

In this, my first post, I would like to introduce you to the history behind this kart simulation.

Back in April 2013, I read an article (big thanks to José Elias for the greatest blog) about a new approach to artificial intelligence based solely on entropy concepts, driven just by thermodynamical laws, that was surprisingly good at making physical systems, of any kind, "behave intelligently": Causal Entropic Forces by Alexander D. Wissner-Gross.

It caught my attention, so I jumped from link to link to link in search of some more insight into it. Reading those links made me understand the idea even before taking a look at the original paper, and when I read the word "Montecarlo", the algorithm popped up in my mind.

So I sat down and coded a quick and dirty approach to the algorithm in a couple of days, resulting in the first version of the kart simulator. It was very, very simple, but the AI managed to drive the kart around the track quite impressively... and I didn't code anything like "run" or "drive inside the track".