I want to share a recent article about the video of the seminar I held, written by Francis Villatoro (@emulenews), one of the most important science bloggers in Spanish.
The original article is in plain Spanish, but Google Translate does a nice job this time, so you can read it in English if you wish.
I can't be happier today!
So-called "intelligent behaviour" can be defined in a purely thermodynamic language, using just "entropy". The formulae look pretty intimidating, but once you get the idea, coding it into a working AI is quite simple. Fractalizing the same idea takes the entropy calculation away from the AI and makes it work much better.
Sunday, 29 June 2014
Friday, 27 June 2014
Video: Seminar about entropic intelligence
Last May I held a little seminar (90 mins.) about this concept of "entropic intelligence" and how it can be used in optimizing and cooperative games, at Miguel Hernandez University in Elche, Spain (UMH).
It was a talk in Spanish, and YouTube doesn't allow me to edit any subtitles, so don't trust even the automatic Spanish subtitles; I had a look at them and, well, they were a big joke!
It is by far the best way to "catch up" with all the concepts presented on this blog!
Friday, 16 May 2014
Cooperating... or not.
Cooperating is quite easy in this framework: if you get 10 points in your score for a given future, then the other players also get those 10 extra points. That simple.
So if we are all cooperating on a goal, then we all share a common score for that goal (namely, the sum of the scores all the players got for this goal), no matter who exactly got each point.
In the case of a reductive goal it is the same: all the players reduce their score with any reduction a single kart gets, so again there is a single reductive coefficient (multiply the reductive coefficients of all the players to get it) that is shared by all the players.
This last point is not free of trouble: if a player dies, its reductive coefficient for health drops to zero, so my own scores will all be... zero! I lose the joy of living and let the rocket fall to the ground and break... uh! Not so good to cooperate in those conditions!
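The shared scoring above can be sketched in a few lines. This is a minimal illustration, not the blog's actual code; the function name and arguments are made up for the example:

```python
# Cooperative scoring: positive scores are pooled (summed), while reductive
# coefficients multiply together into one shared coefficient for everyone.

def cooperative_score(positive_scores, reductive_coefs):
    """positive_scores: each player's own score for the shared goal.
    reductive_coefs: each player's reductive coefficient in [0, 1].
    Returns the single score every cooperating player uses."""
    shared_positive = sum(positive_scores)   # points are pooled, no matter who got them
    shared_coef = 1.0
    for c in reductive_coefs:                # reductions multiply together...
        shared_coef *= c                     # ...so one dead player (coef 0) zeroes everyone
    return shared_positive * shared_coef

# Three karts, one of them dead (health coefficient 0.0): everyone scores zero.
print(cooperative_score([10, 5, 8], [0.9, 1.0, 0.0]))  # 0.0 -- the trouble described above
```

Note how the multiplication makes the "dead player" problem inevitable: a single zero coefficient wipes out the whole team's score.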
The following video shows a group of players cooperating on all their goals. The effect is not very evident, just because one layer of intelligence only simulates five seconds or so, and that is not long enough to really appreciate the difference. I hope the use of a second layer of AI (not implemented yet) will make it much more visible.
Wednesday, 14 May 2014
A seminar on optimizing using entropic intelligence.
This past Monday I held a small seminar at the University Miguel Hernandez (UMH) of Elche about optimizing using this entropy-based AI. Shortly there will be a nice video of it, but in Spanish, my friends (I will try to subtitle it in English if I have the rights to edit the video, and the patience).
The note about the conference can be found here (again, the little abstract is in Spanish, and Google Translate didn't work for this URL, at least for me):
http://cio.umh.es/2014/05/07/conferencia-de-d-sergio-hernandez-cerezo.html
A Google translation of the abstract, not so bad... once I fixed some odd wordings:
Abstract:
Entropy is a key concept in physics, with amazing potential and a relatively simple definition, but it is so difficult to calculate in practice that, apart from being a great help in theoretical discussions, not much real use of it is possible.
Tuesday, 22 April 2014
Layers and layers of intelligence.
These days I have been busy ironing out the ideas about how different "levels" of entropic intelligence could be layered, one over the other, to make up our complex and sophisticated mind.
I have come across a very simple way -once you get the idea- to arrange it as a bunch of layers of "common sense", each placed on top of the previous one.
There are at least two ways of explaining it: algorithmic (for programmers) or via entropy laws (for physicists), so I will focus first on the algorithmic aspect so you can build your own "multilayered common sense intelligence machine" if you are in need of one (I am already at work on it, but it is still far from done).
So let's go for it the easy way, using the good old "kart simulation" example.
To be clear about the problem we are facing, I will go straight to the point: using 100 crazy blind monkeys to randomly drive the kart in the 100 futures I have to imagine, and then taking a "common sense" decision based on the things those crazy monkeys did, may be, just may be, not such a clever idea after all.
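The "crazy blind monkeys" step can be sketched as plain random rollouts. This is a toy illustration under assumptions of mine, not the kart simulator's real code: `simulate` is a stand-in for the physics, and the action names are invented:

```python
import random

ACTIONS = ["left", "right", "straight"]

def simulate(state, actions):
    """Stand-in physics: rewards going straight, plus some noise.
    Purely illustrative; the real kart simulator is far richer."""
    return sum(1.0 for a in actions if a == "straight") + random.random()

def common_sense_decision(state, n_futures=100, horizon=50):
    """Imagine n_futures randomly-driven futures (one blind monkey each),
    then pick the first action whose imagined futures scored best on average."""
    totals = {a: 0.0 for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}
    for _ in range(n_futures):
        path = [random.choice(ACTIONS) for _ in range(horizon)]  # the blind monkey drives
        score = simulate(state, path)
        totals[path[0]] += score
        counts[path[0]] += 1
    return max(ACTIONS, key=lambda a: totals[a] / counts[a] if counts[a] else 0.0)
```

The weakness the post points at is visible here: every future after the first action is pure noise, so the signal about which first action is best gets diluted as the horizon grows.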
Negative goals
We have seen how "common sense" works and how to bend it to our liking by adding positive and reductive goals. The video clearly showed the benefit of the mix of goals used, but are they enough to avoid danger, or do we need something more... powerful?
Negative goals are quite natural for us: if the kart lowers its health by 10%, you can think of it as a mere "reduction" applied to the positive goals -distance raced squared, in this case- or as something purely negative: a -10 in the final score.
If we try to get the same results as in the previous video but using some sort of negative goal, we end up with something odd: fear is ok in some really dangerous situations, it helps you avoid them effectively, but too much fear, a big negative score arising at some moment, will make "common sense" freeze. You have added a "phobia" to something.
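The difference between the two accountings can be shown with made-up numbers. Neither function below comes from the blog's actual code; they just contrast the two formulas:

```python
def reductive(score, health_fraction):
    """Reductive goal: damage scales the positive score down proportionally."""
    return score * health_fraction

def negative(score, penalty):
    """Negative goal: damage subtracts a fixed amount from the final score."""
    return score - penalty

print(reductive(100.0, 0.9))  # 90.0 -- still proportional to how well you did
print(negative(100.0, 10.0))  # 90.0 -- the same here, but...
print(negative(5.0, 200.0))   # -195.0 -- a big penalty turns a mildly good
                              # future catastrophic: the "phobia" freeze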
Monday, 21 April 2014
"Reduction" goals
In the last post we described "common sense" and how to use it with positive goals, but I also commented on how badly we need to learn to deal with negativity: as has always been said, a little fear is good.
In the physical layer of the algorithm we always talk about entropy, and this time is no different, so let's go down to the basics to understand how to think about negative goals the right way.
A living thing is a polar structure: it follows two apparently opposite laws of entropy at the same time.
First of all, a living thing is a physical thing, so all the physical laws of the macroscopic world we live in apply, and we know that means obeying the second law of thermodynamics: the instantaneous entropy always has to grow, and in the optimum possible way.
On top of this physical law sits the second one: keep your internal entropy low, as low as possible.
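A rough way to put the two laws together in code, under my own assumptions (counting distinct reachable futures as the external entropy, and using health as a crude proxy for internal order; none of this is the blog's actual formula):

```python
import math

def entropic_score(reachable_futures, health):
    """First law: many distinct reachable futures means high entropy
    production, so more is better (Boltzmann-style S = log W).
    Second law: the agent itself must stay ordered, so the score is
    weighted by health (1.0 = intact, 0.0 = dead)."""
    external = math.log(max(reachable_futures, 1))
    internal_order = health
    return external * internal_order

print(entropic_score(1000, 1.0))  # healthy agent with many open futures: high score
print(entropic_score(1000, 0.0))  # dead agent: everything scores zero
```

The multiplication captures the polarity: no amount of external entropy production counts once the agent's own order is lost.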