Thursday 8 September 2016

Generating consciousness

One week ago I wrote this post about an insight into what "consciousness" could be like, and I imagined it as something not as hard to grasp as we have always thought. Today I come back with a "pseudo-code" version of it in my mind.

These new ideas came along with an effort at our company to port the fractal algorithm to a distributed, highly scalable architecture, a work in progress that is already producing a great speed-up in our tests.

This new architecture allows me to play with big groups of fractals disseminated over a network of PCs, all doing the same decision work in parallel, and to later join all their findings and take a "collegiate" decision.

More interestingly, I can now "pack" some fractals to work as one big fractal and replicate it endlessly to build a tree of cooperating fractals, a nice way to distribute work over the PCs on the network.

But how to arrange them to distribute the work even more efficiently? By building the whole thing as a "fractal of fractals": a tree of fractals whose structure evolves dynamically as you use it, finally forming a nice tree-like fractal that adapts its form to live in the environment you gave it.



This new bigger structure, even when used as a static tree-like graph (easier to build on real-life PCs), has the power to generate not only intelligence, but consciousness.

To understand what this consciousness is, we need to zoom out on this big fractal tree-like structure, from a detailed close-up to the bigger picture.

At the microscopic level, the fractal is made up of "states" of the system. A "future" the agent is imagining when thinking is just a possible future state that we make evolve in time to form the fractal structure, but the very tip of this future is just a state, a "position" of our agent.

This initial node of the fractal only knows how to simulate itself to tell us where the agent will be after a little time step passes. If you calculated this evolution using physics, you would look for the feasible change in the state that makes the entropy (of the whole system, the universe) grow at the highest speed (if the second law of thermodynamics is to be assumed). You could think of it as a decision problem: the system has to decide where to go in order to maximize the entropy of the universe.
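
In pseudo-code, this single-step decision could look like the minimal sketch below. Take it as an illustration only: `simulate_step` and `entropy` are hypothetical stand-ins for the simulator and the entropy estimator, since the actual algorithm has not been published.

```python
def decide_step(state, simulate_step, entropy, n_candidates=32):
    # Sample feasible next states from the simulator and keep the one whose
    # estimated entropy is highest, i.e. the change that makes the entropy
    # of the whole system grow fastest over this little time step.
    candidates = [simulate_step(state) for _ in range(n_candidates)]
    return max(candidates, key=entropy)
```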

Now we have a "small" fractal like the one I use for intelligence: a tree-like structure that evolves in time, having on each of its tips -final nodes- a feasible future state.

The effect of this new structure is quite similar to the one seen before: it aims to maximize an entropy again, but one calculated after a time horizon, and by doing so, instead of getting a "physically correct" behaviour, we get an "intelligent" behaviour. Kind of magic.
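
Again as illustration only, the horizon version could be caricatured as below. This is a flat-sampling sketch rather than the branching fractal itself, reusing the hypothetical `simulate_step` and `entropy` from the sketch above.

```python
def decide_with_horizon(state, simulate_step, entropy, n_futures=64, horizon=20):
    # Evolve many independent "futures" for `horizon` time steps, then keep
    # the first step of the future whose tip ends up with the highest entropy.
    best_first, best_score = None, float("-inf")
    for _ in range(n_futures):
        first = simulate_step(state)   # first step of this future
        tip = first
        for _ in range(horizon - 1):   # make the future evolve in time
            tip = simulate_step(tip)
        score = entropy(tip)           # entropy after the time horizon
        if score > best_score:
            best_first, best_score = first, score
    return best_first
```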

But this fractal is goal-less, so it can only generate the simplest form of intelligence: "common sense". To make it truly intelligent, you need to add a goal and modify the entropy formula to take it into account (basically by tampering with the ergodic principle a little).

As long as you only supply one goal, the fractal can add it to the entropy formula and generate intelligent behaviour in the agent, aimed at trying to fulfil your (its?) goal.
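
One possible reading of "adding the goal to the entropy formula" is a simple biased score like the sketch below; the actual modification hinted at here is unpublished, so treat everything in it as an assumption.

```python
def goal_weighted_score(tip, entropy, goal, strength=1.0):
    # Bias the entropy of a future's tip by how well that tip satisfies
    # the goal; `goal` and `strength` are illustrative names only.
    return entropy(tip) + strength * goal(tip)
```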

That is really *all* an intelligence can do at this level: maximize one goal intelligently in a really generic way.

But wait: in my code and videos I manage to make the agents follow a set of simultaneous -and conflicting- goals, like keeping energy high, keeping health high, getting the rocks into the cave, etc. How?

Well, I just add them together into a single super-goal and ask the AI to follow only this one. Easy trick.

The problem here is that the mix of goals is made by applying a well-chosen relative strength to each of the goals. Maybe I need to set them to 90% importance for "keep energy high" and only 10% for health, and nothing for the rest, in order to survive in hostile environments, as when food is not so abundant. Setting this vector of weights for your different goals manually is kind of an art, as it defines the "personality" of the agent, and you really need a brave agent to survive in some environments, while a gentle one may work better in a kinder environment.
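
The "easy trick" above is then just a weighted sum, something like the sketch below, where the weights are the hand-tuned "personality" vector:

```python
def super_goal(tip, goals, weights):
    # Mix several conflicting goals into one super-goal using the
    # "personality" vector of relative strengths.
    return sum(w * g(tip) for g, w in zip(goals, weights))

# A possible personality for a hostile, food-scarce environment:
# 90% "keep energy high", 10% "keep health high", nothing for the rest.
example_weights = [0.9, 0.1, 0.0]
```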

I tried many times to automatically adjust those coefficients using the fractal I had, but finally I understood that this fractal could never achieve it. Anyhow, I added a "master volume" so I could modify how "temperamental" the agent was (as opposed to "common sense" only, which occurs at volume level zero), plus the "risk control" module, a nice heuristic that prevents the AI from forgetting about safety and just going straight for the food.

But if we now zoom out again and watch the full fractal, a "fractal of fractals of future states", we see again a tree-like structure like before, but now the nodes seem bigger, as there is a whole fractal of futures inside each one.

Now comes the magic: each node of this super-fractal represents a whole previous fractal, even with its own values for the goal strengths if I wish... so I can "play" with slightly different personalities -adding noise to the initial goal mix- and compare the resulting small fractals produced while a decision is being made.

To compare two fractals I need a new super-goal that will signal the best fractal growth among all the fractals that form our super-fractal, and this goal is aimed at maximizing how "beautiful" the fractal is... yes, it sounds crazy, but it is the way I do it manually: try some goal strengths, record a small video, repeat with another goal mix, compare videos... the right choice always produces the most pleasant tree structure, one that seems more "alive" and fresh than the others.

I sometimes call this the "leafy coefficient", other times the "enlightenment coefficient" (as it seems to me that thinking with these kinds of good-looking fractals is the actual key to enlightenment), but the winner was the "cat coefficient", as the more cats a video has, the more visits it receives (quite a silly winner, I must admit, but now it is a meme in my head).

Using this "cat coef" as the potential (and euclidean distance over the goal strenght coefs vector) I can make grow this "fractal of fractals" exactly in the same way the small-one did, except that now its pourpouse, apart of taking the best decisions, is also to decide how to change its "personality" to adapt, wich goals should have a the higher priority over time.

So physics was about deciding where the simulation will move the agent (physical laws), forming a line in time; intelligence is a one-level fractal that decides where to go -pushing on the degrees of freedom of the system, the "agent's joysticks"- in order to maximize a goal over long periods of time; while consciousness is a two-level fractal, self-similar to the previous one, that apart from helping take the right decisions, can change the goal mix in real time to better adapt its personality to the situations and the future to come.

I don't expect to be playing with "consciousness" in the next few months, as it is not needed for the problems I am working on, but since the new structure of the code is so perfect for what I need, adding it should even be easy to code, at least using a "static tree" model.

10 comments:

  1. "You could think of it as a decision problem: the system have to decide where to go in order to maximize the entropy of the universe."

    Yes!! That's the key to everything in our Universe. That's the basic law that dictates every change and phenomenon. Your thinking is really interesting, Sergio. Go ahead!!

    Best regards,
    Samu.

  2. It has grown in my mind the clear idea that the universe, at all scales, uses a single fractal "law" of decision making: at small scales, it tells particles, atoms, molecules, rocks and planets where to move so that their behaviour looks "physically correct". It also tells cells, animals, plants, or humans what to decide in order to behave "intelligently", and now, applied one more time at a bigger scale, it is telling the psychology of intelligent beings where to move and how to adapt, making the resulting behaviour look like "consciousness".

    My great philosophical question these days is: what would a next-level fractal look like, and what would it be trying to maximize? Is there a deeper scale? A goal deeper than being conscious? What comes next in this list: physical laws, intelligent behaviour, consciousness, ...?

  3. Sergio, I've been thinking and researching about these ideas for a few months now, after being inspired by your blog. Thank you for sharing your work. I'm going to share with you something very important I've discovered through my research.

    There seems to be another powerful way of thinking about this 'maximum entropy principle', and it is called the 'principle of least action' (PLA). "Action" is defined as the integral of the system's Lagrangian (an energy quantity) over time. All natural systems spontaneously choose to evolve in a way that minimizes action (minimum time and/or minimum system energy). The PLA is already considered one of the most, if not THE most, fundamental concepts of all of modern physics, and basically all of classical, quantum and relativistic physics can be derived from it. What's interesting is that there is a paper that mathematically shows that the PLA is equivalent to the maximum entropy principle (link: https://www.researchgate.net/profile/Atanu_Chatterjee5/publication/283316641_On_the_Thermodynamics_of_Action_and_Organization_in_a_System/links/56326a8a08ae242468d9f77e.pdf). Taken together, the maximum entropy principle gives you the direction of motion, while the PLA gives you HOW the system gets there (in the fastest time and/or with the lowest energy usage).

    While the PLA already forms the basis of all modern physics, I would say the major new discovery is that the PLA actually applies to all systems, even complex ones such as the human mind and human economic and political systems. In fact, the PLA/maximum entropy principles provide the key to understanding precisely how and why complexity arises in nature. Complexity is nature's way of spreading energy in the fastest, most efficient way (in other words, maximizing entropy while exerting least action).

    Indeed, I do think understanding this is the key to creating 'artificial' consciousness. Would love to chat more about this.

    Replies
    1. Hi Juan, the PLA has been on my mind while researching the fractal AI, but after some attempts to implement it by reversing the time direction and coming back to the "present" state by minimizing action (read about it here http://entropicai.blogspot.com.es/2015/06/using-feynman-integrals.html), I found I didn't really need to worry about it, as it was already there "for free".

      When you use entropy growth to determine your instant decisions, you are not only "pointing yourself toward the maximum-potential zone": the final path you trace through the successive decisions will also minimize the time needed to get there, so both potential and action are optimized at the same time.

      For instance: if there is a zone of high potential in front of you, and there are two ways to get there, the fractal will reach it first using the shortest one; then this option -the first step you made to get there- will gain lots of futures for its cause, more and more as time passes, so finding the second way becomes less probable the more extra time it needs to be found.

      But there is a second reason not to worry: if all the futures you are evolving to take the decision share a single "environment", as if they were real agents traveling the space at the same time, the first one to reach a high-potential zone can "eat" this potential, so when you finally reach this zone via a second path, you will find the potential to be lower than it was and not as rewarding.

      So doing one is equivalent to doing the other, as you pointed out in your comment, but dealing with potential is more direct and simple than dealing with energy-increment integrals (and using fractals eliminates the concept of, and the need for, path integrals at all).

      In the end, everything is based on the potential at one point and not on the difference of potential between two points; this is why you deal with entropy and not with action: it is just simpler.
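
      As a tiny sketch of this "eating" of potential (all names hypothetical, assuming the futures share one potential value per cell of the environment):

      ```python
      def eat_potential(potential, cell, fraction=0.5):
          # All futures explore one common potential field; whichever future
          # reaches a cell first "eats" part of its potential, so futures
          # arriving later via longer paths find the zone less rewarding.
          potential[cell] *= 1.0 - fraction
          return potential[cell]
      ```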

    2. Fascinating... Your reasoning for why the two are equivalent does make perfect sense.

      It is reassuring to know that by maximizing entropy over a time horizon you are effectively applying the PLA, which is already accepted as one of the most (if not the most) fundamental principles in nature. It is not just some arbitrary optimization that by chance produces interesting results.

    3. Entropy maximisation is a tautology, as you can reformulate it as "all closed systems tend to evolve toward the macrostates with the highest probabilities", so if the PLA is equivalent to it, it is a tautology too: "closed systems tend to evolve following the trace with the highest probability of being traced, where the path integral you mentioned represents this probability".

      So naming either of the two principles "the most fundamental" is at least tricky, as basically they both mean that systems evolve toward the most probable macrostate... it is a *really* basic principle!

      So a principle that basically says nothing except what "most probable" means is responsible for generating almost all known physics, including intelligent behaviour -if you spice it with a potential- and maybe even consciousness... I find it really remarkable!

  4. That's true, I hadn't thought of it that way. It boils down to the statistical probabilities of systems, and the most probable macrostates.

    My question is, why isn't everyone talking about this? This is huge... Keep up the good work. Regards. -J

    Replies
    1. Well, I am not the one to have an impartial answer to why people aren't talking about it, but I think there are some logical reasons:

      First: I haven't published the details of the algorithm itself, not on the blog and, more importantly for academics, not in a scientific paper.

      I am bad at writing papers; my mind works with drawings and intuitions, so converting it into a maths paper is hard for me. I am too lazy for that and, as I don't work at any university, having any "impact index" at all is irrelevant to me.

      About releasing the details of the algorithm: I always release what is no longer my "best version" of it, so now that I have made it distributed over a network, added memory, and have the "consciousness module" "drawn out" and waiting for coding time, maybe I will make a post with full pseudo-code of the "basic" fractal.

      With "basic" I mean you could fully understand and code it, play with different problems and potentials, but just to find out that there are some problems that needs more "modules" to make the fractal more flexible. For instance, you quicly find problems where controlling "risk" is mandatory, others that can not be solved without intelligence+memory, and multi-objetive optimisations that need "consciousness" to mix its goals properly.

      All those "modules" are not finished, they are working for me on some problems but still need to be generalized to all problems.

      Second: the fractal algorithm cannot really be converted into "simple" maths formulas; it is not like a complex path integral or similar. Instead, it creates a "Mandelbrot-like drawing", a "shape" you cannot express by any means other than the actual algorithm.

      Just imagine you have the code to generate the Mandelbrot set, and now you need to convert this shape into classic maths... you can't! Even with modern fractal theory you really have nothing: you can get a "fractal dimension" and other coefficients for it, but you cannot analyse its shape in any usable way. Fractals are to maths what little cat videos are to the internet: nice and useless.

      Third: it is just too good and simple to be accepted. Big claims need big proofs, and maybe the only "big proof" some people would accept is solving an "unsolvable" problem first.

      Global function maximisation (http://entropicai.blogspot.com.es/2016/02/serious-fractal-optimizing.html) could be one, but I already did it and it hasn't moved things much. Again, releasing pseudo-code for it would help. I promise to do it!

      So who knows; I always expect a big impact every time I post about some new "trick", but it *almost* never happens. Only once did a "big player" in Spanish-language science communication -Francis Villatoro- pay attention to my work (http://francis.naukas.com/2014/06/28/inteligencia-artificial-basada-en-la-entropia/), but that was all.

      Surely it is the way it has to be; at least it is OK with me.

  5. Have you ever considered writing an ebook or guest authoring on other blogs? I have a blog based on the same subjects you discuss and would love to have you share some stories/information. I know my visitors would appreciate your work. If you are even remotely interested, feel free to send me an email.

    Replies
    1. We will be releasing a "book" (or a 40+ page article, as you wish) about intelligence, with theory, an explanation of the algorithm, pseudo-code, GitHub code (to play Atari games at human-record levels), etc., and our deadline is next Monday. Stay tuned! Consciousness will be the next "book"... About writing on other blogs: I actually don't have time to write on my own blog, so not in the short term.
