Today I want to give an overview of how the "complete fractal AI algorithm" could look in a few months, the ideas I am currently working on, and especially some random thoughts about consciousness.
The part I am working on now is how to add memory to the fractal AI. So far, the fractal AI has been totally memory-less, meaning it does not learn from experience at all. I now call this a pure instinct-driven or intuitive mind. When you ask this AI something, it thinks about the problem from scratch and gives you an answer that is good enough for moving through its medium intelligently: a "real time" decision-making algorithm good enough for many tasks.
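To make "memory-less" concrete, here is a minimal sketch of such a decision loop. This is only my illustration, not the actual fractal AI (which explores futures in a much smarter way); `actions`, `simulate`, and `score` stand for hypothetical problem-specific callbacks.

```python
import random

def decide(state, actions, simulate, score, n_futures=100, horizon=50):
    """Pick an action by sampling random futures and keeping the first
    move of the best-scoring one. Nothing is remembered between calls:
    every decision is thought out from scratch."""
    best_action, best_score = None, float("-inf")
    for _ in range(n_futures):
        s, first = state, None
        for _ in range(horizon):
            a = random.choice(actions(s))   # random walk over options
            first = a if first is None else first
            s = simulate(s, a)              # problem-specific forward model
        if score(s) > best_score:           # how good is where we ended up?
            best_score, best_action = score(s), first
    return best_action
```

Call `decide(...)` again at the next time step and it starts over with no trace of what it learned before; that is exactly the limitation memory is meant to fix.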
But while driving a rocket in a difficult environment is hard yet can be done with just an intuitive fractal AI, most NP-hard problems (problems where the time needed to solve them grows exponentially with the problem size) are usually not so easy to solve with pure intelligence alone; you usually need something more to guide the intuition.
Memory fractal

Trying to solve one of those hard problems, I came to the conclusion that I needed to let the algorithm play with the problem for a while so it could somehow memorise those decisions that, in retrospect, proved to be right over time, and use all those memories to help the AI decide better and better as more experiences were stored and processed.
I was able to solve the problem with a simplified version of this idea, and now I am working on generalising the method into a truly fractal model of memory that will work in conjunction with the "memory-less" AI: the memory-less AI will create the memories, and the memories will help the AI by showing it paths of successive decisions that, in similar circumstances, worked fine. Memory acts as a new goal the AI has to follow, a goal whose potential is based on how similar the current state is to those in the memories, and on how good or bad those memories were.
I cannot show you anything about it yet, as it is not coded for the rocket case (I use the rockets as my general test bed, since it is a really complete problem for trying new ideas) and my preliminary use of it is still not working properly. Basically, I need to use a fractal model of memory instead of the more classical one I am using now; until then, I only have clues about how it should work out.
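That more classical memory model can be sketched quite simply, though. The version below is my deliberately simplified illustration (a similarity-kernel memory, not the fractal memory model I am after); the `Memory` class, the Gaussian kernel, and the `sigma` reach parameter are all assumptions of mine for the sketch.

```python
import math

class Memory:
    """Stores (state, goodness) pairs produced by the memory-less AI and
    turns them into an extra goal: states near good memories score high,
    states near bad ones score low."""
    def __init__(self, sigma=1.0):
        self.records = []      # list of (state_vector, goodness)
        self.sigma = sigma     # how far a memory's influence reaches

    def store(self, state, goodness):
        self.records.append((state, goodness))

    def potential(self, state):
        """Similarity-weighted average goodness of all past memories."""
        if not self.records:
            return 0.0
        num = den = 0.0
        for past, goodness in self.records:
            d2 = sum((a - b) ** 2 for a, b in zip(state, past))
            w = math.exp(-d2 / (2 * self.sigma ** 2))  # similarity kernel
            num += w * goodness
            den += w
        return num / den
```

The memory-less AI would then add `memory.potential(state)` to its usual score for a state, so futures that pass near previously good decisions get an extra pull.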
But I can tell you something interesting: my "perfect fractal", which I called the "Feynman fractal" because it allowed time travelling (in Feynman integrals and Feynman diagrams, particles and paths can travel forward or backward in time, and both directions are equally important), is actually equivalent to a memory fractal if the memory itself is coded as another form of fractal.
It doesn't work as I expected at all: futures do not need to actually travel back in time. Instead, recalling past memories and using them to decide has the same effect, and it is much simpler to manage, both mentally and in the code, than a complex time-travelling fractal.
Consciousness

Ah! The holy grail of AI, consciousness! I always dreamed that some day, after I came to deeply understand the fractal mind, I would naturally see what consciousness really is. I expected it to be some surprising twist to the fractal structure, like time travelling, but in some magical way I could not imagine.
But now I think it may not be such a dramatic thing after all. Instead, it may be just a simple mechanism for modifying the internal parameters of the fractal AI as it runs, a way for your mind to change its own working parameters to accommodate what is to come.
Although the idea is still half-baked in my mind, the following "real brain" example could clarify it a little:
Your intelligence, be it a neural network or not, uses some "scale of values" to measure how good or bad something is. For instance, feeling hungry could have a relative importance of 0.34 to your intelligence, while being thirsty may well be a little more important, say 0.65.
That is your normal "scale of values", the one that serves you well in everyday life. But imagine you need to travel down a long river crossing a desolate region. Your mind will probably decide to lower the importance of water and raise the importance of food, as it can foresee that the lack of food will be the real problem during your journey.
This process is not actually making you smarter, nor is it using memories to guide you along the river course; instead, it is fine-tuning your internal thinking parameters so the resulting behaviour is more likely to save your life.
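In code, the whole example is just a re-weighting of the scale of values before the intelligence runs. The numbers are the ones from the example above; the `score` function and the river weights are my illustrative assumptions.

```python
# Everyday "scale of values" (numbers from the example above).
values = {"hunger": 0.34, "thirst": 0.65}

def score(state, values):
    """Higher is better: each unmet need, weighted by its
    importance, pulls the score down."""
    return -sum(values[need] * state[need] for need in values)

# Foreseeing the river journey: water will be plentiful, food scarce,
# so the mind re-weights its own scale before intelligence runs.
river_values = dict(values, thirst=0.10, hunger=0.90)
```

The intelligence itself is untouched; only the weights it scores with have changed, and the behaviour changes with them.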
A process that modifies all the inner parameters of an AI to better adapt to a changing environment is, as I now see it, a proto-form of consciousness. Generalising this process and fusing it with the actual fractal AI could not only make it 100% parameter-less but, given enough intelligence and memory, make what we recognise as "consciousness" slowly emerge in the agent's behaviour.
So my bet is this: the fewer hidden parameters exist in your AI algorithm that the algorithm itself cannot evaluate and change if needed, the more conscious your algorithm will be. It is not an easy trick to build an AI that can change its own workings at will, but this is the path I will try to follow after the summer.
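One naive way to read this bet is a hill-climbing loop over the AI's own parameters. This is only a rough sketch of the idea, nothing more; `foresee` is a hypothetical callback that scores a parameter set against the predicted future.

```python
import random

def adapt_parameters(params, foresee, trials=20, step=0.05):
    """The proto-consciousness loop as sketched in the text: the agent
    perturbs its own working parameters and keeps whichever set its
    foresight says will work best in what is coming."""
    best, best_score = params, foresee(params)
    for _ in range(trials):
        candidate = {k: v + random.uniform(-step, step)
                     for k, v in params.items()}
        s = foresee(candidate)
        if s > best_score:          # keep the self-modification if it helps
            best, best_score = candidate, s
    return best
```

Run this every so often and no parameter stays hidden from the algorithm itself: everything it thinks with is also something it can rethink.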
As a side note: would a black-box algorithm like a neural network prove harder to dress up with the "emperor's new clothes" of consciousness?