But controlling a group of agents in an intelligent way is not that simple. I have always managed to make several agents evolve at the same time in the same environment, but it was done just by giving each agent its own personal intelligence, while treating the rest of the agents as mobile parts of the environment: obstacles to be avoided in order to survive.
Here is a real "swarm intelligence" controlling a group of agents as one:
Making a swarm of agents -a group of rockets, for instance- behave intelligently as a group requires, in some way, considering the group itself as another agent: a special agent at a higher level of abstraction that can push the individual agents to change their standard behaviours so that the group of agents, as a structure, follows a "multi-agent" goal.
This cannot be achieved by just adding a clever individual goal to all the involved players; they need to act as an intelligent structure before you can send them to solve a collaborative task. The structure formed by the sum of all the agents' states is in fact a new kind of multi-state that defines an intelligent agent formed by a group of also-intelligent agents: a fractal tree of intelligence layers.
The power of the idea presented here lies in the ability to break a complex problem into smaller pieces. Imagine a group of individual "fingers" acting as a solid hand with its own will and goals, then eight of those hands acting as a single octopus, and then a group of several octopuses receiving high-level orders like "bring this object here". This algorithm will take all those layers of goals and trigger individual actions on all the fingers, so the whole structure seems to be acting as a single intelligent being doing whatever it takes to safely bring the object here. We are not talking about individuals cooperating for a common goal, but about treating the problem as layers of intelligent agents acting as a single one.
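The fingers-hand-octopus hierarchy can be sketched as a tree of agents where each layer pushes goals down to the layer below until they become individual actions. This is only an illustrative toy (all class and variable names here are mine, not from the actual algorithm); in the real system the goal splitting would come from the group potential, not from simple inheritance:

```python
# Illustrative sketch (hypothetical names): a fractal tree of agents where
# each layer treats the layer below as agents with their own sub-goals.

class Agent:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def act(self, goal):
        """Split the incoming goal into sub-goals and push them down the tree."""
        if not self.children:
            return [(self.name, goal)]  # leaf agent: an individual action
        actions = []
        for child in self.children:
            # In the real algorithm the split comes from the structure's
            # potential; here each child simply inherits a tagged sub-goal.
            actions += child.act(goal + "/" + self.name)
        return actions

# Five fingers form a hand; eight five-fingered arms form an octopus.
fingers = [Agent("finger%d" % i) for i in range(5)]
hand = Agent("hand", fingers)
octopus = Agent("octopus",
                [Agent("arm%d" % j, [Agent("arm%d-f%d" % (j, i)) for i in range(5)])
                 for j in range(8)])

# A single high-level order fans out into 40 individual finger actions.
leaf_actions = octopus.act("bring-object-here")
```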
I present here a couple of preliminary videos showing my first attempts to accomplish this task with a fractal AI. I have focused on making a group of rockets fly in a given formation, so the inner work was about defining a reliable potential for a given structure formed by several agents.
In the first video shown above, a semi-rigid star-shaped structure was simulated where every rocket occupies a given position in the structure (the ideal distances between rockets are pre-set).
The red lines you see in the video seem to act as springs or rubber bands physically connecting the agents, but they don't really exist in the simulation. The forces they seem to apply to the players are "causal entropic forces": they make the agents prefer going to the positions where the imaginary springs would pull them. They just love to be there.
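One simple way to get this spring-like behaviour without any real spring is a scalar formation potential that penalises deviation from the pre-set ideal pairwise distances. The quadratic form below is my assumption of how such a potential could look, not the exact one used in the videos:

```python
import math

# Sketch (assumed form): a formation potential built from "imaginary springs".
# Lower potential = better formation; an agent that prefers low-potential
# positions behaves as if pulled by springs that do not exist.

def formation_potential(positions, ideal):
    """positions: list of (x, y) per agent.
    ideal[i][j]: pre-set target distance between agents i and j."""
    total = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = positions[i], positions[j]
            d = math.hypot(x2 - x1, y2 - y1)
            total += (d - ideal[i][j]) ** 2  # quadratic, spring-like penalty
    return total
```

When every pairwise distance matches its target the potential is exactly zero, and it grows smoothly as the formation deforms.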
The biggest problem with that formation "multi-goal" was that the relative positions of the players were too tightly defined, so they moved as a solid object. You clearly notice it when the formation gets trapped in big holes or keeps circling around a black obstacle. To make the structure more stable you need to find a balance between keeping formation and picking up energy from the environment to stay alive: individual goals and global goals need to be balanced before mixing.
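The simplest way to express that balance is a weighted blend of the two potentials; the linear form and the weight `w` below are hypothetical tuning choices, just to make the trade-off concrete:

```python
def mixed_potential(individual, group, w=0.5):
    """Blend a per-agent survival potential with a group formation potential.
    w is a hypothetical tuning weight: w = 1.0 means pure formation (rigid
    solid-object behaviour), w = 0.0 means every rocket for itself."""
    return w * group + (1.0 - w) * individual
```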
In the next video you can see a more advanced (but less spectacular) formation potential based on "water waves": if you imagine the space filled with water and then drop a stone at each player's position, the circular waves that form will mix into an interference pattern of high and low zones. The height of those mixed waves defines a scalar potential field, and making the fractal follow this potential leads to an "amorphous crystal net" of agents.
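The wave picture can be sketched directly: sum one circular, distance-damped wave per player and square the mixed height to get the scalar field. The wave shape, wavelength, and damping below are my assumptions, not the article's exact parameters:

```python
import math

# Sketch of the "water waves" potential: drop a circular wave at each
# player's position, sum them, and use the squared mixed height as a
# scalar potential field (wavelength and damping are assumptions).

def wave_height(d, wavelength=4.0):
    # A single circular wave: oscillates with distance and fades out.
    return math.cos(2.0 * math.pi * d / wavelength) / (1.0 + d)

def interference_potential(x, y, players):
    """Potential at (x, y) from the interference of all players' waves."""
    h = sum(wave_height(math.hypot(x - px, y - py)) for px, py in players)
    return h * h  # high where waves reinforce, low where they cancel
```

Agents drifting toward the high zones of this field settle into the loose, repeating spacing of an "amorphous crystal net" rather than a rigid body.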
The circles show the zones where the potential generated by each player's position will be highest, so the formation goal is to keep agents over the other players' circles as much as possible. As you can see, each player has several concentric circles representing waves of different sizes, so if you cannot be on the first circle, your second option is to try to touch the second one, and so on.
The potential generated by each player is the following function of the distance d (using the agent size as the unit distance):

P(d) = ( 0.8 + (1 - 0.8) * sin(π·d/2)² * 2/(d+1) )²
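A direct transcription of that formula makes its shape easy to check: the potential peaks at P = 1 on the first circle (d = 1), and the later peaks at d = 3, 5, ... are progressively damped by the 2/(d+1) factor, which is exactly the "touch the second circle if you cannot reach the first" behaviour:

```python
import math

# Direct transcription of the article's distance-to-potential formula,
# with d measured in agent-size units:
#   P(d) = (0.8 + (1 - 0.8) * sin(pi*d/2)^2 * 2/(d+1))^2

def pair_potential(d):
    return (0.8 + 0.2 * (math.sin(math.pi / 2.0 * d) ** 2) * (2.0 / (d + 1.0))) ** 2

# pair_potential(1.0) -> 1.0   (first, strongest circle)
# pair_potential(3.0) -> 0.81  (second circle, weaker)
# pair_potential(0.0) -> 0.64  (baseline at zero distance)
```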
The key idea here is that different distance-to-potential formulations can lead to sophisticated group behaviours like formations, and more imaginative ways of defining the structure potential could have the power to make the agents follow any order you wish.
The next step will be adding more useful group goals so they can all play group games (keeping a ball up in the air without touching the walls, for instance), but ultimately this serves as a solid algorithm for easily controlling complex robots (or groups of robots) and making them work on any task, as long as we can convert it into a potential field for the fractal to maximize.