You may think a lot of work was needed to plan the movements before starting the recording, plus a lot of code to control risk and so on, but that is not the case at all.
99.9% of the code is just "base code": the simulation of the system accounts for 80% (I coded it from scratch, the old-school way) and the general fractal AI, the same code I would use for any other case, for the remaining 19.9%. Nothing in all this code is specific to our particular "mining rocket" case.
In fact, with this 99.9% of "base code" alone, the AI can make the system (the rocket in this case, but you can swap in any other simulable system at hand) survive indefinitely, flying around gracefully and avoiding dangerous bottlenecks, with nothing else to care about except continuing to exist. This is the "common sense" part of the algorithm, and you cannot switch it off, unlike in some humans.
The 0.1% of the code that I had to implement to get all those rich behaviours to emerge was just defining "how good" a given situation is, as a non-negative real number. Zero is reserved for the worst case: the rocket is dead (or, more generally, you are outside the potential function's domain), while numbers above zero mean "I would like to be in that state this much", so you can compare two future outcomes and decide whether one is better or worse than the other.
I call this number the "state potential" because it resembles a scalar physical field, like the electric or gravitational potentials; you could call it "utility" or "gain" in your particular case.
The actual potential used works like this:
-Decide where (a 2D position) you want to get your hook: if the hook is empty, the target point is the position of the nearest rock around; if a rock is already on the hook, the target point is the deploy area (the small circle).
-Compare your initial distance to the target with the distance from your future position.
-If you get 20% nearer to the target, the potential of that future state is 0.2.
-If you end up farther away than you started, the potential is set to a small number, say 0.000001.
That is all. Plug this simple potential formula into the general fractal AI that is already keeping the rocket alive, and the AI will "automagically" find all those strategies and change its behaviour to follow them.
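The steps above can be sketched in a few lines of Python. This is only an illustration, not the author's actual code: the function name, argument layout, and use of `math.dist` are my own assumptions.

```python
import math

# Hypothetical sketch of the hook potential described above.
# All names and signatures are assumptions for illustration.
def hook_potential(future_hook_pos, hook_holds_rock,
                   nearest_rock_pos, deploy_pos, initial_dist):
    """Return a non-negative 'state potential' for a future hook position.

    Zero is reserved (elsewhere) for dead rockets; here we only score
    living futures by how much closer they got to the current target.
    """
    # Target: nearest rock when the hook is empty, deploy area otherwise.
    target = deploy_pos if hook_holds_rock else nearest_rock_pos
    future_dist = math.dist(future_hook_pos, target)
    # Relative improvement: getting 20% nearer yields a potential of 0.2.
    improvement = (initial_dist - future_dist) / initial_dist
    if improvement > 0:
        return improvement
    # Farther than we started: a tiny but strictly positive potential.
    return 1e-6
```

For example, with an empty hook at the origin, the nearest rock at (8, 0) and an initial distance of 10, the future state is 20% nearer, so the potential comes out as 0.2.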
Now I will show you a second video where you can watch the "debug traces", showing how the futures evolved under the command of the AI.
To understand what you see, keep an eye on the colors of the traces: gray traces correspond to the rocket's position in each future; the further into the future, the darker. They are not important today, as we only care about the hook position, but they help you detect when the situation gets dangerous: in those moments, the gray paths seem to "collapse" (there is a "risk meter" in the upper-left corner, in %).
Green lines are the ones to watch: they represent the future positions of the hook. When the hook changes state (from empty to holding a rock or, conversely, when the rock is deployed) the traces turn red, so it is good when the traces change color, as it means the rocket is doing its job.
Sometimes the red traces become blue: this is even better, as it means that, in this future, the AI is able to trap a rock (the green-to-red transition) and then take it into the deploy area (the red-to-blue transition). A hat trick.
In the previous videos, where the rockets were basically collecting energy drops to survive while trying to avoid damaging the hull, the potential formula was even simpler:
Potential = Health level * Energy level.
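As a sketch (again, the names are my own; the source only gives the product formula), that survival potential is just:

```python
def survival_potential(health, energy):
    # Product of the two levels: the potential drops to zero as soon
    # as either level hits zero, i.e. a dead or drained rocket is the
    # worst case, while futures with both levels high score highest.
    return max(health, 0.0) * max(energy, 0.0)
```

The product (rather than a sum) is what pushes the rocket to keep both levels up at once: a future with full health but empty batteries scores zero, the same as being dead.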
This small change makes the rocket behave like a bee collecting nectar or like an asteroid miner.
So that is all: a single potential function makes the whole difference and makes the behaviour totally mutate.
If you get the right potential formula, the fractal AI will make the rocket perform whatever you need of it.