Thank you, Codingame, for this new challenge!
I found the game to be very well-designed, with a more complex set of rules than usual, introducing significant depth through the Sporer mechanic. This allowed both heuristics and search algorithms to thrive. The UI was clear and straightforward to understand. The varied maps (low-protein/high-protein, closed/open) forced the creation of a bot that could adapt to different contexts (although I didn’t delve too far into this aspect).
My only criticism is that the open maps were highly chaotic, especially those where it was possible to spawn a root near the enemy root by the second turn. These maps seemed very random, with victory often depending on luck and where your tentacle spawned.
I am very happy and proud of my result, especially considering that three nights before the end, I considered giving up. Changing certain heuristics had significantly lowered my bot's performance, and I couldn’t fix the issue. I then completely rewrote my evaluation, simplified it, and this allowed me to reach the Legend League on the same day!
Congratulations to Recurse for the win and a special mention to thebitspud, who finished 24th with TypeScript! An excellent result!
Having a local environment for faster development and easy testing of the bot’s behavior is crucial.
For the first time, I developed a local visualizer to observe my bot in action without an opponent. This allowed me to review my bot’s logs, debug quickly, and visualize whether it was behaving correctly.
Since my bot is written in TypeScript and compiled to JavaScript, I could fully integrate it into an HTML page and visualize its various actions.
Here’s the visualizer in action:

And in video format:
visualizer.mp4
One feature I would have liked, and will consider for future challenges, is the ability to automatically reconstruct the map from the first turn of a replay link. Retrieving inputs to display the map is cumbersome in my current version and significantly slowed down debugging for certain replays.
I downloaded a few dozen maps to perform local unit tests on a set of maps and verify if my bot behaved as expected. My initial idea was to set up a scoring system, such as the number of turns required to reach the boundary or claim over 50% of the map, and ensure that improving my bot would continuously improve this score. Unfortunately, I couldn’t use it effectively. However, I believe this approach could be highly efficient, particularly on closed maps.
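For what it's worth, the metric itself would be tiny. Here is a hypothetical sketch of that idea (not code I actually shipped; TurnStats and turnsToMajority are made-up names):

```ts
// Hypothetical sketch of the scoring idea: from a local run, record how many
// cells my bot owns each turn and score the map by the first turn where it
// claims more than 50% of the free (non-wall) cells. Lower is better.
interface TurnStats {
  myCells: number;
  freeCells: number; // all non-wall cells on the map
}

function turnsToMajority(run: TurnStats[]): number {
  const index = run.findIndex((turn) => turn.myCells / turn.freeCells > 0.5);
  return index === -1 ? Infinity : index + 1; // Infinity if the bot never gets there
}
```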
The arena, with its 100 matches, is too noisy to tell whether a code change is genuinely beneficial. A local ranking system like Psyleague makes it possible to test bot versions against each other far more exhaustively and select the best one.
Running 3,000 to 4,000 games per version gave me confidence in Psyleague's results.
I kept all versions with significant code changes that showed improvements both in Psyleague and the arena.
Here is the final result of all the versions I stored:
Be cautious, however: modifications to some heuristics can lead to overfitting against the previous version. It’s essential to maintain diverse versions with varying behaviors to mitigate this issue. As I frequently overhauled large parts of my code, I ended up with sufficiently diverse versions.
From the beginning of the challenge, I opted for a brute-force approach with a depth of 0 (evaluating only the first move) and stuck with it until the end.
My bot operates in five phases, four of which involve evaluations:
- Precalculation: Compute the boundary, non-harvestable proteins, and perform a few BFS operations.
- Defense: Place tentacles for defense/attack.
- Spore: Place new roots.
- Grow Sporer: Create sporers for spawning a root in the next turn.
- Expansion: Create organs to collect new proteins.
The Spore and Grow Sporer phases share the same evaluation function, while Defense and Expansion have distinct evaluations.
Each evaluation phase returns one or zero actions per root. Subsequent phases evaluate only roots that haven’t yet been assigned actions.
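As a rough illustration (simplified names, not my exact code), the per-turn pipeline looks like this:

```ts
// Simplified view of the pipeline: phases run in order, each may assign at
// most one action per root, and later phases only see the roots that are
// still free. Roots that end up without an action simply WAIT.
type Action = { rootId: number; command: string };
type Phase = (freeRootIds: number[]) => Action[];

function decideTurn(rootIds: number[], phases: Phase[]): Action[] {
  const assigned = new Map<number, Action>();
  for (const phase of phases) {
    const freeRootIds = rootIds.filter((id) => !assigned.has(id));
    for (const action of phase(freeRootIds)) {
      if (!assigned.has(action.rootId)) assigned.set(action.rootId, action);
    }
  }
  return [...assigned.values()];
}
```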
Early in the challenge, I implemented boundary calculation (points equidistant from my organs and the opponent's) using BFS. Literature seems to refer to this as a "Voronoi Border," though my implementation is just BFS, so I’m unsure if it qualifies.
These boundary cells are represented in yellow on my visualizer, while cells closest to me are green, and those closest to the opponent are red.
This calculation could be leveraged in various ways, especially on closed maps where a few tentacles can block the opponent. I used it in two cases:
- To identify non-recoverable proteins.
- In evaluations to spawn roots at least three cells closer to the boundary.
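For the curious, the boundary computation is roughly this (a minimal sketch assuming a simple walkable grid; helper names are illustrative):

```ts
// Multi-source BFS from one player's organs. Running it once per player gives
// two distance maps: cells where the distances are equal form the boundary,
// cells closer to me are "mine", cells closer to the opponent are theirs.
type Cell = { x: number; y: number };

function bfsDistances(sources: Cell[], walkable: boolean[][]): number[][] {
  const h = walkable.length, w = walkable[0].length;
  const dist = Array.from({ length: h }, () => Array(w).fill(Infinity));
  const queue: Cell[] = [];
  for (const s of sources) { dist[s.y][s.x] = 0; queue.push(s); }
  for (let i = 0; i < queue.length; i++) {
    const { x, y } = queue[i];
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nx = x + dx, ny = y + dy;
      if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
      if (!walkable[ny][nx] || dist[ny][nx] !== Infinity) continue;
      dist[ny][nx] = dist[y][x] + 1;
      queue.push({ x: nx, y: ny });
    }
  }
  return dist;
}
```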
A major issue with my bot was that it considered all proteins as targets for Harvesters, incurring penalties when constructing organs on them. However, some proteins lie on critical paths to the boundary, making them unavoidable.
Here’s an example:
Proteins on blue arrows are non-harvestable because they must be traversed to reach the boundary, making Harvester construction unnecessary.
Detection Method
Thanks to R4N4R4M4 for this idea.
I ran a Dijkstra algorithm starting from all my organs, assigning a cost of 1 to non-protein cells and 4 to protein cells. This ensures that the shortest path avoids proteins where possible. Dijkstra also records parent nodes to trace paths.
For each boundary point, I backtrack to the origin and mark all traversed proteins as non-recoverable.
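In code, the detection looks roughly like this (a sketch reusing the Cell type and grid conventions from the boundary sketch above; helper names are illustrative):

```ts
// Dijkstra-style search from all my organs with cost 1 on empty cells and 4 on
// protein cells, keeping parent pointers. Proteins that still lie on a cheapest
// path to a boundary cell are marked non-recoverable.
function markNonRecoverableProteins(
  myOrgans: Cell[],
  boundary: Cell[],
  isProtein: boolean[][],
  walkable: boolean[][]
): boolean[][] {
  const h = walkable.length, w = walkable[0].length;
  const cost = Array.from({ length: h }, () => Array(w).fill(Infinity));
  const parent: (Cell | null)[][] = Array.from({ length: h }, () => Array(w).fill(null));
  const pending: Cell[] = [...myOrgans];
  for (const o of myOrgans) cost[o.y][o.x] = 0;
  while (pending.length > 0) {
    // Poor man's priority queue: fine for these map sizes.
    pending.sort((a, b) => cost[a.y][a.x] - cost[b.y][b.x]);
    const { x, y } = pending.shift()!;
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nx = x + dx, ny = y + dy;
      if (ny < 0 || ny >= h || nx < 0 || nx >= w || !walkable[ny][nx]) continue;
      const step = isProtein[ny][nx] ? 4 : 1;
      if (cost[y][x] + step < cost[ny][nx]) {
        cost[ny][nx] = cost[y][x] + step;
        parent[ny][nx] = { x, y };
        pending.push({ x: nx, y: ny });
      }
    }
  }
  // Backtrack from every boundary cell and flag the proteins on the path.
  const nonRecoverable = Array.from({ length: h }, () => Array(w).fill(false));
  for (const cell of boundary) {
    let cur: Cell | null = cell;
    while (cur !== null) {
      if (isProtein[cur.y][cur.x]) nonRecoverable[cur.y][cur.x] = true;
      cur = parent[cur.y][cur.x];
    }
  }
  return nonRecoverable;
}
```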
This change significantly improved my bot and helped me reach Legend League.
A potential enhancement could involve detecting proteins that can be "jumped" (e.g., placing a Sporer before and a Root after them). However, since my system updates every turn, proteins may regain recoverable status if roots are placed beyond them.
First, I generate a list of all potential tentacle placements and directions, filtering out invalid ones (e.g., pointing at walls or outside the map).
For each placement, I run a BFS to calculate distances to enemy entities.
The score for each action is computed using the following evaluation:
+ w1 / minDistanceToOpponent
+ w2 * totalOpponentOrgansToDestroy (if distance = 1)
+ w3 * myChildOrgans (if distance <= 2)
- w4 if the position is already defended
- w5 if the targeted cell is already defended
+ w6 if the enemy can expand here
+ w7 if I destroy an opponent's protein
- w8 if I destroy my own protein
- 1000 if I destroy my last harvested protein of a type
A critical condition that boosted my rank:
- return -Infinity if (minDistanceToOpponent > 3 && myProteinGains.C <= 1)
- return -Infinity if (myProteins.C <= 1 && opponentProteinGains.C >= 2 && myProteinGains.C <= 1)
If you only have one source of C proteins, you can’t rebuild tentacles or Harvesters elsewhere, limiting development. Sacrificing some defense to acquire more C proteins is often better.
After evaluating actions, I select the best one with a positive score, assign it to its root, and remove related actions. This process continues until no valid actions remain.
This system applies to all evaluation phases.
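In code, the selection loop is essentially this (an illustrative sketch where "related actions" simply means the other candidates of the chosen root):

```ts
// Greedy assignment used after each evaluation phase: repeatedly take the best
// positively-scored candidate, lock its root, and drop that root's remaining
// candidates until nothing with a positive score is left.
interface ScoredAction {
  rootId: number;
  command: string;
  score: number;
}

function assignActions(candidates: ScoredAction[]): Map<number, ScoredAction> {
  const assigned = new Map<number, ScoredAction>();
  let pool = [...candidates];
  while (pool.length > 0) {
    const best = pool.reduce((a, b) => (b.score > a.score ? b : a));
    if (best.score <= 0) break;
    assigned.set(best.rootId, best);
    pool = pool.filter((c) => c.rootId !== best.rootId);
  }
  return assigned;
}
```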
Spore creation and root spawning use similar evaluations. The goal of a Sporer is to spawn a root, so its evaluation mirrors the best possible root it could create.
The decision to expand or collect more proteins depends on a boolean shouldExpand:
const mainProteinsCollected = myProteinsGains.B > 0 && myProteinsGains.C > 0 && myProteinsGains.D > 0;
const shouldExpand = mainProteinsCollected &&
  totalOpponentGains <= totalMyGains &&
  this.totalMyOwnedCells - 10 <= this.totalOpponentOwnedCells;

This adjusts the evaluation to prioritize protein collection or faster expansion.
Root evaluation:
score -= minDistanceToProtein === 1 ? 10 : minDistanceToProtein; // Avoid spawning on proteins
for (const protein of state.proteins.filter((p) => !p.isAlreadyHarvested)) {
  score += 10 / distanceToProtein; // distance from the candidate root cell to this protein
}
if (targetEntityIsProtein) score -= 10; // penalize landing on a protein cell
if (targetOwner === Owner.ME) {
  return score + 100 - minDistanceToBoundary;
}
if (targetOwner === Owner.OPPONENT) {
  return score + 100 + minDistanceToBoundary;
}
return score + 100;

Additional checks for non-expansion scenarios:
- return -Infinity if minProteinDistance = Infinity
- return -Infinity if minDistanceToMyOrgans - 2 <= minProteinDistance
When shouldExpand is true, the evaluation seeks points near the boundary or deep in enemy territory.
For roots without assigned actions, expansion evaluates all possible growth actions (tentacles, sporers, but not roots).
Why include tentacles and sporers? They can still provide access to proteins when I don't have the proteins required for a Basic organ.
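Candidate generation for this phase can be pictured like this (an illustrative sketch; the cost table is passed in rather than hard-coded, and ROOT is deliberately excluded):

```ts
// The expansion phase enumerates grow actions for every organ type I can
// currently afford, except ROOT. This is how a tentacle or sporer can still
// reach a protein when the stock needed for a Basic is empty.
type OrganType = 'BASIC' | 'HARVESTER' | 'TENTACLE' | 'SPORER';
type Stock = { A: number; B: number; C: number; D: number };

function affordableGrowTypes(
  stock: Stock,
  costs: Record<OrganType, Partial<Stock>>
): OrganType[] {
  return (Object.keys(costs) as OrganType[]).filter((type) =>
    (Object.entries(costs[type]) as [keyof Stock, number][]).every(
      ([protein, amount]) => stock[protein] >= amount
    )
  );
}
```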
Evaluation:
- 10,000 if I can’t build another Harvester afterward
- 300 if no enemy presence and no proteins to harvest
+ w1 / minDistanceToOpponent if no proteins to harvest
+ w2 * protein_type_weight if harvesting a protein
- w3 * protein_type_weight if blocking protein harvesting
+ protein_type_weight * (10 - protein_gain_type) / minDistanceToProteinType for each type
- 500 if I overwrite an already-harvested protein (and it’s my last one)
- protein_type_weight * (10 - protein_gain_type) if I destroy a protein
Simplifying this function drastically improved my bot’s performance.
If every action yields a negative score, my bot would simply wait. To prevent this, I store the best negative action and play it when no positive action exists. This often makes progress anyway (e.g., removing a protein blocking the path), letting the bot find positive actions in subsequent turns.
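The fallback itself amounts to this (an illustrative sketch reusing the ScoredAction shape from the selection sketch earlier):

```ts
// If a root received no positively-scored action, play its best negative
// candidate instead of WAIT; clearing a blocking protein now often unlocks
// positive actions on the following turns.
function actionOrFallback(
  assigned: ScoredAction | undefined,
  candidates: ScoredAction[]
): string {
  if (assigned) return assigned.command;
  const bestNegative = [...candidates]
    .filter((c) => c.score > -Infinity)
    .sort((a, b) => b.score - a.score)[0];
  return bestNegative ? bestNegative.command : 'WAIT';
}
```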
- Beam Search: I attempted a Beam Search in the first turn to optimize the shortest path for collecting B, C, and D proteins. While functional locally, it performed poorly on Codingame servers due to slower CPUs. I replaced it with simpler evaluations before reaching Legend.
- Global Evaluation: A single, complex evaluation function with numerous conditions proved unmanageable. Separating evaluations by phase enabled specific improvements at the cost of broader decision-making.
One idea early on was to create distinct evaluations for closed maps, where 2–3 tentacles could block the opponent.
For example, seed 7912673889433137414:
Blocking the opponent can secure a 2-point win. A separate evaluation focusing on shortest paths to block while maintaining a Harvester would suffice to win such maps.
Given the frequency of such maps, this could yield significant ELO gains.
I thoroughly enjoyed this challenge and hope this post-mortem helps players improve their AIs, potentially reaching Legend, or inspires heuristic and evaluation ideas for future contests.
Feel free to ping me on Discord if you have questions about this post-mortem, and see you in the next challenges!
