This blog post introduces the field of neuroevolution.
So far, we have explored many techniques in self-supervised learning (autoregressive models, diffusion models, etc.), supervised learning, and reinforcement learning (value-based, policy-based, model-based, etc.), most of which rely on gradient-based optimization of a clearly specified, differentiable, and relatively clean and static objective or reward from the environment. However, such objectives and rewards are rarely obtainable in the real world with its unfathomable complexity. Consequently, these techniques, though they perform extraordinarily well on some tasks, may be inherently limited in their adaptability, creativity, and robustness for broader real-world applications.
Humans, by contrast, are capable of reasoning, making new discoveries, and continuously adapting to a world full of uncertainty with no clear objective functions, arguably demonstrating the general intelligence the field has been striving for since its inception. A prominent theory of the mechanism behind the emergence of such intelligence is evolution, driven by natural selection, which involves no gradients of objectives but is a simple heuristic of stochastic mutation and survival of the fittest genes. Evolution's unique strength is that the gene pool can be kept diverse for continued exploration of a wide area of the search space while exploiting the fittest solutions. In fact, evolution in nature gave birth to diverse species with vastly different genes and survival strategies, including the incredibly complex human brain.
Although other paradigms attempt to keep exploring the search space (such as cross-validation, off-policy control, and rollouts), they still struggle with high-dimensional, non-linear, and deceptive objectives, often get stuck in nearby local optima, and suffer from sample inefficiency. To achieve general intelligence that matches or even surpasses that of humans, or to solve any highly complex problem with a deceptive objective, evolution, the very mechanism in nature that produced the brain, is arguably uniquely suited. Therefore, we will explore Evolutionary Computation (EC) with a primary focus on Neuroevolution (NE), which applies evolutionary algorithms to neural networks.
Note: The content of this article series is largely inspired by Neuroevolution: Harnessing Creativity in AI Agent Design by Risi, S. et al. (2025). I highly recommend checking it out.
Evolutionary Algorithms
Evolutionary algorithms (EAs) evolve a population of potential solutions for optimization problems. The algorithms involve setting up an initial population, evaluating and selecting individuals based on a fitness function, creating a new population using variation operators like mutation and crossover, and continuing the loop until the termination condition is met. The figure below shows the basic process of EAs. Evolutionary algorithms are well-suited for problems that do not have clear error functions and have multiple complex solutions, though the effectiveness and efficiency of EAs depend on how their components are set up.
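The loop described above, initialize, evaluate, select, vary, repeat, can be sketched in a few lines of Python. All function names and parameter values here are illustrative choices for a minimal (mu + lambda)-style loop, not any particular library's API:

```python
import random

def evolve(fitness, init, mutate, crossover,
           pop_size=60, generations=150, elite=6):
    """Minimal evolutionary loop: evaluate, select the fittest,
    recombine and mutate to form the next generation (illustrative)."""
    population = [init() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank the population by fitness (higher is better).
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]  # truncation selection
        children = []
        while len(children) < pop_size - elite:
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))  # variation operators
        population = parents + children  # elitism: keep the best as-is
    return max(population, key=fitness)

# Toy problem: maximize -sum(x_i^2) over a 5-dimensional real vector.
best = evolve(
    fitness=lambda g: -sum(x * x for x in g),
    init=lambda: [random.uniform(-5, 5) for _ in range(5)],
    mutate=lambda g: [x + random.gauss(0, 0.3) for x in g],
    crossover=lambda a, b: [random.choice(pair) for pair in zip(a, b)],
)
```

The toy run converges toward the all-zero vector; swapping in a different fitness function, genotype initializer, or variation operators adapts the same loop to other problems.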
In EAs, an individual is expressed as a genotype, to which variation operators are applied. The genotype is typically represented as a string, vector, or graph and is decoded into a corresponding phenotype, whose fitness is evaluated. The genotype representation should be constructed efficiently and in a way that avoids redundancy (different genotypes mapping to the same phenotype) and poor locality (a small change in genotype leading to a large change in phenotype), so that the evolutionary process does not waste evaluations and properly converges to optimal regions. An appropriate population size, diversity preservation with an adaptive mutation rate, appropriate fitness computation, weak selection pressure, a structured population, etc., are also important for preventing premature convergence to local optima.
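The locality point can be made concrete with integer encodings, a classic example not taken from the source. With plain binary, neighboring phenotypes can sit across a "Hamming cliff" where many bits must flip at once; a reflected Gray code guarantees adjacent integers differ in exactly one bit:

```python
def binary_decode(bits):
    """Decode a plain-binary bit-string genotype to an integer phenotype."""
    return int("".join(map(str, bits)), 2)

def gray_decode(bits):
    """Decode a reflected-Gray-coded genotype: adjacent integers
    differ in exactly one bit, giving better genotype locality."""
    acc = [bits[0]]
    for g in bits[1:]:
        acc.append(acc[-1] ^ g)  # b_i = b_{i-1} XOR g_i
    return int("".join(map(str, acc)), 2)

# Plain binary: 7 -> 0111, 8 -> 1000. All four bits differ, so the
# smallest phenotype step requires a large genotype change.
assert binary_decode([0, 1, 1, 1]) == 7
assert binary_decode([1, 0, 0, 0]) == 8

# Gray code: 7 -> 0100, 8 -> 1100. A single bit flip moves between them.
assert gray_decode([0, 1, 0, 0]) == 7
assert gray_decode([1, 1, 0, 0]) == 8
```

Under the binary encoding, a mutation flipping one bit can jump the phenotype by half the value range; under the Gray encoding, some single-bit mutation always reaches each neighboring integer.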
An appropriate fitness function, one that reflects the relevant and possibly conflicting qualities and constraints, is also crucial for performance and search efficiency in EAs, and defining it often requires iterative refinement and domain expertise. Moreover, the termination criteria must be set appropriately for the problem to avoid premature termination and suboptimal solutions. Although EAs demand various design choices, the same can be said of other methods, and EAs offer the aforementioned benefits when done right. The field of neuroevolution aims to leverage such evolutionary algorithms to evolve neural networks.
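As a rough sketch of both points, here is a fitness function that scalarizes two conflicting qualities (task performance versus model complexity) and a termination test that waits out a stagnation window rather than stopping at the first flat generation. The weight and window values are illustrative assumptions that would need per-problem tuning:

```python
def fitness(accuracy, n_params, w_complexity=1e-4):
    """Scalarized fitness trading off two conflicting qualities:
    reward accuracy, penalize parameter count (illustrative weight)."""
    return accuracy - w_complexity * n_params

def should_terminate(history, patience=20, min_delta=1e-3):
    """Stop only after `patience` generations without meaningful
    improvement in best fitness, guarding against premature stops."""
    if len(history) <= patience:
        return False
    recent_best = max(history[-patience:])
    earlier_best = max(history[:-patience])
    return recent_best - earlier_best < min_delta
```

For example, `fitness(0.9, 1000)` yields 0.8, so a model with 1,000 more parameters must gain at least 0.1 accuracy to be preferred; `should_terminate` applied to a per-generation best-fitness history returns True only once a full stagnation window has elapsed.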
Conclusion
In this article, we discussed the motivation behind evolutionary computation and neuroevolution and introduced the basic process of evolutionary algorithms that can be used for evolving neural networks. In the next article, we will discuss several specific types of evolutionary algorithms and relevant techniques in evolutionary computation.
Resources
- Risi, S. et al. (2025). Neuroevolution: Harnessing Creativity in AI Agent Design.