Kounios, Loizos, Jeff Clune, Kostas Kouvaris, Günter P. Wagner, Mihaela Pavlicev, Daniel M. Weinreich, and Richard A. Watson. “Resolving the paradox of evolvability with learning theory: How evolution learns to improve evolvability on rugged fitness landscapes.” arXiv preprint arXiv:1612.05955 (2016).
It has been hypothesized that one of the main reasons evolution has been able to produce such impressive adaptations is that it has improved its own ability to evolve (“the evolution of evolvability”). Rupert Riedl, for example, an early pioneer of evolutionary developmental biology, suggested that the evolution of complex adaptations is facilitated by a developmental organization that is itself shaped by past selection to facilitate evolutionary innovation. However, selection for characteristics that enable future innovation seems paradoxical: natural selection cannot favor structures for benefits they have not yet produced, and favoring characteristics for benefits that have already been produced does not constitute future innovation. Here we resolve this paradox by exploiting a formal equivalence between the evolution of evolvability and learning systems. We use the conditions that enable simple learning systems to generalize, i.e., to use past experience to improve performance on previously unseen, future test cases, to demonstrate conditions under which natural selection can systematically favor developmental organizations that benefit future evolvability. Using numerical simulations of evolution on highly epistatic fitness landscapes, we illustrate how the structure of evolved gene regulation networks can result in increased evolvability, enabling populations to avoid local fitness peaks and discover higher-fitness phenotypes. Our findings support Riedl’s intuition: developmental organizations that “mimic” the organization of constraints on phenotypes can be favored by short-term selection and also facilitate future innovation. Importantly, the conditions that enable the evolution of such surprising evolvability follow from the same non-mysterious conditions that permit generalization in learning systems.
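To make the learning analogy concrete, below is a minimal, illustrative Python sketch, not the paper’s actual simulation: a Hebbian-style associative matrix stands in for an evolved gene-regulation network, a shrinkage step stands in for a parsimony-like pressure on regulatory interactions, and a simple modular, epistatic fitness function stands in for the paper’s rugged landscapes. The module sizes, sub-patterns, shrinkage threshold, and sign-based development rule are all assumptions of this sketch.

```python
# Illustrative sketch only (assumptions, not the paper's model): correlations among
# past selected phenotypes are "stored" Hebbian-style in a regulatory matrix B, and
# recurrent development with B then reaches a fit phenotype never selected before.
import numpy as np

# Two phenotypic modules of four traits each, each with one internal sub-pattern.
u = np.array([+1, -1, +1, -1])   # module-1 sub-pattern (assumed for illustration)
v = np.array([+1, +1, -1, -1])   # module-2 sub-pattern (assumed for illustration)

def fitness(p):
    # Epistatic fitness: each module is rewarded for internal agreement with its
    # sub-pattern regardless of sign, so the four sign combinations are local peaks.
    return abs(u @ p[:4]) + abs(v @ p[4:])

# The four local peaks of this landscape: every combination of (+/-u, +/-v).
peaks = [np.concatenate([su * u, sv * v]) for su in (+1, -1) for sv in (+1, -1)]

# "Past selection": three of the four peaks were selected in earlier environments.
seen = [peaks[0], peaks[3], peaks[1]]   # (+u,+v), (-u,-v), (+u,-v)
unseen = peaks[2]                       # (-u,+v) was never selected

# Hebbian-style accumulation of correlations among past selected phenotypes, then
# shrinkage: inconsistent (cross-module) correlations decay to zero, consistent
# (within-module) correlations are retained, leaving a modular regulatory matrix.
corr = np.mean([np.outer(p, p) for p in seen], axis=0)
np.fill_diagonal(corr, 0.0)
B = np.sign(corr) * np.maximum(np.abs(corr) - 0.5, 0.0)

def develop(state, B, steps=10):
    # Recurrent "development": iterate the regulatory map to a stable adult phenotype.
    p = state.astype(float)
    for _ in range(steps):
        p = np.sign(B @ p)
    return p

# Perturb the never-selected peak and let development canalize it.
start = unseen.astype(float)
start[1] *= -1   # flip one trait in module 1
start[6] *= -1   # flip one trait in module 2

adult = develop(start, B)
print("never-selected peak:", unseen, " fitness:", fitness(unseen))
print("developed phenotype:", adult.astype(int), " fitness:", fitness(adult))
print("generalized to the unseen peak:", bool(np.array_equal(adult, unseen)))
```

In this sketch, the shrinkage step plays the role of the generalization condition the abstract alludes to: correlations that are consistent across past selective environments are retained while inconsistent ones decay, and the resulting modular regulatory structure makes never-before-selected combinations of module states reachable attractors of development.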