Hazen and Wong’s Law of Functional Information in Context: Cross-Disciplinary Theories of Evolving Complexity
A GPT 4.5 Deep Research writeup
Introduction: Across science and philosophy, many frameworks attempt to explain how complex systems become more functionally complex over time. These cross-disciplinary theories span physics, biology, information theory, systems science, artificial life, and astrobiology, seeking universal principles behind the emergence of organized complexity. Below we survey major historical and contemporary models – from thermodynamics and cybernetics to assembly theory and open-ended evolution – and discuss how each relates to the “functional information” framework of Hazen and Wong. Throughout, we highlight common themes (e.g. the role of energy flow, information, selection, and emergence) as well as contrasts in emphasis (e.g. self-organization vs. selection for function). Together, these high-level theories outline an integrative view of how nature’s diverse systems can evolve greater functional complexity over time.
Thermodynamics and Self-Organization
One foundational perspective comes from thermodynamics, examining how complexity can arise despite the Second Law’s demand for increasing entropy. Erwin Schrödinger (1944) famously noted that living systems avoid decay by “feeding on negative entropy,” importing free energy to maintain or increase their internal order. In other words, open systems (like Earth’s biosphere) can export entropy to the environment and locally build structure. Ludwig von Bertalanffy (a founder of General Systems Theory) echoed this: in 1940 he described how living organisms, as open systems, “import negative entropy” (high-free-energy matter) and thus “may even develop toward states of increased order and organization”. These ideas set the stage for viewing complexity growth as thermodynamically permitted when energy flows through a system.
Building on this, Ilya Prigogine introduced the concept of dissipative structures. He showed that far-from-equilibrium systems can spontaneously form ordered, complex patterns (for example, convection cells, chemical oscillations, even hurricanes) by dissipating energy. Such self-organization is marked by symmetry breaking and emergent long-range correlations among components. Importantly, living organisms were recognized as extreme dissipative structures – open systems that maintain steady states and create order by dissipating energy gradients. Prigogine argued that these self-organized structures in physics display life-like characteristics, blurring the line between “inanimate” and “animate” complexity. Later researchers (e.g. Eric Chaisson in cosmology) even quantified this trend: as the universe evolved, the density of free energy flow through local systems rose exponentially, “paralleling the increase of [those] systems’ structure” over cosmic time. In short, thermodynamic views provide a universal energy-driven mechanism for complexity: given a flux of usable energy, matter can self-organize into ever more complex, structured arrangements.
Relation to Hazen & Wong: The thermodynamic/self-organization paradigm complements Hazen and Wong’s framework by supplying the necessary condition for complexity growth – an energy source to explore configurations. Hazen & Wong explicitly include stability and persistence as basic “functions” that get selected, which implicitly relies on thermodynamic stability (e.g. a stable mineral structure or a self-maintaining chemical cycle persists because it’s energetically favored). However, thermodynamic theories emphasize spontaneous order (“order out of chaos”) even without explicit selection for a purpose, whereas Hazen & Wong put explicit weight on selection for functional advantage. In practice, these views converge: a dissipative structure that survives due to stability is essentially “selected” by the environment (unstable forms dissolve). Thus, Prigogine’s self-organization can be seen as a precursor to selection for stability in Hazen & Wong’s law. What thermodynamics alone doesn’t require – but Hazen & Wong add – is the idea of functional selection driving open-ended innovation (beyond just reaching a steady-state structure). We will see later how Hazen & Wong incorporate both self-organization (stability) and Darwinian-like selection (novelty) in a single integrative law.
Cybernetics and Systems Theory
While thermodynamics dealt with energy and order, cybernetics and General Systems Theory approached complexity through the lens of information, control, and feedback. Norbert Wiener’s cybernetics (1948) was a cross-disciplinary framework comparing animals and machines as control systems, governed by feedback loops and information flows. Cybernetics introduced the idea that complex organized behavior results from regulatory feedback – for example, a thermostat or a biological homeostasis loop uses information about its state to maintain stability. This focus on feedback-controlled self-regulation provided an early high-level principle for complexity: systems that can “sense” and adjust can achieve stable, goal-oriented functions despite disturbances. In Ross Ashby’s work, the famed Law of Requisite Variety formalized a principle: to effectively control a complex system, a controller must possess an equally complex range of responses. In other words, increasing complexity (variety) in the controller is necessary to manage complexity in the environment – hinting that as systems evolve, their internal informational complexity must grow to handle external complexity.
General Systems Theory (GST), developed by Ludwig von Bertalanffy and others in the mid-20th century, further broadened these insights. GST sought common organizational principles across biology, ecology, technology, etc. It emphasized wholes vs. parts, stating a system is “more than the sum of its parts” due to emergent properties. Key ideas include hierarchical organization (systems made of subsystems), open system dynamics (exchange with environment), and adaptation. GST highlighted that adaptable systems learn and evolve by engaging with their environment, and that certain principles (like feedback, equilibrium, etc.) repeat at many scales. In essence, it provided a conceptual toolkit for thinking about how complexity is organized and maintained in any domain. Later “complexity theory” at places like the Santa Fe Institute built on this, exploring how order, patterns, and structure can arise in chaotic systems (often using computational models of self-organizing, adaptive agents). Concepts like emergence, self-organized criticality, and chaos-to-order transitions came from this lineage. They suggested that complex adaptive systems (from ecologies to economies) may share universal behaviors – for instance, operating at an “edge of chaos” might maximize a system’s ability to generate complex yet stable patterns.
Relation to Hazen & Wong: Cybernetics and systems theory contribute a functional perspective that resonates strongly with Hazen & Wong’s focus on function. Both view “function” and goal-directedness as central – a cybernetic system has a purpose (e.g. hold a temperature), and complexity is meaningful relative to that function. In fact, Hazen has argued that “‘complexity’ only has meaning in the context of ‘function’”. That idea is a cornerstone of Hazen & Wong’s functional information metric. Moreover, the notion of feedback aligns with the idea that complex systems refine themselves via iterative selection (feedback is essentially a selection mechanism to correct deviations). Hazen & Wong’s law is essentially a feedback loop writ large: many configurations are tried; those that “work” (functional feedback) persist and contribute to future states. One difference is that classical cybernetics often considered homeostasis (maintaining a goal state) rather than open-ended increase in complexity. Hazen & Wong extend beyond homeostasis by including novelty generation as a key function driving systems to new complexity. Still, the systems view of emergent order and adaptive feedback provides a conceptual bridge: it shows how once complexity (like an organism or machine) exists, it can use information to maintain and improve itself. Hazen & Wong build on that by asking how such systems evolve in the first place, tying in explicit evolutionary mechanisms.
Information Theory and Complexity Measures
Another cross-cutting pillar is information theory and the quantification of complexity. Claude Shannon’s information theory (1948) introduced the bit as a measure of information entropy, quantifying the uncertainty or surprise in messages. This gave scientists a rigorous way to talk about information content in any system (physical or biological). For example, a random DNA sequence has high Shannon entropy (unpredictable sequence), whereas a highly patterned sequence has lower entropy. However, Shannon’s measure does not distinguish meaningful or functional information – a random DNA and a well-adapted gene sequence of the same length could have equal Shannon entropy, even though one encodes a useful function and the other does not. Similarly, Andrey Kolmogorov’s notion of algorithmic complexity measures the length of the shortest description (or program) that can produce a string. A sequence that appears random (no compressible pattern) has maximal Kolmogorov complexity. But again, “random” complexity is not the same as organized, functional complexity in biology.
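To make this limitation concrete, the short Python sketch below (purely illustrative; the "gene" string is a made-up stand-in, not a real sequence) estimates Shannon entropy from symbol frequencies and shows that a sequence and a shuffled copy of it score identically – the measure sees only statistics, not function.

```python
import math
import random
from collections import Counter

def shannon_entropy_per_symbol(seq: str) -> float:
    """Shannon entropy in bits per symbol, estimated from symbol frequencies."""
    counts = Counter(seq)
    n = len(seq)
    # sort the counts so both calls below add the terms in the same order
    return -sum((c / n) * math.log2(c / n) for c in sorted(counts.values()))

# A made-up stand-in for a "functional" gene, and a shuffled copy of it.
# Shuffling preserves base composition, so the entropy is identical, even though
# shuffling would destroy whatever function the original sequence encoded.
gene = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"
shuffled = "".join(random.sample(gene, len(gene)))
print(shannon_entropy_per_symbol(gene), shannon_entropy_per_symbol(shuffled))  # identical values
```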
Recognizing this, scientists sought measures of “complexity that matters.” Leslie Orgel in the 1970s spoke of “specified complexity,” emphasizing patterns that are both complex and specified by some functional requirement. In 2003, Jack Szostak introduced the concept of “functional information” for biopolymers. He proposed quantifying how much information is required for a molecule (like an RNA strand) to perform a specific function above a given threshold. The logic is intuitive: if only a very rare sequence can do the job (e.g. one unique aptamer binds a target out of billions tested), then that sequence carries high functional information. Conversely, if many sequences work, the functional information is low. Robert Hazen and colleagues formalized this in 2007: they defined functional information I(E) as the information required to achieve at least a degree of function E, given by:
I(E) = −log₂[F(E)],
where F(E) is the fraction of all possible configurations that have function ≥ E. This elegantly measures complexity in terms of how improbable a functional configuration is by chance. Hazen et al. demonstrated this concept with examples from text sequences to digital organisms and biomolecules. Notably, they observed that in complex systems, functional solutions tend to form “islands” in the vast space of possibilities – most random changes do nothing, but occasional mutations jump to a new functional peak. This hinted at discontinuous increases in functional information – an observation aligned with the idea that complexity might increase in jumps (e.g. new innovations) rather than strictly gradually.
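As a concrete illustration of how this metric can be estimated – for instance from an in vitro selection experiment – the Python sketch below computes I(E) from a list of measured activity values. The numbers are invented for illustration, not real data.

```python
import math

def functional_information(activities, threshold):
    """I(E) = -log2 F(E), where F(E) is the fraction of sampled configurations
    whose measured function meets or exceeds the threshold E.
    `activities` holds one function value per sampled configuration
    (e.g. binding affinities measured for random RNA sequences)."""
    n_functional = sum(1 for a in activities if a >= threshold)
    if n_functional == 0:
        raise ValueError("No sampled configuration reaches the threshold; "
                         "F(E) cannot be estimated from this sample.")
    return -math.log2(n_functional / len(activities))

# Hypothetical example: 50 out of 1,000,000 assayed sequences meet the threshold,
# so F(E) = 5e-5 and I(E) = -log2(5e-5) ≈ 14.3 bits.
activities = [1.0] * 50 + [0.0] * 999_950
print(functional_information(activities, threshold=1.0))
```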
Beyond biology, others have attempted general complexity metrics. Physicist Murray Gell-Mann advocated “effective complexity,” which separates randomness from structured, regular information – only the latter counted as true complexity. Seth Lloyd catalogued over 40 definitions of complexity across fields, ranging from logical depth (time to compute a structure) to mutual information between parts of a system. While no single metric suits all needs, the trend has been to find measures that capture organized, functional, or adaptive complexity as opposed to mere randomness. These information-based approaches are inherently cross-disciplinary: one can measure information in DNA, texts, particle configurations, or ecosystems alike.
Relation to Hazen & Wong: Hazen & Wong’s entire framework is explicitly built on functional information, so it directly inherits this information-theoretic foundation. By using Hazen’s metric, they quantify the idea that as systems evolve, the amount of information specifying their functions increases. In fact, Hazen and Wong propose a “Law of Increasing Functional Information” as the missing principle to explain universal complexity trends. This is essentially saying that the functional bits in the universe’s systems keep accumulating under the right conditions. This idea aligns with earlier notions like “Biological information increase” via evolution, but Hazen & Wong extend it to any system (e.g. minerals, planetary systems) that undergoes selection for function. One contrast is with purely Shannon entropy views: a system can increase in Shannon information (become more unpredictable) without gaining new functional organization (e.g. chaos). Hazen & Wong’s law is specifically about functional info increasing – which implicitly filters out “useless” complexity. Thus, their framework adds a value-lens to information theory: it’s not just bits that increase, but bits that do something useful. This resonates with Szostak’s and Hazen’s earlier measures and stands in contrast to measures of complexity that ignore function (they would consider those less relevant to “organized complexity”). In summary, Hazen & Wong’s approach synthesizes the information-theoretic rigor (using probabilities and log-measures) with the concept of function to create a universal complexity metric.
Universal Darwinism and Selection Principles
Perhaps the most unifying mechanism for increasing complexity is Darwinian evolution by natural selection – and its generalization beyond biology. In biology, Darwin’s theory explains how populations accumulate adaptations: heritable variation combined with selection of fitter variants leads to the retention of information (in genes) that encodes complex functional traits. Importantly, Darwinian evolution is an iterative search algorithm that can, in principle, produce arbitrarily complex solutions given time. By the late 20th century, scholars proposed that Darwin’s mechanism is substrate-neutral and operates in many domains, an idea termed “Universal Darwinism.” This concept (articulated by thinkers like Donald Campbell and later Richard Dawkins) holds that wherever you have variation, selection, and inheritance, you will get an evolutionary increase in adapted complexity. Examples include cultural evolution (ideas/memes varying and selected in human societies), learning algorithms (Genetic Algorithms or neural networks “evolving” solutions), even certain physical processes. In essence, Universal Darwinism provides a universal engine for complexity: differential persistence (“selection”) of variants will progressively accumulate functional information in any system. As one reference notes, the aim is to “extend variation–selection–retention beyond biological evolution,” to domains from psychology and economics to cosmology.
A striking example is the concept of “chemical” or “mineral” evolution. Hazen (a geologist) himself pioneered the idea of mineral evolution – how Earth’s mineral diversity increased over time through sequential chemical “inventions” (e.g. new minerals arising as conditions changed). The presolar dust from which planets formed contained only about a dozen different minerals; Earth today hosts thousands of mineral species. The increase came in stages: geochemical processes and later biological inputs (like oxygen from photosynthesis) created new mineral species. While minerals don’t reproduce, the planet produced many random crystal forms, with only some stable ones persisting – essentially a selection mechanism favoring stable configurations. This is an example of Darwinian logic applied to a geological system.
Even on the cosmic scale, Lee Smolin has hypothesized a form of Darwinism for the universe (black holes “reproduce” new universes with inherited constants, leading to selection of universes favorable to black hole production). While speculative, it shows the reach of Darwinian thinking as a general principle for complexification. In technology, one can view the evolution of machines or software as Darwinian (variations of designs compete in the market or lab, and successful features are retained in new generations). Genetic algorithms explicitly mimic Darwinian selection to evolve solutions to engineering problems, often yielding designs too complex for humans to intuit.
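The variation–selection–retention loop at the heart of Universal Darwinism is simple enough to sketch in a few lines. The toy genetic algorithm below (an illustrative example, not any published model; the fitness function is an arbitrary stand-in for "whatever is being selected") shows how iterated selection accumulates configurations that would be vanishingly unlikely in a single random draw.

```python
import random

def evolve(genome_len=50, pop_size=100, generations=200, mut_rate=0.01):
    """Toy variation-selection-retention loop on bit strings.
    Fitness is simply the number of 1-bits, a stand-in for any selected function."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)           # selection: rank by fitness
        survivors = pop[: pop_size // 2]          # retention: the fitter half persists
        offspring = [[bit ^ (random.random() < mut_rate) for bit in parent]
                     for parent in survivors]     # variation: copy with rare mutations
        pop = survivors + offspring
    return max(sum(genome) for genome in pop)

random.seed(1)
print(evolve())  # the best genome approaches the all-ones optimum (fitness near 50),
                 # a configuration with probability ~2^-50 under random sampling
```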
Relation to Hazen & Wong: Hazen and Wong’s Law of Increasing Functional Information is essentially a statement of Universal Darwinism. They propose that any system will evolve (increase functional complexity) “if many different configurations of the system undergo selection for one or more functions.” This is Darwin’s algorithm in a nutshell: many configurations (variation), selection for function (differential survival), repeated over time yields evolution. What Hazen & Wong add is a precise focus on functional information as the quantity that increases, and they broaden “function” beyond just biological fitness. In their framework, function = “that which is being selected for.” In biology, that’s survival/reproduction; but they point out that even non-living complex systems have functions in a sense – persistence or performance criteria that determine what lasts. They identify three generic kinds of function in nature (more on this in a later section) – including basic stability and higher-order novelty – to cover living and non-living cases. By doing so, Hazen & Wong explicitly align with Universal Selection Theory, while making it quantitative (via functional info) and inclusive (applying to minerals, planets, etc., not just life).
In short, Hazen & Wong’s law can be seen as a formal “law of Universal Darwinism.” It shares the fundamental similarity that selection is the driver of complexity in both views. A contrast might be that some versions of Universal Darwinism remain conceptual, whereas Hazen & Wong aspire to cast it as a natural law akin to thermodynamics, potentially verifiable in diverse systems. Their work, for instance, suggests that just as we have laws of motion or entropy, there may be a law that complex systems (living or not) tend to evolve complexity through selection processes. This is a bold claim, and not all scientists are convinced such a law can be rigorously tested outside of biology (we will note one critique when discussing Assembly Theory). Nonetheless, the consonance with Universal Darwinism is clear: Hazen & Wong provide the conceptual glue that unites Darwin’s mechanism with complexity increase across the universe.
Major Transitions in Evolution (Hierarchical Complexity)
Within the biological realm, John Maynard Smith and Eörs Szathmáry’s concept of the Major Transitions in Evolution provides a high-level narrative of how complexity has increased in a series of leaps. In their influential 1995 work, they identified a series of pivotal transitions in the history of life, each of which created new levels of biological organization and new ways to transmit information. These transitions include, for example: the origin of replicating molecules from chemistry; the grouping of genes into a chromosome; the emergence of eukaryotic cells from symbiosis of simpler cells; the evolution of sexual reproduction; the development of multicellular organisms from single-celled ones; the formation of social insect colonies; and the rise of human societies with language.
What’s common across these is that smaller units (which once could replicate independently) come together to form a larger integrated unit with a division of labor. For instance, formerly free-living bacteria became mitochondria inside a eukaryote cell, or individual cells became the specialized cells of a multicellular body. In each case, the new level has new “functional wholes” – a cell, an organism, a society – that operates as a unit of selection. Along with integration, new information channels emerge (genetic, epigenetic, linguistic, etc.) to coordinate the larger unit. These transitions demonstrate that evolution doesn’t just accumulate small changes, but occasionally jumps to higher complexity by forming new hierarchies. Each major transition can be seen as a qualitative shift that opens fresh pathways for further complexity (e.g. multicellularity allowed organisms to evolve vastly larger size and specialized organs).
Major transitions theory, while specific to Earth’s biosphere, has a conceptual universality in highlighting hierarchical complexity and innovation. It suggests a general pattern: complexity grows by integrating simpler units into a cooperative whole, often requiring mechanisms to suppress conflict among units (e.g. genetic loyalty within multicellular organisms, or rules in societies). Some have speculated that if life exists elsewhere, it might undergo analogous transitions (though perhaps not identical). The framework has even been metaphorically extended – people talk about “major transitions” in technology (e.g. the digital revolution as a transition in information processing) or in cosmic evolution (formation of galaxies, stars, planets as transitions in complexity of matter).
Relation to Hazen & Wong: Major transitions in evolution offer concrete instances of functional information jumps in Hazen & Wong’s terms. For example, the origin of the eukaryotic cell was a huge increase in the functional information of the system – new capabilities (a nucleus, organelles) and new complexity that was exceedingly unlikely without the “selection experiments” of symbiosis and subsequent integration. Hazen & Wong’s law would interpret each major transition as a case where many configurations were tried and a new, highly functional configuration was selected and then stabilized. Notably, Hazen & Wong emphasize “novelty generation” as a key function that evolving systems tend to maximize. Major transitions are exactly the kinds of novel innovations that push complexity to a new level. In Hazen & Wong’s framework, once a new higher-level individual emerges (say multicellular life), it becomes a new unit on which selection can act, allowing further increase in functional information at that higher level. This idea dovetails with the “levels of selection” notion in major transitions (e.g. after multicellularity, selection can favor traits at the organism level, not just the cell level).
One point of contrast is that major transitions theory is descriptive (a narrative of what happened in Earth’s history), whereas Hazen & Wong aim for a law-like generality (applicable to any evolving complex system). Hazen & Wong’s law would regard major transitions as expected outcomes of the ever-increasing functional info – essentially milestones when a system finds a configuration that unlocks orders-of-magnitude more functional possibilities (like the leap from single cells to an organism with billions of cells specialized). In the language of Hazen’s earlier work, these could correspond to those “information discontinuities” or stepping points in the plot of information vs. function. Another insight: Prigogine’s dissipative systems view has been applied to major transitions as well, suggesting that each transition can be seen as lower-level entities becoming functions in a larger dissipative structure. This aligns with Hazen & Wong’s notion that function is what gets selected – once cells become merely functional parts of an organism, selection operates at the organism level. In summary, Hazen & Wong’s universal law provides a broader canvas on which major transitions are specific painted scenes – highly important examples of complexity jumps explained by selection for new functions. The similarity is the mechanism (selection drives both), and the difference is mainly scope (major transitions are biology-specific and emphasize hierarchical unit formation, whereas Hazen & Wong apply to any system and emphasize a continuous increase in functional info, punctuated by such jumps).
Autocatalytic Sets and the Emergence of Complexity
Not all complexity in evolving systems comes solely from external selection; some emerges spontaneously from network dynamics. A key concept here is autocatalytic sets, introduced by Stuart Kauffman (originally in the context of the origin of life). An autocatalytic set is essentially a self-sustaining network of reactions: a collection of entities (molecules, for example) where each entity is produced by some reaction catalyzed by other entities in the set, so the set as a whole catalyzes its own production. In simpler terms, the members of the set collectively help create each other. Once such a set exists, it can be chemically self-replicating – if you split it into two containers, each half can regrow the full network given raw ingredients, much like a cell dividing. This property led Kauffman to suggest that autocatalytic networks could have formed the first self-organizing metabolisms, kick-starting life without a highly improbable pre-designed molecule. Importantly, he argued that autocatalytic sets will almost inevitably arise given a sufficiently diverse “soup” of molecules, because the number of possible reactions grows combinatorially and a giant catalytic network will form at a threshold (a phase transition in complexity).
Autocatalytic set theory has been generalized beyond chemistry. Researchers have drawn analogies to ecosystems (species facilitating each other’s existence), economies (industries and products catalyzing others), and other complex networks. The core idea is self-reinforcing feedback: once a self-sustaining cycle of positive feedback closes, it will persist and potentially grow. This is a form of self-organization leading to functional complexity – the set has the “function” of self-maintenance. Unlike Darwinian selection, which requires competition and differential survival, an autocatalytic set can emerge without external selection – it’s an intrinsic emergent order. However, after it emerges, it can undergo Darwinian evolution. For instance, once you have an autocatalytic molecular network, different sets might compete for resources, and those networks that are more efficient (or that expand to use new molecules) will outcompete others – introducing a selection dynamic after self-organization.
Autocatalytic sets provide a conceptual bridge between chemistry and biology, suggesting that complexity can start by self-organization and then be refined by selection. Kauffman often emphasized that life’s complexity is due not just to “selection tinkering on random parts,” but also to spontaneous order giving selection something rich to work on. Modern formalizations (like RAF theory – Reflexively Autocatalytic and Food-generated sets) have given mathematical rigor to these ideas and shown that even in abstract models, such sets can arise and exhibit increasing complexity under certain conditions.
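The following Python sketch illustrates the spirit of the maxRAF procedure from RAF theory: repeatedly discard reactions whose reactants cannot be built from the food set or that lack a reachable catalyst; whatever survives is a reflexively autocatalytic, food-generated subnetwork. The two-reaction "chemistry" is invented purely for illustration.

```python
def closure(food, reactions):
    """All molecules reachable from the food set via the given reactions."""
    produced = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= produced and not set(products) <= produced:
                produced |= set(products)
                changed = True
    return produced

def max_raf(food, reactions):
    """Iteratively remove reactions with unreachable reactants or catalysts;
    the fixed point is a (possibly empty) reflexively autocatalytic set."""
    current = list(reactions)
    while True:
        reachable = closure(food, current)
        kept = [r for r in current
                if set(r[0]) <= reachable and set(r[2]) & reachable]
        if len(kept) == len(current):
            return kept
        current = kept

# Toy network: each reaction's product catalyses the other reaction,
# so together they form a collectively autocatalytic, food-generated set.
food = {"a", "b"}
reactions = [
    (("a", "b"), ("ab",), ("abab",)),    # a + b -> ab,     catalysed by abab
    (("ab", "ab"), ("abab",), ("ab",)),  # ab + ab -> abab, catalysed by ab
]
print(max_raf(food, reactions))  # both reactions survive the pruning
```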
Relation to Hazen & Wong: Autocatalytic sets highlight the role of function emerging from self-organization. In Hazen & Wong’s terms, an autocatalytic network possesses a critical function: it persists by reproducing itself, which is essentially the function of basic life. This would fall under the first two types of function Hazen & Wong discuss – stability and dynamic persistence. The set persists over time (dynamic stability) and thus is “selected” simply by virtue of being able to exist continually. Hazen & Wong’s law would count the formation of an autocatalytic set as a rise in functional information: out of all random chemical configurations, very few can sustain a catalytic cycle, so hitting on one is a big jump in function (hence high I(E)). However, Hazen & Wong’s emphasis is on selection filtering configurations, whereas autocatalytic sets emphasize spontaneous generation of a functional configuration. The two are not at odds – rather, they represent two sides of complexity’s coin: generation of novelty (by chance or self-organization) and selection of what works. Hazen & Wong explicitly include novelty as a driver, but they frame it as selection for novelty in evolving systems. One might ask: who “selects” an autocatalytic set? The answer can be phrased that the environment selects for persistence – once the set forms, it has a catalytic closure that makes it persist while other random molecular flukes fall apart. In that sense, persistence itself is the criterion of selection (aligning with Hazen & Wong’s notion that stability is a function that gets selected).
A subtle contrast: Kauffman at times suggested a certain inevitability or ubiquity of autocatalytic self-organization, which might sound less driven by rare selection events and more by general physical principles. Hazen & Wong’s framework can accommodate that: if self-organization makes a functional network nearly inevitable, then many configurations achieve the function, F(E) is large, and the associated functional information is modest – big jumps in functional information are reserved for functions that only rare configurations achieve. In practical terms, autocatalytic sets complement Hazen & Wong by highlighting the origin of new functions. Hazen & Wong’s law describes what happens once selection is operating; autocatalysis describes how a new self-sustaining function might arise, giving selection something to act on (for example, the first self-replicator that natural selection can then amplify). Both agree that once such a system exists, it will tend to persist and expand, leading to greater complexity (the autocatalytic set can grow or incorporate new reactions – a form of evolution of the network). In summary, autocatalytic set theory provides a mechanistic example of complexity emergence consistent with Hazen & Wong’s principles (persistence and function), but it underscores that not all complexity needs an external selector at first – some functions can emerge spontaneously and then subsequently undergo selection. Hazen & Wong’s broad framework is flexible enough to include that scenario under the umbrella of “selection for stability”: the autocatalytic cycle is “selected” by the physics of the environment because it’s stable once formed, and unstable arrangements disappear.
Open-Ended Evolution and Artificial Life
One of the grand challenges in understanding complexity is explaining open-ended evolution (OEE) – the kind of evolution that produces unbounded novelty and increasing complexity indefinitely. So far, the only unequivocal example we know is life on Earth, which over ~4 billion years has continually generated new forms, functions, and greater overall complexity (despite extinctions, the envelope of complexity has expanded). Artificial Life (ALife) researchers strive to create computer-simulated or artificial systems that exhibit similar open-ended evolution. They define OEE as the “continued ability to produce new adaptive traits or new kinds of entities, without hitting a ceiling”. In other words, an open-ended evolutionary system never runs out of novelty – it keeps inventing new things (new behaviors, new levels of organization).
In practice, achieving OEE in silico has proven very difficult. Pioneering ALife platforms like Tierra (Tom Ray, 1991) and Avida (developed by Ofria and Adami in the 1990s, and used in Lenski and colleagues’ 2003 experiments) demonstrated some evolutionary dynamics: digital “organisms” (self-replicating code) did undergo mutation and selection, sometimes producing surprising outcomes like coevolutionary “parasites” or complex logic functions evolving in genomes. For example, in the Avida system, digital organisms were rewarded for performing logical computations; over thousands of generations, populations evolved increasingly complex instruction sequences that could do more advanced computations (like the EQU logic operation) – a clear increase in functional complexity driven by selection. Avida experiments have even mirrored biological observations: most random mutations do nothing or are deleterious, but occasional mutations can increase function. This again underscores that innovation tends to come in rare, discrete steps – an observation consistent with the “islands of function” idea. However, while these systems showed complexity growth for a while, they often reached plateaus or the evolution tapered off. True open-endedness, where complexity keeps growing without bound, has not yet been fully realized in a closed digital system. ALife researchers have identified factors that might enable OEE: e.g. vast genotype space, rich environmental interactions, the ability for organisms to modify their environment, and perhaps multi-level selection. There are ongoing efforts to classify and measure open-ended innovation in simulations.
Nonetheless, ALife has been invaluable for testing evolutionary principles in a controlled setting. It provides evidence that Darwinian selection can indeed generate complexity in any information-bearing medium, not just DNA – a strong support for universality. Moreover, it has sharpened concepts like “adaptive novelty” (new useful traits) versus “complexity” (which might include non-functional elaboration). Ideally, an open-ended system produces adaptive complexity indefinitely. Some recent theoretical work ties open-ended evolution to ideas in theoretical computer science (e.g. infinite games, unbounded search spaces) and even to the concept of incompleteness (some argue that life’s evolution is an ever-extending process akin to a non-halting computation).
Relation to Hazen & Wong: Open-ended evolution is explicitly about the continuous generation of functional novelty, which is at the heart of Hazen & Wong’s third kind of function: novelty (innovation) selection. In their framework, an evolving system isn’t just trying to persist; it also explores new configurations that sometimes yield “startling new behaviors or characteristics”. Hazen & Wong cite examples like the evolution of photosynthesis, multicellularity, flight, and cognition as instances of novel functions that drove complexity leaps. This aligns perfectly with the ALife/ OEE perspective that ongoing innovation is what keeps complexity rising. In fact, Hazen and colleagues explicitly include novelty generation as a functional selection pressure – they suggest that nature has not just first-order selection for survival, but a kind of second-order selection for evolvability or exploration (a system that finds new solutions can outcompete one that stagnates, all else equal). This idea resonates with concepts like the evolution of evolvability in evolutionary theory and with novelty search algorithms in AI (which reward novelty for its own sake to avoid getting stuck).
One could view Hazen & Wong’s law as a broad description of open-ended evolution wherever it occurs: if selection is present and variation continues, functional information will keep increasing – i.e., the evolution is open-ended by default. However, Hazen & Wong also acknowledge that some systems have more evolutionary potential than others. For example, a simple mineral system might “evolve” new minerals until it exhausts chemical possibilities, then plateau; life, with its evolving innovation mechanisms, keeps going. They talk about concepts like “potential complexity” or “future complexity” being higher in systems that can generate more novelty. In ALife terms, this is like saying certain systems have a larger “adjacent possible” space they can continually access. Hazen & Wong’s emphasis on novelty as a function suggests that systems that incorporate novelty-seeking (or at least don’t punish novelty) will achieve greater complexity. This is quite similar to ALife insights that to get open-ended growth, the system must allow and reward exploration of never-seen-before traits.
One contrast is in actual demonstration: Hazen & Wong’s law is a theoretical generalization, whereas ALife is about concrete instances. So far, Hazen & Wong’s law can point to life’s history as proof-of-concept; ALife tries to reproduce that in silico. The difficulty ALife faces in achieving unlimited complexity ironically underscores Hazen & Wong’s point that it’s a special set of conditions that permit sustained complexity increase. Hazen & Wong would likely say those conditions are “many configurations + selection for function” – which sounds simple but in practice requires rich substrates and feedback (as ALife has shown). In summary, open-ended evolution is essentially the phenomenon that Hazen & Wong’s law attempts to capture. Both recognize that the key to continual complexity growth is continual innovation under selection. Hazen & Wong give a conceptual rule for it, and ALife provides experimental/testing ground – each informing the other. The similarities are strong, with Hazen & Wong essentially providing a theoretical validation of the notion that open-ended complexity (as seen in life) is a general principle, not an accident. The challenge remains to fully test this principle, something ALife and astrobiology will continue to pursue.
Assembly Theory and Universal Complexity Metrics
A contemporary development that approaches the question of complexity from a fresh angle is Assembly Theory, created by chemist Lee Cronin and astrobiologist Sara Imari Walker (and collaborators). Assembly Theory does not start with selection or function per se, but instead asks: how can we quantify the complexity of an object in a way that reflects its history (and possibly the influence of evolutionary processes)? The core concept is the Assembly Index (AI), defined as the minimum number of fundamental “assembly steps” required to build the object from basic building blocks. For example, consider a complex molecule: if the shortest possible pathway builds it in 10 joining steps (reusing previously assembled substructures along the way), its assembly index is 10. A simpler molecule requiring only 2 joins has AI = 2. The more steps needed, the more complex the object.
This theory crucially posits that objects with high assembly index are exceedingly unlikely to form randomly; they almost certainly are the product of an evolutionary or synthetic process. In effect, Assembly Theory connects complexity with a notion of “having been assembled via selection over time.” Cronin and Walker argue that if we find an object (say a molecule in a sample from an alien planet) with very high AI, it is a strong biosignature, evidence that some evolutionary process (life or technology) produced it. Assembly Theory has been demonstrated by analyzing mass spectra of chemical mixtures – they show that certain complex molecules (with many subcomponents) only appear in abundance if there was a biological source. It thus provides an experimentally tractable measure of complexity for real-world samples.
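To make the “minimum number of assembly steps” idea concrete, here is a toy Python sketch that computes an assembly index for short character strings rather than molecules (an exhaustive search, so practical only for small targets): at each step any two already-built fragments may be joined, and fragments can be reused, which is why repetitive, structured objects score lower than their raw size suggests.

```python
from itertools import product

def assembly_index(target: str) -> int:
    """Minimum number of pairwise join steps needed to build `target` from its
    single-character building blocks, reusing any fragment already assembled.
    (A toy, string-based analogue of the assembly index; exhaustive search.)"""
    basics = frozenset(target)            # building blocks: the distinct characters
    best = [len(target) - 1]              # upper bound: append one character at a time

    def search(pool: frozenset, steps: int) -> None:
        if target in pool:
            best[0] = min(best[0], steps)
            return
        if steps >= best[0]:
            return                        # prune: already no better than best known
        for a, b in product(pool, repeat=2):
            joined = a + b
            # only fragments that occur inside the target can be part of its build
            if joined not in pool and joined in target:
                search(pool | {joined}, steps + 1)

    search(basics, 0)
    return best[0]

print(assembly_index("ABCD"))    # 3: no reusable parts, so one join per extra character
print(assembly_index("ABAB"))    # 2: build "AB" once, then join two copies
print(assembly_index("ABABAB"))  # 3: reusing "AB" and "ABAB" beats the naive 5 joins
```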
What makes Assembly Theory cross-disciplinary and high-level is that it aims to be a general framework for complexity in physics, chemistry, biology, and even artifacts. It reconceives objects in the universe as not just static structures, but as things that carry a record of temporal assembly. In Assembly Theory, time (number of steps) becomes an inherent property of the object’s description, linking to the idea that complexity arises through incremental processes. This approach doesn’t “alter the laws of physics” but shifts perspective to focus on how objects could be built, thereby implicitly accounting for whether an object is the kind that natural processes could assemble by chance or not. If not, then some selective/evolutionary process must be at play.
Relation to Hazen & Wong: Assembly Theory is quite complementary to Hazen & Wong’s functional information framework, though there are differences in focus. Both are trying to pin down a universal signature of evolved complexity. Hazen & Wong focus on function and the probability of achieving it, whereas Assembly Theory focuses on structure and the minimal steps to build it. Interestingly, Assembly Theory’s “assembly index” can be thought of as related to the amount of selection or evolutionary time invested – Cronin has even described it as “the total amount of selection necessary to produce the object” in an ensemble. A high assembly index implies many steps, which in an evolutionary context means many generations of gradual improvement or accumulation of parts. This echoes Hazen & Wong’s idea that high functional information implies many trials and selective “choices” have occurred (since a very specific outcome was reached out of many possibilities).
However, the approaches diverge in what they require and how testable they are. Assembly Theory deliberately avoids needing to know the object’s function or the environment’s selection criteria. You can calculate AI from the structure alone. Hazen & Wong’s functional information, by contrast, demands defining a specific function and assessing the fraction of configurations that meet it – which can be experimentally challenging except in controlled cases (like selecting aptamers in a lab). As Sara Walker noted, “assembly theory relies on a number (AI) we can measure, whereas the law of functional information is harder to test objectively”. She is somewhat skeptical of Hazen & Wong’s law because without a clear way to measure functional info in arbitrary systems, it’s difficult to falsify or confirm. Assembly Theory attempts to provide that metrology – a practical measure – which Hazen acknowledges is tough for functional info beyond simple systems.
Despite this, the goals intersect: both predict that complexity in nature is not random but constrained by past selection/assembly processes, and both seek universal metrics to identify such complexity. Assembly Theory can be seen as providing a possible tool to implement Hazen & Wong’s vision. For instance, if one equates “function” with “ability to exist in quantity,” Assembly Theory’s criterion that high-AI objects don’t occur abundantly without evolution is akin to saying “there is no high-functional-information object without selection”. Indeed, Assembly Theory has been described as showing “objects that could have emerged only via evolution”.
One might highlight a philosophical difference: Hazen & Wong frame their idea as a law of nature about how systems behave, whereas Assembly Theory is a framework or model for quantifying complexity. The “law” aspect of Hazen & Wong is broad – it implies directionality (complex systems will evolve higher FI given selection). Assembly Theory doesn’t explicitly say “complexity will increase,” but it provides a way to tell if it has (and implicitly, if selection is ongoing, we expect to see higher and higher AI structures appearing over time). In practice, if you monitored an evolving system with Assembly Theory, you should see the maximal assembly index of objects rise over time – that would be assembly theory’s version of Hazen & Wong’s law.
So, similarities: Both approaches treat complexity as something that accumulates via incremental steps constrained by selection or assembly, and they both cross physics-chemistry-biology domains (Cronin & Walker explicitly mention applications from molecules to technology). Differences: Hazen & Wong require a functional context and speak in terms of information and selection, whereas Assembly Theory requires only an object’s structure and speaks in terms of steps and assembly. Also, Hazen & Wong’s law is newer and more conceptual – still gaining empirical support – while Assembly Theory has already seen some experimental validation in chemistry (though it too is under active debate and development).
In summary, Assembly Theory provides a concrete, quantitative complement to the functional information law. It gives researchers a way to identify artifacts of evolution (complex assembly) even without knowing the exact function being selected for. Hazen & Wong’s framework would view a high-AI object as one with high functional information for whatever function it ultimately serves – assembly is evidence that some function-driven selection produced it. Both point to the same intuitive conclusion: if you see something very complex and specific in nature, it likely got that way through a history of iterative processes, not by random chance.
Hazen and Wong’s Functional Information Framework
Having surveyed the landscape, let’s concisely outline Hazen & Wong’s own framework and its place in this cross-disciplinary context. In 2023, Michael L. Wong and Robert M. Hazen, along with colleagues, published a paper formulating what they called a “missing law of nature” about evolution in complex systems. They observed that many complex systems – from minerals and planets to life and culture – appear to increase in diversity, patterning, and functional complexity over time. To explain this, they proposed the Law of Increasing Functional Information. In simple terms, this law states:
If a system can explore many different configurations and there is selection for one or more functions, then over time the system’s functional information will increase.
In other words, complex systems evolve toward states of greater functional complexity whenever functional selection is at work. Crucially, this is intended to apply whether the system is living or nonliving. It elevates evolution (in the sense of adaptive change) to a general phenomenon, not restricted to biology.
Functional information (FI) here is the metric introduced by Hazen (2007) we discussed earlier: essentially the -log₂ of the fraction of configurations that achieve a given function. So increasing FI means that the system finds configurations that are progressively more improbable by chance yet higher in function. Imagine, for example, the evolution from random polymers to an efficient enzyme: initially, selection finds an enzyme whose activity is matched by only about one in a thousand random sequences (F = 10⁻³), so FI ≈ 10 bits. Later, an even more efficient enzyme appears whose activity is matched by only one in a million random sequences (F = 10⁻⁶), so FI ≈ 20 bits. FI increased as selection favored better function.
Hazen & Wong emphasize the role of selection for function. They identify three broad categories of “function” that nature selects for:
Stability (static persistence): Stable configurations tend to be selected simply because they last. For instance, in chemistry, a stable mineral or molecule remains while unstable ones break down. This is a kind of baseline selection present even in non-living systems – the environment “chooses” stable forms to persist (a rock formed under certain conditions will stick around if it’s stable under those conditions).
Dynamic persistence (activity with energy flow): Systems that are not static but can persist in a dynamic steady state are also selected. For example, a star is a dynamic system (fusion-powered) that persists for billions of years, or an ecosystem maintaining cycles. Hazen & Wong mention “ongoing supplies of energy” – configurations that can harness energy to maintain themselves (like a candle flame, or an organism’s metabolism) get selected to continue. This bridges to Prigogine’s dissipative structures: they are dynamically stable and hence survive.
Novelty (innovation): Perhaps the most distinctive, Hazen & Wong propose that evolving systems have a tendency to explore new configurations that sometimes confer radically new abilities. When these novel traits prove beneficial, they are retained – this is essentially selection for innovation. Biological evolution is rich with such novelties (e.g. eyes, flight, symbiosis). Even in non-life, one could consider that the Earth–mineral system “discovered” novel minerals when oxygen appeared, etc. Novelty is what prevents evolution from stagnating once basic functions are optimized.
These three are not mutually exclusive; they are layers of what Hazen & Wong call universal selection concepts. Darwinian survival is a mix of stability (don’t die) and dynamic function (get energy) in service of reproducing, and it often rewards novelty if it gives an edge.
Hazen & Wong illustrate their ideas with examples like mineral evolution (non-life) and life’s major innovations. They argue this law fills a gap: while physics had laws for energy, motion, etc., it lacked a general statement about complexity’s rise. In effect, they’re saying: whenever the universe sets up a situation with many possibilities and some criteria for success, the outcome is an increase in order/complexity tuned to that criteria. It’s a unifying principle connecting Darwinian evolution, chemical evolution, and even cosmic self-organization.
Of course, calling it a “law” invites scrutiny. As mentioned, one challenge is testing it quantitatively. How do we measure FI for “a planet with geology” or “a society’s culture”? Hazen & Wong acknowledge that measuring FI in complex real-world systems is often impractical. But they point to simpler testbeds (like digital life, or experimental chemical selection) as ways to validate the concept. Another point is whether this “law” is truly universal or has exceptions/boundary conditions. For instance, does FI always increase, or can it plateau or decrease? They likely would say if the preconditions (many configurations + selection for function) are sustained, it increases, but if the environment changes to remove selection pressure, it can halt (just like evolution can stagnate or regress in a stable niche).
Comparison to other frameworks: Hazen & Wong’s framework is in many ways a synthesis of the themes we’ve discussed:
It agrees with thermodynamics that you need energy flow (dynamic systems) for complexity, and it implicitly relies on the fact that the Second Law isn’t violated – complexity increases locally but at the expense of greater entropy exported (they don’t spell this out, but it’s understood).
It incorporates cybernetic ideas by framing things in terms of function (like a purpose) and implying feedback (selection is a feedback filter).
It quantifies information like Shannon/Kolmogorov but filters it through function, much as Orgel or Szostak wanted.
It is essentially an expression of Universal Darwinism (variation + selection universally yields adaptation).
It encompasses major transitions as just particularly large increases in FI due to new higher-level selection units emerging.
It values self-organization (stability selection) as a preliminary step of selection.
It explicitly celebrates open-ended innovation (novelty selection) as key to sustained complexity growth, aligning with ALife insights.
And it resonates with assembly theory in spirit by suggesting a lawlike trend that complexity accumulates via stepwise selection/assembly.
Perhaps the main contrast with others is boldness and scope: calling it a “Law of Nature” is a strong claim (most other frameworks stop short of that, treating evolution and complexity as phenomena arising from other known laws). Hazen & Wong are effectively elevating the principle of selection to a fundamental status. Another difference is focus: some frameworks like maximum entropy production or autocatalysis might suggest complexity arises as a side-effect of thermodynamics, whereas Hazen & Wong put function and selection at center stage (teleonomy, not just thermodynamics). This invites the question: could there be increasing complexity without selection? Hazen & Wong would argue no, not sustained – a view supported by many (random increases tend not to last or accumulate).
In conclusion, Hazen & Wong’s functional information framework stands as a high-level integrative theory tying together selection, information, and emergence to explain the natural “arrow of complexity.” It is not so much overturning earlier ideas as assembling them into a unified principle: Complexity grows because many parts are tried and the ones that work (by whatever criterion) are kept. Over time, this ratchets functional information upward, giving us atoms to galaxies, microbes to minds. The next step for this framework will be to further reconcile it with empirical measures (where Assembly Theory might help) and to test its predictions in domains like astrobiology (e.g., are there telltale signs of this law on other planets?). The fact that it aligns with so many independent lines of thought (from Darwin and Szostak to Prigogine and Kauffman) is encouraging – it suggests they’re all glimpsing pieces of the same puzzle.
Synthesis: Common Themes and Contrasts
Bringing these perspectives together, several common themes emerge regarding increasing functional complexity:
Energy and Entropy: All agree that you can’t get complexity without a flow of energy. Whether expressed as negative entropy consumption (Schrödinger/Bertalanffy) or as sustaining dissipative structures (Prigogine), complex systems require being open and far from equilibrium. Hazen & Wong implicitly assume this (their law applies only when many configurations can be realized, which in practice needs energy and matter flux).
Information and Function: There’s a shift from seeing complexity as just intricate structure to seeing it as information that does something. Hazen & Wong explicitly champion this shift, following Szostak and others. Many frameworks, even if not using the term “functional information,” effectively focus on “organized complexity” (e.g. effective complexity, specified complexity) versus randomness. The functional information metric provides a way to quantify that across domains.
Selection and Adaptation: Darwinian selection is a unifying mechanism recognized in various guises: natural selection in biology, “blind variation and selective retention” in ideas, selection of stable forms in chemistry, etc. Hazen & Wong’s law is basically selection writ large. Universal Darwinism and Hazen & Wong share the view that selection is the driver of increased order. Even self-organization theories often end up requiring a selection-like pruning (stable states persist, unstable ones don’t). The similarity is clear: without some criterion for success (fitness, stability, etc.), you just get diversity or chaos, not cumulative complexity.
Emergence of Novelty: Another recurring idea is that complexity increases not smoothly but by exploring novel combinations. Concepts like major transitions and open-ended evolution emphasize the appearance of new levels or new functions that allow further complexity. Hazen & Wong bake novelty in as a fundamental function. This addresses a potential criticism of a naive Darwinism: if you only select for the same function, you might just get optimization, not new complexity. The frameworks collectively suggest that systems that allow innovation (through recombination, exploration, etc.) will climb the complexity ladder further.
Hierarchies and Integration: From systems theory’s synergy to major transitions, the idea that complexity builds by creating higher-level units is widespread. Hazen & Wong’s law doesn’t explicitly mention “hierarchy,” but in practice selecting for function often leads to new emergent wholes (like the cooperative assemblies in major transitions). Each framework acknowledges that new emergent properties (like a cell’s function emerging from chemicals, or an economy’s behavior emerging from firms) characterize greater complexity.
Metrics and Evidence: Different approaches propose different ways to measure complexity: Shannon bits, algorithmic complexity, functional information bits, assembly index, energy rate density, etc. Each captures an aspect, but all struggle with fully capturing “the essence” of complexity. Hazen & Wong’s FI is one attempt, assembly index is another – interestingly, both grounded in counting something related to improbability due to history. Empirical evidence of increasing complexity comes from many sources: the fossil record (genomic complexity, body plans), the mineral record (mineral count rising over time), cosmic history (structure formation). The law of increasing FI attempts to generalize all those trajectories.
Regarding contrasts among the theories and with Hazen & Wong’s framework:
Some frameworks like maximum entropy production (MEP) or dissipation-based ideas imply complexity grows as a side-effect of systems trying to dissipate energy gradients. Hazen & Wong instead couch it in terms of function achieving selection. One could reconcile them by noting that dissipation could be seen as a “function” that gets selected (e.g. a system that dissipates energy well might persist, per England’s hypothesis). But Hazen & Wong don’t claim an inherent drive toward maximum entropy production; they claim a drive toward better function, which in living systems often correlates with efficient energy use, but in general is a broader concept.
Self-organization vs. Selection: Some might pit Kauffman’s self-organization or Prigogine’s spontaneous order against strict selectionism (the classic debate of “order for free” vs “order for purpose”). Hazen & Wong’s framework actually marries the two: selection for stability is basically nature harnessing self-organization. But Hazen & Wong would likely argue that to go beyond basic order (crystals, convection) to complex adaptive function (life, technology), selection (Darwinian style) is necessary. A crystal is complex but not functionally complex in the sense of performing a computation or metabolism. So Hazen & Wong implicitly distinguish simple ordered complexity (which self-organization can produce) from open-ended functional complexity (which needs selection and iteration).
Teleology concerns: By focusing on function, one must be careful not to imply conscious intent or a teleological “drive.” Hazen & Wong’s law is non-teleological – it doesn’t say the universe wants complexity, just that if function yields survival, complexity will result. Cybernetics and systems theory sometimes used goal-language (teleological metaphors), but as scientific theories they reduce it to mechanisms (feedback, etc.). Hazen & Wong continue in that spirit: function is just what works, not an imposed goal.
Scope limitations: Hazen & Wong assert universality, but some frameworks are narrower. Major transitions is biology-specific (though analogies exist elsewhere), Assembly Theory currently mostly about chemistry and biosignatures, ALife about digital organisms, etc. Hazen & Wong’s boldness is tying all these together. If, say, one day we find a planet with life or observe complexity emerging in some novel physics experiment, Hazen & Wong’s law would be put to the test – do those systems also show increasing functional info? If yes, it strengthens the case that it’s universal. If we found exceptions (e.g. a complex system that persists but never innovates or increases FI), we’d need to refine the law’s conditions.
In summation, all these cross-disciplinary ideas contribute pieces of a puzzle: complexity grows when there’s a way to preserve the results of exploratory processes. Thermodynamics gives the playground (energy flux), information theory gives the language (bits and probability), selection gives the algorithm (filtering and accumulation), and systems theory gives the context (interacting parts and emergent wholes). Hazen & Wong’s functional information framework sits at the intersection of these, proposing a unifying principle that captures the essence of what many of these theories indicate. It emphasizes that “functional complexity will accumulate” as a law-like trend, much as gravity makes masses accumulate – not inevitably everywhere, but reliably where the conditions allow (planets around stars for gravity; many configurations with selection for FI for complexity).
The idea of increasing functional complexity over time is thus supported by a rich tapestry of theories. They agree more than they disagree: all point to a cosmos in which, given the right conditions, simple beginnings can blossom into complex, information-rich systems – from stars and minerals to life and intelligence. Hazen & Wong’s framework crystallizes this understanding into a concise form, inviting further cross-domain research. Whether one approaches it through the lens of a biologist, a physicist, or a computer scientist, the message is similar: complexity emerges naturally, not miraculously, via universal principles of organization, information, and selection. And as our understanding deepens (through efforts like ALife experiments or new theories like Assembly Theory), we move closer to a unified science of complexity that can explain our origins and perhaps even predict complexities to come.
Sources:
Schrödinger, E. (1944). What is Life? (discussion of negative entropy)
Von Bertalanffy, L. (1968). General System Theory (open systems import free energy to build order)
Prigogine, I. & Stengers, I. (1984). Order Out of Chaos (dissipative structures and self-organization in far-from-equilibrium systems)
Wiener, N. (1948). Cybernetics (feedback and control in complex systems)
Ashby, W. R. (1956). An Introduction to Cybernetics (law of requisite variety in complex control)
Shannon, C. (1948). "A Mathematical Theory of Communication" (information entropy measure)
Kolmogorov, A. (1965). "Three Approaches to the Quantitative Definition of Information" (algorithmic complexity formalization)
Szostak, J. (2003). "Functional information: Molecular messages" Nature (proposal of functional information for biopolymers)
Hazen, R. et al. (2007). "Functional information and the emergence of biocomplexity" PNAS (defines functional information metric; experiments with Avida)
Chaisson, E. (2001). Cosmic Evolution (free energy rate density as a complexity metric over cosmic time)
Maynard Smith, J. & Szathmáry, E. (1995). The Major Transitions in Evolution (hierarchical jumps in complexity)
Kauffman, S. (1993). The Origins of Order (autocatalytic sets and self-organization in origin of life)
Bedau, M., Packard, N., et al. (2000-2019). (open-ended evolution in ALife, ongoing novelty as criterion)
Cronin, L. & Walker, S. (2021). "Assembly Theory" Nature Communications (assembly index as complexity measure; biosignatures)
Hazen, R., Wong, M., et al. (2023). "On the roles of function and selection in evolving systems" PNAS (Law of Increasing Functional Information; selection for stability, persistence, novelty in all complex systems)
Wong, M. & Hazen, R. (2023). Interviews/Articles (Phys.org, Sci.News, Quanta) discussing the new law and its implications.