Evolutionary Game Theory: How Strategies Survive and Spread

Why do animals fight with ritualized displays instead of killing each other?

Why do bacteria cooperate sometimes and defect other times?

Why do humans feel guilt, reciprocate favors, and punish cheaters — even when it’s costly?

Evolutionary game theory answers these questions by applying game theory to evolution. Instead of assuming rational players, it asks:

Which strategies survive and spread over time?

The result? A framework that explains cooperation in nature, the evolution of social behavior, and even the dynamics of culture and technology.


From Rational Players to Replicating Strategies

Traditional game theory:

  • Rational players choose strategies
  • Players maximize utility
  • Equilibria are stable because rational players won’t deviate

Evolutionary game theory:

  • Strategies are encoded in genes (or memes, behaviors, etc.)
  • More successful strategies reproduce more
  • Equilibria are stable because unsuccessful strategies die out
```mermaid
graph TD
  A[Classical Game Theory] --> B[Rational players]
  B --> C[Conscious optimization]
  C --> D[Nash Equilibrium]
  E[Evolutionary Game Theory] --> F[Replicating strategies]
  F --> G[Natural selection]
  G --> H[Evolutionarily Stable Strategy]
  D --> I{"Same predictions often,<br/>but different mechanisms"}
  H --> I
```

No rationality required — strategies that work spread, strategies that fail disappear.

Key insight: Evolutionary dynamics can lead to the same equilibria as rational choice, but through blind selection rather than conscious optimization.


Evolutionarily Stable Strategy (ESS)

Definition: A strategy is evolutionarily stable if, once virtually the whole population plays it, no rare mutant strategy can invade.

Formal conditions (John Maynard Smith):

A strategy $S$ is an ESS if for any mutant strategy $M \neq S$:

  1. $\text{Payoff}(S, S) \geq \text{Payoff}(M, S)$ — the incumbent does at least as well against itself
  2. If equal, then $\text{Payoff}(S, M) > \text{Payoff}(M, M)$ — the incumbent does better against the mutant than the mutant does against itself
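These two conditions are mechanical enough to check in code. Here is a quick Python sketch (the `is_ess` helper is my own illustration, not a standard library function) that tests them for a small symmetric game, using Prisoner's Dilemma payoffs, where Always Defect is the ESS:

```python
def is_ess(payoff, s, strategies):
    """Check Maynard Smith's two ESS conditions for pure strategy s.

    payoff[(a, b)] is the payoff to a player using a against an opponent using b.
    """
    for m in strategies:
        if m == s:
            continue
        # Condition 1: the incumbent must do at least as well against itself
        if payoff[(s, s)] < payoff[(m, s)]:
            return False
        # Condition 2: on a tie, the incumbent must beat the mutant in mutant matches
        if payoff[(s, s)] == payoff[(m, s)] and payoff[(s, m)] <= payoff[(m, m)]:
            return False
    return True

# Prisoner's Dilemma: defection strictly dominates, so "D" is an ESS
pd = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
print(is_ess(pd, "D", ["C", "D"]))  # True
print(is_ess(pd, "C", ["C", "D"]))  # False
```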

Intuition:

  • An ESS resists invasion by mutants
  • Even if a few mutants appear, they can’t spread
  • The ESS is stable against small perturbations
```mermaid
graph LR
  A[Population: 99% Strategy S] --> B[Mutant appears: 1% Strategy M]
  B --> C{Is S an ESS?}
  C -->|Yes| D[Mutant fails to spread]
  C -->|No| E[Mutant invades population]
  D --> F[Population returns to 100% S]
  E --> G[New equilibrium emerges]
```

Relationship to Nash Equilibrium:

  • Every ESS is a Nash Equilibrium
  • But not every Nash Equilibrium is an ESS
  • ESS is a stronger stability concept

The Hawk-Dove Game: Why Animals Don’t Fight to the Death

One of the most famous models in evolutionary game theory.

Setup:

  • Two animals compete for a resource (worth $V$)
  • Each can play Hawk (escalate to dangerous fight) or Dove (display, retreat if challenged)

Payoff matrix:

             Opponent: Hawk   Opponent: Dove
You: Hawk    $(V-C)/2$        $V$
You: Dove    $0$              $V/2$

Where:

  • $V$ = value of resource
  • $C$ = cost of injury from fighting

Example: $V = 10$, $C = 20$

             Opponent: Hawk   Opponent: Dove
You: Hawk    $-5$             $10$
You: Dove    $0$              $5$
```mermaid
graph TD
  A[Two Animals Meet] --> B{What strategy?}
  B -->|Both Hawk| C["Dangerous fight<br/>Winner gets V, loser pays C<br/>Average: (V-C)/2"]
  B -->|Both Dove| D["Ritual display<br/>Split resource<br/>Each gets V/2"]
  B -->|Hawk vs Dove| E["Hawk wins immediately<br/>Hawk: V, Dove: 0"]
```

Analysis:

Is pure Hawk an ESS?

  • If everyone plays Hawk, payoff = $(V-C)/2 = -5$
  • If a Dove mutant appears, it gets $0$ (better than $-5$!)
  • No, pure Hawk is not stable when $C > V$

Is pure Dove an ESS?

  • If everyone plays Dove, payoff = $V/2 = 5$
  • If a Hawk mutant appears, it gets $V = 10$ (better than $5$!)
  • No, pure Dove is not stable

Mixed strategy ESS:

Play Hawk with probability $p = V/C$ (this mixed strategy is the ESS when $C > V$; if $V \geq C$, pure Hawk is the ESS)

In our example: $p = 10/20 = 0.5$

At equilibrium:

  • Hawk is played in 50% of encounters (each individual mixes 50/50, or equivalently half the population plays each pure strategy)
  • Hawk and Dove earn the same expected payoff
  • No mutant can invade
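We can verify the indifference property that pins down the mixed ESS: at $p = V/C$, Hawk and Dove earn identical expected payoffs against the population mix. A short Python check using the example values above (the function names are mine):

```python
V, C = 10.0, 20.0
p = V / C  # ESS fraction of Hawk play (valid because C > V)

def hawk_payoff(x):
    """Expected payoff to a Hawk when a fraction x of opponents play Hawk."""
    return x * (V - C) / 2 + (1 - x) * V

def dove_payoff(x):
    """Expected payoff to a Dove against the same population."""
    return (1 - x) * V / 2

print(p)               # 0.5
print(hawk_payoff(p))  # 2.5
print(dove_payoff(p))  # 2.5 -- indifference: neither pure strategy can gain
```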

Real-world interpretation:

  • Animals use a mix of aggressive and peaceful strategies
  • Explains ritualized combat in many species
  • Fighting is costly, so pure aggression doesn’t dominate
```mermaid
graph TD
  A[Hawk-Dove Evolution] --> B[Pure Hawk population]
  B --> C[Dove mutants invade:<br/>avoid costly fights]
  C --> D[Mixed population]
  D --> E[Hawk mutants invade:<br/>exploit Doves]
  E --> D
  D --> F[Equilibrium:<br/>p = V/C Hawks,<br/>1-p Doves]
```

Why animals don’t fight to the death: Fighting is too costly. Evolution favors a mix of aggression and restraint.


Replicator Dynamics: How Strategies Spread

The replicator equation models how strategy frequencies change over time.

Key idea: Strategies that earn above-average payoffs grow, strategies with below-average payoffs shrink.

Formula:

$$\frac{dx_i}{dt} = x_i \left( \pi_i - \bar{\pi} \right)$$

Where:

  • $x_i$ = fraction of population playing strategy $i$
  • $\pi_i$ = payoff to strategy $i$
  • $\bar{\pi}$ = average payoff in population

Interpretation:

  • If $\pi_i > \bar{\pi}$: Strategy $i$ grows
  • If $\pi_i < \bar{\pi}$: Strategy $i$ shrinks
  • If $\pi_i = \bar{\pi}$: Strategy $i$ stable
```mermaid
graph LR
  A[Strategy Performance] --> B{Payoff vs Average}
  B -->|Above average| C[Strategy spreads:<br/>more offspring]
  B -->|Below average| D[Strategy shrinks:<br/>fewer offspring]
  B -->|Equal to average| E[Strategy stable:<br/>frequency unchanged]
  C --> F[Replicator Dynamics]
  D --> F
  E --> F
```

Example: Hawks and Doves

Let $x$ = fraction of Hawks.

Hawk payoff: $\pi_H = x \cdot \frac{V-C}{2} + (1-x) \cdot V$

Dove payoff: $\pi_D = x \cdot 0 + (1-x) \cdot \frac{V}{2}$

Replicator dynamics:

$$\frac{dx}{dt} = x(1-x)(\pi_H - \pi_D)$$

Equilibrium: $\pi_H = \pi_D$, which gives $x^* = V/C$

Stability: from any interior starting point ($0 < x < 1$), the population converges to this equilibrium.
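This convergence is easy to see numerically. The sketch below (my own illustration) integrates the Hawk-Dove replicator equation with a simple Euler step; from either a Dove-heavy or a Hawk-heavy start, the Hawk fraction settles at $V/C = 0.5$:

```python
V, C = 10.0, 20.0

def payoffs(x):
    """Expected payoffs to Hawk and Dove when a fraction x plays Hawk."""
    pi_h = x * (V - C) / 2 + (1 - x) * V
    pi_d = (1 - x) * V / 2
    return pi_h, pi_d

def evolve(x0, dt=0.01, steps=10_000):
    """Euler integration of dx/dt = x(1-x)(pi_H - pi_D)."""
    x = x0
    for _ in range(steps):
        pi_h, pi_d = payoffs(x)
        x += dt * x * (1 - x) * (pi_h - pi_d)
    return x

print(round(evolve(0.1), 3), round(evolve(0.9), 3))  # both converge to 0.5
```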


The Evolution of Cooperation

Puzzle: Why does cooperation exist?

Cooperation seems irrational:

  • Helping others is costly
  • Selfish strategies should dominate
  • Yet cooperation is everywhere in nature

Evolutionary game theory provides answers:

1. Kin Selection

“I would lay down my life for two brothers or eight cousins.” — J.B.S. Haldane

Mechanism: Genes for cooperation spread if they help relatives (who share genes).

Hamilton’s Rule: Cooperation evolves if:

$$r \cdot B > C$$

Where:

  • $r$ = relatedness coefficient (0.5 for siblings, 0.125 for cousins)
  • $B$ = benefit to recipient
  • $C$ = cost to actor

Example: Bee workers sacrificing for the hive (sisters share 75% of genes due to haplodiploidy).
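Hamilton's rule is a one-line inequality, so a check needs only a few lines of Python (the benefit and cost numbers here are illustrative, not empirical):

```python
def cooperation_favored(r, B, C):
    """Hamilton's rule: altruism can spread when r * B > C."""
    return r * B > C

# Helping a sibling (r = 0.5) at cost 1 for benefit 3: favored
print(cooperation_favored(r=0.5, B=3.0, C=1.0))    # True  (0.5 * 3 > 1)

# The same act toward a cousin (r = 0.125): not favored
print(cooperation_favored(r=0.125, B=3.0, C=1.0))  # False (0.125 * 3 < 1)
```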


2. Direct Reciprocity (Repeated Games)

Mechanism: Cooperation pays off in repeated interactions.

Strategy: Tit-for-Tat

  • Start by cooperating
  • Then copy opponent’s last move

Robert Axelrod’s tournament (1980s):

  • Computer strategies compete in repeated Prisoner’s Dilemma
  • Tit-for-Tat won
  • Simple, nice, forgiving, and retaliatory
```mermaid
sequenceDiagram
  participant A as Player A: Tit-for-Tat
  participant B as Player B: Strategy X
  Note over A,B: Round 1
  A->>B: Cooperate
  B->>A: Cooperate
  Note over A,B: Both benefit
  Note over A,B: Round 2
  A->>B: Cooperate (copy B's move)
  B->>A: Defect (betray A)
  Note over B: B gains, A loses
  Note over A,B: Round 3
  A->>B: Defect (punish B)
  B->>A: Defect
  Note over A,B: Both suffer
  Note over A,B: Round 4
  A->>B: Defect (B still defecting)
  B->>A: Cooperate (try to restore)
  Note over A,B: Round 5
  A->>B: Cooperate (forgive)
  B->>A: Cooperate
  Note over A,B: Cooperation restored
```
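The round-by-round logic is straightforward to simulate. This Python sketch (the `play` helper and strategy functions are my own, not Axelrod's tournament code) pits Tit-for-Tat against itself and against Always Defect for six rounds, using the usual payoffs of 3 for mutual cooperation, 1 for mutual defection, and 5/0 for defecting against a cooperator:

```python
def play(strat_a, strat_b, rounds=6):
    """Iterated Prisoner's Dilemma; returns total payoffs (a, b)."""
    payoff = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
              ("D", "C"): (5, 0), ("C", "D"): (0, 5)}
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the opponent's history
        hist_a.append(a)
        hist_b.append(b)
        pa, pb = payoff[(a, b)]
        score_a += pa
        score_b += pb
    return score_a, score_b

def tit_for_tat(opp):
    return "C" if not opp else opp[-1]  # cooperate first, then copy

def always_defect(opp):
    return "D"

print(play(tit_for_tat, tit_for_tat))    # (18, 18): mutual cooperation every round
print(play(tit_for_tat, always_defect))  # (5, 10): one sucker round, then mutual defection
```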

Why Tit-for-Tat works:

  • Nice: Never defects first
  • Retaliatory: Punishes defection immediately
  • Forgiving: Returns to cooperation after punishment
  • Clear: Easy for others to understand and predict

Conditions for cooperation:

  • Interactions must repeat
  • Players must value the future (a high discount factor $\delta$)
  • Strategies must be recognizable

3. Indirect Reciprocity (Reputation)

Mechanism: Help those with good reputations; gain reputation by helping.

Example: “Gossip” allows information about who cooperates to spread.

Image scoring:

  • Cooperators have high scores
  • Defectors have low scores
  • People preferentially help high-scorers

Evolutionary stability: Cooperators cluster and help each other, excluding defectors.


4. Network Reciprocity (Spatial Structure)

Mechanism: Cooperators form clusters in spatial networks.

Key insight: If cooperators are near each other, they benefit from mutual cooperation while defectors on the edge face other defectors.

```mermaid
graph TD
  A[Evolution of Cooperation] --> B[Kin Selection]
  A --> C[Direct Reciprocity]
  A --> D[Indirect Reciprocity]
  A --> E[Network Reciprocity]
  A --> F[Group Selection]
  B --> G[Help relatives]
  C --> H[Tit-for-Tat in repeated games]
  D --> I[Reputation systems]
  E --> J[Spatial clustering]
  F --> K[Groups of cooperators<br/>outcompete groups<br/>of defectors]
```

The Evolution of Fairness

Ultimatum Game from an evolutionary perspective:

Setup:

  • Player 1 (Proposer) offers a split of $10
  • Player 2 (Responder) accepts or rejects
  • If rejected, both get $0

Rational prediction: Proposer offers $1, Responder accepts (something > nothing).

Empirical reality:

  • Proposers typically offer $4-5
  • Responders frequently reject offers below $2–3

Why?

Evolutionary explanation:

  • Humans evolved in small groups with repeated interactions
  • Reputation matters: Being known as a fair partner attracts future cooperation
  • Punishing unfairness signals that you won’t be exploited, deterring future low offers
  • Fairness norms are evolutionarily stable in such environments
```mermaid
graph TD
  A[Why Do Humans Value Fairness?] --> B[Evolution in small groups]
  B --> C[Repeated interactions]
  C --> D[Reputation crucial]
  B --> E[Unfair individuals excluded]
  E --> F[Lost cooperation opportunities]
  D --> G[Fair strategies spread]
  F --> G
  G --> H[Modern humans inherit<br/>fairness preferences]
```

Cultural Evolution: Memes and Strategies

Evolutionary game theory applies beyond biology:

Cultural strategies (memes) also:

  • Replicate (through imitation)
  • Vary (through innovation)
  • Are selected (based on success)

Examples:

Language

  • Languages compete for speakers
  • More successful languages spread
  • Languages with network effects dominate (English as global lingua franca)

Technology

  • QWERTY keyboard vs Dvorak
  • VHS vs Betamax
  • iOS vs Android

Network effects create path dependence: the strategy that gains early adoption becomes entrenched, even if better alternatives exist.
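The lock-in story can be captured with a toy model. In the sketch below (entirely illustrative: the quality numbers and the rule that each adopter picks the higher quality-times-installed-base score are my assumptions), a worse product with an early head start wins the whole market:

```python
def adoption_race(steps=1000, quality_a=1.0, quality_b=1.2,
                  start_a=10, start_b=5):
    """Each new adopter picks the product with the higher
    (intrinsic quality x installed base) score -- a crude network effect."""
    users_a, users_b = start_a, start_b
    for _ in range(steps):
        if quality_a * users_a >= quality_b * users_b:
            users_a += 1
        else:
            users_b += 1
    return users_a, users_b

# A is 20% worse but starts with twice the users -- and takes every new adopter
print(adoption_race())  # (1010, 5): the early leader locks in despite lower quality
```

Swapping the head start reverses the outcome: `adoption_race(start_a=5, start_b=10)` returns `(5, 1010)`. Path dependence in miniature.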

```mermaid
graph LR
  A[Cultural Evolution] --> B[Technology A]
  A --> C[Technology B: Better]
  B --> D[Early adoption: 10%]
  C --> E[Early adoption: 5%]
  D --> F[Network effects kick in]
  F --> G[More users join A]
  G --> H[A dominates market]
  E --> I[B has fewer users]
  I --> J[B dies out]
  H --> K["Lock-in: even if B is better,<br/>A is now entrenched"]
```

Social Norms

  • Driving on right vs left
  • Tipping customs
  • Business attire

Evolutionary stability: Once a norm is established, deviation is costly, so the norm persists.


Rock-Paper-Scissors Dynamics

Some evolutionary systems have no stable equilibrium — instead, they cycle.

Example: Predator-prey relationships, competing species, immune systems vs pathogens.

Rock-Paper-Scissors Game:

          Rock   Paper   Scissors
Rock       0      -1       +1
Paper     +1       0       -1
Scissors  -1      +1        0

No pure ESS:

  • Rock dominates Scissors
  • But Paper dominates Rock
  • But Scissors dominates Paper

Mixed equilibrium: each strategy played with probability 1/3. This is a Nash equilibrium, but not an ESS: a mutant playing any pure strategy does exactly as well against it, so the stability condition fails.

But in populations, we see cycling:

  • Rock becomes common → Paper invades
  • Paper becomes common → Scissors invades
  • Scissors becomes common → Rock invades
  • Repeat forever
```mermaid
graph LR
  A[Rock dominates] --> B[Paper invades]
  B --> C[Paper dominates]
  C --> D[Scissors invades]
  D --> E[Scissors dominates]
  E --> F[Rock invades]
  F --> A
```
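Replicator dynamics make the cycling visible. This Python sketch (step size and starting point are my choices) iterates the replicator equation on the zero-sum Rock-Paper-Scissors matrix; the Rock frequency keeps swinging around $1/3$ instead of settling:

```python
# Replicator dynamics on the zero-sum Rock-Paper-Scissors payoff matrix
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]  # A[i][j] = payoff of strategy i against strategy j

def step(x, dt=0.01):
    """One Euler step of the replicator equation."""
    payoffs = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    avg = sum(x[i] * payoffs[i] for i in range(3))
    return [x[i] + dt * x[i] * (payoffs[i] - avg) for i in range(3)]

x = [0.5, 0.3, 0.2]  # start away from the (1/3, 1/3, 1/3) equilibrium
rock_freq = []
for _ in range(5000):
    x = step(x)
    rock_freq.append(x[0])

# Rock's frequency oscillates around 1/3 rather than converging
print(round(min(rock_freq), 2), round(max(rock_freq), 2))
```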

Real-world examples:

  • Lizard mating strategies (side-blotched lizards have three morphs that cycle)
  • Bacterial strategies (cooperative, defector, and enforcer strains cycle)
  • Immune system vs pathogens (virulence and resistance cycle)

Key insight: Not all evolutionary dynamics lead to stable equilibria. Cycles and chaos are possible.


Applications Beyond Biology

Business Strategy

Market dynamics resemble evolutionary processes:

  • Firms with better strategies grow (more market share, more resources)
  • Firms with worse strategies shrink or die
  • Innovation = mutation
  • Competition = selection

Example: Platform competition

  • Network effects create winner-take-all dynamics
  • Early advantage compounds
  • Explains dominance of Google, Facebook, Amazon

Algorithm Evolution

Genetic algorithms use evolutionary principles:

  • Generate population of candidate solutions
  • Evaluate fitness
  • Select best performers
  • Mutate and recombine
  • Repeat
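Those five steps translate almost line for line into code. Here is a minimal genetic algorithm in Python applied to the classic OneMax toy problem (maximize the number of 1 bits); the population size, mutation rate, and seed are arbitrary illustrative choices:

```python
import random

random.seed(0)  # fixed seed for reproducibility

def genetic_algorithm(fitness, length=20, pop_size=30, generations=60,
                      mutation_rate=0.02):
    """Minimal genetic algorithm: selection, crossover, mutation."""
    # 1. Generate a random population of bit-string candidates
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # 2-3. Evaluate fitness and keep the best half as parents
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # 4. Recombine (one-point crossover) and mutate
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        pop = children  # 5. Repeat
    return max(pop, key=fitness)

best = genetic_algorithm(sum)  # OneMax: fitness = number of 1 bits
print(sum(best))  # typically at or near the maximum of 20
```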

Used for:

  • Optimization problems
  • Machine learning
  • Game playing (e.g., neural network weights)

Social Dynamics

Opinion dynamics follow replicator-like equations:

  • Popular opinions spread
  • Unpopular opinions shrink
  • Social influence = selection pressure

Explains:

  • Fads and fashions
  • Political polarization
  • Viral content

Key Takeaways

  1. Evolutionary game theory applies game theory to evolving populations, not rational individuals
  2. ESS (Evolutionarily Stable Strategy) — a strategy that resists invasion by mutants
  3. Hawk-Dove game explains why animals don’t fight to the death (mixed strategies evolve)
  4. Replicator dynamics — strategies with above-average payoffs grow, below-average shrink
  5. Cooperation evolves through kin selection, repeated games, reputation, and spatial structure
  6. Cultural evolution — memes and strategies evolve like genes (technology, language, norms)
  7. Rock-paper-scissors dynamics — some systems cycle instead of reaching stable equilibrium
  8. Applications — biology, business strategy, algorithms, social dynamics

Evolutionary game theory shows that sophisticated strategic behavior can emerge without conscious planning — through the blind process of variation and selection.


Practice Problem

Consider a population where individuals play Prisoner’s Dilemma repeatedly. Two strategies exist:

  • Always Defect (D): Defect in every round
  • Tit-for-Tat (TFT): Cooperate first, then copy opponent

Payoffs per round:

  • Both cooperate: (3, 3)
  • Both defect: (1, 1)
  • One defects, other cooperates: (5, 0), with 5 to the defector and 0 to the cooperator

Assume discount factor $\delta = 0.9$ (future rounds valued highly).

Can Tit-for-Tat invade a population of Always Defect? Can Always Defect invade a population of Tit-for-Tat?

Solution

Can TFT invade a population of D?

When TFT is rare:

  • TFT almost always meets D players
  • Against D, TFT cooperates in the first round, then defects forever
  • Payoff to TFT: $0 + \delta \cdot 1 + \delta^2 \cdot 1 + \dots = \frac{\delta}{1-\delta} = 9$
  • Payoff to a resident D (who almost always meets other defectors): $1 + \delta + \delta^2 + \dots = \frac{1}{1-\delta} = 10$

Result: D does better ($10 > 9$). TFT cannot invade a population of pure defectors (in random matching).

However: If TFT players can find each other (e.g., through clustering, reputation), invasion becomes possible.


Can D invade a population of TFT?

When D is rare:

  • D mostly meets TFT players
  • Against TFT: D defects, TFT punishes, both defect forever
  • Payoff to D: $5 + \delta \cdot 1 + \delta^2 \cdot 1 + \dots = 5 + \frac{\delta}{1-\delta} = 14$
  • Payoff to TFT against TFT: $3 + 3\delta + 3\delta^2 + \dots = \frac{3}{1-\delta} = 30$
  • Payoff to TFT against D: $\frac{\delta}{1-\delta} = 9$

When D is rare (proportion $\epsilon$):

  • TFT payoff: $(1-\epsilon) \cdot 30 + \epsilon \cdot 9 \approx 30$ (for small $\epsilon$)
  • D payoff: $(1-\epsilon) \cdot 14 + \epsilon \cdot 10 \approx 14$ (D earns 10 against another rare D)

Result: TFT does much better than D. D cannot invade a population of TFT.
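Each matchup is a first-round payoff followed by a constant per-round payoff discounted by $\delta$. A quick Python check of all four values with $\delta = 0.9$ (a minimal sketch of the arithmetic, not a simulation):

```python
delta = 0.9

def discounted(first, rest):
    """First-round payoff, then a constant per-round payoff discounted by delta."""
    return first + delta * rest / (1 - delta)

tft_vs_tft = discounted(3, 3)  # mutual cooperation forever: 3/(1-delta) = 30
d_vs_tft = discounted(5, 1)    # one exploitation round, then mutual defection: 14
tft_vs_d = discounted(0, 1)    # one sucker round, then mutual defection: 9
d_vs_d = discounted(1, 1)      # mutual defection forever: 10

# In a TFT population, residents earn 30 while a rare D earns 14: D cannot invade.
# In a D population, residents earn 10 while a rare TFT earns 9: TFT cannot invade.
print(tft_vs_tft, d_vs_tft, tft_vs_d, d_vs_d)
```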


Conclusion:

  • TFT is an ESS (resists invasion by D when common)
  • D is also an ESS (resists invasion by TFT when common)
  • This game has multiple equilibria
  • The outcome depends on initial conditions and population structure
  • Moral: Cooperation can be stable once established, but hard to get started (“cooperation trap”)

This post is part of the Game Theory Series, where we explore the mathematics of strategic decision-making. Evolutionary game theory reveals how cooperation, fairness, and complex strategies can emerge through the simple process of variation and selection.