In 1976, Robert Aumann published a famously short paper that established one of the most striking impossibility results in the theory of rational belief. The theorem states that if two Bayesian agents share a common prior and their posterior probabilities for some event are common knowledge — each knows the other's posterior, each knows the other knows, and so on ad infinitum — then their posteriors must be identical. They cannot agree to disagree.
The result is surprising because the agents may have observed entirely different evidence. One may have conducted extensive experiments; the other may have read a single report. One may know vastly more than the other. None of this matters. If both agents are Bayesian updaters with the same prior, and if their conclusions are mutually known at every level, then those conclusions must coincide. Disagreement, under these conditions, is a logical impossibility — not merely an empirical rarity.
Let P be the agents' common prior over a state space Ω, and let Π₁ and Π₂ be their information partitions. If, for an event A, the posterior probabilities q₁ = P(A | Π₁(ω)) and q₂ = P(A | Π₂(ω)) are common knowledge at state ω, then q₁ = q₂.
In Plain Language
If two Bayesian agents with the same prior both know each other's probability estimate for an event — and know that they know, and know that they know that they know, and so on — then their estimates must be equal.
The Key Concepts
Common Prior
The theorem assumes both agents begin with the same prior distribution P over a space of possible states of the world Ω. Before receiving any private information, their beliefs are identical. This is sometimes called the Harsanyi doctrine: differences in belief arise solely from differences in information, never from differences in initial outlook.
The common prior assumption is the most debated premise of the theorem. Subjectivist Bayesians, following de Finetti and Savage, hold that priors reflect personal judgment and need not agree. Agents with different life experiences, training, or temperaments may reasonably adopt different priors. If priors can differ, the theorem does not apply — and persistent disagreement is entirely rational.
Information Partitions
Each agent's private information is modeled as a partition of the state space. Agent i's partition Πᵢ divides Ω into cells such that, at any state ω, the agent knows which cell ω belongs to but not which specific state within that cell obtains. The cell Πᵢ(ω) represents everything agent i knows at state ω.
At state ω, agent i knows that ω ∈ Πᵢ(ω).
Posterior Probability
Agent i's posterior for event A at state ω:
qᵢ(ω) = P(A | Πᵢ(ω)) = P(A ∩ Πᵢ(ω)) / P(Πᵢ(ω))
Different agents may have very different partitions. A doctor's partition might distinguish among specific diagnoses; a patient's partition might only distinguish "healthy" from "sick." The fineness of the partition reflects the detail of the agent's information.
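To make the partition machinery concrete, here is a minimal Python sketch (the helper names `cell` and `posterior` are illustrative, not from Aumann's paper) that computes qᵢ(ω) from a prior, an event, and a partition, using the four-state setup of the worked example below:

```python
from fractions import Fraction

# Four equally likely states; event A = {1, 2} (the worked example below).
prior = {w: Fraction(1, 4) for w in (1, 2, 3, 4)}
A = {1, 2}

def cell(partition, w):
    """The cell Pi_i(w): the block of the partition containing state w."""
    return next(c for c in partition if w in c)

def posterior(event, partition, w):
    """q_i(w) = P(A | Pi_i(w)) = P(A ∩ Pi_i(w)) / P(Pi_i(w))."""
    c = cell(partition, w)
    return sum(prior[x] for x in c if x in event) / sum(prior[x] for x in c)

pi1 = [frozenset({1, 2}), frozenset({3, 4})]        # agent 1's partition
print(posterior(A, pi1, 1), posterior(A, pi1, 3))   # 1 0
```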
Common Knowledge
An event E is common knowledge at state ω if both agents know E, both know that both know E, both know that both know that both know E, and so on through every finite level of iterated knowledge. Formally, let Kᵢ(E) denote the event "agent i knows E." Then E is common knowledge if E ∩ K₁(E) ∩ K₂(E) ∩ K₁(K₂(E)) ∩ K₂(K₁(E)) ∩ … all hold at ω.
Equivalent Formulation
There exists an event C (a "common knowledge event") such that:
ω ∈ C ⊆ E, and for all ω′ ∈ C:
Π₁(ω′) ⊆ C and Π₂(ω′) ⊆ C
Common knowledge is much stronger than mutual knowledge. Two poker players may each know the rules of the game (mutual knowledge), but common knowledge requires that each knows the other knows, each knows the other knows that they know, and so on. In practice, common knowledge is typically generated by public announcements — events observed simultaneously by all agents.
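The common knowledge event can also be computed mechanically: start from one agent's cell at ω and keep absorbing any overlapping cell of either partition until the result is a union of cells of both. A minimal sketch under the same illustrative conventions (and, since it accepts a list of partitions, it covers the multi-agent case discussed later):

```python
def cell(partition, w):
    return next(c for c in partition if w in c)

def common_knowledge_event(partitions, w):
    """Smallest event containing w that is a union of cells of every
    partition — the cell at w of the 'meet' of the partitions."""
    C = set(cell(partitions[0], w))
    grew = True
    while grew:
        grew = False
        for pi in partitions:
            for c in pi:
                if C & c and not c <= C:  # c overlaps C but sticks out
                    C |= c
                    grew = True
    return frozenset(C)

pi1 = [frozenset({1, 2}), frozenset({3, 4})]
pi2 = [frozenset({1, 3}), frozenset({2, 4})]
print(common_knowledge_event([pi1, pi2], 1))  # the whole space {1, 2, 3, 4}
```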
The Proof
Aumann's proof is elegant in its brevity. The key idea is that common knowledge of the posteriors creates a common knowledge event on which both posteriors must agree.
Suppose at state ω, it is common knowledge that agent 1's posterior is q₁ and agent 2's posterior is q₂. Let C be the common knowledge event — the smallest event containing ω that is a union of cells from both Π₁ and Π₂.
C is a union of cells of Π₁ and a union of cells of Π₂.
Agent 1's posterior is constant on C (common knowledge of the value q₁ means that at every state in C, agent 1's posterior equals q₁): For all ω′ ∈ C: P(A | Π₁(ω′)) = q₁
Since C is a disjoint union of Π₁-cells, C = ⋃{T ∈ Π₁ : T ⊆ C}, and P(A | T) = q₁ on each such cell T:
P(A | C) = P(A ∩ C) / P(C)
         = Σ_{T ⊆ C} P(A ∩ T) / Σ_{T ⊆ C} P(T)
         = Σ_{T ⊆ C} q₁ · P(T) / Σ_{T ⊆ C} P(T)
         = q₁
By the same argument for Agent 2: P(A | C) = q₂
Therefore q₁ = P(A | C) = q₂. ∎
The crucial step is that if agent 1's posterior for A is q₁ throughout the common knowledge event C, then P(A | C) must also equal q₁ — because C is a union of partition cells that all yield the same conditional probability, and the law of total probability forces the aggregate to equal that common value. The same argument applies to agent 2. Since both posteriors equal P(A | C), they equal each other.
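A quick numeric check of that aggregation step, with hypothetical cell masses: whenever every Π₁-cell inside C assigns A the same conditional probability q₁, the ratio defining P(A | C) collapses to q₁.

```python
import random

random.seed(0)
q1 = 0.3                                               # common conditional probability on each cell
masses = [random.uniform(0.1, 1.0) for _ in range(5)]  # P(T) for the cells covering C
p_C = sum(masses)                                      # P(C): the cells are disjoint
p_A_and_C = sum(q1 * m for m in masses)                # P(A ∩ T) = q1 · P(T) on each cell
print(p_A_and_C / p_C)                                 # ≈ 0.3, regardless of the masses
```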
A Worked Example
Two analysts share a common prior over four equally likely states: Ω = {ω₁, ω₂, ω₃, ω₄}, each with probability 1/4. They are interested in event A = {ω₁, ω₂}.
Event A = {ω₁, ω₂}
Agent 1's partition: Π₁ = {{ω₁, ω₂}, {ω₃, ω₄}}
Agent 2's partition: Π₂ = {{ω₁, ω₃}, {ω₂, ω₄}}
Posteriors at Each State
At ω₁: q₁ = P(A|{ω₁,ω₂}) = 1    q₂ = P(A|{ω₁,ω₃}) = 1/2
At ω₂: q₁ = P(A|{ω₁,ω₂}) = 1    q₂ = P(A|{ω₂,ω₄}) = 1/2
At ω₃: q₁ = P(A|{ω₃,ω₄}) = 0    q₂ = P(A|{ω₁,ω₃}) = 1/2
At ω₄: q₁ = P(A|{ω₃,ω₄}) = 0    q₂ = P(A|{ω₂,ω₄}) = 1/2
Agent 1 can distinguish perfectly between A and its complement. Agent 2 always gets posterior 1/2 regardless of the true state. Can their posteriors be common knowledge?
At ω₁: Agent 1 knows q₁ = 1. Agent 2 knows q₂ = 1/2. But agent 2 is in cell {ω₁, ω₃}, and agent 1's posterior differs across these states (1 at ω₁, 0 at ω₃). So agent 2 does not know agent 1's posterior. The posteriors are not common knowledge — and indeed q₁ ≠ q₂, consistent with the theorem.
If the agents were to communicate — agent 1 announcing "my posterior is 1" — agent 2 would learn that the state is not ω₃ (where agent 1's posterior is 0). Agent 2 would update: now knowing ω ∈ {ω₁}, agent 2's posterior becomes P(A | {ω₁}) = 1. After communication, they agree.
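This announcement step can be replayed in code. In the sketch below (helper names illustrative), agent 2 keeps only the states in its cell at which agent 1 would have announced exactly that posterior:

```python
from fractions import Fraction

prior = {w: Fraction(1, 4) for w in (1, 2, 3, 4)}
A = {1, 2}
pi1 = [frozenset({1, 2}), frozenset({3, 4})]
pi2 = [frozenset({1, 3}), frozenset({2, 4})]

def cell(partition, w):
    return next(c for c in partition if w in c)

def post(info):
    """P(A | info) for an arbitrary nonempty information set."""
    return sum(prior[x] for x in info if x in A) / sum(prior[x] for x in info)

true_state = 1
q1 = post(cell(pi1, true_state))                   # agent 1 announces 1
# Agent 2 discards states in its cell where agent 1 would announce differently.
info2 = {w for w in cell(pi2, true_state) if post(cell(pi1, w)) == q1}
print(info2, post(info2))                          # {1} 1 — agreement after one announcement
```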
Dynamic Agreement: Geanakoplos and Polemarchakis
In 1982, Geanakoplos and Polemarchakis extended Aumann's static result to a dynamic setting. They showed that if two agents with a common prior take turns announcing their posterior probabilities, the process must converge to agreement in finitely many steps (when the state space is finite).
Round 1: Agent 1 announces q₁⁰. Agent 2 updates, announces q₂¹.
Round 2: Agent 2's announcement refines Agent 1's information. Agent 1 updates, announces q₁².
…
Round n: Both posteriors converge to the same value q*. Agreement is reached.
Each announcement is informative: hearing someone's posterior tells you something about their partition cell, which refines your own information. With a finite state space, there are only finitely many possible refinements, so the process terminates. At termination, the posteriors are common knowledge — and by Aumann's theorem, they must be equal.
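A minimal sketch of this alternating-announcement process on the worked example above (the function `exchange` and its structure are illustrative, not taken from the paper): a public set of states consistent with all announcements so far shrinks each round, and each announced posterior is conditioned on the speaker's cell intersected with that public set.

```python
from fractions import Fraction

Omega = (1, 2, 3, 4)
prior = {w: Fraction(1, 4) for w in Omega}
A = {1, 2}
pi = {1: [frozenset({1, 2}), frozenset({3, 4})],
      2: [frozenset({1, 3}), frozenset({2, 4})]}

def cell(partition, w):
    return next(c for c in partition if w in c)

def post(info):
    return sum(prior[x] for x in info if x in A) / sum(prior[x] for x in info)

def exchange(true_state, rounds=4):
    public = set(Omega)  # states consistent with every announcement so far
    for r in range(rounds):
        i = 1 + r % 2
        q = post(cell(pi[i], true_state) & public)
        # Listeners keep only states where agent i would have announced q.
        public = {w for w in public if post(cell(pi[i], w) & public) == q}
        print(f"round {r + 1}: agent {i} announces {q}; consistent states {sorted(public)}")

exchange(true_state=1)  # both posteriors converge to 1 within two rounds
```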
This dynamic version makes the theorem more practically relevant. Aumann's original result might seem like a mere curiosity — when are posteriors ever common knowledge? The Geanakoplos-Polemarchakis result shows that the simple act of communicating beliefs drives agents toward the conditions under which agreement is guaranteed.
Implications and Interpretations
Rational Disagreement Is Impossible (Under the Premises)
The theorem's most provocative implication is that rational agents cannot disagree, at least not knowingly. If you and I are both Bayesian, share a common prior, and I tell you my probability for rain tomorrow is 70% while you tell me yours is 40%, at least one of three things must be true: (1) we do not share a common prior, (2) our posteriors are not truly common knowledge, or (3) at least one of us is not updating correctly.
This has deep consequences for how we interpret observed disagreements. When experts disagree — economists on inflation forecasts, climate scientists on sensitivity estimates, physicians on treatment efficacy — the theorem suggests that the disagreement must stem from different priors, different information that has not been fully communicated, or failures of rationality. It cannot be a stable equilibrium among ideal Bayesian agents with common priors.
Speculative Trade
In financial economics, the theorem implies the no-trade theorem (Milgrom and Stokey, 1982). If two traders have a common prior and the current allocation is Pareto efficient, then no mutually agreeable trade can occur solely on the basis of private information. If agent 1 wants to sell an asset, agent 2 should infer that agent 1 has bad news — and refuse to buy. Rational speculation is impossible under common priors.
This result is a challenge for financial theory, since massive volumes of speculative trade occur daily. The standard resolutions involve heterogeneous priors (traders genuinely disagree about fundamentals), institutional constraints (index rebalancing, regulatory requirements), or bounded rationality.
Epistemic Logic and Computer Science
Aumann's formalization of common knowledge using partitions became a foundational tool in computer science and distributed systems. The concept of common knowledge is essential for understanding coordination problems: processes that must act in concert need not just mutual knowledge but common knowledge of the relevant facts.
Two generals must attack simultaneously or not at all. They can send messengers, but messengers may be captured. General A sends: "Attack at dawn." General B receives it and sends confirmation. But B doesn't know if the confirmation arrived. So A sends a confirmation of the confirmation. But A doesn't know if that was received…
No finite number of messages can establish common knowledge of the attack plan when communication is unreliable. This impossibility result — closely related to Aumann's framework — explains why distributed consensus is fundamentally hard.
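A toy model (entirely illustrative) makes the depth bookkeeping concrete: each delivered message adds one level of "knows that the other knows …", and a single lost message caps the chain, so no finite exchange reaches the infinite depth that common knowledge requires.

```python
def knowledge_depth(deliveries):
    """Levels of interleaved knowledge of the plan after alternating
    messages; deliveries[k] is True if the (k+1)-th message arrived."""
    depth = 0
    for delivered in deliveries:
        if not delivered:
            break            # one lost message cuts the chain here
        depth += 1           # e.g. depth 2 = "A knows that B knows the plan"
    return depth             # always finite — never common knowledge

print(knowledge_depth([True, True, True]))         # 3
print(knowledge_depth([True, True, False, True]))  # 2 — later messages cannot repair the chain
```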
Challenges to the Common Prior Assumption
The common prior assumption is the theorem's most vulnerable premise, and debates about its reasonableness define much of the subsequent literature.
The Case For
Harsanyi (1967–68) argued that rational agents should have the same prior because probability is objective — it represents the "right" degree of belief given no information. Differences in posterior beliefs should arise only from differences in evidence, not from fundamental disagreements about how to weigh possibilities in the absence of evidence. On this view, the common prior is a rationality requirement, and agents who violate it are making an error.
The Case Against
Subjectivists argue that the prior encodes personal judgment and that two rational agents can legitimately disagree about prior probabilities. A physicist and a biologist may assign different priors to a hypothesis about quantum effects in neural processes — not because one has more evidence, but because they bring different theoretical commitments. Forcing agreement in priors is tantamount to requiring agreement in conclusions before any data are observed, which seems too strong.
Aumann himself acknowledged this tension but argued that the common prior assumption is reasonable in many game-theoretic contexts, where agents are drawn from a common population and face similar environments.
Extensions and Generalizations
Multiple Agents
The theorem extends immediately to any finite number of agents. If n agents share a common prior and their posteriors for an event are common knowledge, all n posteriors must be equal.
Infinite State Spaces
The original result was stated for finite state spaces. Extensions to infinite spaces require more careful measure-theoretic treatment, but the core conclusion holds under standard regularity conditions. The partition-based framework generalizes to σ-algebras, and common knowledge is defined through the intersection of decreasing sequences of σ-algebras.
Approximate Agreement
When posteriors are "approximately" common knowledge — each agent has a good but imperfect estimate of the other's posterior — approximate agreement results hold. If both agents know the other's posterior to within ε, their posteriors cannot differ by more than a bound that depends on ε and the structure of the problem.
Agreement Without Common Priors
If the common prior assumption is dropped, disagreement can persist — but not without limit. If agents' priors are "close" in some metric, their posteriors given common knowledge must also be close. The further apart the priors, the wider the possible disagreement.
Historical Context
1967–68: John Harsanyi publishes "Games with Incomplete Information," introducing the common prior assumption (the Harsanyi doctrine) and transforming game theory. He would share the 1994 Nobel Prize in Economics for this work.
1976: Robert Aumann publishes "Agreeing to Disagree" in The Annals of Statistics — a famously brief paper that establishes the agreement theorem and introduces the partition model of knowledge to economics.
1982: Geanakoplos and Polemarchakis prove that Bayesian agents who communicate their posteriors iteratively must reach agreement in finite time. Milgrom and Stokey derive the no-trade theorem as a corollary.
2000s: Scott Aaronson and Robin Hanson independently popularize the theorem's implications for rational discourse. Hanson argues that persistent disagreement among humans is strong evidence of irrationality or differing priors.
2005: Robert Aumann receives the Nobel Memorial Prize in Economic Sciences (shared with Thomas Schelling) "for having enhanced our understanding of conflict and cooperation through game-theory analysis." The agreement theorem is cited prominently.
Since then: The theorem influences debates in epistemology, philosophy of science, prediction markets, and the rationality community. It serves as a benchmark against which observed disagreement is measured and diagnosed.
Connection to Bayesian Statistics
Aumann's theorem is a deep structural result about Bayesian reasoning. It shows that the Bayesian updating rule has a powerful built-in consistency property: agents who update from the same prior cannot reach different conclusions if their conclusions are transparent to each other.
Agent 1 observes evidence E₁, computes P(A|E₁)
Agent 2 observes evidence E₂, computes P(A|E₂)
If P(A|E₁) and P(A|E₂) are common knowledge → P(A|E₁) = P(A|E₂)
Intuition
Learning someone's posterior is itself evidence. Bayesian agents who know each other's posteriors have, in effect, pooled their information. And pooled information yields a unique posterior.
This connects to a broader theme in Bayesian statistics: the merging of opinions. Even when priors differ, posteriors converge as data accumulate (under regularity conditions). Aumann's theorem is the extreme version: with a common prior, convergence is instantaneous once posteriors become common knowledge, regardless of the amount of data each agent has observed.
The theorem also illuminates the role of communication in statistical reasoning. In collaborative science, sharing not just data but conclusions (posterior beliefs) should drive convergence — provided the scientists share enough common ground in their priors. When convergence fails, the theorem directs attention to the source of disagreement: is it the prior, the evidence, or the updating process?
"If two people have the same prior, and their posteriors for a given event are common knowledge, then these posteriors must be equal. This is so even though they may base their posteriors on quite different information. In short, people with the same priors cannot agree to disagree." — Robert Aumann, "Agreeing to Disagree" (1976)
The theorem does not claim that disagreement never occurs. It identifies the precise conditions under which it cannot: common priors and common knowledge of posteriors. In practice, at least one condition almost always fails. Priors differ because people have different backgrounds and experiences. Full common knowledge is nearly impossible to achieve — we rarely know exactly what others believe, let alone know that they know what we believe, and so on. The theorem's value is not as a description of reality but as a benchmark that reveals why disagreements persist and what it would take to resolve them.
Example: Two Stock Analysts Disagree on a Merger
Two financial analysts at the same firm, Alice and Bob, are evaluating whether Company X will acquire Company Y. They share the same training, the same market data, and the same economic models — in Aumann's terms, they share a common prior.
On Monday, Alice learns from a regulatory filing that Company X has received antitrust clearance. She updates her probability of the merger to 85%. On Tuesday, Bob independently learns from an insider source that Company X's board has approved the deal. He updates his probability to 90%.
Before They Talk
Alice and Bob hold different posteriors (85% and 90%) because each has private information the other lacks. This is perfectly consistent with Aumann's theorem — the theorem requires common knowledge of posteriors, not just having posteriors.
After They Talk
At the Wednesday team meeting, Alice announces her 85% and Bob announces his 90%. Now they both know each other's posteriors. Since they share a common prior and each knows the other is a rational Bayesian updater, Aumann's theorem kicks in:
once those posteriors are common knowledge, they must be equal:
P_Alice(Merger | all information) = P_Bob(Merger | all information)
Learning that Bob's probability is 90% tells Alice that Bob has evidence she lacks — evidence strong enough to push a shared-prior reasoner to 90%. She must update upward. Similarly, Bob learns Alice's evidence only pushed her to 85%, which tempers his own estimate slightly. After exchanging posteriors (and reasoning through what those posteriors imply about each other's evidence), they converge to a single number — say, 88%.
In practice, financial analysts don't share common priors. One may weigh regulatory risk more heavily; another may have a different mental model of board dynamics. Aumann's theorem tells us that persistent disagreement after full disclosure of beliefs implies that the analysts are either irrational or — far more likely — operating from genuinely different priors shaped by different experiences and frameworks. The theorem doesn't eliminate disagreement; it diagnoses its source.