Conditional probability


In probability theory, conditional probability is a measure of the probability of an event occurring given that another event has occurred. If the event of interest is A and the event B is known or assumed to have occurred, "the conditional probability of A given B", or "the probability of A under the condition B", is usually written as P(A | B), or sometimes P_B(A) or P(A/B). For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person is sick, then they are much more likely to be coughing. The conditional probability that someone unwell is coughing might be 75%, in which case: P(Cough) = 5%; P(Cough | Sick) = 75%.
The concept of conditional probability is one of the most fundamental and one of the most important in probability theory. But conditional probabilities can be quite slippery and require careful interpretation. For example, there need not be a causal relationship between A and B, and they don't have to occur simultaneously.
P(A | B) may or may not be equal to P(A), the unconditional probability of A. If P(A | B) = P(A), then events A and B are said to be "independent": in such a case, knowledge about either event does not give information on the other. P(A | B) typically differs from P(B | A). For example, if a person has dengue, they might have a 90% chance of testing positive for dengue. In this case what is being measured is that if event B (having dengue) has occurred, the probability of A (testing positive) given that B occurred is 90%: that is, P(A | B) = 90%. Alternatively, if a person tests positive for dengue, they may have only a 15% chance of actually having this rare disease, because the false positive rate for the test may be high. In this case what is being measured is the probability of the event B (having dengue) given that the event A (testing positive) has occurred: P(B | A) = 15%. Falsely equating the two probabilities causes various errors of reasoning such as the base rate fallacy. Conditional probabilities can be reversed using Bayes' theorem.
Conditional probabilities can be displayed in a conditional probability table.

Definition

Conditioning on an event

Kolmogorov definition

Given two events A and B, from the sigma-field of a probability space, with the unconditional probability of B being greater than zero, P(B) > 0, the conditional probability of A given B is defined as the quotient of the probability of the joint of events A and B, and the probability of B:

  P(A | B) = P(A ∩ B) / P(B),

where P(A ∩ B) is the probability that both events A and B occur. This may be visualized as restricting the sample space to situations in which B occurs. The logic behind this equation is that if the possible outcomes for A and B are restricted to those in which B occurs, this set serves as the new sample space.
Note that this is a definition and not a theoretical result. We simply denote the quantity P(A ∩ B) / P(B) as P(A | B) and call it the conditional probability of A given B.
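As an illustrative sketch, the quotient definition can be approximated empirically by restricting simulated outcomes to those in which the conditioning event occurs. The Python snippet below reuses the cough/sickness example from the lead; the 10% sickness rate is an assumed figure added only to make the simulation concrete.

```python
import random

random.seed(0)

# Assumed joint model for illustration: 10% of people are sick; sick people
# cough with probability 0.75, healthy people with probability 0.05.
def sample_person():
    sick = random.random() < 0.10
    cough = random.random() < (0.75 if sick else 0.05)
    return sick, cough

samples = [sample_person() for _ in range(100_000)]

# P(A | B) is estimated by restricting the sample space to outcomes in which B
# (being sick) occurs and taking the fraction of those in which A (coughing) occurs.
n_B  = sum(1 for sick, cough in samples if sick)
n_AB = sum(1 for sick, cough in samples if sick and cough)
print("estimated P(cough | sick) =", n_AB / n_B)   # close to 0.75
```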

As an axiom of probability

Some authors, such as de Finetti, prefer to introduce conditional probability as an axiom of probability:

  P(A ∩ B) = P(A | B) P(B).

Although mathematically equivalent, this may be preferred philosophically; under major probability interpretations, such as the subjective theory, conditional probability is considered a primitive entity. Further, this "multiplication axiom" introduces a symmetry with the summation axiom for mutually exclusive events:

  P(A ∪ B) = P(A) + P(B).

As the probability of a conditional event

Conditional probability can be defined as the probability of a conditional event A_B. Assuming that the experiment underlying the events A and B is repeated, the Goodman–Nguyen–van Fraassen conditional event can be defined as

  A_B = ⋃_{i ≥ 1} ( (⋂_{j < i} B̄_j) ∩ A_i ∩ B_i ),

where A_i and B_i denote the occurrence of A and B in the i-th repetition; in words, A_B occurs if A occurs in the first repetition in which B occurs. It can be shown that

  P(A_B) = P(A ∩ B) / P(B),

which meets the Kolmogorov definition of conditional probability. Note that this equation is a theoretical result and not a definition.
The definition via conditional events can be understood directly in terms of the Kolmogorov axioms and is particularly close to the Kolmogorov interpretation of probability in terms of experimental data. For example, conditional events can be repeated themselves, leading to a generalized notion of conditional event A_B^(n). It can be shown that the sequence

  ( A_B^(n) ), n = 1, 2, …

is i.i.d., which yields a strong law of large numbers for conditional probability:

  (1/n) Σ_{k=1}^{n} 1_{A_B^(k)} → P(A | B)  almost surely as n → ∞.
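The frequency flavour of this result can be illustrated with a small simulation: among the repetitions of an experiment in which B occurs, the running relative frequency of A settles near P(A | B). The die-rolling experiment below is an assumed example chosen only for illustration (exactly, P(A | B) = 2/3 here).

```python
import random

random.seed(1)

# Assumed experiment: roll a fair die; B = "the roll is even", A = "the roll is
# at least 4".  Exact value: P(A | B) = P({4, 6}) / P({2, 4, 6}) = 2/3.
hits = 0
trials_with_B = 0
for _ in range(200_000):
    roll = random.randint(1, 6)
    if roll % 2 == 0:        # the condition event B occurred in this repetition
        trials_with_B += 1
        if roll >= 4:        # the event A occurred as well
            hits += 1

# Relative frequency of A among the repetitions in which B occurred.
print(hits / trials_with_B)   # close to 2/3
```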

Measure-theoretic definition

If P(B) = 0, then according to the simple definition, P(A | B) is undefined. However, it is possible to define a conditional probability with respect to a σ-algebra of such events.
For example, if X and Y are non-degenerate and jointly continuous random variables with density f_{X,Y}(x, y), then, if B has positive measure,

  P(X ∈ A | Y ∈ B) = ( ∫_{y ∈ B} ∫_{x ∈ A} f_{X,Y}(x, y) dx dy ) / ( ∫_{y ∈ B} ∫_{x ∈ ℝ} f_{X,Y}(x, y) dx dy ).

The case where B has zero measure is problematic. For the case that B = {y_0}, representing a single point, the conditional probability could be defined as

  P(X ∈ A | Y = y_0) = ( ∫_{x ∈ A} f_{X,Y}(x, y_0) dx ) / ( ∫_{x ∈ ℝ} f_{X,Y}(x, y_0) dx );

however, this approach leads to the Borel–Kolmogorov paradox. The more general case of zero measure is even more problematic, as can be seen by noting that the limit, as all δy_i approach zero, of

  P( X ∈ A | Y ∈ ⋃_i [ y_i, y_i + δy_i ] )

depends on their relationship as they approach zero. See conditional expectation for more information.
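For the positive-measure case, the double-integral formula can be evaluated numerically. The sketch below uses an assumed joint density f(x, y) = x + y on the unit square (which integrates to 1) with A = {x < 0.5} and B = {y < 0.5}; it uses a crude midpoint rule and is only meant to illustrate the formula.

```python
# Assumed joint density for illustration: f(x, y) = x + y on the unit square.
# Exact answer for P(X in A | Y in B) with these sets is 1/3.
def f(x, y):
    return x + y

def integrate_2d(x_lo, x_hi, y_lo, y_hi, n=400):
    """Midpoint-rule approximation of the double integral of f over a rectangle."""
    dx = (x_hi - x_lo) / n
    dy = (y_hi - y_lo) / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += f(x_lo + (i + 0.5) * dx, y_lo + (j + 0.5) * dy)
    return total * dx * dy

p_joint    = integrate_2d(0.0, 0.5, 0.0, 0.5)   # P(X in A and Y in B)
p_marginal = integrate_2d(0.0, 1.0, 0.0, 0.5)   # P(Y in B)
print(p_joint / p_marginal)                      # approximately 1/3
```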

Conditioning on a random variable

Let X be a random variable; we assume for the sake of presentation that X is finite, that is, X takes on only finitely many values x. Let A be an event. The conditional probability of A given X is defined as the random variable, written P(A | X), that takes on the value

  P(A | X = x)

whenever

  X = x.

More formally,

  P(A | X)(ω) = P(A | X = X(ω)).

The conditional probability P(A | X) is a function of X: e.g., if the function g is defined as

  g(x) = P(A | X = x),

then

  P(A | X) = g(X) = g ∘ X.

Note that P(A | X) and X are now both random variables. From the law of total probability, the expected value of P(A | X) is equal to the unconditional probability of A.
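A minimal sketch of this construction for a finite X: with two fair dice, take X to be the first die and A the (assumed, illustrative) event that the sum is at least 10. The function g below is P(A | X = x), and averaging g(X) over the distribution of X recovers the unconditional P(A).

```python
from fractions import Fraction
from itertools import product

# Assumed illustration: X is the first of two fair dice and A is the event
# "the sum of the two dice is at least 10".
outcomes = list(product(range(1, 7), repeat=2))   # 36 equally likely pairs

def g(x):
    """g(x) = P(A | X = x): fraction of outcomes with first die x whose sum is >= 10."""
    given = [(d1, d2) for d1, d2 in outcomes if d1 == x]
    favourable = [(d1, d2) for d1, d2 in given if d1 + d2 >= 10]
    return Fraction(len(favourable), len(given))

# P(A | X) is the random variable g(X).  By the law of total probability, its
# expected value equals the unconditional probability of A.
expected_value = sum(Fraction(1, 6) * g(x) for x in range(1, 7))
unconditional  = Fraction(sum(1 for d1, d2 in outcomes if d1 + d2 >= 10), 36)
print(expected_value, unconditional)   # both equal 1/6
```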

Partial conditional probability

The partial conditional probability
is about the probability of event
given that each of the condition events
has occurred to a degree
that might be different from 100%. Frequentistically, partial conditional probability makes sense, if the conditions are tested in experiment repetitions of appropriate length
. Such
-bounded partial conditional probability can be defined as the conditionally expected average occurrence of event
in testbeds of length
that adhere to all of the probability specifications
, i.e.:
Based on that, partial conditional probability can be defined as
where
Jeffrey conditionalization
is a special case of partial conditional probability in which the condition events must form a partition:
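A minimal sketch of the Jeffrey conditionalization formula; the degrees b_i and the conditional probabilities P(A | B_i) below are assumed values chosen only for illustration.

```python
from fractions import Fraction

# Assumed inputs: three condition events B1, B2, B3 forming a partition, now held
# to degrees b1, b2, b3 (which must sum to 1), with assumed values of P(A | Bi).
degrees     = [Fraction(1, 2), Fraction(3, 10), Fraction(1, 5)]
p_A_given_B = [Fraction(9, 10), Fraction(1, 2), Fraction(1, 10)]

# Jeffrey conditionalization: P(A | B1 ≡ b1, …, Bm ≡ bm) = sum_i b_i * P(A | B_i)
p_A_updated = sum(b * p for b, p in zip(degrees, p_A_given_B))
print(p_A_updated)   # 31/50
```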

Example

Suppose that somebody secretly rolls two fair six-sided dice, and we wish to compute the probability that the face-up value of the first one is 2, given the information that their sum is no greater than 5.
Probability that D1 = 2
Table 1 shows the sample space of 36 combinations of rolled values of the two dice, each of which occurs with probability 1/36, with the numbers displayed in the red and dark gray cells being D1 + D2.
D1 = 2 in exactly 6 of the 36 outcomes; thus P(D1 = 2) = 6/36 = 1/6.
Probability that D1 + D2 ≤ 5
Table 2 shows that D1 + D2 ≤ 5 for exactly 10 of the 36 outcomes; thus P(D1 + D2 ≤ 5) = 10/36.
Probability that D1 = 2 given that D1 + D2 ≤ 5
Table 3 shows that for 3 of these 10 outcomes, D1 = 2.
Thus, the conditional probability P(D1 = 2 | D1 + D2 ≤ 5) = 3/10 = 0.3.
Here, in the earlier notation for the definition of conditional probability, the conditioning event B is that D1 + D2 ≤ 5, and the event A is D1 = 2. We have

  P(A | B) = P(A ∩ B) / P(B) = (3/36) / (10/36) = 3/10,

as seen in the table.
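The same numbers can be checked by direct enumeration; the short sketch below lists the 36 equally likely outcomes and counts those in B and in A ∩ B.

```python
from fractions import Fraction
from itertools import product

# Direct enumeration of the example: A = "first die is 2", B = "sum is at most 5".
rolls   = list(product(range(1, 7), repeat=2))             # 36 equally likely outcomes
B       = [(d1, d2) for d1, d2 in rolls if d1 + d2 <= 5]
A_and_B = [(d1, d2) for d1, d2 in B if d1 == 2]

print(Fraction(len(B), len(rolls)))         # P(B)     = 5/18  (i.e. 10/36)
print(Fraction(len(A_and_B), len(rolls)))   # P(A ∩ B) = 1/12  (i.e. 3/36)
print(Fraction(len(A_and_B), len(B)))       # P(A | B) = 3/10
```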

Use in inference

In statistical inference, the conditional probability is an update of the probability of an event based on new information. Incorporating the new information that an event B has occurred amounts to restricting the sample space to the outcomes in B and rescaling their probabilities so that they sum to one:

  P(A | B) = P(A ∩ B) / P(B).

This approach results in a probability measure that is consistent with the original probability measure and satisfies all the Kolmogorov axioms. This conditional probability measure also could have resulted by assuming that the relative magnitude of the probability of A with respect to the whole sample space is preserved with respect to B.
The wording "evidence" or "information" is generally used in the Bayesian interpretation of probability. The conditioning event is interpreted as evidence for the conditioned event. That is, P is the probability of A before accounting for evidence E, and P is the probability of A after having accounted for evidence E or after having updated P. This is consistent with the frequentist interpretation, which is the first definition given above.

Statistical independence

Events A and B are defined to be statistically independent if

  P(A ∩ B) = P(A) P(B).

If P(B) is not zero, then this is equivalent to the statement that

  P(A | B) = P(A).

Similarly, if P(A) is not zero, then

  P(B | A) = P(B)

is also equivalent. Although the derived forms may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined, and the preferred definition is symmetrical in A and B.
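A small enumeration sketch of these equivalences, using the assumed events "the first of two fair dice is even" and "the sum is 7" (which happen to be independent):

```python
from fractions import Fraction
from itertools import product

# Assumed events on two fair dice: A = "the first die is even", B = "the sum is 7".
rolls = list(product(range(1, 7), repeat=2))

def prob(event):
    return Fraction(sum(1 for r in rolls if event(r)), len(rolls))

A       = lambda r: r[0] % 2 == 0
B       = lambda r: r[0] + r[1] == 7
A_and_B = lambda r: A(r) and B(r)

print(prob(A_and_B) == prob(A) * prob(B))   # True: the defining (symmetric) condition
print(prob(A_and_B) / prob(B) == prob(A))   # True: P(A | B) = P(A)
print(prob(A_and_B) / prob(A) == prob(B))   # True: P(B | A) = P(B)
```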
Independent events vs. mutually exclusive events
The concepts of mutually independent events and mutually exclusive events are separate and distinct. The following table contrasts results for the two cases.
                 If statistically independent    If mutually exclusive
P(A | B) =       P(A)                            0
P(B | A) =       P(B)                            0
P(A ∩ B) =       P(A) P(B)                       0

In fact, mutually exclusive events cannot be statistically independent (unless at least one of them has probability zero), since knowing that one occurs gives information about the other (specifically, that the other certainly does not occur).

Common fallacies

Assuming conditional probability is of similar size to its inverse

In general, it cannot be assumed that P(A | B) ≈ P(B | A). This can be an insidious error, even for those who are highly conversant with statistics. The relationship between P(A | B) and P(B | A) is given by Bayes' theorem:

  P(B | A) = P(A | B) P(B) / P(A).

That is, P(A | B) ≈ P(B | A) only if P(B)/P(A) ≈ 1, or equivalently, P(A) ≈ P(B).
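A numerical sketch of how sharply the two directions can differ, consistent with the dengue figures in the lead; the 1% prevalence and 5% false-positive rate are assumptions chosen only so the arithmetic works out to roughly 15%.

```python
# The 90% figure P(positive | dengue) matches the lead; the 1% prevalence and the
# 5% false-positive rate are assumptions chosen only for illustration.
p_dengue            = 0.01    # P(dengue)
p_pos_given_dengue  = 0.90    # P(positive | dengue)
p_pos_given_healthy = 0.05    # P(positive | no dengue)

# Law of total probability, then Bayes' theorem.
p_pos = p_pos_given_dengue * p_dengue + p_pos_given_healthy * (1 - p_dengue)
p_dengue_given_pos = p_pos_given_dengue * p_dengue / p_pos

print(round(p_dengue_given_pos, 3))   # about 0.154, i.e. roughly 15%
```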

Assuming marginal and conditional probabilities are of similar size

In general, it cannot be assumed that P(A) ≈ P(A | B). These probabilities are linked through the law of total probability:

  P(A) = Σ_n P(A ∩ B_n) = Σ_n P(A | B_n) P(B_n),

where the events (B_n) form a countable partition of the sample space.
This fallacy may arise through selection bias. For example, in the context of a medical claim, let S_C be the event that a sequela S occurs as a consequence of circumstance C. Let H be the event that an individual seeks medical help. Suppose that in most cases, C does not cause S, so P(S_C) is low. Suppose also that medical attention is only sought if S has occurred due to C. From experience of patients, a doctor may therefore erroneously conclude that P(S_C) is high. The actual probability observed by the doctor is P(S_C | H).
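A numerical sketch of this selection effect (all figures are assumptions chosen for illustration): within the population having circumstance C, the sequela is rare, but conditioning on help-seeking makes it look common.

```python
# All figures below are assumptions, stated within the population having circumstance C.
p_S             = 0.02    # P(S_C): the sequela is rare
p_H_given_S     = 0.90    # those with the sequela almost always seek help
p_H_given_not_S = 0.01    # those without it hardly ever do

# Probability the doctor actually observes: P(S_C | H), by Bayes' theorem.
p_H = p_H_given_S * p_S + p_H_given_not_S * (1 - p_S)
p_S_given_H = p_H_given_S * p_S / p_H

print(round(p_S, 3), round(p_S_given_H, 3))   # 0.02 versus about 0.648
```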

Over- or under-weighting priors

Not taking prior probability into account, partially or completely, is called base rate neglect. The reverse, insufficient adjustment from the prior probability, is conservatism.

Formal derivation

Formally, P(A | B) is defined as the probability of A according to a new probability function on the sample space, such that outcomes not in B have probability 0 and that it is consistent with all original probability measures.
Let Ω be a sample space with elementary events {ω}, and let P be the probability measure with respect to the σ-algebra of Ω. Suppose we are told the event B ⊆ Ω has occurred. A new probability distribution, denoted P(· | B), is to be assigned on Ω to reflect this. All events that are not in B will have null probability in the new distribution. For events in B, two conditions must be met: the probability of B is one, and the relative magnitudes of the probabilities must be preserved. The former is required by the axioms of probability, and the latter stems from the fact that the new probability measure has to be the analog of P in which the probability of B is one, and every event that is not in B, therefore, has a null probability. Hence, for some scale factor α, the new distribution must satisfy:

  1. ω ∈ B : P(ω | B) = α P(ω)
  2. ω ∉ B : P(ω | B) = 0
  3. Σ_{ω ∈ Ω} P(ω | B) = 1.

Substituting 1 and 2 into 3 to select α:

  1 = Σ_{ω ∈ Ω} P(ω | B) = Σ_{ω ∈ B} α P(ω) + Σ_{ω ∉ B} 0 = α Σ_{ω ∈ B} P(ω) = α P(B),

so that α = 1 / P(B). So the new probability distribution is

  1. ω ∈ B : P(ω | B) = P(ω) / P(B)
  2. ω ∉ B : P(ω | B) = 0.

Now for a general event A,

  P(A | B) = Σ_{ω ∈ A ∩ B} P(ω | B) = Σ_{ω ∈ A ∩ B} P(ω) / P(B) = P(A ∩ B) / P(B).
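The derivation can be verified on a small finite example; the four-outcome sample space below is an assumption chosen only for illustration.

```python
from fractions import Fraction

# Assumed four-outcome sample space with probabilities summing to 1.
P = {"w1": Fraction(1, 2), "w2": Fraction(1, 4), "w3": Fraction(1, 8), "w4": Fraction(1, 8)}
B = {"w1", "w2"}
A = {"w2", "w3"}

p_B = sum(P[w] for w in B)
alpha = 1 / p_B                                   # the scale factor from the derivation

# New distribution: alpha * P(w) inside B, zero outside B.
P_given_B = {w: (alpha * p if w in B else Fraction(0)) for w, p in P.items()}
assert sum(P_given_B.values()) == 1               # it is again a probability distribution

p_A_given_B = sum(P_given_B[w] for w in A)
p_A_and_B   = sum(P[w] for w in A & B)
print(p_A_given_B, p_A_and_B / p_B)               # both equal 1/3
```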