# On Shannon's perfect secrecy

Claude Shannon defined perfect secrecy to mean an encryption scheme whose ciphertexts leak no information whatsoever about the corresponding plaintexts. Or, equivalently, an encryption scheme that remains secure even in the presence of an adversary with unlimited computational power. This notion is formalised with probabilities, but to understand that, one must first understand how (and why) probabilities get mixed in with crypto in the first place.

The basic principle, first put forth by Kerckhoffs, is that the safety of the cryptosystem rests solely on the key. That is, you have a plaintext message, you choose a key at “random” (more on this later), and obtain a ciphertext message; the only way of obtaining the original plaintext from the ciphertext is to decrypt it with the same key.1

First, in any “real world” scenario, the plaintexts are not all equally likely: some will be more likely than others (e.g. considering the set of all strings over the Latin alphabet, the plaintexts that correspond to (for instance) valid English will be more likely than those that are just a random sequence of characters). This means that there exists a probability distribution over the set $\mathcal{M}$ of all possible plaintexts, which very likely is not the uniform distribution. Shannon calls this the a priori probability of a plaintext, and one must assume that this plaintext probability distribution is known to the cryptanalyst (remember we assume that the only thing the cryptanalyst does not know is the key—cf. above).

Now formally, a cryptosystem consists of three algorithms, $(Gen, Enc, Dec)$. The first generates a random key, the second is the encryption algorithm that takes a plaintext and a key and outputs a ciphertext, and the third is the decryption algorithm that takes a ciphertext and a key and outputs a plaintext. A cryptosystem is consistent if for all $m$ and $k$, $Dec(Enc(m, k), k)=m$ with probability $1$. The algorithm $Gen$ also implicitly defines a probability distribution over the set of possible keys, $\mathcal{K}$, and furthermore, one can always assume that that distribution is the uniform one.2 The probability distributions over $\mathcal{M}$ and $\mathcal{K}$, together with the encryption algorithm $Enc$, specify (again implicitly) the probability distribution over the set of possible ciphertexts, $\mathcal{E}$. The ensuing discussion assumes that the encryption algorithm is fixed.
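As a concrete (if trivial) illustration of the $(Gen, Enc, Dec)$ triple and the consistency requirement, here is a one-bit XOR cipher sketched in Python. The cipher and all names are my own illustrative choices, not anything from Shannon's paper:

```python
import secrets

# A toy (Gen, Enc, Dec) triple: one-bit messages and keys, XOR encryption.
# Purely illustrative; any consistent scheme would do.
def gen():
    return secrets.randbelow(2)   # key chosen uniformly from {0, 1}

def enc(m, k):
    return m ^ k                  # ciphertext

def dec(c, k):
    return c ^ k                  # XOR is its own inverse

# Consistency: Dec(Enc(m, k), k) = m for all m and k, with probability 1.
assert all(dec(enc(m, k), k) == m for m in (0, 1) for k in (0, 1))
```

Here $Gen$ makes the implicitly defined key distribution explicit: `secrets.randbelow(2)` draws the key bit uniformly.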

Which brings us back to perfect secrecy. Shannon defined a cryptosystem as perfectly secret if, for every plaintext, the a priori probability is equal to the a posteriori probability, i.e. the probability that that plaintext was the one that originated the observed ciphertext. Mathematically, we have the following

Definition 1 (Perfect secrecy). An encryption scheme is perfectly secret if for all $m_i \in \mathcal{M}$ and $e_k \in \mathcal{E}$, and for all probability distributions over $\mathcal{M}$, we have $P(m_i \vert e_k) = P(m_i)$3. End.

An elementary application of Bayes’ theorem shows that the previous definition is equivalent to having perfect secrecy if and only if $P(e_k \vert m_i) = P(e_k)$ (again for all $m_i$, $e_k$ and probability distributions over $\mathcal{M}$).

This is a rigorous definition, but when I first learned the concept, then read through the facts that stem from the definition, I felt some unease: for all the talk about probabilities, the underlying probability space was never defined. So let’s do that now. This space is composed of all the triples $(m_i, k_j, e_k)$, where $m_i \in \mathcal{M}$, $k_j \in \mathcal{K}$ and $\left( e_k = Enc(m_i, k_j) \right ) \in \mathcal{E}$. There are $\left\lvert \mathcal{M} \right\rvert \times \left\lvert \mathcal{K} \right\rvert$ such tuples. The probability of $(m_i, k_j, e_k)$ is $P(m_i) \times P(k_j)$, because when the encryption algorithm is fixed, $e_k$ is fully determined by the plaintext and the key. Note that $\sum_{m_i \in \mathcal{M}, \, k_j \in \mathcal{K}} P(m_i) \times P(k_j)=1$, because $\sum_{m_i \in \mathcal{M}} P(m_i) = 1$ and $\sum_{k_j \in \mathcal{K}} P(k_j) = 1$, and multiplying both yields the initial summation.

This may seem like a pointless display of pedantry, but its value becomes obvious when one tries to understand (and calculate) probabilities like $P(e)$, where $e$ is a fixed ciphertext. (A remark about notation: values that are assumed fixed are not subscripted.) Intuitively, one could surmise it should be something like the summation of the probabilities of all possible plaintexts, each multiplied by the probability of choosing a key that encrypts that plaintext into $e$. The formalism of the previous paragraph allows us to verify this conjecture. Indeed, to calculate $P(e)$, just select all tuples where $e_k = e$, and sum their probabilities. We obtain

$$P(e) = \sum_{\substack{m_i \in \mathcal{M},\, k_j \in \mathcal{K} \\ Enc(m_i, k_j) = e}} P(m_i) \times P(k_j) = \sum_{m_i \in \mathcal{M}} P(m_i) \times \left( \sum_{k_j \,:\, Enc(m_i, k_j) = e} P(k_j) \right) \tag{1}$$

To see how this is equivalent to our intuitive guess, consider what happens if for a given $m_i$, there are two different keys (say $k_{j_1}$ and $k_{j_2}$) that encrypt it to $e$. Then we would have:

$$\cdots + P(m_i) \times P(k_{j_1}) + P(m_i) \times P(k_{j_2}) + \cdots = \cdots + P(m_i) \times \left( P(k_{j_1}) + P(k_{j_2}) \right) + \cdots$$

So we conclude that each $m_i$ is multiplied by the total probability of selecting a key that encrypts it to $e$—just as conjectured.
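This bookkeeping is easy to mirror in code. Below is a short Python sketch, using a hypothetical one-bit XOR cipher with a biased plaintext distribution (both the cipher and the numbers are my own illustrative assumptions):

```python
from itertools import product

# Assumed toy setting: 1-bit messages and keys, Enc(m, k) = m XOR k,
# biased plaintexts, uniform keys.
P_m = {0: 0.75, 1: 0.25}
P_k = {0: 0.5, 1: 0.5}
enc = lambda m, k: m ^ k

# The probability space: triples (m_i, k_j, Enc(m_i, k_j)),
# each with probability P(m_i) * P(k_j).
space = {(m, k, enc(m, k)): P_m[m] * P_k[k]
         for m, k in product(P_m, P_k)}
assert abs(sum(space.values()) - 1.0) < 1e-12  # probabilities sum to 1

# P(e): select all tuples whose ciphertext equals e, and sum.
def P_e(e):
    return sum(p for (m, k, c), p in space.items() if c == e)

print(P_e(0), P_e(1))  # 0.5 0.5 — uniform, despite the biased plaintexts
```

The dictionary `space` is exactly the set of triples described above, so summing over a selection of its entries is the same operation as the summations in the text.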

### Conditional probabilities

Both forms of Definition 1 are based on conditional probabilities. Let’s see what insight our formalism can provide on those events.

We can re-write the second summation in Equation 1 differently, by noting that each of its terms is just the probability that both $m_i$ and $e$ occur:

$$P(e) = \sum_{m_i \in \mathcal{M}} P(m_i \cap e)$$

It is implicit that this is done for all keys that encrypt $m_i$ into $e$. This makes sense because the plaintexts form a partition of the probability space: $\Omega = \bigcup_{m_i \in \mathcal{M}} (m_i, \cdot, \cdot)$ (which is the same as saying that the sum of the probabilities of all plaintexts is $1$). On the other hand, for a given $m$ that is encrypted into $e$ by keys $k_{j_1}$ and $k_{j_2}$,

$$P(e \vert m) = \frac{P(m \cap e)}{P(m)} = P(k_{j_1}) + P(k_{j_2})$$

This is because $P(m \cap e)$ is the sum of the probabilities of the tuples of the form $(m, \cdot, e)$, and in our example there are two such tuples, and summing their probabilities yields $P(m) \times P(k_{j_1}) + P(m) \times P(k_{j_2})$. Dividing by $P(m)$ we get $P(k_{j_1}) + P(k_{j_2})$. This of course holds for more than two keys. But this means this sum is also equal to $P(e \vert m)$, and this in turn allows us to, yet again, re-write the probability $P(e)$ like so:

$$P(e) = \sum_{m_i \in \mathcal{M}} P(m_i) \times P(e \vert m_i) \tag{2}$$

As far as I can tell, there is no description of $P(m \vert e)$ that is similar to Equation 2, because it depends on the probability distribution of the plaintext. Writing $K_{m \rightarrow e}$ for the set of keys that encrypt $m$ to $e$, and $P(K_{m \rightarrow e})$ for the sum of the probabilities of those keys, the best we can write for $P(m \vert e)$ is the following, which is not simple at all…

$$P(m \vert e) = \frac{P(m \cap e)}{P(e)} = \frac{\sum_{k_j \in K_{m \rightarrow e}} P(m) \times P(k_j)}{\sum_{m_i \in \mathcal{M}} P(m_i) \times P(K_{m_i \rightarrow e})} \tag{3}$$
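These identities can also be checked numerically. Continuing with a hypothetical one-bit XOR cipher and a biased plaintext distribution (illustrative assumptions, not from the text):

```python
# Assumed toy setting: 1-bit messages/keys, Enc(m, k) = m XOR k,
# biased plaintexts, uniform keys.
P_m = {0: 0.75, 1: 0.25}
P_k = {0: 0.5, 1: 0.5}
enc = lambda m, k: m ^ k

def P_joint(m, e):
    # P(m ∩ e): sum of the probabilities of the tuples (m, ·, e),
    # i.e. over the keys that encrypt m to e.
    return sum(P_m[m] * P_k[k] for k in P_k if enc(m, k) == e)

def P_e(e):
    # The plaintexts partition the space: P(e) = Σ_i P(m_i ∩ e).
    return sum(P_joint(m, e) for m in P_m)

for e in (0, 1):
    # Equation (2): P(e) = Σ_i P(m_i) · P(e | m_i), with
    # P(e | m_i) = P(m_i ∩ e) / P(m_i).
    rhs = sum(P_m[m] * (P_joint(m, e) / P_m[m]) for m in P_m)
    assert abs(P_e(e) - rhs) < 1e-12
    # P(m | e) = P(m ∩ e) / P(e); here it equals P(m), since the
    # one-bit XOR cipher (a tiny OTP) happens to be perfectly secret.
    assert abs(P_joint(0, e) / P_e(e) - 0.75) < 1e-12
```

Note that `P_joint(m, e) / P_e(e)` is computed from `P_m`, which is exactly the point made above: $P(m \vert e)$ cannot be described without the plaintext distribution.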

* * *

### Back to perfect secrecy (again)

Despite all the talk about perfectly secret ciphers, the truth is that so far all the equalities shown are valid for any symmetric encryption scheme (not just for perfectly secret ones). The next two, however, are only true if the cipher has perfect secrecy. First, using conditional probabilities, we prove another condition equivalent to (both forms of) Definition 1.

Theorem 1. A cipher has perfect secrecy iff for any distribution over $\mathcal{M}$, any $m_1, m_2 \in \mathcal{M}$ and any $e \in \mathcal{E}$, we have $P(e \vert m_1)=P(e \vert m_2)$. End.

Proof. ($\rightarrow$) If the cipher has perfect secrecy, then $P(e \vert m_1)=P(e)=P(e \vert m_2)$, for any $m_1$, $m_2$ and $e$. ($\leftarrow$) If $P(e \vert m_1)=P(e \vert m_2)$ for any $m_1$, $m_2$ and $e$, then for an arbitrary $m$, every term $P(e \vert m_i)$ in Equation 2 equals $P(e \vert m)$, because they are all the same by hypothesis. Hence $P(e) = \sum_{m_i \in \mathcal{M}} P(m_i) \times P(e \vert m_i) = P(e \vert m) \times \sum_{m_i \in \mathcal{M}} P(m_i) = P(e \vert m)$, which is the second form of Definition 1. QED.
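Theorem 1's condition is easy to test mechanically for a candidate cipher. A minimal Python check on the one-bit XOR cipher with uniform keys (an illustrative assumption of mine):

```python
# Toy check of Theorem 1's condition: for the one-bit XOR cipher with
# uniform keys, P(e | m) is the same for every plaintext m.
P_k = {0: 0.5, 1: 0.5}
enc = lambda m, k: m ^ k

def P_e_given_m(e, m):
    # P(e | m) = probability of drawing a key that encrypts m to e.
    return sum(p for k, p in P_k.items() if enc(m, k) == e)

for e in (0, 1):
    assert P_e_given_m(e, 0) == P_e_given_m(e, 1) == 0.5
```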

Next we prove another necessary and sufficient condition for perfect secrecy. In the case of such a cipher, $P(e \vert m)$ always has the same value, for all $m$ (viz. $P(e)$). And because we can always assume that the keys are generated according to the uniform distribution, this means that, for a fixed $e$, and for any $m$, the number of keys that encrypt $m$ to $e$ is always the same. The next result shows the converse is also true.

Theorem 2. A cipher has perfect secrecy if and only if, having fixed a ciphertext, for any plaintext, the number of keys that encrypt it to that fixed ciphertext is the same (but note this number can vary for different ciphertexts). More formally, let $e$ be a fixed ciphertext as before, and let $K_{m \rightarrow e}$ be the set of keys that encrypt $m$ to $e$. Then a cipher has perfect secrecy iff $\left\lvert K_{m \rightarrow e} \right\rvert$ has the same value, for all $m$.4 End.

Note that the analog property when having fixed a plaintext is false: there can be more keys that encrypt that plaintext to one ciphertext than to another ciphertext, but the cipher can still be perfectly secret. We’ll see an example of this shortly.

Proof. ($\rightarrow$) We have argued this direction informally, based on the property that, for perfectly secret ciphers, $P(e \vert m)=P(e)$. But we can also use $P(m \vert e)=P(m)$. From (3), if the cipher is perfectly secret:

$$\frac{\sum_{k_j \in K_{m \rightarrow e}} P(m) \times P(k_j)}{\sum_{m_i \in \mathcal{M}} P(m_i) \times P(K_{m_i \rightarrow e})} = P(m)$$

$P(m)$ is constant in the numerator summation, so it can be put outside it, and cancelled with the $P(m)$ of the right hand side. We thus obtain

$$\frac{P(K_{m \rightarrow e})}{\sum_{m_i \in \mathcal{M}} P(m_i) \times P(K_{m_i \rightarrow e})} = 1$$

Remember the denominator equals $P(e)$. What this means is that for a given (fixed) ciphertext $e$, and for all plaintext messages $m$, the probabilities of the keys that encrypt $m$ to $e$ must always sum to the same value, viz. $P(e)$. Given the assumption of a uniform key distribution, this is the same as saying the number of such keys must always be the same.

($\leftarrow$) If for any ciphertext $e$, the number of keys that decrypt it to any plaintext is the same, then we immediately have that for any two different plaintext messages, $m_1$ and $m_2$, it must be the case that $P(e \vert m_1)=P(e \vert m_2)$. Theorem 1 now yields the conclusion that the cipher is perfectly secret. QED.

### The question

After that lengthy introduction, we now finally come to the question that actually annoyed me enough to try to visualise the probability space in the way I’ve just described. That question is the following: does perfect secrecy imply that the ciphertext distribution is uniform? I could neither prove nor refute it, but it turns out the answer is no. Here’s the counterexample: we have two bits of plaintext, $(b_0, b_1)$, four bits of key material $(k_0, k_1, k_2, k_3)$, and three bits of ciphertext $(c_0, c_1, c_2)$:

$$c_0 = (b_0 \oplus k_0) \oplus (k_2 \land k_3), \qquad c_1 = b_1 \oplus k_1, \qquad c_2 = b_0 \oplus k_0$$

This encryption algorithm has perfect secrecy, because for any given ciphertext, there is the same number of keys that decrypt it to any plaintext (cf. Theorem 2). This is straightforward (if somewhat laborious) to see.

Consider the ciphertext $(0, 0, 0)$, and an arbitrary plaintext $(b_0, b_1)$. What are the keys that would encrypt the said plaintext into the said ciphertext? Given that $c_2$ must be zero, we have that $k_0 = b_0$. The same reasoning yields $k_1 = b_1$, because $c_1 = 0$. Finally, given that $c_0=0$ and $b_0 \oplus k_0 = 0$, it must be the case that $k_3 \land k_2 = 0$. This yields the following three possible keys for encrypting $(b_0, b_1)$ into $(0, 0, 0)$: $(b_0, b_1, 0, 0)$, $(b_0, b_1, 0, 1)$ and $(b_0, b_1, 1, 0)$. We denote this set as $(b_0, b_1, \{(0, 0), (0, 1), (1, 0)\})$.

Reasoning similarly, we can write the following table, listing the keys that encrypt an arbitrary plaintext $(b_0, b_1)$ into the ciphertext in the left column. An overline ($\overline{b}$) denotes the complement of the bit $b$.

| Ciphertext | Keys |
| --- | --- |
| $(0, 0, 0)$ | $(b_0, b_1, \{(0, 0), (0, 1), (1, 0)\})$ |
| $(0, 0, 1)$ | $(\overline{b_0}, b_1, 1, 1)$ |
| $(0, 1, 0)$ | $(b_0, \overline{b_1}, \{(0, 0), (0, 1), (1, 0)\})$ |
| $(0, 1, 1)$ | $(\overline{b_0}, \overline{b_1}, 1, 1)$ |
| $(1, 0, 0)$ | $(b_0, b_1, 1, 1)$ |
| $(1, 0, 1)$ | $(\overline{b_0}, b_1, \{(0, 0), (0, 1), (1, 0)\})$ |
| $(1, 1, 0)$ | $(b_0, \overline{b_1}, 1, 1)$ |
| $(1, 1, 1)$ | $(\overline{b_0}, \overline{b_1}, \{(0, 0), (0, 1), (1, 0)\})$ |

Thus we can see that for any fixed ciphertext, there is the same number of keys that cause it to decrypt to any plaintext; thus the scheme has perfect secrecy. However, the ciphertext distribution is not always uniform: if both plaintext and keys are assumed uniform, then (for example) the ciphertext $(0, 0, 0)$ will be more likely to appear than $(0, 0, 1)$, because there are more keys that encrypt an arbitrary plaintext to it. In other words, although the cipher is perfectly secret for all plaintext distributions, there is at least one (viz. the uniform distribution) for which the ciphertext distribution will not be uniform.

Also recall the remark made after stating Theorem 2: in this cipher, for a given plaintext, there are more keys that encrypt it to some ciphertexts than to others—indeed that is the cause of the non-uniformity of the ciphertexts—but it does not prevent perfect secrecy, as this example illustrates.
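All of the above can be verified by brute force. The sketch below assumes the cipher computes $c_2 = b_0 \oplus k_0$, $c_1 = b_1 \oplus k_1$ and $c_0 = (b_0 \oplus k_0) \oplus (k_2 \land k_3)$, which is the reading consistent with the key-derivation argument for the ciphertext $(0, 0, 0)$:

```python
from collections import Counter
from itertools import product

def enc(b0, b1, k0, k1, k2, k3):
    # Assumed reading of the example cipher (cf. the derivation for (0,0,0)).
    c2 = b0 ^ k0
    c1 = b1 ^ k1
    c0 = (b0 ^ k0) ^ (k2 & k3)
    return (c0, c1, c2)

# keys_per[c][m]: number of keys k with Enc(m, k) == c.
keys_per = {}
for m in product((0, 1), repeat=2):
    for k in product((0, 1), repeat=4):
        keys_per.setdefault(enc(*m, *k), Counter())[m] += 1

# Theorem 2: perfect secrecy iff, for each ciphertext, every plaintext
# is reachable by the same number of keys.
for c, per_m in keys_per.items():
    assert len(per_m) == 4 and len(set(per_m.values())) == 1

# With uniform plaintexts and keys, P(c) is proportional to the number
# of (m, k) pairs producing c — and that count is not constant: twelve
# pairs give (0, 0, 0), but only four give (0, 0, 1).
totals = {c: sum(per_m.values()) for c, per_m in keys_per.items()}
assert totals[(0, 0, 0)] == 12 and totals[(0, 0, 1)] == 4
```

Enumerating all $4 \times 16 = 64$ pairs like this is exactly the "straightforward (if somewhat laborious)" check mentioned above, with the labour delegated to the machine.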

### The converse question

And what about the converse? I.e. if the ciphertext is uniformly distributed, is the cipher perfectly secret? As already mentioned, the ciphertext distribution is implicitly specified by the encryption algorithm, and the key and plaintext distributions. Given that we can assume the key to be uniformly distributed, if the ciphertext is also uniformly distributed we have (notice that the first summation is just another way of writing (1)):

$$P(e) = \sum_{m_i \in \mathcal{M}} \sum_{k_j \in K_{m_i \rightarrow e}} P(m_i) \times P(k_j) = \sum_{m_i \in \mathcal{M}} P(m_i) \times P(K_{m_i \rightarrow e}) = \frac{1}{\left\lvert \mathcal{E} \right\rvert}$$

Remember that $K_{m_i \rightarrow e}$ is the set of keys that encrypt $m_i$ to $e$, and $P(K_{m_i \rightarrow e})$ is the sum of the probabilities of those keys. If the ciphertext is uniformly distributed regardless of the plaintext distribution, then we must have $P(e) = \frac{1}{\left\lvert \mathcal{E} \right\rvert}$. This is only possible if, having fixed an $e$, $P(K_{m_i \rightarrow e})$ has the same value for all $m_i$ (to see this, consider the plaintext distribution concentrated on a single $m_i$: then $P(e) = P(K_{m_i \rightarrow e})$). We denote that common value by $P(K_{m \rightarrow e})$ (notice the subscript $i$ is gone). The uniformity of the key distribution now means that, for a fixed $e$, $\left\lvert K_{m_i \rightarrow e} \right\rvert$ has the same value, for all $m_i$. Theorem 2 now yields that the cipher is perfectly secret. Thus we can now state

Theorem 3. If a cipher has a uniform ciphertext distribution, regardless of the plaintext distribution, then it is a perfectly secret cipher. End.

The converse is false, as the above example cipher shows. An example of an encryption scheme where the ciphertext distribution is always uniform is the One Time Pad.

Notice that in this case—uniform ciphertext distribution—as mentioned above, fixing $e$, the number of keys that encrypt $m_i$ to $e$ is the same, for all $m_i$. But in addition to that, because $P(e)$ has the same value for all $e \in \mathcal{E}$, so does $P(K_{m \rightarrow e})$. This means that for a fixed plaintext $m$, $P(K_{m \rightarrow e_k})$ has the same value, for all $e_k \in \mathcal{E}$. Or in words, for all plaintexts, the number of keys that encrypt any particular plaintext to any particular ciphertext is the same.
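Theorem 3's premise can likewise be checked numerically for the One Time Pad. A sketch over 2-bit messages, sampling arbitrary plaintext distributions (the sampling setup is my own illustration):

```python
import random
from itertools import product

# One Time Pad on 2-bit messages: Enc(m, k) = m XOR k, uniform keys.
# The ciphertext distribution comes out uniform for *any* plaintext
# distribution, so by Theorem 3 the OTP is perfectly secret.
N = 4                                      # messages and keys are 0..3
random.seed(0)
for _ in range(100):
    w = [random.random() for _ in range(N)]
    P_m = [x / sum(w) for x in w]          # an arbitrary plaintext distribution
    P_c = [0.0] * N
    for m, k in product(range(N), repeat=2):
        P_c[m ^ k] += P_m[m] * (1 / N)     # uniform key probability
    assert all(abs(p - 1 / N) < 1e-9 for p in P_c)
```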

1. This assumes that given one specific plaintext and ciphertext, there is only one key that encrypts the former into the latter (and vice-versa for decryption). This need not be the case (even for perfect secrecy!), as we shall see. But of course, for any secure cryptosystem, even if several such keys exist, it should be infeasible to calculate any one of them.

2. Consider a cryptosystem $(Gen, Enc, Dec)$, in which $Gen$ is a really complicated algorithm that outputs the key according to a very non-uniform distribution. Such an algorithm usually takes as input a “random tape”, which outputs symbols of some alphabet according to a uniform distribution. Then we can use this random tape as a new key generation algorithm $Gen'$, and incorporate the operations of $Gen$ into $Enc$ and $Dec$. Thus we obtain a derived cryptosystem $(Gen', Enc', Dec')$, with a uniformly generated key, but with the same ciphertext distribution—because the operations are still the same, they are just “in a different place”.

More concretely, consider a specific tuple $(m, k, e)$, with probability $P(m) \times P(k)$. Let $k'$ be the (uniformly random) key which $Gen$ used as its random tape to produce key $k$. This means that $P(k) = P(k') \times P(Gen_{k' \rightarrow k})$, where the last term is the probability that $Gen$ produced key $k$ when its random tape produced $k'$. Then, under the new cryptosystem, what is the probability of $(m, k', e)$? Remember that the previous $Gen$ has been incorporated into $Enc'$, which means that the probability is that of $m$ being selected, times the probability of $k'$ being selected, times the probability of $Enc'$ internally transforming $k'$ into $k$. I.e. $P(m) \times P(k') \times P(Gen_{k' \rightarrow k})$; but this is just the original value. In particular this means that for any $e$ and $m_i$, the value of $P(e \vert m_i)$ does not change (cf. footnote #4).

As an ending remark, note that if the only source of randomness for $Gen$ is the random tape, then $P(Gen_{k' \rightarrow k})=1$, i.e. $k'$ is always transformed into $k$.

3. To be rigorous, we would have to define random variables $M$ and $E$, and say that $P(M=m_i \vert E=e_k) = P(M=m_i)$, where $m_i \in \mathcal{M}$ and $e_k \in \mathcal{E}$. But such a level of rigour is not needed here.

4. To better illustrate what the assumption of uniformly generated keys means in this context, suppose that (for some perfectly secret cryptosystem), for a given ciphertext $e$, there exists a plaintext $m_1$ for which there are two keys that encrypt it to $e$, and a plaintext $m_2$ for which there are three such keys. Furthermore, suppose that the key distribution is such that $P(e)=P(e \vert m_1)=P(e \vert m_2)$. By the method of footnote #2, if we now produce an equivalent cryptosystem with uniform keys, the ciphertext distribution does not change, so neither do $P(e \vert m_1)$ or $P(e \vert m_2)$; but they can no longer be equal, because as there are more keys for $m_2$ than for $m_1$, we must have $P(e \vert m_2) > P(e \vert m_1)$. This shows the original cipher could not have been perfectly secret.