Analysis of Boolean functions

The area has found many applications in combinatorics, social choice theory, random graphs, and theoretical computer science, especially in hardness of approximation, property testing, and PAC learning.

The functions $\chi_S(x) = \prod_{i \in S} x_i$ are known as Fourier characters, and they form an orthonormal basis for the space of all functions over $\{-1,1\}^n$.
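
Orthonormality can be verified exhaustively on a small cube (a sketch; the choice $n = 3$ is illustrative, not from the article):

```python
# Verify that chi_S(x) = prod_{i in S} x_i are pairwise orthonormal
# under the uniform-measure inner product <f, g> = E[f(x) g(x)].
from itertools import product, combinations

n = 3
cube = list(product([-1, 1], repeat=n))

def chi(S, x):
    out = 1
    for i in S:
        out *= x[i]
    return out

subsets = [S for k in range(n + 1) for S in combinations(range(n), k)]
for S in subsets:
    for T in subsets:
        inner = sum(chi(S, x) * chi(T, x) for x in cube) / len(cube)
        assert inner == (1 if S == T else 0)
```

Since there are $2^n$ characters and the function space has dimension $2^n$, orthonormality already makes them a basis.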

The total influence can also be defined using the discrete Laplacian of the Hamming graph, suitably normalized: $\operatorname{Inf}[f] = \langle f, Lf \rangle$, where $L$ is the Laplacian normalized so that $L\chi_S = |S|\chi_S$.
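
The two views of total influence agree, as a quick check shows (the example function $\operatorname{Maj}_3$ is my choice, not the article's):

```python
# Total influence of Maj_3 computed two ways:
# (a) combinatorially, as the expected number of pivotal coordinates;
# (b) spectrally, as sum_S |S| * fhat(S)^2.
from itertools import product, combinations
from math import prod

n = 3
cube = list(product([-1, 1], repeat=n))
f = lambda x: 1 if sum(x) > 0 else -1  # Maj_3

def flip(x, i):
    y = list(x); y[i] = -y[i]; return tuple(y)

# (a) expected number of pivotal coordinates
inf_comb = sum(f(x) != f(flip(x, i)) for x in cube for i in range(n)) / len(cube)

# (b) spectral formula via the Fourier coefficients
subsets = [S for k in range(n + 1) for S in combinations(range(n), k)]
fhat = lambda S: sum(f(x) * prod(x[i] for i in S) for x in cube) / len(cube)
inf_spec = sum(len(S) * fhat(S) ** 2 for S in subsets)

assert abs(inf_comb - inf_spec) < 1e-12  # both equal 1.5 for Maj_3
```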

Given $x \in \{-1,1\}^n$ and $\rho \in [0,1]$, a $\rho$-correlated copy $y$ of $x$ is obtained according to one of the following two equivalent rules, applied independently to each coordinate: either $y_i = x_i$ with probability $\rho$ and $y_i$ is uniformly random with probability $1-\rho$; or $y_i = x_i$ with probability $(1+\rho)/2$ and $y_i = -x_i$ with probability $(1-\rho)/2$. We denote this distribution by $y \sim N_\rho(x)$.

When $\rho = e^{-2t}$, the noise operator $T_\rho$ can also be defined using a continuous-time Markov chain, run for time $t$, in which each bit is flipped independently with rate 1.
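
The defining average $T_\rho f(x) = \mathbb{E}_{y \sim N_\rho(x)}[f(y)]$ matches the spectral formula $T_\rho f = \sum_S \rho^{|S|} \hat{f}(S) \chi_S$; a sketch with $\operatorname{Maj}_3$ as an illustrative example:

```python
# Noise operator T_rho computed two ways: by averaging f over a
# rho-correlated copy, and via the Fourier expansion.
from itertools import product, combinations
from math import prod

n, rho = 3, 0.5
cube = list(product([-1, 1], repeat=n))
f = lambda x: 1 if sum(x) > 0 else -1  # Maj_3

def T_direct(x):
    # E[f(y)] where each y_i agrees with x_i with probability (1+rho)/2
    total = 0.0
    for y in cube:
        w = prod((1 + rho) / 2 if y[i] == x[i] else (1 - rho) / 2
                 for i in range(n))
        total += w * f(y)
    return total

subsets = [S for k in range(n + 1) for S in combinations(range(n), k)]
fhat = lambda S: sum(f(x) * prod(x[i] for i in S) for x in cube) / len(cube)

def T_fourier(x):
    return sum(rho ** len(S) * fhat(S) * prod(x[i] for i in S)
               for S in subsets)

assert abs(T_direct((1, -1, 1)) - T_fourier((1, -1, 1))) < 1e-12
```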

This Markov chain is generated by the Laplacian of the Hamming graph, which relates total influence to the noise operator: $\operatorname{Inf}[f] = \frac{d}{d\rho}\langle f, T_\rho f\rangle \big|_{\rho=1}$.

If $q \ge p \ge 1$ and $0 \le \rho \le \sqrt{(p-1)/(q-1)}$, then $\|T_\rho f\|_q \le \|f\|_p$. Hypercontractivity is closely related to the logarithmic Sobolev inequalities of functional analysis.
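
A numerical illustration of the $(2,4)$ case at the critical rate $\rho = \sqrt{(2-1)/(4-1)} = 1/\sqrt{3}$ (the test function $\operatorname{Maj}_3$ is an arbitrary example):

```python
# Check ||T_rho f||_4 <= ||f||_2 for rho = 1/sqrt(3) on a small cube.
from itertools import product, combinations
from math import prod, sqrt

n = 3
rho = sqrt(1 / 3)
cube = list(product([-1, 1], repeat=n))
f = lambda x: 1 if sum(x) > 0 else -1  # Maj_3

subsets = [S for k in range(n + 1) for S in combinations(range(n), k)]
fhat = lambda S: sum(f(x) * prod(x[i] for i in S) for x in cube) / len(cube)
Tf = lambda x: sum(rho ** len(S) * fhat(S) * prod(x[i] for i in S)
                   for S in subsets)

# L^r norms with respect to the uniform measure on the cube
norm = lambda g, r: (sum(abs(g(x)) ** r for x in cube) / len(cube)) ** (1 / r)

assert norm(Tf, 4) <= norm(f, 2) + 1e-12  # ||f||_2 = 1 for Boolean f
```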

The $p$-biased measure on $\{0,1\}^n$ is given by $\mu_p(x) = p^{\sum_i x_i} (1-p)^{n - \sum_i x_i}$. This measure can be generated by choosing each coordinate independently to be 1 with probability $p$ and 0 with probability $1-p$.
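
A minimal sketch of working with $\mu_p$ (the event "at least two ones" is an arbitrary example of mine):

```python
# The p-biased measure mu_p on {0,1}^n, and an expectation under it.
from itertools import product

n, p = 3, 0.3
cube01 = list(product([0, 1], repeat=n))

def mu(x):
    k = sum(x)  # number of ones
    return p ** k * (1 - p) ** (n - k)

assert abs(sum(mu(x) for x in cube01) - 1) < 1e-12  # mu_p is a probability measure

# Pr_{mu_p}[at least two coordinates are 1] = 3 p^2 (1-p) + p^3
ev = sum(mu(x) for x in cube01 if sum(x) >= 2)
```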

The classical Fourier characters are no longer orthogonal with respect to this measure.

The Russo–Margulis formula states that for a monotone Boolean function $f$, $\frac{d}{dp} \Pr_{\mu_p}[f(x)=1] = \sum_{i=1}^n \Pr_{\mu_p}[\text{coordinate } i \text{ is pivotal for } x]$. It is key for proving sharp threshold theorems such as Friedgut's.
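
The formula can be checked numerically for a small monotone function (here $\operatorname{Maj}_3$ on $\{0,1\}^3$, an example of my choosing):

```python
# Russo-Margulis check: d/dp E_{mu_p}[f] equals the sum over coordinates
# of the probability that the coordinate is pivotal.
from itertools import product

n, p = 3, 0.3
cube01 = list(product([0, 1], repeat=n))
f = lambda x: 1 if sum(x) >= 2 else 0  # monotone: Maj_3 on {0,1}

def mu(x, q):
    k = sum(x)
    return q ** k * (1 - q) ** (n - k)

def expectation(q):
    return sum(mu(x, q) * f(x) for x in cube01)

# left-hand side: derivative via a small central difference
h = 1e-6
deriv = (expectation(p + h) - expectation(p - h)) / (2 * h)

def flip(x, i):
    y = list(x); y[i] = 1 - y[i]; return tuple(y)

# right-hand side: sum of pivotality probabilities under mu_p
pivotal = sum(mu(x, p) for i in range(n) for x in cube01
              if f(x) != f(flip(x, i)))

assert abs(deriv - pivotal) < 1e-4  # both equal 6 p (1-p) = 1.26 here
```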

One of the deepest results in the area, the invariance principle, connects the distribution of functions on the Boolean cube to their distribution on Gaussian space.

Many of the basic concepts of Fourier analysis on the Boolean cube have counterparts in Gaussian space. Gaussian space is more symmetric than the Boolean cube (for example, it is rotation invariant) and supports continuous arguments that may be harder to carry out in the discrete setting of the Boolean cube.

The Poincaré inequality for the Boolean cube (which follows from formulas appearing above) states that for a function $f\colon \{-1,1\}^n \to \mathbb{R}$, $\operatorname{Var}[f] \le \operatorname{Inf}[f]$.
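
For two variables the inequality can be verified exhaustively over all sixteen Boolean functions:

```python
# Exhaustive check of Var[f] <= Inf[f] for every f: {-1,1}^2 -> {-1,1}.
from itertools import product

n = 2
cube = list(product([-1, 1], repeat=n))

def flip(x, i):
    y = list(x); y[i] = -y[i]; return tuple(y)

checked = 0
for values in product([-1, 1], repeat=len(cube)):
    f = dict(zip(cube, values))
    mean = sum(f[x] for x in cube) / len(cube)
    var = sum((f[x] - mean) ** 2 for x in cube) / len(cube)
    total_inf = sum(f[x] != f[flip(x, i)]
                    for x in cube for i in range(n)) / len(cube)
    assert var <= total_inf + 1e-12
    checked += 1
```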

The bound given by the Kahn–Kalai–Linial theorem is tight, and is achieved by the Tribes function of Ben-Or and Linial:[8] an OR of disjoint ANDs ("tribes"), each of width roughly $\log_2 n - \log_2 \ln n$. The Kahn–Kalai–Linial theorem was one of the first results in the area, and the one that introduced hypercontractivity into the context of Boolean functions.
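
A toy instance of Tribes, with two tribes of width 2 on 4 bits (this tiny size is for illustration only; the balanced influences are the point):

```python
# Tribes on 4 bits: an OR of two disjoint width-2 ANDs ({0,1} convention).
# Every coordinate turns out to have the same influence, 3/8.
from itertools import product

cube01 = list(product([0, 1], repeat=4))
tribes = lambda x: (x[0] and x[1]) or (x[2] and x[3])

def flip(x, i):
    y = list(x); y[i] = 1 - y[i]; return tuple(y)

influences = [sum(tribes(x) != tribes(flip(x, i)) for x in cube01)
              / len(cube01) for i in range(4)]
# coordinate 0 is pivotal iff x[1] = 1 and the other tribe is 0:
# probability (1/2) * (3/4) = 3/8, and likewise for every coordinate
```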

Combined with the Russo–Margulis lemma, Friedgut's junta theorem implies that for every $p$, every monotone function is close to a junta with respect to $\mu_q$ for some $q \approx p$.

The invariance principle (in a special case) informally states that if $f$ is a multilinear polynomial of bounded degree in which all variables have small influence, then the distribution of $f$ under uniform $\{-1,1\}$ inputs is close to its distribution under i.i.d. standard Gaussian inputs.
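
A flavour of this (not the full principle, just moment convergence for the simplest low-influence polynomial, the normalized sum): the fourth moment of $(x_1 + \cdots + x_n)/\sqrt{n}$ under uniform $\pm 1$ bits is $3 - 2/n$, approaching the Gaussian value $3$.

```python
# Fourth moment of the normalized sum of n uniform +-1 bits,
# computed exhaustively; it equals 3 - 2/n and tends to E[g^4] = 3
# for a standard Gaussian g.
from itertools import product

def fourth_moment(n):
    cube = product([-1, 1], repeat=n)
    return sum((sum(x) / n ** 0.5) ** 4 for x in cube) / 2 ** n
```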

The invariance principle was the key ingredient in the original proof of the Majority is Stablest theorem.

It is not hard to show that the Boolean linear functions are exactly the characters $\chi_S$.

Blum, Luby and Rubinfeld[11] showed that if the test passes with probability $1 - \varepsilon$, then $f$ is $O(\varepsilon)$-close to a Fourier character.

Bellare et al.[12] gave an extremely simple Fourier-analytic proof that also shows that if the test succeeds with probability $1/2 + \varepsilon$, then $f$ is correlated with a Fourier character.

Their proof relies on the following formula for the success probability of the test: $\Pr[f(x)f(y) = f(xy)] = \frac{1}{2} + \frac{1}{2} \sum_S \hat{f}(S)^3$, where $xy$ denotes the coordinatewise product.

Arrow's impossibility theorem states that for three or more candidates, the only unanimous voting rule for which there is always a Condorcet winner is a dictatorship.
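
The success-probability formula is an exact identity, which can be confirmed by brute force on two bits (the non-linear test function below is an arbitrary example):

```python
# BLR linearity test: verify Pr[f(x) f(y) = f(x*y)] = 1/2 + (1/2) sum_S fhat(S)^3.
from itertools import product, combinations
from math import prod

n = 2
cube = list(product([-1, 1], repeat=n))
# an arbitrary non-linear f: {-1,1}^2 -> {-1,1}
f = {(-1, -1): 1, (-1, 1): -1, (1, -1): 1, (1, 1): 1}

def mult(x, y):  # coordinatewise product
    return tuple(a * b for a, b in zip(x, y))

# left-hand side: exact success probability over all pairs (x, y)
success = sum(f[x] * f[y] == f[mult(x, y)]
              for x in cube for y in cube) / len(cube) ** 2

# right-hand side: the Fourier formula
subsets = [S for k in range(n + 1) for S in combinations(range(n), k)]
fhat = lambda S: sum(f[x] * prod(x[i] for i in S) for x in cube) / len(cube)
formula = 0.5 + 0.5 * sum(fhat(S) ** 3 for S in subsets)

assert abs(success - formula) < 1e-12
```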

Kalai[13] gave an alternative proof of this result in the case of three candidates using Fourier analysis.

A classical result in the theory of random graphs states that the probability that a $G(n,p)$ random graph is connected tends to $e^{-e^{-c}}$ if $p = \frac{\ln n + c}{n}$.

Friedgut's sharp threshold theorem[14] states, roughly speaking, that a monotone graph property (a property that does not depend on the names of the vertices) has a sharp threshold unless it is correlated with the appearance of small subgraphs.

This theorem has been widely applied to analyze random graphs and percolation.

On a related note, the KKL theorem implies that the width of the threshold window is always at most $O(1/\log n)$.

Sheppard's formula gives the asymptotic noise stability of majority: $\lim_{n\to\infty} \operatorname{Stab}_\rho[\operatorname{Maj}_n] = 1 - \frac{2}{\pi}\arccos\rho$. This is related to the probability that if we choose $x$ uniformly at random and obtain $y$ as a $\rho$-correlated copy of $x$, then the majority stays the same: that probability is $1 - \frac{\arccos\rho}{\pi}$.

There are Boolean functions with larger noise stability; the Majority is Stablest theorem[16][17] states, informally, that among functions in which every coordinate has small influence, majority asymptotically has the largest noise stability. Majority is Stablest implies that the Goemans–Williamson approximation algorithm for MAX-CUT is optimal, assuming the unique games conjecture.

This implication, due to Khot et al.,[18] was the impetus behind proving the theorem.
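
Sheppard's formula above can be checked numerically: the exact noise stability of $\operatorname{Maj}_3$ at $\rho = 1/2$ is $0.40625$, already not far from the $n \to \infty$ value $1 - \frac{2}{\pi}\arccos\frac12 = \frac13$ (the parameters here are illustrative choices of mine):

```python
# Exact noise stability Stab_rho[Maj_3] = E[f(x) f(y)] for y a
# rho-correlated copy of x, compared with Sheppard's asymptotic value.
from itertools import product
from math import acos, pi, prod

rho, n = 0.5, 3
cube = list(product([-1, 1], repeat=n))
maj = lambda x: 1 if sum(x) > 0 else -1

def stability(rho):
    total = 0.0
    for x in cube:
        for y in cube:
            # probability of y given x: each bit agrees w.p. (1+rho)/2
            w = prod((1 + rho) / 2 if x[i] == y[i] else (1 - rho) / 2
                     for i in range(n))
            total += w * maj(x) * maj(y)
    return total / len(cube)

limit = 1 - 2 * acos(rho) / pi  # Sheppard's formula, = 1/3 at rho = 1/2
```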