Parity learning

Parity learning is a problem in machine learning: an algorithm must find an unknown function ƒ, given samples (x, ƒ(x)) and the assurance that ƒ computes the parity (XOR) of the bits of x at some fixed set of positions. The samples are generated using some distribution over the input.

The problem is easy to solve using Gaussian elimination, provided the algorithm is given enough samples from a distribution that is not too skewed.
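To make the noiseless case concrete, the following is a minimal sketch in Python; the helper names, the 8-bit input length, the sample count, and the hidden positions {0, 2} are choices made only for this example. It recovers the secret parity positions from exact samples by Gaussian elimination over GF(2).

```python
import random

def parity(x, secret_positions):
    """Parity (XOR) of the bits of x at the fixed positions in secret_positions."""
    return sum(x[i] for i in secret_positions) % 2

def gaussian_elimination_gf2(samples):
    """Find a 0/1 vector s with <x, s> = y (mod 2) for every sample (x, y).

    Returns one consistent solution (free variables set to 0),
    or None if the system is inconsistent.
    """
    n = len(samples[0][0])
    # Augmented rows: the input bits followed by the label bit.
    rows = [list(x) + [y] for x, y in samples]
    pivot_cols = []
    r = 0
    for col in range(n):
        # Find a row at or below r with a 1 in this column to use as the pivot.
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # Eliminate this column from every other row (XOR of rows = addition mod 2).
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivot_cols.append(col)
        r += 1
    # An all-zero coefficient row with label 1 means the system is inconsistent.
    if any(not any(row[:n]) and row[n] for row in rows):
        return None
    s = [0] * n
    for i, col in enumerate(pivot_cols):
        s[col] = rows[i][n]
    return s

# Example: the hidden parity is over positions {0, 2} of an 8-bit input.
secret = {0, 2}
n = 8
samples = []
for _ in range(4 * n):  # enough random samples for full rank with high probability
    x = [random.randint(0, 1) for _ in range(n)]
    samples.append((x, parity(x, secret)))

print(gaussian_elimination_gf2(samples))  # e.g. [1, 0, 1, 0, 0, 0, 0, 0]
```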

In Learning Parity with Noise (LPN), the samples may contain some error.

Instead of samples (x, ƒ(x)), the algorithm is provided with (x, y), where for random boolean noise b, y = ƒ(x) ⊕ b; the noise bit b equals 1 with some fixed probability ε < 1/2, independently for each sample.
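A minimal sketch of how such noisy samples could be drawn, assuming the hidden parity is described by a 0/1 indicator vector secret and using a noise rate of 0.1 chosen purely for illustration:

```python
import random

def lpn_sample(secret, noise_rate):
    """Draw one LPN sample (x, y): the parity ƒ(x) is flipped with probability noise_rate."""
    n = len(secret)
    x = [random.randint(0, 1) for _ in range(n)]
    clean = sum(xi & si for xi, si in zip(x, secret)) % 2   # ƒ(x), the noiseless parity
    b = 1 if random.random() < noise_rate else 0            # random boolean noise b
    return x, clean ^ b                                     # y = ƒ(x) ⊕ b

# Example: secret parity over positions {0, 2} of an 8-bit input, 10% noise.
secret = [1, 0, 1, 0, 0, 0, 0, 0]
for x, y in (lpn_sample(secret, noise_rate=0.1) for _ in range(5)):
    print(x, y)
```

With noisy labels, the linear system above is no longer consistent, which is why plain Gaussian elimination fails and the problem is believed to become hard.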

The noisy version of the parity learning problem is conjectured to be hard[1] and is widely used in cryptography.