Sinkhorn's theorem states that every square matrix with strictly positive entries can be written in a certain standard form: a doubly stochastic matrix multiplied on the left and on the right by diagonal matrices with strictly positive diagonal entries. [1] [2]
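Written out (a standard formulation of the theorem; the names A, D1, D2, S are conventional rather than quoted from the cited sources):

```latex
% For every n x n matrix A with strictly positive entries there exist
% diagonal matrices D_1, D_2 with strictly positive diagonal entries such that
\[
  S \;=\; D_1 A D_2, \qquad S\mathbf{1} = \mathbf{1}, \qquad S^{\mathsf{T}}\mathbf{1} = \mathbf{1},
\]
% i.e. S is doubly stochastic. The pair (D_1, D_2) is unique up to replacing
% it by (c D_1, c^{-1} D_2) for a positive scalar c.
```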
A simple iterative method to approach the doubly stochastic matrix, known as the Sinkhorn–Knopp algorithm, is to alternately rescale all rows and all columns of A to sum to 1. [4]
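A minimal sketch of this alternating rescaling in Python (the function name, tolerance, and iteration cap are illustrative, not from a particular library):

```python
import numpy as np

def sinkhorn_knopp(A, tol=1e-9, max_iter=1000):
    """Alternately normalize rows and columns of a positive matrix A
    until it is (approximately) doubly stochastic."""
    S = np.array(A, dtype=float)
    for _ in range(max_iter):
        S /= S.sum(axis=1, keepdims=True)  # rescale each row to sum to 1
        S /= S.sum(axis=0, keepdims=True)  # rescale each column to sum to 1
        # columns are exact after the last step; stop once rows are close too
        if np.allclose(S.sum(axis=1), 1.0, atol=tol):
            break
    return S

A = np.random.rand(4, 4) + 0.1        # strictly positive entries
S = sinkhorn_knopp(A)
print(S.sum(axis=0), S.sum(axis=1))   # both vectors close to all ones
```

For strictly positive A the iteration converges, recovering the doubly stochastic factor of the theorem.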
The following extension to maps between matrices is also true (see Theorem 5 [5] and also Theorem 4.7 [6]): given a Kraus operator representing a trace-preserving quantum operation Φ that maps one density matrix into another, and whose range lies in the interior of the positive definite cone (strict positivity), there exist positive definite scalings x_j, for j in {0, 1}, such that the rescaled Kraus operator is doubly stochastic, i.e. both trace preserving and mapping the identity to itself.
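In symbols (a sketch using the common Kraus notation B_i, which is an assumption here rather than a quotation from [5] or [6]):

```latex
% The operation in Kraus form and the trace-preserving condition:
\[
  \Phi(S) \;=\; \sum_i B_i S B_i^{*}, \qquad \sum_i B_i^{*} B_i \;=\; I.
\]
% The rescaled Kraus operator produced by the theorem is
\[
  S \;\mapsto\; x_1\,\Phi(x_0 S x_0)\,x_1 \;=\; \sum_i (x_1 B_i x_0)\, S\, (x_1 B_i x_0)^{*},
\]
% which is doubly stochastic: trace preserving and sending I to I.
```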
In the 2010s Sinkhorn's theorem came to be used to find solutions of entropy-regularised optimal transport problems. [7] This has been of interest in machine learning because such "Sinkhorn distances" can be used to evaluate the difference between data distributions and permutations.
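As an illustration, a minimal sketch of the Sinkhorn iteration for entropy-regularised optimal transport (the function name, regularisation value, and toy data are illustrative):

```python
import numpy as np

def sinkhorn_ot(a, b, C, reg=0.1, n_iter=500):
    """Entropy-regularised optimal transport between histograms a and b
    with cost matrix C, solved by Sinkhorn iterations on the Gibbs kernel."""
    K = np.exp(-C / reg)             # Gibbs kernel derived from the cost
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)            # match column marginals to b
        u = a / (K @ v)              # match row marginals to a
    P = u[:, None] * K * v[None, :]  # transport plan with marginals (a, b)
    return P, np.sum(P * C)          # plan and its transport cost

a = np.full(3, 1 / 3)                # uniform source histogram
b = np.full(3, 1 / 3)                # uniform target histogram
C = np.abs(np.subtract.outer(np.arange(3.0), np.arange(3.0)))  # |i - j| cost
P, cost = sinkhorn_ot(a, b, C)
print(P.round(3), cost)
```

The update is the same alternating row/column rescaling as above, applied to the kernel exp(-C/reg), so the limiting plan P has the prescribed marginals a and b; the resulting cost is a "Sinkhorn distance" in the sense used here.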