Vector logic assumes that the truth values map onto vectors, and that the monadic and dyadic operations are executed by matrix operators.
Classic binary logic is represented by a small set of mathematical functions depending on one (monadic) or two (dyadic) variables.
A two-valued vector logic requires a correspondence between the truth values true (t) and false (f) and two q-dimensional normalized real-valued column vectors s and n, hence:

$$t \mapsto s \quad \text{and} \quad f \mapsto n$$

(where $q \geq 2$, and s and n are orthonormal, i.e. $s^T n = 0$ and $s^T s = n^T n = 1$).
This correspondence generates a space of vector truth-values: $V_2 = \{s, n\}$.
The two basic monadic operators for this two-valued vector logic are the identity and the negation, executed by the $q \times q$ matrices

$$I = ss^T + nn^T, \qquad N = ns^T + sn^T,$$

so that $Is = s$, $In = n$, $Ns = n$ and $Nn = s$. The 16 two-valued dyadic operators correspond to functions of the type $f : V_2 \otimes V_2 \to V_2$; they are executed by matrices of dimension $q \times q^2$ acting on the Kronecker products $u \otimes v$ of two truth-value vectors.
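As a minimal illustrative sketch (not part of the original formulation), the following NumPy code builds the identity and negation matrices from an assumed orthonormal pair s, n, here taken as the standard basis with q = 2:

```python
import numpy as np

# Assumed truth-value vectors: any orthonormal pair works; here q = 2.
s = np.array([1.0, 0.0])   # true
n = np.array([0.0, 1.0])   # false

# Monadic operators as q x q matrices built from outer products.
I = np.outer(s, s) + np.outer(n, n)   # identity: I s = s,  I n = n
N = np.outer(n, s) + np.outer(s, n)   # negation: N s = n,  N n = s

assert np.allclose(I @ s, s) and np.allclose(I @ n, n)
assert np.allclose(N @ s, n) and np.allclose(N @ n, s)
```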
The matrices that execute these dyadic operations are based on the properties of the Kronecker product.
Two properties of this product are essential for the formalism of vector logic. If A, B, C and D are matrices of such size that one can form the matrix products AC and BD, then

$$(A \otimes B)(C \otimes D) = AC \otimes BD,$$

and, in addition,

$$(A \otimes B)^T = A^T \otimes B^T.$$

Using these properties, expressions for the dyadic logic functions can be obtained. Conjunction, disjunction and implication are executed by the $q \times q^2$ matrices

$$C = s(s \otimes s)^T + n(s \otimes n)^T + n(n \otimes s)^T + n(n \otimes n)^T,$$
$$D = s(s \otimes s)^T + s(s \otimes n)^T + s(n \otimes s)^T + n(n \otimes n)^T,$$
$$L = s(s \otimes s)^T + n(s \otimes n)^T + s(n \otimes s)^T + s(n \otimes n)^T.$$

The matrices S and P correspond to the Sheffer (NAND) and the Peirce (NOR) operations, respectively:

$$S = NC, \qquad P = ND.$$

Here are numerical examples of some basic logical gates implemented as matrices, using the 2-dimensional orthonormal vectors $s = (1, 0)^T$ and $n = (0, 1)^T$.
The resulting matrices for conjunction, disjunction and implication are

$$C = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 \end{pmatrix}, \qquad D = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad L = \begin{pmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 \end{pmatrix}.$$

A different choice of orthonormal vectors for s and n yields different numerical matrices representing the same logical operations.
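As an illustrative sketch under the same standard-basis assumption, the dyadic matrices can be assembled from Kronecker products and checked against the classical truth tables; the variable names C, D and L simply mirror the symbols above:

```python
import numpy as np

s, n = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Dyadic operators as q x q^2 matrices acting on Kronecker products u (x) v.
C = (np.outer(s, np.kron(s, s)) + np.outer(n, np.kron(s, n))
     + np.outer(n, np.kron(n, s)) + np.outer(n, np.kron(n, n)))  # conjunction
D = (np.outer(s, np.kron(s, s)) + np.outer(s, np.kron(s, n))
     + np.outer(s, np.kron(n, s)) + np.outer(n, np.kron(n, n)))  # disjunction
L = (np.outer(s, np.kron(s, s)) + np.outer(n, np.kron(s, n))
     + np.outer(s, np.kron(n, s)) + np.outer(s, np.kron(n, n)))  # implication

# Truth-table checks, e.g. conjunction is true only for (true, true).
assert np.allclose(C @ np.kron(s, s), s)
assert np.allclose(C @ np.kron(s, n), n)
assert np.allclose(D @ np.kron(n, n), n)
assert np.allclose(L @ np.kron(s, n), n)
```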
In the two-valued logic, the conjunction and the disjunction operations satisfy De Morgan's law, $p \wedge q \equiv \neg(\neg p \vee \neg q)$, and its dual, $p \vee q \equiv \neg(\neg p \wedge \neg q)$.
For the two-valued vector logic this law is also verified:

$$C(u \otimes v) = N D (Nu \otimes Nv), \qquad D(u \otimes v) = N C (Nu \otimes Nv), \qquad u, v \in V_2.$$

The Kronecker product implies the following factorization:

$$Nu \otimes Nv = (N \otimes N)(u \otimes v).$$

Then it can be proved that in the two-valued vector logic De Morgan's law is a law involving operators, and not only a law concerning operations:[6]

$$C = N D (N \otimes N), \qquad D = N C (N \otimes N).$$

In the classical propositional calculus, the law of contraposition $p \to q \equiv \neg q \to \neg p$ is proved because the equivalence holds for all the possible combinations of truth values of p and q.
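The operator form of De Morgan's law can be checked numerically; the sketch below (an illustration reusing the standard-basis matrices defined as before) verifies C = N D (N ⊗ N) and its dual:

```python
import numpy as np

s, n = np.array([1.0, 0.0]), np.array([0.0, 1.0])
N = np.outer(n, s) + np.outer(s, n)
C = sum(np.outer(w, np.kron(u, v)) for u, v, w in
        [(s, s, s), (s, n, n), (n, s, n), (n, n, n)])   # conjunction
D = sum(np.outer(w, np.kron(u, v)) for u, v, w in
        [(s, s, s), (s, n, s), (n, s, s), (n, n, n)])   # disjunction

NN = np.kron(N, N)                 # q^2 x q^2 matrix sending u (x) v to Nu (x) Nv
assert np.allclose(C, N @ D @ NN)  # De Morgan: conjunction via negated disjunction
assert np.allclose(D, N @ C @ NN)  # dual form
```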
[7] Instead, in vector logic the law of contraposition emerges from a chain of equalities within the rules of matrix algebra and Kronecker products. Using the fact that implication can be written in terms of disjunction, $L(u \otimes v) = D(Nu \otimes v)$ (the vector version of $p \to q \equiv \neg p \vee q$), one obtains

$$L(u \otimes v) = D(Nu \otimes v) = D(v \otimes Nu) = D(NNv \otimes Nu) = L(Nv \otimes Nu).$$

This result is based on the fact that D, the disjunction matrix, represents a commutative operation: $D(u \otimes v) = D(v \otimes u)$.
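The contraposition law can likewise be checked numerically. The sketch below (same standard-basis assumption) verifies that L(u ⊗ v) = L(Nv ⊗ Nu) for every pair of truth-value vectors:

```python
import numpy as np

s, n = np.array([1.0, 0.0]), np.array([0.0, 1.0])
N = np.outer(n, s) + np.outer(s, n)
L = sum(np.outer(w, np.kron(u, v)) for u, v, w in
        [(s, s, s), (s, n, n), (n, s, s), (n, n, s)])   # implication

for u in (s, n):
    for v in (s, n):
        lhs = L @ np.kron(u, v)
        rhs = L @ np.kron(N @ v, N @ u)   # contrapositive: (not v) -> (not u)
        assert np.allclose(lhs, rhs)
```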
Many-valued logic was developed by many researchers, particularly Jan Łukasiewicz, and allows logical operations to be extended to truth values that include uncertainties.
In vector logic, uncertainties can be introduced by means of probabilistic truth-value vectors of the form $u = \alpha s + (1 - \alpha)n$ with $\alpha \in [0, 1]$; for such vectors, a scalar probabilistic logic is provided by the projection over vector s:

$$\mathrm{Val}(u) = s^T u = \alpha.$$

Here are the main results of these projections, with $\mathrm{Val}(u) = \alpha$ and $\mathrm{Val}(v) = \beta$:

$$\mathrm{NOT}(\alpha) = s^T N u = 1 - \alpha,$$
$$\mathrm{AND}(\alpha, \beta) = s^T C (u \otimes v) = \alpha\beta,$$
$$\mathrm{OR}(\alpha, \beta) = s^T D (u \otimes v) = \alpha + \beta - \alpha\beta,$$
$$\mathrm{IMPL}(\alpha, \beta) = s^T L (u \otimes v) = 1 - \alpha(1 - \beta).$$

The associated negations are:

$$\mathrm{NAND}(\alpha, \beta) = 1 - \alpha\beta, \qquad \mathrm{NOR}(\alpha, \beta) = 1 - (\alpha + \beta - \alpha\beta).$$

If the scalar values belong to the set {0, 1/2, 1}, this many-valued scalar logic is, for many of the operators, almost identical to the 3-valued logic of Łukasiewicz.
It has also been proved that when the monadic or dyadic operators act on probabilistic vectors, the output is again a probabilistic vector.[6]
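A brief numerical sketch (an illustration under the standard-basis assumption; prob_vec is a hypothetical helper name) checks the scalar projections and the fact that the outputs remain probabilistic vectors:

```python
import numpy as np

s, n = np.array([1.0, 0.0]), np.array([0.0, 1.0])
N = np.outer(n, s) + np.outer(s, n)
C = sum(np.outer(w, np.kron(u, v)) for u, v, w in
        [(s, s, s), (s, n, n), (n, s, n), (n, n, n)])   # conjunction
D = sum(np.outer(w, np.kron(u, v)) for u, v, w in
        [(s, s, s), (s, n, s), (n, s, s), (n, n, n)])   # disjunction

def prob_vec(a):                       # probabilistic truth value a*s + (1-a)*n
    return a * s + (1 - a) * n

for a in (0.0, 0.25, 0.5, 1.0):
    for b in (0.0, 0.5, 1.0):
        u, v = prob_vec(a), prob_vec(b)
        assert np.isclose(s @ (N @ u), 1 - a)                      # NOT
        assert np.isclose(s @ (C @ np.kron(u, v)), a * b)          # AND
        assert np.isclose(s @ (D @ np.kron(u, v)), a + b - a * b)  # OR
        # The output is again a probabilistic vector: components sum to 1.
        assert np.isclose((C @ np.kron(u, v)).sum(), 1.0)
```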
A related operator is the square root of NOT, a matrix $N^{1/2}$ satisfying $N^{1/2} N^{1/2} = N$. This operator was originally defined for qubits in the framework of quantum computing,[12][13] and in vector logic it can be extended to arbitrary orthonormal truth values.
Another interesting point is the analogy with the two square roots of −1: the square root of NOT and its complex conjugate both square to the negation matrix N, just as i and −i both square to −1.
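As a sketch (the specific complex expression below is one known square root of the negation, written here under the standard-basis assumption; it is not necessarily the notation used in the cited sources), the matrix ((1 + i)/2) I + ((1 − i)/2) N squares to N:

```python
import numpy as np

s, n = np.array([1.0, 0.0]), np.array([0.0, 1.0])
I = np.outer(s, s) + np.outer(n, n)          # identity operator
N = np.outer(n, s) + np.outer(s, n)          # negation operator

# One complex square root of NOT; its complex conjugate is the other root,
# analogous to i and -i being the two square roots of -1.
sqrtN = 0.5 * ((1 + 1j) * I + (1 - 1j) * N)

assert np.allclose(sqrtN @ sqrtN, N)
assert np.allclose(sqrtN.conj() @ sqrtN.conj(), N)
```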
Early attempts to use linear algebra to represent logic operations can be traced back to Peirce and Copilowish,[15] particularly in the use of logical matrices to interpret the calculus of relations.
The approach has been inspired by neural network models based on the use of high-dimensional matrices and vectors.
[16][17] Vector logic is a direct translation into a matrix–vector formalism of the classical Boolean polynomials.
[18] This kind of formalism has been applied to develop a fuzzy logic in terms of complex numbers.
[19] Other matrix and vector approaches to logical calculus have been developed in the framework of quantum physics, computer science and optics.
Ramachandran developed a formalism using algebraic matrices and vectors to represent many operations of classical Jain logic known as Syad and Saptbhangi; see Indian logic.
[22] It requires independent affirmative evidence for each assertion in a proposition, and does not assume binary complementation.
George Boole established the representation of logical operations as polynomials.
[18] For the case of monadic operators (such as identity or negation), the Boolean polynomials look as follows:

$$f(x) = f(1)x + f(0)(1 - x).$$

The four different monadic operations result from the different binary values of the coefficients $f(1)$ and $f(0)$.
Identity operation requires f(1) = 1 and f(0) = 0, and negation occurs if f(1) = 0 and f(0) = 1.
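For example, setting $f(1) = 1$ and $f(0) = 0$ reduces the polynomial to the identity, $f(x) = x$, while $f(1) = 0$ and $f(0) = 1$ gives the negation, $f(x) = 1 - x$.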
For the 16 dyadic operators, the Boolean polynomials are of the form

$$f(x, y) = f(1,1)xy + f(1,0)x(1 - y) + f(0,1)(1 - x)y + f(0,0)(1 - x)(1 - y).$$

The dyadic operations can be translated to this polynomial format when the coefficients f take the values indicated in the respective truth tables.
For instance, the NAND operation requires that

$$f(1,1) = 0 \quad \text{and} \quad f(1,0) = f(0,1) = f(0,0) = 1,$$

so that its polynomial is $f(x, y) = 1 - xy$. These Boolean polynomials can be immediately extended to any number of variables, producing a large potential variety of logical operators.
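As a small sketch (boolean_poly is a hypothetical helper name, not from the original text), the dyadic polynomial with the NAND coefficients reproduces the classical NAND truth table:

```python
def boolean_poly(x, y, f11, f10, f01, f00):
    """Dyadic Boolean polynomial with truth-table coefficients."""
    return f11*x*y + f10*x*(1 - y) + f01*(1 - x)*y + f00*(1 - x)*(1 - y)

for x in (0, 1):
    for y in (0, 1):
        nand = boolean_poly(x, y, 0, 1, 1, 1)   # f(1,1)=0, the others are 1
        assert nand == 1 - x*y                  # the NAND polynomial
        assert nand == int(not (x and y))       # classical NAND
```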
In vector logic, the matrix–vector structure of the logical operators is an exact translation of these Boolean polynomials into the format of linear algebra, where x and 1 − x correspond to the vectors s and n respectively (and the same for y and 1 − y).
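A final sketch illustrates this translation (dyadic_operator is a hypothetical helper, assuming the standard basis for s and n): each truth-table coefficient selects whether the output column is s or n, and the NAND matrix obtained this way coincides with the composition NC of negation and conjunction:

```python
import numpy as np

s, n = np.array([1.0, 0.0]), np.array([0.0, 1.0])
N = np.outer(n, s) + np.outer(s, n)

def dyadic_operator(table):
    """Build the q x q^2 matrix of a dyadic operator from its truth table,
    given as {(x, y): f(x, y)}, by substituting x -> s and 1 - x -> n
    (and likewise for y) in the Boolean polynomial."""
    vec = {1: s, 0: n}
    M = np.zeros((len(s), len(s) ** 2))
    for (x, y), f in table.items():
        M += np.outer(vec[f], np.kron(vec[x], vec[y]))
    return M

S = dyadic_operator({(1, 1): 0, (1, 0): 1, (0, 1): 1, (0, 0): 1})  # NAND
C = dyadic_operator({(1, 1): 1, (1, 0): 0, (0, 1): 0, (0, 0): 0})  # AND
assert np.allclose(S, N @ C)   # Sheffer matrix equals negation of conjunction
```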