However, in RAM or CPU registers, numbers are represented only as sequences of bits, without extra symbols.
Corresponding methods can be devised for other bases, whether positive, negative, fractional, or other elaborations on such themes.
One of the great debates was the format of negative numbers, with some of the era's top experts expressing very strong and differing opinions.
[citation needed] One camp supported two's complement, the system that is dominant today.
Another camp supported ones' complement, where a negative value is formed by inverting all of the bits in its positive equivalent.
A third group supported sign–magnitude, where a value is changed from positive to negative simply by toggling the word's highest-order bit.
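Each camp's negation rule is simple enough to state as a line of bit manipulation. The following is a minimal Python sketch for 8-bit words (the word width and function names are illustrative assumptions, not any particular machine's behavior):

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1       # 0xFF for 8-bit words
SIGN_BIT = 1 << (WIDTH - 1)   # 0x80

def negate_twos_complement(bits):
    # Subtract from 2**WIDTH; equivalently, invert all bits and add one.
    return (~bits + 1) & MASK

def negate_ones_complement(bits):
    # Invert all of the bits of the positive equivalent.
    return ~bits & MASK

def negate_sign_magnitude(bits):
    # Toggle only the word's highest-order (sign) bit.
    return bits ^ SIGN_BIT

x = 0b00000101                                # +5
print(f"{negate_twos_complement(x):08b}")     # 11111011
print(f"{negate_ones_complement(x):08b}")     # 11111010
print(f"{negate_sign_magnitude(x):08b}")      # 10000101
```

Note how the sign–magnitude result for −5 keeps only two 1 bits while both complement forms are mostly 1s; this is the memory-dump readability point made below.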
Sign–magnitude allowed for easier tracing of memory dumps (a common process in the 1960s) as small numeric values use fewer 1 bits.
These sign–magnitude systems did ones' complement math internally, so numbers would have to be converted to ones' complement values when they were transmitted from a register to the math unit and then converted back to sign–magnitude when the result was transmitted back to the register.
The electronics required more gates than the other systems – a key concern when the cost and packaging of discrete transistors were critical.
Ones' complement allowed for somewhat simpler hardware designs, as there was no need to convert values when passed to and from the math unit.
This approach is directly comparable to the common way of showing a sign (placing a "+" or "−" next to the number's magnitude). Some early binary computers (e.g., the IBM 7090) use this representation, perhaps because of its natural relation to common usage. However, using sign–magnitude representation has multiple consequences which make it more intricate to implement.[5]
Two's complement arithmetic, on the other hand, forms the negation of x by subtracting x from a single large power of two that is congruent to +0.
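For example, in an 8-bit word the power of two is 2^8 = 256, which is congruent to 0 modulo 256, so the negation of 5 is 256 − 5 = 251 (bit pattern 1111 1011); adding 5 and 251 wraps around to exactly 0.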
Offset binary (excess-K) can be seen as a slight modification and generalization of the aforementioned two's complement, which is virtually the excess-2^(N−1) representation with the most significant bit negated.
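As a minimal sketch of that relationship, assuming 8-bit words (the names here are illustrative):

```python
WIDTH = 8
SIGN_BIT = 1 << (WIDTH - 1)   # 0x80

def flip_msb(bits):
    # Negating the most significant bit converts an N-bit two's-complement
    # pattern into the excess-2**(N-1) (offset binary) pattern for the
    # same value, and vice versa.
    return bits ^ SIGN_BIT

# -3 is 0b11111101 in 8-bit two's complement; in excess-128 it is stored
# as -3 + 128 = 125 = 0b01111101, the same bits with the MSB flipped.
assert flip_msb(0b11111101) == 0b01111101
assert flip_msb(0b01111101) == 0b11111101
```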
Google's Protocol Buffers "zig-zag encoding" is a system similar to sign–magnitude, but uses the least significant bit to represent the sign and has a single representation of zero.
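A minimal sketch of the 32-bit mapping (the function names are illustrative, not Protocol Buffers' actual API):

```python
def zigzag_encode(n: int) -> int:
    # Maps 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ...: the sign moves into
    # the least significant bit, so zero has a single representation.
    # (n >> 31 is an arithmetic shift: all zeros for n >= 0, all ones for n < 0.)
    return ((n << 1) ^ (n >> 31)) & 0xFFFFFFFF

def zigzag_decode(u: int) -> int:
    # Inverse mapping: peel the sign back out of the least significant bit.
    return (u >> 1) ^ -(u & 1)

assert [zigzag_encode(n) for n in (0, -1, 1, -2, 2)] == [0, 1, 2, 3, 4]
assert all(zigzag_decode(zigzag_encode(n)) == n for n in range(-100, 101))
```

Because small magnitudes of either sign encode to small unsigned values, this pairs well with variable-length integer encodings.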
Another approach is to give each digit a sign, yielding the signed-digit representation. For instance, in 1726, John Colson advocated reducing expressions to "small numbers", numerals 1, 2, 3, 4, and 5.
In 1840, Augustin Cauchy also expressed preference for such modified decimal numbers to reduce errors in computation.