Bit

Frequently, half, full, double and quadruple words consist of a number of bytes which is a small power of two.
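
As a rough illustration, consider the common (but not universal) convention in which a half word is 16 bits, a full word 32 bits, a double word 64 bits and a quadruple word 128 bits; the short Python sketch below, with those sizes assumed, checks that the resulting byte counts are all powers of two.

    # Sketch: byte counts for word subdivisions/multiples, assuming a 32-bit full word.
    word_sizes_bits = {
        "half word": 16,
        "full word": 32,
        "double word": 64,
        "quadruple word": 128,
    }

    for name, bits in word_sizes_bits.items():
        nbytes = bits // 8
        # A positive integer is a power of two iff it has exactly one set bit.
        is_power_of_two = nbytes & (nbytes - 1) == 0
        print(f"{name}: {nbytes} bytes (power of two: {is_power_of_two})")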

As a measure of the length of a digital string that is encoded as symbols over a 0-1 (binary) alphabet, the bit has been called a binit,[7] but this usage is now rare.

The field of algorithmic information theory is devoted to the study of the "irreducible information content" of a string (i.e. its shortest-possible representation length, in bits), under the assumption that the receiver has minimal a priori knowledge of the method used to compress the string.
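
As a loose illustration of "irreducible information content" (not the theoretical minimum, just what a general-purpose compressor achieves), the Python sketch below compares the compressed bit length of a highly repetitive string with that of random data; the repetitive string shrinks dramatically while the random data barely compresses at all.

    import os
    import zlib

    # A highly repetitive string carries far less information than its raw
    # length suggests; random bytes are essentially incompressible.
    repetitive = b"ab" * 5000          # 10,000 bytes of a trivial pattern
    random_data = os.urandom(10000)    # 10,000 bytes of random data

    for label, data in (("repetitive", repetitive), ("random", random_data)):
        compressed = zlib.compress(data, 9)
        print(f"{label}: {len(data) * 8} bits raw -> {len(compressed) * 8} bits compressed")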

Claude E. Shannon attributed the origin of the word "bit" to John W. Tukey, who had written a Bell Labs memo on 9 January 1947 in which he contracted "binary information digit" to simply "bit".[10][11][12]

A bit can be stored by a digital device or other physical system that exists in either of two possible distinct states. These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, the orientation of reversible double-stranded DNA, etc.

Perhaps the earliest example of a binary storage device was the punched card invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semyon Korsakov, Charles Babbage, Herman Hollerith, and early computer manufacturers like IBM.

When relays were replaced by vacuum tubes, starting in the 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs by photolithographic techniques.

In certain types of programmable logic arrays and read-only memory, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit.

In optical discs, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface.

The International Electrotechnical Commission issued standard IEC 60027, which specifies that the symbol for binary digit should be 'bit', and this symbol should be used in all multiples, such as 'kbit' for kilobit.

The most common multiple-bit unit is the byte, coined by Werner Buchholz in June 1956, which historically was used to represent the group of bits needed to encode a single character of text in a computer (until UTF-8 multibyte encoding took over).[2][16][17][18][19] For this reason, the byte was used as the basic addressable element in many computer architectures.

Historically, the number of bits in a byte depended on the underlying hardware design; because of this ambiguity, the unit octet was defined to explicitly denote a sequence of eight bits.
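
To make the byte/character relationship concrete: with a modern UTF-8 encoding (shown in the Python sketch below), an ASCII character still fits in a single 8-bit byte, i.e. one octet, while other characters need two, three or four octets, which is why the byte no longer maps one-to-one onto a character of text.

    # Number of 8-bit bytes (octets) that UTF-8 uses per character.
    for ch in ("A", "é", "€", "𐍈"):
        encoded = ch.encode("utf-8")
        print(f"{ch!r}: {len(encoded)} byte(s) = {len(encoded) * 8} bits")
    # 'A' fits in one octet; the others need 2, 3 and 4 octets respectively.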

In the early 21st century, retail personal or server computers have a word size of 32 or 64 bits.
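
One rough way to observe this on a particular machine is to check the size of a native pointer, which on typical hardware tracks the word size; the Python sketch below uses that heuristic (the notion of "word size" is not defined identically on every architecture).

    import struct

    # The native pointer size ("P" format) is a common proxy for the word size.
    pointer_bytes = struct.calcsize("P")
    print(f"native pointer: {pointer_bytes} bytes ({pointer_bytes * 8}-bit word size)")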

The prefixes kilo (10³) through yotta (10²⁴) increment by multiples of one thousand, and the corresponding units are the kilobit (kbit) through the yottabit (Ybit).
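
The Python sketch below simply spells out that arithmetic, using the SI decimal interpretation of the prefixes (each step is a factor of 1000, not 1024).

    # Decimal (SI) prefixes applied to the bit, from kilobit to yottabit.
    prefixes = [
        ("kbit", 3), ("Mbit", 6), ("Gbit", 9), ("Tbit", 12),
        ("Pbit", 15), ("Ebit", 18), ("Zbit", 21), ("Ybit", 24),
    ]

    for symbol, exponent in prefixes:
        print(f"1 {symbol} = 10^{exponent} bits = {10 ** exponent:,} bits")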