Octal numerals can be easily converted from binary representations (similar to a quaternary numeral system) by grouping consecutive binary digits into groups of three (starting from the right, for integers).
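This grouping can be sketched in a few lines of Python (the function name is illustrative, not a standard library routine):

```python
def binary_to_octal(bits: str) -> str:
    """Convert a binary integer string to octal by grouping bits in threes."""
    # Pad with leading zeros so the length is a multiple of 3;
    # for integers the grouping starts from the right.
    pad = (-len(bits)) % 3
    bits = "0" * pad + bits
    groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    # Each 3-bit group maps directly to one octal digit (000 -> 0 ... 111 -> 7).
    return "".join(str(int(g, 2)) for g in groups)

print(binary_to_octal("1001010"))  # 1001010 in binary is 112 in octal
```

Each octal digit corresponds to exactly one group, which is why no arithmetic beyond the 3-bit lookup is needed.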
[1] Octal became widely used in computing when systems such as the UNIVAC 1050, PDP-8, ICL 1900 and IBM mainframes employed 6-bit, 12-bit, 24-bit or 36-bit words.
So two, four, eight or twelve digits could concisely display an entire machine word.
On modern platforms that use 16-, 32- or 64-bit words divided into 8-bit bytes, however, this alignment breaks down: in the octal representation of a 16-bit word, for example, there is no way to easily read off the most significant byte, because it is smeared over four octal digits.
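A quick illustration of the mismatch, using an arbitrary 16-bit example value:

```python
# A 16-bit word: high byte 0xAB, low byte 0xCD.
word = 0xABCD
print(hex(word))  # 0xabcd -- each byte is exactly two hex digits
print(oct(word))  # 0o125715 -- the byte 0xAB is not visible as a digit group
```

In hexadecimal the byte boundaries line up with digit boundaries (two hex digits per byte); in octal they do not, since 8 bits is not a multiple of 3.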
Some platforms with a power-of-two word size still have instruction subwords that are more easily understood if displayed in octal; this includes the PDP-11 and Motorola 68000 family.
[11] Octal is sometimes used in computing instead of hexadecimal, perhaps most often in modern times in conjunction with file permissions under Unix systems (see chmod).
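The permission use case can be demonstrated on a Unix system with Python's `os.chmod`, which accepts the same octal modes as the `chmod` command:

```python
import os
import stat
import tempfile

# Each octal digit encodes three permission bits (read/write/execute)
# for owner, group and others respectively.
# 0o644 = rw-r--r-- : owner read/write, group and others read-only.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o644)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o644 on typical Unix systems
os.remove(path)
```

The octal digits map one-to-one onto the three permission triples, which is why octal (rather than hexadecimal) remains the conventional notation here.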
In programming languages, octal literals are typically identified with a variety of prefixes, including the digit 0, the letters o or q, the digit–letter combination 0o, or the symbol &[12] or $.
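For instance, Python uses the digit–letter prefix 0o for octal literals (C, by contrast, uses a bare leading 0):

```python
n = 0o17            # octal literal: 1*8 + 7 = 15
print(n)            # 15
print(int("17", 8))  # parsing an octal string gives the same value
print(oct(15))      # '0o17' -- formatting back to an octal literal
```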
Newer languages have been abandoning the bare prefix 0, because a decimal number written with leading zeroes would otherwise be silently parsed as octal.
Octal numbers are used in some programming languages (C, Perl, PostScript...) for textual or graphical representations of byte strings when some byte values (unrepresented in a code page, non-graphical, having special meaning in the current context, or otherwise undesired) have to be escaped as \nnn.
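A minimal sketch of such an escaper in Python (the function name is hypothetical; real escapers handle more cases, such as quotes):

```python
def octal_escape(data: bytes) -> str:
    """Render a byte string with C-style \\nnn octal escapes
    for bytes outside printable ASCII."""
    out = []
    for b in data:
        if 32 <= b < 127 and b != 0x5C:  # printable ASCII, excluding backslash
            out.append(chr(b))
        else:
            out.append("\\%03o" % b)     # three octal digits cover 0..255
    return "".join(out)

print(octal_escape(b"A\x07B\xff"))  # A\007B\377
```

Three octal digits are exactly enough for one byte (377 octal = 255), which is why this notation became conventional for byte escapes.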
Transponders in aircraft transmit a "squawk" code, expressed as a four-octal-digit number, when interrogated by ground radar.
To convert a decimal integer to octal, divide it by the largest power of 8 it contains, then divide each remainder by successively smaller powers of 8; the octal representation is formed by the quotients, written in the order generated by the algorithm.
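This quotient-by-powers procedure can be sketched as follows (the function name is illustrative):

```python
def decimal_to_octal(n: int) -> str:
    """Divide by successively smaller powers of 8; the quotients,
    in the order generated, are the octal digits."""
    if n == 0:
        return "0"
    power = 1
    while power * 8 <= n:        # find the largest power of 8 not exceeding n
        power *= 8
    digits = []
    while power >= 1:
        q, n = divmod(n, power)  # quotient is the next digit; keep the remainder
        digits.append(str(q))
        power //= 8
    return "".join(digits)

print(decimal_to_octal(125))  # 175, since 125 = 1*64 + 7*8 + 5
```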
To convert a decimal fraction to octal, multiply it by 8; the integer part of the product is the next octal digit. Repeat the process with the fractional part of the result, until it is null or within acceptable error bounds.
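A sketch of the multiply-by-8 procedure (the digit cap stands in for "acceptable error bounds"; note that binary floating point cannot represent most decimal fractions exactly, so the chosen examples are powers of two):

```python
def fraction_to_octal(x: float, max_digits: int = 8) -> str:
    """Convert a fraction 0 <= x < 1 to octal by repeated multiplication
    by 8; the integer part of each product is the next digit."""
    digits = []
    for _ in range(max_digits):
        if x == 0:               # fraction exhausted: exact representation
            break
        x *= 8
        d = int(x)               # integer part -> next octal digit
        digits.append(str(d))
        x -= d                   # continue with the fractional part
    return "0." + "".join(digits) if digits else "0.0"

print(fraction_to_octal(0.5))       # 0.4, since 1/2 = 4/8
print(fraction_to_octal(0.140625))  # 0.11, since 0.140625 = 1/8 + 1/64
```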
To convert a binary number with a fractional part to octal, the binary digits are grouped by threes starting from the binary point, proceeding both to the left and to the right.
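Extending the grouping to both sides of the binary point can be sketched like this (the function name is illustrative):

```python
def binary_to_octal_frac(bits: str) -> str:
    """Group binary digits in threes on each side of the binary point:
    leftward for the integer part, rightward for the fraction."""
    def to_digit(group: str) -> str:
        return str(int(group, 2))

    int_part, _, frac_part = bits.partition(".")
    # Pad the integer part on the left, the fraction on the right,
    # so each side splits into whole 3-bit groups.
    int_part = "0" * ((-len(int_part)) % 3) + int_part
    frac_part = frac_part + "0" * ((-len(frac_part)) % 3)
    out = "".join(to_digit(int_part[i:i + 3])
                  for i in range(0, len(int_part), 3))
    if frac_part:
        out += "." + "".join(to_digit(frac_part[i:i + 3])
                             for i in range(0, len(frac_part), 3))
    return out

print(binary_to_octal_frac("1100.01"))  # 14.2, i.e. 12.25 in decimal
```

Note that the two sides are padded in opposite directions: leading zeros on the integer part and trailing zeros on the fraction, both of which leave the value unchanged.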