Double dabble

In computer science, the double dabble algorithm is used to convert binary numbers into binary-coded decimal (BCD) notation.[1][2]

It is also known as the shift-and-add-3 algorithm, and can be implemented using a small number of gates in computer hardware, but at the expense of high latency.[3]

The algorithm operates as follows: Suppose the original number to be converted is stored in a register that is n bits wide.

Reserve a scratch space wide enough to hold both the original number and its BCD representation; n + 4×ceil(n/3) bits will be enough.
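For example, with an 8-bit input the scratch space needs 8 + 4×ceil(8/3) = 20 bits: the low 8 bits hold the binary value, and the upper 12 bits hold three BCD digits, enough for every value up to 255.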

On each of the n iterations, 3 is added to every BCD digit in the scratch space whose value is 5 or greater, and the entire scratch space is then shifted left one bit, pulling the next bit of the original number into the BCD area. Essentially, the algorithm operates by doubling the BCD value on the left each iteration and adding either one or zero according to the original bit pattern; the add-3 correction is what makes a digit of 5 or more carry into the next decimal place when it is doubled. A minimal sketch of this procedure is given below.
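To make the procedure concrete, the following is a minimal C sketch for an 8-bit input. The function name double_dabble_8, the choice of a 32-bit scratch word, and the fixed three-digit output are assumptions made for this illustration, not part of any reference implementation.

  #include <stdint.h>
  #include <stdio.h>

  /* Double dabble for an 8-bit input (illustrative sketch).
   * Scratch layout: bits 19..8 hold three BCD digits, bits 7..0 hold the
   * remaining binary input; 8 + 4*ceil(8/3) = 20 bits, so a 32-bit word is ample. */
  static uint32_t double_dabble_8(uint8_t value)
  {
      uint32_t scratch = value;                /* BCD area starts out as zero */

      for (int i = 0; i < 8; i++) {
          /* Add 3 to every BCD digit that is 5 or more, so that the
           * upcoming doubling carries correctly into the next decimal digit. */
          for (int digit = 0; digit < 3; digit++) {
              unsigned shift = 8 + 4 * digit;
              if (((scratch >> shift) & 0xFu) >= 5)
                  scratch += 3u << shift;
          }
          scratch <<= 1;                       /* pull the next input bit into the BCD area */
      }
      return (scratch >> 8) & 0xFFFu;          /* three packed BCD digits */
  }

  int main(void)
  {
      printf("%03x\n", double_dabble_8(243));  /* prints 243: one decimal digit per nibble */
      return 0;
  }

In a hardware realization, each add-3 correction typically becomes a small combinational module between shift stages, which is why the gate count stays low while the latency grows with the number of input bits.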

Parametric Verilog implementation of the double dabble binary to BCD converter, 18-bit example.[4]