Petascale computing

Alternative precision measures exist using the LINPACK benchmarks, but these are not part of the standard metric/definition.[3][4]

The petaFLOPS barrier was first broken on 16 September 2007 by the distributed computing project Folding@home.

Exascale computing was reached in June 2022 with the development of Frontier, which surpassed Fugaku with an Rmax of 1.102 exaFLOPS.[8]

Modern artificial intelligence (AI) systems require large amounts of computational power to train model parameters.

OpenAI employed 25,000 Nvidia A100 GPUs to train GPT-4, using a total of 133 septillion floating-point operations.
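As a rough illustration of the scale involved, the figures above can be turned into a throughput estimate. The sketch below assumes a peak mixed-precision throughput of about 312 teraFLOPS per A100 and a sustained utilization of 35%; both of these numbers are illustrative assumptions, not figures from the cited sources.

```python
# Back-of-envelope estimate based on the figures quoted above.
# Assumptions (not from the article): ~312 teraFLOPS peak mixed-precision
# throughput per A100, and ~35% of peak sustained during training.

TOTAL_FLOP = 133e24          # 133 septillion floating-point operations
NUM_GPUS = 25_000            # Nvidia A100 GPUs
PEAK_FLOPS_PER_GPU = 312e12  # assumed peak throughput per A100 (FLOP/s)
UTILIZATION = 0.35           # assumed fraction of peak actually sustained

# Aggregate sustained throughput of the cluster, in FLOP/s.
cluster_flops = NUM_GPUS * PEAK_FLOPS_PER_GPU * UTILIZATION

# Time needed to perform the total operation count at that rate.
seconds = TOTAL_FLOP / cluster_flops
days = seconds / 86_400

print(f"Sustained cluster throughput: {cluster_flops / 1e18:.2f} exaFLOPS")
print(f"Estimated training time under these assumptions: {days:.0f} days")
```

The resulting duration is sensitive to the assumed per-GPU throughput and utilization; changing either assumption scales the estimate proportionally.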