[5] The lead time from order to delivery of H100-based servers was between 36 and 52 weeks due to shortages and high demand.
[6] Nvidia's AI dominance with Hopper products propelled its market capitalization above $2 trillion, behind only Microsoft and Apple.
These areas have influenced, or are directly implemented in, the design of transformer-based generative AI models and their training algorithms.
[8] In Nvidia's October 2023 Investor Presentation, its datacenter roadmap was updated to include reference to its B100 and B40 accelerators and the Blackwell architecture.
At the GPU Technology Conference (GTC) on March 18, 2024, Nvidia officially announced the Blackwell architecture, with focus placed on its B100 and B200 datacenter accelerators and associated products, such as the eight-GPU HGX B200 board and the 72-GPU NVL72 rack-scale system.
[12][13] Nvidia touted endorsements of Blackwell from the CEOs of Google, Meta, Microsoft, OpenAI and Oracle.
[19] Because Blackwell cannot reap the benefits of a major process node advancement, it must achieve its power efficiency and performance gains through underlying architectural changes.
[21] The reticle limit in semiconductor fabrication is the maximum die area that a lithography machine can pattern onto a silicon wafer in a single exposure.
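The scale of this constraint can be illustrated with a back-of-the-envelope calculation. The figures below are assumptions for illustration, not from the article: current ASML scanners are commonly quoted with a 26 mm × 33 mm maximum exposure field, which caps any monolithic die at roughly 858 mm².

```python
# Sketch: why the reticle limit caps monolithic die size.
# Assumed exposure field of a modern ASML scanner: 26 mm x 33 mm
# (illustrative figures, not taken from the article text).
field_w_mm = 26
field_h_mm = 33

reticle_limit_mm2 = field_w_mm * field_h_mm
print(f"Single-exposure reticle limit: {reticle_limit_mm2} mm^2")

# A hypothetical dual-die package stitches two near-reticle-limit
# dies together with a high-bandwidth link, exceeding what one
# exposure could ever pattern as a single die.
dual_die_mm2 = 2 * reticle_limit_mm2
print(f"Two linked near-limit dies: {dual_die_mm2} mm^2")
```

Under these assumptions, no monolithic die can exceed about 858 mm², which is why designs that need more silicon must connect multiple dies rather than grow a single one.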
Veteran semiconductor engineer Jim Keller, who had worked on AMD's K7, K12 and Zen microarchitectures, criticized this figure and claimed that the same outcome could be achieved for $1 billion by using Ultra Ethernet rather than the proprietary NVLink interconnect.
[26] The Blackwell architecture introduces fifth-generation Tensor Cores for AI compute and floating-point calculations.