[1] The first GeForce products were discrete GPUs designed for add-on graphics boards, intended for the high-margin PC gaming market, and later diversification of the product line covered all tiers of the PC graphics market, ranging from cost-sensitive[2] GPUs integrated on motherboards to mainstream add-in retail boards.
GeForce-derived graphics cores have also been introduced into Nvidia's Tegra line of embedded application processors, designed for electronic handhelds and mobile handsets.
GeForce GPUs are dominant in the general-purpose computing on graphics processing units (GPGPU) market, thanks to Nvidia's proprietary Compute Unified Device Architecture (CUDA).
[3] GPGPU is expected to expand GPU functionality beyond the traditional rasterization of 3D graphics, to turn it into a high-performance computing device able to execute arbitrary programming code in the same way a CPU does, but with different strengths (highly parallel execution of straightforward calculations) and weaknesses (worse performance for complex branching code).
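As an illustration of this model, the following minimal CUDA sketch (an illustrative example, not drawn from Nvidia documentation; all identifiers are hypothetical) adds two large arrays by assigning one simple, independent calculation to each GPU thread, the kind of workload at which such GPUs excel:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each of the many parallel threads computes a single element.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];                         // one simple, branch-free calculation
}

int main() {
    const int n = 1 << 20;                          // about one million elements
    const size_t bytes = n * sizeof(float);

    // Host (CPU) buffers.
    float *hA = (float *)malloc(bytes), *hB = (float *)malloc(bytes), *hC = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Device (GPU) buffers, plus copies of the inputs into GPU memory.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements in parallel.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(dA, dB, dC, n);

    // Copy the result back and check one value (expected: 3.0).
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hC[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}

The kernel deliberately avoids complex branching, reflecting the strengths and weaknesses described above: thousands of such lightweight threads run concurrently on the GPU, whereas heavily divergent code remains better suited to a CPU.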
Launched in February 2001, the GeForce3 (NV20) introduced programmable vertex and pixel shaders to the GeForce family and to consumer-level graphics accelerators.
It had good overall performance and shader support, making it popular with enthusiasts, although it never hit the midrange price point.
The biggest advancements of the follow-up GeForce4 series included enhancements to anti-aliasing capabilities, an improved memory controller, a second vertex shader, and a manufacturing process size reduction to increase clock speeds.
However, initial models of the subsequent GeForce FX series, such as the GeForce FX 5800 Ultra, suffered from weak floating-point shader performance and excessive heat, which required infamously noisy two-slot cooling solutions.
The seventh generation GeForce (G70/NV47) was launched in June 2005 and was the last Nvidia video card series that could support the AGP bus.
The design was a refined version of GeForce 6, with the major improvements being a widened pipeline and an increase in clock speed.
A 128-bit, eight render output unit (ROP) variant of the 7800 GTX, called the RSX Reality Synthesizer, is used as the main GPU in the Sony PlayStation 3.
[2] Based on the GT200 graphics processor consisting of 1.4 billion transistors, codenamed Tesla, the 200 series was launched on June 16, 2008.
The Kepler-based GeForce 600 series introduced significant improvements in performance, heat, and power efficiency compared to the Fermi architecture and closely matched AMD's flagship Radeon HD 7970.
At the end of May 2013, Nvidia announced the 700 series, which was still based on the Kepler architecture; however, it featured a GK110-based card at the top of the lineup.
That card, the GTX 780, featured the same advanced reference cooler design as the GTX Titan, but did not have the Titan's unlocked double-precision cores and was equipped with 3 GB of memory.
The GeForce 16 series is based on the same Turing architecture used in the GeForce 20 series, but omits the Tensor (AI) and RT (ray tracing) cores to provide more affordable graphics cards for gamers while still achieving higher performance than comparable cards of previous GeForce generations.
The RTX 3090 Ti is the highest-end Nvidia GPU on the Ampere microarchitecture; it features a fully unlocked GA102 die built on Samsung's 8 nm node due to supply shortages at TSMC.
This was primarily because, in 2023, the United States Department of Commerce began enacting restrictions on the export of the Nvidia RTX 4090 to certain countries.
These GPUs are generally optimized for lower power consumption and less heat output in order to be used in notebook PCs and small desktops.
[43] After the nForce range was discontinued, Nvidia released their Ion line in 2009, which consisted of an Intel Atom CPU partnered with a low-end GeForce 9 series GPU, fixed on the motherboard.
[46][47] This may be different for the Nvidia Quadro brand, which is based on identical hardware but features OpenGL-certified graphics device drivers.
[49] Basic support for the DRM mode-setting interface in the form of a new kernel module named nvidia-modeset.ko has been available since version 358.09 beta.
Traditional display interactions (X11 modesets, OpenGL SwapBuffers, VDPAU presentation, SLI, stereo, framelock, G-Sync, etc.) initiate from the various user-mode driver components and flow to nvidia-modeset.ko.
[51] In May 2022, Nvidia announced that it would release a partially open-source driver for the (GSP-enabled) Turing architecture and newer, to make it easier to package as part of Linux distributions.
For example, as of January 2014 the nouveau driver lacks support for GPU and memory clock frequency adjustments, and for the associated dynamic power management.
[63] However, as of August 2014 and version 3.16 of the Linux kernel mainline, contributions by Nvidia allowed partial support for GPU and memory clock frequency adjustments to be implemented.
[citation needed] The license has common terms against reverse engineering and copying, and it disclaims warranties and liability.
[67] When installing new drivers, GeForce Experience may force the system to restart after a 60-second countdown, without giving the user any choice.
New features include an overhauled user interface, a new in-game overlay, support for ShadowPlay with 120 fps, as well as RTX HDR[70][71] and RTX Dynamic Vibrance,[71] which are AI-based in-game filters that enable HDR and increase color saturation in any DirectX 9 (and newer) or Vulkan game, respectively.
The Nvidia App also features Auto Tuning, which adjusts the GPU's clock rate based on regular hardware scans to ensure optimal performance.