Serving as the introduction of the Fermi architecture, the GeForce 400 series is a series of graphics processing units developed by Nvidia. Its release was originally slated for November 2009; after delays, it launched on March 26, 2010, with retail availability following in April 2010.
Architecture
Nvidia described Fermi as the next major step in its line of GPUs after the Tesla architecture used since the G80. The GF100, the first Fermi-architecture product, is a large chip: 512 stream processors in sixteen groups of 32, and 3.0 billion transistors, manufactured by TSMC on a 40 nm process. It is Nvidia's first chip to support OpenGL 4.0 and Direct3D 11.

No products with a fully enabled GF100 GPU were ever sold. The GTX 480 had one streaming multiprocessor disabled; the GTX 470 had two streaming multiprocessors and one memory controller disabled; and the GTX 465 had five streaming multiprocessors and two memory controllers disabled. Consumer GeForce cards came with 256 MB attached to each enabled GDDR5 memory controller, for a total of 1.5 GB, 1.25 GB, or 1.0 GB; the Tesla C2050 had 512 MB on each of six controllers, and the Tesla C2070 had 1024 MB per controller. Both Tesla cards had fourteen active groups of stream processors.

Chips sold under the high-performance Tesla brand feature memory with optional ECC and the ability to perform one double-precision floating-point operation per cycle per core; the consumer GeForce cards are restricted by the driver to one double-precision operation every four cycles. With these features, combined with support for Visual Studio and C++, Nvidia targeted professional and commercial markets as well as high-performance computing. Fermi is named after the Italian physicist Enrico Fermi.
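The memory totals of the consumer cards follow directly from how many GDDR5 controllers remain enabled. A minimal sketch of that arithmetic (card names and per-controller capacity from the text above):

```python
# Sketch: total framebuffer size is the number of enabled GDDR5 memory
# controllers times 256 MB per controller (consumer GF100 cards).
MB_PER_CONTROLLER = 256

cards = {             # enabled controllers (a full GF100 has six)
    "GTX 480": 6,     # all six controllers active
    "GTX 470": 5,     # one controller disabled
    "GTX 465": 4,     # two controllers disabled
}

for name, n in cards.items():
    total_mb = n * MB_PER_CONTROLLER
    print(f"{name}: {n} controllers -> {total_mb} MB ({total_mb / 1024:.2f} GB)")
```

This reproduces the 1.5 GB, 1.25 GB, and 1.0 GB figures given above.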
Current limitations and trade-offs
The quantity of on-chip SRAM per ALU actually decreased relative to the previous GT200 generation, even though the L2 cache grew from 256 kB shared by 240 ALUs to 768 kB shared by 512 ALUs: Fermi has only 32,768 registers per group of 32 ALUs, only 48 kB of shared memory per 32 ALUs, and only 16 kB of cache per 32 ALUs. Parameters such as the number of registers can be found in the CUDA Compute Capability Comparison Table in the reference manual.
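The per-ALU comparison can be made concrete with a quick calculation. The Fermi figures come from the paragraph above; the GT200 per-SM figures used here (16,384 registers, 16 kB of shared memory, 8 ALUs per SM, as listed for compute capability 1.3 in the CUDA reference manual) are an assumption not stated in this section:

```python
# Sketch: on-chip SRAM per ALU, GT200 vs. Fermi (GF100).
REG_BYTES = 4  # each register is 32 bits wide

def kb_per_alu(regs, shared_kb, l1_kb, alus, l2_kb, total_alus):
    """Registers + shared memory + L1 per SM, divided among its ALUs,
    plus each ALU's share of the chip-wide L2."""
    per_sm_kb = regs * REG_BYTES / 1024 + shared_kb + l1_kb
    return per_sm_kb / alus + l2_kb / total_alus

gt200 = kb_per_alu(16384, 16, 0, 8, 256, 240)    # ~11.1 kB per ALU
gf100 = kb_per_alu(32768, 48, 16, 32, 768, 512)  # ~7.5 kB per ALU
print(f"GT200: {gt200:.2f} kB/ALU, GF100: {gf100:.2f} kB/ALU")
```

Under these assumptions, per-ALU SRAM drops from roughly 11 kB to 7.5 kB, which is the decrease the text describes.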
History
On September 30, 2009, Nvidia released a white paper describing the architecture. The chip features 16 'Streaming Multiprocessors', each with 32 'CUDA Cores' capable of one single-precision operation per cycle or one double-precision operation every other cycle; a 40-bit virtual address space that allows the host's memory to be mapped into the chip's address space, so that there is only one kind of pointer, which makes C++ support significantly easier; and a 384-bit wide GDDR5 memory interface. As with the G80 and GT200, threads are scheduled in 'warps', sets of 32 threads each running on a single shader core. While the GT200 had 16 KB of 'shared memory' associated with each shader cluster and required data to be read through the texturing units if a cache was needed, the GF100 has 64 KB of memory associated with each cluster, usable either as a 48 KB cache plus 16 KB of shared memory or as a 16 KB cache plus 48 KB of shared memory, along with a 768 KB L2 cache shared by all 16 clusters. The white paper describes the chip less as a graphics processor and more as a general-purpose processor for workloads encompassing tens of thousands of threads, reminiscent of the Tera MTA architecture, though without that machine's support for very efficient random memory access.
2 Each Streaming Multiprocessor (SM) in a GPU of the GF100 architecture contains 32 SPs and 4 SFUs; each SM in a GPU of the GF104/106/108 architecture contains 48 SPs and 8 SFUs. Each SP can perform one single-precision fused multiply–add (FMA) operation per cycle, and one FMA counts as two floating-point operations. Each SFU can perform four SF operations per cycle. The theoretical single-precision peak performance, with shader count n and shader frequency f, can therefore be estimated as FLOPSsp ≈ f × n × 2. Including the SFUs, the total processing power with SM count m is FLOPSsp ≈ f × m × 80 for GF100 and FLOPSsp ≈ f × m × 128 for GF104/106/108, or equivalently FLOPSsp ≈ f × n × 2.5 for GF100 and FLOPSsp ≈ f × n × 8/3 for GF104/106/108.
SP - Shader Processor, SFU - Special Function Unit, SM - Streaming Multiprocessor.
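The formulas above can be checked numerically. A small sketch, using the GTX 480 as an example; its 15 SMs and 1401 MHz shader clock are assumed here from common specifications, not stated in this section:

```python
# Sketch: theoretical single-precision peak throughput.  Per cycle, each
# SP performs one FMA (2 FLOPs) and each SFU performs four SF operations.
def peak_gflops(shader_mhz, sms, sps_per_sm, sfus_per_sm):
    per_sm = sps_per_sm * 2 + sfus_per_sm * 4  # FLOPs per SM per cycle
    return shader_mhz / 1000 * sms * per_sm    # MHz -> GFLOPS

# GTX 480 (GF100, assumed specs): 15 SMs of 32 SPs + 4 SFUs at 1401 MHz.
fma_only = 1401 / 1000 * (15 * 32) * 2   # f * n * 2
with_sfu = peak_gflops(1401, 15, 32, 4)  # f * m * 80
print(f"GTX 480: {fma_only:.2f} GFLOPS (FMA only), "
      f"{with_sfu:.2f} GFLOPS (including SFUs)")
```

The FMA-only figure matches the commonly quoted ~1345 GFLOPS for the GTX 480; including the SFUs multiplies it by 2.5/2, per the f × n × 2.5 form of the formula.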
3 Each SM in the GF100 contains 4 texture filtering units for every texture address unit; the complete GF100 die contains 64 texture address units and 256 texture filtering units. Each SM in the GF104/106/108 architecture contains 8 texture filtering units for every texture address unit; the complete GF104 die contains 64 texture address units and 512 texture filtering units, the complete GF106 die contains 32 texture address units and 256 texture filtering units, and the complete GF108 die contains 16 texture address units and 128 texture filtering units.
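As a consistency check, each die's filtering-unit count is simply its address-unit count times the filtering-to-address ratio given above (4 for GF100, 8 for GF104/106/108):

```python
# Sketch: derive per-die texture filtering units from the address-unit
# count and the filtering-to-address ratio stated in the footnote.
dies = {  # name: (texture address units, filtering units per address unit)
    "GF100": (64, 4),
    "GF104": (64, 8),
    "GF106": (32, 8),
    "GF108": (16, 8),
}
for name, (tau, ratio) in dies.items():
    print(f"{name}: {tau} address units -> {tau * ratio} filtering units")
```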
All products are produced on a 40 nm fabrication process and support DirectX 12 (feature level 11_0), OpenGL 4.6 and OpenCL 1.1. The only exception is the GeForce 405, which is based on the GT218 core and supports only DirectX 10.1 and OpenGL 3.3, with no OpenCL support.

On November 8, 2010, Nvidia released the GF110 chip along with the GTX 580. It is a redesigned GF100 chip that uses significantly less power, which allowed Nvidia to enable all 16 SMs, something that was previously impossible on the GF100.

Various features of the GF100 architecture were only available on the more expensive Quadro and Tesla series of cards. On the GeForce consumer products, double-precision performance is a quarter of that of the "full" Fermi architecture, and error-checking and correcting (ECC) memory does not operate. The GF100 cards provide Compute Capability 2.0, while the GF104/106/108 cards provide Compute Capability 2.1.
Chipset table
Discontinued support
Nvidia announced that after the Release 390 drivers it would no longer release 32-bit drivers for 32-bit operating systems. In April 2018, Nvidia announced that Fermi would move to legacy driver support status and be maintained until January 2019.