The immense processing power of graphics cards

Dec 12, 2006 15:35 GMT  ·  By

We've seen that the motherboard, the CPU and the system RAM are crucial for your PC's performance, but there is one hardware component that embodies the strikingly visual character of the Information Age. The graphics card has seen an interesting evolution, and nowadays it even outpaces the CPU when it comes to transistor counts, new releases and new technologies.

Meet the specimen

In case you don't know exactly what a graphics card looks like, you must've been somewhere in outer space for the last 20 years. Anyway, here are the major components of a graphics card:

1. Graphics Processing Unit (GPU) and the Heat Sinks and Fans
2. The Memory
3. RAMDAC
4. Motherboard Connectors
5. Display Connector
6. The Printed Circuit Board (PCB)

[Pictured: the GPU, the graphics memory, the RAMDAC, the PCIe motherboard connector, the DVI and VGA output connectors, and all of these components arranged on a PCB]
The graphics card is that piece of hardware whose function is to generate and output images to a display. The images you see on your monitor are made of tiny dots called pixels. Present-day monitors are able to display more than a million pixels. The computer has to decide what to do with every pixel in order to create an image and to do this, it needs a translator - something to take binary data from the CPU and turn it into a visual representation on the monitor. This is where the graphics card intervenes. The CPU, working in conjunction with software applications, sends information about the image to the graphics card. The graphics card decides how to use the pixels on the screen to create the image. It then sends that information to the monitor through a cable. Let us now see how each component of the graphics card contributes to this translation process.
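To get a feel for the scale of the job, here is a minimal sketch of pixel counts at a few display resolutions common at the time (the resolutions themselves are just examples):

```python
# Pixel counts the graphics card must manage at a few example resolutions.
resolutions = [(800, 600), (1024, 768), (1280, 1024), (1600, 1200)]

for width, height in resolutions:
    pixels = width * height
    print(f"{width}x{height}: {pixels:,} pixels")
```

Even a mid-range 1280x1024 screen already exceeds the "million pixels" mark, and every one of those pixels must be recomputed whenever the image changes.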

The legacy's allure

The graphics card (also known as a dedicated graphics solution) is just like a miniature motherboard: it features a printed circuit board that houses a processor and RAM. It also has a basic input/output system (BIOS) chip, which stores the card's settings and performs diagnostics on the memory at startup. In the same way, the graphics processing unit (GPU) is similar to a computer's CPU. A GPU, however, is designed specifically for performing the complex mathematical and geometric calculations necessary for graphics rendering. Some of the fastest GPUs have more transistors than the average CPU. A GPU produces a lot of heat, so, in order to prevent overheating, graphics card makers usually add heat sinks and fans.

In addition to its processing power, a GPU uses special instructions to help it analyze and transform data. To improve image quality, GPU designers implement technologies such as full scene anti-aliasing (FSAA), which smooths the edges of 3-D objects, and anisotropic filtering (AF), which makes textures look crisper.

There are also integrated graphics chipsets, which are placed directly on the motherboard PCB. Integrated graphics solutions, or shared graphics solutions, are graphics processors that use a portion of a computer's system RAM rather than dedicated graphics memory. Such solutions are typically far less expensive to implement than dedicated graphics solutions, but the trade-off is that they are far less capable and are generally considered unfit for playing modern games or running graphically intensive programs such as Adobe Flash. Modern desktop motherboards may include an integrated graphics solution and still have PCIe expansion slots available for adding a dedicated graphics card later. Integrated graphics chipsets are more of a low-end solution, and as soon as a mainstream or high-end dedicated graphics card is connected to the motherboard, the new card overrides the integrated GPU and takes over.

The newest trend is to make GPUs compatible with stream processing. This concept turns the massive floating-point computational power of a modern graphics accelerator into general-purpose computing power, as opposed to dedicating it solely to graphical operations. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. Companies such as ATi and nVidia have only recently begun to pursue this new market with an array of applications. ATi has teamed with Stanford University to create a GPU-based client for its Folding@Home distributed computing project that in certain circumstances yields results forty times faster than the conventional CPUs traditionally used in such applications.

Recent years have introduced a compromise solution, one that competes with integrated graphics in the low-end PC and notebook markets. The most common implementations are ATI's HyperMemory and nVidia's TurboCache architectures. These are also known as hybrid cards: somewhat more expensive than integrated graphics, but much less expensive than dedicated graphics cards. They too share memory with the system, but carry a smaller amount of on-board memory than discrete graphics cards do, in order to make up for the relatively high latency of system RAM.

Internal complications

Speaking of RAM: when a dedicated GPU creates images, it needs a place to store information and completed pictures. The card's RAM serves this purpose, storing data about each pixel, its color and its location on the screen. Part of the RAM can also act as a frame buffer, meaning that it holds completed images until it is time to display them. Typically, video RAM operates at very high speeds and is dual-ported, meaning that the system can read from it and write to it at the same time.
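A quick back-of-the-envelope sketch (the resolution and color depth here are example values) shows why the frame buffer alone consumes a noticeable slice of video RAM:

```python
# Estimate frame-buffer memory for a given resolution and color depth.
def framebuffer_bytes(width, height, bits_per_pixel=32, buffers=2):
    """Bytes needed; buffers=2 models double buffering (front + back)."""
    return width * height * (bits_per_pixel // 8) * buffers

size = framebuffer_bytes(1280, 1024)       # example display mode
print(f"{size / (1024 * 1024):.1f} MiB")   # 10.0 MiB for two 32-bit buffers
```

With double buffering the card draws the next frame in the back buffer while the front buffer is being displayed, so the cost doubles; the rest of the video RAM holds textures and geometry data.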

The RAM connects directly to a digital-to-analog converter, called the RAMDAC, which translates the image into an analog signal that the monitor can understand. Some cards have multiple RAMDACs, which can improve performance and allow more than one monitor to be used at the same time, displaying a single image or different ones. The RAMDAC sends the final picture to the monitor through a cable. We'll look at this connection and other interfaces in the next section.

Graphics cards connect to the computer through the motherboard. The motherboard supplies power to the card and lets it communicate with the CPU. Newer graphics cards often require more power than the motherboard can provide, so they also have a direct connection to the computer's power supply.

Connections to the motherboard are usually through one of three interfaces:

* Peripheral Component Interconnect (PCI)
* Accelerated Graphics Port (AGP)
* PCI Express (PCIe)

AGP is a high-performance interconnection between the motherboard chipset and the GPU, which enhances the overall graphics performance for 3D-intensive applications. AGP uses several techniques to achieve this enhanced performance:

* Increased bandwidth
* Dedicated use by the graphics card alone
* Pipelining
* Sideband addressing
* Direct Memory Execute (DIME)

However, the AGP interface eventually became obsolete; the PCI Express interface replaced it and further improved performance. PCI Express provides the fastest transfer rates between the graphics card and the motherboard and can also support the use of two or more interconnected graphics cards in the same computer.
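The bandwidth gap can be sketched from first-generation PCI Express figures: each lane carries 2.5 GT/s, which after 8b/10b encoding works out to 250 MB/s of payload per direction.

```python
# Peak theoretical bandwidth per direction for first-generation PCI Express:
# 250 MB/s per lane (2.5 GT/s signaling with 8b/10b encoding overhead).
PCIE1_MB_PER_LANE = 250

def pcie_bandwidth_mb(lanes):
    return lanes * PCIE1_MB_PER_LANE

for lanes in (1, 4, 16):
    print(f"x{lanes}: {pcie_bandwidth_mb(lanes)} MB/s per direction")
```

A x16 slot's 4000 MB/s per direction comfortably exceeds AGP 8x's roughly 2100 MB/s, and unlike AGP, PCIe delivers that bandwidth in both directions simultaneously.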

As we have seen, the graphics card has to communicate with the display device so that you can see something on the monitor. Most graphics cards have two monitor connectors. The current standard is to incorporate a DVI connector, which supports LCD screens, and a VGA connector, which supports CRT screens. Some graphics cards have two DVI connectors instead, but this doesn't rule out CRT screens, which can connect to DVI ports through a special adapter.

Most people use only one of their two monitor connections. People who need to use two monitors can purchase a graphics card with dual head capability, which splits the display between the two screens. A graphics card with two dual head connectors could theoretically support four monitors.

The latest graphics cards, especially those that integrate TV tuners or video processing technologies, may also feature TV-out or S-video connectors, ViVo or "video in/video out" connectors for older video cameras and FireWire or USB connectors for newer digital video cameras.

Visual redemption

Graphics cards are specialized in processing two-dimensional images as well as complex three-dimensional scenes.

Modern GPUs use most of their transistors to do calculations related to 3D computer graphics. They were initially used to accelerate the memory-intensive work of texture mapping and rendering polygons, later adding units to accelerate geometric calculations such as translating vertices into different coordinate systems. Recent developments in GPUs include support for programmable shaders, which can manipulate vertices and textures with many of the same operations supported by CPUs, oversampling and interpolation techniques to reduce aliasing (jagged edges), and very high-precision color processing techniques.

In addition to the 3D hardware, today's GPUs include basic 2D acceleration and frame buffer capabilities (usually with a VGA compatibility mode). In addition, most GPUs made since 1995 support the YUV color space and hardware overlays (important for digital video playback), and many GPUs made since 2000 support MPEG primitives such as motion compensation and iDCT. Recent graphics cards even decode high-definition video on the card, taking some load off the central processing unit.

Creating an image out of binary data is a demanding process. To make a 3-D image, the graphics card first creates a wire frame out of straight lines. Then, it rasterizes the image (fills in the remaining pixels). It also adds lighting, texture and color. For fast-paced games, the computer has to go through this process about sixty times per second. Without a graphics card to perform the necessary calculations, the workload would be too much for the computer to handle.
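To put "sixty times per second" in perspective, here is a minimal budget calculation (the resolution is an example figure):

```python
# Per-frame time budget at a target frame rate, and resulting pixel throughput.
target_fps = 60
frame_budget_ms = 1000 / target_fps       # time available to build one frame
pixels = 1280 * 1024                      # example resolution

print(f"Budget per frame: {frame_budget_ms:.2f} ms")
print(f"Pixels per second: {pixels * target_fps:,}")
```

Roughly 16.67 ms per frame means every wireframe, rasterization, lighting and texturing pass must finish inside that window, nearly 79 million finished pixels per second at this resolution.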

We stop here for the time being. The next article will take a more in-depth look at the GPU and its evolution.