
Everything you need to know about the GPU

GTX 1080

The graphics processing unit (or GPU for short) is responsible for handling everything that gets transferred from the PC internals to the connected display. Whether you happen to be gaming, editing video or simply staring at your desktop wallpaper, everything is being rendered by the GPU. In this handy guide we'll be looking at exactly what the GPU is, how it works and why you may want to purchase one of our picks for the best graphics card for gaming and intensive applications.

You don't actually need a dedicated card to supply content to a monitor. If you have a laptop, odds are good it has an integrated GPU, one that's part of the processor's chipset. These smaller and less powerful solutions are perfectly adequate for everyday desktop use, low-power devices and instances where it's simply not worth the investment in a graphics card.

Unfortunately for laptop, tablet and certain PC owners, the option to upgrade their graphics processor to the next level may not be on the table. This results in poor performance in games (and video editing, etc.) and often requires owners to turn graphics quality settings down to the absolute minimum. For those with a PC case, access to the insides and funds for a new card, you'll be able to take the gaming experience to the next level.

CPU vs GPU?

GPU (Image credit: Windows Central)

So why do we need a GPU if we already have a powerful central processing unit? Simply put, the GPU can handle far more in terms of numbers and calculations. This is why it's relied upon to power through a game engine and everything that comes with it, or an intensive application like a video editing suite. The massive number of cores on the GPU board can work through huge batches of calculations at the same time.

It's why Bitcoin miners rely on their trusty GPU for raw compute power (this is known as GPGPU: general-purpose computing on graphics processing units). Both the CPU and GPU are silicon-based microprocessors, but they're fundamentally different and are deployed for different roles. But let's not shoot the CPU down too much. You won't be running Windows on a GPU any time soon. The CPU is the brains of any PC and handles a variety of complex tasks, something the GPU cannot perform as efficiently.

Think of the CPU and GPU as brain and brawn: the former can juggle a multitude of different calculations, while the GPU is tasked by software to render graphics and focus all of its available cores on the specific job at hand. The graphics card is brought into play when you need a massive amount of power thrown at a single (yet seriously complex, because graphics and geometry are complicated) task. All the polygons!
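To make the brain-versus-brawn idea a little more concrete, here's a rough Python sketch (using NumPy purely as a stand-in, so this is not actual GPU code) contrasting the CPU-style approach of visiting pixels one at a time with the data-parallel style of applying one operation to a whole image at once, which is what a GPU's thousands of cores are built for. The frame size and brightness factor are made up for illustration.

```python
import numpy as np

# A made-up, scaled-down greyscale "frame": one brightness value per pixel.
frame = np.random.rand(270, 480)

# CPU-style: visit every pixel in turn and brighten it.
brightened_loop = frame.copy()
for y in range(frame.shape[0]):
    for x in range(frame.shape[1]):
        brightened_loop[y, x] = min(frame[y, x] * 1.2, 1.0)

# Data-parallel style: express the same work as one operation over the whole
# array. A GPU runs this kind of operation across thousands of cores at once;
# NumPy merely mimics the programming model here on the CPU.
brightened_vector = np.minimum(frame * 1.2, 1.0)

assert np.allclose(brightened_loop, brightened_vector)
```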

The Players

AMD

Two big names dominate the GPU market: AMD and NVIDIA. The former's graphics division was previously ATI, a company founded back in 1985 that launched the Radeon brand in 2000. NVIDIA came along and released what it marketed as the first GPU, the GeForce 256, in 1999. AMD swooped in and bought ATI back in 2006 and now competes against NVIDIA and Intel on two different fronts. There's actually not that much that separates AMD and NVIDIA when it comes to the GPU; it has mainly come down to personal preference.

NVIDIA has kicked things up a gear with the newly released GTX 10 series, but AMD offers affordable competitors and is expected to roll out its own high-end graphics solution in the near future. The companies generally run on a parallel highway, both releasing their own solutions to tackle GPU and monitor synchronization (G-Sync and FreeSync), for example. Other parties are in play, like Intel, which implements its own graphics solution on-chip, but you'll most likely be purchasing an AMD or NVIDIA card.

Inside the GPU

RX 480

We've established the GPU as arguably the most powerful component inside the PC, and a big part of its capability comes from the VRAM. These memory modules allow the unit to quickly store and retrieve data without having to route requests through the CPU to the system RAM attached to the motherboard. The video RAM your graphics card utilizes is separate from the RAM your PC relies on.

They're similar in principle, but totally different beasts, and the two don't need to match: a system with DDR4 system memory will happily run a graphics card with GDDR5 RAM. The VRAM on a graphics card is used for storing and accessing data quickly on the card, as well as buffering rendered frames for the monitor to display. The memory also helps with anti-aliasing, which reduces the effect of "jagged edges" on-screen by blending extra samples around edges to make images appear smoother.
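As a loose illustration of that blending idea, the minimal Python sketch below mimics supersampling: render at double the resolution, then average each 2x2 block down to one on-screen pixel so a hard edge turns into intermediate shades. Real anti-aliasing techniques (MSAA, FXAA, TAA and so on) are considerably more sophisticated; the tiny array here is just a toy edge.

```python
import numpy as np

# A toy 8x8 "oversampled" image: a hard diagonal edge of pure 1s and 0s.
hi_res = np.zeros((8, 8))
for y in range(8):
    hi_res[y, :y + 1] = 1.0

# Downsample by averaging each 2x2 block into a single on-screen pixel.
# Pixels that straddle the edge end up as in-between shades, which is what
# makes the edge look smoother to the eye.
lo_res = hi_res.reshape(4, 2, 4, 2).mean(axis=(1, 3))

print(lo_res)  # values between 0.0 and 1.0 appear along the former hard edge
```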

Bumping up the resolution of a display also combats jagged edges by making the pixels more numerous and smaller, and thus harder for the human eye to tell apart (think Apple's "Retina"), unless you look really closely. Displaying content like games at a higher resolution requires more power from the GPU, since you're essentially asking the unit to pump out more data. And all this data requires cores or processors, and lots of them.
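To put a number on "more data", here's a quick back-of-the-envelope calculation of how many pixels the GPU has to produce each second at 1080p versus 4K, assuming a 60 frames-per-second target.

```python
def pixels_per_second(width: int, height: int, fps: int = 60) -> int:
    """Pixels the GPU must shade every second at a given resolution and frame rate."""
    return width * height * fps

print(pixels_per_second(1920, 1080))  # 1080p: 124,416,000 (~124 million pixels/s)
print(pixels_per_second(3840, 2160))  # 4K:    497,664,000 (~498 million, four times the work)
```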

This is why modern graphics cards have hundreds and hundreds of these cores. The enormous core count is the main reason the GPU is substantially more powerful than the CPU, with its limited number of cores, when it comes to texture mapping and pixel output. While the cores themselves aren't able to power through the variety of calculations a CPU has to perform every second, they're leaders in their graphics trade.

Things get hot!

AMD Crimson

Leveraging all of this raw computational power means there's a lot of electricity running through the GPU, and lots of juice means lots of heat. The heat a graphics card (or processor, for that matter) is designed to shed is expressed as its Thermal Design Power (TDP for short), measured in watts. This isn't a direct measure of power consumption, so if you're looking at that shiny new GTX 1080 and spot its 180W TDP rating, that doesn't mean it will constantly draw 180W from your power supply.

You should care about this value simply because you need to know just how much cooling you're going to need in and around the card. Throwing a GPU with a higher TDP into a tight case with limited airflow may cause issues, especially if you're already rocking a powerful CPU and cooler that push the case's thermals to their limit. This is why you see massive fans on some GPUs, especially those that happen to be overclocked.
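If you want a rough sanity check before dropping a hot card into a cramped build, a simple tally like the sketch below adds up component TDP figures and compares the total, plus some headroom, against the power supply's rating. The parts and wattages are hypothetical placeholders, and TDP is only a loose proxy for actual draw, so treat the result as a ballpark rather than a guarantee.

```python
# Hypothetical build: TDP-style figures in watts (placeholders, not measurements).
components = {
    "CPU": 95,
    "GPU": 180,  # e.g. a GTX 1080-class card
    "Motherboard, RAM, SSD and fans": 75,
}

psu_watts = 550   # the power supply you already own (hypothetical)
headroom = 1.3    # leave roughly 30% spare for load spikes and ageing

estimated_load = sum(components.values())
recommended_psu = estimated_load * headroom

print(f"Estimated load: {estimated_load} W")
print(f"Recommended PSU: at least {recommended_psu:.0f} W")
print("OK" if psu_watts >= recommended_psu else "Consider a bigger power supply")
```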

Speaking of which, yes, your GPU can even be overclocked, so long as it supports the feature, has enough cooling to handle the increased heat output and you have a stable system.

Some Jargon

GPU

Architecture: The platform (or technology) that the GPU is based upon. This is generally improved by companies over card generations. An example would be AMD's Polaris architecture.

Memory Bandwidth: This determines just how quickly the GPU can move data in and out of the available VRAM. You could have all the GDDR5 memory in the world, but if the card doesn't have the bandwidth to effectively use it all, you'll have a bottleneck. It's calculated from the width of the memory interface (bus) and the memory's effective data rate; a worked example follows these jargon entries.

Texture Fillrate: This is determined by the core clock multiplied by the available texture mapping units (TMUs). It's the number of texels (texture pixels) that can be applied per second; see the same worked example below.

Cores/Processors: The number of parallel cores (or processors) available on a card.

Core Clock: The GPU's equivalent of the CPU's clock speed. Generally, the higher this value, the faster a GPU will be able to operate. It's by no means a definitive comparison between cards, but it is a solid indicator.

SLI/CrossFire: Need more power? Why not throw in two compatible GPUs and bridge them to render even more pixels? SLI and CrossFire are NVIDIA's and AMD's respective technologies that allow you to install more than one graphics card and have them work in tandem.
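Tying the bandwidth and fillrate formulas above together, here's a short spec-sheet calculation using the GTX 1080's published figures (a 256-bit memory bus, 10 Gbps effective GDDR5X and 160 texture units at roughly a 1.73GHz boost clock). Treat it as ballpark arithmetic rather than a measure of real-world performance.

```python
# Memory bandwidth: bus width (converted to bytes) x effective data rate per pin.
bus_width_bits = 256
effective_rate_gbps = 10  # GDDR5X on the GTX 1080, effective transfer rate
bandwidth_gb_s = (bus_width_bits / 8) * effective_rate_gbps
print(f"Memory bandwidth: {bandwidth_gb_s:.0f} GB/s")  # ~320 GB/s

# Texture fillrate: clock speed x number of texture mapping units (TMUs).
boost_clock_ghz = 1.733
tmus = 160
fillrate_gtexels_s = boost_clock_ghz * tmus
print(f"Texture fillrate: {fillrate_gtexels_s:.0f} GTexels/s")  # ~277 GTexels/s
```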

That's the GPU in a nutshell, and we hope this small guide introduces you to the world of graphics processing. It's an important component that throws all of its resources at a single kind of task with superb efficiency to produce some wondrous views on-screen.

tl;dr

Graphics cards are best at solving graphics problems and other tasks their numerous cores are specifically designed for. This is why they're required for gaming and why more powerful cards let you game at higher fidelity and resolutions. They can crunch far more numbers in parallel than a CPU, but only for specific kinds of applications.

Rich Edmonds

Rich Edmonds is Senior Editor of PC hardware at Windows Central, covering everything related to PC components and NAS. He's been involved in technology for more than a decade and knows a thing or two about the magic inside a PC chassis. You can follow him over on Twitter at @RichEdmonds.

30 Comments
  • Either the writer failed to include a clearly identifiable conclusion or my phone is chopping off the article for some reason.
  • There isn't a conclusion to make. This is an information piece, not a comparison or recommendation piece.
  • I see. I suppose my idea of writing is rooted in essay form from school days. I did enjoy the article. It answered some things I've been curious about.
  • We'll have plenty of additional content going forward that will talk about comparisons and such :)
  • I was thinking the same thing
  • Is there any chance that one day a CPU will have powerful graphics like the GTX 10 series?
  • I think the question should be:
    Is there any chance that one day the CPU will run efficient, very well optimized software, so it could reach its full potential like a high-end GPU?
  • GPUs are still evolving, and integrated GPUs in processors are following a bit behind. In the old days, CPUs were more limited; they couldn't even perform floating-point arithmetic, meaning software had to work with integer numbers to process real numbers (with decimals).
    While this could be done in software, the math co-processors helped a lot by providing cores able to work natively with floating-point numbers. The i387 family of co-processors ended up being integrated into the main CPU, and every modern CPU contains the electronics to work natively with floating-point numbers. GPUs are basically the same story. Years ago, accelerated graphics meant being able to draw lines and fill shapes without having the CPU compute and set each individual pixel; nowadays they are co-processors designed to work very efficiently with 3D matrix computation and pixel processing, and recently they have been used for other tasks requiring simple but massively parallel processing. Everything a GPU can do, a CPU can do as well. DirectX WARP is an implementation of Direct3D running as software on a CPU and is used as the reference for DirectX GPU testing. The advantage of the GPU, like the math co-processor before it, is that it is faster at performing these computations by several orders of magnitude. Just as modern CPUs integrate the math co-processor, they also often include graphics co-processor cores on-die: basically a small GPU architecture in the same chip as the CPU. This means modern computers really have three ways of working with DirectX code: they can run it on the CPU using WARP, on the CPU using the integrated GPU cores, or using an external GPU.
    The evolution increases the power of the integrated GPU to a point where many games can run without a dedicated GPU, but dedicated GPUs are also improving, and requirements (for VR, extremely high resolutions, multiple simultaneous viewpoints for volumetric displays…) are still way beyond even what a single GPU can achieve. It is very likely that, in the future, the GPU will be separated from the display interface (GPUs doing the processing for GPGPU/DirectCompute and frame rendering, and a separate display driver reading frames from shared video memory to send to the appropriate displays). And with the GPU already being integrated into the CPU, that part will evolve as well, but we're still very far from the integrated GPU making the external GPU disappear. For power dissipation and modularity, we might see even more co-processors in the coming years, such as dedicated real-time computer vision and environment understanding (Microsoft's Holographic Processing Unit), new physics co-processors (or new generations of GPU) to process environmental audio and physical interactions in VR and mixed reality, quantum-computing co-processors, artificial neural network co-processors, and maybe a return of multi-socket designs to increase CPU core counts beyond the size limitations of a single die.
  • Great information there, Philippe. I think the biggest hindrance to speedy improvement in APUs is the fact that most people don't need more power. The APU/integrated stuff is enough for most people, and it doesn't make sense to raise prices for capability people don't need or won't notice. The average user is just streaming 1080p/4K and might play a couple of non-intensive games. It's always a small enthusiast subset of the population that is playing graphics-intensive games/applications/VR and, in the future, holograms. These people expect to and will willingly pay $100-600 for a card every 2 years. The integrated GPUs will also continue to be developed, but only when these cutting-edge technologies go mainstream.
  • I think that's the reason Intel has both HD Graphics and Iris Graphics.
    They'll have a low-power GPU integrated in SoC solutions, a mainstream (HD Graphics) GPU integrated in mainstream CPUs, and a performance GPU (Iris) integrated in high-end CPUs. Server CPUs (Xeon) will stay without an integrated GPU at least until there's a market for mainstream server GPGPU, as they won't compete with high-end GPGPU for the foreseeable future anyway. They'll definitely try to include a GPU that's Good-Enough® for the mass market to make their CPUs more attractive.
    High-end performance will stay in dedicated GPUs for now, but I definitely envision them moving away from the PCIe bus to a GPU-specific socket, even if electrically compatible, just to improve weight distribution, power distribution and airflow. Finally, DirectX 12 being able to make all GPUs available as a pool might make integrated GPUs interesting again, even as an extra alongside a dedicated GPU, by making it possible to distribute load across them.
  • R9 295 is a beast
  • You make the CPU sound lame. The CPU does a wide, flexible range of computations on any data, whereas the GPU can only do a narrow range of computations, but on huge arrays of data at a time.
  • You're absolutely correct, the CPU isn't lame by any means and I'll look to edit that part to make it sound less so (should it appear that way to anyone who happens to not be me :-P).
  • By the way, great topic for an article. It's good to occasionally focus in on a topic everyone is assumed to have a grasp of, but doesn't really.
  • Well, that's the conclusion if you read it :)
  • It does end as if the article is incomplete. It seems to need an "I hope this helps explain..." to make it apparent the article is at the end.
  • I'd prefer a rolling banner gif with the words "The End"
  • Scrolling ticker would be sweet
  • Don't forget, it should flash in different colors!
  • That was a surprisingly good read, even for people who don't particularly need an introduction to graphics cards.
  • Thanks, Richard. I enjoyed the article. Question: A friend of mine who does 3D Maya animation work, you know what I mean, said rendering such projects destroys his GPU, such that after doing it say 4 to 5 times his card is totally drained/destroyed to the point he has to change it, and the process goes on like that. Does 3D rendering progressively destroy GPUs? I argued against it because it goes against my understanding of how software affects hardware. I have been wanting to research this but felt I should ask you or anyone who can chip in. I know it's a silly question for the well informed, please pardon my ignorance. :)
  • Software will not destroy hardware. Hardware can destroy itself if it is improperly set up or maintained while running demanding software, BUT what your friend describes should not be happening. For example, removing a heatsink or using an ineffective one may (depending on the intended design) cause computer chips to overheat and potentially fry themselves. Lastly, you can potentially use software to make hardware run harder than designed, causing faster natural degradation. An example is overclocking past intended limits.
  • Thanks Shadow 024, I thought as much. Your explanation makes perfect sense.
  • Sounds like he's doing it wrong
  • How is the NVIDIA GTX 750 Ti for designing/rendering purposes with an i5, SSD and 8GB RAM?
    Will it be worth it?
  • I have the exact same set up. GTX 750Ti. It's a beauty.
    To help you out: I play Forza Motorsport 6: Apex at 1080p averaging 30fps with medium graphics settings (really not bad because it's a graphics DEMANDING game - really).
    I have a Core i5-4440, 8GB RAM, 256GB SSD.
    Designing/rendering won't be an issue :) Edit - You can consider the GTX 760 too. Slightly better. But I recommend checking online reviews from people in your field.
    Hi, thanks for the information. The main use is rendering, and gaming if I get time. So I just wanted to know how the setup is for the primary purpose, i.e. rendering :) Thank you :)
  • What prices can you get that 750 Ti at? It's a decent budget card, but it is two generations old. For the low end you can consider the RX 460; it's like 30-40% more FPS in most games than the 750 and prices are just a tick over $100 now.
  • I'm afraid to say it's already bought :| The results are yet to be checked. Seeing this article, I was curious to ask, if anybody can tell me, whether it's good for V-Ray/rendering or not.
  • One issue I had when I bought my GPU a couple of years back was whether to go for a GPU with a 256-bit bus and GDDR3 RAM or a 128-bit bus with newer GDDR5 RAM. Anyone know which would be better?