
AMD graphics cards see huge gains with DX12, Nvidia not so much

The developer of upcoming strategy title "Ashes of the Singularity" released benchmarking software a few days ago, marking the start of the industry-wide transition to DX12.

DirectX is an application programming interface (API) that lets developers tap into a device's hardware to produce complex 3D graphics. DX12 is the latest version, and it promises more efficient CPU utilization, reduced driver overhead, and various other benefits. Lead developer Max McMullen said one of DX12's goals was to bring "console-level efficiency" to PCs: console hardware is focused entirely on games, whereas PCs are often doing other tasks even while running them.
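To make the "driver overhead" point concrete, here is a minimal sketch (our illustration, using only the public D3D12 API, with error handling omitted) of DX12's explicit model: the application creates the device and records command lists itself, work that DX11 drivers largely did behind the scenes.

    // Build on Windows with the Windows 10 SDK: cl /EHsc dx12_sketch.cpp d3d12.lib
    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <cstdio>
    using Microsoft::WRL::ComPtr;

    int main()
    {
        // Feature level 11_0 hardware can run the DX12 API, which is why
        // existing cards like the GTX 980 and R9 390X appear in this benchmark.
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device)))) {
            std::puts("No DX12-capable GPU/driver found.");
            return 1;
        }

        // The app owns command recording: one allocator and list here, but each
        // CPU thread can record its own, which is where the promised "more
        // efficient CPU utilization" comes from.
        ComPtr<ID3D12CommandAllocator> allocator;
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocator));
        ComPtr<ID3D12GraphicsCommandList> list;
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocator.Get(), nullptr, IID_PPV_ARGS(&list));
        // ... draw calls would be recorded here ...
        list->Close();  // lists are recorded up front, then submitted in batches
        std::puts("Recorded an empty DX12 command list.");
        return 0;
    }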

In this context, DX12 hopes to bring significant gains to PC games across the board. However, Ashes of the Singularity's benchmark is proving controversial for Nvidia's high-end cards, which so far aren't seeing those benefits.

This graph shows DX12's performance (blue) against the older DX11's performance (green) using Nvidia's GTX 980 and AMD's R9 390X.

PC Perspective has put various hardware configurations to the test against the new benchmarking software.

Nvidia's GTX 980 is one of the company's most powerful GPUs, yet the benchmarking software seems to show that DX12 hinders its performance slightly. AMD's R9 390X, however, shows massive gains, often upwards of 80%, pushing it ahead of Nvidia's GTX 980. The disparity is problematic for Nvidia, as the R9 390X is usually cheaper than the GTX 980.

Nvidia dismissed Ashes of the Singularity's benchmark, tersely describing it as inaccurate:

..."We believe there will be better examples of true DirectX 12 performance and we continue to work with Microsoft on their DX12 API, games and benchmarks. The GeForce architecture and drivers for DX12 performance is second to none - when accurate DX12 metrics arrive, the story will be the same as it was for DX11."...

Ashes of the Singularity's developer Oxide defended the code in a detailed blog post:

..."It should not be considered that because the game is not yet publicly out, it's not a legitimate test," Oxide's Dan Baker said. "While there are still optimisations to be had, the Ashes of the Singularity in its pre-beta stage is as or more optimised as most released games. What's the point of optimising code six months after a title is released, after all?"

At least in part, this data could simply reflect AMD's historically poor DirectX 11 performance compared to Nvidia, with DX12 closing a gap rather than creating new headroom. However, Intel's integrated GPUs have seen similar gains. Check out the video below from Intel on the benefits of the tech for lower-end GPU setups, such as the Surface Pro 3:

DX12 is proving a controversial topic for various reasons, particularly if the disparity between Nvidia and AMD continues. DX12 is also heading to Xbox One, although whether it will bring significant gains there remains speculative, as consoles are already heavily optimized.

Are you a DirectX buff? How do you feel about the data? Hit the comments!

Source: PC Perspective

Jez Corden

Jez Corden is a Senior Editor for Windows Central, focusing primarily on all things Xbox and gaming. Jez is known for breaking exclusive news and analysis as it relates to the Microsoft ecosystem, all while powered by caffeine. Follow him on Twitter @JezCorden and listen to his Xbox Two podcast, all about, you guessed it, Xbox!

86 Comments
  • AMD performance going up more than Nvidia's reminds me of Tim Cook talking about how the Mac is destroying the PC because its growth rate is higher.
  • Oh please. It's always been the case that AMD generally has superior hardware, but really suck at drivers. Whereas Nvidia's drivers are really, really good and were likely already squeezing most of the available performance out of their hardware. These benchmarks would seem to support that, although I agree that we need to see more results from other games. Still, there's a reason AMD hardware was chosen for both the PS4 and XB1. Both those platforms run their games right against the bare metal with hardly any driver code in the way, so they're obviously going to favor whichever chip has the better raw performance.
  • Nvidia drivers have been very bad in Win10. I was getting frequent BSOD, freezes, displays that were flashing, and so on. Looking through the forums the problem is that the Nvidia drivers simply cannot handle 4K displays. I unplugged my 4K display and now only use my 1080 display. Not one problem since then, but then again I am not using my 4K display. I am not too sure how ATI drivers are for Win10, but Nvidia ones have been far from good.
  • They were already bad with Windows 8. One of their updates gave me frequent BSODs, and it took them a lot of updates to fix that. In general I have to say I regret getting an Nvidia card. While I don't necessarily think AMD is better, I like their open-minded strategy more, whereas Nvidia seems more like Apple, locking everything down. Also, I can totally imagine them using their money and influence to get competitors out of their way ... Same with Intel. I really hope AMD will not go bankrupt, because it would leave Intel as well as Nvidia with a monopoly, and both are companies I would not like to see hold one. Seems like there is only AMD left for me ...
  • Hell, it started with Vista: rumors were that NVIDIA paid off WHQL test labs to give even their mediocre hardware "Windows Vista Ready" certification (Home Premium) when in fact most could only run Basic. And let's not forget the huge recall during that time.
  • AMD won't go bankrupt; they build great graphics cards for the premium 15" MacBook Pro models (AMD Radeon R9 M390X), and they also power the Xbox One graphics chips. On the other side, Intel makes great CPUs, but their Intel Iris Pro graphics are not very popular, and everything else is crappy (for games).
  • My iMac has Iris Pro 5100, and it's great for gaming, actually. Raids in WoW at 40+ FPS at 1080p on Balanced or higher settings. Diablo 3 in group Rifts with stuff blowing up everywhere. TESO. GW2. Civ 5, Deus Ex, etc. They are great. But AMD makes better integrated GPUs than Intel. Intel's iGPUs are nothing to scoff at, though, and I think they sort of benefit from being paired with superior CPUs as well. Most games are going to bottleneck at the CPU before the GPU these days. This is why Intel has had such a great advantage over AMD: Intel really ramped up their single-core performance (more IPC, Hyper-Threading, etc.) while AMD focused on multiple cores. Thing is, most games aren't designed for multi-core, so the AMD CPUs/APUs were sort of SoL. In DX12, having more cores seems to have a greater benefit than having fewer, faster cores, and higher clock speeds seem to improve things quite a bit as well (IPC differences notwithstanding). So AMD is going to be able to deliver better performance for DX12 games/applications at the lower end of the hardware spectrum, where they are selling quad-core A8/A10 APUs while Intel is selling dual-core i3/i5 CPUs. The fact that their APUs have better integrated graphics is going to exacerbate that, and this is especially true for people who game on mainstream notebook hardware. At that point, the weaker battery efficiency of AMD's APUs is going to seem like a worthy trade-off. This was already a consideration with Mantle, and it's going to be a consideration on the Mac as well when El Capitan rolls out with Metal (since some Macs are dual-core while others are quad-core or more). A lot of people will have to upgrade their mainstream laptops for this, though. But it's a nice flip on things.
  • I cannot agree at all. My AMD FX 8350 sits at a mere 25 to 50 percent usage in a lot of games, whilst my GeForce GTX 970 is maxed out. Especially with modern games, I've found that they tend to focus on the GPU and neglect the CPU. A friend of mine has a 6-year-old i7, and most modern games barely even get his CPU to 100% usage, whilst the bottleneck in his PC is his graphics card.
  • AMD might not go bankrupt anytime soon, but they could very well start a slow descent into financial hell. The thing is that they are definitely making losses. And losses mean less money for research, and less research means slowly falling behind until they do not matter anymore. So I definitely am worried! The problem with AMD is that it is the sole competitor for both Nvidia AND Intel. So if AMD falls away, we could very well face stagnation in both segments. OK, that is not entirely true: Intel also has to face ARM processors, but at least when it comes to desktop chips there could be less innovation.
  • Generally, installing AMD drivers is a pain in the butt, but I have no idea what is going on with NVIDIA; the drivers are fine. But I was wondering: is the GTX 980 a card designed for DX12? We must recall that although it may be the most powerful card, if it is not intended to run on DX12 then it will not perform well, and the test proves that.
  • The GTX 980 is definitely intended to run on DirectX 12. What's more, it's nominally a newer part than the 390X, which, while a newer SKU, is architecturally an older GPU.
  •    It's always been the case that AMD generally has superior hardware, but really suck at drivers.
    LOL! No, it definitely hasn't been the case, and using the consoles as reasoning shows your lack of knowledge of the subject. AMD are the only people who make APUs that are of any use; APUs are cost-effective for consoles due to power requirements and compactness, not to mention it being much simpler to have a single chip providing both graphics and processing. AMD hardware had been absolutely rubbish compared to nVidia UNTIL the 290X generation, but even still they aren't quite up to nVidia's level, which usually shows in power consumption and temperature tests.
  • So the Tegra isn't an APU? I'm not sure what you're arguing for; my point still stands that AMD generally has the better hardware. If you're basing your comparison off benchmarks, those all run on top of the PC's drivers. Nvidia does some really scary stuff in their driver (e.g. detecting the game that's running and patching/replacing the game's shader code with their own) to get better performance. AMD does this too, but not to the degree Nvidia does, which is why Nvidia scores better with weaker (and thus more power efficient) hardware. But on a platform like a console, where AAA games bypass the driver and feed hand-crafted command buffers directly into the GPU, those tricks don't apply. I honestly don't favor one company over the other, but I think these benchmarks are telling a truth that's rarely seen/discussed.
  • "with weaker (and thus more power efficient) hardware." OK, here I definitely start thinking you don't know what you are talking about. Power efficiency is about the GPU's architecture, not about its "weakness" or "strength".
  • You're misinterpreting what I'm trying to say. Performance is a combination of how powerful the hardware is, and how efficient the drivers are (or how well they can transform the workload to become more efficient). If you do poorly with the drivers, the only way you can make up for that is to brute force your way through the workload with hardware, which is going to consume more power. Nvidia can strike a better balance between performance and power consumption because they have better optimized drivers.
  • Yeah, I can agree some of the stuff NVIDIA's OEMs do to the GPU before it ships doesn't make sense. I had problems running NFS Undercover using newer reference drivers on a desktop. Don't know what MSI did, but it made for a problem when Origin decided those drivers were too old to run such-and-such game. Glad I found a decent laptop to run games; haven't had any issues with my AW's 7970s (aside from games with the NVIDIA PhysX quirk, and those ran fine on the Panther Point i7 Extreme mobile).
  • Tegra is an ARM system on a chip (SoC), which is not the same as an APU. Even then, he didn't say the Tegra SoC wasn't an APU (which, using some loose definition, you could argue it is), but that "AMD are the only people who make APUs that are of any use." If your only basis for comparison is Tegra, which has a laughable CPU compared to what AMD has in their APUs (especially the A8/A10 and FX lines), then I don't see where the disagreement is.
  • No. Tegra is not an APU. It is an SoC. At a stretch, we can say Tegra is also an APU, but in that case we can call Qualcomm into the equation too, and that is not what we are discussing here. I think Intel is the only other company making things similar to APUs, with their Haswell, Broadwell, and Skylake series. Weaker hardware generally means hardware that cannot be used to its full potential efficiently. In that sense, weaker hardware will always consume more and under-deliver. I think you meant to say that AMD has a superior hardware design to Nvidia. In that case, I would say most of the benchmarks are to be taken seriously, because benchmarks essentially do a load test and measure how much load a GPU or APU can handle. Your point that the driver wins the benchmark is really true. Even in a real-world scenario, how much juice you get out of hardware defines the strength of that hardware, and for this, drivers are also critical. It is not like a third-party company or the game developers write the drivers for Nvidia/AMD; they write the drivers for the hardware they create, so they should make sure the software is not a bottleneck for their hardware.
  • You are out of your mind. GTX 680 released March 2012, AMD 7970 GHz Edition released June 2012: http://www.anandtech.com/bench/product/1031?vs=1348 Initially the GTX 680 was better, due to drivers. I have the 7970 GHz Edition. It took a year, but then, if you look at the 2014 bench, AMD smashes the GTX 680, like its hardware should allow it to do: a 384-bit bus vs 256-bit, 2048 cores vs 1536. While you will say Nvidia cores are stronger, the fact is they aren't. Every time AMD catches up with drivers after Nvidia releases a product, the results mirror the core-count difference. The same thing happened with the 290X, and the same with the Fury X. The Fury X has double the cores of the 7970 GHz Edition. Double. It has 512GB/s of memory bandwidth, crushing Nvidia.
  • Really, AMD has done this each time since the 7970 GHz Edition. So has Nvidia. Their GTX 980 Ti also has double the cores, and its memory throughput has about the same disadvantage as the GTX 680 had against the 7970 GHz Edition. AMD has always had better hardware out, and worse drivers until later.
  • I mean, really, look at my link. 1536 is 75% of 2048. Do the math. Some games are clearly still better optimized for Nvidia, but the performance delta nearly matches the 25% core difference. So AMD's higher memory bandwidth and extra cores are definitely the reason behind it. In other words, the hardware rocked and the drivers sucked, even on the 7970 GHz Edition.
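Spelling out the arithmetic in that comment, using the shader counts cited above:

    \[ \frac{1536}{2048} = 0.75 \]

i.e., the GTX 680 carries 25% fewer shader cores than the 7970 GHz Edition, which is roughly the performance delta the commenter says emerges once AMD's drivers mature.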
  • But here is the interesting one. Go to the Fury X vs 980 Ti bench. The Fury X already has a decent gain by comparison. This should be interesting when DX12 comes into play, especially because it's clear the more cores you have, the bigger the gain, and AMD cores are certainly close in power to Nvidia cores. Now go to GPU bench 2015. The 290X vs the GTX 780 Ti shows a difference similar to the 7970 GHz Edition vs the GTX 780 Ti.
  • I always thought it was because AMD was willing to sell the hardware for cheap so that console makers would make more money on hardware sales, rather than AMD having superior hardware. (And I guess partly because Nvidia decided to invest in other markets)
  • lol, I hear that. And I was just about to switch to Nvidia for my next build.
  • I'm still going to. Don't get me wrong, I've been very happy with my AMD cards, but Nvidia seems to be better with power consumption, and they appear to have a leg up on triple-monitor setups. I don't believe I'll notice enough of a difference in games to worry about Nvidia's smaller gains.
  • Best geekiest comment thread I've ever read, thanks guys :) !
  • > Are you a DirectX buff? How do you feel about the data? Hit the comments!
    This was done on "alpha" software. We really have no idea if NVidia is worse. The only thing we know is that the AMD drivers have been far behind and are just catching up, and the reason AMD did so well is probably that it was that far behind to begin with, so comparisons are easy. Since AMD is in the Xbox One, there is incentive for both AMD and MS to get this right. If MS wants Windows to sell to gamers, there is incentive to make it work well with both GPUs. NVidia wants to sell GPUs, so there is incentive for them to make it faster. All in all, the data shows how far behind AMD has been rather than anything special about NVidia. I suspect that with time and new releases the NVidia drivers will match the AMD numbers, and since both cards can do some things better than their rival, in those aspects one GPU will do better than the other. I believe the tech media is hyperventilating, or looking for click-throughs in a slow news cycle, because there is really nothing to write about unless you are an AMD fanboy. I am a console gamer, so I have no dog in this fight. But to me the numbers mean good things for DX12 and AMD fans in general. For Nvidia, there is more to come, and we do not know if anything has been optimized for Nvidia at this point.
  • You do have a small dog in this fight...  DX12 will affect (improve) Xbox One performance considerably for newer games.
  • After that nobody needs PC's because xbox will be better than any PC with dx12 /s
  • No, it won't improve performance considerably. Developers already have low level access to hardware on the Xbox One. It's specialized hardware after all.
  • Not full low-level access, though; they get better optimization tools, but they don't get magic bare metal. DX12 on Xbox is going to be fun to watch; we will see how much access console games truly had before.
  • That's not true. They can feed raw command buffers directly into the GPU. That's about as bare-metal as you can get. You probably won't see much of a bump for AAA games; it will mostly impact indie games that were coded on top of DirectX or using middleware like Unity.
  • Agreed, and it also won't eliminate the disparity in getting playable framerates at higher graphics settings for the One vs. the PS4, since the hardware between the two consoles differs as well: faster RAM in the PS4, the way the Kinect affects the One when it's connected, etc. Quite a bit of "DX12" was in the XBOne already, since the console launched. This has a lot to do with bringing the PC API into parity with the console, and improving both at the same time. Similar to what Apple is doing with its move from OpenGL to Metal in OS X El Capitan (Metal is already in iOS).
  • Darkness690, it's not about getting closer to the metal. You misunderstand; it's about much better parallelism. Here is an article that explains things. Hope that helps :/ http://sapphirenation.net/directx-12-what-it-means-for-pc-gamers/
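The linked article's "parallelism" point is easy to sketch in code: under DX12, each CPU thread can record its own command list against its own allocator, with no global driver lock serializing them as in DX11. This is our illustration (the worker count is arbitrary and error handling is omitted), not code from the article:

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>
    using Microsoft::WRL::ComPtr;

    struct Recorder {
        ComPtr<ID3D12CommandAllocator> alloc;   // must outlive GPU execution
        ComPtr<ID3D12GraphicsCommandList> list;
    };

    int main()
    {
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
            return 1;

        const int kThreads = 4;  // arbitrary worker count for this sketch
        std::vector<Recorder> rec(kThreads);
        std::vector<std::thread> workers;
        for (int i = 0; i < kThreads; ++i) {
            workers.emplace_back([&, i] {
                // ID3D12Device creation methods are free-threaded, so each
                // worker can build its own allocator/list independently.
                device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                               IID_PPV_ARGS(&rec[i].alloc));
                device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                          rec[i].alloc.Get(), nullptr,
                                          IID_PPV_ARGS(&rec[i].list));
                // ... record this thread's share of the frame here ...
                rec[i].list->Close();
            });
        }
        for (auto& t : workers) t.join();
        // The closed lists would then be submitted together via
        // ID3D12CommandQueue::ExecuteCommandLists.
        return 0;
    }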
  • Good for AMD; I hope they start to improve their GPUs, to be honest. I use Nvidia myself but would like to see some good competition and more radical improvements in both software and hardware. Soon a smaller node and HBM2 will improve their GPUs; however, it remains to be seen if their drivers will be good.
  • Prefer AMD over Nvidia. AMD has historically pushed the tech world forward (AMD64 vs Itanium) and continues to do so (FreeSync {VESA Adaptive-Sync} vs G-Sync {Nvidia proprietary}). Not to mention the issues GameWorks titles have on AMD hardware. So yes, without AMD in the competition... who knows what crap Nvidia is going to pull next.
  • You raise the right points about AMD; it has continuously led in chip innovations despite not having the capital, pre-GlobalFoundries spin-off, to chase nanometer-based improvements in performance. I believe what we are seeing here is, once again, as with the x86-64 instruction set (which Intel acquiesced to, as you alluded), that GCN was and is the next generation of performance improvement through innovation. I will forever be biased towards AMD products because they have led innovation and forced the industry to progress, and that is worth championing. Nvidia is a company I love as well, as they had a significant role to play in chipsets, which were the largest bottleneck in progressing overall system performance. Long story short, the GCN bet is coming to fruition, whereas Nvidia has been able to produce cards that maximize performance on day one, albeit through closed innovation.
  • Belief isn't fact. Show me where someone compares the two microarchitectures and shows, e.g., GCN 1.2 beating Maxwell 2.0. Here is an article showing that even in general compute, Nvidia wins more often:
    http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/20   This is all about cores, and DirectX 12's better use of them:
    http://sapphirenation.net/directx-12-what-it-means-for-pc-gamers/
  • Prefer ATi over Nvidia
      FTFY. You do know that AMD =/= ATi? True, AMD bought ATi, but ATi is still ATi with little change. One positive change is that AMD forced ATi developers to contribute to open source.
  • AMD GPUs are as fast as Nvidia GPUs in GPU-limited situations and when at least an i5 is used. AMD never lagged.
  • A little-known fact: because of the node delay/jump, both nVidia and AMD had to make drastic decisions on how best to utilise their die space. nVidia sacrificed compute power to take the gaming crown, whilst AMD went for a more compute-based approach, which is why its cards were so popular during the Bitcoin craze. Add in the fact that AMD just doesn't have the resources nVidia has when it comes to giving game developers money and libraries, and that often those resources are a double-edged blade: yes, they improve performance, but only on nVidia hardware. It's also an indicator of what AMD was focusing on for the next generation of APUs. Combine this with AMD's experience with CTM software, and I think the next generation of APUs/GPUs will be interesting now that they are able to take advantage of HBM2 and the next process node.
  • But even in general compute, Nvidia often wins more than AMD:
    http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/20
  • You should use up-to-date benchmarks when citing things. http://www.anandtech.com/show/9421/the-amd-radeon-r9-fury-review-feat-sa... Although the 980 is an older part, it's still Maxwell. Additionally, the 980 Ti came out pretty recently, and AMD has swung on top in a lot of benchmarks. It's no home run for either company, but the Fury beats the 980 (and often the 980 Ti) in 5/9 of their benchmarks. It's not a significant lead, but it's enough to indicate that AMD has a win there again. Additionally, both the Fury and Fury X have pretty raw drivers as of those reviews, whereas both the 980 and 980 Ti are pretty mature. Even without DX12, Fury performance is likely to shoot up once drivers improve a bit more.
  • Isn't the Xbox one gpu and based? I wonder how this will affect performance?
  • It'll only have an effect if a game takes advantage of DirectX 12.
  • There are significant relative improvements to API calls that were utilized in DX11, and the Windows 10 rewrite of the Xbox OS is an alignment with the DX12 strategy, which COMES from the Xbox team. It won't be the whole relative improvement, but a majority of titles should make improved use of the hardware. As developers finally have a decade-plus platform to build their engines on, if the strategy is to build off an engine (such as the Unreal Engine) and extend a game's revenue lifetime, they could push an update to the game engine that allows them to do more with the title in future content releases. I am not 100% sure, but I believe Destiny is an example of such a title, and DX12 brings not only significant improvements for the gamer; financially, it improves the breadth and capability of the team to deliver larger and sexier content. I'm really excited to see what Halo 5 brings to the table, as that team surely has had access to the DX12 pipeline and can deliver mind-blowing performance on both the front end and back end. Another example would be Crackdown, which is using the Azure cloud for real-time physics calculations that cannot be reproduced even on the beefiest of computer setups, regardless of the lowest-common-denominator factor that exists in PC gaming.
  • A lot of the innovation from DX12 is coming from Xbox development. The XB1 is already running a lot of what DX12 is bringing to the PC side of things. Bringing the full DX12 implementation to XB1 is going to bring negligible performance gains... because it is already using most of them. The big deal is bringing those optimizations to the PC, and even then most of the advantages are going to be 1) after DX12 drivers are released to take advantage of the new APIs, and 2) mostly show up in new games that specifically flag for it. That isn't to say that older titles won't see any improvement, because we are sure to see latency times lowered and made more consistent... but that is in the smoothness of operation; raw FPS performance is really not expected.
  • Brad @ Stardock disagrees with your "mild" theory. He says twice as much: http://news.softpedia.com/news/DirectX-12-Doubles-the-Xbox-One-Power-Sta... (and no, the Xbox One really isn't running any DX12 - you really are just making stuff up :) ) There is no such thing as DirectX drivers. You need a WDDM driver for your GPU (on a PC) that will run a game that calls DX APIs within the game code. The running OS handles those API calls to render the game screen (amongst many other duties, including DX ones like sound and input). So yes, raw FPS gains really are expected. How much is yet to be seen, but it could be more than people realise. Let's hope so, as gamers.
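On the "no such thing as DirectX drivers" point: a game can ask the runtime what the installed GPU and WDDM driver actually expose. A small sketch of such a capability query (our illustration, using the public D3D12 API):

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <cstdio>
    using Microsoft::WRL::ComPtr;

    int main()
    {
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device)))) {
            std::puts("DX12 runtime/driver not available.");
            return 1;
        }
        // Feature tiers differ per GPU family, which is why "supports DX12"
        // is not a simple yes/no question.
        D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
        device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                    &opts, sizeof(opts));
        std::printf("Resource binding tier: %d\n",
                    static_cast<int>(opts.ResourceBindingTier));
        return 0;
    }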
  • Sorry, AMD based
  • Interesting, aren't the 390 and 390X rebadged 290 and 290X? So in a messed-up sense, the 290 outperforms Nvidia's GTX 980 in this benchmark. You can pick up 290s quite cheap these days as well, creating an even bigger disparity, lol.
  • The top-of-the-line AMD cards are brand new; below that are rebrands with mostly minor changes.
  • No surprise at all, nvidia always had buggy and inefficient drivers
  • AMD has been behind for years now. But the last 6-8 months of driver releases from Nvidia have been pathetic. They can't get a driver right to save their life. Most people are stuck using drivers from February or earlier due to constant TDRs, throttling, and a host of other issues. Now they have a new issue with Windows 10 and their drivers literally breaking the displays of certain notebooks. It's ugly. One of the benefits of me being lazy is that I did not install Windows 10 or the new drivers on my Alien yet, or mine would be broken like the ones in the link below. Anyway, AMD has been behind for a while, but Nvidia has been beyond pathetic lately, including trying to claim, after more than ten years of advertising and promoting overclocking, that it is a bug and was never meant to be possible. At this moment, I regret that I have supported them the last few years and spent a lot of money on some top-of-the-line gaming computers with their cards in them. http://forum.notebookreview.com/threads/windows-10-upgrade-warning-for-a...
  • No surprise given Xbox One runs on AMD hardware. They had plenty of time to optimize the drivers.
  • A very true point. Since Sony abandoned its multi-billion-dollar investment in the failed Cell architecture, game developers with multi-platform releases are significantly incentivized to build software for Xbox first, as it is the highest common denominator across all platforms.
  • Xbox One uses DX11, not DX12, but you are not incorrect. I think AMD just finally has optimized drivers, while Nvidia for once has nothing (they historically had better-optimized drivers). My guesses would be:
    - Since AMD developed Mantle, which in large part will end up in Vulkan and definitely played a big part in the creation of DX12, they had a head start.
    - Microsoft will be very interested in optimizing DX12 for AMD's GCN, since the Xbox One runs on AMD graphics. Well-performing AMD under DX12 means more potential for the Xbox One long term.
    - AMD historically does rather well in compute tasks but poorer in gaming tasks. The DX12 changes might just remove some performance barriers that favor AMD's approach more than Nvidia's.
    ... but that is just speculation. I do read tech papers covering the design approaches, but I am far from an expert and do not work in the industry either, so frankly I am guessing here.
  • Good for them. Watch prices rise
  • Why would they rise? They need more sales and this is good advertising.
  • I don't think we'll see prices rise unless demand completely exceeds production capabilities, but that is really contingent on the fabrication facilities; both AMD and Nvidia rely on outside vendors (GlobalFoundries and TSMC) for production. If demand is consistent and forecastable, then the foundries will have the capacity to satisfy market demand.
  • What a poor excuse from Nvidia. They are also very quiet about showing off their cards with a DX12 demo. Nvidia is getting worse and worse. I went from 3 x 1 monitors to 1 x 34" because they keep screwing up their drivers and removing good working features. Same thing with 3D; the only thing keeping 3D alive is the people fixing stuff for them.
  • I'm happy for AMD! Posted via the Windows Central App for Android
  • Well, I don't like Nvidia as a company, but I do suspect it's all about drivers at this point; AMD probably has drivers specifically optimised to run this benchmark. Let's also remember that the AMD GPUs equivalent to Nvidia's use double the power. As for Intel making similar gains: well, 3 FPS instead of 2 isn't going to help much...
  • Double?
  • In some cases, yes.
  • Would really love some sources to back that up
  • This is because AMD partnered with Microsoft throughout the development process ... Nvidia is always late to the game with drivers and code because they don't partner until the last minute.
  • This is why Intel processors are ahead of AMD right now. While Intel has been working on processors with crappy graphics, AMD has been building some BA graphics solutions.
  • I don't get your comment
  • Integrated graphics, whether from Intel or AMD (APU), are always crappy. An example: my PC has a Broadwell Intel Core i7 5500 CPU with Intel HD Graphics 5500, and this chip can't play games at all. Maybe you can run Diablo III or FIFA, but I'm talking about mainstream 2014-2015 games; they simply don't get over 12FPS at 1024x768, which is not good. On the other side, my PC comes with an AMD Radeon R7 M270, which is able to play all games at low settings (>30 FPS, 1024x768, low textures) and some at medium (>30 FPS, 720p, medium textures).
  • AMD processors with the APU enabled just generate more heat than with it off. Also, NVIDIA's present GPUs are slower: even the MSAA/SSAA implementation in World of Warcraft causes a massive drop in frames if you enable those options. I have a modest system with an AMD A6 at 3.6GHz, 4GB RAM, and an Nvidia GTX 750 graphics card, and I can tell you it doesn't support DX12. A current Nvidia graphics card can be the fastest GPU, but if it is not designed for DX12 then it will perform poorly in DX12 games and in executing the new DX12 instructions. I've only noticed an improvement in sound quality on onboard (Realtek-based) sound cards.
  • Given that DirectX 11 still runs way faster on NVidia, and that most games are DirectX 11 and will be for some time now, I would still stick with NVidia.
  • As long as AMD can't find a way to reduce power consumption, I will stick with NVIDIA. My big concern is their power consumption, which is nearly twice that of NVIDIA's GPUs. That's why I can't choose AMD's solutions -- whether GPU, CPU, or even APU -- their power consumption is damn high. With the limited wattage capacity in my country, it's a turnoff for a cyber game center running 5 PCs on a 1300W electrical supply.
  • Holy crap, that can't be right. I really hope it is at least half right, though, for AMD's sake; they need all the help they can get right now.
  • This post reminds me of the great Stardock CEO interview Windows Central published two weeks ago: http://www.windowscentral.com/stardock-ceo-talks-about-start10-windows-10-directx-12-and-more I'm going to try to download the Ashes of the Singularity benchmark to test DirectX 12 on my AMD Radeon R7 M270; hopefully 8GB of RAM and an Intel Core i7 5500 are enough to run the benchmark test. Fingers crossed.
  • Wow
  • AMD basically gave up on DX11 to push Mantle, so I'm not surprised at the difference between DX11 and DX12 with AMD. Setting aside all the other factors... the takeaway here is that AMD had pretty awful DX11 performance, at times. Posted via the Windows Central App for Android
  • Looks good for the xbox one!
  • Actually, I don't care much about which card squeezes out a bit more performance. I care about quality drivers and I've never seen AMD do that.
  • Nvidia drivers are not yet optimized for DirectX 12; they said that even before the release of Windows 10.
  • Why not?  Why is that a good excuse?  It's still an Nvidia fail.
  • So, with DirectX 12 both GPUs produce about the same FPS. But the GTX 980 still uses about 33% less power, so in the end it's still advantage NVIDIA.
  • Not really, because they have compared one of Nvidia's top-tier cards to AMD's rebranded 290X, which is nearly 3 years old.
  • Yeah right.  If Nvidia was faster by the same amount as AMD is ahead in these benchmarks, you would be saying Nvidia is clobbering AMD. But since AMD is besting Nvidia, now the superior performance of AMD is nothing, completely dismissed.  All you are doing is rationalizing Nvidia's loss away.
  • DirectX 12 is mostly based on Mantle, a graphics API made by AMD and optimized for their chips. That was expected.
  • So much misinformation in these comments. AMD and nVidia went down two different paths in hardware development, not drivers. A driver software fix is not hard, and AMD does not suck at them. nVidia promised a driver fix for the above DX12 asynchronous compute benchmark fail in Ashes going on 7 months ago. Does nVidia suddenly suck at driver development? Of course not. The problem is hardware support for the DX versions, and here is the difference between the two companies: nVidia's hardware and driver optimizations are nearly as good as it gets for DX11, and that is the path they put much of their effort into. They have done very little to prepare for DX12. At the same time, AMD started putting its energy into a DX12 future (6 years ago), expecting it to arrive sooner than it did. This is why AMD's 5-6 year old GCN architecture GPUs have such great DX12 support.
    AMD GPUs are already in the consoles that use low-level APIs like DX12 to get great performance out of such minimal hardware, and AMD wanted to bring that same hyper-efficiency to the much more powerful PC platform. With PC gaming power finally unleashed, future games will be a huge leap better as developers learn all of the new possibilities. AMD's vision is of a unified gaming industry where console ports are much simpler to do using similar low-level APIs; with all consoles using AMD GPUs, ports are even easier. AMD has made all of the resources it used to achieve its unified gaming goals available to nVidia, to include their customers in the unification.
    nVidia, meanwhile, has been doing everything it can for profits and market share, harming just about everyone involved with PC gaming, from AMD customers to developers and even their own past customers. nVidia is crafty, and already they are hard at work hiding the truth. In their own words, they will work with game makers to use only certain DX12 features while ignoring the most important ones they do not support (the 2006 Gears of War DX12 remake disaster, and the Fable Legends project with its limited asynchronous compute use, now ended, with developers fired and studios closed by Microsoft). nVidia was working with these developers and used these games as benchmarks to counter the Ashes results. I don't think Microsoft liked how the collaboration made nVidia look good while limiting the ability of their DX12 baby, so Microsoft nuked the project, closed two studios, and sacked a horde over it. nVidia is accepting diabolical plan C suggestions.
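For readers wondering what "asynchronous compute" refers to in this dispute, here is a minimal sketch (our illustration, not Oxide's code): DX12 lets an application create a compute queue alongside the graphics queue, and whether work on the two genuinely overlaps on the GPU is a hardware and scheduling question, which is exactly what the benchmark argument is about.

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    int main()
    {
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
            return 1;

        // Graphics work goes to a DIRECT queue...
        D3D12_COMMAND_QUEUE_DESC gfx = {};
        gfx.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> gfxQueue;
        device->CreateCommandQueue(&gfx, IID_PPV_ARGS(&gfxQueue));

        // ...while compute work can be submitted to its own COMPUTE queue,
        // letting the GPU overlap the two if the hardware supports it.
        D3D12_COMMAND_QUEUE_DESC comp = {};
        comp.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        ComPtr<ID3D12CommandQueue> compQueue;
        device->CreateCommandQueue(&comp, IID_PPV_ARGS(&compQueue));
        return 0;
    }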