The dominant force in PC graphics has been the green juggernaut of nVidia, whereas AMD/ATI has been more of a Bedford truck over the last few years. Even last year's release of the Fury X with HBM memory didn't "wow": it was a very power-hungry, hot card, and its 4GB of memory stifled its 4K potential.
At the end of this month (June 2016) AMD releases the Radeon RX 480 amid much rumour and hype about this new line of cards. Earlier this month nVidia released the GTX 1080 as their new flagship card, hoping to corner the top end of the market with a card costing over £600 for the reference model, or, as they call it, the Founders Edition. When the Founders Edition went on sale it sold out within a few minutes at many of the major suppliers, even with its premium price tag and standard configuration.
AMD with their Radeon RX 480 are taking a different approach, aiming their card at a more mainstream market with the 8GB version retailing at $229, according to a leak by AMD Poland. At this price point it could really shake up the market if the leaked benchmarks are true! Over the last few years AMD's current architecture has been known for hot-running cards, and they have been trailing nVidia in performance. The 480 in numerous leaked benchmarks has been seen at a very respectable (approximately) 60°C, with raw power similar to a GTX 980, which retails at over twice the price. The leaks and rumours in the month leading up to the GTX 1080's release were extremely accurate, so it's not beyond the realm of possibility that the rumours about the AMD card are accurate as well and are more marketing-related than anything else. Both manufacturers have made power consumption a priority on the current architectures, following the PSU industry's push towards efficiency a few years ago with the release of the 80 Plus Bronze, Silver and Gold certifications.
Both manufacturers have reduced the size of the transistors on the silicon die, which helps with power consumption. Another move by both manufacturers is to stick with GDDR5-class memory (the GTX 1080 uses the faster GDDR5X), which is readily available but not as cutting edge as HBM (High Bandwidth Memory). HBM was used on the previous-generation Fury X card but suffered from only carrying 4GB of RAM. With the increasing popularity of 4K monitors, the move to supplying high-end cards with 8GB is important, allowing present games to run at high detail levels at just (and I mean just) over the minimum frame rate of 25 fps. Ashes of the Singularity runs at 36 fps at 4K in DX12 on a GTX 1080. So even with this new generation of architectures, smooth 4K gaming is likely to still need SLI or Crossfire multi-card setups. HBM was not used in this generation of cards due to availability problems, and it was only used on the Fury X to extend the previous architecture's life by one more card. AMD have another architecture release towards the end of this year, called Vega, which is supposed to use HBM 2.0 and could be the release that challenges nVidia's dominance in the high-end market.
I am hoping that AMD's decision to go for a more mainstream market will pay off and help them with their present financial problems, with the new cards putting them in position to release the Vega architecture at the end of the year and field a serious contender for the most sought-after graphics card. With competition between nVidia and AMD the winner can only be the consumer, with a push for better technologies, improved pricing and, most importantly, a continuing choice of card manufacturers.
See you on the other side…