Nvidia GeForce GTX 200 vs. ATi Radeon HD 4800


Size isn't everything. The latest offerings from the two big players look very different, but the results are less clear cut

About three years ago, PC processors went through a technical – and somewhat philosophical – revolution.

The Intel Pentium D of 2005 was the first mainstream dual-core desktop CPU, and it signalled the end of the quest for ever-higher clock speeds. Instead, the CPU industry shifted to a theoretically more efficient multi-core, parallel processing approach.

The first signs of an equivalent change in the graphics chip market appeared in 2007, when Nvidia elected not to replace the GeForce 8800 GTX with a larger, significantly more powerful chip. Its alternative was a die-shrunk GeForce 8800 that focused on efficiency and cost savings rather than outright performance.

One step beyond

AMD's graphics subsidiary ATi took the same approach when it moved from Radeon HD 2900 to Radeon HD 3800 Series GPUs. In fact, it went one step further. ATi released the dual-chip Radeon HD 3870 X2 and announced that as far as it was concerned, the game was up for big graphics boards based on a single monolithic GPU die; the future would be multi-chip.

All of which sets up a rather intriguing backdrop for the introduction of a pair of new GPUs from ATi and Nvidia. Once again, ATi has focused on efficiency and affordability.

Nvidia, on the other hand, has gone old school and delivered a single-die graphics chip of truly mammoth proportions.

Nvidia's attempt

You might think that these new pixel pushers are not directly comparable, but the competition between ATi and Nvidia will be as fierce as ever. The difference now is that the contest is no longer a clean fight between two graphics chips. Instead, it's a battle between two distinct design philosophies and business models.

First out of the blocks is Nvidia's beastly new GeForce GTX 200 series. By any metric, it's a monumentally powerful – even intimidating – new graphics chip. The GPU at its heart contains an incredible 1.4 billion transistors, roughly double the number found in GeForce 8800 series GPUs. Consequently, the GTX 200 series packs some seriously beefy specifications.

In terms of functional units, the shader count is up from 128 in the GeForce 8800 series to 240. Things get a little more complicated when it comes to comparing new and old texture processing and pixel output per clock. Suffice it to say that with 32 render output units and 80 texture address and filter units, Nvidia has boosted the functional heft of every part of the new GPU's architecture by at least 25 per cent – and usually by much more.

Long memory

The final piece of the GTX 200 puzzle is memory technology. Here Nvidia has also taken the big iron approach by pairing established GDDR3 memory with a beefy 512-bit interface.

All told, the GTX 200 is a massive 576mm² chip. It's so big that no more than 100 candidate dies can be squeezed onto each of the 300mm wafers that Nvidia's production partner TSMC processes on its 65nm node. To put that into context, Intel can cram approximately six times as many dual-core Penryn CPU dies or 25 times as many Atom processors into the same space. In other words, Nvidia's new GPU is an extremely expensive chip to manufacture.
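If you want to sanity-check those figures, a rough estimate is easy to run. The sketch below assumes 300mm wafers, approximate die sizes of around 107mm² for a dual-core Penryn and 25mm² for an Atom (neither figure is quoted here), and a standard first-order dies-per-wafer formula.

```python
import math

WAFER_DIAMETER_MM = 300  # assumed standard 300mm wafer

def gross_dies_per_wafer(die_area_mm2: float) -> int:
    """First-order estimate: wafer area / die area, minus an edge-loss term."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

for name, area_mm2 in [("GTX 200 (GT200)", 576), ("Dual-core Penryn", 107), ("Atom", 25)]:
    print(f"{name:16s} ~{gross_dies_per_wafer(area_mm2)} candidate dies per wafer")
# GT200 lands at roughly 95 candidate dies per wafer, in line with the
# 'no more than 100' figure, while the far smaller CPU dies fit several
# hundred to a few thousand times over.
```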

Unsurprisingly, it's also extremely expensive to buy. At launch, two models are available, the GeForce GTX 280 and the GeForce GTX 260. The former is the full-fat offering with all of the abilities detailed above, core and shader clocks of 602MHz and 1,296MHz respectively and a memory frequency of 2.2GHz. It's yours for around £400.

The 260 model is comparatively cut down, with 192 shaders, 64 texture address and filter units, and 28 render outputs. Operating frequencies are likewise somewhat lower, at 576MHz, 1,242MHz and 2GHz for core, shader and memory respectively. The 260 must also make do with a 448-bit memory bus. The starting price for this slightly more modest GTX 200 is £250.

ATi's option

The GTX 200 series are big boards with suitably bulky price tags. At first glance, then, it certainly looks like ATi has opted not to compete.

Its new Radeon HD 4800 series is a more restrained effort, weighing in at just 956 million transistors and 260mm², the latter figure aided by the use of a slightly finer 55nm production process. It's an altogether smaller, less costly GPU to manufacture.

Initially, there are two boards in ATi's new line-up, the 4850 and the 4870. Both share the same number of functional units and are specified with 512MB of graphics memory. The key differentiators are clock speeds.

Fast clocks

The 4850 runs at a core clock of 625MHz and a memory frequency of 2GHz, while the 4870 ups the ante to 750MHz and 3.6GHz. The 4870's startlingly fast memory speed is courtesy of the first ever use of GDDR5 memory on a consumer graphics card. As for pricing, the 4850 model starts at £125 while the 4870 is a £200 board.
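A quick calculation shows why that memory speed matters: peak bandwidth is simply the bus width in bytes multiplied by the effective data rate, and on that basis GDDR5 lets the 4870 close much of the gap to the GTX 280's 512-bit interface. Note that the 256-bit bus width used for the 4800 series below is an assumption rather than a figure from this article.

```python
def bandwidth_gb_per_s(bus_width_bits: int, effective_rate_ghz: float) -> float:
    """Peak memory bandwidth = bus width in bytes x effective data rate."""
    return (bus_width_bits / 8) * effective_rate_ghz

# 256-bit bus width for the 4800 series is an assumption, not quoted above.
print(f"GTX 280, 512-bit GDDR3 @ 2.2GHz : {bandwidth_gb_per_s(512, 2.2):.0f} GB/s")  # ~141
print(f"HD 4870, 256-bit GDDR5 @ 3.6GHz : {bandwidth_gb_per_s(256, 3.6):.0f} GB/s")  # ~115
print(f"HD 4850, 256-bit GDDR3 @ 2.0GHz : {bandwidth_gb_per_s(256, 2.0):.0f} GB/s")  # ~64
```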

Here's the twist, though. Despite the smaller die size, transistor count and pricing, the new 4800 family comes awfully close to Nvidia's big beast in terms of raw capability. In fact, it actually outpoints the GTX 280 for pure computational grunt, if not 3D rendering throughput. The top spec Radeon HD 4870 board is rated at a monstrous 1.2TFLOPs where the GTX 280 manages just 933GFLOPs.
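Those headline numbers come from the usual peak-throughput arithmetic: shader count multiplied by shader clock and by the FLOPs each shader can issue per cycle. The sketch below assumes the issue rates the two companies are generally understood to quote, which aren't spelled out above: three FLOPs per shader per clock for Nvidia (a multiply-add plus a multiply) and two per stream processor for ATi (a multiply-add).

```python
def peak_gflops(shader_count: int, shader_clock_ghz: float, flops_per_clock: int) -> float:
    """Peak single-precision throughput in GFLOPS."""
    return shader_count * shader_clock_ghz * flops_per_clock

# Assumed issue rates: 3 FLOPs/clock per Nvidia shader, 2 per ATi stream processor.
print(f"GeForce GTX 280 : {peak_gflops(240, 1.296, 3):.0f} GFLOPS")  # ~933
print(f"Radeon HD 4870  : {peak_gflops(800, 0.750, 2):.0f} GFLOPS")  # 1200, i.e. 1.2 TFLOPS
```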

The secret

To understand how ATi's new GPU pulls that trick off, consider the following facts.

Compared with the existing Radeon HD 3800 series, the 4800 family has two and a half times the number of shaders (800 in total, though note that ATi's and Nvidia's shaders are not directly comparable) and texture units. And yet its transistor count has risen by just 44 per cent. In other words, what AMD has done with the 4800 series is far more sophisticated than simply creating an oversized 3800 core.
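To put some rough numbers on that claim, the quick sketch below uses the commonly cited RV670 (Radeon HD 3870) baseline of 320 stream processors and around 666 million transistors; neither figure appears in this article, so treat them as assumptions.

```python
# Baseline RV670 (Radeon HD 3870) figures are assumptions, not quoted above.
shaders_3800, shaders_4800 = 320, 800
transistors_3800, transistors_4800 = 666e6, 956e6

print(f"Shader count : x{shaders_4800 / shaders_3800:.1f}")                       # x2.5
print(f"Transistors  : +{(transistors_4800 / transistors_3800 - 1) * 100:.0f}%")  # ~+44%
```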

Every aspect of the chip's architecture has been overhauled with a view not just to performance but also to efficiency. That includes the welcome re-introduction of standard box-filter algorithms for anti-aliasing.

With the launch of the Radeon HD 2900 series early in 2007, AMD dabbled with a new 'adaptive' approach towards smoothing the jagged edges of rendered objects. In theory, it was more sophisticated. In practice, it was just plain slow.

Green flagship

A further benefit of ATi's focus on efficiency is power consumption. Nvidia's GTX 280 guzzles up to 236W, while the flagship Radeon HD 4870 consumes no more than 160W. Also, of the two big players in PC graphics, only ATi's GPUs support the latest 10.1 revision of Microsoft's all-powerful DirectX multimedia API.

Until ATi releases the upcoming dual-chip version of the HD 4870, we won't know exactly how this latest round of the ATi versus Nvidia battle is going to play out. But after nearly two years of Nvidia dominance, things are already looking much more promising in the ATi camp.