Gigabyte GV-NTITAN-6GD-B review

GIGABYTE, the world’s leading premium gaming hardware manufacturer, is pleased to introduce its latest flagship gaming graphics card, the GV-NTITANBLKD5-6GD-B. Built around the highly anticipated GeForce® GTX TITAN BLACK GPU, the GV-NTITANBLKD5-6GD-B uses the latest 28nm fabrication process, features 6GB of GDDR5 memory and packs 2880 CUDA cores. The GIGABYTE GV-NTITANBLKD5-6GD-B not only supports Microsoft DirectX® 11.2 and the PCI Express 3.0 architecture, but also NVIDIA® 3D Vision Surround, G-Sync™, PhysX® and GPU Boost 2.0 technology. With GIGABYTE's OC GURU II software, the GV-NTITANBLKD5-6GD-B can extend its overclocking headroom by unlocking hardware limits, giving hardcore gamers an ultimate gaming experience built on GPU Boost 2.0 technology.

OC GURU II: lifting the overclocking limits

Built around NVIDIA's latest GPU Boost 2.0 technology, the GV-NTITANBLKD5-6GD-B gives gamers extensive overclocking headroom: GIGABYTE's OC GURU II utility can raise the maximum core voltage and tune the GPU temperature and power targets together. These features not only increase overclocking capability but also deliver the extra performance that makes for a truly enjoyable gaming experience.

4K Ultra HD Gaming Experience

The GV-NTITANBLKD5-6GD-B supports 4K Ultra HD monitors and multi-monitor gaming, backed by high-speed double-precision compute and 6GB of frame buffer memory. In addition, with NVIDIA 3D Vision™ Surround, users can experience broader, richer gaming environments with the latest technological advancements. Bring games to life with NVIDIA Surround multi-monitor gaming on a single card, supercharged PhysX and 3D Vision, plus the unbeatable power of SLI.

Bring Supercomputing power to your PC with the world’s fastest single-GPU graphics card. Take it to the extreme with 3-way NVIDIA SLI® technology, NVIDIA TXAA™, NVIDIA PhysX™ and NVIDIA® 3D VISION™ to drive the most extreme gaming PCs on the planet.


  • OC GURU II
    A brand-new, intuitive user interface makes it easier to monitor and adjust all important settings.
  • 6GB MEMORY
    Equipped with 6144MB of GDDR5 memory on a 384-bit memory interface.
  • NVIDIA® GPU Boost 2.0 Technology

    Technology for intelligent monitoring of clock speed, ensuring that the GPU runs at its peak and games reach the highest frame rates possible. It offers new levels of customization, including a GPU temperature target, overclocking controls and unlocked voltage.

Nvidia's Kepler architecture debuted a year ago with the GeForce GTX 680, which has sat somewhat comfortably as the market's top single-GPU graphics card, forcing AMD to reduce prices and launch a special HD 7970 GHz Edition card to help close the value gap. Despite besting its rival, many believe Nvidia had planned to make its 600 series flagship even faster by using the GK110 chip, but purposefully held back with the GK104 to save cash, since it was competitive enough performance-wise.

That isn't to say people were necessarily disappointed in the GTX 680. The 28nm part packs 3.54 billion transistors into a smallish 294mm² die and delivers 18.74 gigaflops per watt with a memory bandwidth of 192.2GB/s, while it tripled the GTX 580's CUDA cores and doubled its texture units – no small feat, to be sure. Nonetheless, we all knew the GK110 existed and we were eager to see how Nvidia would bring it to the consumer market – assuming it even decided to.

Fortunately, that wait is now over. After wearing the single-GPU performance crown for 12 months, the GTX 680 has been dethroned by the new GTX Titan. Announced on February 21, the Titan carries a GK110 GPU with a transistor count that has more than doubled from the GTX 680's 3.5 billion to a staggering 7.1 billion. The part has roughly 25% to 50% more resources at its disposal than Nvidia's previous flagship, including 2688 stream processors (up 75%), 224 texture units (also up 75%) and 48 raster operations (a healthy 50% boost). In case you're curious, the performance gain is "only" estimated at 25% to 50% because the Titan is clocked lower than the GTX 680.
Given those expectations, it would be fair to assume that the Titan would be priced at roughly a 50% premium, which would be about $700. But there's nothing fair about the Titan's pricing – and there doesn't have to be. Nvidia is marketing the card as a hyper-fast solution for extreme gamers with deep pockets, setting the MSRP at a whopping $1,000. That puts the Titan in the dual-GPU GTX 690's territory, or about 120% more than the GTX 680. In other words, the Titan is not going to be a good value in terms of price versus performance, but Nvidia is undoubtedly aware of this and to some extent, we'll have to respect it as a niche luxury product. With that in mind, let's lift the Titan's hood and see what makes it tick before we run it through our usual gauntlet of benchmarks, which now includes frame latency measurements – more on that in a bit.
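Before diving into the GK110 specifics, the temperature-target behavior that GPU Boost 2.0 introduces can be illustrated with a toy model. This is purely illustrative: NVIDIA's actual control loop is proprietary, and the step-down rate used here is an invented assumption.

```python
# Toy model of temperature-target boost clocking (illustrative only;
# NVIDIA's real GPU Boost 2.0 algorithm is proprietary and more complex).

BASE_CLOCK_MHZ = 836    # GTX Titan base clock, as quoted in this review
BOOST_CLOCK_MHZ = 876   # maximum boost clock, as quoted in this review
TEMP_TARGET_C = 80      # user-adjustable temperature target (GPU Boost 2.0)

def boosted_clock(gpu_temp_c: float) -> int:
    """Return a core clock in MHz: full boost at or below the temperature
    target, stepping back toward the base clock as the GPU runs hotter."""
    if gpu_temp_c <= TEMP_TARGET_C:
        return BOOST_CLOCK_MHZ
    # Assumed rate: back off ~13 MHz per degree over target, never below base.
    step_down = int((gpu_temp_c - TEMP_TARGET_C) * 13)
    return max(BASE_CLOCK_MHZ, BOOST_CLOCK_MHZ - step_down)

if __name__ == "__main__":
    for temp in (70, 80, 82, 95):
        print(f"{temp}°C -> {boosted_clock(temp)} MHz")
```

The point of the model is the customization GPU Boost 2.0 exposes: raising `TEMP_TARGET_C` (as OC GURU II allows) keeps the card at its boost clock for longer before thermals pull it back down.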

    Titan's GK110 GPU in Detail

    The GeForce Titan is a true processing powerhouse. The GK110 chip carries 14 SMX units with 2688 CUDA cores, boasting up to 4.5 teraflops of peak compute performance. As noted earlier, the Titan's core configuration consists of 2688 stream processors, 224 texture units and 48 ROPs. The card's memory subsystem consists of six 64-bit memory controllers (384-bit total) with 6GB of GDDR5 memory running at 6008MHz effective, which works out to a peak bandwidth of 288.4GB/s – 50% more than the GTX 680. The Titan we have is outfitted with Samsung K4G20325FD-FC03 GDDR5 memory chips, which are rated at 1500MHz – the same as you'll find on the reference GTX 690.

    Where the Titan falls short of the GTX 680 is in its core clock speed, which is set at 836MHz versus 1006MHz. That 17% difference is partly made up by Boost Clock, Nvidia's dynamic frequency feature, which can push the Titan as high as 876MHz. By default, the GTX Titan includes a pair of dual-link DVI ports, a single HDMI port and one DisplayPort 1.2 connector. The card supports 4K-resolution monitors and can drive up to four displays.
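    The headline bandwidth and compute figures follow from straightforward arithmetic. A quick sketch using the specs quoted above, counting peak single-precision FLOPS as two operations per core per cycle (one fused multiply-add):

```python
# Peak memory bandwidth: bus width in bytes times effective data rate.
bus_width_bits = 384
effective_rate_hz = 6.008e9          # 6008 MHz effective GDDR5 data rate
bandwidth_gbs = bus_width_bits / 8 * effective_rate_hz / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")   # -> 288.4 GB/s, matching the quoted figure

# Peak single-precision compute: cores * 2 FLOPs (FMA) * core clock.
cuda_cores = 2688
core_clock_hz = 836e6                # 836 MHz base clock
tflops = cuda_cores * 2 * core_clock_hz / 1e12
print(f"{tflops:.1f} TFLOPS")        # -> 4.5 TFLOPS, as NVIDIA advertises
```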