Tek Wisdom

Intel’s Core i7-7700k: The Return of the 5 GHz OC

Dateline: 1-11-17

For all of the complaints about Kaby Lake, there is one thing that can be said in its favor: it is no slouch in performance. Intel has just released its 7th generation Core processor, code named Kaby Lake and taking the 7000 series numbering pattern. We got our hands on the flagship, the i7-7700k, a four core, eight thread, 4.5 GHz (turbo) addition to the current 14nm lineup. On paper, this CPU shouldn't exist. Intel has long conformed to its tick-tock release pattern, alternating die shrinks with architecture revisions. Recently, they switched to a three part release cycle called PAO: Process, Architecture, Optimization. This is the first chip to be released in the Optimization phase of that new scheme.

We didn't really know what to expect from Kaby Lake because no one had ever seen an Optimization release before. The idea is that instead of dropping to a smaller die, like the 10nm process Intel had planned to be on if Moore's Law had held, they stay at 14nm and tweak the process for the best speeds. Thumbs up, Intel, thumbs up. While we would have loved to see what a 10nm chip could have done, this may hold us over until the release of Cannon Lake and Ice Lake.

That being said, Kaby's innovation is aimed mainly at mobile. The bulk of the changes benefit mobile clients more than desktop users, but two changes have a big impact on performance minded people. First, this chip overclocks like a champion. In our testing, we got our sample up to 5.2 GHz, which required pushing the VCore to 1.4 volts and brought temps to 82°C. Second, there is an upgrade to the chip's video encode and decode capability. This should finally bring 4K proliferation to the desktop, where it has been a struggle without add-on video processing, and difficult even with it.

The Kaby chips have a dedicated fixed-function media engine built to handle traditionally processor or GPU intensive work. The two codecs of most interest to users are 4K HEVC and VP9. The media engine is so capable it can decode eight standard 4K HEVC streams simultaneously at up to a 120Mbps bit rate each. Ridiculous. The VP9 codec is used by Google, who owns YouTube, and all of their high quality video is encoded in it. So basically any time you watch high def YouTube, you use the VP9 codec. I'm sure no one out there falls into that category.
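For a back-of-the-envelope sense of scale (our arithmetic, not Intel's spec sheet):

8 streams x 120 Mbps = 960 Mbps ≈ 120 MB/s of compressed video

That is nearly a full gigabit line of HEVC being chewed through in fixed-function hardware, with the CPU cores barely waking up.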

Media companies are beginning to feel safer having a dedicated encoder/decoder on the CPU itself. There is talk from Netflix about finally letting desktop users stream 4K video, something only phones and TVs have been able to do so far. Intel claims to have added DRM updates to the CPU that would certify it as a platform for 4K Netflix streaming. It is already certified for Sony's 4K movie and television streaming service, ULTRA, which up until this point was restricted to Sony TVs.

The last note on video is about ultra-high definition, which is becoming more commonplace on TVs these days. Kaby has some tweaks to enable high dynamic range (HDR) and the extended color gamut known as the rec.2020 spec. Currently, HD TVs use the rec.709 color space, which is slightly narrower than the P3 color space used in digital movie theatres. HDR brings deeper blacks mixed with the more vivid colors of rec.2020 (particularly the reds), making Kaby more desirable for video enthusiasts using high end panels.

With this new chip comes a new chipset. Luckily, no new socket (we're on LGA 1151, by the way), but Z170 is dead, replaced by Z270. Z270 has some worthwhile updates, like 24 lanes of PCI Express 3.0, 10 USB 3.0 ports, and support for Thunderbolt 3, all native to the 200 series chipset. Kaby Lake and Skylake are forwards and backwards compatible across all LGA 1151 mobos, but make sure you have the latest BIOS revision for stability.

The rule for upgrading still applies: if you don’t absolutely need one of the new features and are within one or two generations of Kaby, save your money. If not, you can pick up the 7700k for $350 online.



The Two Faces of nVidia: the GeForce GTX 1080


Dateline: 6-25-16

As exciting as the new releases from Intel are, this really is Christmas in July thanks to nVidia releasing its new flagship video card, the GTX 1080. Like most flagship models, this card is touted as the bee's knees in terms of performance at any price. It's really all about crushing the competition and pushing benchmark numbers through the ceiling. Spoiler alert: that's what you're going to get with this card.

The GTX 1080 is the fastest single-GPU card in existence right now. Not even the Titan X with its absurd price tag can outperform nVidia's new baby. This is also the first semi-affordable solution to 4K gaming. Nothing does 4K at high settings with anti-aliasing on, but with it turned off you can average 60 fps with settings on high. At 1080p, you don't even want to know. We didn't see numbers below 80 on FRAPS.

Let's talk a little about the technology behind the 1080. It's powered by a 16nm FinFET Pascal GPU. This is a brand new lithography for nVidia, the one that has been delayed for what seems like forever as the industry struggled to get under 28nm. There is new memory onboard too, called GDDR5X. It has a higher frequency, more capacity per chip, and better power efficiency. There are 8 GB of it on the 1080, running at 10,000 MHz (effective) on a 256-bit memory bus. The TDP is slightly higher on this card vs. the 980, drawing about 15 more watts, and it gets hotter faster due to the fan spinning at lower RPMs. That said, it's still fairly cool with an internal temp limit of 82°C.
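A quick back-of-the-envelope on what that memory setup delivers (our arithmetic, using the usual bits-to-bytes conversion):

10,000 MHz effective x 256 bits / 8 bits per byte = 320 GB/s peak bandwidth

Compare that to the 980's 7,000 MHz x 256 bits / 8 = 224 GB/s, and you can see GDDR5X buying a 40%+ bandwidth bump without widening the bus.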

Everything else is the same, but more of it or faster. The transistor count is up to 7.2 billion, there are 2,560 CUDA cores and 64 ROPs, and the core clock is 1,607 MHz, boosting to 1,733 MHz, outputting through either DisplayPort 1.4 or HDMI 2.0(b). But this is not the perfect card, or even the best of what nVidia is capable of.

Let's start with the elephant in the room. This isn't the uninhibited Pascal. What we have here is a pared-down consumer version of the much pricier Tesla P100 GPU compute card. This is the GP104, the successor to outgoing Maxwell's GM204, not the big GP100 from the Tesla. We assume nVidia is keeping its full-on GP100 for future cards, like a GTX 1080Ti or updated Titans. That's not a bad thing for prices. The Tesla is clear into the 5 figure range and no one playing Far Cry is shelling out that kind of cash for smoother pixels.

The other thing that bothers us is the lack of HBM 2.0. We saw the first iteration of High Bandwidth Memory on AMD's last generation of top-end cards, and it is a great advancement that we were hoping would be the standard memory interface from here on out. While the 256-bit bus on the GTX 1080 is sufficient, it would have been really comforting to get the newer HBM 2.0 for a little future proofing.

The GTX 1080 is available now for $599 (or $699 if you want the founder’s edition, which you don’t because it’s a huge waste of money). If you have the last generation of cards from nVidia, wait and see what the price drops will look like in 6 months and buy one for Christmas, or just hold out until next summer when the next generation will be released and you will see the biggest difference.



The Broadwell-E Breakdown


Dateline: 6-20-16

In the event you have had your head stuffed in your case for the last few weeks, Intel has stepped its game up with its latest release in the high end CPU market, Broadwell-E. These are the chips we were waiting for to replace the aging Haswell-E line, now roughly 2 years old. This is a tick in Intel's tick-tock release schedule, which means it shouldn't be too much of a deviation from the previous architecture. It's not, but there are a few big changes we wanted to share with everyone.

First, Intel changed their lithography and now produces 14nm silicon. It took them a while to get it right, and as we have seen with graphics card manufacturers, things get more touch and go the closer we creep toward the atomic scale. There is a limit to how much smaller the space between transistors can get before they start bleeding into one another, causing errors. The good news is 14nm is doing very well, allowing more speed with less power draw and less heat.

The most powerful chip in the lot is the i7-6950X. Usually the top end desktop CPU is an 8-core, $1,000 beast that is slower in clock speed but makes up for it in multi-threaded applications. That is still partially true, at least the part about it being great at multi-threaded applications. The new i7-6950X is the world's first consumer 10-core processor. It's also the most expensive release the top end market has seen, coming in at $1,500. That's a lot of cheddar.

As awesome as 20 threads are, they come in at a 3 GHz base and a 3.5 GHz turbo boost, which is only just as fast as the outgoing i7-5960X, which was $500 less at release. Speaking of similarities, it's also the same 140W TDP and same 40 PCIe lanes. So what costs the extra $500 besides the cores? The base DDR4 memory support going to 2400 MHz from 2133 MHz. Big whup. Oh, did we also mention it's harder to overclock? We topped out at 4.3 GHz where our 5960X went to 4.4. Somehow the 6950X still won the single-threaded tests though. Weird.

But there is better news. Usually the enthusiast line of processors has 3 offerings at the $450, $650, and $1,000 price tiers. The new Broadwell-E release has an extra chip at the top, so there are actually 4 here. The others are: a $1,000 i7-6900k, which is a duplicate of the 5960X but faster; a $600ish i7-6850k six-core; and a fascinating $400 i7-6800k that is so cheap we are hoping the Skylake 6700k will get cheaper to avoid cannibalization.

These are all still very new and being tested as we speak, so there will be more to come. Speaking from experience at this point, if you are on the “I have to have the latest and greatest” bandwagon, nothing will stop you from upgrading, which is your God-given right as Americans. But you probably won’t see any noticeable differences except in benchmarking software. Benchmarking, like Facebook, isn’t real life. If you are more than two generations old in your CPU, you will feel a difference, and quite the difference at that. If you decide to upgrade from Haswell-E, you are in luck, because Broadwell-E is a drop-in to the LGA 2011v3 socket with nothing more than a BIOS update for good measure. Best of luck to Broadwell-E during its release!


So You May Have Noticed Our Body (of work) Has Been Going Through Some Changes…

Dateline: 4-1-16

We have just launched the official debut of our revamped FoxTek site! Yay! It has been a long and trying road getting everything perfect for our customers. In the beginning, the first site was built by a single contractor and was completely proprietary. It worked pretty well but was difficult to service without going through him every time. We moved from that to a firm who built the backbone functionality of the site and let us build some of the more basic pages, which gave us our first look at a content management system, Joomla. We have been with that for a while now and needed something that was 100% in-house for full control. Enter WordPress, our Joomla replacement CMS.

We have been liking WordPress much more than Joomla, and the ease of use has not been as bad as we feared. What you see on your end is slightly different styling cues on the pages and a change in the builders that let you customize rigs. There is much more than meets the eye, however.

We did do a complete rewrite of our text, as the original text was years old and needed refreshing. We imported some new pics as well, since the tech changes so fast. But our main focus has been on the construction pages and their functionality. WordPress has wonderful add-ons that allow for complex items to be built and bundled with compatibility checks. One is called Composite Product Builder, and we’ve been getting our money’s worth out of it. 

For our customers, Composite Product Builder is a great thing. First, we have reduced inventory even more, implementing even stricter standards on our parts, so when you click on those combo boxes, you'll be given simpler and better options. When you choose an option, you can now see a picture of that item and a “more info” link that will take you right to the manufacturer's page for it. Next, as you are selecting parts, if one item conflicts with a previous selection, you will be asked to fix the conflict before purchasing, saving both of us time and frustration. (Before this, an expert builder reviewed your order and, if there was a conflict, had to contact each person individually seeking resolution. Fun times.) Finally, we had said experts put together four example builds that, while still allowing customization, are ready for one-click purchasing. They are constantly updated with the best combination of technology available, and can be compared side-by-side by clicking on the “GAMING” button above.

So, our loyal customers, we hope you like the new site and functionality. Let us know what you think by writing to admin@foxteknology.com with comments or suggestions. Enjoy!


 

High Apple Pie in the Sky-lake


Dateline: 9-15-15

Now that Skylake is here, Broadwell looks like that president who only served a short time in office before contracting an illness and dying. Its reign was short and sweet. We apologize to anyone who bought into it just to see your technology go obsolete in record time. Patience, young padawan. But to everyone who held out, you can pull your wallets out of their lock-boxes now; it's time to use them.

Skylake comes to us in the two standard varieties of Intel's mainstream line: a hyper-threaded, unlocked, four-core i7 in the $300 range and its non-hyper-threaded i5 little brother in the $200 range. In this case, their names are the Core i7-6700k and Core i5-6600k, priced at $339 and $242, respectively. Their speeds are 3.5GHz (3.9 turbo) for the 6600k and 4.0GHz (4.2 turbo) for the 6700k, both carrying Intel HD 530 graphics and pulling 91 watts of TDP. They differ only in L3 cache, with the 6700k hauling 8MB to the 6600k's 6MB.

Intel has changed very little from previous generations, with the exception of removing the FIVR (fully integrated voltage regulator) from the CPU die and moving it back to the motherboard. This is a great move: not only did the FIVR make the chip hotter and harder to overclock, but motherboard manufacturers generally handle voltage regulation much more accurately than the CPU itself. It also lets high end mobo makers provide better regulation than entry level equipment, making the entry level cheaper and giving some sort of justification to more expensive motherboards.

Speaking of motherboards, you'll need a new one if you choose to upgrade to Skylake. It's not just a new socket (LGA 1151), but a new chipset (Z170) with new features. Storage is set to make leaps and bounds in the upcoming year, and Z170 is ready for it. It comes equipped with 20 lanes of PCIe for graphics loads as well as the fancy new M.2 drives with the NVMe protocol, giving us access to speeds more than 4 times what we see from the fastest SATA SSDs today. Also, support has been added for RAID 0, 1, and 5 on the PCIe bus, letting users stripe M.2 drives for even more speed, reaching up to 3500MB/s.

There continues to be support for six SATA 6Gb/s drives, 10 USB 3.0 ports, and 14 USB 2.0 ports. Unfortunately, there is no native support for USB 3.1, but most mobos we've seen so far have at least one 3.1 port and a Type-C connector.

We also finally get a piece of that sweet DDR4 memory that Haswell-E has been keeping to itself. Z170 supports up to 64GB of memory, from 2400MHz to a blistering 4000MHz, and whatever else memory manufacturers can produce. And more good news: since DDR4 has been out since October of last year, prices are more reasonable than the arm and leg demanded before.

But what about performance? Skylake overclocks like a champion. No, this won't be the chip that gets us to 5GHz, but we got 4.7 stable, and we've seen CPU-Z shots of guys with 4.8. The nice part about overclocking is that we return to a completely unlocked bclock (base clock) increment, unlike previous generations. Before, we were tied to the standard 100MHz, 125, or 166 straps, but now we can adjust in 1 MHz increments if our hearts so desire. We achieved our OC using 100 x 47, but we could have gone custom loop liquid cooling and stepped up to 101 x 47 to see if we could creep closer to 5GHz without losing stability.
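For anyone new to the multiplier game, the final clock is just bclock times multiplier, so our numbers work out like this:

100 MHz x 47 = 4,700 MHz (our stable 4.7 GHz OC)
101 MHz x 47 = 4,747 MHz (the single-step bclock bump we passed on)

That 47 MHz nudge is exactly the kind of fine-grained move the old 100/125/166 straps never allowed.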

Even at stock speeds Skylake is roughly 10% faster than Devil's Canyon, making it about 20% faster than Haswell, and about 30% faster than the 3770k…you get it. Intel stays in step with 10% increases per generation, which means unless you are using the trusty old 2500k, you won't feel pressed to upgrade unless you want a new M.2 boot drive or DDR4 memory, which we totally do. The features highlighted in Z170 make Skylake a high priority on our Christmas list this year, and the new memory will pair nicely with the accompanying M.2 drives for stocking stuffers! We'll meet you guys at the Amazon checkout line.


 

So Many GPUs, Which Do I Choose?

Dateline: 9-15-15

Christmas is coming, people, and we need to make some decisions about what we want so we can inform our loved ones before they make the digital meat thermometer mistake of '06 all over again. For most of us, the big choice will be between Skylake and a new GPU. That being said, which GPU is the best these days, and what if I don't want to make mortgage payments on it for 20 years? We pause and evaluate here.

Before we do, we must decide how to split the field into categories we can narrow down. It used to be by price tier, which is fairly obvious, but not so useful anymore. That is because people who spend money on high end PCs usually use them for gaming, and most game at the native resolution of their monitor, which they have had for years. We are not here to discuss “best resolution” since even whispering that phrase leads to hours-long office discussions that divide the company into warring factions. What we will do is divide the market by which cards are best for which resolution, and compare that way.

Looking at what is available now, AMD has the most recent releases, but nVidia has had longer dominance with their existing offerings. We will not likely see any major changes from either side until late 2016, because the next step is a lithography drop from the existing 28nm, past 20nm, all the way down to 16nm FinFET, which is computer-ese for 3D transistors. nVidia will call their next architecture Pascal, and AMD's will be called Arctic Islands.

Pascal plans to offer double the transistor count of the current GM200 chip, which can equate to almost double the speed, depending on what other technologies they come up with between now and then. AMD is promising a 2x performance-per-watt target compared to what it has now, using the same 3D transistor technology. Both companies will move to a new memory, called High Bandwidth Memory (HBM) 2.0, which allows up to 8Gb of memory per die and stacks 4-8 dies on top of each other. Doing the math (spelled out below), that gives us cards with 16 or 32GB frame buffers. Hello, 4K gaming.
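Here is that math, assuming the headline HBM 2.0 figures (8Gb dies, 4 to 8 dies per stack, and the customary four stacks per card):

8 Gb per die = 1 GB per die
4 dies per stack x 4 stacks = 16 GB frame buffer
8 dies per stack x 4 stacks = 32 GB frame buffer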

So, let's start with the elephant in the room: 4K. If this is your resolution, there are two cards to consider, the GTX Titan or the 980Ti. The cheapest you can spend is around $650, but your monitor was probably twice that much, so we're sure you're fine with that. Going lower than these cards is not recommended because anything smaller than a 6GB buffer won't hold up for future titles. Even the high end of AMD's line tops out at 4GB buffers, and despite using early HBM 1.0, they can't cover enough ground to catch nVidia's best and brightest.

Stepping down to 2560 x 1600 (1600p), the field opens up a bit more. Of course the 4K cards cover this resolution with ease, but what if you don't have half a grand to spend on a graphics card? AMD has the Fury X and the Fury Tri-X, and nVidia has the GTX 980. Now, with AMD, if you are ever confronted with the option of the higher or lower card, go with the lower. It usually performs within 5% of the more expensive card, but at $100 cheaper. It's no different here; the Tri-X performs almost equally with the Fury X at $90 cheaper. Comparing it to the GTX 980, the results are a wash. We would go with the AMD over the nVidia in this case due to the age of the nVidia and AMD's use of the new memory standard in the Tri-X.

At 1920 x 1080 (1080p), we have the Radeon R9 390X, the vanilla R9 390, and the GTX 970. Same story as at 1600p: the two Radeon cards perform almost equally, except the standard 390 is $105 cheaper than the 390X. The 970 has a weird problem that causes it to lose to the 390: its frame buffer is split, totaling 4GB as 3.5GB + 0.5GB. In games that require all 4GB, it isn't cutting it, lagging in tests that show 3.5 + 0.5 isn't the same as just 4GB.

Going any lower than this limits your options to the AMD R9 380. The nVidia GTX 960 also exists in this tier, but its 2GB buffer essentially eliminates it from the conversation as a future-looking purchase. As soon as you install this card, you're looking for your next one. The 380 also has 2GB variants out there that you have to watch out for; a 4GB version is fairly comfortable at 1080p but performs well below the 390. There is a reason it is $100 cheaper.

In conclusion, if you are in the market and plan to use your new video card for gaming, take our advice: unless you game at 4K, go with the less costly card from AMD. It won't outperform its nVidia counterpart, but it will come close enough to justify the price difference and then some.


 

Max Power with Maxwell


Dateline: 11-1-14

There has been much movement from nVidia in the last month, so we have tasked out the coverage to three people on staff: one to cover the Maxwell architecture (this article), another for the GTX 980, and the last for the GTX 970. Maxwell has so many new goodies, we couldn't do a combined piece on both the chip and the cards. But we digress; let us discuss nVidia's new jewel: the GM204.

The GM204 stays on the 28nm process as the successor to the GK104, the smaller sibling of the GK110 that powers the previous flagship GTX 780 and Titan. We are all waiting on the 20nm chips to blow our minds, but Taiwan can't get production stabilized yet. While we anticipate the pot of gold at the end of the rainbow that will succeed the GK110, nVidia gave us a very satisfying appetizer. If you are worried that a chip built off the GK104 couldn't surpass the GK110, you'd be wrong, as nVidia doesn't know how to build anything that isn't better than the previous generation. Before we talk about performance, we should highlight some features which accentuate the power of the GM204.

Without getting into the specifics of the 980 and 970, since both are powered by Maxwell, they share some features and stats. Besides the GM204 chip, there are 4GB of memory clocked at 7GHz (effective) pushed across a 256-bit bus, and 64 ROPs. The GTX 980 uses 16 streaming multiprocessors (SMs), which drive all the rest of the performance stats besides the core clock of the chip itself. The 970 scales the SM count down by 3, so the rest of its stats reflect that roughly 20% reduction.
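To put numbers on that scaling (Maxwell packs 128 CUDA cores per SM):

GTX 980: 16 SMs x 128 cores = 2,048 CUDA cores
GTX 970: 13 SMs x 128 cores = 1,664 CUDA cores

Cutting 3 of 16 SMs is a 3/16 ≈ 19% reduction, hence the roughly 20% haircut across the dependent stats.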

Besides hardware, there are numerous software tweaks to aid performance and efficiency. To offset the narrow 256-bit memory bus, nVidia developed a delta color compression (DCC) algorithm which improves bandwidth efficiency by 25% on average. Multi-Frame Sampled Anti-Aliasing (MFAA) rotates a pixel's sampling points from one frame to the next, so that two points can simulate four whose locations remain static. It is claimed to be up to 30% faster than the visually equivalent level of Multi-Sampling Anti-Aliasing (MSAA). Voxel Global Illumination (VXGI) is the next step for ambient occlusion, which was previously pushed by the driver to aid in light reflecting off of surfaces. VXGI bounces light off surfaces in real time, letting developers choose how many cones of light they want to use and the resolution of the bounced light. Dynamic Super Resolution (DSR) combines super sampling with a custom filter, crunching resolutions higher than your monitor can display and shrinking the result back down to the resolution of your monitor, giving you 4K-quality graphics on an HD screen.
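If you want an intuition for what DSR is doing, here is a toy sketch in Python (our illustration, not nVidia's code; we substitute a simple box average where DSR actually applies a 13-tap Gaussian filter):

import numpy as np

def dsr_downsample(hi_res: np.ndarray, factor: int = 2) -> np.ndarray:
    """Average each factor x factor block of pixels down to one pixel.
    DSR proper uses a fancier Gaussian filter; a box average keeps the idea visible."""
    h, w, c = hi_res.shape
    return hi_res.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

# Pretend the GPU rendered internally at 4x the pixels (2x per axis)...
frame_hi = np.random.rand(2160, 3840, 3)    # stand-in for a 4K render target
# ...then DSR filters it back down to the monitor's native 1080p.
frame_native = dsr_downsample(frame_hi, 2)  # shape: (1080, 1920, 3)

Every native pixel ends up informed by four rendered samples, which is where the extra edge smoothness comes from.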

To get onboard with the upcoming virtual reality headsets, nVidia added a few features specific to VR gaming requirements. Its VR Direct initiative reduces latency from 50ms to 25ms using a combination of code optimization, MFAA, and Auto Asynchronous Warp (AAW). AAW displays frames at 60fps even when performance drops below that. Since 60fps is the minimum VR headsets need to display per eye (120fps total) and most hardware cannot hold that, AAW fills the gap to keep the wearer from getting nauseous. Auto Stereo is for games built without VR headsets in mind. It essentially upsamples games to render stereoscopically when they weren't originally intended to.
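The frame-time arithmetic shows why AAW matters (our math): at 60 fps per eye, the GPU has to finish 120 frames every second, which is 1000 ms / 120 ≈ 8.3 ms per frame. Miss that budget without AAW and the headset shows a stale frame, which is exactly what turns stomachs.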

All of these innovations and advancements taken into account, Maxwell pushes the envelope pretty far forward with the GTX 980 and 970, all while staying on the 28nm process. It's very surprising to see the GTX 680's successor not just keep up with, but outperform Kepler's GTX 780 while coming in at a lower price point. We'll save that discussion for the 980 review, but we look forward to all the cards that will feature the new Maxwell architecture.


 

nVidia’s Mighty Max: the GTX 980


Dateline: 11-1-14

The first card we see using the new Maxwell architecture is the GTX 980 flagship, and my oh my are we impressed. If you do not know about Maxwell, read up on all the tricks nVidia put into the 28nm process to coax some extra performance out of it. Besides the specs shared by all Maxwell cards, the GTX 980 has 2048 CUDA cores from 16 streaming multiprocessors, a core clock of 1126 MHz boosting to 1216 MHz, and pulls a 165 watt TDP through its two 6-pin power connectors. The MSRP for the reference board is $550.

This would be a long article if there were many intricacies needed to justify a $550 price point, or if the performance were give and take against its predecessor or rival AMD cards, but none of that is true. In fact, articles about flagship nVidia products are usually short because we can sum up the whole thing in one sentence: it's the best. But because we know you want a little more than that, here is some comparison data.

Looking at a reference GTX 980 and 780Ti and a Radeon R9 290X, the scores are split between the 980 and 780Ti, with the 980 taking Tomb Raider, Metro: Last Light and 3DMark Fire Strike by wide margins. The 780Ti wins Batman: Arkham Origins, Unigine Valley and Unigine Heaven, but only by 3 fps, actually tying on the Heaven benchmark. The reason this is actually an all-out win by the 980 is twofold: it's $100 cheaper than the 780Ti (which is now discontinued), and it's only the reference board which displays this closeness. A factory OC'd card from any vendor (we used one from MSI) wins all categories. The only time you run into problems is at 4K resolution, when larger memory sizes leave the 4GB 980 behind.

Don't be hesitant about the reference board though. Two 6-pin power connectors are easier to come by than the twin 8-pins needed by most modified PCBs, and though the aftermarket cooling is excellent, the reference cooler will keep all temps under 80°C. The only reason to be cautious is the 980's little brother, the GTX 970, but that is for another article. As for the flagship, it's sitting pretty at the top of the hill, putting its metaphorical foot on the skulls of previous generation cards. All hail.



The GTX 970, a Silent Assassin


Dateline: 11-1-14

The second card to use nVidia's new Maxwell architecture is the GTX 970, or in this case a Gigabyte GTX 970 G1, because nVidia won't be putting out a reference 970 to the public. Don't be dismayed. This decision was for the best. If you've been hiding under a rock somewhere, you might not have heard of the GM204 chip from the Green Giant, called Maxwell. If that is the case, read up on it to see how hard nVidia has been working to be the best. Once you're caught up, witness the surprising application of this new chip in the GTX 970. We say surprising because this is not the $500 plus GTX 980 flagship, but a $390 middleweight punching way above its weight class.

The reference specs for the GTX 970 are: 1664 shaders from 13 SMs, a 1050 MHz base clock (boost to 1178 MHz) and a 145 watt TDP. Gigabyte pushes the clocks to 1329 MHz with the boost on and keeps it cool with a triple fan Windforce heatsink. Pushing this card only nets a reading of 58°C with the fans inaudible, even just 4 feet from our case. To recap, that's 113 MHz above the top speed of a GTX 980, except 22°C cooler and quiet as a church mouse.

But that's not all! Seeing as how this is a card built to be overclocked, we figured we'd put in our two cents as well. We pushed the clocks to almost 1500 MHz (stable) and kept temps under 65°C. Seeing how easy it was, we ran our benchmarks at our OC instead of the factory OC to see just how close to the 980 we could get.

We tested at 4K resolution to really put the pressure on these cards. What we observed is the 970 equaling or surpassing the 980 on some tests, and both beating out the next best thing from AMD, the R9 290X. The only reference board was the GTX 980, because the AMD we used was the Sapphire Tri-X OC and we double OC'd our 970. If you put in the work like we did, the performance of a stock 980 could be yours for less than $400.

In response to this performance, AMD did the only thing it could: slashed prices. They took $100 off the R9 290X to put it at $450, which they felt was enough to overcome the performance gap with the 980. What they didn't count on was the 970 coming from underneath and overtaking the R9 290X even after the price cut. How embarrassing. This still may put people in a tough spot between a 980 and 970. There is over $100 difference between the two, and there really isn't $100 of performance gap. But some people just need to have the best. AMD really shouldn't come into the picture unless there is even more of a price cut to redeem them.

This is why the GTX 970 is the Silent Assassin. Nobody would have expected the most interesting offering from nVidia to be not its flagship, but a midrange card that performs like its flagship. As more vendors release their custom boards, things may continue to get better, as the odds of the first GTX 970 released being the best are low. That only makes us more excited for the future graphics wars. May we all reap the benefits of nVidia's quest to destroy benchmarks every 6 months. Huzzah!



AMD Sweeps the Legs with the R9 285


Dateline: 11-1-14

With all the hype on the nVidia side of the graphics fence over Maxwell, AMD has quietly been overtaking the entire bottom end of the videoverse with the R9 285. For over a year now, AMD hasn't had a shot at a king-of-the-hill videocard, so they have settled for price/power ratio. Unfortunately, two can play at that game: nVidia abandoned their dreams of building a nuclear powered GPU and started putting all their TDPs on a diet. It worked. Every GK104, GK110 and GM204 powered card sipped electricity and barely put out heat.

AMD wasn't doing badly though. The R9 290, R9 290X and 295X are great price/performance cards, despite being small space heaters. While the new GTX 980 and 970 are uncontested at their price tiers of $550 and $400, respectively, drop under $400 and you are in AMD's waters. If the R9 290 was too much card for you, there was always the R9 280 and 280X. Not anymore. Now the step under the R9 290 is the R9 285, displacing both previous models.

The R9 285 is a nice little package. The TDP is 190 watts, powered by two 6-pin connectors; there are 2GB of memory clocked at 5,500 MHz (effective) moving through a 256-bit bus; it's 10.6 inches long; and the stock clock is 918 MHz. It outputs to both types of DVI, HDMI, and DisplayPort. MSRP is $250, putting it some distance away from the GTX 970 and hopefully low enough to compete with the future GTX 960. The GPU code name is “Tonga,” the successor to the Tahiti chips in the 280 and 280X.

Performance is not lacking at all, despite the modest price tag. To test, we have an Asus Strix R9 285, with a factory OC to a 954 MHz base clock and aftermarket cooling via twin fans and an all-metal shroud. We see the 285 beat the 280 and the outgoing GTX 760 in all tests. Against the 280X ($60 more expensive) it wins in all categories except Hitman: Absolution, Unigine Valley 1.0, and 3DMark Fire Strike; we suspect the Hitman score was low due to drivers. Fire Strike had the 280X over the 285, but while that delta looks large, you have to remember the scores vary widely for that benchmark, which is why in-game fps testing is the real litmus test of graphics cards.

Heat, which has been an AMD problem of late, is not a problem on this card. The Asus cooling did an excellent job, only turning the fans on over 60°C and keeping the card at a quiet 72°C even at load. This is more like the AMD we remember, before they threw out the book on energy efficiency trying to catch the nVidia GTX 780. If they keep going in this direction with Tonga, good things are in the future.

As far as buying is concerned, everything is in a state of upheaval due to the nVidia releases. AMD is cutting costs on almost all models, and nVidia has discontinued anything that doesn't start with a ‘9’. All new cards are at their peak prices until competition comes out to force a drop. That's usually the time to strike. Keep this card in mind until we see the GTX 960, after which will be the time to invest.



Kick Your Budget Build in the Pants

Dateline: 9-10-14

Getting buried under all the Haswell-E news is a quiet killer from Intel, the Pentium G3258 Anniversary Edition. What is this eloquently named chip, you ask? It's a dual core (no HT) Haswell CPU, fully unlocked, running at 3.2 GHz, that costs just under $70. Please hold all applause until the end of this bulletin.

Why is this exciting? How many people are professional photo, video or music editors? How about engineers that use CAD software? No? Then this is your chip. There are limited applications out there that utilize 4 cores, and even fewer where there is a noticeable performance difference with Hyper-Threading turned on. Unless you are in one of the aforementioned career paths, you should be building with this chip.

But how can a 3.2 GHz dual core compete in this environment? It overclocks like a champion, that's how. We got this chip to 4.7 GHz on air. That's on par with the i7-4790K “Devil's Canyon” chip Intel released not too long ago, specifically built to be overclocked. And that chip is $340, with no heatsink. We slapped a $30 Cooler Master 212 EVO on the G3258 and we're in business, performing excellently in 90% of the applications on the market.

This chip is the answer to all those builds you want to keep under $1500 without compromising in more than one area. The savings on this CPU can go toward a bigger SSD and a faster GPU, instead of one or the other. We strongly suggest you ask yourself what you use your PC for that would require more horsepower than this chip has, and then go buy one for your next build.


Intel’s Doc-Oc Finally Released


Dateline: 9-5-14

Haswell-E is finally upon us, as is its sidekick X99, and we couldn't be happier. Hitting digital shelves as we speak are the three new workhorses of Intel's Enthusiast family: the i7-5820K, i7-5930K and the i7-5960X.

At the entry level price point is the 5820K, a six-core chip. For $389, you get a 3.3 GHz base clock, boost to 3.6 GHz, and 15MB of L3 cache. The next step is the mid-level 5930K. At $583, the clocks go to 3.5 GHz base, 3.7 GHz boost. To go all the way, you need to spend $1000, but you get the Cinderella of chips, the 5960X: a base clock of 3 GHz, boost to 3.5 GHz, 8 cores, 16 threads and 20MB of L3 cache.

All three chips share a 140 watt TDP, quad-channel DDR4-2133 memory support, Hyper-Threading, and the new LGA 2011-v3 socket, supported by the X99 PCH, and are built on the 22nm trigate process. Good news about the socket: it supports the same heatsink footprint as LGA 2011, so all the old coolers will work.

How about the bad news? Well, there is some. First, the memory. It's great, it's fast, it's reliable and, unfortunately, very expensive. It's now 288 pin, sold as quad-channel kits, bottoming out well over $200 per kit. That's on sale. Seeing kits at $300-something plus is not unusual. But hey, the kits clock in the mid-2000 MHz range, some over 3000 MHz, and the new capacity ceiling will be 128GB now that higher density chips allow more memory per stick. So there's that.

The PCIe lanes are something else. Instead of disabling cores like on Sandy Bridge-E, the 5820K gets only 28 lanes of PCI Express. Both the 5930K and 5960X have the normal 40. All the slots on boards populated with a 5820K will function, just not as quickly as with its bigger brothers. This will frustrate anyone trying to do full speed SLI or CrossFireX, since two cards at full x16 need 16 + 16 = 32 lanes; with 28, you're looking at x16 + x8. My apologies on behalf of Intel. High end mobos will likely include third-party chips to help alleviate this.

X99 is not the letdown that X79 was. It comes packed with 6 native USB 3.0 ports and 10 SATA 6Gb/s ports. Unfortunately, there is no native SATA Express or M.2. That's kind of a mixed bag, because there are no SATA Express hard drives yet. No big deal, you say? Remember this chipset has to last 18-24 months, and we've already reached the limits of SATA 6Gb/s SSDs. Eventually there will be SATA Express drives, and we predict they will spread like the common cold through the storage world. Without a native controller, top speeds won't be reached on an X99 mobo.

Despite all this, the news is dominantly positive on the performance side. Even with limited PCIe lanes, a 5820K will outperform an i7-4770K. Worried about the 5960X's low base clock? Don't be; it outperformed an i7-4960X running at 4.7 GHz in multi-threaded applications. Haswell-E outperforms Haswell by 40-50 percent, and Ivy Bridge-E by 20-25 percent. All of those numbers are without overclocking the Haswell-E part, which we can get up to 4.5 GHz (stable). #Winning.

When we move to the non-multithreaded drag race, as always, clock speed wins. However, if you build in the Enthusiast class, you probably need the cores for your applications which can take advantage of them. On light workloads performance is comparable across all chips.

What do you do? If you are in this category, spend the money only if you are more than 2 generations back, or if you are in a profession that can make use of 8 cores. You will not be disappointed in the 5960X; it's everything we could have hoped for. If you don't meet those two requirements, you probably should save.

To sum up, congrats to Intel for building the fastest consumer chip ever. It's awesome. There are some drawbacks, but nothing that turns us off from building with any of the three Haswell-E parts. We're looking forward to posting some ridiculous benchmarks once we get some ROG mobos in here!


NZXT G10 GPU Bracket

Dateline: 3-20-14

We recently got our hands on the least technological piece of technology to ever come through our doors: a metal bracket. It’s not a particularly intricate bracket either; just four screws, four bolts, and one bend – mostly for looks. This bracket is the NZXT G10, and it does something very novel: it can anchor any Asetek-based closed loop cooler to a graphics card.

At first glance, that doesn't seem so spectacular. It's definitely a good idea, but since the bracket is $30 and does not come with a cooler, most people would just ask why. For $30 more, you can buy a factory overclocked GPU with Windforce fans on it that comes with a warranty. Or, you could spend $30 on a bracket that will probably void your warranty, just to have to spend almost $100 more on a CLC liquid cooler.

A good argument, but short-sighted, and a credo at Fox Tek is, “We're in it for the long game.” The reason we love this bracket is because, 1) it's reusable for life, and 2) it works like a charm. Looking at investment over time, we are probably not on the last GPU we will ever buy. And most people don't buy reference these days, with AMD selling cards that have to throttle due to heat (we're looking at you, 290X). Ponying up another $10-$20 for aftermarket fans and a little OC is like tipping your waiter. But how many times have we done that? Three? Four? How much money has that added up to, just because I didn't want my card to scream at me through a reference fan during those late night sessions?

The NZXT G10 is the last time I get nickel and dimed by vendors for modified reference designs. Looking at the bracket, I don't mind at all. It comes in black, white and red and bolts directly onto late generation AMD and nVidia cards. The G10 accepts all Asetek-based CLCs, like the ones branded by companies such as Thermaltake, Corsair, Zalman and Antec. The install process is as simple as pulling the reference cooler off, attaching the cooler to the bracket, and attaching the bracket to the card with four long screws and nuts. A fan mounts to the bracket for additional air cooling, and there are two fan headers to manage the onboard fan and the cooler. The bracket receives power from an internal USB header. In application, we strapped a G10 with an old Corsair H50 cooler to an R9 290X, and we saw load temps of 55°C. This on a card that throttles at 94°C, and we didn't hear a peep out of the H50's fan.

Realistically, we are looking at less than a $100 investment. We paid $30 for a black bracket, and looked around every deal site we could find for an older Asetek-based cooler. We found a few early models, like the H50 from Corsair, for $50 since it's outdated. If you're not a fan of the H50, there were more models out there for similar prices. We were just over $90 with tax and shipping, and it will pay off quickly. Here's why: we no longer lose $20-$30 per card when we upgrade. Now, we can buy reference cards, which are usually released first, at low price points since vendors don't mark them up. With our cheaper reference board, we simply transfer the bracket over and use software to gain OC speed. Now we can buy cheap graphics cards quickly and get nice OCs out of them (we saw 10% on our 290X), without any additional expense or noise or heat. For life.

You don't need to own $500+ GPUs to experience the benefit of cooler, quieter cards, but that is where the greatest difference is felt. If you are at the $100-$150 entry-level price point, this is an expensive move whose benefit might not be felt all that much. If you own one of nVidia's hotter earlier cards, or a newer space heater from AMD, this is the best money you'll spend, and it comes with the uncharacteristic benefit of outlasting the technology it's attached to. We've forgotten how nice that is.