The AMD Radeon RX 590: Something Old, Something New, Something Borrowed, Something Blue
Any bride would recognize the title of this article; it comes from the old rhyme about good luck at a wedding. Luck is not what AMD got as it went to press with the newest addition to its GPU lineup. The model we got our hands on is by Sapphire, and it is our something new (the RX 590) and our something blue (the metal fan shroud is actually baby blue).
It's also, at the same time as being new, our something old: just another rebadge of the RX 480, for what feels like the tenth time. Here's the rub: the RX 480 was actually a great card. But it was a great card two generations ago. The Model T was a great car, too, but no one wants to drive one today, and for good reason.
Read through the specs of our RX 590 and see if they sound familiar: 12nm lithography, 5.7 billion transistors, 2,304 shaders, clocks at 1,469/1,545 MHz, 8 GB of GDDR5 memory, a 256-bit bus, and a 225W TDP. Aside from the die shrink (the RX 480 was a 14nm part), the only real difference is that this card is slightly faster at 1080p gaming, which is its target. And it costs $280.
Since you are reading this article, PC gaming is probably your bread and butter, so you would be forgiven for not knowing that AMD owns console gaming: it provides the GPUs for both the Xbox and the PlayStation. Knowing that, it seems like that's where all the effort is going, not into its desktop offerings. Of course, this is only speculation, but it's warranted speculation, given that we haven't seen a legitimately new graphics card come from the red team in two years.
So why are we even reviewing this card? Let's say you are designing a new system and want a GPU under $300. Sadly, this would be your best option. nVidia doesn't have a 2000-series card for less than $300 (the RTX 2060 is closest), and AMD's *best* card sits right under that mark. It's a tough time for GPUs, people. The good news is that if you game at 1080p (roughly 60% of people do), it performs well. At the highest graphics settings, we saw strong framerates in Middle-earth and Rise of the Tomb Raider. Warhammer II and Ghost Recon Wildlands didn't do as well, but still pulled frame rates in the mid-to-high 40s.
The truth is there aren't any good GPU options currently available. Strong performance is available from nVidia, but they are doing a whole ray tracing thing now, which has jacked the price of their cards up stratospherically for only slightly better performance than last generation's models. AMD flat out hasn't innovated since 2016, so you're paying new money for old tech. The only light on the horizon is the *real* next-gen cards from AMD, due out late 2019/early 2020, or nVidia's next-gen 3000 series, due around the same time and hopefully much more affordable than their last round of releases. Save your money and wait it out.
Every year, right after the holidays, we do our annual FoxTek predictions of what the upcoming year will bring. We think this is important for two reasons: 1) anyone about to buy hardware should know what the ecosystem looks like so they know how much to invest, and 2) people should know if their purchases are about to be eclipsed by emerging tech. That's not to say our predictions are 100% accurate, but we like to give generalized, educated guesses at where the major companies appear to be headed, and we have a pretty good track record at it. Let's get going!
We haven't had a year like 2018 for CPUs in a long, long time. AMD coming out strong with its new architecture and bottomed-out price points dealt Intel a serious blow, forcing it to release products way ahead of schedule and to drop prices to boot. 2019 looks like more of the same. Expect to see an increase in core counts for entry-level parts on both sides, with prices starting at $99. Intel seems to have a lock on per-core performance, but AMD will be hot on their tail with more attractive options at lower prices. We would love to tell you this will be the year of 10nm procs, but sadly we expect delays again, with Intel lucky to deploy transistors at that size by Christmas; and historically, if anyone is going to do it, Intel goes first. The high-end desktop parts will continue to get more ridiculous, with core counts possibly as high as 48 on non-server parts. If you want to rival the rendering power of Pixar in your living room, this may be your year.
There couldn't be a harsher divide between the advancement of CPU parts and the total disappointment on the GPU side. In the last year, all AMD has managed to do is rebadge old tech, twice, making virtually no innovative progress whatsoever. nVidia tried to overcompensate on progress with ray tracing technology, which is still in too much of its infancy to be useful. Oh, and they charge through the nose for it. For the first time in our collective memory, there really are no new, good graphics cards. There are two places we may find hope. First, AMD should actually release a card around September that isn't a rebadge of the RX 480, which would be a first in some time. It's aimed at the midrange and may be enough to pull down some of that stratospheric nVidia pricing. We are also still waiting on the RTX 2060 from team green, which just might be priced reasonably enough for the performance gain to justify a purchase. This may be the only time you ever read this on this site: if you want to buy a GPU in 2019, buy an old nVidia card at a discount. Invest the money you save in something new mid-2020.
In a fairly large miscalculation from last year's predictions, nothing changed in 2018. M.2 has been the big winner, and it doesn't look like any contender will unseat it in the upcoming year. Two technologies are on the horizon: second-gen 3D XPoint from Intel and Z-NAND from Samsung. Both promise reduced latency and denser chips, but we don't think either will be ready by the end of the year, and anything that is ready will be prohibitively expensive and aimed at enterprise, not consumer, markets. 2019 will continue to bring lower prices and small speed increases for existing offerings. Great news, considering prices are already at an all-time low.
PCIe 4.0 is coming, for real this time. It was supposed to be here last year, and it has been on the books for two years now. Fashionably late? Sure. But it really will make a difference, and not for our terrible graphics card offerings. PCIe 4.0 doubles the transfer rate from 8 to 16 GT/s per lane. A PCIe 3.0 M.2 drive tops out at around 3.2 GB/s of real-world throughput, which is also the throughput of the Samsung 970 Pro. With PCIe 4.0 doubling that to roughly 6.4 GB/s, M.2 drives will have the headroom they need to grow to ludicrous speeds. Third-generation Ryzen and any Intel 10nm parts should support the new standard.
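If you want to check the lane math yourself, here's a quick sketch (our arithmetic, not an official spec table). The theoretical ceilings come out a bit higher than the real-world working numbers above, because packet and protocol overhead eats into the link:

```python
# Theoretical PCIe link bandwidth: GT/s * encoding efficiency / 8 -> GB/s.
# PCIe 3.0 and newer use 128b/130b line encoding; real-world drive
# throughput lands lower once protocol overhead is accounted for.
def pcie_gb_s(gt_per_s, lanes, encoding=128 / 130):
    return gt_per_s * encoding * lanes / 8

gen3_x4 = pcie_gb_s(8, 4)   # ~3.94 GB/s ceiling for a PCIe 3.0 M.2 drive
gen4_x4 = pcie_gb_s(16, 4)  # ~7.88 GB/s -- exactly double
print(round(gen3_x4, 2), round(gen4_x4, 2))
```

Whatever encoding overhead you assume, the Gen4 ceiling is exactly twice the Gen3 one, which is the whole point for M.2 drives.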
So all in all, not a terrible year coming up. It certainly seems to be not just the year of the CPU, but the *years* of the CPU. Other categories don't seem to be faring as well, but we will look back and see this wave of CPU advancement as nothing short of amazing. Everybody else is just playing catch-up, but they will get there eventually, and they will find Intel and AMD eagerly waiting for them.
Happy New Year, everyone.
Octo-Core For All: the i9-9900K
Intel has released its 9th-gen processor family with a surprise for us all: a mainstream CPU with 8 cores. This goes against conventional logic at Intel, where 8 cores have been reserved for enthusiast processors that cost more money and need more expensive motherboards, while the rest of us are relegated to mostly quad-core chips. That hasn't been all bad, since 99% of us don't use software capable of utilizing all those cores anyway. More on that later.
The 9900K has some impressive stats, with a price to match. It is an 8-core/16-thread CPU built on 14nm lithography (again?!), clocking in at 3.6 GHz base and 5.0 GHz boost. Yes, you read that right: this chip can hit a 5 GHz turbo, stock. You'll pay for it, though: $530 worth. It supports DDR4-2666 memory natively and slots into the same LGA 1151 socket as its predecessor; however, a new chipset, Z390, has been released to pair with the 9th-gen parts. You can still use Z370 boards with a BIOS update.
There are some interesting caveats with this new product, as well as the whole generation, and some cannibalistic tendencies from Intel. First, this is the only hyperthreading-enabled chip in the new line. The other 9000-series parts have the hyperthreading hardware, but it's disabled. Next, a core count this high only exists because Intel is sick of being embarrassed by AMD and its multi-core chips at bargain prices. It worked: this CPU is faster in both single- and multi-threaded performance than the second-gen Ryzen 7 2700X. As such, AMD has announced a 2800X, we suspect, just to compete with Intel. Lastly, this puts Intel in a precarious position, because this new CPU is *so* good it matches or bests the Core i9-7900X, a 10-core LGA 2066 enthusiast chip that costs over $200 more.
The question is, does a mainstream CPU need this many cores? As a mainstream part, most people won't utilize those cores but are now forced to pay for them. The speed increase is appreciated, but we would have liked to see the mainstream capped at 6 cores, simultaneously capping the price. However, if you are an enthusiast-platform resident, your life just got much better, because you may now be able to get the performance you need from the mainstream bracket, which has more affordable parts.
In the end, Intel has done a good thing for many people. We received a welcome speed bump, and power users have a new part that deserves serious consideration at this price. Intel has also forced AMD into a release for the sake of competition. A word of caution, though: the rest of the family may be in for a rough road of artificial performance caps for the sake of avoiding cannibalization.
Turing With the New nVidia RTX 2080 and 2080Ti
This is the season for rapid changes in the PC world. Intel has just unveiled a new line of mainstream CPUs, and nVidia has done just the opposite: released two new top-level GPUs, the RTX 2080 and 2080Ti.
Obviously the first change is in the naming scheme, since these are not GTX cards. The new RTX moniker comes from some brand new innovations from nVidia, namely, ray tracing. More on that in a moment. First, a side-by-side look at the new cards.
Both cards are built on the new Turing architecture (the TU102 GPU in the Ti, the TU104 in the 2080), 12nm lithography, and GDDR6 memory for wider bandwidth. The bigger 2080Ti is clocked slower, at 1,350/1,635 MHz (Founder's Edition), but has greater memory bandwidth at 616 GB/s. The vanilla 2080 is faster at 1,515/1,800 MHz (FE) but slower in memory bandwidth at 448 GB/s. The Ti has the greater quantity of CUDA cores, numbering 4,352 to the plain 2080's 2,944. Thermals are high on both, with the 2080 pulling 225W from the wall and the Ti drawing about 25W more at full tilt.
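As a side note, those bandwidth figures fall straight out of the GDDR6 arithmetic. A minimal sketch, with the caveat that the 14 Gbps per-pin data rate and the bus widths (352-bit on the Ti, 256-bit on the 2080) are our additions, not part of the spec list above:

```python
# Memory bandwidth = per-pin data rate (Gb/s) * bus width (bits) / 8.
# Both cards run 14 Gbps GDDR6; the Ti's wider bus is what buys it
# the extra bandwidth despite the identical memory speed.
def mem_bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

ti = mem_bandwidth_gb_s(14, 352)       # 616.0 GB/s -- the 2080Ti figure
vanilla = mem_bandwidth_gb_s(14, 256)  # 448.0 GB/s -- the 2080 figure
print(ti, vanilla)
```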
There are two new pieces of wizardry in these cards. The first is the namesake RTX: real-time ray tracing. We saw some real-time ray tracing on nVidia's demo machine, the DGX Station, which was a real crowd-pleaser, even if it costs as much as a luxury car ($69,000). nVidia wants to bring that to the mainstream in a pared-down version which is slightly more affordable, but not by much. It is not full ray tracing, but selective ray tracing using new RT cores. It can be switched off, much like anti-aliasing, and it is only available in games written to support the feature. As of now, six games support ray tracing, the most prominent of which is Battlefield V. The benefit of ray tracing is realistic shadows, highlights, reflections, and refractions. The catch is that your frame rates will take quite a hit, even at 1080p.
Real-time ray tracing may be the biggest thing to come out of this generation of GPUs because it has a stunning effect on visuals. Mapping the movement of changing light in a digital environment is extremely difficult, to put it lightly, and is usually left to supercomputers. Doing it at the consumer level is quite groundbreaking, but the issue is buy-in from gaming studios. If it's not programmed in, it will all be for naught.
The second piece of tech to debut here is DLSS (Deep Learning Super Sampling), a neural network built into the GPU to reduce noise and upscale content. There are two flavors. Baseline DLSS upscales 1080p content to 4K using a trained network to deliver near-4K quality, plus anti-aliasing, at higher frame rates. DLSS 2X focuses purely on anti-aliasing, using a network trained on 64x super-sampled images to achieve better AA results than TAA or MSAA. Again, this must be programmed in by game studios to function, and it is currently slated for 17 titles, some of them popular, like Ark, Final Fantasy XV, Hitman, PUBG, and We Happy Few.
Here's the rub: no one knows whether these two big features will ever catch on, and if you buy these cards, you are paying a hefty premium on "yes". Arguably, some of this may have legs, but even if it does, history is not on your side. If these features do get big, nVidia will release a next-generation card that is cheaper and better at RTX and DLSS, making your card look like an overpriced hunk of precious metal.
The best argument for these cards is their speed. And for the 2080, it's not really a great argument. The 2080 runs around $750 depending on availability, and the old GTX 1080Ti is under $700 in most places. The 2080 is about 3% faster than the old 1080Ti, yet roughly 10% more expensive. Not a good price point for nVidia. The 2080Ti has a much better argument, being the new king of the hill, but you will pay $1,200 for that title. How much of that $1,200 went to speed, and how much went to new, questionable innovation? This card would have done better at a $999 price point, and even that is prohibitively expensive for all but the top 1% of gamers. Some people need to have the fastest tech, and for those people, this is your card.
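To make the value math explicit, here's a quick sketch using the rough street prices and the ~3% performance delta quoted above (illustrative arithmetic, not a benchmark):

```python
# Rough perf-per-dollar of the RTX 2080 relative to the GTX 1080Ti,
# using the approximate figures above: ~3% faster, ~$750 vs ~$700.
perf_ratio = 1.03         # 2080 performance relative to the 1080Ti
price_ratio = 750 / 700   # ~1.07 at the street prices above
value = perf_ratio / price_ratio
print(round(value, 2))    # ~0.96 -- less performance per dollar than the old card
```

Anything under 1.0 means you are paying more per frame than you would for last generation's card, which is the whole problem.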
Looking at strict performance metrics in light of pricing, this was a loss for nVidia. These cards are faster than the last generation, but not by enough to justify the HUGE price jump seen here. What nVidia seems to be hoping for is mass adoption of new tech, which is rare at best. And even if we do see a wide adoption rate, the beneficiaries will be those who wait for cards like the RTX 2180, 2170, and 2160, if not the 2200 series.
It's tough to recommend these cards to people who already spent the money on a GTX 1080 or 1080Ti. They wouldn't be getting much for the investment, unless they play one of the games listed for RTX or DLSS and like what they see. Most gamers pay between $200 and $300 for GPUs, and right now, AMD has that price tier wrapped in a bow if you don't want to buy old tech like the GTX 1060.
Unfortunately, now is not the best time to be building or buying PCs. That sounds like a quick way to tank our own company, but you can't argue with the truth. The big problem right now is cost. Not that powerful PCs are ever cheap, but right now their prices are artificially high. There are two reasons why this is happening: memory chip manufacturers, and cryptocurrency miners.
With smartphones taking over the world like Skynet, memory manufacturers have shifted their production output to maximize profit. Imagine that. Long story short, it's now way more profitable to make chips designed for phones than for desktop PCs. This means anyone buying RAM right now is subject to triple the prices we regularly enjoyed. Before, we would build rigs with 16GB of memory without breaking a sweat. Now that same 16GB kit means sacrificing somewhere else to make budget. Bummer.
Then there are the miners. These people are the scourge of the gaming world. In case you haven't heard, Bitcoin is kind of a big deal right now. But Bitcoin isn't the problem; most of it has been mined, and the remaining coins are being mined by dedicated industrial rigs. The problem is emerging cryptocurrencies being mined by average Joes trying to get rich. If you are one of those people, stop now. It won't work, and you're ruining it for the rest of us. In case you've never mined coin, the thing to know is that it is almost entirely dependent on graphics power. So these amateur miners are buying up entire warehouses of GPUs to fuel their greed, leaving nothing but out-of-stock listings and ridiculously inflated prices for the rest of us. You'll currently have better luck securing a seat on the first commercial trip to space on Richard Branson's rocket than finding a high-end GPU either in stock or at its MSRP.
What this means is that building a gaming PC is neither cheap nor easy right now. Memory is easy to find but Tickle-Me-Elmo costly, and everyone needs it, because your PC won't run without it. You could technically go without the GPU, but unless it's an office PC that doesn't do any heavy lifting, you'll need a graphics card, and even the entry-level stuff that used to cost $150 has now more than doubled in price.
Our advice is to build said office PC and recycle an old graphics card, or wait until people realize coin mining isn't the get-rich-quick scheme Silicon Valley makes you think it is. Until then, hold 'em if you've got 'em, because this may take a minute.
The Death of Net Neutrality
Yesterday, a historic vote took place under the Trump administration. In a 3-2 party-line decision, the FCC repealed the regulations keeping the internet free and open, which were put in place under Obama. It won't take effect immediately, partly because it still faces review in Congress, and partly because it is being resisted in court by a group of state officials led by New York's attorney general. However, not many people know what net neutrality even is or why it affects you. This is what you need to know if you plan on using the internet in the future.
Net neutrality keeps things on an even keel. It ensures that all your traffic is at whatever speed you pay for, that you cannot be spied on (to a certain degree) without permission, and that you are free to go anywhere you want on the internet using however much bandwidth you want with no consequences. Get ready for all of that to change.
The new rules make some very concerning practices legal at the ISP level, i.e., for Verizon, Comcast, Cox, etc. From now on, they own your internet. For example, if my ISP is Verizon, and I pay $100 per month for a cable and internet package with 50Mbps download and 10Mbps upload speeds, it used to mean I could use as much data as I wanted, go anywhere on the internet, and have equal-speed access to any website.
Here is what can now happen, *completely legally*. Now, Verizon gets to decide whether my 50Mbps download speed holds for the entire month. Much like on my cell phone, if I stream too much YouTube, I may get throttled to a slower speed for sucking up so much bandwidth. Speaking of YouTube, I may not even get my 50Mbps on that site, because Verizon now decides how fast my connection goes depending on where I am on the internet. If Verizon were bought by Jeff Bezos, Amazon Prime might go full speed, but as a deterrent to keep me from going to a rival's streaming service, YouTube might go much slower for me on Verizon.
I am also a fan of WikiLeaks. I like the fact that things the government claimed it wasn't doing to us illegally (Google "PRISM," the NSA's spying program that was exposed for violating the rights of the entire country) have come to light, forcing changes for breaking the law. Verizon no longer has to allow me to go to WikiLeaks if it doesn't want to. I may be totally blocked from sections of the internet Verizon considers "undesirable."
So, in summary, the repeal of net neutrality gives the people who own the ISPs the ability to decide for you which parts of the internet you get to visit, and at what speed. They are also allowed to take any personal information they want from you and sell it to companies who want to target you with ads and services. They are the gatekeepers of the internet, and up until now, they have had to play by a set of rules. Under Trump, they now get to make up the rules that govern themselves. Legally. If you are as concerned as we are about this, please write your congressman immediately.
The State of Security
I don't think it's news to anyone that the internet can be an unsafe place. The recent hacks of Yahoo, Uber, and a plethora of high-visibility celebrities are evidence that just about anyone can be hacked at any time, for any reason. Staying safe isn't difficult for the informed user, but it's really just a comfort blanket; there is no foolproof defense for your data.
This became clearer with the announcement of a fatal flaw in WPA2. Up until now, WPA2 was the go-to for wireless encryption. When you set up your home wireless, that's the option we, as professionals, tell people to select for maximum safety. We were wrong.
Researchers from the University of Leuven in Belgium found a problem in the foundation of WPA2. It turns out that during the handshake (the part where our devices talk for the first time), WPA2 uses a number called a nonce to help generate the encryption keystream. Because wireless connections drop out frequently, WPA2 allows handshake messages to be retransmitted, and a flaw in the standard lets the same nonce, and therefore the same keystream, be used more than once.
Any time a hacker wants to break into something, repetition is a dead giveaway. Knowing this vulnerability exists, an attacker exploits it by replaying handshake messages, forcing the nonce to be reused, and recovering the keystream, leaving the data wide open.
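To see why repetition is so deadly, here's a toy sketch of keystream reuse. To be clear, this is not the actual KRACK exploit (which targets WPA2's 4-way handshake, not this toy cipher); it just illustrates the underlying failure: two messages encrypted with the same keystream leak their XOR to anyone listening.

```python
# Toy demo: encrypting two plaintexts with the SAME keystream
# (the condition a nonce-reuse attack creates) leaks their XOR.
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)    # stand-in for a per-nonce keystream
p1 = b"attack at dawn!!"      # two 16-byte plaintexts, both
p2 = b"retreat at dusk!"      # encrypted with the same keystream
c1 = xor_bytes(p1, keystream)
c2 = xor_bytes(p2, keystream)

# XOR the two ciphertexts: the keystream cancels out entirely,
# leaking p1 XOR p2 without ever recovering the key.
leaked = xor_bytes(c1, c2)
assert leaked == xor_bytes(p1, p2)
# If the attacker knows or guesses p1, p2 falls out immediately:
assert xor_bytes(leaked, p1) == p2
```

That is why a protocol that ever repeats a keystream is considered broken, no matter how strong the underlying cipher is.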
Patches were distributed months ago by Microsoft and Apple, but because the flaw lives in the WPA2 standard itself, it will never totally go away. What makes it somewhat OK is that for it to be used against you, the attacker has to be in wireless range. At your home, this is unlikely, but at Starbucks, all bets are off. If you are using second-layer security, like HTTPS Everywhere, or encrypting your data before it is sent, you will be fine.
What do we recommend? Cat5 cable. I personally do not use anything wireless except my smartphone. Not even my keyboard or mouse. I also use browser extensions like AdAware, HTTPS Everywhere, and NoScript to keep myself safe online, on top of the normal firewalls and antivirus software. My entire hard drive is encrypted with VeraCrypt. I'm not kidding around when it comes to my privacy. For more info, I suggest reading The Art of Invisibility by Kevin Mitnick, one of the best and most recent texts on protecting yourself, written for non-technical people by arguably one of the most prolific hackers of all time. Perfect stocking stuffer for the holiday season, now available on Amazon.
Coffee Lake: Intel’s New Caffeinated Champion
My oh my, what a little healthy competition does to the hardware landscape. After years of Intel having its way with CPU manufacturing, AMD has certainly given the resident tech giant reason to be on its toes of late. While many longtime fanboys have contemplated defecting, Intel is trying to give users reason to stay on team blue, like its new 6-core i7-8700K Coffee Lake flagship and its accompanying Z370 motherboards.
Coffee Lake is an updated Kaby Lake, which is itself an update of Skylake; all are based on Intel's 14nm process. To differentiate, Intel adds + suffixes, making Coffee Lake a 14nm++ process. Still, there is a modest frequency boost on the highest offering, as well as two additional cores to compete with AMD's Ryzen.
This is the first time a 6-core chip has been offered in Intel's mainstream chipset and socket, showing just how pressed Intel is to prove it too can do multi-threaded computing cheaply. And cheap it is, all while being quite fast. The 8700K clocks in at an astounding 3.7 GHz base, spooling up to 4.7 GHz on a single core with Turbo Boost Technology 2.0. Intel sets the MSRP at $359, which is $30 cheaper than the next-cheapest i7-7800X on the LGA 2066 socket. That seems a little close, but remember that LGA 2066 mobos cost a lot more to purchase than Z370 boards.
Other than the cores and speed, there is little difference between this and the previous-generation Kaby Lake. Both use Intel HD 630 integrated graphics (now rebranded UHD 630, despite no increase in functionality) and pull 95W from the wall. Coffee Lake supports DDR4-2666, up from the previous DDR4-2400.
The line so far is quite robust. At least six chips are slated, ranging from the top i7-8700K down to the entry i3-8100, a 4-core, 4-thread, $120 mass-market desktop part. An interesting footnote is the i3-8350K, an unlocked i3, also 4C/4T, with a base frequency of 4 GHz. For those who don't need the threads (almost everyone), that little beauty will do nicely with an AIO liquid cooler and an aggressive OC. We predict it will be tough to beat in single-threaded performance, at a third of what the 8700K will sell for.
The new chipset is Z370, still on the LGA 1151 socket. However, don't think that because it's still 1151 it will be a drop-in for older CPUs. In fact, there is no backwards compatibility with any other CPU. The new Coffee Lake works *only* in Z370 motherboards; it will fit in other LGA 1151 boards, but it will not function. Similarly, an older Kaby Lake will fit in a Z370 mobo but will not function. The reason: even though the pin layout is the same, which pin does what has been reassigned, so there is no compatibility. Thanks, Intel. I didn't want to keep any of my old, hard-earned equipment anyway. Other than that, there are no notable changes from Z270 to Z370.
How does all of this stack up against what AMD already has? Not as well as Intel would like. Basically, if you are buying the 8700K because you use multi-threaded applications, your money would be better spent on a Ryzen 7 1700, an 8C/16T chip with a slightly slower boost clock of 3.7 GHz. At that point, you care less about clock speed than about getting as many cores as your money can buy. For a 6-core AMD chip more fairly compared to the 8700K, we go to the Ryzen 5 1600X: 6C/12T, boosting to 4.1 GHz, costing $219. Honestly, it's not even really a contest; Intel blows this thing out of the water. But to get that one-sided outcome, you have to spend an extra $130, which isn't close enough in price to be tempting. You would *really* have to want that clock speed to spend that kind of money.
The Case Against Threadripper
Recently, AMD released its newest, biggest, fastest piece of technology to date: Threadripper. In short, it's amazing. It's everything anyone could ever want for high-end, multi-core compute workloads. It's $1,000, but for a chip with 16 cores and 32 threads that overclocks to 4 GHz, that is really a bargain compared to Intel's new Core i9. However, that's exactly what I am here to argue against.
Right now, we have four lines of CPUs, with accompanying motherboards they slot into. Intel and AMD each have a "high end" and a "low end," with chips and boards to populate each. My argument is that this is completely redundant. Let me tell you why.
The "low end" or "entry level" stuff being put out right now is nothing short of fantastic. If you are a gamer or a run-of-the-mill desktop user, Intel chips can't be beat for speed at low cost. They are faster than anything AMD is putting out at that level, despite having fewer cores, and the extra cores are completely useless to the average user/gamer anyway. For 90% or so of the people out there, the Pentium G4600 will do everything you need it to, and then some. And it's $80. You heard me.
What about creators who use Lightroom, Pro Tools, CAD software, or Final Cut? You people need all the cores you can afford. That being said, AMD has your back. While Intel has more cores with speed than AMD, its i9 costs $2,000, so unless your paycheck is signed by Pixar, you probably need something more affordable. Enter the "entry level" Ryzen chips. For $500, you can get your hands on the AMD Ryzen 7 1800X: an 8-core, 16-thread piece of silicon that trounces anything Intel puts out under $1,000. This little beauty can handle anything you throw at it, at a cost that doesn't mean putting off buying another car for a few years. There is no reason to go any higher than Ryzen for the average creator not working for Industrial Light & Magic. Almost no one needs all those cores, or the drain on their bank account.
Does that mean I'm against the high-end lines? Absolutely not. Just like Formula 1 racing, we need companies pushing the envelope on high-end parts so we get the trickle-down effect: a technology developed at insanely high cost eventually gets done cheaper and en masse for the general public. We would never have magnetorheological dampers on Cadillacs if it weren't for some genius in F1 figuring out how to do it at all, and then someone at Cadillac figuring out how to do it cheaper. I'm all for the high end driving development for all of us; I just don't believe in paying for an F1 part when I'm not a racecar driver for a living. And neither are most of you.
So, if you are looking for the high-end stuff on our site and can't find it, this is why. It's extraneous and extravagantly expensive for most buyers. We don't believe in upselling people for no reason; we have your best interests in mind. If you still want or need a high-end part, like a $2,000 CPU, we can build it. But first, think about whether that money would be better spent elsewhere.
Intel’s Core i7-7700k: The Return of the 5 GHz OC
For all of the complaints about Kaby Lake, one thing can be said in its favor: it is no slouch in performance. Recently, Intel released its 7th-generation Core processors, code-named Kaby Lake and taking the 7000-series numbering pattern. We got our hands on the flagship, the i7-7700k: a four-core, eight-thread, 4.5 GHz (turbo) addition to the current offerings on the 14nm process. On paper, this CPU shouldn't exist. Intel historically conformed to the tick-tock release pattern, alternating die shrinks with architecture revisions. Recently, they switched to a three-part release cycle called PAO: Process, Architecture, Optimization. This is the first chip to be released in the Optimization phase of the new scheme.
We didn't really know what to expect from Kaby Lake because no one had ever seen an Optimization-cycle release. The idea is that instead of dropping to a smaller die, like the 10nm process Intel had planned on reaching based on Moore's Law, they stay at 14nm and tweak the process for the best speeds. Thumbs up, Intel, thumbs up. While we would have loved to see what a 10nm chip could do, this may hold us over until the release of Cannon Lake and Ice Lake.
That being said, Kaby's innovation is aimed mainly at mobile. The bulk of the changes benefit mobile clients more than desktop users, but two changes have a big impact for performance-minded people. First, this chip overclocks like a champion. In our testing, we got our sample up to 5.2 GHz, requiring a VCore of 1.4 volts and reaching temps of 82 degrees Celsius. Second, there is an upgrade to the chip's video encode and decode capability. This is meant to bring the proliferation of 4K to the desktop, where it has been a struggle without add-on video processing, and difficult even with it.
The Kaby chips have a dedicated fixed-function media engine built to handle traditionally processor- or GPU-intensive work. The two codecs of most interest to users are 4K HEVC and VP9. The engine is so powerful it can decode eight standard 4K HEVC streams simultaneously at up to a 120Mbps bit rate each. Ridiculous. The VP9 codec is used by Google, which owns YouTube, and all of their high-quality video is encoded with it. So basically, any time you watch high-def YouTube, you use the VP9 codec. I'm sure no one out there falls into that category.
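For perspective on that decode claim, here's the back-of-the-envelope aggregate (our arithmetic, using the figures quoted above):

```python
# Aggregate compressed bitrate if the media engine really decodes
# eight 4K HEVC streams at 120 Mbps each (figures from the claim above).
streams = 8
mbps_per_stream = 120
aggregate_mbps = streams * mbps_per_stream
print(aggregate_mbps)      # 960 Mbps -- nearly a full gigabit of video
print(aggregate_mbps / 8)  # 120 MB/s of compressed data, every second
```

That is almost a gigabit of compressed video per second chewed through in fixed-function hardware, without leaning on the CPU cores or a discrete GPU.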
Media companies are beginning to feel safer with a dedicated encoder/decoder on the CPU itself. There is talk from Netflix about finally letting desktop users stream 4k video, something only phones and TVs have been able to do so far. Intel claims to have added DRM updates to the CPU that certify it as a platform for 4k Netflix streaming. It is already certified for Sony's 4k movie and television streaming service, ULTRA, which up until this point was restricted to Sony TVs.
The last note on video concerns ultra-high definition, which is becoming more commonplace on TVs these days. Kaby has tweaks to enable high dynamic range (HDR) and the extended color gamut known as the rec.2020 spec. Current HD TVs use the rec.709 color space, which is slightly smaller than the P3 space used in digital movie theaters. HDR brings deeper blacks mixed with the more vivid colors of rec.2020 (particularly the reds), making Kaby more desirable for video enthusiasts with high-end panels.
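For a rough sense of how much bigger rec.2020 really is, you can compare the areas of the two color triangles using the red, green, and blue primaries published in the respective ITU-R specs (the shoelace calculation below is our own sketch, not part of either standard):

```python
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle in CIE 1931 xy space."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Red, green, blue primary chromaticities (x, y) from the ITU-R specs.
rec709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
rec2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

a709, a2020 = triangle_area(*rec709), triangle_area(*rec2020)
print(f"rec.2020 triangle is about {a2020 / a709:.1f}x the area of rec.709")
```

By this simple measure, rec.2020's triangle covers nearly double the xy-area of rec.709, which is why those reds pop so much harder.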
With the new chip comes a new chipset. Luckily, there's no new socket (we're still on LGA 1151, by the way), but Z170 is dead, replaced by Z270. Z270 brings some worthwhile updates: 24 lanes of PCI Express 3.0, 10 USB 3.0 ports, and support for Thunderbolt 3, all native to the 200 series chipset. Kaby Lake and Skylake are forward and backward compatible across all LGA 1151 mobos, but make sure you have the latest BIOS revision for stability.
The rule for upgrading still applies: if you don’t absolutely need one of the new features and are within one or two generations of Kaby, save your money. If not, you can pick up the 7700k for $350 online.
The Two Faces of nVidia: the GeForce GTX 1080
As exciting as the new releases from Intel are, this is really Christmas in July, thanks to nVidia releasing its new flagship video card, the GTX 1080. Like most flagship models, this card is touted as the bee's knees in performance at any price. It's all about crushing the competition and driving benchmark numbers through the ceiling. Spoiler alert: that's exactly what you get with this card.
The GTX 1080 is the fastest single-GPU card in existence right now. Not even the Titan X with its absurd price tag can outperform nVidia's new baby. This is also the first semi-affordable path to 4k gaming. Nothing does 4k at high settings with anti-aliasing on, but with it turned off you can average 60 fps on high settings. At 1080p, you don't even want to know. We didn't see numbers below 80 on FRAPS.
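Those 1080p framerates stop being surprising once you count pixels; 4k asks the GPU to shade four times as many per frame (simple arithmetic, not a benchmark):

```python
# Pixels per frame at each resolution the card is being asked to render.
uhd_4k = 3840 * 2160   # 8,294,400 pixels per frame
fhd    = 1920 * 1080   # 2,073,600 pixels per frame

print(f"4k draws {uhd_4k / fhd:.0f}x the pixels of 1080p per frame")
# At 60 fps in 4k, the card fills roughly half a billion pixels every second.
print(f"{uhd_4k * 60 / 1e6:.0f} million pixels/s at 60 fps in 4k")
```

A quarter of the pixel load is why a card that merely holds 60 fps at 4k blows past 80 fps at 1080p.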
Let's talk a little about the technology behind the 1080. It's powered by a 16nm FinFET Pascal GPU, nVidia's first new lithography after being stuck at 28nm for what seemed like forever. There's new memory onboard, too, called GDDR5X, with higher frequency, more capacity per chip, and better power efficiency. The 1080 carries 8 GB of it, running at 10,000 MHz (effective) on a 256-bit memory bus. The TDP is slightly higher than the 980's, drawing about 15 more watts, and the card heats up faster because the fan spins at lower RPMs. That said, it stays fairly cool, with an internal temp limit of 82°C.
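For reference, the peak memory bandwidth those specs imply falls straight out of bus width times data rate (our own calculation from the published numbers):

```python
# Peak GDDR5X bandwidth: bus width (bits) x effective data rate (GT/s).
BUS_BITS = 256
DATA_RATE_GTPS = 10.0  # 10,000 MHz effective = 10 gigatransfers/s

bytes_per_transfer = BUS_BITS / 8               # 32 bytes across the bus
bandwidth_gbs = bytes_per_transfer * DATA_RATE_GTPS

print(f"{bandwidth_gbs:.0f} GB/s peak memory bandwidth")  # 320 GB/s
```

That 320 GB/s figure is why the 256-bit bus gets away without HBM, at least for now.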
Everything else is the same, just more of it or faster: the transistor count is up to 7.2 billion, there are 2,560 CUDA cores and 64 ROPs, and the core clock is 1,607 MHz, boosting to 1,733 MHz, with output through DisplayPort 1.4 or HDMI 2.0b. But this is not the perfect card, or even the best of what nVidia is capable of.
Let's start with the elephant in the room. This isn't the uninhibited Pascal. What we have here is a pared-down version of the much pricier Tesla P100 compute card. The consumer chip is the GP104, successor to the outgoing Maxwell GM204, not the full GP100 found in the Tesla. We assume nVidia is keeping its full-on GP100 for future cards, like a GTX 1080 Ti or updated Titans. That's not a bad thing for prices: the Tesla is clear into the five-figure range, and no one playing Far Cry is shelling out that kind of cash for smoother pixels.
The other thing that bothers us is the lack of HBM 2.0. We saw the first iteration of High Bandwidth Memory on AMD's last generation of top-end cards, and it is a great advancement that we were hoping would become the standard memory interface from here on out. While the 256-bit bus on the GTX 1080 is sufficient, it would have been comforting to get the newer HBM 2.0 for a little future proofing.
The GTX 1080 is available now for $599 ($699 if you want the Founders Edition, which you don't, because it's a huge waste of money). If you have the last generation of nVidia cards, wait and see what prices look like in six months and buy one for Christmas, or hold out until next summer, when the next generation arrives and you'll see the biggest difference.