So, in theory this should be a big deal for computer people. It's been around six years since Intel's Sandy Bridge (2nd generation), and this is the first time since then that there's been anything of note to talk about in the consumer CPU market.
It was just announced a little bit ago, so benchmarks are sparse, and all we have is a few videos from the likes of:
What we see in these CPUs is a lot more cores for the dollar (all with hyperthreading); however, the cores themselves can't compete with Intel's. For example, in the 3rd video I linked, the 1800X was directly competing with the 6900K, and it had identical single-core performance.
The 6900K, however, is a Broadwell processor, so we can roughly expect a 10% per-core improvement in the 7700K at the same clock speed. The other important thing to note is that the 7700K is clocked at 4.2 GHz base, while the 6900K is clocked at 3.2 GHz base. This sample math suggests to me that the 7700K should have roughly 44% higher single-core performance than the 1800X without any OCs. That said, the Ryzen 7 chips have twice as many threads, so if multithreading scaling were perfect, in theory they should be about 39% faster on tasks that can utilize all those cores.
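The back-of-the-envelope math above can be spelled out; this is just the post's own assumptions (a ~10% Broadwell-to-Kaby-Lake per-clock gain, the listed base clocks, and the 1800X matching the 6900K per core) plugged into a quick sketch, not a measurement:

```python
# Rough estimate, assuming the 1800X matches the 6900K per core
# (as in the linked benchmark) and a ~10% Broadwell -> Kaby Lake gain.
ipc_gain = 1.10      # assumed per-clock improvement, Broadwell -> Kaby Lake
clock_7700k = 4.2    # GHz, base clock of the 7700K
clock_6900k = 3.2    # GHz, base clock of the 6900K

st_ratio = ipc_gain * clock_7700k / clock_6900k
print(f"7700K single-thread advantage: {st_ratio - 1:.0%}")  # ~44%

# With twice the threads and perfect scaling, Ryzen's theoretical edge:
mt_ratio = 2 / st_ratio
print(f"1800X multi-thread advantage: {mt_ratio - 1:.0%}")   # ~39%
```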
Anyway, from a consumer standpoint, the 1600X and the 1700 will likely be the sweet spot, and from looking at the specs, I can safely say they're the best performance for the dollar. There's a lot of information we still need about motherboards and such, but what we can say is that the 1700 (priced at $330) will be around 30% faster than the similarly priced i7 7700K for tasks that can utilize all those cores. However, for whatever reason, a lot of games won't even use more than 4 threads, and just about none will use more than 8, so is there any reason to choose this over Intel?
This video here is an excellent demonstration of how many cores and threads you need to play most games with a mid-range graphics card:
Thoughts on Ryzen? How will this shape our custom rig building in the months to follow?
Ryzen chips go on sale March 2nd; you can preorder them now.
In my aged opinion, AMD was always more into future-proofing their chips. When I built mine (15 years ago), it was still up to par 2-3 years later. I think AMD is including all of these cores and multithreading to keep it relevant as VR and 4K-8K become more consumer-friendly.
As such, I expect the old battle lines to be drawn in a few months once these bad boys are out and people get a good chance to play with them and see how they perform. I'm looking forward to knowing more about what mobos and gpus are going to be compatible. I've been seriously considering building a new PC for work and some light gaming and AMD may have just solved my problems.
I've been looking to upgrade my i5 760. I held off on upgrading because there was a chance that AMD could make a great CPU for cheap, or at least force a price drop from Intel. So far, from what we've heard, Ryzen sounds pretty compelling. I'd like to see some motherboard pricing. I'm looking forward to seeing some benchmarks when these things get released, especially on some of the lower-end stuff like the 1300. It's priced similarly to the i5 7400; it's slower, but it has 8 threads. Really curious how that thing is going to perform.
If the B350 motherboards are priced right, some of this lower end stuff could be killer. It's very possible that an overclocked Ryzen 1100 or 1200+mobo+cooler would beat a locked i5+mobo at a lower price (which would be great since I was considering getting a locked i5...).
Maybe I'm just getting overly excited and setting myself up for disappointment, but you know... the worst-case scenario is the CPU sucks and I just get an i5 at a high price, which is what I was going to do anyway.
The only thing I wish were standardized is mobos and chips: somehow make mobos accept either chip, so that if we find Intel doesn't work out, we can swap in an AMD without having to send everything back and exchange it.
AMD has been putting their stuff into the budget sector and Intel has dominated most every other aspect. I'm looking forward to the competition and joining the old debate on who is better.
So I've recently decided to build a new PC for myself for the first time, and I'm waiting for this to release to see its effects on the market. Do you guys know how long it takes for accurate performance benchmarks to become available, and how long it'll take for Intel to respond with price decreases? I know Intel is releasing a new version of their chips at the end of the year, but I'm not gonna wait that long.
I want to see proper independent benchmarks; so far it's just hype from AMD. Also, the NDA for reviews apparently doesn't lift until release day, which I personally find a bit questionable.
but what we can say is that the 1700 (priced at $330) will be around 30% faster than the similarly priced i7 7700K for tasks that can utilize all those cores.
Should be a lot more especially when you take into account OCing
---
However, for whatever reason, a lot of games won't even use more than 4 threads, and just about none will use more than 8
but what we can say is that the 1700 (priced at $330) will be around 30% faster than the similarly priced i7 7700K for tasks that can utilize all those cores.
I think from the numbers we've seen, 30% sounds right on point.
Also, nice Wikipedia article. All I know is that for our CFD simulations, almost everything could be parallelized. We ran a cloud computing server where we used some 80 cores, and there were significant performance gains. Most of the processing power these large-studio games need is in the physics engine, and currently parallelization there isn't very hot.
I think from the numbers we've seen, 30% sounds right on point.
+30% performance with 2x the core count in a nearly-100% parallel task would be awful; that would mean that the core performance was 1.54x weaker than Kaby Lake which is quite clearly not the case.
The actual multithreaded performance would be 2x (due to 2x core count) minus the difference in core performance due to the architecture and achievable clock speeds - twice as many cores with 80% performance-per-core = 1.6x performance (0.8*2) for example.
A number like 30% faster than a 7700k is probably coming from a test that was not scaling well across cores and/or from a large frequency gap that may not hold up when you can OC both CPU's (the $329 bin has low stock clocks)
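The scaling model described above is simple enough to sketch; the 0.8 per-core figure is the post's own illustrative number, not a Ryzen measurement:

```python
def mt_speedup(core_ratio, per_core_perf):
    """Multithreaded speedup = core-count ratio x relative per-core performance."""
    return core_ratio * per_core_perf

# The post's example: twice the cores at 80% per-core performance.
print(mt_speedup(2.0, 0.8))            # 1.6

# Working backwards: if doubling cores only gave +30%, per-core
# performance would be 1.3 / 2 = 0.65, i.e. ~1.54x weaker per core.
implied_per_core = 1.3 / 2
print(round(1 / implied_per_core, 2))  # 1.54
```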
I think from the numbers we've seen, 30% sounds right on point.
+30% performance with 2x the core count in a nearly-100% parallel task would be awful; that would mean that the core performance was 1.54x weaker than Kaby Lake which is quite clearly not the case. They've broken some world records already which were held by 8c16t Haswell/Broadwell CPU's and Skylake/Kaby is not a massive leap beyond that~
The actual multithreaded performance would be 2x (due to 2x core count) minus the difference in core performance due to the architecture and achievable clock speeds - twice as many cores with 80% performance-per-core = 1.6x performance (0.8*2) for example.
On February 24 2017 06:55 Cyro wrote: It varies program to program but many are functionally 100% at these core counts. x264 on many settings runs literally twice as fast (or like 1.99x faster) when you double core counts, many other usages do too.
Yeah, because it's basically one unique task spread across cores the entire time, but afaik you can't scale these gains linearly on such programs forever (performance starts decreasing above a certain core count due to bus width afaik? not really sure), and for programs such as games you won't have one giant parallelized process running the entire time.
But since the quote is about tasks, not games or "real" programs that do more than one task, yeah you can indeed double gains
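The "can't scale forever" point above is essentially Amdahl's law: with a serial fraction s, n cores give at most 1/(s + (1-s)/n) speedup. A small illustration (the parallel fractions are arbitrary example values, not measurements of any real program):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even at 95% parallel, 16 threads only get ~9.1x, not 16x:
for p in (0.95, 0.99, 0.999):
    print(f"{p:.1%} parallel on 16 threads: {amdahl_speedup(p, 16):.2f}x")
```

Only a fully parallel task (fraction 1.0) doubles in speed when core count doubles, which is why x264-style workloads scale nearly linearly while games usually don't.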
I think from the numbers we've seen, 30% sounds right on point.
+30% performance with 2x the core count in a nearly-100% parallel task would be awful; that would mean that the core performance was 1.54x weaker than Kaby Lake which is quite clearly not the case.
The actual multithreaded performance would be 2x (due to 2x core count) minus the difference in core performance due to the architecture and achievable clock speeds - twice as many cores with 80% performance-per-core = 1.6x performance (0.8*2) for example.
A number like 30% faster than a 7700k is probably coming from a test that was not scaling well across cores and/or from a large frequency gap that may not hold up when you can OC both CPU's (the $329 bin has low stock clocks)
Sorry, when I said perfectly parallelized, that was a mistake on my part; what I meant was the best-optimized programs, so like 95%-97% parallelization?
On a per core performance, I expect the 7700k to be 25-30% quicker (by looking at the per core performance of the 6900k vs 1800X benchmark, and then comparing 6900k vs 7700k).
On February 25 2017 05:04 Cyro wrote: Cinebench ST for OC'd Kaby hits about 220 while Ryzen's probably reaching about 180 at OC (162@4ghz)
That's a 22.2% difference, but we'll see. My expectation for the OC on air is 4.3 GHz or so, seeing as they said the 1800X can hit 4.1 GHz, and a bit higher with better cooling thanks to the dynamic overclocking thing.
So that's right on point with what I'd expect and agrees with my numbers: 162 * 4.3/4.0 = 174.15, then 220/174.15 gives 26.3%. But anyway, this is the 1800X you're comparing to; the 1700 is more appropriate, as that's $20 cheaper than the 7700K.
Those I don't expect to be OC'd as high (though I don't know the exact differences yet, like what the actual difference between the 1800X and the 1700X is, or if it's just a scam). But let's say it OCs 300 MHz past its turbo to 4.0 GHz, and now you have Kaby Lake being 35.8% faster per core than Ryzen (assuming your original numbers are correct; I don't know what the Kaby Lake OC is, but I'm assuming somewhere around 4.8 GHz).
So if we use that 35.8% figure, that's going to be significantly higher performance in games, and in a game that was 100% perfectly parallelized you'd get a 47.3% performance improvement from Ryzen (2 / 1.358). Anyway, I think the highest anything real-world will achieve is say 30-40%, and Intel will blow games out of the water. But anyway, that's why I'll reserve my final judgement until we see benchmarks across the board.
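The Cinebench arithmetic in this post, written out (the 162 and 220 scores and the 4.3 GHz overclock are the thread's working assumptions, not verified results):

```python
ryzen_st_4ghz = 162   # assumed Cinebench ST score for Ryzen at 4.0 GHz
kaby_st_oc = 220      # assumed Cinebench ST score for overclocked Kaby Lake

# Scale the Ryzen score linearly to a hypothetical 4.3 GHz overclock:
ryzen_st_43ghz = ryzen_st_4ghz * 4.3 / 4.0
print(round(ryzen_st_43ghz, 2))                                  # 174.15
print(f"Kaby Lake lead: {kaby_st_oc / ryzen_st_43ghz - 1:.1%}")  # ~26.3%
```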
Either way, it's a great job by AMD, but it in no way makes Intel obsolete, as opposed to what I've been reading in some other places on the internet. It's just a tighter race now, and from my preliminary calculations and assumptions, I'd still go with Intel if I were buying my next CPU with the intention of gaming (at this budget).
I think the 1500 and 1600X are going to warrant some attention though; I think they could outperform the i5s quite well at a similar price range.
The Ryzen 1100 and 1200X don't impress me much, as I think the G4560 outperforms them at less than half the price, and the 1300 and 1400X are imo only marginally better than the G4560 for most things at almost triple the price (kind of the same comparison as the original one: half the cores, but the Pentium cores are just as good if not marginally better, and 4 threads will be just fine for most games, as seen in the last video of the OP). Plus, the 6c/12t parts are only 30% more money for 50% more theoretical power, so I think that upgrade is well worth it.
I don't know what Kaby Lake OC is, but I'm assuming somewhere around 4.8Ghz)
~5ghz @1.37v which is fine for air cooling. Some 4.9 and some 5.1 with good voltages but the OC variance is actually much smaller than usual so very few fall outside of that range
---
I am extremely interested in gaming & OC benchmarks, plus just dozens of other benchmarks, when the NDA lifts. So far I'd guess there is a substantial gap in ST performance between Kaby and Ryzen, but the MT-perf/$ of Ryzen looks to be on another level~
Anyway, I think the highest anything real world will achieve is say 30-40% percent
Parallelization is basically a nonfactor for a lot of programs that can achieve values like 99% or 99.9% parallel and scale to hundreds of threads; 16 is nothing. Twice as many cores, twice as fast. Other times you have twice as many cores, 0% faster.
They've already shown Ryzen 8c16t @4ghz to be ~50% faster at cinebench MT than a 7700k @5ghz. Lower performance-per-core, huge frequency gap but twice as many cores doing work
On February 25 2017 05:45 FiWiFaKi wrote: The Ryzen 1100 and 1200X don't impress me much, as I think the G4560 outperforms them at less than half the price, and the 1300 and 1400X are imo only marginally better than the G4560 for most things at almost triple the price (kind of the same comparison as the original one: half the cores, but the Pentium cores are just as good if not marginally better, and 4 threads will be just fine for most games, as seen in the last video of the OP). Plus, the 6c/12t parts are only 30% more money for 50% more theoretical power, so I think that upgrade is well worth it.
I'd like to see how an overclocked 1100 or 1200X performs compared to a locked i5. With the price difference between a Ryzen 1100 and say an i5 7400, an overclocked Ryzen may be better and cheaper than a locked i5.
The G4560 seems to kinda be in a class of its own. None of the Ryzen products seem to compete with it. I find it interesting that AMD isn't even trying to target the current budget king
I'm excited to see how Intel responds. They still have great products, so if we see a price drop in response to this, we all win!
Looks like ST perf roughly matches up with the leaks and overclocking is around 4ghz rather than 4.5
Very strong MT perf and MT perf/dollar but not very competitive ST and falling short in some games because of that
10% clock for clock + 20-25% freq difference = 1.32 - 1.375x slower on 1 thread.
Also curious to see if the memory latency thing is a software issue (not registering properly) or if it's actually broken in some way and could be improved
Several reviews talking about 5-15% performance loss when enabling SMT in games and we're still missing a lot of info, day 1 might not be the final story here.
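The 1-thread gap above is just the two estimated factors multiplied together (both the 10% clock-for-clock figure and the 20-25% frequency gap are the post's estimates):

```python
ipc_gap = 1.10                          # estimated clock-for-clock advantage
for freq_gap in (1.20, 1.25):           # estimated frequency difference range
    print(round(ipc_gap * freq_gap, 3)) # 1.32, then 1.375
```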
Honestly, I would love to see a 7700K without the graphics part and more cores instead. IIRC (and it was a long time ago), almost half of the 7700K die is covered by the graphics part, which is useless to gamers. With 2 more cores (I don't think they could add 4 and maintain the insane clocks) it would be interesting. Probably not beating Ryzen in productivity software, but for gaming it could be awesome.
Well, I keep dreaming. In the meantime I can wait for the new CPU war and the response from Intel. (I don't need the upgrade now.)
When we approached AMD with these results pre-publication, the company defended its product by suggesting that intentionally creating a GPU bottleneck (read: no longer benchmarking the CPU’s performance) would serve as a great equalizer. AMD asked that we consider 4K benchmarks to more heavily load the GPU, thus reducing workload on the CPU
From GamersNexus
They were doing this all the time in their pre-launch benchmarks, but I had considered it to be mostly due to incompetence rather than malice.
Honestly, I would love to see a 7700K without the graphics part and more cores instead. IIRC (and it was a long time ago), almost half of the 7700K die is covered by the graphics part, which is useless to gamers. With 2 more cores (I don't think they could add 4 and maintain the insane clocks) it would be interesting. Probably not beating Ryzen in productivity software, but for gaming it could be awesome.
Well, I keep dreaming. In the meantime I can wait for the new CPU war and the response from Intel. (I don't need the upgrade now.)
Keep in mind that the iGPU is useful for some enterprises that need fast i7s for their workers but don't need a dedicated GPU, for example. From the review of the R7 1800X I saw (base 100 = i7 2600K, average over the games tested):
R7 1800X: 110
Broadwell-E, Kaby Lake, etc.: 127 (5820K) to 137 (7700K)
3770K: 108
4790K: 126
I'll read the entire article later; there is some complicated stuff about the cache / SMT vs HT / memory latency in the French review I found.
The other review I quickly looked at says roughly the same: very good in applications, but not that good in games. The 4K results (strongly GPU-bound) in the leaks from AMD were indeed a sign: they wanted to hide some of the more CPU-dependent results on the gaming side.
On March 03 2017 04:17 ShoCkeyy wrote: I wish I can update to AMD, but due to having to work in the MAC OSX environment, I'm screwed to be stuck with i7 in my hackintosh
Intel isn't over. It's far from it. What we can expect is cheaper CPUs.
Well, after watching what I saw, with the issues we currently see... If I was buying a processor, I think I'd still buy the 7700k.
Gaming performance was even worse than I expected, and the productivity software performed the way we would think it would. If gaming is your priority, I would go with the 7700k, though we see that in some applications it can be worth it (though like one reviewer said, most of the time these tasks are offloaded to the GPU anyway).
When we approached AMD with these results pre-publication, the company defended its product by suggesting that intentionally creating a GPU bottleneck (read: no longer benchmarking the CPU’s performance) would serve as a great equalizer. AMD asked that we consider 4K benchmarks to more heavily load the GPU, thus reducing workload on the CPU
From GamersNexus
they were doing this all of the time in their pre-launch benchmarks but i had considered it to be mostly due to incompetence rather than malice
That is so bad. There is an argument that you're getting a top end CPU, so you're probably gaming on higher resolutions, but still, the advice from AMD is awful.
On March 03 2017 04:17 ShoCkeyy wrote: I wish I can update to AMD, but due to having to work in the MAC OSX environment, I'm screwed to be stuck with i7 in my hackintosh
Intel isn't over. It's far from it. What we can expect is cheaper CPUs.
I don't think they need to lower the price of any CPU bar X99, maybe? There is something inherently bad for gaming in the Ryzen architecture (it performs as well as or better than some X99 CPUs in applications, but does much worse even in multithreaded games), so the 7700K is still very good value. It performs worse than the 6900K/6950X, so if you need the performance and don't care about the price you go for those; and if you already have a 5820K / 5930K etc., you don't need to upgrade and you have a more balanced CPU anyway.
I saw a lot of people on some subreddit hating on Intel for their business practices, but AMD set up a nice smokescreen to make people believe it would perform almost as well as the competition in gaming while bossing it in other tasks, and it was lies. Seems like a dirty move toward the consumer to me :o. It would have been better to say outright not to expect too much for gaming, but that would have sold fewer chips!
Edit: the top-end argument is a bit of a fallacy, since GPU performance improves very fast; in two or three years there will probably be some CPU bottleneck at 4K too?
I don't think so, because when you're buying a PC now, you don't have a CPU bottleneck, and in 5-6 years when you build another PC, the performance of both units will have increased (probably by fairly similar amounts). Intel is definitely not too worried.
Kaby Lake X is coming out in Q2; those should be what, 15-20% faster IPC than Broadwell, and maybe 5-10% higher clocks.
I think the 6800K is priced fine. If they make a 6-core processor for $434 with a 25% performance improvement (25% is a bit higher than normal, but with a little pressure, why not), it'll match or beat the 1800X in everything.
The 8- and 10-core variants have always been super overpriced, and seeing the 8-core go down to say $800 and the 10-core to $1200 would be a lot more reasonable (though still super expensive for the performance). It just goes to show they have a straightforward strategy that doesn't cut much into their profits, without having to do things like make the high-end consumer CPUs 6-core or add hyperthreading to i5s... Even though I'd like to see it, they'd drive AMD out of the market with something like that.
The comparisons to the 6800K and 6900K are just a bit annoying, since these are previous-generation processors, and the 6900K especially is overpriced as fuck. Having to hear "oh look, it does about the same thing at half the cost" is so annoying; nobody buys that processor. It's like Costco comparing their clothes to some expensive brand-name stuff that nobody buys and saying "hey, look".
From what I've been reading and watching, it seems like Ryzen has very poor OC potential, so there won't be nearly as much performance to squeeze out compared to Intel.
edit: I need to stop reading youtube comments, filled with idiots who are probably looking at a CPU spec sheet for the first time.
That is so bad. There is an argument that you're getting a top end CPU, so you're probably gaming on higher resolutions, but still, the advice from AMD is awful.
Edit: the top end argument is a bit of a fallacy, since GPU performance gets better very fast, in two or three years there will probably be some CPU bottleneck on 4K too?
The main problem here is often missed. It's not that 4k somehow runs way differently on the CPU than 1080p does - the problem is that on 4k with graphically demanding games you run into something else in the system running even worse than the weak CPU.
Turning your graphics from 1080p to 4k won't make your framerate go up.
Example situation:
1080p: 65fps with CPU A or 80fps with CPU B. GPU can handle 120fps. The framerate that you get is 65 on the Ryzen CPU or 80 on Kaby so you have clearly CPU limited performance with the faster CPU giving more FPS.
4k on the same hardware: 65fps with CPU A or 80fps with CPU B. GPU can handle 30fps. Both systems are running equally poorly because the GPU can only handle 30fps.
The CPU places an FPS ceiling on the game which changes depending on the settings and situation and that ceiling is below the performance preferences of some people in some games. When the performance demand is below the ceiling of both CPU's there is no CPU limit and everything is fine. When the performance demand is over the ceiling of one or both CPU's you see (and need) performance differences.
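The example above reduces to the framerate being the minimum of the CPU and GPU ceilings; a tiny sketch using the post's own example numbers:

```python
def effective_fps(cpu_ceiling, gpu_ceiling):
    """The achieved framerate is capped by whichever component is slower."""
    return min(cpu_ceiling, gpu_ceiling)

# 1080p: the GPU can push 120 fps, so the CPU gap is visible.
print(effective_fps(65, 120), effective_fps(80, 120))  # 65 80
# 4K: the GPU manages only 30 fps, masking the CPU difference.
print(effective_fps(65, 30), effective_fps(80, 30))    # 30 30
```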
I know? But when the GPU stops bottlenecking 4K and (if?) 4K 144Hz gets here or whatever, you could finally hit a CPU bottleneck at 4K as well. So saying "see, at 4K you get the same performance, and you're buying a high-end CPU so you've got to be playing 4K, right? Don't look at 1080p benchmarks, pretty please" is valid only as long as you want to play at 4K and 4K is still GPU-bound.
However, since we tend to upgrade the GPU more often than the CPU/mobo, and GPU performance progresses faster than CPU performance (the 1070 vs 970 gain is insane, whereas CPUs have progressed like 5-10% a year since Sandy Bridge), if this trend continues we could be CPU-bottlenecked at 4K too once GPUs can finally deliver good 4K performance.
But when GPU stops bottlenecking 4K and (if?) 4K 144Hz are there or whatever, you could finally hit CPU bottleneck on 4K as well.
Yeah, you'll eventually get up to 65fps on the weak CPU and 80fps on the strong CPU and then be CPU limited again.
The thing is, saying that CPU doesn't matter @ 4k is not actually correct. It's more correct to say that CPU doesn't matter much for 30fps but matters a lot more for 60fps or 90fps - that's what is actually being tested here.
I know? But when the GPU stops bottlenecking 4K and (if?) 4K 144Hz gets here or whatever, you could finally hit a CPU bottleneck at 4K as well. So saying "see, at 4K you get the same performance, and you're buying a high-end CPU so you've got to be playing 4K, right? Don't look at 1080p benchmarks, pretty please" is valid only as long as you want to play at 4K and 4K is still GPU-bound.
However, since we tend to upgrade the GPU more often than the CPU/mobo, and GPU performance progresses faster than CPU performance (the 1070 vs 970 gain is insane, whereas CPUs have progressed like 5-10% a year since Sandy Bridge), if this trend continues we could be CPU-bottlenecked at 4K too once GPUs can finally deliver good 4K performance.
Considering a 2600k runs almost every game in 1080p within 20% of a 7700k when using a Titan XP, I think a true CPU bottleneck in 4K will take a long long time to reach. Heck, just take a look at this video:
When using a GTX 1080 at 1080p with a G4560 (a $64 CPU), more than half the games don't show a significant bottleneck. In most games with a GTX 1080 at 1440p, the G4560 has no bottleneck, minus a few outliers like super CPU-intensive games such as Civ.
You can see in that video that a GTX 1060 at 1440p creates practically no bottlenecks in gaming ever; we're very near the point where higher 1080p performance is obsolete, since framerates are already so high that it's getting unnecessary. Point is, if a G4560 can fully handle a GTX 1060 at 1440p, it's going to be a long, long time until a 7700K-type processor gets bottlenecked at 4K... especially in the consumer price range.
Most of those games (outside of the first two) were well within 20% on average. GTA V and The Witcher 3 are definitely on the higher end of CPU usage, and even in those games they were showing gameplay that is very CPU-intensive compared to normal, so the averages will be closer than it suggests. Not sure how much the clock speeds play a part, but anyway, I think it's fairly telling that older processors are able to handle themselves quite well, even with the most modern graphics cards, which saw an absurdly high performance jump... in 1080p no less.
Most of those games (outside of the first two) were well within 20% on average
Lots with >30% gains, two in that video (fc4 & gta). Some of the tests showing less are partially or entirely GPU limited
That's what I mean when I said gaming though. Naturally for most parts of the game you're going to be GPU limited, so the 2600k and 7700k in gaming perform reasonably similarly with a Titan X/1080ti.
Also, at times they might be closer to 30%, but that's only at the peaks; once the gameplay slows down a bit, they converge more. It might be a bit of a cop-out, but I believe I said average fps or something along those lines.
I quoted the avg FPS difference (30% and 33%). Quite a few games get >30% averages, and sometimes >40% when you use fast RAM on both platforms.
If you're GPU limited at a higher FPS than you want, then CPU performance doesn't matter: it's high enough, and either option would be fine, perhaps even a weaker CPU.
The problem is when that performance is not high enough.
It's very rarely important if you want to play at 30fps, and sometimes important for people targeting 60fps (if you play WoW or SC2 on one of these CPUs, for example, a 7700k @ 5GHz will give a significantly better experience than an OC'd 2600k or a 1700 @ 4GHz). The higher the FPS you want to target, the more CPU performance actually matters.
--
Some people (none in particular, but a lot of vocal people arguing against CPU benches today) seem to be making the point that both CPUs manage X fps (with X being anything from 30 to 150+), and since X fps is what they wanted or just happened to get with their GPU, performance differences therefore do not exist or do not matter. I think that's a terrible way to go about benchmarking.
The best things to look at, IMO, are:
#1 - What the performance difference is (is one CPU 10% faster than the other or 30% faster when both are limiting performance?)
and
#2 - Where that difference occurs (40fps vs 50fps is much more relevant than 220fps vs 280fps)
With those two pieces of data, anybody can make an educated, personal decision about which CPU they'd like to use, based on the games they play and the FPS they want to target in those specific games. If one or both pieces of information are missing, it's not possible to make as accurate a decision.
A benchmark of a game at 4K that achieves 33fps and caps neither CPU does not tell you what either CPU is capable of, and it does not tell you what the gap between the CPUs would be; it gives you neither of the two pieces of information. The useful data we get out of it is quite limited: both CPUs handled at least 33fps. A shocking number of these benchmarks have been posted today, and it pains me to read each and every one of them.
The best way to get this data is to use flagship graphics hardware and drop the resolution (at least to 1080p) so that the framerate climbs until it hits a wall. If that wall is CPU limited, you get both important pieces of data.
If it's not CPU limited, either CPU will be fine, since even the weaker CPU can handle your very high performance target. The weaker CPU handling low performance does not tell us much, but the weaker CPU handling very high performance is very useful information.
This is pretty overcomplicated/ranty, just trying to explain my reasoning there
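A minimal sketch of that reasoning in Python: the FPS you observe is roughly the minimum of what the CPU and the GPU can each deliver, so a low GPU cap (like a 4K test) hides any CPU gap. All the numbers below are made up for illustration, not benchmark results.

```python
# Toy bottleneck model: the slower component caps the observed frame rate.
# cpu_cap / gpu_cap are hypothetical per-component FPS ceilings.

def observed_fps(cpu_cap: float, gpu_cap: float) -> float:
    """The observed frame rate is capped by whichever component is slower."""
    return min(cpu_cap, gpu_cap)

# A 4K test where the GPU caps everything at 33fps hides the CPU gap...
print(observed_fps(cpu_cap=50, gpu_cap=33))   # 33 (weaker CPU)
print(observed_fps(cpu_cap=90, gpu_cap=33))   # 33 (stronger CPU, same result)

# ...while dropping resolution raises the GPU cap and exposes the gap.
print(observed_fps(cpu_cap=50, gpu_cap=180))  # 50
print(observed_fps(cpu_cap=90, gpu_cap=180))  # 90
```

With a high GPU ceiling you learn both pieces of information at once: where each CPU's wall is, and how big the gap between them is.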
If I was more interested in higher framerates than higher resolutions, would these benchmarks (Ryzen + Titan X Pascal at 1080p) be more relevant? I understand the whole "well, if you're spending THAT much on hardware, what are you doing at 1080p?" argument, but at this moment I'm more interested in getting a 1080p 144Hz display than a 4K one. Pairing a Ryzen 5 with some mid-tier graphics card three years down the line (one that beats today's Titan X) doesn't sound all that farfetched, and I think it's perfectly reasonable to want a CPU that doesn't get bottlenecked within just a few years.
I'm OK with 1080p60 for now, but I'd like to have a system where if I want >60 FPS, I'd simply upgrade my Graphics Card instead of swapping multiple components
If I was more interested in higher framerates than higher resolutions, would these benchmarks (Ryzen+Titan X Pascal at 1080p) be more relevant?
Yeah - they're particularly relevant if you want to play at or above the FPS ranges achieved by the weaker CPU in the benchmark. The more FPS you want, the more relevant CPU performance is.
Graphics cards can do either high FPS on easy settings/resolutions or low FPS on hard settings/resolutions just by changing settings: 180fps at 1080p low and 25fps at 4K ultra with the same card is quite possible and common. That kind of scalability does not exist for CPUs. If a CPU manages 70fps, you're usually stuck with 70fps or under, and hopefully that's the framerate you wanted to play at, because none of the settings will change it much.
CPU performance is also at a relative standstill compared to GPU performance. We've got maybe +40% in 6 years between the 2600k and 7700k; in the same period the gap between the 680 and 1080 Ti is more like +300%. It's easy to overpower GPU-bound games by brute force 2-5 years later, but a 7700k still struggles at times with some older titles that were CPU demanding in their day.
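To put those two figures on the same footing, here is the rough per-year improvement rate each implies, taking the post's +40% CPU and +300% GPU over ~6 years as given (illustrative arithmetic only, not measured data):

```python
# Convert a total multiplicative gain over a number of years into a
# compound per-year improvement rate: (1 + total) ** (1/years) - 1.

def annual_rate(total_gain: float, years: float) -> float:
    """Compound annual rate implied by a total gain over `years` years."""
    return (1.0 + total_gain) ** (1.0 / years) - 1.0

print(f"CPU: {annual_rate(0.40, 6):.1%} per year")  # ~5.8% per year
print(f"GPU: {annual_rate(3.00, 6):.1%} per year")  # ~26.0% per year
```

That gap is why a GPU-bound game gets brute-forced within a couple of hardware generations while a CPU-bound one can stay demanding for a decade.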
1080ti releases today and is basically the same as Titan XP performance
I think you know the answer. GPU cores are weaker than CPU cores. Also, older games are compiled with older CPU instructions.
In terms of whether higher core clocks or higher core counts will matter more for games in the coming years, I think I'd bet on higher core count over higher clock speeds.
On March 06 2017 06:39 NovemberstOrm wrote: In terms of whether or not high core clock or higher core count will matter more in coming years for games i think i'd bet on higher core count over higher clock speeds.
ST performance is worth the same as MT performance with infinite parallelization
ST performance is worth an increasing amount relative to MT performance as parallelization drops
Programs with limited parallelization have a lot of trouble extracting anywhere near 1.5x more performance from 1.5x more cores, but a much wider range of programs can get 1.5x performance from the same number of cores that are 1.5x faster.
2 cores to 4 scale well in today's games, and some games scale okay to 6; scaling to 8 is poor, and threads 9 through 16 are doomed to near-uselessness.
That leads to 4c8t and 6c12t CPUs eating into the gains of 8c16t CPUs: games that scale well across many threads tend to get huge benefits from SMT on a 4c8t CPU and some benefit on 6c12t (utilizing threads up to ~6-8), but the 8c16t CPU gets minimal if any benefit from SMT, because threads 9-16 are harder to reach and scale meaningfully from without very high parallelization.
This has been gradually evolving over the last decade and dx12/vulkan have helped a little but not all that much, we need a lot more.
People have been betting on core count over core speed since we invented multi-core CPUs, and they're still shocked every time it falls short (hi, Ryzen hype train) because they don't understand Amdahl's Law and similar scaling issues. This video is a really nice watch if you've got an hour sometime:
For most CPU-heavy games you can (extremely roughly) consider the parallel fraction to be about 30-85% to fit the scaling numbers.
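Amdahl's Law makes the point concrete. A quick Python sketch using that rough 30-85% parallel fraction range (illustrative assumptions, not measurements):

```python
# Amdahl's Law: with parallel fraction p and n cores, the overall
# speedup is 1 / ((1 - p) + p / n). The serial fraction (1 - p)
# caps the gains no matter how many cores you add.

def amdahl_speedup(p: float, n: int) -> float:
    """Overall speedup from n cores when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.30, 0.50, 0.85):
    s4, s8 = amdahl_speedup(p, 4), amdahl_speedup(p, 8)
    print(f"P={p:.0%}: 4 cores -> {s4:.2f}x, 8 cores -> {s8:.2f}x, "
          f"gain from doubling cores: {s8 / s4 - 1:.1%}")
```

Even at an optimistic 85% parallel fraction, doubling from 4 to 8 cores adds far less than 2x, which is exactly the pattern game benchmarks keep showing.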
From my limited understanding of Ryzen, it's a brand-new CPU, and motherboard manufacturers had only 3 weeks to work on the BIOS side of things (bear in mind that Intel's enthusiast platform, X99, is still Broadwell-E, which is 2.5 generations behind the current gaming flagship, the 7700K).
So I think that if you only care about gaming, the 7700K is better today. AMD may improve their CPUs, and motherboards may improve, but that is not guaranteed.
In terms of encoding, streaming, and everything else, Ryzen does look very good, and it beats Intel in performance per dollar.
I personally will wait a bit longer and make my decision for next upgrade later.
In terms of core count vs. core speed (and also per-core instruction efficiency), people were already talking about going beyond 4 cores back in 2006. It's 2017 now, and there are still no games built for high core counts.
Why? Because it's damn hard to code a game for many cores. Even if you manage to perfectly split the workload between cores (and this is almost impossible), you still have to:
A) Make sure Windows understands that correctly and uses the resources correctly (so Microsoft has to do its job right as well, and that does not depend on you).
B) Make sure that antivirus, firewall, etc. (additional load from other sources) don't mess up the core load balance (again, outside of your control).
C) Make sure that the shortest command sent to a core finishes fast enough that it won't block other cores from moving on to their next command (this is a problem even in GPU-CPU, HDD/SSD-CPU, RAM-CPU, kernel-CPU, and USB-input-CPU communication; basically what most people call bottlenecks).
D) Manage each different hardware setup so it works balanced with the CPU and the load it has.
E) Make sure it will still work well on lower-core-count CPUs (otherwise you won't sell your game to a lot of potential customers).
On the flip side, faster cores means... it's just faster. That's it. No additional coding, no additional balancing or hardware work. You can see why this is the easier path for game developers.
The good news (or bad) is that core speed is limited by the hardware itself, so eventually there will be no option but to increase core counts and balance the loads. When that will happen is anyone's guess.
That's my take on it.
I will personally wait a bit and most likely go for Ryzen, either a single- or dual-socket build (1700). No rush here :D.
On March 05 2017 15:50 Cyro wrote: CPU performance is also at a relative standstill compared to GPU performance. We've got maybe +40% in 6 years between the 2600k and 7700k; in that same time period the gap between the 680 and 1080ti is more like +300%. It's easy to overpower GPU-bound games by brute force 2-5 years later but a 7700k still struggles at times with some older titles that were CPU demanding in their day.
I'm curious. I did hear that the performance improvement between 2nd to 7th generation is around 10% or less per generation. What's the improvement between my ancient 1st gen and the 2nd gen? My understanding is that it is more substantial.
It was a fair jump, and it all came from core performance, so it pushed benchmarks up across the board: a mix of per-clock performance gains and increased clock speeds at both stock and overclock, around 20-25% IIRC.
Digital Foundry review that I've been waiting for. So far it looks like some good comparisons and large anomalies with Ryzen.
~0-7% gains when going from 6 cores to 8 as well, despite +33.3% more cores. That shows how underutilized 8c16t actually is in these games, and how close the much cheaper 6-core version could be.
Excited about the Ryzen 5's coming out in less than a month. The i5 7600k is still gonna beat it in gaming, but it's cool for us, the consumer, that there are different products that do different things at various prices out there. I'm a little bit torn between the 4 core and the 6 core. I think I'll go with the Ryzen 5 1600 (6 cores 12 threads at $219 USD)
For motherboards, when I see something like "Supports DDR4-3200+(OC) Memory," does that mean 3200MHz RAM will work without me fiddling with anything? I sometimes see things like "Supports 2400, 2667 (OC)" and I'm not really sure what that means.
This means that if I buy 2x8GB DDR4 RAM at 3200 MHz, I can just stick it in and it'll just run at that speed, right? I don't have to OC or tinker with anything?
My current processor has lasted me 6-7 years and counting (i5 760), and I'm expecting my next upgrade to last about that long or longer, especially considering how slowly CPUs have been progressing lately. So I'm OK with splurging a little bit on more expensive RAM.
On March 19 2017 05:02 Purind wrote: Ah, so just to be clear, when I'm shopping for RAM, I should look for 2666, and if possible overclock above that?
I don't mind tinkering with things. Just wanna make sure I don't buy the wrong parts
No
The overclocked part is the CPU's memory controller (specced for up to 2666MHz), which has to run at 3200MHz or whatever to match the RAM sticks. You should buy sticks at the speed that you want to run.
---
We have confirmation from AMD that there are no silly games going to be played with Ryzen 5. The six-core parts will be a strict 3+3 combination, while the four-core parts will use 2+2. This will be true across all CPUs, ensuring a consistent performance throughout.