(I don't know what Kaby Lake OC is, but I'm assuming somewhere around 4.8GHz)
~5GHz @ 1.37V, which is fine for air cooling. Some chips do 4.9 and some 5.1 at good voltages, but the OC variance is actually much smaller than usual, so very few fall outside that range.
---
I am extremely interested in gaming & OC benchmarks, plus dozens of other benchmarks, when the NDA lifts. So far I'd guess there is a substantial gap in ST performance between Kaby Lake and Ryzen, but the MT perf/$ of Ryzen looks to be on another level~
Anyway, I think the most anything real-world will achieve is, say, 30-40%.
The parallel fraction is basically a nonfactor for programs that are 99% or 99.9% parallel and scale to hundreds of threads; for them, 16 threads is nothing - twice as many cores, twice as fast. Other times you get twice as many cores and 0% more speed.
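To put rough numbers on that (a minimal sketch, just plugging values into Amdahl's Law; the 99.9% and 0% figures are illustrative):

```python
# Amdahl's Law: speedup on n cores when a fraction p of the work is parallel.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# A 99.9%-parallel program barely notices its serial part at 16 threads...
print(speedup(0.999, 16))  # ~15.8x: twice the cores really is ~twice the speed
# ...while a fully serial program gains nothing from extra cores.
print(speedup(0.0, 16))    # 1.0x: 0% faster, no matter the core count
```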
They've already shown Ryzen 8c16t @ 4GHz to be ~50% faster in Cinebench MT than a 7700K @ 5GHz: lower performance per core and a huge frequency deficit, but twice as many cores doing work.
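Back-of-the-envelope, that result is about what you'd expect (a sketch with a made-up relative IPC factor, purely illustrative):

```python
# Crude MT throughput model: cores x clock x relative per-core IPC.
# The 0.95 IPC factor for Ryzen is an assumption for illustration only.
ryzen_8c = 8 * 4.0 * 0.95  # 8 cores @ 4.0GHz -> 30.4 (arbitrary units)
kaby_4c = 4 * 5.0 * 1.00   # 4 cores @ 5.0GHz -> 20.0
print(f"{ryzen_8c / kaby_4c:.2f}x")  # ~1.52x, i.e. ~50% faster when fully parallel
```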
Honestly, I would love to see a 7700K without the graphics part and more cores instead. IIRC (and it was a long time ago), almost half of the 7700K die is covered by the iGPU, which is useless to gamers. With 2 more cores (I don't think they could add 4 and maintain the insane clocks) it would be interesting. Probably not beating Ryzen in productivity applications, but for gaming it could be awesome.
Well, I can keep dreaming; in the meantime I'll watch the new CPU war and wait for Intel's response. (I don't need the upgrade now.)
Keep in mind that the iGPU is useful for some enterprises that need fast i7s for their workers but no dedicated GPU, for example. From the R7 1800X review I saw (average over the games tested, i7 2600K = 100 as the baseline):
- R7 1800X: 110
- i7 3770K: 108
- i7 4790K: 126
- Broadwell-E, Kaby Lake, etc.: 127 (5820K) to 137 (7700K)
I'll read the entire article later; there is some complicated stuff about the cache / SMT vs HT / memory latency in the French review I found.
The other review I quickly looked at says roughly the same: very good in applications but not that good in games. The 4K results (strongly GPU-bound) leaked by AMD were indeed a sign: they wanted to hide the more CPU-dependent results on the gaming side.
On March 03 2017 04:17 ShoCkeyy wrote: I wish I could upgrade to AMD, but since I have to work in the Mac OS X environment, I'm stuck with an i7 in my hackintosh.
Intel isn't over. Far from it. What we can expect is cheaper CPUs.
Well, after watching the reviews, with the issues we're currently seeing... if I were buying a processor, I think I'd still buy the 7700K.
Gaming performance was even worse than I expected, and the productivity software performed about the way we thought it would. If gaming is your priority I would go with the 7700K, though in some applications Ryzen can be worth it (although, as one reviewer said, most of the time those tasks are offloaded to the GPU anyway).
When we approached AMD with these results pre-publication, the company defended its product by suggesting that intentionally creating a GPU bottleneck (read: no longer benchmarking the CPU’s performance) would serve as a great equalizer. AMD asked that we consider 4K benchmarks to more heavily load the GPU, thus reducing workload on the CPU.
From GamersNexus
They were doing this all the time in their pre-launch benchmarks, but I had chalked it up to incompetence rather than malice.
That is so bad. There's an argument that since you're getting a top-end CPU you're probably gaming at higher resolutions, but still, the advice from AMD is awful.
On March 03 2017 04:17 ShoCkeyy wrote: I wish I could upgrade to AMD, but since I have to work in the Mac OS X environment, I'm stuck with an i7 in my hackintosh.
Intel isn't over. Far from it. What we can expect is cheaper CPUs.
I don't think they need to lower the price of any CPU except maybe X99. There is something inherently bad for gaming in the Ryzen architecture (it performs as well as or better than some X99 CPUs in applications but does much worse even in multi-threaded games), so the 7700K is still very good value. It performs worse than the 6900K/6950X, so if you need that performance and don't care about the price you go for those; and if you already have a 5820K / 5930K etc. you don't need to upgrade, and you have a more balanced CPU anyway.
I saw a lot of people on Reddit hating on Intel for their business practices, but AMD set up a nice smokescreen to make people believe Ryzen would perform almost as well as the competition in gaming while dominating in other tasks, and it was a lie. Seems like a dirty move toward consumers to me :o. It would have been better to say outright not to expect too much for gaming, but that would have sold fewer chips!
Edit: the top-end argument is a bit of a fallacy, since GPU performance improves very fast; in two or three years there will probably be a CPU bottleneck at 4K too?
I don't think so, because when you're buying a PC now you don't have a CPU bottleneck, and in 5-6 years when you build another PC, the performance of both units will have increased (probably by fairly similar amounts). As Intel, you're definitely not too worried.
Kaby Lake X is coming out in Q2; those should be, what, 15-20% faster IPC than Broadwell, and maybe 5-10% higher clocks.
I think the 6800K is priced fine; if they make a 6-core processor for $434 with a 25% performance improvement (25% is a bit higher than normal, but with a little pressure on them, why not), it'll match or beat the 1800X in everything.
The 8- and 10-core variants have always been super overpriced, and seeing the 8-core drop to, say, $800 and the 10-core to $1200 would be a lot more reasonable (though still super expensive for the performance). It just goes to show they have a straightforward strategy that doesn't cut much into their profits, without having to do things like make the high-end consumer CPUs 6-core or add Hyper-Threading to i5s... Even though I'd like to see it, they'd drive AMD out of the market with something like that.
The comparisons to the 6800K and 6900K are just a bit annoying, since those are processors on an aging architecture, and the 6900K especially is overpriced as fuck. Having to hear "oh look, it does about the same at half the cost" is so annoying; nobody buys that processor. It's like Costco comparing their clothes to some expensive brand-name stuff that nobody buys and saying "hey, look".
From what I've been reading and watching, it seems Ryzen has very poor OC headroom, so there won't be nearly as much extra performance to squeeze out compared to Intel.
Edit: I need to stop reading YouTube comments; they're filled with idiots who are probably looking at a CPU spec sheet for the first time.
That is so bad. There's an argument that since you're getting a top-end CPU you're probably gaming at higher resolutions, but still, the advice from AMD is awful.
Edit: the top-end argument is a bit of a fallacy, since GPU performance improves very fast; in two or three years there will probably be a CPU bottleneck at 4K too?
The main problem here is often missed. It's not that 4K somehow runs very differently on the CPU than 1080p does - the problem is that at 4K, in graphically demanding games, you run into something else in the system performing even worse than the CPU.
Turning your graphics from 1080p to 4K won't make your CPU do 100fps instead of 60fps - it'll just make the graphics card drop well below the CPU's 60fps limit, so that limit is no longer visible.
Example situation:
1080p: 65fps with CPU A (the Ryzen) or 80fps with CPU B (the Kaby Lake); the GPU can handle 120fps. The framerate you get is 65 or 80, so performance is clearly CPU-limited, with the faster CPU giving more FPS.
4K on the same hardware: the CPUs could still do 65fps and 80fps, but the GPU can only handle 30fps. Both systems run equally poorly because the GPU caps them both at 30fps.
The CPU places an FPS ceiling on the game, which changes with settings and the in-game situation, and in some games that ceiling is below what some people want. When the performance you demand is below the ceiling of both CPUs, there is no CPU limit and everything is fine. When it's above the ceiling of one or both CPUs, you see (and care about) the performance differences.
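The whole thing boils down to taking the min() of two ceilings (a minimal sketch of the mental model, using the numbers from the example above):

```python
# The framerate you see is capped by whichever component is slower.
def observed_fps(cpu_ceiling: int, gpu_ceiling: int) -> int:
    return min(cpu_ceiling, gpu_ceiling)

print(observed_fps(65, 120), observed_fps(80, 120))  # 1080p: 65 vs 80 -> CPU-limited, gap visible
print(observed_fps(65, 30), observed_fps(80, 30))    # 4K: 30 vs 30 -> GPU-limited, gap hidden
```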
I know? But when the GPU stops bottlenecking 4K, and (if?) 4K 144Hz displays are there or whatever, you could finally hit a CPU bottleneck at 4K as well. So saying "see, at 4K you get the same performance, and you're buying a high-end CPU so you're going to play at 4K, right? Don't look at the 1080p benchmarks, pretty please" is only valid as long as you want to play at 4K and 4K remains GPU-bound.
However, since we tend to upgrade the GPU more often than the CPU/mobo, and GPU performance progresses faster than CPU performance (the 1070 vs 970 gain is insane, whereas CPUs have progressed like 5-10% a year since Sandy Bridge), if this trend continues we could end up CPU-bottlenecked at 4K once GPUs can finally deliver good 4K performance.
But when the GPU stops bottlenecking 4K, and (if?) 4K 144Hz displays are there or whatever, you could finally hit a CPU bottleneck at 4K as well.
Yeah, you'll eventually get up to 65fps on the weak CPU and 80fps on the strong CPU and then be CPU limited again.
The thing is, saying that the CPU doesn't matter at 4K is not actually correct. It's more correct to say that the CPU doesn't matter much for 30fps but matters a lot more for 60fps or 90fps - that's what's actually being tested here.
Considering a 2600K runs almost every game at 1080p within 20% of a 7700K when using a Titan XP, I think a true CPU bottleneck at 4K will take a long, long time to reach. Heck, just take a look at this video:
Using a GTX 1080 at 1080p with a G4560 (a $64 CPU), more than half the games don't show a significant bottleneck. With a GTX 1080 at 1440p, the G4560 has no bottleneck in most games, minus a few outliers like super CPU-intensive games such as Civ.
You can see in that video that a GTX 1060 at 1440p creates practically no bottleneck in gaming, ever; we're very near the point where more performance at 1080p is unnecessary, since framerates are already so high. The point is, if a G4560 can fully handle a GTX 1060 at 1440p, it's going to be a long, long, long time until a 7700K-class processor gets bottlenecked at 4K... especially in the consumer price range.
Most of those games (outside of the first two) were well within 20% on average. GTA V and The Witcher 3 are definitely on the higher end of CPU usage, and even in those games they were showing gameplay that is very CPU-intensive compared to normal, so averages will be closer than that suggests. Not sure how much the clock speeds play a part, but anyway, I think it's fairly telling that older processors are able to handle themselves quite well even with the most modern graphics cards, which saw an absurdly high performance jump... at 1080p, no less.
Most of those games (outside of the first two) were well within 20% on average
Lots with >30% gains - two in that video (FC4 & GTA). Some of the tests showing less are partially or entirely GPU-limited.
That's what I meant when I said gaming, though. Naturally, for most parts of a game you're going to be GPU-limited, so the 2600K and 7700K perform reasonably similarly in games with a Titan X / 1080 Ti.
Also, at times they might be closer to 30%, but that's only at the peaks; once the gameplay slows down a bit they converge more. It might be a bit of a cop-out, but I believe I said average fps or something along those lines.
I quoted the avg FPS differences (30 and 33%). Quite a few games get >30% averages, and sometimes >40% when you use fast RAM on both platforms.
If you're GPU-limited at higher FPS than you want, then the CPU performance doesn't matter, because it's high enough and either option would be fine - perhaps even a weaker CPU.
The problem is when that performance is not high enough.
It's very rarely important if you want to play at 30fps, and sometimes important for people targeting 60fps (if you play WoW or SC2 on one of these CPUs, for example, a 7700K @ 5GHz will give a significantly better experience than an OC'd 2600K or a 1700 @ 4GHz). The higher the FPS you want to target, the more the CPU performance actually matters.
--
Some people (no one in particular, but a lot of the vocal people against CPU benches today) seem to be making the point that both CPUs manage X fps (with X being anything from 30 to 150+), and X fps is what they wanted or just happened to get with their GPU, therefore performance differences don't exist or don't matter - I think that's a terrible way to go about benchmarking.
The best things to look at, IMO, are:
#1 - What the performance difference is (is one CPU 10% faster than the other or 30% faster when both are limiting performance?)
and
#2 - Where that difference occurs (40fps vs 50fps is much more relevant than 220fps vs 280fps)
With those two pieces of data, anybody can make an educated, personal decision about which CPU they'd like to use, based on the games they play and the FPS they want to target in those specific games. If one or both pieces of information are missing, it's not possible to make as accurate a decision.
A benchmark of a game at 4K that achieves 33fps and caps neither CPU tells you neither what each CPU is capable of nor what the gap between the CPUs would be - it gives you neither of those two pieces of information. The useful data we get out of it is quite limited: both CPUs handled at least 33fps. A shocking number of these benchmarks have been posted today and it pains me to read each and every one of them~
The best way to get this data is to use flagship graphics hardware and drop the resolution (at least to 1080p) so that the framerate goes up and up until it hits a wall - if that wall is CPU-limited, you get both important pieces of data.
If it's not CPU-limited, either CPU will be fine, since even the weakest CPU could handle your very high performance level. The weakest CPU handling low performance doesn't tell us much, but the weakest CPU handling very high performance is very useful information.
This is pretty overcomplicated/ranty, just trying to explain my reasoning there
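If it helps, here's that decision process as a sketch (the game names and framerates are hypothetical, not real benchmark data):

```python
# Given CPU-limited framerates for two CPUs, report the gap (#1), where it
# occurs (#2), and check it against a personal FPS target.
def compare(game: str, fps_a: float, fps_b: float, target: float) -> None:
    gap = (fps_b - fps_a) / fps_a * 100
    # The gap only matters if your target sits above the slower CPU's ceiling.
    relevant = target > min(fps_a, fps_b)
    print(f"{game}: {fps_a} vs {fps_b} fps ({gap:.0f}% gap) - "
          f"{'relevant' if relevant else 'irrelevant'} at a {target}fps target")

compare("Game A", 40, 50, 60)     # 25% gap at low FPS - very relevant
compare("Game B", 220, 280, 144)  # 27% gap, but both clear the target easily
```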
If I were more interested in higher framerates than higher resolutions, would these benchmarks (Ryzen + Titan X Pascal at 1080p) be more relevant? I understand the whole "well, if you're spending THAT much on hardware, what are you doing at 1080p?", but at the moment I'm more interested in getting a 1080p 144Hz display than a 4K one. Pairing a Ryzen 5 with some mid-tier graphics card three years down the line (one that beats today's Titan X) doesn't sound all that far-fetched, and I think it's perfectly reasonable to want a CPU that won't be a bottleneck in just a few years.
I'm OK with 1080p60 for now, but I'd like a system where, if I want >60 FPS, I can simply upgrade my graphics card instead of swapping multiple components.
If I were more interested in higher framerates than higher resolutions, would these benchmarks (Ryzen + Titan X Pascal at 1080p) be more relevant?
Yeah - they're particularly relevant if you want to play at or above the FPS ranges achieved by the weaker CPU in the benchmark. The more FPS you want, the more relevant CPU performance is.
Graphics cards can do either high FPS on easy settings/resolutions or low FPS on hard settings/resolutions just by changing settings - 180fps at 1080p low and 25fps at 4K ultra on the same card is quite possible and common. That kind of scalability does not exist for CPUs: if a CPU manages 70fps, you're usually just stuck at 70fps or under, and hopefully that's the framerate you wanted to play at, because none of the options will change it much.
CPU performance is also at a relative standstill compared to GPU performance. We've got maybe +40% in 6 years between the 2600K and 7700K; in the same period the gap between the 680 and 1080 Ti is more like +300%. It's easy to overpower GPU-bound games by brute force 2-5 years later, but a 7700K still struggles at times with some older titles that were CPU-demanding in their day.
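Annualized, that gap is stark (just compounding the totals quoted above; the +40% and +300% figures are the rough estimates from this post):

```python
# Convert a total multi-year gain into a rough per-year growth rate.
def yearly_rate(total_gain: float, years: int) -> float:
    return (1 + total_gain) ** (1 / years) - 1

print(f"CPU: {yearly_rate(0.40, 6):.1%}/yr")  # 2600K -> 7700K: ~5.8%/yr
print(f"GPU: {yearly_rate(3.00, 6):.1%}/yr")  # 680 -> 1080 Ti: ~26.0%/yr
```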
The 1080 Ti releases today and offers basically the same performance as the Titan XP.
I think you know the answer: GPU cores are weaker than CPU cores. Also, older games are compiled with older CPU instructions.
In terms of whether higher core clocks or higher core counts will matter more for games in the coming years, I think I'd bet on higher core counts over higher clock speeds.
On March 06 2017 06:39 NovemberstOrm wrote: In terms of whether higher core clocks or higher core counts will matter more for games in the coming years, I think I'd bet on higher core counts over higher clock speeds.
ST performance is worth the same as MT performance with infinite parallelization
ST performance is worth an increasing amount relative to MT performance as parallelization drops
Programs with limited parallelization have a lot of trouble extracting anywhere near 1.5x more performance from 1.5x more cores, but a much wider range of programs can get 1.5x performance from the same number of cores running 1.5x faster.
2 cores to 4 scale well in today's games; some games scale okay to 6; scaling to 8 is poor, and threads 9 through 16 are doomed to near-uselessness.
That leads to 4c8t and 6c12t CPUs eating into the gains of 8c16t CPUs: games that scale well across many threads tend to get a huge benefit from SMT on a 4c8t CPU and a bit of one on 6c12t (utilizing up to ~6-8 threads), but an 8c16t CPU gets minimal if any benefit from SMT, because threads 9-16 are harder to reach and to scale from meaningfully without very high parallelization.
This has been gradually evolving over the last decade, and DX12/Vulkan have helped a little, but not all that much; we need a lot more.
People have been betting on core count over core speed since we invented multi-core CPUs, and they're still shocked every time it falls short (hi, Ryzen hype train), because they don't understand Amdahl's Law and similar scaling issues. This video is a really nice watch if you've got an hour some time:
For most CPU-heavy games you can (extremely roughly) consider the parallel fraction to be about 30-85% to fit the observed scaling numbers.
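To make that concrete (a minimal sketch using plain Amdahl's Law over the 30-85% range above; SMT and other real-world effects are ignored):

```python
# Speedup over a single core for typical game parallel fractions (Amdahl's Law).
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.30, 0.50, 0.85):
    row = ", ".join(f"{n}c {speedup(p, n):.2f}x" for n in (2, 4, 6, 8, 16))
    print(f"p={p:.0%}: {row}")
# p=85% gains little past 6-8 cores (8c -> 16c is only ~1.26x total),
# and p=30% is nearly flat from 4 cores onward.
```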