|
Hey,
I've started a thread on the Battle.net forums regarding the performance of SC2 (performance problems were discussed in a thread on TL.net here).
Maybe I'm arrogant for thinking that Blizzard would do something about it, but would you join in my arrogance, show support, and help the thread live long?
Here's the BN thread
|
From what I understand, effective parallelization is a huge amount of work and would provide minimal benefit on StarCraft's sequential real-time engine. The game depends on actions happening in the exact same order on every client, meaning basically none of the work done could be parallelized in a meaningful manner. The overhead required to distribute the work and keep every core in sync would exceed the amount of benefit you could obtain by doing so.
I believe they already offload as much as is practical, like UI and so on, but the game is ultimately limited by the main game loop. This is basically the reason very few games can utilize multiple processors effectively.
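For illustration, here's roughly the constraint being described (a made-up minimal sketch, not Blizzard's actual code): a deterministic lockstep loop in which every client applies the same commands and updates every unit in the same fixed order. Hand chunks of this loop to different cores and the clients stop agreeing on the game state.

```cpp
// Minimal lockstep sketch (illustrative; all names are hypothetical).
#include <vector>

struct Command { int playerId, unitId, order; };
struct Unit { int hp = 100; };
struct GameState { std::vector<Unit> units; };

// stand-in for the network layer: same commands arrive on every client
std::vector<Command> commandsForTick(int /*tick*/) { return {}; }

void applyCommand(GameState& s, const Command& c) {
    if (c.unitId >= 0 && c.unitId < (int)s.units.size())
        s.units[c.unitId].hp -= 1;  // placeholder effect
}

void simulateTick(GameState& s, int tick) {
    for (const Command& c : commandsForTick(tick))  // same commands...
        applyCommand(s, c);                         // ...in the same order
    for (Unit& u : s.units)                         // fixed iteration order;
        (void)u;                                    // AI/movement/combat here
}

int main() {
    GameState state{std::vector<Unit>(10)};
    for (int tick = 0; tick < 3; ++tick)
        simulateTick(state, tick);  // every client computes identical state
}
```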
|
Not going to ever happen.
Getting parallelization into the main game loop while preserving determinism would require a full engine rewrite, and it's theoretically difficult besides (though maybe doable if you were writing an RTS from the ground up today). Rewriting the engine so that it behaves exactly the same way as before sounds nigh impossible.
|
It wouldn't be an issue; huge datacenters are all massively scaled. I'm just working on one. Running this game in 200 threads wouldn't be an issue, but I don't think Blizzard will ever invest the money needed.
|
It would be nice if Blizzard improved performance, because its competitors (CS:GO and LoL) all run a lot smoother. I shouldn't need to pay, but if Blizzard offered a premium patch for $5-10, I'd be more willing to pay for that than for the commanders they sell.
|
In theory, it's technically possible. The technology is there. In reality, no fucking way. Money, dear boy.
|
CS, LoL: a few units. SC2: hundreds of units. You can't compare them.
-> no way +1
|
On June 17 2018 05:31 bela.mervado wrote: CS, LoL: a few units. SC2: hundreds of units. You can't compare them. -> no way +1
What about constant shooting and lots of people moving, calculating damage, etc?
|
On June 17 2018 06:17 sc-darkness wrote: What about constant shooting and lots of people moving, calculating damage, etc?
That's done in SC2 as well...
|
On June 17 2018 06:17 sc-darkness wrote: What about constant shooting and lots of people moving, calculating damage, etc?
That's not a big deal programmatically, which is why they can get by with server-authoritative models.
On June 17 2018 04:35 pvsnp wrote: In theory, it's technically possible. The technology is there. In reality, no fucking way. Money, dear boy.
It would be cheaper and easier to make SC3, most likely.
|
On June 17 2018 06:17 sc-darkness wrote: What about constant shooting and lots of people moving, calculating damage, etc?
With 10 players and 120 FPS, that can be handled on a 20-year-old CPU. Player models are controlled by humans; projectiles travel in a straight line or an easily computable trajectory.
In an RTS you have shitloads of units. When you have an SC2 4v4 with 8x 200/200 armies doing an A-click, the units have to find their way through the map, with pathfinding, collisions, etc. That is a way harder problem.
|
On June 17 2018 06:44 ZigguratOfUr wrote: That's not a big deal programmatically, which is why they can get by with server-authoritative models. It would be cheaper and easier to make SC3, most likely.
I for one would 100% rather write five brand new games, or a brand new game engine, than try to refactor SC2 with a parallel game loop. Just thinking about the possibility gives me a headache.
Dunno about the Blizzard devs, but I suspect they'd agree.
On June 17 2018 04:27 sc-darkness wrote: It would be nice if Blizzard improved performance, because its competitors (CS:GO and LoL) all run a lot smoother. I shouldn't need to pay, but if Blizzard offered a premium patch for $5-10, I'd be more willing to pay for that than for the commanders they sell.
You're gonna need a higher price tag than $5 or $10 to cover this one...
|
On June 17 2018 03:32 deacon.frost wrote: It wouldn't be an issue; huge datacenters are all massively scaled. I'm just working on one. Running this game in 200 threads wouldn't be an issue, but I don't think Blizzard will ever invest the money needed.
Huge datacenters do not run 1 program in 100 threads; they run 100 programs in 1 thread each (yes, even if only 1 executable is running, in essence it is still usually 100+ independent threads, each doing their own thing most of the time).
Running SC2 in 200 threads would be as good as slowing it down by a factor of 100 or something as ridiculous on any hardware you can think of, actually.
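To make the overhead point concrete, here's a toy microbenchmark (everything about it is made up; nothing SC2-specific): each thread does a trivially small piece of "game work" per step and then waits for everyone else, the way a lockstep simulation step would. As the thread count grows, the time goes almost entirely to synchronization rather than work.

```cpp
// Toy demonstration of lockstep synchronization overhead (C++20).
#include <barrier>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    constexpr int kThreads = 8;    // try larger counts and watch it degrade
    constexpr int kSteps = 10000;  // "simulation steps"
    std::barrier sync(kThreads);

    auto worker = [&] {
        for (int s = 0; s < kSteps; ++s) {
            volatile int work = s * 3;  // trivially small unit of "game work"
            (void)work;
            sync.arrive_and_wait();     // lockstep: everyone waits for everyone
        }
    };

    auto t0 = std::chrono::steady_clock::now();
    {
        std::vector<std::jthread> pool;
        for (int i = 0; i < kThreads; ++i) pool.emplace_back(worker);
    }  // jthreads join here
    auto t1 = std::chrono::steady_clock::now();
    std::printf("%lld ms, almost all of it synchronization\n",
                (long long)std::chrono::duration_cast<
                    std::chrono::milliseconds>(t1 - t0).count());
}
```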
|
SC2 has had performance issues since patch 3.0, but no one really cares about that.
|
Never had problems with performance in SC2, except maybe 4v4 when like 3 players massed carriers. I play on Medium settings... and I'm between 100-200 FPS.
|
Imo this isn't Blizzard's fault. When they started development on SC2, even dual core was new and a lot of games didn't see any benefit from it. The trend in CPU development came to favor more cores over fewer, faster cores, and Blizzard did not have a crystal ball in 2006-07.
And as others have said, rewriting a core game engine is not trivial.
|
I remember there being rumors of improved multi-core performance coming with Heart of the Swarm. Then that came and went and there wasn't a mention of it. I do agree that it is unfortunate that big team battles still lag on fairly modern PCs.
|
Hmm... I used to be able to play it fine as well, but now I have to upgrade to continue playing. I guess others are experiencing the same issue.
|
On June 17 2018 08:30 Divain wrote: SC2 has had performance issues since patch 3.0, but no one really cares about that.
Yeah this exactly.
It ran perfectly when it came out in 2010 (IIRC) on my shitty Dell.
I've bought a new computer now, but when I try to run it on my old computer it barely runs on low settings. Definitely some bad optimization; even the menus feel heavy and laggy. Compare that to Dota, where you can literally minimize the game you're in to look at stats while still in the .exe... Night and day.
Even in the remastered version of BW, the menus lag and are badly connected to the servers (it's difficult to read server data, e.g. MMR).
I wish Activision Blizzard could make a game run smooth with their ≈20 billion USD valuation. Just a thought.
Pray for Warcraft 4
|
On June 17 2018 08:30 Divain wrote: SC2 has had performance issues since patch 3.0, but no one really cares about that.
Yup, ever since 3.0 stuff went down.
|
On June 17 2018 12:28 crbox wrote: Yeah this exactly. It ran perfectly when it came out in 2010 (IIRC) on my shitty Dell. ... I wish Activision Blizzard could make a game run smooth with their ≈20 billion USD valuation. Just a thought. Pray for Warcraft 4
Software is expected to take more resources with each new feature. Having said that, it's not an excuse for Blizzard to limit the game to 2 cores. I have an i7-5820K and the game is laggy in 3v3 and 4v4 when there are a lot of air armies.
|
On June 17 2018 08:11 lolfail9001 wrote: Huge datacenters do not run 1 program in 100 threads; they run 100 programs in 1 thread each (yes, even if only 1 executable is running, in essence it is still usually 100+ independent threads, each doing their own thing most of the time). Running SC2 in 200 threads would be as good as slowing it down by a factor of 100 or something as ridiculous on any hardware you can think of, actually.
Not exactly; for every operation there's a new thread launched in every instance. 200 was just an example to show that it's possible at any scale.
|
On June 17 2018 18:40 deacon.frost wrote: Not exactly; for every operation there's a new thread launched in every instance. 200 was just an example to show that it's possible at any scale.
Just because some tasks can be parallelized effectively doesn't mean all tasks can. The first two posts by Kalera and Ziggurat are correct. Determinism is the problem. SC2 uses the lockstep protocol, which requires that the engine be perfectly deterministic down to every single bit. Even doing simple mathematical operations in different orders will lead to different results due to rounding errors, so parallelizing something like SC2 won't happen.
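That rounding point is easy to demonstrate (a toy example): floating-point addition isn't associative, so summing the same numbers in a different order, as a parallel reduction would, changes the result, and one different bit is all it takes to desync a lockstep game.

```cpp
// Floating-point addition is not associative.
#include <cstdio>

int main() {
    float a = 1e8f, b = -1e8f, c = 1e-3f;
    float left  = (a + b) + c;  // 0.001
    float right = a + (b + c);  // 0.0 -- c is absorbed by the huge b
    std::printf("%.9g vs %.9g\n", left, right);
}
```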
|
On June 17 2018 12:28 crbox wrote: Yeah this exactly. It ran perfectly when it came out in 2010 (IIRC) on my shitty Dell. ... I wish Activision Blizzard could make a game run smooth with their ≈20 billion USD valuation. Just a thought. Pray for Warcraft 4
They won't ever sell an RTS based on the WarCraft lore when an MMO = more ppl = more $$$$$$.
Also, idk, for me it's like every second complaint about performance is usually because something is bad on the user's end, not the game's end.
|
SC2 performance has gone steadily downhill over the course of the game. Computers that could run SC2 maxed out in WoL have to play pretty much on the lowest settings now to get the same FPS as back then. There really is no excuse for the way performance has gone to hell. There have been several patches that made people lose 20-30 FPS from one day to the next...
|
For people experiencing issues, you can try running the 32-bit client. I remember when they first released 64-bit (I think it was patch 3.0), it had considerably worse performance than the 32-bit client. I don't know if that's still true because I have a new PC now and haven't had the need to change it, but it's worth a try.
You can change it from the launcher, under settings.
|
On June 18 2018 01:02 jimminy_kriket wrote: For people experiencing issues, you can try running the 32-bit client. I remember when they first released 64-bit (I think it was patch 3.0), it had considerably worse performance than the 32-bit client. I don't know if that's still true because I have a new PC now and haven't had the need to change it, but it's worth a try. You can change it from the launcher, under settings.
Has anyone tried this? As a software engineer, I don't see this working. 32-bit vs 64-bit is just about memory, not CPU performance.
|
Just tested it because I was curious. Played a ~16 minute replay at 8x speed on both the 32-bit and 64-bit clients.
2018-06-17 12:00:34 - SC2 Frames: 5947 - Time: 131110ms - Avg: 45.359 - Min: 9 - Max: 139
2018-06-17 12:04:31 - SC2_x64 Frames: 5657 - Time: 132906ms - Avg: 42.564 - Min: 8 - Max: 146
32-bit had a slight advantage in overall FPS, but nothing killer. At non-8x speed that might amount to more than a 3 FPS gain, but I wouldn't expect a miracle.
|
On June 18 2018 02:13 jimminy_kriket wrote: Just tested it because I was curious. Played a ~16 minute replay at 8x speed on both the 32-bit and 64-bit clients.
2018-06-17 12:00:34 - SC2 Frames: 5947 - Time: 131110ms - Avg: 45.359 - Min: 9 - Max: 139
2018-06-17 12:04:31 - SC2_x64 Frames: 5657 - Time: 132906ms - Avg: 42.564 - Min: 8 - Max: 146
32-bit had a slight advantage in overall FPS, but nothing killer. At non-8x speed that might amount to more than a 3 FPS gain, but I wouldn't expect a miracle.
The big thing is probably how deep it dips in big real-time fights. It doesn't matter too much whether the game puts out 250 or 300 FPS during the early game, as long as the big fights are smooth enough.
|
I'm a bit of a computer science novice, but my understanding was that game engines are a standard example of a computational task that you can't parallelize. Parallelization works when you're doing a large number of unrelated computations (e.g. take a 1000-member integer array and add 1 to each member), but not when the result of each calculation depends on the outcome of the previous one (say, you add 1 to each member preceded by an even number, but subtract 1 from each member preceded by an odd number); then you can't split up the calculations. If you tried, the output would depend on which core finished its job first.
So determining what color each pixel on your screen should be is a perfect task for parallelization, because the color of a given pixel doesn't depend on the color of any of the other pixels in that frame. That's why your video card can split it up between 1000 cores or whatever and do calculations for a million pixels 60 times a second or whatever. But a game engine has to iterate through every object one by one and determine its behavior for that frame. Since objects can interact with each other, you can't split up the list of objects and assign them to different cores. Imagine if your army got assigned to a slower core, and the opponent's army got assigned to a faster core, so his army always got their last shot off on the frame they died, and your army never did!
Sorry if I'm getting didactic or stating the obvious or w/e, but I don't understand how people can say "just make it use all available cores." There's a reason people tell you not to bother with lots of cores or hyperthreading CPUs when you're building a gaming PC; games can't use them. That's also why game engines and physics simulations are always CPU intensive instead of GPU intensive; CPUs are optimized to execute one or a few threads really quickly, while GPUs are optimized for splitting tasks between hundreds or thousands of cores.
Is there some recent development in this space I'm unaware of? Some advanced collision detection algorithm or something that lets you quickly identify groups of objects that can't affect each other, and assign them to different cores? Because otherwise I don't understand why this would even be in the category of "theoretically possible, but prohibitively expensive." It's just not a parallelizable task. That doesn't mean they couldn't make optimizations to get that single thread running more quickly, of course, but complaints like "why is this only using one core, such bad optimization" don't make sense to me at all.
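For what it's worth, the contrast I have in mind looks something like this in code (a toy example):

```cpp
// Independent vs. dependent updates (toy example).
#include <cstddef>
#include <vector>

void independent(std::vector<int>& v) {
    // each element depends only on itself -- safe to split across cores
    for (int& x : v) x += 1;
}

void dependent(std::vector<int>& v) {
    // each element depends on its (already updated) predecessor, so the
    // iteration order is part of the answer -- this has to run serially
    for (std::size_t i = 1; i < v.size(); ++i)
        v[i] += (v[i - 1] % 2 == 0) ? 1 : -1;
}
```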
|
On June 18 2018 02:27 ChristianS wrote: I'm a bit of a computer science novice, but my understanding was that game engines are a standard example of a computational task that you can't parallelize. ... Is there some recent development in this space I'm unaware of? Some advanced collision detection algorithm or something that lets you quickly identify groups of objects that can't affect each other, and assign them to different cores?
If you're very clever right from the beginning about designing the entity-component system, you can divide things in a way that preserves the determinism of the game engine yet allows for parallelism. Deserts of Kharak offloads some of the pathfinding onto a second thread, IIRC, though the developers decided against going the whole way with parallelism despite it seeming possible at some stage in development.
Needless to say, SCII was built in an era when no one was thinking about this, and transforming the game engine to use parallelism might well be close to impossible.
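For the curious, the general shape of that trick might look like this (a made-up sketch, not Deserts of Kharak's actual code; every name is hypothetical): path requests are computed on a worker thread, but the results are only consumed at a fixed later tick, so every client applies them at the same point in the simulation.

```cpp
// Deterministic pathfinding offload: fixed-latency results (sketch).
#include <future>
#include <queue>
#include <vector>

struct Path { std::vector<int> waypoints; };

// expensive search; deterministic given the same inputs
Path computePath(int unitId, int goal) { return Path{{unitId, goal}}; }

struct PendingPath {
    int readyTick;             // tick at which the sim may use the result
    std::future<Path> result;
};

std::queue<PendingPath> pending;

void requestPath(int currentTick, int unitId, int goal) {
    // fixed latency, identical on every client
    pending.push({currentTick + 4,
                  std::async(std::launch::async, computePath, unitId, goal)});
}

void simTick(int currentTick) {
    while (!pending.empty() && pending.front().readyTick <= currentTick) {
        Path p = pending.front().result.get();  // blocks if the worker is late
        pending.pop();
        (void)p;  // apply to the unit: same tick, same order, on every client
    }
    // ...rest of the deterministic update...
}
```

The key design choice is that the main thread never uses a result "as soon as it's ready"; it always waits for the agreed tick, so thread timing can't leak into the simulation.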
|
Since I play with [Physics off], SC2 runs very smoothly on my AMD X6 1055T 2.8GHz (2010). I didn't try 3v3 or 4v4 though.
|
On June 18 2018 02:49 ZigguratOfUr wrote: If you're very clever right from the beginning about designing the entity-component system, you can divide things in a way that preserves the determinism of the game engine yet allows for parallelism. Deserts of Kharak offloads some of the pathfinding onto a second thread, IIRC, though the developers decided against going the whole way with parallelism despite it seeming possible at some stage in development. Needless to say, SCII was built in an era when no one was thinking about this, and transforming the game engine to use parallelism might well be close to impossible.
Interesting. I'm trying to imagine how much SC2 could benefit even if you found an effective way to identify groups of objects that couldn't affect each other. I mean, you could imagine the workers at your bases, your production buildings, etc. might sometimes be safely offloaded from the thread handling the main army (but not all the time, if there's ever any question of whether a building completed a unit before dying, whether a worker finished mining or returning a mineral before dying, or whether an army unit died and freed up supply in time to allow a production building to start the next unit). Or if you've got one fight at the bottom left of the map and the other at the top right, you could maybe split those.
But unless I'm misunderstanding, the worst case scenario isn't gonna be any better anyway. If there's a big 4v4 200/200 fight at the center of the map, all those units are gonna have to be on the same thread.
|
On June 18 2018 02:49 ZigguratOfUr wrote: If you're very clever right from the beginning about designing the entity-component system, you can divide things in a way that preserves the determinism of the game engine yet allows for parallelism. ... Needless to say, SCII was built in an era when no one was thinking about this, and transforming the game engine to use parallelism might well be close to impossible.
I mainly use Unity for game development, and after reading about their entity-component system I have COMPLETE empathy for Blizzard for not changing. I have a lot of patterns and ideas I've set up in Unity, and while entity-component looks SUPER efficient and works with the data model I'm already using, it is a different system entirely and I'm shying away from it for now. It's easier to try to make my older code more efficient, though it would probably be worth it personally to invest the time.
But yeah, I'm not really adding anything new to this thread outside of saying that indie devs have the same issue as Blizzard, and seconding everyone who has said it would take Blizzard a TON of time, effort, and money to remake the engine to run better, when they could probably make a sequel at that point. Or a different IP entirely if they want.
|
On June 17 2018 18:40 deacon.frost wrote: Not exactly; for every operation there's a new thread launched in every instance. 200 was just an example to show that it's possible at any scale.
Because that new thread does not need to ask the old threads for data for every operation it does. SC2, unfortunately, was made with determinism worthy of late-18th-century physicists, leaving it stuck on basically 2 threads.
And yeah, to peeps complaining about performance: I can tell you for sure that it was as slow in 2012 as it was in 2013 and 2015 on the same settings, because I actually played it on the same hardware for that period. If you up the settings (and every major patch did add a serious FPS sink to the higher presets), it gets worse, of course.
|
I just want the South American server back; 1 sec delay is no fun.
|
On June 18 2018 01:06 sc-darkness wrote: Has anyone tried this? As a software engineer, I don't see this working. 32-bit vs 64-bit is just about memory, not CPU performance.
On June 18 2018 02:13 jimminy_kriket wrote: Just tested it because I was curious. Played a ~16 minute replay at 8x speed on both the 32-bit and 64-bit clients.
2018-06-17 12:00:34 - SC2 Frames: 5947 - Time: 131110ms - Avg: 45.359 - Min: 9 - Max: 139
2018-06-17 12:04:31 - SC2_x64 Frames: 5657 - Time: 132906ms - Avg: 42.564 - Min: 8 - Max: 146
32-bit had a slight advantage in overall FPS, but nothing killer. At non-8x speed that might amount to more than a 3 FPS gain, but I wouldn't expect a miracle.
Back in the 3.0 beta, 32-bit had, IIRC, double-digit % performance gains over 64-bit (~+12%?). Can't say why; that's just how it was benchmarking.
Your bench would show ~6%. There are some more complicated ways to get a benchmark in SC2 that is statistically very solid.
And yeah, to peeps complaining about performance: I can tell you for sure that it was as slow in 2012 as it was in 2013 and 2015 on the same settings, because I actually played it on the same hardware for that period.
Did you run regular, repeatable, and accurate benchmarks, or is that just your feeling? I showed significant losses on the same hardware and settings, especially through the 3.0 patch with the 64-bit client.
|
On June 18 2018 02:50 Dingodile wrote: Since I play with [Physics off], SC2 runs very smoothly on my AMD X6 1055T 2.8GHz (2010). I didn't try 3v3 or 4v4 though.
You have poor standards for "very smoothly" my friend ;D
|
On June 18 2018 03:07 ChristianS wrote: Interesting. I'm trying to imagine how much SC2 could benefit even if you found an effective way to identify groups of objects that couldn't affect each other. I mean, you could imagine the workers at your bases, your production buildings, etc. might sometimes be safely offloaded from the thread handling the main army (but not all the time, if there's ever any question of whether a building completed a unit before dying, whether a worker finished mining or returning a mineral before dying, or whether an army unit died and freed up supply in time to allow a production building to start the next unit). Or if you've got one fight at the bottom left of the map and the other at the top right, you could maybe split those. But unless I'm misunderstanding, the worst-case scenario isn't gonna be any better anyway. If there's a big 4v4 200/200 fight at the center of the map, all those units are gonna have to be on the same thread.
You wouldn't split units/buildings among different threads; you'd split the different components of the units among different threads, so even a 200/200 fight would be better in theory.
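Something like this, say (a hypothetical sketch, nobody's actual engine): each phase updates one component for all units in a fixed order, and two phases that touch disjoint data can run on different cores without changing the result.

```cpp
// Component-phase parallelism (hypothetical sketch).
#include <functional>
#include <thread>
#include <vector>

struct Movement { float x = 0, y = 0, vx = 1, vy = 1; };
struct Cooldown { float remaining = 5; };

struct World {
    std::vector<Movement> movement;  // one entry per unit
    std::vector<Cooldown> cooldown;  // one entry per unit
};

// These two phases touch disjoint data, so running them concurrently
// cannot change the outcome -- each keeps its own fixed iteration order.
void integrate(World& w, float dt) {
    for (auto& m : w.movement) { m.x += m.vx * dt; m.y += m.vy * dt; }
}
void tickCooldowns(World& w, float dt) {
    for (auto& c : w.cooldown)
        c.remaining = c.remaining > dt ? c.remaining - dt : 0.f;
}

void simTick(World& w, float dt) {
    std::thread t(integrate, std::ref(w), dt);  // phase A on another core
    tickCooldowns(w, dt);                       // phase B on this core
    t.join();
    // phases whose data DOES interact (targeting, damage) still run serially
}

int main() {
    World w{std::vector<Movement>(200), std::vector<Cooldown>(200)};
    for (int tick = 0; tick < 10; ++tick) simTick(w, 1.f / 16);
}
```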
|
On June 18 2018 06:46 Cyro wrote: Did you run regular, repeatable, and accurate benchmarks, or is that just your feeling? I showed significant losses on the same hardware and settings, especially through the 3.0 patch with the 64-bit client.
Not quite benchmarks, but certainly not a feeling, since I always run it with the FPS counter on and it pretty much had the same range of numbers in games all the time. Do note that I never used the 64-bit client in that period. And yes, I deliberately said slow because it was slow indeed.
|
On June 18 2018 07:41 lolfail9001 wrote: Not quite benchmarks, but certainly not a feeling, since I always run it with the FPS counter on and it pretty much had the same range of numbers in games all the time. Do note that I never used the 64-bit client in that period. And yes, I deliberately said slow because it was slow indeed.
Feelings and memories are not that reliable. You should note that even the map used can change performance by 10-20%+, and we don't play on the old maps any more. Blizzard has poor and inconsistent standards for performance on their new maps and campaign maps.
|
This talk about "parallelization" is a distraction.
The fact is, there are and always will be ways to optimize any game engine you can think of, especially one the sheer size of StarCraft 2's. So why don't they optimize it, you may ask? Because Blizzard employees don't work for free. Their employers do not instruct them to work on this particular task, and that's that. They've decided it's not worth the cost, or that they have more important things to do. They've decided it's not worth paying at least one of their software developers to spend a few days looking through the most garbled parts of the C++ code in their game engine and tweaking it enough that little Billy's computer could manage an average of 60 FPS instead of 50 FPS.
|
On June 18 2018 11:11 Lazare1969 wrote: This talk about "parallelization" is a distraction.
The fact is, there are and always will be ways to optimize any game engine you can think of, especially one the sheer size of StarCraft 2's. ... They've decided it's not worth paying at least one of their software developers to spend a few days looking through the most garbled parts of the C++ code in their game engine and tweaking it enough that little Billy's computer could manage an average of 60 FPS instead of 50 FPS.
I'm glad to know that on these forums we have "experts" with insider knowledge of Blizzard's codebase who can assure us that Blizzard hasn't done trivial optimizations that would yield significant performance improvements.
|
Nice straw man, but developer time and maintenance support are limited and completely determined by the company. Companies allocate it until their software meets a certain standard they have set. These random words you throw around about "experts", "insider knowledge", "trivial", etc. add nothing of substance, just the sound of someone being irrationally butthurt.
If you want to see a game engine that's been continually improved for many years, check out Valve's Source engine, ranging from games like Half-Life 2 (2004) to Counter-Strike: Global Offensive (2012) to Titanfall 2 (2016). It is quite a feat to have an engine with a rendering API able to scale from DirectX 8 to DirectX 11 hardware (technically it's mostly DX9 features with DX11 rendering for performance enhancements) plus OpenGL for Mac and Linux, as well as tacking on multithreading support in 2006. When a company feels it is worth investing developer time in improving and optimizing a game engine, it certainly can be done.
|
Damn, I didn't realise they used Source on Titanfall 2. What a wonderful engine.
|
On June 19 2018 01:25 Lazare1969 wrote: Nice straw man, but developer time and maintenance support are limited and completely determined by the company. ... When a company feels it is worth investing developer time in improving and optimizing a game engine, it certainly can be done.
You mean I should just have called you full of shit, since you have no idea how much time Blizzard spends optimizing the engine, and are inventing figures when you claim that a few days' work would yield an improvement from 50 FPS to 60 FPS?
|