The first empirical study into StarCraft 2 expertise (here) was the largest study of expert human performance to date. Thanks to the participation of thousands of SC2 players in this study, we have expanded our knowledge of human skill acquisition, and we believe that the results, when published, will go a long way toward convincing the scientific community that StarCraft 2 is a genuine domain of expertise. The project received positive press coverage from Scientific American, the Wall Street Journal, and from game-oriented media such as TeamLiquid. It also spawned a segment on the science show Daily Planet on the Discovery Channel (Canada) featuring none other than the incomparable Liquid`TLO. We can’t release our results before publication, but we can summarize some of the findings we thought would be most interesting to players. We do so below, after an all-important solicitation for participation in an even more important study.
While our first project was trail-blazing in a number of ways, our second StarCraft 2 project will dwarf the first in both the quantity and quality of data collected. We believe the project has the potential to profoundly change how skill learning and expertise are studied. The key to the project is that it involves analyzing changes in individual players across time. What we need are replay packs from players who have saved most (>75%) of their replays: as long as the missing replays are few, any sizable collection (300+) helps. That may be three hundred, a couple thousand, or even tens of thousands (White-Ra!). The scientific community simply does not have access to similar data on learning in a complex game like StarCraft 2. It will show us when individual players learn the necessary skills. We will also see to what extent certain skills can compensate for others, and whether certain skills are always learned before others. Finally, and perhaps most importantly, we’ll have much better ideas about how to actually help StarCraft 2 players improve.
If you haven’t saved your replays, please help spread the word to players who might have.
Interesting data from our project so far...
APM versus The Perception Action Cycle
In daily life, we use our attention to focus on objects we want to interact with. That is, looking is for doing. StarCraft is no different, and thus the vast majority of actions happen as part of what we call a “Perception Action Cycle” (PAC). A PAC basically consists of a shift of the screen to a new location for some time, followed by at least one action (typically 4-6), and then a shift to some other location. The delay to the first action in a PAC turns out to be one of the best predictors across all leagues, and the best in certain leagues (beating out the venerable APM, which, despite its faults, is a good predictor of league). Other components of the PAC are also important predictors of skill.
Does Variable Importance Change with Skill?
Our primary research question was: to what extent does the importance (i.e. predictive power) of variables change across levels of skill? If it changes more than a little, that is a difficult problem for researchers, because most of the time it’s not feasible to collect detailed data across many levels of experience, and the alternatives (for example, collecting data just from experts and novices) are less useful when variables frequently change in importance. We used a machine learning technique (random forests of conditional inference trees) to build classifiers for each league compared to the one two leagues up (Bronze vs. Gold, Silver vs. Platinum, etc.).
This figure describes the ranked importance of each of our key variables in these “skip-league” classifiers. APM was ranked first, for instance, in the Bronze vs Gold classifier, implying that it was our most useful variable for distinguishing those leagues.
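To give a feel for how such a ranking is produced: the study used random forests of conditional inference trees (an R technique); the sketch below substitutes scikit-learn's plain RandomForestClassifier, and the "Bronze" and "Gold" samples are entirely synthetic, invented just for illustration.

```python
# Illustrative stand-in for a "skip-league" classifier. The real study
# used conditional inference forests (R's party package); this uses
# scikit-learn's RandomForestClassifier on made-up data instead.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["APM", "ActionLatency", "NumberOfPACs", "WorkersMade"]

# Fake Bronze vs. Gold samples: the fake Gold players get higher APM
# and lower action latency on average, purely for demonstration.
bronze = np.column_stack([rng.normal(60, 15, 200),    # APM
                          rng.normal(90, 20, 200),    # action latency
                          rng.normal(20, 5, 200),     # PACs per minute
                          rng.normal(0.8, 0.2, 200)]) # workers per minute
gold = np.column_stack([rng.normal(100, 15, 200),
                        rng.normal(60, 20, 200),
                        rng.normal(28, 5, 200),
                        rng.normal(1.1, 0.2, 200)])
X = np.vstack([bronze, gold])
y = np.array([0] * 200 + [1] * 200)  # 0 = Bronze, 1 = Gold

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank variables by importance, mirroring the figure described above.
ranking = sorted(zip(features, clf.feature_importances_),
                 key=lambda fv: fv[1], reverse=True)
for name, imp in ranking:
    print(f"{name:15s} {imp:.3f}")
```

With the separations chosen above, APM comes out on top; the interesting question in the real data is how that ranking shifts as you move up the league ladder.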
Settings Data
Ever wonder how other people have their options set? So did we. The following graphs are results from our original survey about player settings. Note: we did not get surveys from pro players, so the top group we report is GMs. The following figures describe how players set their game settings in each league. For example, roughly 60% of GMs set flyer status (red lines showing the location over which a flyer hovers) to “always on.” Unit status bars refer to unit health bars, and alerts (obviously) refer to game alerts.
Hotkey Data
These data show the assignment of hotkeys by pros, which is interesting when compared to Bronze hotkeys. We have two different kinds of figure here. The first figure describes the frequency of Bronze and Pro players’ hotkey assignments per game (broken down by the type of unit being assigned). Note how much more often the Pros assign hotkeys.
The following figures describe how often particular unit types are assigned to particular hotkeys by professional players. For example, professional players in our sample assigned almost two production structures to hotkey 4 per game.
GGs
Finally, just for kicks, we scanned the chat of each game for gg or any common variant. If you want to be in GM.
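For the curious, a gg scan like this is easy to sketch. The exact variant list we matched isn't reproduced here, so the regex below is just an illustration, not the one from the study.

```python
# Sketch of scanning a game's chat lines for "gg" or a common variant.
# The variant list here is illustrative only.
import re

GG_RE = re.compile(r"^\s*(g\.?g\.?|gg|ggwp|gg\s*wp|wp\s*gg)\s*$",
                   re.IGNORECASE)

def said_gg(chat_lines):
    """True if any chat line is a gg (or a close variant)."""
    return any(GG_RE.match(line) for line in chat_lines)
```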
Thanks
The participation of players such as yourselves makes our research possible. Thank you.
Questions
I realize this was brief (probably too science-y for some, and not enough for others). I’ll keep an eye on the comments to this thread and answer any questions that I can. I’ll also post the link to the actual scientific paper (which has much more detail about the specific analyses) once it’s available online.
Is it too late to submit your paper for "peer review" to quantitatively-inclined members of the Starcraft community? I'd be happy to review it.
Meanwhile, some questions:
How do you define Action Latency?
Is Number of PACs literally that, or Number of PACs per unit time?
Is Workers Made literally that, or Workers Made per unit time?
In your analysis do you restrict yourself to ladder games? What other restrictions/filters do you impose, such as game length?
Would you be OK with software such as GGTracker incorporating analysis of PACs along the lines of your research? If so it would be helpful for you to describe your algorithm for PACs.
On March 05 2013 23:34 dsjoerg wrote: Is it too late to submit your paper for "peer review" to quantitatively-inclined members of the Starcraft community? I'd be happy to review it.
A revision of the paper is already under review; we're just awaiting a decision. If it doesn't get in at that journal, we'll submit to another.
[*] How do you define Action Latency?
It's the time between the start of a PAC (a new fixation, based on the dispersion threshold algorithm) and the time of the first action.
A YouTube description of PACs can be found here:
[*] Is Number of PACs literally that, or Number of PACs per unit time?
Per unit time. There were not big differences in game length based on league, so it's basically the same for our purposes.
[*] Is Workers Made literally that, or Workers Made per unit time?
Again, per minute.
[*] In your analysis do you restrict yourself to ladder games? What other restrictions/filters do you impose, such as game length?
Yes: 1v1 ladder games.
[*] Would you be OK with software such as GGTracker incorporating analysis of PACs along the lines of your research? If so it would be helpful for you to describe your algorithm for PACs.
Yes, of course. We're grateful for the community support and glad to share. Special shoutout to DakotaFanning and the SC2 Gears project, it was very helpful.
Basically, Salvucci & Goldberg (2000). The thresholds we used were 20 timestamps / 6 map units. PACs are fixations with at least one action. You have to actually parse the replay, though, to get all the actions and screen moves.
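In case it helps tool authors, here's a rough Python sketch of that dispersion-threshold (I-DT) idea. This is not our actual implementation; the (time, x, y) camera-event format and the action-time list are assumptions about what your replay parser produces, and the default thresholds mirror the 20-timestamp / 6-map-unit values above.

```python
# Sketch of I-DT fixation detection plus PAC extraction. Camera events
# are assumed to be time-ordered (time, x, y) tuples from a parser.

def dispersion(points):
    """Bounding-box dispersion of a window: (max x - min x) + (max y - min y)."""
    xs = [p[1] for p in points]
    ys = [p[2] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def fixations(camera, max_dispersion=6.0, min_duration=20):
    """Return (start_time, end_time, center_x, center_y) fixations by
    growing a window while its dispersion stays under threshold."""
    out, i, n = [], 0, len(camera)
    while i < n:
        j = i + 1
        # expand the window while dispersion stays under threshold
        while j < n and dispersion(camera[i:j + 1]) <= max_dispersion:
            j += 1
        window = camera[i:j]
        if window[-1][0] - window[0][0] >= min_duration:
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            out.append((window[0][0], window[-1][0],
                        sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j          # start the next window after this fixation
        else:
            i += 1         # too short to be a fixation; slide forward
    return out

def pacs(camera, action_times, **kw):
    """A PAC is a fixation containing at least one action; action
    latency is the delay from fixation start to its first action."""
    result = []
    for start, end, cx, cy in fixations(camera, **kw):
        acts = [t for t in action_times if start <= t <= end]
        if acts:
            result.append({"start": start, "end": end,
                           "latency": acts[0] - start,
                           "n_actions": len(acts)})
    return result
```

The per-fixation "latency" field is the Action Latency measure discussed above, and counting the returned PACs per minute gives the Number of PACs variable.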
On March 06 2013 04:51 CountChocula wrote: I'm curious how you measured the "unique hotkeys" category means in the variable importance table. Could you explain?
This is the number of different keys used for custom hotkeys. It goes from zero to 10 (keys 1 through 0). Sometimes high-level players spam hotkeys and put their nexus on all 10, but it's not that common overall.
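As a minimal sketch of the measure: you count distinct keys that ever receive a custom assignment. The (action, key) event format here is an assumption for illustration, not our parser's actual output.

```python
# Count how many distinct keys (1-0) a player ever assigns a custom
# control group to. Event format is assumed: (action, key) pairs where
# action is "assign" or "select" and key is "1".."9" or "0".
def unique_hotkeys(events):
    return len({key for action, key in events if action == "assign"})
```

Note that repeated assigns to the same key (the "spam" case above) don't inflate the count; only distinct keys matter.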
One small note - I don't know if it's because the sun is glaring in from my window right now, but white text on almost teal background is a little hard to read. Sorry for being a bit picky.
Congrats on your paper, Crush. A colleague and I actually met with you to learn more about your work for an exhibition. Sorry that didn't pan out. But it's really exciting to see some of your findings!
On March 06 2013 05:14 JinDesu wrote: One small note - I don't know if it's because the sun is glaring in from my window right now, but white text on almost teal background is a little hard to read. Sorry for being a bit picky.
I did the layout for this and agree it's not great. I might change it to yellow or something; I'll try to fix it today.
The results are pretty cool, looking forward to seeing it published.
wow, incredible post. So much research, very interesting. I use 7 for my evo chambers too, didn't know it was the most common. Also, in the "gg chart" you inverted Platinum and Diamond.
edit: funny how masters has the lowest gg ratio. Doesn't surprise me, I used to get bm'd way more back in the days in master league.
On March 06 2013 05:25 crbox wrote: wow incredible post. So much research, very interesting. I use 7 for my evo chambers too, didn't know it was the most common. Also in the "gg chart" you inverted platinium and diamond
edit: funny how masters has the lowest gg ratio. Doesn't surprise me, I used to get bm'd way more back in the days in master league.
I assumed better players would know how to play a manner game. Guess not O.o
On March 06 2013 05:25 crbox wrote: wow incredible post. So much research, very interesting. I use 7 for my evo chambers too, didn't know it was the most common. Also in the "gg chart" you inverted platinium and diamond
edit: funny how masters has the lowest gg ratio. Doesn't surprise me, I used to get bm'd way more back in the days in master league.
I assumed better players would know how to play a manner game. Guess not O.o
well, most GM players are mannered tbh. I think it has something to do with the frustration of not being recognized even though they consider themselves good.
It's funny how if you look at the production hotkeys, you can pretty much see a figure similar to a human hand, lol.
Don't forget guys: Don't confuse correlation with causation. (changing your options and typing GG after a game won't make you better)
I personally like alerts, but I hate that they are clickable. Blizzard needs to fix this, and maybe people will use alerts more. They added many new options in HotS, but unclickable alerts aren't among them! I really don't understand; they have unclickable menus and unclickable control groups, but no unclickable alerts...
What exactly are you guys trying to achieve with this research? Yeah yeah, sc2 requires a set of abilities to develop, but so what? What are the possible applications of your research results from studying sc2 players? How can different areas of life benefit from the results?
Also, I am pretty sure that these kinds of studies have been done multiple times by large groups of scientists with great amounts of resources, studying expertise acquisition in areas like playing a musical instrument or playing ice hockey. They already have a lot of material about how humans improve optimally.
Edit: There is a problem that makes it hard to apply any results of this research to anything, really. Since you are just analyzing replays, you have little to zero access to the players themselves, meaning there is a giant lack of the extensive player interviews and clinical studies that are absolutely necessary to fully understand this field of research.
On March 06 2013 15:38 Mongolbonjwa wrote: What exactly are you guys trying to achieve with this research? Yeah yeah, sc2 requires a set of abilities to develop, but so what? What are the possible applications of your research results from studying sc2 players? How can different areas of life benefit from the results?
How does expert performance vary throughout a practice session? Which practice schedules lead to the most performance gains? How does age influence reaction times in _complex_ cognitive motor tasks? How does expert performance change with small changes to the task (e.g., balance patches, expansions)? If you want to do a neurophysiological study of multitasking, what skill level of players do you need to recruit? Are some cognitive motor skills necessary to learn before others become important (e.g., A-moving via the minimap is a crazy technique, unless you are already looking at the minimap a lot), and what is the optimal training order? When in the skill spectrum is it optimal to learn hotkeys (expert digital artists have Photoshop hotkeys, for example)? And so on...
One final point: the whole history of science is proof that amazing progress can be made by understanding our world, even if we don't know the applications at the time. The invention of the laser was purely a basic science project, with no obvious practical benefits. Without it, though, we would have no optical media and no laser eye surgery.
Also, I am pretty sure that these kinds of studies have been done multiple times by large groups of scientists with great amounts of resources, studying expertise acquisition in areas like playing a musical instrument or playing ice hockey. They already have a lot of material about how humans improve optimally.
We are a large group of scientists (http://cslab-sfu.ca). Most other research carefully measures performance on simple tasks in the lab, or uses an expert vs. novice approach, which tests only two points on the skill continuum, leaving thousands of hours of skill development unmeasured.
The 7 virtues of studying RTS game replays are these: 1. A rich dynamic task environment. 2. Highly motivated participants. 3. Noninvasive and direct measures of domain performance. 4. Accurate measures of motor performance and attentional allocation. 5. Large datasets. 6. Numerous variables. 7. Many levels of expertise.
No other approach has more than 4 of these. Yes, there is lots of research on expertise: The Cambridge Handbook of Expertise and Expert Performance, an 800-page summary of research in this area, sits on my desk. Nevertheless, there are large gaps in our knowledge, and many of those gaps can be filled by studying RTS replays across skill development.
There is a problem that makes it hard to apply any results of this research to anything, really. Since you are just analyzing replays, you have little to zero access to the players themselves, meaning there is a giant lack of the extensive player interviews and clinical studies that are absolutely necessary to fully understand this field of research.
We do not claim that our approach obviates any other research method. Quite the contrary, our method works well in conjunction with brain imaging work and other contrastive approaches. But replays have exactly the kind of information that is lacking in other studies: precise cognitive motor performance measures across time. No player can tell you this information, and no fMRI will show this information.
Think about it this way. Our first study ended up with data from 3,360 players. If a human coach teaches 72 unique players a year, it would take over 40 years to teach that many players. Even then, the human coach would forget lots of players over that time period. Our dataset not only remembers every player, it remembers every single screen move and every single mouse click that every player has performed. We're not saying it will tell you everything, but surely you can see how it would be useful.
On March 06 2013 05:25 crbox wrote: wow incredible post. So much research, very interesting. I use 7 for my evo chambers too, didn't know it was the most common. Also in the "gg chart" you inverted platinium and diamond
edit: funny how masters has the lowest gg ratio. Doesn't surprise me, I used to get bm'd way more back in the days in master league.
I assumed better players would know how to play a manner game. Guess not O.o
Master players, or better players in general, are more inclined to believe they didn't make a mistake in the game and to blame it on the opponent, so they rage.
On March 07 2013 03:58 danl9rm wrote: So, GM = good mannered?
I think it mostly has to do with the fact that most GMs (provided the rank was legitimately gained) don't really have any reason to trash talk, because they've already proved their skill in getting there. Masters encompasses people who are good at the game but are missing something in their play. I think it's because at this point it's really frustrating to figure out what is missing that causes losses.
A silver player who gets supply blocked every 2nd or 3rd depot, misses half his worker timings etc. isn't going to be as frustrated because they have so many mistakes that they have many obvious points to fix, albeit it's probably not a priority to fix them.
The gg chart is shockingly accurate to what my personal experience has been. I don't remember exactly how often I gg'ed from bronze up, but I know as I approached and got into diamond my gg's started to increase quite a bit. Now that I got into masters, people do lots of all-ins and I've been gg'ing quite a bit less, though I'm trying to make a conscious effort to do it more often.
It's also interesting to see the flyer marker (I just recently put that to always after being in Masters for a few months), and that not many people seem to do anything about the alerts (which I never really took the time to see how they work exactly).
I would be really interested in more specifics on settings. Like, what specific sound settings do players use? I believe there is a file with all of that information so you could collect that file from people and automate the data analysis.
This is all such specific information that a lot of people probably don't really bother to consciously think about some of it. Either way, it's awesome to read such rich information detailing a game I both enjoy watching and playing. Especially when there's a potential scientific gain!
On March 07 2013 04:35 JaKaTaK wrote: I would be really interested in more specifics on settings. Like, what specific sound settings do players use? I believe there is a file with all of that information so you could collect that file from people and automate the data analysis.
Note to self: use more hotkeys! It has been said that many problems you face with execution of a particular strategy can be resolved by using more hotkeys or re-hotkeying.
@figq I absolutely agree, especially with HotS; there are more and more reasons to use more control groups. Which is why I use a layout that supports 10 control groups and 8 camera location keys, all within two keys' reach of my resting position.
The gg chart is shockingly accurate to what my personal experience has been. I don't remember exactly how often I gg'ed from bronze up, but I know as I approached and got into diamond my gg's started to increase quite a bit. Now that I got into masters, people do lots of all-ins and I've been gg'ing quite a bit less, though I'm trying to make a conscious effort to do it more often.
It's also interesting to see the flyer marker (I just recently put that to always after being in Masters for a few months), and that not many people seem to do anything about the alerts (which I never really took the time to see how they work exactly).
Alerts are actually more complex than people give them credit for:
There are the alerts that are clickable in the middle left side of the screen.
There are "our forces are under attack" "nuclear launch detected" and "research complete". Each of these can be turned on by 3 different audio settings: Sound effects, Error sounds, Voices
There are "Response Sounds" which include the sound when a queen is birthed, and "SCV Ready" as well as response sounds that occur when a unit is selected or given an order.
There are "Interface sounds" which will make a sound effect whenever you're trying to do something you can't (like build units when you don't have the resources)
It is really unfortunate that there isn't a setting that will exclude the extraneous sounds while keeping the alerts we want for competitive play. Previously you could edit the variables.txt file to get exactly what you want, but Blizzard shut this down.
On March 07 2013 03:48 JaKaTaK wrote: I am willing to bet that Spending Skill is a more useful variable than APM for most leagues.
I'm not so sure. APM and other variables we've looked at have a far larger spread than SQ (pro APM is at least 4 times that of Bronze). SQ might have a lot less variability though, so it's not a bad idea to check.
I think that looking at Spending Skill from ggtracker is much more accurate than using SQ, because Spending Skill accounts for game length and uses more data than the original SQ metric. I would be particularly interested in seeing which measurements correlate most closely with APM, because the issue with APM is that (IMO) it isn't something specific to work on, whereas using more control groups, Action Latency, PACs, and minimap commands are things that a player could build a deliberate practice regimen around.
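For reference, the original SQ metric (as published in the TeamLiquid "Do you macro like a pro?" article, if I recall its constants correctly) combines average income and average unspent resources:

```python
# Original Spending Quotient (SQ) formula as I remember it from the
# TeamLiquid macro article; the constants belong to that article, not
# to this study. income = avg resources/min, unspent = avg unspent
# resources banked.
from math import log

def spending_quotient(income, unspent):
    return 35 * (0.00137 * income - log(unspent)) + 240
```

The log term is what makes banking resources hurt: halving your average bank at the same income raises SQ by a fixed amount, regardless of league.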
Thank you so much for doing this. I wish I was better educated in psychology or cognitive science so that I could help more directly, but if there's anything I can do to help (already sending in replays) let me know.
EDIT:
In the survey for submitting data there are 2 options for keyboard layout:
Core
Grid
"Core" is not a hotkey layout. Standard is a hotkey layout. TheCore is a hotkey layout. But not Core. Also, I think it would be a good idea to distinguish between hotkeys (any button used to make a command) and Control Groups, which are specifically groups of units/structures that can be recalled, created, and added to with a combination of buttons.
I would also strongly suggest asking players how many camera location keys they use. I think it will show to be correlated both to league and to APM.
In your last graph of ggs, the tails of the "g.g." are cut off on the y axis label, making it look like it means the opposite: "probability of qq". This made me laugh.
Also, is diamond supposed to be placed before platinum in that graph (on the x-axis), or is that a mislabel? If so, it's interesting that manner spikes in platinum and then diminishes through diamond and master. I have a feeling your small number of GM participants are people who are predisposed to civil behavior, having contributed to science, so they probably skew that result towards "gg". Not that that graph is meant to be serious.
On March 07 2013 09:39 LokiSnake wrote: Love the information, CrushDog, and would always love to see more. I do have a couple of questions:
When would the results be published?
Any insights as to why upper league players don't have alerts turned on?
because they are annoying as hell, that's why. Sometimes you just have to ask people for the reason to find out.
Also, having alerts will pull you toward reacting to things like supply blocks instead of proactively checking your supply so that it never happens. Some players will do sessions without any sound in order to train their minimap awareness and macro mechanics.
I must say it doesn't surprise me that there is a decline in GGs in master. As I noticed from climbing up through the leagues a few times on different accounts, master players seemed to be the worst-mannered.
On March 07 2013 09:39 LokiSnake wrote: Love the information, CrushDog, and would always love to see more. I do have a couple of questions:
When would the results be published?
Any insights as to why upper league players don't have alerts turned on?
because they are annoying as hell, that's why. Sometimes you just have to ask people for the reason to find out.
Also, having alerts will pull you toward reacting to things like supply blocks instead of proactively checking your supply so that it never happens. Some players will do sessions without any sound in order to train their minimap awareness and macro mechanics.
ya, it always felt like the kind of training wheels you'd use for kids learning to ride a bike; once you know it really well, it's best just to take them off.
On March 06 2013 15:38 Mongolbonjwa wrote: What exactly are you guys trying to achieve with this research? Yeah yeah, sc2 requires a set of abilities to develop, but so what? What are the possible applications of your research results from studying sc2 players? How can different areas of life benefit from the results?
How does expert performance vary throughout a practice session? Which practice schedules lead to the most performance gains? How does age influence reaction times in _complex_ cognitive motor tasks? How does expert performance change with small changes to the task (e.g., balance patches, expansions)? If you want to do a neurophysiological study of multitasking, what skill level of players do you need to recruit? Are some cognitive motor skills necessary to learn before others become important (e.g., A-moving via the minimap is a crazy technique, unless you are already looking at the minimap a lot), and what is the optimal training order? When in the skill spectrum is it optimal to learn hotkeys (expert digital artists have Photoshop hotkeys, for example)? And so on...
One final point: the whole history of science is proof that amazing progress can be made by understanding our world, even if we don't know the applications at the time. The invention of the laser was purely a basic science project, with no obvious practical benefits. Without it, though, we would have no optical media and no laser eye surgery.
Also, I am pretty sure that these kinds of studies have been done multiple times by large groups of scientists with great amounts of resources, studying expertise acquisition in areas like playing a musical instrument or playing ice hockey. They already have a lot of material about how humans improve optimally.
We are a large group of scientists (http://cslab-sfu.ca). Most other research carefully measures performance on simple tasks in the lab, or uses an expert vs. novice approach, which tests only two points on the skill continuum, leaving thousands of hours of skill development unmeasured.
The 7 virtues of studying RTS game replays are these: 1. A rich dynamic task environment. 2. Highly motivated participants. 3. Noninvasive and direct measures of domain performance. 4. Accurate measures of motor performance and attentional allocation. 5. Large datasets. 6. Numerous variables. 7. Many levels of expertise.
No other approach has more than 4 of these. Yes, there is lots of research on expertise: The Cambridge Handbook of Expertise and Expert Performance, an 800-page summary of research in this area, sits on my desk. Nevertheless, there are large gaps in our knowledge, and many of those gaps can be filled by studying RTS replays across skill development.
There is a problem that makes it hard to apply any results of this research to anything, really. Since you are just analyzing replays, you have little to zero access to the players themselves, meaning there is a giant lack of the extensive player interviews and clinical studies that are absolutely necessary to fully understand this field of research.
We do not claim that our approach obviates any other research method. Quite the contrary, our method works well in conjunction with brain imaging work and other contrastive approaches. But replays have exactly the kind of information that is lacking in other studies: precise cognitive motor performance measures across time. No player can tell you this information, and no fMRI will show this information.
Think about it this way. Our first study ended up with data from 3,360 players. If a human coach teaches 72 unique players a year, it would take over 40 years to teach that many players. Even then, the human coach would forget lots of players over that time period. Our dataset not only remembers every player, it remembers every single screen move and every single mouse click that every player has performed. We're not saying it will tell you everything, but surely you can see how it would be useful.
The first problem is that you cannot possibly find all the necessary information just from replays. A good example is the reaction time question: a replay does not provide the information needed to really draw any conclusions.
To get a wider and more complete understanding of expertise and its development, you would absolutely need to complement this research with neurophysiological and psychological examinations.
The teal background is a bit agitating on my eyes (darkened room and a bright screen), but it's a really nice read anyway (even while I squint a smidgen). Maybe a darker blue (navy) would be better <3.
How long will you be taking uploads for? I've not been saving my replays since I started only playing on the HotS beta, if you will be taking HotS replays then I'll upload my first batch of replays from the expansion.
On March 07 2013 14:34 Skytt wrote: How long will you be taking uploads for? I've not been saving my replays since I started only playing on the HotS beta, if you will be taking HotS replays then I'll upload my first batch of replays from the expansion.
The idea is that we take everything you've got that's old (WoL, HotS beta), then we'll check in with you in a couple months and get what you've accumulated since you submitted.
We want as complete a sample as possible, so if you have a large collection of WoL replays, please do submit them.
Great post! I'm tempted to massively grind out HoTS 1v1s just to participate again (Apparently I have played myself into Diamond over 3 years in under 300 games ... it sucks having to work).
As for PAC, you say that "A PAC basically consists of a shift of the screen to a new location for some time, followed by at least one action (typically 4-6), and then a shift to some other location."
And that's roughly what you talked about in the video as well. But what you are really talking about is how quickly a person changes their focus and how quickly they decide what to do within that new focus, right?
So, to get the most insane PAC, you'd need to be a good multitasker. Someone who's quickly jumping between, let's say, 3 medivacs and their base, giving very quick movement single commands at each location, would have very high PAC.
So, if higher PAC means that you are a better player, then a better player will more quickly start doing something else, and the "more quickly" part is because the moment their screen locks, they know how to do the actions as fast as they can. (A combination of decision-making and mechanics.)
So, in another context, a good chef is one that will very quickly jump from preparing one ingredient to another, with very little pause in deciding the next necessary action, and quick movements in preparing each ingredient (for example, if the next action is chopping onions, their hands would immediately grab the appropriate knife and an onion, do the slicing motions quickly and efficiently, and then move on.)
Did I understand that correctly? (I should probably read Salvucci & Goldberg [2000], but maybe I can understand it better already here.)
On an unrelated point: are there major differences in PAC between the races? I could imagine that Z has higher PAC than T due to infestors and creep spread, and T higher PAC than P due to more harass-based gameplay (perhaps limited to TvP).
On March 06 2013 04:02 CrushDog5 wrote: Finally, just for kicks, we scanned the chat of each game for gg or any common variant. If you want to be in GM.
One thing I have always been thinking is that people often talk about multitasking in StarCraft. But actually there is no real multitasking in StarCraft; it is not even possible. All actions happen in sequence, not simultaneously. What people think is multitasking is actually just switching quickly between different actions.
On the contrary, there is real multitasking in fighter pilot training and in actually flying the jet, and it is far more difficult than StarCraft's "multitasking," which is not even real multitasking. So if you want to study multitasking, fighter pilots and their training programs are something that should be checked. Even the qualifiers for pilot training programs are very difficult. Flying jets also requires a great deal of emergency management skill, which again is far more demanding than playing StarCraft. Especially considering that fighter pilots need to learn to deal with high G-forces and have to study a lot of theory about the physics of flying and how the jet itself works, learning to fly jet planes is harder than StarCraft.
On March 07 2013 16:37 Ghanburighan wrote: Great post! I'm tempted to massively grind out HoTS 1v1s just to participate again (Apparently I have played myself into Diamond over 3 years in under 300 games ... it sucks having to work).
Did you have a lot of RTS experience already?
If not, then go ahead and submit your replays. Faster learners are very interesting.
Did I understand that correctly (I should probably read Salvucci & Goldberg [2000], but maybe I can understand it well enough already from here)?
This is fine. You have the main idea. Salvucci & Goldberg just describe a particular algorithm for identifying fixations in eye-movement data (a dispersion-threshold algorithm), which we used to aggregate the raw screen moves into fixations. Fixations with at least one action are PACs.
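For the curious, the dispersion-threshold idea can be sketched in a few lines; the threshold and minimum-run values below are invented, not the lab's actual parameters:

```python
def fixations(points, max_dispersion=100.0, min_length=3):
    """Group a sequence of (x, y) screen positions into fixations.

    A fixation is a maximal run of points whose bounding-box dispersion
    (width + height) stays under max_dispersion; runs shorter than
    min_length points are discarded. Both thresholds here are made up.
    Returns inclusive (start, end) index spans of each fixation.
    """
    out = []
    i = 0
    while i < len(points):
        j = i
        # Grow the window while the points stay tightly clustered.
        while j < len(points):
            xs = [p[0] for p in points[i:j + 1]]
            ys = [p[1] for p in points[i:j + 1]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        if j - i >= min_length:
            out.append((i, j - 1))
            i = j
        else:
            i += 1
    return out
```

In the PAC analogy, each returned span would then be checked for at least one game action to count as a PAC.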
Just to clarify, PAC Action Latency gets SMALLER, that is, it takes less time to respond.
On an unrelated point: are there major differences in PAC between the races? I could imagine that Z has higher PAC than T due to injects and creep spread, and T higher PAC than P due to the more harass-based gameplay (perhaps limited to TvP).
We looked at lots of race-specific variables; the vast majority were not very good. At one point we had a variable for MicroAPM. This was higher for Terrans, especially the Terran pros. Other than that, race was not that strong. There might still be something that we missed (we weren't keen to have to explain race in a scientific paper).
The other interesting thing we didn't find is Macro. No macro variable outside of workers trained was useful for anything. Surprising, but there it is.
On March 07 2013 16:37 Ghanburighan wrote: Great post! I'm tempted to massively grind out HoTS 1v1s just to participate again (Apparently I have played myself into Diamond over 3 years in under 300 games ... it sucks having to work).
Did you have a lot of RTS experience already?
If not, then go ahead and submit your replays. Faster learners are very interesting.
I had 0 experience with RTS games before SC2. Now I feel all special ^^ I'll make sure to submit.
That is very surprising indeed (especially for injects and SQ). Can't wait to see more results.
On March 08 2013 00:59 CrushDog5 wrote: The other interesting thing we didn't find, is Macro. No macro variable outside of workers trained was useful for anything. Surprising, but there it is.
Wild! Did you look at SQ?
I presume there is a lot of correlation among the various measures you used, so that aside from your top-ranked criterion, the next-highest-ranked criterion achieves that spot because it has the most useful information that's orthogonal to the top-ranked one?
Some more detail about your ranking algorithm/procedure would be welcome, of course.
On March 08 2013 00:59 CrushDog5 wrote: The other interesting thing we didn't find, is Macro. No macro variable outside of workers trained was useful for anything. Surprising, but there it is.
Wild! Did you look at SQ?
Didn't. SQ is based on the resources graph, which is not in the actions list part of the replay. We only looked (so far) at the actions part. We also want to focus (at the beginning) on Cognitive/Motor sorts of variables.
I presume there is a lot of correlation among the various measures you used, so that aside from your top-ranked criterion, the next-highest-ranked criterion achieves that spot because it has the most useful information that's orthogonal to the top-ranked one? Some more detail about your ranking algorithm/procedure would be welcome of course
To some extent. Conditional inference forests (which is what we used) use bootstrapping, which serves to de-correlate the variables. This method is commonly used in screening procedures. Obviously this is different from something like linear regression. In cases where there were very strong correlations (action latencies to the first action compared to latencies of the other actions in a PAC) we chose the more powerful of the two and dropped the other. The most highly correlated variables in the analysis we show are Hotkey Assigns and Hotkey Selects.
It's in the paper, which we'll post when it's accepted. Basically we use permutation ranking: randomly permuting the variable and measuring how much worse the classifier gets. We run these 25 times (that is, we create 25 forests using the same procedure) and include a random control predictor. If the median importance is better than the max of the control, it's unlikely to be due to chance. That's also how we can tell that Action Latency is a better predictor than APM in some leagues: there is simply no overlap over 25 runs; APM always loses. Vice versa for Masters/Pro, where APM always wins.
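The permutation step itself is simple to sketch; this is a generic illustration with a toy scoring function, not the conditional inference forests the study actually used:

```python
import random

def permutation_importance(score_fn, X, y, col, n_repeats=25, seed=0):
    """Drop in accuracy when one column of X is shuffled.

    score_fn(X, y) -> accuracy of an already-fitted model (any model works).
    Returns a list of n_repeats values (baseline minus permuted score),
    mirroring the "25 runs" style of repetition described above.
    """
    rng = random.Random(seed)
    baseline = score_fn(X, y)
    drops = []
    for _ in range(n_repeats):
        shuffled = [row[:] for row in X]  # copy rows so X is untouched
        col_vals = [row[col] for row in shuffled]
        rng.shuffle(col_vals)
        for row, v in zip(shuffled, col_vals):
            row[col] = v
        drops.append(baseline - score_fn(shuffled, y))
    return drops
```

A variable whose 25 importance values all exceed the control predictor's maximum is unlikely to be noise, which is the comparison the post describes.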
Great post, but I have some criticisms of the "hotkey" section. Specifically, "production building hotkey assigns", "upgrade building hotkey assigns", and "combat unit hotkey assigns". These 3 sections extrapolate the numerical control group assignments from the replay. However, when you do so, you fail to recognize or take into account the possibility of control group remapping.
To explain what I mean, if you go into the StarCraft 2 options and select "Hotkeys", then select "Global", then select "Control Groups", you will see that it is possible to remap, for example, the "1" button to control group 4. If a player did that, the replay would show that the player was using a lot of control group 4; implicitly, you assume that he is also using the button "4" to do this, and this is obviously not the case.
Many players remap control group hotkeys. Most commonly, it is possible to remap ALL possible hotkeys (not just control groups) to one small area of the keyboard. In doing so, the hand doesn't have to be moved whatsoever to perform any arbitrary action. For an excellent example of this, see TheCore.
A hotkey configuration in which the hand never has to move is optimal for APM purposes, but the specific configuration of each player's setup varies widely. And since there is no way to extrapolate a player's individual remappings from a replay, it makes the resulting numerical control group assignments completely arbitrary and altogether worthless for any kind of scientific aim. This is the kind of thing that would never be caught in a scientific peer review, but only by a StarCraft player!
On March 10 2013 04:55 Zamiel wrote: Hi CrushDog5,
As for the rest, good work. =)
What do you suppose is the percentage of players who remap their 1-0 control groups to different keys? If it is large, then you're right, the data don't mean very much, because you don't really know where those keys are. If it's small though, it will just add some noise to those data.
My guess is that lots of pros have extensive Brood War experience and so haven't changed those keys at all. I also think that the average below-Platinum player probably hasn't changed them. This is just my intuition, of course.
It would be fairly straightforward to compare various hotkey configurations for efficiency and for ease of learning. We'd need replays: maybe 100 before the change, and then everything afterwards. We'd need comparable replays from people who have not switched, but have played an equivalent number of games, and are in the same league. You just match the ActionLatencies at the start, and then compare them after the switch. You'd also see what the cost of the switch is, in actual time (e.g., it costs 100ms per hotkey use, and it is back to normal speed after 300 games, and 25ms better than normal after that). You could ALSO see if there were specific hotkeys which were difficult to learn, and which ones may not be any better than
I'm not sure how easy it is to get that data, but it is not challenging to analyze (though it will take some time).
If you could figure out how to make the data collection painless, I'd be willing to analyze the data.
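The before/after comparison described above could start from something as simple as this sketch (the variable names are assumptions, and a real analysis would add the matched non-switching control group):

```python
def latency_shift(latencies, switch_idx):
    """Mean per-game action latency before vs after a layout switch.

    latencies: one mean latency (ms) per game, in play order;
    switch_idx: index of the first game played on the new layout.
    Returns (before_mean, after_mean, delta); a positive delta means
    the player got slower after switching. A real analysis would also
    match a non-switching control on pre-switch latency, as described above.
    """
    before = latencies[:switch_idx]
    after = latencies[switch_idx:]
    b = sum(before) / len(before)
    a = sum(after) / len(after)
    return b, a, a - b
```

Computing this in a sliding window after the switch, rather than one lump, would show how many games it takes to return to (or beat) the baseline.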
I haven't hit the 1v1 ladder in well over 6 months IIRC. I race switched to protoss from zerg and I changed my keyboard layout to TheCore. Would grinding some games on the ladder and submitting the replays help you guys in any way? I played a bunch of team games in between but nothing really serious... I just wonder if the data would suffer from so many changes (hots + keys + race).
Quick question, since no one seems to have brought this up: do you think you're missing half of the picture here?
If we compare SC2 to, say, blitz chess, I think measuring what level of player someone is by how fast they move their pieces (APM/PAC) might give the impression that chess or SC2 is just a matter of moving things quickly, rather than something that requires understanding the rules of the game, the strength of each piece alone and in relation to other pieces, positioning, etc.
That being said, I can't think how you could automate measuring how good someone is at a) reading the game, b) understanding the flow from economy to tech to units, c) scouting, d) positioning and counters from replays alone.
I noticed your request for replays from a large number of games (300). I've played 1200 games on and off since release (high Masters), often taking breaks, coming back and re-mastering basic mechanics each time, and relapsing into bad decision-making as I relearn the basics and get my game sense back. So essentially I plateau, and I'd imagine you'd see very little improvement over 300 or 600 games as I bounce back and forth around my usual level. And the number of people with 1000 ranked games still in Bronze or similar baffles me even more.
Even rarer would be finding someone starting at Bronze, willing to improve up to about Diamond, and submit their replays.
Since you guys seem to have hit on the ability of replays to show us things like the PAC cycle, do you have any plans of either creating an add-in for SC2Gears so we can track our own PAC cycle, or collaborating with the creator of SC2Gears so he can add this as a core feature?
Additionally, if the answer is yes, what additional statistics might it be possible for us to pull from our own replays to gauge our progress and/or help us improve our play level?
On March 06 2013 05:14 JinDesu wrote: One small note - I don't know if it's because the sun is glaring in from my window right now, but white text on almost teal background is a little hard to read. Sorry for being a bit picky.
I did the layout for this and agree it's not great. I might change it to yellow or something; will try to fix it today.
The results are pretty cool, looking forward to seeing it published.
Try to avoid yellow as much as possible too. It's just not user-friendly either. D:
On March 07 2013 04:49 JaKaTaK wrote: @figq I absolutely agree, especially with HotS, there are more and more reasons to use more control groups. Which is why I use a layout that supports 10 control groups and 8 camera location keys all within 2 keys reach of my resting position.
Why not just change it to the dull light blue of TL? I find this colour extremely easy on the eyes.
Don't know if someone already dumped this here, but it seems to be quite interesting:
Defining Cognitive Science | Mark Blair: Real-time strategy video games
Dr. Blair talks about his current research, which explores expertise in the context of real-time strategy (RTS) games such as StarCraft II. He discusses the findings of his team's first study, which is the largest expertise study ever conducted, as well as the limitations and vast potential of this approach.
Mark Blair is Associate Professor in the Cognitive Science Program at Simon Fraser University. He leads his research team in the Cognitive Science Lab looking at how selective attention, the ability to pay attention to important features and ignore irrelevant ones, supports categorization, and interacts with our memory and perceptual systems.
To participate in his current study see: http://skillcraft.ca/ Presented by the Cognitive Science Program, Simon Fraser University.
Would it be possible to include EAPM as well to see whether it's a more accurate correlate to skill than traditional APM? Obviously using something similar to SC2Gears EAPM rather than the new unreliable ingame measurement. I would imagine it would be a more accurate measurement, but it'd be nice to see that backed up with data.
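SC2Gears's exact EAPM rules aren't spelled out in this thread; one simplified notion of "effective" actions, dropping rapid same-command repeats, can be sketched as:

```python
def eapm(actions, game_minutes, repeat_window=0.25):
    """Toy 'effective APM': discard an action that repeats the previous
    kept command within repeat_window seconds (i.e. obvious spam).

    actions: list of (timestamp_seconds, command) tuples in time order.
    The window value is invented; SC2Gears's real EAPM rules are more
    involved (they also consider cancelled and redundant orders).
    """
    effective = []
    for t, cmd in actions:
        if effective and cmd == effective[-1][1] and t - effective[-1][0] < repeat_window:
            continue  # same command re-issued almost immediately: count as spam
        effective.append((t, cmd))
    return len(effective) / game_minutes
```

Even this crude filter illustrates the point of the question: spam-heavy players would see a much bigger gap between APM and EAPM than efficient ones.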
On March 07 2013 17:45 Mongolbonjwa wrote: One thing I have always been thinking about is that people often talk about multitasking in StarCraft. But actually there is no real multitasking in StarCraft; it is not even possible. All actions happen in sequence, not simultaneously. What people think of as multitasking is actually just switching quickly between different actions.
On the contrary, there is real multitasking in fighter pilot training and in actually flying a jet, and it is far more difficult than StarCraft's "multitasking", which is not even real multitasking. So if you want to study multitasking, fighter pilots and their training programs are something that should be examined. Even the qualifiers for pilot training programs are very difficult. Flying jets also requires a great deal of emergency-management skill, which again is far more demanding than playing StarCraft. Especially considering that fighter pilots have to learn to deal with high G-forces and have to study a lot of theory about the physics of flight and how the jet itself works, learning to fly jets is harder than StarCraft.
Well, you've sucked me in, had to register an account.
You keep putting this study down in multiple posts with baseless and extraneous criticisms. Can I ask, what are your qualifications?
Humans are inherently incapable of "high level" multi-tasking. We're similar to a single core processor, rapidly switching between different tasks. Our "sensors" - feel, sight, hearing, etc. - provide "interrupts" (in the computer science sense) that signal our brains' higher-level areas to refocus on a new task. Much like a computer switching tasks, this mental task switching incurs overhead (thus it is more efficient for humans to finish single tasks to completion than it is to hop between several tasks).
Flying a jet is no more "multi-tasking" than Starcraft is. In fact, there is much less task-switching in flying a plane. When you're in an emergency situation piloting a plane, your entire attention is on the single situation at hand. You are flying the plane, based on input from a number of your senses. To add an extra layer of "multi-tasking" to flying a plane, you could add random math problems (on top of an emergency situation) that need to be solved on an interval to prevent the controls from locking out. This would be kind of like attending to worker macro during an intense battle.
Before you ask: Yes, I fly planes. Yes, I play Starcraft.
I salute you, sir. Your first post is far more memorable than mine. Welcome to the realm of those who used to lurk and were yanked into posting because something demanded a response. I greet you as a fellow member ^_^
Just noticed this post, what an interesting topic. Computer games really have an exceptional potential for providing data on learning - and perhaps on motivation further ahead. SC2 provides a mix between the strategic, decision-making skills of a game like chess and the physical, "mechanical" skills of traditional sports, and it ties them up in a knot of perceptual learning. Trying to tease apart the variables of importance must be fascinating work.
On a practical issue, a few of the graphs in your post come off as somewhat confusing on a first reading. I don't quite understand the use of the three bars to indicate percentages in the first two graphs in the settings data section. Would it not be more intuitive, and better for comparison, to use a single bar divided into three areas for every league (a stacked bar graph)?
I wonder if you'll be doing more research with a focus on the highest echelon of players in the future? From my perspective, the differences between players from the masters level and up are very interesting. Have you considered attempting to compare the interactive data from the replays with more direct perceptual data? I'm thinking something akin to eye-tracking. As far as I can tell, the data from a replay will not easily allow you to measure a players awareness of the mini-map, for instance.
Another area of inquiry could be the success of players at different periods of time in the game. At later stages (say +10 minutes) there are generally more things in the game to keep track of and more things to do. Just from watching different SC2 players, there are big differences in how well otherwise seemingly equally "skilled" players manage that increased demand on attention and action.
Well, best of luck with your work. Please keep updating us.
What's interesting about StarCraft 2 is that it is one of the few real-time games that has its statistical record keeping hard-coded into the game itself rather than externally recorded. This makes it a prime candidate for an analysis of stress: for example, the predisposition of certain personality types to "panic" in game through rapid action movement, a sudden drop in useful APM upon an engagement, and so on.
On May 18 2013 02:44 dsjoerg wrote: Has the paper been accepted / published yet? Inquiring minds must know!
We were asked to do some revisions to the manuscript, which we did, and just sent it in today.
We're hoping the editor just accepts it as is without sending it back out to reviewers. If that happens, it will be maybe 3 weeks till it's available online. We submitted to an open access journal, so everyone can just download the paper without needing to be affiliated to a university or pay anything.
If they send it out to reviewers again add 3 weeks.
If the editor goes insane and rejects this revision add 6 months. I don't think that will happen though.
On May 19 2013 04:22 Evangelist wrote: What's interesting about StarCraft 2 is that it is one of the few real-time games that has its statistical record keeping hard-coded into the game itself rather than externally recorded. This makes it a prime candidate for an analysis of stress: for example, the predisposition of certain personality types to "panic" in game through rapid action movement, a sudden drop in useful APM upon an engagement, and so on.
I look forward to seeing the results of this.
We have a dataset of 26 games with corresponding heart rate data. We'll be working on this over the summer.
On April 13 2013 03:07 m0ck wrote: I wonder if you'll be doing more research with a focus on the highest echelon of players in the future? From my perspective, the differences between players from the masters level and up are very interesting. Have you considered attempting to compare the interactive data from the replays with more direct perceptual data? I'm thinking something akin to eye-tracking. As far as I can tell, the data from a replay will not easily allow you to measure a players awareness of the mini-map, for instance.
Using the minimap to A-move is predictive in some leagues, so that tells us something. We'll do eye-tracking work eventually; we were an eye-tracking lab before we started doing SC2 research. With eye-tracking, samples are small though, because you have to run people individually.
Fantastic. Absolutely fantastic.
As for the material presented in the thread. This was utterly intriguing. Thank you for this- this is one of the most interesting things I've seen in days.
The editor decided to send it back to reviewers. That means it will add some time to the process (unfortunately). It really does take forever to get this stuff published!!
In the meantime we are writing up a study of how age influences performance, a paper on performance and heart rate, and a paper on performance variability in pro practice games. Hopefully, having the first paper published already will make subsequent papers go more smoothly.
On June 26 2013 01:59 Dreamer.T wrote: Anyone else laugh at that sudden dip of gg's going from diamond to masters?
Haha yes, I noticed it too. It seems WhiteRa's "More GG more skill" has finally been supported, though: despite that dip, there does seem to be a positive correlation!
On June 26 2013 03:55 9-BiT wrote: I don't gg, maybe I should start.
Handling defeat with grace is a useful skill, and postgame ggs are a good way to practice it. I suspect that the GM level sees gg frequency spike for social reasons; these guys can't really be strangers to each other if they play each other much more frequently than ye olde random gold vs another gold. Plus, piss someone off, and you will find yourself antagonized in return when the other shoe drops (and it always does).
On June 26 2013 05:04 Lumi wrote: Oh my god, the dip in GG for masters players.. I laughed so hard at that graph, and then a second time when I payed more attention to that detail :D
Also. Crushdog5 - I want your babies.
perhaps it's the combination of being good players who still have lots to learn, mass-gaming to have a chance at reaching GM, copying build orders, and being generally frustrated by their own and others' play.
although it'd be less accurate, i'd like to see the stats for 'gl hf', 'gl', and 'hf', and another one for people who say 'gl hf' but then don't 'gg' after the game, haha.
On June 26 2013 05:30 nanaoei wrote: Proud that this is all from my local university! : )
SFU fighting~
Faculty of Applied Sciences ftw!
EDIT: Oh, never mind. I forgot Cognitive Sciences is an inter-disciplinary department at SFU (I think Computing Science + Psychology?). Anyways... GO SFU! /woot
On March 31 2013 10:36 Reithan wrote: I'm still interested in a tool or sc2 plugin/feature for seeing your own PAC.
Would still love any response to this question. Even if it's just "We don't plan on implementing any tools or interfaces for individual users at this time" or something...
We don't plan on implementing tools or interfaces for individual users at this time.
That being said, we'd be happy to answer questions from anyone who wanted to implement such a thing. PM me or email me if you have implementation questions. All our analysis code is MATLAB, which is obviously not a good choice for a general-purpose tool, so some work will need to be done. The data processing required to get PAC latencies is not difficult.
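To give a rough idea of what that processing looks like, here's a minimal sketch in Python (not our MATLAB code). The event format, the field names, and the 6-unit camera-move threshold are all illustrative assumptions, not our actual pipeline:

```python
import math

def pac_latencies(events, min_camera_move=6.0):
    """events: time-ordered dicts, e.g.
       {"t": 1.0, "kind": "camera", "x": 50, "y": 50} or
       {"t": 1.25, "kind": "action"}.
    A new PAC starts when the camera jumps farther than min_camera_move
    map units; its latency is the delay until the first action after that."""
    latencies = []
    fixation_start = None  # onset time of the current screen fixation
    last_cam = None        # (x, y) of the current camera position
    for ev in events:
        if ev["kind"] == "camera":
            jumped = last_cam is None or math.hypot(
                ev["x"] - last_cam[0], ev["y"] - last_cam[1]) > min_camera_move
            if jumped:
                fixation_start = ev["t"]  # a new PAC begins here
            last_cam = (ev["x"], ev["y"])
        elif ev["kind"] == "action" and fixation_start is not None:
            latencies.append(ev["t"] - fixation_start)
            fixation_start = None  # record one latency per PAC
    return latencies
```

Feed it camera and action events parsed out of a replay (a parsing library like sc2reader could supply them), then aggregate the latencies however you like - median per game, per league, etc.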
We're a science lab, so our mandate (and funding) keeps us focused on those aspects of the project that are of primarily scientific interest, but we are very supportive of the community and if our work could be helpful to anyone, that would be great.
If you could share the code for computing PACs in any language, that would help the community maintain 100% consistency in how PAC is measured and described. That would help avoid the situation that exists today with APM and EAPM, where there are thirty-nine measurements, all different.
On July 03 2013 06:44 Die4Ever wrote: You measured these stats per game instead of per time? That means that if a league has longer games it inflates all their stats. I feel like you should've measured them per minute or something. Cool stuff though!
We did use measures that are per unit time, i.e., workers trained divided by game length (time) for each game. For the record though, there were only small differences in game length by league.
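For anyone curious what that normalization looks like, a trivial sketch (the example numbers are made up): rates are computed against game time, so longer games don't inflate the stat.

```python
def per_minute(count, game_length_seconds):
    """Convert a raw per-game count into a rate per minute of game time."""
    return count / (game_length_seconds / 60.0)

# e.g. 66 workers trained over a 22-minute (1320 s) game -> 3.0 workers/min
rate = per_minute(66, 1320)
```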