|
I think a lot of people are saying stupid things.
First off, the 'what is fair' argument. It doesn't matter. You decide on the challenge you want the AI to solve. No one is going to argue that an AI isn't 'being fair' when it is doing your current job so much better that you get fired. If AI is better at reading medical imaging data than doctors, why should we let doctors keep doing this job and let people die? So what does this whole 'not fair' thing mean, and why is it even important? To me, the APM cap is already silly. In theory, a human could keep count of the hits every unit they have is taking, because it is in view, so they could calculate the HP of each of their 20 marines without having to click on them. But humans cannot. AI can. That's the point. It doesn't have the limitations humans have. Yes, the AI can see the game state and doesn't need an interface. But we could also display the game state as a matrix of numbers on the screen, rather than the view of the game 'made for humans'. Humans cannot comprehend that at all, because of human limitations.
BTW, to those who think the AI learned from playing vs MaNa: that almost certainly isn't the case. You train the weights and biases of your net using training games, and you need to be sure you are adjusting your weights properly. If you let the AI play thousands of games vs a human, you cannot tell whether a rising winrate means your weights actually got better, or the human got weaker/tired/started messing around, etc.
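A toy sketch of that evaluation problem (all numbers and names invented, nothing to do with AlphaStar's actual setup): if the human opponent's strength drifts between games, the observed winrate of two agents of slightly different quality can be indistinguishable.

```python
import random

def observed_winrate(agent_strength, n_games, human_drift=0.15):
    """Simulate n_games vs a human whose effective strength drifts each game
    (tiredness, messing around, etc.)."""
    wins = 0
    for _ in range(n_games):
        human_strength = 0.5 + random.uniform(-human_drift, human_drift)
        p_win = agent_strength / (agent_strength + human_strength)
        wins += random.random() < p_win
    return wins / n_games

random.seed(0)
# Two agents, the second only marginally stronger: over a few hundred
# games the drifting human can mask the difference entirely.
print(observed_winrate(0.50, 300))
print(observed_winrate(0.52, 300))
```

Self-play against frozen copies avoids this: the opponent's strength is a constant, so a winrate change can be attributed to the weight update.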
If you compare this with the BWAPI bots, and consider the power deep-learning neural networks have shown so far, it is clear that a neural net can handle playing StarCraft properly. The AI doesn't get stuck doing silly stuff. It clearly knows the objective of the game.
This means they succeeded in capturing RTS play as a mathematical problem. They have a game state, which is basically a matrix of numbers, and they have a 'move space', which is basically a matrix that decides which action to take. And they were able to formulate it such that the loss landscape has enough curvature to descend toward a minimum. Training the neural network doesn't lead from one silly, useless attempt to control the game to some other version of that. It converges toward proper, strong play.
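A minimal sketch of that framing (toy sizes, hypothetical names; the real agent is vastly more complex): the game state as a numeric vector, the 'move space' as a weight matrix mapping state to action probabilities, trained by gradient steps that push probability mass toward rewarded actions.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, N_ACTIONS = 32, 8                    # toy sizes
W = rng.normal(0, 0.1, (STATE_DIM, N_ACTIONS))  # the 'move space' matrix

def policy(state, W):
    """Map a numeric game state to a probability distribution over actions."""
    logits = state @ W
    e = np.exp(logits - logits.max())           # stable softmax
    return e / e.sum()

def train_step(state, good_action, W, lr=0.1):
    """One gradient step on cross-entropy, nudging W toward a rewarded action."""
    p = policy(state, W)
    grad = np.outer(state, p)                   # dL/dW for cross-entropy loss
    grad[:, good_action] -= state
    return W - lr * grad

state = rng.normal(size=STATE_DIM)
for _ in range(50):                             # descend toward a minimum
    W = train_step(state, good_action=3, W=W)
print(policy(state, W)[3])                      # probability of action 3 has grown
```

The point of the paragraph above is exactly this: once the game is stated in this form, 'getting better at StarCraft' becomes 'moving downhill on a loss surface', which is something gradient descent can do.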
I remember having a debate here where people vehemently claimed that AI would never be able to properly play an RTS. This was after AlphaGo beat Lee Sedol. Which is completely silly in hindsight. I think people who say they are not impressed somehow expected an AI with an eerie ability to read the mind of the other player. Or they expect 'human play', whatever that means. But the bots train to be good at winning. Yes, they train against each other, so it is not clear initially whether the way the AI plays is full of blind spots that humans automatically find and exploit. Often you can see how an AI tries to react to you, and then exploit that. We only saw that once, with the harass (I don't know what those units are called, as I am a SC:BW player). But the AI beat the human players. That is what matters. We don't know what would have happened if the humans were better. The AI doesn't care about aesthetics like long-distance mining with workers or building too many observers. The AI points out what is important to winning the game. And clearly one aspect is steady macro, good micro and decisive attacks. To me, the most impressive aspect was that the AI knows when it can win a fight and when it cannot. So this tells you something about the game. Apparently 'being really good at the micro minigame' is at the core of being good at SC2. People say the AI didn't scout. Maybe scouting isn't that important? Or maybe the AI did have 'star sense' and already knew what it needed to know. Furthermore, this isn't a human, so it doesn't think in concrete plans. The AI juggles all possible unknown plays at the same time. Rather than trying to hard-counter what the opponent is doing, the AI decides to play a strategy that wins vs most things the opponent can do. Maybe this is the proper way to approach SC2? Humans try to play to win each and every game.
But since RTS games are games of limited information, maybe, like in poker, you should play the strategy that is best in the long run, not try to win a game you are not going to be able to win because of a random draw/bad luck.
People talk about mind games and about the AI being like human RTS players, trying to guess their opponent's build. But the AI here is far superior, especially if you can just introduce a new agent with a completely different style. The AI doesn't know what happened in the game just before, and it doesn't care. It knows the best way to play, and it just keeps doing that. How are you going to counter that as a human? By definition, you will lose every mind game vs AI, because it doesn't 'think'. Well, maybe the mind game will be with the human in control of which agent to select.
As for the other matchups: if you can solve this matchup, you can make a completely independent NN for the other ones. Someone said it will be way more difficult to train a NN that can do all matchups. That's true if you need the exact same neural net, with the same nodes and weights, to play all matchups. But you can just make a completely independent agent for each.
As a SC:BW player I am kind of confused about how oversaturating minerals is considered 'bad' by SC2 players. I thought in SC:BW we already knew you want to oversaturate. So it seems SC2 people forgot, or had to unlearn it because their game is different. This AI clearly has a different approach. So how do we know who is wrong? How do we know whether the AI is wrong to oversaturate, or humans are wrong not to? This is very similar to the discussion people had in Go, where it was not clear whether what the NN was doing was far superior but incomprehensible to humans, or whether the AI was wrong but so much stronger in other aspects of the game that it didn't matter.
Same thing for the AI going mass stalkers vs immortals when humans think stalkers are countered by immortals. Maybe going mass stalkers and microing properly is the best strategy in every PvP. It says something about the game, not about how bad the AI is.
Maybe these are some things to think about before you say 'X would beat the strongest DeepMind AI 10 times in a row', or 'just cannon rush the AI and you will win', or 'attack the AI and you will win'.
I also can only shake my head at people saying "The AI was only good at massing units and the micro mini game. It doesn't understand anything about strategy."
I think the only big question now is whether the following can happen. DeepMind releases a very strong AI to the public, so everyone can play against it. The whole community plays vs it and tries to find a weakness. Will the community as a whole figure out an exploitable weakness, so that at some point all good players can just use that exploit and win? Maybe. But the point is, you can just generate a new AI that has a different weakness. And humans will lose many games before they can find it. So what is the point? If you could somehow capture the play style of a top human player and freeze it, it would be much easier to train a NN overfitted to beat that specific player.
At this point we know that an AI doesn't make human mistakes, is relentless, has all the attention to do everything it needs to do, is not obviously exploitable, has decent macro, and will outmicro you. If the only way to beat an AI is to cheese it and hope that build wasn't used by competing agents, then how are humans still superior?
|
Eudorus, you make a lot of great points, but one thing I don't agree with is that giving the computer limitations (i.e. APM) is silly. It just depends on the question you are trying to answer. Do we want to know if computers can have higher APM than humans? No, we already know humans could never keep up with a computer in terms of APM. That would be like trying to find out if humans can beat a calculator at solving math problems quickly; we already know the answer, we don't need an experiment. Here we are trying to answer: can an AI be as smart as, or smarter than, a human at a game of StarCraft? Can it beat a human with strategy rather than brute APM?
I think the unexpected answer we are getting some early insight into is that maybe StarCraft 2 isn't as much of a strategy game as we thought it was. To your point, maybe it is mostly about executing the highest-chance-of-success strategy with insanely good execution (macro and micro). That is what the most successful SC2 play looks like. But I don't think the question is fully answered yet.
I agree that in real-life applications like healthcare we would never want to impose limitations, because there we're not really trying to answer a research question; the most important thing is saving lives. This kind of research will actually help lead to more real-world applications like healthcare tools (the team mentioned weather forecasting, which I thought was a very bland example).
|
But Starcraft is a game. You either win or lose. There is no such thing as 'being smart'.
I do agree with the reason why they put in that limitation. It is more impressive to see an AI play carefully and strategically than an AI that just sits there, then suddenly attacks with some silly unit composition, completely outmicros the human and somehow wins. It is also interesting to see if the AI develops nuanced patterns and behaviors in this realm. But it is probably not the easiest path toward a NN that wins.
But if you want an AI to be good at StarCraft, there is no reason to put in a limitation. Unless it costs you too much processing power or something and you want to solve the same problem with fewer resources.
When two humans play, it can really pay off to figure out what your opponent's plan is. Humans usually have a clear and concrete plan. They don't juggle three candidate plans in their mind and let small details sway which plan they commit to. Humans have tells. This can explain why humans play differently. If you can tell your opponent thinks you are ahead, it means something. Same if you can tell your opponent probably wants a longer game vs you. But those AI agents playing vs each other don't have concrete plans, or tells, or try to figure them out and hard-counter them, like humans would.
|
What is fair can be relevant depending on what people are looking for from the benchmark. If it is just about playing Starcraft, then I'd say being fair doesn't really matter and computers simply have advantages in certain areas over humans. However, in my view it isn't just about playing Starcraft but finding methods that can learn how to act in various kinds of complex domains. In that sense the question isn't just whether the machine can beat the best human, but rather can the methods learn to deal with all sorts of things that arise in the game. By handicapping its mechanical abilities human benchmarks can better be used to evaluate how well the methods have learned the other aspects.
|
On January 26 2019 05:40 Eudorus wrote: But Starcraft is a game. You either win or lose. There is no such thing as 'being smart'.
I do agree with the reason why they put in that limitation. It is more impressive to see an AI play carefully and strategically than an AI that just sits there, then suddenly attacks with some silly unit composition, completely outmicros the human and somehow wins. It is also interesting to see if the AI develops nuanced patterns and behaviors in this realm. But it is probably not the easiest path toward a NN that wins.
But if you want an AI to be good at StarCraft, there is no reason to put in a limitation. Unless it costs you too much processing power or something and you want to solve the same problem with fewer resources.
When two humans play, it can really pay off to figure out what your opponent's plan is. Humans usually have a clear and concrete plan. They don't juggle three candidate plans in their mind and let small details sway which plan they commit to. Humans have tells. This can explain why humans play differently. If you can tell your opponent thinks you are ahead, it means something. Same if you can tell your opponent probably wants a longer game vs you. But those AI agents playing vs each other don't have concrete plans, or tells, or try to figure them out and hard-counter them, like humans would.
I don't know who would doubt that with infinite APM an AI would win just by overwhelmingly superior mechanics, and you definitely don't need a neural network to do that. What AlphaStar is expected to do in the future, if DeepMind keeps working on it, is display smart decisions; that's what ordinary AIs can't do by themselves.
|
For people who doubt AI can beat strong players in RTS: just read what people posted here a year ago, for example in the Boxer on AlphaGo thread. Secondly, what kind of AI can beat a top player without an APM cap and also without a neural network?
So you say you think you need an APM cap to get an AI with 'smart decisions' rather than one that overwhelms the other player with superior mechanics? What does that mean? I know what you are trying to say, but think about how you would define 'being smart' in the context of StarCraft. How is it 'smart' to play more human-like, with a finely tuned unit composition, calm play and careful macro, when the nature of the game is such that you should just mass stalkers, control them individually every time the game state updates, micro them around continuously, and go in for the kill when your opponent makes a mistake? The best definition of 'being smart' is whatever leads to a higher winrate.
By the way, there are both simplifying aspects to having an APM cap and real-life applications for not having one. It will be hard to train the same network to control all 30 of your stalkers every game state while also deciding when to switch tech or expand. In that case you need a small, fast micro network and a larger, slower network. There are real-world problems that require such NNs, as well as real-world problems that require high speed. Furthermore, who knows what strategy the AI would come up with to properly micro infinite-APM blink stalker vs blink stalker battles vs itself. That would need many more nodes than just moving the hurt stalker to the back while attacking the lowest-HP enemy stalker currently in range.
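A rough sketch of that two-network split (entirely my own framing, not how AlphaStar is actually built; all names and sizes hypothetical): a small, fast net decides per-tick micro, while a larger, slower net revisits macro decisions only every k ticks.

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoSpeedController:
    """Hypothetical split controller: a fast micro net runs every tick,
    a slower macro net is consulted only every macro_every ticks."""

    def __init__(self, state_dim=16, micro_actions=4, macro_actions=3,
                 macro_every=8):
        self.Wf = rng.normal(0, 0.1, (state_dim, micro_actions))  # small, fast
        self.Ws = rng.normal(0, 0.1, (state_dim, macro_actions))  # large, slow
        self.macro_every = macro_every
        self.current_macro = 0

    def act(self, state, tick):
        if tick % self.macro_every == 0:            # e.g. expand / tech switch
            self.current_macro = int(np.argmax(state @ self.Ws))
        micro = int(np.argmax(state @ self.Wf))     # e.g. per-stalker move
        return self.current_macro, micro

ctrl = TwoSpeedController()
for tick in range(16):
    macro, micro = ctrl.act(rng.normal(size=16), tick)
```

The design trade-off is the one the paragraph describes: the expensive network only pays its cost on the slow timescale, while micro stays cheap enough to run every game-state update.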
No, the APM cap was chosen to make the AI more 'human-like' and relatable. It is basically PR, as well as a test of whether you can train an AI to do tasks in a human-like manner. There are many real-world problems that you can solve effectively in eerie, AI-like ways that would be unacceptable for social reasons, or in a human-like manner. If you can mimic a human while outperforming them, then that is often better than doing the same thing slightly better but being completely alien.
|
On January 26 2019 06:38 Eudorus wrote: For people who doubt AI can beat strong players in RTS: just read what people posted here a year ago, for example in the Boxer on AlphaGo thread. Secondly, what kind of AI can beat a top player without an APM cap and also without a neural network?
So you say you think you need an APM cap to get an AI with 'smart decisions' rather than one that overwhelms the other player with superior mechanics? What does that mean? I know what you are trying to say, but think about how you would define 'being smart' in the context of StarCraft. How is it 'smart' to play more human-like, with a finely tuned unit composition, calm play and careful macro, when the nature of the game is such that you should just mass stalkers, control them individually every time the game state updates, micro them around continuously, and go in for the kill when your opponent makes a mistake? The best definition of 'being smart' is whatever leads to a higher winrate.
By the way, there are both simplifying aspects to having an APM cap and real-life applications for not having one. It will be hard to train the same network to control all 30 of your stalkers every game state while also deciding when to switch tech or expand. In that case you need a small, fast micro network and a larger, slower network. There are real-world problems that require such NNs, as well as real-world problems that require high speed. Furthermore, who knows what strategy the AI would come up with to properly micro infinite-APM blink stalker vs blink stalker battles vs itself. That would need many more nodes than just moving the hurt stalker to the back while attacking the lowest-HP enemy stalker currently in range.
No, the APM cap was chosen to make the AI more 'human-like' and relatable. It is basically PR, as well as a test of whether you can train an AI to do tasks in a human-like manner. There are many real-world problems that you can solve effectively in eerie, AI-like ways that would be unacceptable for social reasons, or in a human-like manner. If you can mimic a human while outperforming them, then that is often better than doing the same thing slightly better but being completely alien.
I believe they picked StarCraft as a game to test the AI on because it is a game where players have to decide how to divide attention and make decisions with imperfect information. If AlphaStar is winning games with brute force, because it has superhuman micro and the ability to focus across multiple screens at the same time, then it isn't achieving that goal. The final game, where it could only focus on one screen and failed to build a phoenix to counter the harass, didn't counter MaNa's army composition and made bad army movement decisions, makes it look like it still has a long way to go on the decision-making side of things. I don't think the purpose of AlphaStar is to have a bot that just bashes players, but instead to have a bot that beats players at the imperfect-information game; the bot may never have to learn to do that without some limitations put on it.
|
StarCraft has this thing called 'balance', where unit strength is calibrated around human abilities to provide strategically rich gameplay. If you break the balance of the game then you can't test the abilities of your agent, as it will just find some silly unbeatable micro strat. APM caps and such are not just PR...
|
I wonder if AlphaStar would play differently if it didn't learn anything from replays, but had to figure everything out by itself. It would be more interesting strategy-wise to see what it does when it doesn't 'mimic humans'.
|
First of all, it was very impressive. Better than any other bot we have seen so far.
Secondly, MaNa only barely lost the first game (if there had been a second sentry, the AI would have been as good as dead with its mindless blind rush up the ramp) and another one (when he was ahead with an army of immortals vs phoenix-stalker; in that case MaNa simply threw a game he had in the bag by that point, because he hadn't expected such a decision from the AI and didn't know its style. If he had just kept his stuff together and killed the third, he would have won). I wasn't impressed by the AI's micro; it was not perfect. I was more impressed that its early aggression was so 'committed', because rushing is exactly how you throw games. So even that part wasn't hopeless for humans.
Then, after so much time and money spent working on that program, it is still crushed by simple worker harassment, like a typical built-in AI, a trick I learned in, like, 5 games vs the Insane AI. Kind of stupid for such an advanced program... I am pretty sure I could beat that program, although I'm no pro. It's still very vulnerable.
Makes me think the battle for mankind hasn't been lost yet...
Finally, if some guy played one map in a mirror matchup his whole life and had good micro and mechanics, you would expect that, having figured out the map and all the most plausible scenarios, he would win a lot. That's pretty much what we saw in that demonstration. Now, that guy would be totally helpless if he had to play good players on many maps in a random order.
So there is still hope: only when we see an AI crush random pro gamers in a typical tournament setting, 100 out of 100 matches (or something similarly persuasive), will we have the right to say the game has been figured out by AI, as happened with Go and Chess. Nowadays no world champion in those games can even hope to take a game from the best AIs.
Actually, I'm a bit worried here: now that a human champion has lost to a program, there is a chance the same thing happens that happened to Go and Chess...
|
I'd love to watch a set of exhibition matches against actual top-tier players. There is such a skill gap between the best 5 pros and the lesser-skilled pros. It would be so nice to see some Terran matches and mixed-race games. Serral, Maru, TY, Classic, Stats, etc. I would watch those games on the edge of my seat.
|
By the way, the AI can see invisible units, that’s what they said in the reddit AMA.
|
On January 26 2019 08:17 Grumbels wrote: By the way, the AI can see invisible units, that’s what they said in the reddit AMA.
The AI would be handicapped vs humans if cloaked units were invisible to it but visible to humans.
|
On January 26 2019 07:53 CobaltBlu wrote: I believe that they picked StarCraft as a game to test the AI on because it is a game where players have to decide how to divide attention and make decisions with imperfect information.
Why would 'divided attention' be a fundamental AI problem? AIs can be programmed to have as much 'attention' as you have parallel threads available. I believe you are mistaken. Imperfect information? Definitely. Hard to capture in terms of moves? Definitely. Attention? No way!
If AlphaStar is winning games with brute force because it has superhuman micro and the ability to focus across multiple screens at the same time then it isn't achieving that goal.
What do you mean 'brute force'? If you write an AI to detect brain lesions or tumors on a CT or MRI, why wouldn't you allow the AI access to many, many previous images? If you want to write an AI to be an air traffic controller and direct planes to their runways, why wouldn't you want the AI to control multiple airplanes at the same time?
The final game where it could only focus on one screen and failed to build a phoenix to counter it, didn't counter Mana's army composition and made bad army movement decisions makes it look like it still has a long way to go on the decision making side of things. I don't think the purpose of the AlphaStar is to have a bot that just bashes players but instead to have a bot that beats players at the imperfect information game, but the bot may never have to learn to do that without some limitations put on it.
The AI had imperfect information in all games. Only in the last game did they use a new agent that had to use a camera window to access game info, rather than just reading the game state from the API. We don't know why the last agent lost to MaNa. Was it weaker because it had less training? Was it weaker because of this new restriction? Was MaNa playing better, with some more luck?
Yes, they thought they had strong bots, so they put a new limitation on them. They obviously bought into the 'fairness' argument, be it for marketing purposes or because having a restriction makes it more challenging and is relevant to some real-world applications. But to say that their core purpose was to have a neural network with limited attention perform better is false. And even more false is the notion that an AI without an APM cap is 'less smart' because it wins through micro rather than through other means.
On January 26 2019 08:00 shabby wrote: I wonder if AlphaStar would play differently if it didn't learn anything from replays, but had to figure everything out by itself. Would be more interesting strategywise to see what it does when it doesnt "mimic humans".
Considering the move space, a neural network with random weights will initially take purely random actions. It will be as if your cat is walking across your keyboard, or a monkey is clicking your mouse. At that point you depend on the AI accidentally building a pylon, building a gateway, and having a zealot move towards the enemy base. So you would have hundreds of thousands of games lasting a very long time with literally nothing happening. In other words, the phase space is tremendously huge, and only a very tiny segment of it contains agents that actually attempt to play the game. If you initialize a random neural net, it will be out there in a flat desert of completely random clicking. From there, moving in any direction in the phase space doesn't suddenly win you more games. All your agents would just randomly click while the clock counts down to a draw. So all your bots draw vs all the others, because the phase space is so huge that no neural network gets initialized with proper weights. That's why you first imitate human play. We already know what a game of StarCraft should look like, so there is no sense in exploring the vast flat desert of useless neural nets. Maybe you copy things from humans that are bad and never unlearn them. Hard to know. But it makes no sense to train neural nets when only 0.0000001% of them actually send their probes to mine minerals.
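A back-of-envelope sketch of that flat desert (all numbers invented for illustration): if a useful opening requires k specific actions in a row, each chosen from A possible actions, a uniformly random policy's chance of stumbling onto it collapses exponentially in k.

```python
def p_random_opening(n_actions, k_steps):
    """Chance a uniformly random policy emits one specific k-step sequence."""
    return (1.0 / n_actions) ** k_steps

# Even a cartoonishly small action space makes the desert flat: say
# 'build pylon -> build gateway -> train zealot -> attack-move', each
# picked from 100 possible actions per decision point.
for k in (1, 2, 4, 8):
    print(k, p_random_opening(100, k))
```

This is why imitation pretraining on human replays matters: it drops the agent directly into the tiny region of weight space where gradients actually point toward winning more games.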
|
I'd like to see how well AlphaStar does in BW; its main edge in the matches came from being aggressive and outmicroing its opponents with blink micro (while obviously having perfect macro/base management). BW has a much greater defender's advantage, so it wouldn't be able to capitalize on the same aspects it did in SC2 (which to me were mainly micro and positioning).
BW games are slower-paced and there's way less poking; AlphaStar wouldn't be able to exploit micro maneuvers, which obviously benefits human players.
Edit: Well, AlphaStar could exploit microing multiple muta groups perfectly, but that would only be viable in ZvT and ZvZ. Even then, a progamer Terran could adapt and just open 1-1-1 into valks or fast vessels. AlphaStar would probably be godly in ZvZ though.
|
If AlphaStar can recognize a very simple winning strategy, that doesn’t prove it can play in a more sophisticated manner.
|
People are so offended by this showmatch, it's making me think you believe that whatever happened in it has meaningful value beyond what the team at DeepMind learned that can be applied to other fields.
- Do you honestly believe that any human can control 3 groups of stalkers on 3 different screens, keeping them on the razor's edge of dying vs immortals while keeping a near-perfect surround on 3 different sides?
- Do you think it was fair to throw a human who has never played vs an AI with that level of micro and expect him not to constantly misjudge the balance of power at any given screen, ending up taking fights he can't get any advantage out of?
Wondered why PvP was chosen as the AI's focus?
- Easily distinguishable units for those uninitiated to StarCraft, and only one set of them since it's a mirror.
- The unit with the most micro potential in the game that doesn't look hilariously unfair (see individual zergling/marine/bane micro).
- Not a whole lot of emphasis on building placement for defending, which I could see the AI having major issues with.
- A matchup known for its short games and high aggression.
- Stalkers allow the computer to never really commit to anything it doesn't like, while still being able to kill the human player at any point and keeping good air defense without having to do much scouting.
It's just a demonstration to improve their company's reputation and stock, attracting talent while doing some R&D, and there is nothing wrong with that. I'm gonna keep watching people play the game, and if every now and again they make a showmatch vs an AI, cool; if they make AIs fight each other and we get to learn something that can or cannot be utilized by humans... cool. If it's still hurting your ego, keep this in mind: StarCraft was designed by people, balanced for people, and is played by people with their inherent limitations. The moment you change that, no one gets to claim that StarCraft has been solved.
|
On January 26 2019 08:50 Doko wrote: People are so offended by this showmatch, it's making me think you believe that whatever happened in it has meaningful value beyond what the team at DeepMind learned that can be applied to other fields.
- Do you honestly believe that any human can control 3 groups of stalkers on 3 different screens, keeping them on the razor's edge of dying vs immortals while keeping a near-perfect surround on 3 different sides?
No, that is why we have AI.
- Do you think it was fair to throw a human who has never played vs an AI with that level of micro and expect him not to constantly misjudge the balance of power at any given screen, ending up taking fights he can't get any advantage out of?
Neither had the AI ever played vs a human before. Well, it was trained purely vs other AIs; it didn't know it was playing something else. So what exactly are you trying to say? I also don't understand what this has to do with 'fair'. The only argument you can make is that the humans weren't playing as well as they normally would. And if a human misjudges a fight that the AI judges correctly, what does that have to do with the human being new to AI? You either misjudge a situation or you don't. You mean that the human didn't expect to be outmicroed, because subconsciously they had never experienced that before, so they were trained to play differently. Correct. But that is what happens when you play vs a superior player, AI or no AI.
Wondered why PvP was chosen as the AI's focus?
- Easily distinguishable units for those uninitiated to StarCraft, and only one set of them since it's a mirror.
- The unit with the most micro potential in the game that doesn't look hilariously unfair (see individual zergling/marine/bane micro).
- Not a whole lot of emphasis on building placement for defending, which I could see the AI having major issues with.
- A matchup known for its short games and high aggression.
- Stalkers allow the computer to never really commit to anything it doesn't like, while still being able to kill the human player at any point and keeping good air defense without having to do much scouting.
Obviously, they thought PvP would be easiest. You start testing your methods on the easiest matchup, then you go to the more difficult ones. If you don't do that, you are doing it wrong. Protoss has always been easiest to play, in SC:BW and apparently also in SC2.
If you can go from Go to Starcraft, then going from PvP to ZvT is trivial.
It's just a demonstration to improve their company's reputation and stock, attracting talent while doing some R&D, and there is nothing wrong with that. I'm gonna keep watching people play the game, and if every now and again they make a showmatch vs an AI, cool; if they make AIs fight each other and we get to learn something that can or cannot be utilized by humans... cool.
Of course. But let's not forget that DeepMind also did quite well in the protein folding competition just a while ago.
If it's still hurting your ego, keep this in mind: StarCraft was designed by people, balanced for people, and is played by people with their inherent limitations. The moment you change that, no one gets to claim that StarCraft has been solved.
I don't know what this means.
|
On January 26 2019 05:40 Eudorus wrote: But Starcraft is a game. You either win or lose. There is no such thing as 'being smart'.
First of all, there is, of course, such a thing as 'being smart'. IQ tests, with all their limits, measure that thing quite well. And that thing has a lot of predictive power: we can actually formulate testable hypotheses based on IQ level and then see how things work out for a particular individual. It's a proper experiment, and psychometrics and statistics have accumulated a lot of data on that.
But you are absolutely correct: you either win or lose. No chess grandmaster or world chess champion will ever beat Fritz or Stockfish. No Go champion will ever beat DeepMind's program.
But that's not the case with StarCraft: the game is still winnable vs AI. That's because StarCraft is a complex, cognitively demanding game, and AIs are still not capable of solving cognitive tasks at that level. An AI that can be beaten by a harassment trick is worth nothing: it's stupid, it doesn't win games.
When they make an AI that beats any pro in any matchup in 100 out of 100 games, as is the case in Chess now and almost the case in Go, we will have to admit that AI has surpassed humans in that cognitively advanced domain as well. But that's not the case yet...
|
On January 26 2019 08:43 TT1 wrote: I'd like to see how well AlphaStar does in BW; its main edge in the matches came from being aggressive and outmicroing its opponents with blink micro (while obviously having perfect macro/base management). BW has a much greater defender's advantage, so it wouldn't be able to capitalize on the same aspects it did in SC2 (which to me were mainly micro and positioning).
BW games are slower-paced and there's way less poking; AlphaStar wouldn't be able to exploit micro maneuvers, which obviously benefits human players.
Edit: Well, AlphaStar could exploit microing multiple muta groups perfectly, but that would only be viable in ZvT and ZvZ. Even then, a progamer Terran could adapt and just open 1-1-1 into valks or fast vessels. AlphaStar would probably be godly in ZvZ though.
AlphaStar in TvT could be extremely interesting imo.
|