|
On November 22 2012 01:29 creamyturtle wrote: Have any of you guys studied Deep Blue? This computer was actually beaten by many, many grandmasters. The problem with a computer is that it can be deceived. The grandmasters would rotate strategies, or mix strategies at random, in order to trick it. This is because a player can purposely play a move that has a lower payoff, whereas the computer will always choose the highest-payoff choice.
Who says the computer always has to play the best strategy? If it seems advisable, the computer can be programmed to play a mixed strategy (with a probability distribution over the possible alternatives that determines how likely each one is).
On November 22 2012 01:29 creamyturtle wrote: So even if you built a computer that could theoretically play perfectly, it still wouldn't be perfect. Computers can't take big crazy risks, they can't think on their feet, and they can't be conditioned in the same way a human can. Plus, StarCraft is a game of imperfect information, unlike chess.
Wait, so a "theoretically perfect" strategy is bested by an alternative strategy? That's a contradiction in terms, because then the strategy wasn't perfect to begin with, unless you are talking about a game of chance. In deterministic games such as SC2, chess, etc., that cannot happen.
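The mixed strategy described above is simple to express in code. A minimal sketch; the build names and the weights are made-up illustrations, not tuned values:

```python
import random

# Hypothetical builds and a probability distribution over them.
# The weights are invented for illustration.
builds = ["4gate", "3gate_expand", "dt_rush"]
weights = [0.5, 0.3, 0.2]

def pick_build():
    """Sample one build according to the mixed strategy."""
    return random.choices(builds, weights=weights, k=1)[0]

# Over many games the empirical frequencies approach the weights,
# so no single game's opening can be predicted by the opponent.
counts = {b: 0 for b in builds}
for _ in range(10_000):
    counts[pick_build()] += 1
print(counts)
```

The point of randomizing at the level of a probability distribution, rather than rotating deterministically, is that even an opponent who knows the distribution cannot exploit any individual game.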
|
On November 22 2012 02:31 Legatus wrote: [quoted above]
Yes, you are right that a computer can employ mixed strategies, but there are more complex ways to vary your strategy. Imagine purposely sacrificing a Rook in chess in order to get the opponent's Queen. A computer may take the Rook, because on average this would produce the highest payoff.
That's what I vaguely remember from my game theory class. Check out this page I just found: http://en.wikipedia.org/wiki/Anti-computer_tactics_(gaming)
|
On November 22 2012 04:51 creamyturtle wrote: [quoted above]
Your example is quite poor, because I can hardly think of a forced trade that takes more turns than the search depth of the program analyzing it.
I also find the notion of perfect play misguided. I am not sure that, in a game where you have to choose your opening "blind", an unbeatable opening/strategy exists.
But I'm quite sure that with enough manpower one could program an AI which can't be beaten by any human.
|
Unlike chess, SC2 is a game of imperfect information. From a strategy point of view (removing perfect micro from the equation, which would obviously give an AI an insane edge), without perfect information a lot of decisions are based on gut feeling, previous experience, and speculation about the opponent, so no computer AI would be able to win 100% of the time. Even with machine-learning approaches fed millions of replays, the AI could still make the wrong decision. But I do believe that a very effective AI could be developed to win 95% of games against the top 10 players in the world. It would be able to store information about minuscule things like unit movement, noticing details like hesitation or reaction time to gauge a player's mind state, and things like that would be very powerful in the decision-making process.
|
Is there anyone here who studies AI or something like that? I know there are ways to mimic the human brain by feeding a neural network ( http://en.wikipedia.org/wiki/Neural_network ) lots of data, just like humans who deal with situations with lots of variables: they just try something a lot (trial and error) until stuff works.
With SC2 this is very easy to mimic: you just feed the computer a LOT of games (replays) of pro gamers, until the computer can mimic the games of the pro gamers without knowing why stuff happens (or needs to be done). After that it can train itself against pro gamers, to increase skills like perfect micro.
For this to work (in the future), you don't need a lot of additional programming (again, you don't need the computer to know why stuff needs to be done, so this does not need to be programmed).
A cool example I could quickly find is this TED talk about neural networks and making realistic animations. They could, for instance, teach a "stick figure" how to walk (with unlimited variables influencing the figure) with the use of "artificial evolution" (another trick they could use to make an insanely good SC2 computer): http://www.ted.com/talks/torsten_reil_studies_biology_to_make_animation.html
An example of how to use artificial evolution when creating an SC2 bot:
Imagine if two computers could battle each other, and both computers could "learn". Both computers' skills are randomly generated to start. You would just let countless computers fight against each other, keep the victor's algorithm, and discard the loser's algorithm.
After a few years of computation, the victor of those billions of SC2 battles could probably beat any human.
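The keep-the-victor loop described above is a plain genetic algorithm. A toy sketch, where a "strategy" is just a made-up parameter vector and `fight` is a stand-in for actually simulating a match between two bots:

```python
import random

def random_strategy():
    # A strategy here is just three parameters in [0, 1]
    # (e.g. aggression, expansion timing, army ratio) -- illustrative only.
    return [random.uniform(0, 1) for _ in range(3)]

def fight(a, b):
    # Stand-in for a simulated match: the higher parameter sum "wins".
    # A real implementation would play an actual game between the bots.
    return a if sum(a) >= sum(b) else b

def mutate(s, rate=0.1):
    # Small random perturbation, clamped back into [0, 1].
    return [min(1.0, max(0.0, x + random.gauss(0, rate))) for x in s]

population = [random_strategy() for _ in range(32)]
for generation in range(100):
    random.shuffle(population)
    # Pair up, keep each pair's winner, refill with mutated winners.
    winners = [fight(a, b) for a, b in zip(population[::2], population[1::2])]
    population = winners + [mutate(random.choice(winners)) for _ in winners]

best = max(population, key=sum)
print(best)
```

Selection keeps the strongest strategies alive while mutation keeps exploring around them, which is exactly the "keep the victor's algorithm, discard the loser's" process, just with mutation added so the population improves instead of merely filtering.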
|
On November 22 2012 05:16 Timmsh wrote: [quoted above]
This would not make a computer AI play perfectly and win 99% of its games. There are also lots of variables to consider, so it's not trivial.
|
On November 22 2012 05:19 Xyik wrote: [quoted above]
I'm also not sure if this is the best approach. I would simply choose a tactic for the computer which plays into its strengths. I already mentioned muta/ling/bling in ZvT before. The program has advantages in:
- Larva injects
- creep spread (hello Scarlett)
- multitasking
- keeping track of mined gas/minerals
- evading AoE effects like thor/siegetank
- map awareness
- choosing engagements
- micro
I'm sure you could bring the AI to out-expand T if he plays overly defensively, and to abuse its micro if attacked (creep spread!). If maxed, start transitioning into broodlord/infestor.
I'm sure the actual AI would have to consider a lot more, but it would be a start, and once 10 mutas are out I hardly see the AI losing them. The opening defense would be a bit trickier, but so are the options for Z.
|
On November 22 2012 05:19 Xyik wrote: [quoted above]
Uuhm, I just stated that the number of variables is irrelevant if enough data is given to the computer. For the computer, a new variable showing itself is just a variation it has seen before and solved before. It does not know why it solves it, or why it works, but it worked millions of times before so...
edit: I added to my original post (3 posts above) a part about artificial evolution. It's awesome :-)
|
Speaking as a researcher in the area of robotics and intelligent control of unmanned vehicles, this is not a simple problem.
Things to consider:
- There may not be a perfect way to play StarCraft.
- I don't know any human who could give an answer on how to play a perfect game of StarCraft.
- Computers are programmed based on human direction.
All of the tasks you suggested are very dependent on your strategy and the opponent's strategy. For example, you ask: can you perfect micro? Well, perfect micro depends on the situation: unit composition, opponent micro, terrain, and unit position. There are micro scenarios in StarCraft that have never occurred, so there is no human experience to program the AI with.
The fact that it is pretty much impossible to sit down and program an AI with all possible scenarios (since we as humans don't even know all the possibilities) makes this problem very difficult! It is the human ability to innovate and come up with creative new solutions to unknown situations that makes it very difficult for an AI to compete if it is only programmed based on previous human experience and knowledge.
That being said, I do believe that a very competitive AI could be developed given enough time and resources. The trick would be, instead of trying to program the AI with our own knowledge and experience of the game, to let the AI learn through its own experience. This can be done using neural networks that learn from the AI's in-game experience of new situations.
For example, say the AI has a fight of 10 marines vs 5 lings and 5 banelings. The first time it encounters this, the AI attack-moves into the zerg army and loses. It uses the feedback of this loss to record that this micro was wrong. Next time it tries something different, and records the results. With enough trials the AI would probably learn to perfectly stutter-step and split the marines for an engagement like this. The AI can now do an excellent job in this specific scenario. Given enough games (and I mean probably millions of games) the AI would become incredibly good and truly a player to be feared.
To solve the problem of needing millions of games, the computer could begin by playing simulated games (without graphics generation) against another AI. These two AIs could play at accelerated speeds far quicker than we could handle. This would generate two unique AIs with game experience, and it would be interesting to see the differences that developed between the two play styles! Once the AIs had been trained sufficiently, they could play a large number of games against human players, and given enough time and games they could get much better!
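The try-something, record-the-result loop described here is essentially reinforcement learning. A toy epsilon-greedy sketch, where the engagement is reduced to choosing one of three micro options whose win probabilities are invented for illustration and hidden from the agent:

```python
import random

# Toy stand-in for the 10-marine engagement: three micro options with
# hidden win probabilities the agent must discover by trial and error.
actions = {"a_move": 0.2, "stutter_step": 0.5, "split_and_stutter": 0.8}

value = {a: 0.0 for a in actions}   # estimated win rate per action
tries = {a: 0 for a in actions}

def play(action):
    """Simulate one engagement; 1 on a win, 0 on a loss."""
    return 1 if random.random() < actions[action] else 0

for episode in range(5000):
    if random.random() < 0.1:                    # explore a random action
        a = random.choice(list(actions))
    else:                                        # exploit the best estimate
        a = max(value, key=value.get)
    result = play(a)
    tries[a] += 1
    value[a] += (result - value[a]) / tries[a]   # incremental average

print(max(value, key=value.get))
```

Nothing tells the agent why splitting works; it simply converges on whatever produced wins most often, which is the same "learn from feedback without understanding" idea as the marine/baneling example, just stripped to its skeleton.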
TL;DR - It would be difficult to make a PERFECT AI; however, an AI with the potential to learn from experience and adapt could be developed to be very competitive with, if not far better than, human players.
|
On November 22 2012 05:43 Wilsonator wrote: [quoted above]
Dude, this is exactly what I mean (4 posts above).
|
Haha Calvinball. That takes me back.
Guess there could be some kind of early marine or stalker rush that might be able to win most matchups with invincible micro / shield management, so you would have to program less for "every possibility".
|
Sorry, browser broke. Didn't mean to double-post.
|
On November 22 2012 05:33 Timmsh wrote: [quoted above]
Partly true. The difficulty doesn't lie in a lack of data to feed a NN; the fact that there are so many inputs to consider and weight is what would make the construction of such a network so difficult. For example, how many hidden layers would be optimal in a case like this? I wouldn't even begin to know. How would the NN interact with the software, and what kinds of outputs would it produce? Is it taking all the information and simply outputting a single instruction at a time? That would be extremely flawed. Or is it outputting a long-term plan, and if so, how does it weigh the advantages of changing that plan? (It's not as simple as continuously picking the new optimal plan; think of things like tech investments.)
On top of this, how will the reactive system interface with the reflective one? Which tasks count as reactive, and how should they be handled? Even if you don't have to "create" a knowledge base for the reflective system (because it's a learning NN), you still have to create a very large knowledge base for the reactive system, which is not a trivial task.
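The structural questions above (how many inputs, how many hidden layers, what the outputs mean) are visible even in a toy fully-connected network. A pure-Python forward pass; the layer sizes and output meanings are made-up placeholders, and choosing them well is exactly the open design problem:

```python
import math
import random

random.seed(0)

def layer(n_in, n_out):
    """Random weight matrix plus zero bias for one fully-connected layer."""
    return ([[random.gauss(0, 0.5) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

def forward(x, layers):
    # Each layer: tanh(W x + b), fed into the next layer.
    for weights, bias in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, bias)]
    return x

# Invented sizes: 8 scouted-state inputs, two hidden layers of 16,
# 3 outputs (say, scores for "attack", "expand", "tech").
net = [layer(8, 16), layer(16, 16), layer(16, 3)]
state = [random.random() for _ in range(8)]
scores = forward(state, net)
print(scores)
```

Untrained, the outputs are meaningless; the hard part the post is pointing at is deciding what the 8 inputs and 3 outputs should actually encode, and how deep and wide the middle needs to be, before any training can even begin.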
I believe all of it would probably be possible, given enough time and resources. However, who would bother? Doing so much work for no pay, just for the sake of curiosity? A few people with the prerequisite knowledge, maybe, but the odds of all of them being able to afford to commit years of their lives to a theoretical research goal with no pay? Seems unlikely.
edit: My source: I just completed a unit on game AI at university, which gives me at least some insight into the complexity of the situation.
|
I don't think people understand what "perfect micro" would entail. It would break the game. Perfect macro, maybe. Perfect BOs, yes. Perfect judgement, yes. But PERFECT micro would be way too much for any player to handle, even coming from a player with weaker macro.
|
On November 22 2012 04:51 creamyturtle wrote: [quoted above]
The times of grandmasters beating chess engines are long gone.
|
On November 22 2012 05:33 Timmsh wrote: [quoted above]
I study Computer Science and have dabbled in Machine Learning / Neural Networks, so I understand what you are proposing. My point, however, was that although you can make a very effective AI that learns like this (one that could compete with or simulate big names like DRG / MVP), it would still not be perfect, or even come close to winning 90% of the time, because there is no perfect way to play the game based on experience alone. It's all probability: just because engaging in a 10-marine-vs-5-zergling battle is effective in 99% of the trials does not mean it is always the best overall decision, even considering the other hundreds of thousands of variables at that point in time. I suspect you would need perfect information (like chess), and even then it's not necessarily feasible. Machine learning is all based on statistics, and with statistics there is no 100% certainty in anything.
|
There are a lot of misconceptions being thrown around. A perfect strategy is *not* a strategy that wins 100% of the time; that is impossible in a game of imperfect information. A perfect strategy is a set of strategies (played according to some fixed probability distribution) such that no player who is trying to win would ever deviate from it.
In rock, paper, scissors, the perfect strategy is to randomly throw each option 1/3 of the time. That does *not* guarantee that you will win 100% of the time, but you cannot do better against a player who uses this strategy against you. Therefore, if you were serious about winning RPS, you would only play this strategy. It is the unique Nash equilibrium of RPS.
There may be multiple Nash equilibria in SC, some of which may not be Pareto-optimal (think of two players doing cheesy builds). However, if an AI is playing a Nash strategy, it is exhibiting "perfect" play, as there is no way to improve on it.
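The RPS claim is easy to check numerically: against the uniform mix, every pure deviation has expected payoff exactly zero, so no deviation helps. A short sketch:

```python
moves = ["rock", "paper", "scissors"]
beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    """+1 win, -1 loss, 0 tie, from player a's perspective."""
    if a == b:
        return 0
    return 1 if beats[a] == b else -1

uniform = {m: 1 / 3 for m in moves}

# Expected payoff of each pure strategy against the uniform mix.
# Every deviation earns exactly 0, so nothing improves on playing
# the mix yourself -- the defining property of the Nash strategy.
expected = {a: sum(uniform[b] * payoff(a, b) for b in moves) for a in moves}
print(expected)
```

Note this also illustrates the point above that "perfect" does not mean "always wins": the Nash strategy's own expected payoff is 0, not +1; it is unexploitable, not unbeatable.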
|
In theory, a computer can always outperform a human. You can program one to follow a decision theory perfectly. It can make "optimal" choices based on imperfect information without bias, and it can analyze enormous stores of past results.
|
On November 22 2012 05:43 Wilsonator wrote: [quoted above]
AIXI would be able to play StarCraft optimally, which answers the question of whether it's theoretically possible. In practice, it's very difficult.
|