On November 22 2012 05:16 Timmsh wrote: Is there anyone who studies AI sciences or something like that? I know there are ways to mimic the human brain, feeding a neural network http://en.wikipedia.org/wiki/Neural_network with lots of data (just like humans who deal with situations with lots of variables: they just try something a lot (trial and error) till stuff works).
With SC2 this is very easy to mimic: you just feed the computer a LOT of games (replays) of pro gamers, till the computer can mimic the games of the pro gamers without knowing why stuff happens (or needs to be done). After that it can train against pro gamers to increase skills like perfect micro.
For this to work (in the future), you don't need a lot of additional programming (again, the computer doesn't need to know why stuff needs to be done, so that doesn't need to be programmed).
This would not make a computer A.I. play perfectly and win 99% of its games. There are also lots of variables to consider, so it's not trivial.
Uuhm, I just stated that the amount of variables is irrelevant if enough data is given to the computer. To the computer, a new variable showing itself is just a variation it has seen and solved before. It does not know why the solution works, but it has worked millions of times before, so...
edit: I added a part about artificial evolution to my original post (3 posts above). It's awesome :-)
I study Computer Science and have dabbled in Machine Learning / Neural Networks, so I understand what you are proposing. My point, however, was that although you can make a very effective A.I. that learns like this (one that could compete with or simulate big names like DRG / MVP), it would still not be perfect, or even come close to winning 90% of the time, because there is no perfect way to play the game based on experience alone. It's all probability: just because engaging in a 10-marine vs. 5-zergling battle is effective in 99% of trials does not mean it is always the best overall decision, even after considering the other hundreds of thousands of variables at that point in time. I suspect you would need perfect information (as in chess), and even then it's not necessarily feasible. Machine learning is based on statistics, and with statistics there is no 100% certainty in anything.
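The 99%-of-trials point can be made concrete with a toy expected-value calculation. All numbers here are invented for illustration: the action with the highest win probability is not necessarily the action with the highest expected payoff once the cost of the rare loss is weighed in.

```python
# Hypothetical payoffs: picking the action that wins most often is not
# the same as picking the action with the best expected value.
def expected_value(p_win, gain, loss):
    """Expected payoff of an action that wins with probability p_win."""
    return p_win * gain + (1 - p_win) * loss

# Engaging wins 99% of trials, but the 1% loss costs the whole army.
engage = expected_value(0.99, gain=10, loss=-2000)
# Retreating "wins" (preserves position) only 60% of the time,
# but the downside is mild.
retreat = expected_value(0.60, gain=5, loss=-5)

print(engage)   # about -10.1: the 99% line is a losing proposition
print(retreat)  # about 1.0
```

So under these (made-up) numbers the 99%-win-rate engagement is the worse decision, which is exactly the objection being raised.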
You study computer science and don't understand that optimal play doesn't necessarily produce a 100% success rate?
You inhabit this planet and don't understand that people don't understand???
Partly true. The difficulty doesn't lie in a lack of data to feed a NN; it's the sheer number of inputs to be considered and weighted that would make constructing such a network so difficult. For example, how many hidden layers would be optimal in a case like this? I wouldn't even begin to know. How would the NN interact with the software, and what kind of outputs would it produce? Is it taking all the information and simply outputting a single instruction at a time? That would be extremely flawed. Is it outputting a long-term plan, and if so, how does it weigh the advantages of changing that plan? (It's not as simple as continuously picking the new optimal plan; consider things like tech investments.)
On top of that, how would the reactive system interface with the reflective one? Which tasks count as reactive, and how should they be handled? Even if you don't have to 'create' a knowledge base for the reflective system (because it's a learning NN), you still have to create a very large knowledge base for the reactive system, which is not a trivial task.
I believe all of it would probably be possible, given enough time and resources. But who would bother? Doing that much work for no pay, just for the sake of curiosity? A few people with the prerequisite knowledge, maybe, but the odds that all of them could afford to commit years of their lives to a theoretical research goal with no pay? Seems unlikely.
edit: My source: I just completed a unit on game AI at university, which gives me at least some insight into the complexity of the situation.
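The architectural questions above (inputs, hidden layers, outputs) can be made concrete with a toy feedforward sketch. Everything here is invented for illustration: a handful of scalar features in, one hidden layer, and a probability per candidate action out. Choosing the feature set and the layer sizes well is precisely the hard part being described.

```python
import math
import random

random.seed(0)  # deterministic toy weights

# Invented features: [minerals, gas, supply_used, enemy_army_seen, time]
# Invented actions: expand, attack, tech
N_IN, N_HIDDEN, N_OUT = 5, 8, 3

def make_layer(n_in, n_out):
    """Random weight matrix as a list of rows, one row per output unit."""
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

W1, W2 = make_layer(N_IN, N_HIDDEN), make_layer(N_HIDDEN, N_OUT)

def forward(x):
    """One hidden tanh layer, then a softmax over action logits."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in W2]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    return [e / sum(exps) for e in exps]

probs = forward([0.5, 0.2, 0.7, 0.3, 0.4])
print(probs)  # three action probabilities summing to 1
```

Even this tiny sketch surfaces the questions above: is one action per forward pass enough, and who decides when to re-plan?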
You have some interesting points, I need to reconsider :-) (100th post, yay!)
Optimal play in tic-tac-toe doesn't produce 100% success rate.
If I remember correctly, optimal play in tic-tac-toe always results in a draw.
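That claim is small enough to check directly: a minimal minimax sketch over the full tic-tac-toe game tree shows the value of the empty board under optimal play from both sides is 0, i.e. a draw.

```python
# Minimal minimax for tic-tac-toe. X maximizes, O minimizes; the value
# of the empty board is the game's outcome under optimal play.
def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    w = winner(b)
    if w == 'X':
        return 1        # X win
    if w == 'O':
        return -1       # O win
    if ' ' not in b:
        return 0        # draw
    nxt = 'O' if player == 'X' else 'X'
    scores = []
    for i in range(9):
        if b[i] == ' ':
            b[i] = player
            scores.append(minimax(b, nxt))
            b[i] = ' '
    return max(scores) if player == 'X' else min(scores)

value = minimax([' '] * 9, 'X')
print(value)  # 0: optimal play from both sides is a draw
```

So "optimal play" in tic-tac-toe guarantees you never lose, not that you always win, which is the whole point of the exchange above.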
On November 11 2012 12:53 AbideWithMe wrote: Perfect mechanics and micro with predefined build orders, yes, of course.
Perfect strategies and scouting with dynamic BOs, not so much.
With certain perfectly executed timing pushes, an A.I. could beat every human for sure, though.
On November 23 2012 03:08 Hider wrote: Sure it can. Scouting would be relatively easy, lol.
The most difficult part would be optimizing positioning. There are so many variables to consider, and most players don't always know how to position themselves optimally. Only the top players do that.
On November 23 2012 03:17 AbideWithMe wrote: If by "optimize position" you mean positional play in the sense of micro, that sure as hell won't be a problem. Look how Automaton and Ursadak micro and position their stuff. No problem at all.
But how is an AI going to dynamically calculate a build order based on scouting if the buildings are proxied? You would just have to go with some all-round BO for most cases. This is why I said dynamic BOs won't be easy, and are sure as hell not the way to program such an AI.
On November 23 2012 03:52 Lmui wrote: Building an AI that can perfect rush builds for times under 7 minutes isn't too difficult. Forcefields and other dynamic changes that alter the environment might prove difficult to program around, but it's a solvable problem. Anything that involves only first-tier units, timings, and the subsequent micro/macro of them is possible, I think.
Finding a "good" rush build based on some questionable heuristic is feasible. Perfecting a rush build is vastly unfeasible.
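To illustrate the difference: "finding a good rush build via a heuristic" usually means searching a small parameter space inside a crude simulation. A toy sketch with invented costs, rates, and heavy simplifications (one shared production queue, workers mine from the tick they are queued, a marine counts from the tick its production starts) that picks how many extra workers to build before pumping marines:

```python
# Crude build-order search with invented numbers: pick the number of
# extra workers to make before marines so 6 marines arrive earliest.
def time_to_marines(extra_workers, target=6):
    minerals, workers, marines = 50.0, 6, 0
    queue = ['worker'] * extra_workers + ['marine'] * target
    busy_until, t = 0, 0
    while marines < target and t < 2000:
        t += 1
        minerals += workers * 0.7            # invented mining rate
        if queue and t >= busy_until and minerals >= 50:
            item = queue.pop(0)              # everything costs 50 here
            minerals -= 50
            if item == 'worker':
                workers += 1
                busy_until = t + 17          # invented build times
            else:
                marines += 1
                busy_until = t + 25
    return t

best = min(range(10), key=time_to_marines)
print(best, time_to_marines(best))
```

Real build-order optimizers are far more elaborate, but the structure is the same: a simulator plus a search, and the result is only as trustworthy as the simulator. "Perfecting" the build would mean the simulator captured the opponent's responses too, which is the infeasible part.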
On November 23 2012 03:34 monkybone wrote: The optimal way to play might well be INTENSE scouting with tons of units the whole game, with perfect control. There's no reason an optimal AI would choose a predetermined build.
The Automaton was used in very simple scenarios where the enemy AI was 100% predictable, and only one unit type was in question. In a computer vs. human, or even computer vs. computer scenario, you can't predict enemy behavior that way; the unit compositions are diverse and changing, and the whole micro-and-movement problem becomes astronomically more complex.
Imagine if, in the Automaton speedling vs. tank scenario, there were marines and banelings as well. It's one thing to program the AI to ALWAYS split, but what if splitting is not always optimal? What if you had to take the entire battlefield into consideration, every millisecond, for every single unit, to determine the optimal increment of movement based on the entire field of options the enemy has? It's basically impossible for a computer to ever micro optimally.
It's one thing to program an AI that will never lose a micro fight; it's an entirely different thing to program an optimal one.
This is what happens when people don't actually read everything. Either you don't know Ursadak and didn't look it up, or you simply ignored it. Either way, it makes your post quite preposterous. MICRO IS NOT A PROBLEM FOR AN AI. Not at all. http://www.youtube.com/watch?v=mrbYd4OFrWE
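The kind of micro these demos show can be sketched in a few lines: a scripted kiting loop in one dimension, with invented unit stats. Whether the ranged unit wins depends entirely on the numbers, which is the crux of the disagreement above: scripted micro is easy, but it is only as good as its assumptions about the opponent.

```python
# Toy 1D kiting duel with invented stats: a ranged unit (range 5,
# 2-tick cooldown, 10 damage) vs. a melee chaser (speed 2.5, 100 HP).
def kite(retreat_speed, ticks=400):
    """True if the ranged unit kills the chaser without being caught."""
    ranged_x, melee_x = 0.0, -10.0   # chaser starts 10 units behind
    melee_hp, cd = 100, 0
    for _ in range(ticks):
        dist = ranged_x - melee_x
        if cd == 0:
            if dist <= 5:            # in range and weapon ready: fire
                melee_hp -= 10
                cd = 2
                if melee_hp <= 0:
                    return True      # chaser killed, kiter untouched
            # out of range but ready: hold and let the chaser close
        else:
            ranged_x += retreat_speed  # on cooldown: back off
        cd = max(0, cd - 1)
        melee_x += 2.5               # chaser closes at speed 2.5
        if melee_x >= ranged_x:
            return False             # caught: kiting failed
    return False

print(kite(6), kite(2))  # True False
```

With retreat speed 6 the script wins untouched; with retreat speed 2 it gets caught. Add a second enemy unit type or hidden information and the simple script stops being optimal, which is the other side's point.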
We all know about these things... they don't add any value on top of what we already know, because 1) some of the demos operate on perfect knowledge (such as the ling vs. tank demo) and 2) the others are designed to be one-sided to showcase the micro capabilities of an AI.
In an actual game of SC, your AI-controlled zerglings do *not* know which ones are being targeted by your opponent's AI-controlled tanks. In the same sense, your AI-controlled marines do not know in advance the trajectory your opponent's units will take.
You can say that a hellion can kill an infinite number of zerglings 1v1, but to conclude that it can therefore kill an infinite number of zerglings in general is a fallacy.
The only thing Automaton and Ursadak illustrate is that a unit controlled by an AI is capable of doing ABC against an opponent who chooses to do XYZ. They do not in the slightest predict the outcome of a battle under perfect play.
There is a rush build that would be easy to perfect and that, if executed by an AI, would be almost unstoppable.
What is it? A simple worker rush.
Everyone is focusing on the problem AI has with decision making. A worker rush gets around this by vastly limiting the number of decisions that have to be made: it quickly turns the game into a micro battle, and in a micro battle, AI wins.
On a 2-player map where your location is known, a human has a 0% chance of winning. Our only hope would be failed scouting on larger maps, but even there, I think you could make the AI smart enough to win every time.
I'm a chess player (master) and have used and studied computer chess programs for a decade now. I just became familiar with this thread. My thoughts are that it might be theoretically possible to build a Deep Blue-type computer for SC2 under the following conditions:
1. A stable meta-game. 2. A strict "opening" book that counters most strategies outright.
Here are the problems: SC2 is not a game of perfect information the way chess is. You'd need to code scouting patterns that are fairly random (so as to not get your scout sniped). It would become increasingly difficult as the game increased in duration, since you'd have to "teach" the computer still-unknown trade-offs such as the value of economy vs. army vs. time vs. base race, etc. It would also have to be done one map at a time, and I think it would be much easier on narrow maps with hard choke points the computer could control (limiting the range of options drastically).
All that said, if the game were completely figured out, then maybe a "sort of" super AI could be programmed, but we're talking a minimum of 10 years of a stable meta-game, with thousands and thousands of hours (millions, maybe) spent coding the AI to react properly to the millions of possible plays in an SC2 game.
In chess there are few pieces, and the pieces have only a set number of entries into your position (64 squares on the board; it's simple in that way). Also, chess opening theory is so figured out (20+ moves deep in every major opening) that you can program computers so well that they can't be made vulnerable in the early stages of the game by any variation (this is what the top programs do). You even program them with strong opening books for "non-factor" weak chess openings (designed to "confuse the computer")... hell, there are opening books for even the worst chess openings.
The problem is that SC2 is played on a huge "map", equivalent to millions of "squares", with hundreds of entry points into your position. Stuff like drop play and multi-pronged, varied attacks at separate points would make it very hard for a computer to defend.
It is one thing to allocate your resources in the most correct manner against that which is known; it is altogether another thing to allocate your resources appropriately against that which is unknown. As much as SC2 has elements similar to chess, it cannot ever be a game of perfect information unless it is played without fog of war. Remove fog of war, and a proper "opening book" to sidestep or shortcut the A.I.'s weaker points becomes possible.
In short, it is theoretically possible decades from now, with a million-dollar team behind it (and years of work), a simple set map design, and no fog of war, but it will not happen anytime soon. If you want more information on why this is the case, read up on the way chess GMs can slaughter almost any sub-par computer program that lacks a solid opening book (which is based on the known). For that matter, as a chess master myself, I've played many VERY strong AIs without opening books and beaten them badly, since the positions they reach are "strategically unplayable" without a proper opening book. The ones with real opening books, though: good luck (not even the best GMs can do well against them now). That said, this is like the fog-of-war effect in SC2: as long as such a thing exists, the computer cannot build a "correct" opening book to play against yours.
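The opening-book mechanism described here is essentially a table lookup keyed on the move history so far, with a fallback to search when the position leaves the book. A toy sketch (the positions and replies are just illustrative, and `dummy_search` stands in for a real engine):

```python
# Toy opening book: known move histories get an instant table reply;
# unknown histories fall back to search.
OPENING_BOOK = {
    (): "e4",
    ("e4", "e5"): "Nf3",
    ("e4", "c5"): "Nf3",                    # Sicilian
    ("e4", "e5", "Nf3", "Nc6"): "Bb5",      # Ruy Lopez
}

def choose_move(history, search_fn):
    """Play from the book while the position is known, then search."""
    book_move = OPENING_BOOK.get(tuple(history))
    if book_move is not None:
        return book_move, "book"
    return search_fn(history), "search"

def dummy_search(history):
    # stand-in for a real engine search
    return "best-move-from-search"

print(choose_move([], dummy_search))             # ('e4', 'book')
print(choose_move(["e4", "d5"], dummy_search))   # falls back to search
```

The fog-of-war point then translates directly: in SC2, the "history" you would key such a book on is largely hidden from you, so the table can never be reliably indexed.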
The problem the research community is most interested in is not how to develop an AI to beat a particular human; it's to build an AI that provably has the best strategy. The problem is interesting because we know a solution exists; we just haven't been able to discover it yet.
There are many ways to beat a given human, mostly involving exploitation of that opponent's weaknesses. But to build an AI that performs well against a generic opponent, we must look for one that can execute the optimal strategy.
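One standard way to make "optimal strategy" precise is game-theoretic: in a zero-sum game a mixed-strategy equilibrium is guaranteed to exist (von Neumann's minimax theorem), which is the sense in which "we know a solution exists". Simple procedures like fictitious play converge to it in zero-sum games. A toy sketch on an invented 3x3 "build matchup" matrix with a rock-paper-scissors structure:

```python
# Fictitious play on an invented 3x3 zero-sum "build matchup" matrix.
# PAYOFF[i][j] is the row player's payoff: rush beats greedy, greedy
# beats defensive, defensive beats rush.
PAYOFF = [
    [0, 1, -1],   # rush       vs (rush, greedy, defensive)
    [-1, 0, 1],   # greedy
    [1, -1, 0],   # defensive
]

def fictitious_play(rounds=20000):
    """Each side repeatedly best-responds to the other's empirical mix."""
    counts_a, counts_b = [1, 1, 1], [1, 1, 1]
    for _ in range(rounds):
        a = max(range(3),
                key=lambda i: sum(PAYOFF[i][j] * counts_b[j] for j in range(3)))
        b = max(range(3),
                key=lambda j: sum(-PAYOFF[i][j] * counts_a[i] for i in range(3)))
        counts_a[a] += 1
        counts_b[b] += 1
    total = sum(counts_a)
    return [c / total for c in counts_a]

mix = fictitious_play()
print(mix)  # empirical frequencies approach the equilibrium (1/3, 1/3, 1/3)
```

Of course, SC2 collapsed to a 3x3 matrix is an enormous simplification; the real game's strategy space is astronomically larger, which is exactly why the equilibrium is known to exist but not known.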
That's really interesting. How do you know the solution exists, and how could you prove it was optimal in an SC2 context?