I'm not sure how many of you are aware of the ancient Chinese game "Go", or of the current best-of-five match between Google's deep learning AI, AlphaGo, and the world champion Lee Sedol of South Korea.
Late last year, AlphaGo was able to knock off Europe's champion 5 to nil. So what's the big deal this time around?
Simply put, the level of competition (against AlphaGo). Europe's champion, Fan Hui, is a 2-dan professional; Lee Sedol is 9-dan. Statistically, a 9-dan player beats a 2-dan player more than 95% of the time.
Last night, the first game was played, with AlphaGo taking the early lead, 1-0. This marks an amazing point of progress for AI, and more specifically for deep learning. The reason AI hadn't been able to handily beat professional Go players until very recently is simply the complexity of the game. The number of possible positions in Go exceeds the number of atoms in the universe (and for marked effect, by many orders of magnitude at that!). If you're a visual person, Go has about 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions. So it's pretty clear that computers can't win these games through brute-force computation. The computational power to do so simply doesn't exist, especially considering that these matches are played with an allotted total time for each player, similar to professional chess matches.
Google's AlphaGo has gotten to this point through deep learning. It can study recordings of professional matches and learn from them. It not only learns from the moves and mistakes of its own games, but it also has the huge advantage of being able to play against itself, at speeds incomprehensible to us simple humans.
Anyways, I thought some of you might find it interesting. I don't play Go much myself, and I don't have the time to watch the live streams of the matches. But I think it's fascinating because at some point, there may not exist a game out there that some form of AI can't beat us at.
Here's a few extras (including live stream link for those of you interested):
Skynet begins with Go? Seriously though, I was impressed with Deep Blue all those years ago and am equally impressed now (Go, I think, would be much harder to program than a chess AI, as you said, with the near-infinite possible permutations).
The algorithm that AlphaGo builds on is Monte Carlo tree search. In this algorithm, you start with a few guesses at reasonable plays based on heuristics, then evaluate each of those guesses by picking random counter-moves for each candidate and seeing what works and what doesn't. What AlphaGo has added since this algorithm first came into Go in 2006 is two neural networks, plus the training techniques for each one: 1) a network for predicting good candidate moves in the present situation, and 2) a network for evaluating a board position.
These neural networks make the tree search algorithm much more effective by eliminating obviously bad moves from the search and allowing the algorithm to evaluate board positions without having to simulate more future plays.
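For the curious, the bare tree-search half of this (before any neural networks come in) can be sketched in a few dozen lines. This is a generic UCT-style Monte Carlo tree search on a toy game, not anything from AlphaGo; the game, constants, and structure are all illustrative.

```python
import math
import random

# Generic UCT-style Monte Carlo tree search on a toy game: a pile of stones,
# players alternate removing 1 or 2, and whoever takes the last stone wins.
# Nothing here is AlphaGo-specific; it is the plain 2006-style algorithm.

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

def rollout(pile):
    """Finish the game with random moves; return +1 if the player to move wins."""
    turn = 1
    while True:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return turn          # the side that just moved took the last stone
        turn = -turn

class Node:
    def __init__(self, pile):
        self.pile = pile
        self.visits = 0
        self.wins = 0.0          # from the perspective of the player to move here
        self.children = {}       # move -> Node

def uct_search(root_pile, iters=3000, c=1.4):
    root = Node(root_pile)
    for _ in range(iters):
        node, path = root, [root]
        # 1. Selection: descend while the node is fully expanded.
        while node.pile > 0 and len(node.children) == len(legal_moves(node.pile)):
            parent = node
            node = max(node.children.values(),
                       key=lambda ch: (1 - ch.wins / ch.visits)
                       + c * math.sqrt(math.log(parent.visits) / ch.visits))
            path.append(node)
        # 2. Expansion: add one untried child, if not terminal.
        if node.pile > 0:
            move = random.choice([m for m in legal_moves(node.pile)
                                  if m not in node.children])
            node.children[move] = Node(node.pile - move)
            node = node.children[move]
            path.append(node)
        # 3. Simulation: random playout from the new node.
        result = 0.0 if node.pile == 0 else (1.0 if rollout(node.pile) == 1 else 0.0)
        # 4. Backpropagation, flipping the perspective at every ply.
        for n in reversed(path):
            n.visits += 1
            n.wins += result
            result = 1.0 - result
    # Recommend the most-visited move at the root.
    return max(root.children, key=lambda m: root.children[m].visits)
```

From a pile of 4, taking 1 stone (leaving the opponent a multiple of 3) is the winning reply, and the search finds it with enough iterations. AlphaGo's networks replace the random pieces here: the policy net replaces the random expansion/playout choices, and the value net supplements the playout results.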
Idk about Calvinball, but at the latest Brood War AI conference, Brood War was estimated to take another 5-20 years to be 'humanly solved'. We've sent them an invitation though, just in case their brute-force methods won't work for Brood War.
On March 10 2016 04:26 RoyGBiv_13 wrote: The algorithm that AlphaGo used is a Monte Carlo tree search algorithm. In this algorithm, you start with a couple guesses as to reasonable plays based on heuristics, then evaluate each of those guesses by picking random counter moves for each potential move and seeing what can work and what can't. What AlphaGo has improved upon since this algorithm first came into Go in 2006 is two neural networks, and the training techniques for each one: 1) A network for predicting good potential moves in a present situation 2) A network for evaluating a board position
These neural networks make the tree search algorithm much more effective by eliminating obviously bad moves from the search and allowing the algorithm to evaluate board positions without having to simulate more future plays.
The training of the various networks is the fun part. First, supervised learning to get a network that correctly predicts the next "human" move on a set of games (they trained it on a games database until it matched the human move ~60% of the time). Then several copies of the network play against each other with reinforcement learning on whole games (when a game is won, the network changes from that game are weighted more). The best network after... lots of games was selected to predict the candidate moves.
The second network is then trained on the resulting set of games until it accurately predicts the outcome (which color wins) from a given position.
Combine the two with a standard Monte Carlo and you get a very good engine.
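The supervised first step can be illustrated with a toy sketch: a plain softmax "policy" trained by gradient descent to predict an expert's chosen move from a feature vector. Everything here (features, data, sizes, the linear model) is invented for illustration; the real networks are deep convolutional nets, not a linear model.

```python
import numpy as np

# Toy sketch of the supervised step: a softmax "policy" learns to predict an
# expert's chosen move from a board feature vector. All data and sizes are
# made up; AlphaGo's actual policy network is a deep convolutional net.
rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Fake dataset: 500 positions, 9 features each, expert picked one of 4 moves.
X = rng.normal(size=(500, 9))
true_W = rng.normal(size=(9, 4))
y = np.argmax(X @ true_W + 0.1 * rng.normal(size=(500, 4)), axis=1)

# Gradient descent on the cross-entropy loss ("predict the human move").
W = np.zeros((9, 4))
onehot = np.eye(4)[y]
for _ in range(500):
    p = softmax(X @ W)
    W -= 0.5 * X.T @ (p - onehot) / len(X)

accuracy = np.mean(np.argmax(X @ W, axis=1) == y)
```

The reinforcement step described above would then pit copies of the trained policy against each other and nudge `W` toward moves that ended in wins; that loop is omitted here.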
The only surprising part for me is how well the convergence of the two networks seems to have worked. In my day, networks tended to spend a few days learning only to end up barely better than random (except on a select few problems). Then again, we worked with 3 layers and 50 neurons total (not including the programmer's).
For those looking for a live video analysis (in English) of the game addressed to a more Go-educated crowd: the same channel will broadcast future games as well. But as I said, you should have a decent understanding of Go to be able to follow that broadcast, while the regular stream is addressed more at beginners. For those just looking for a quick written summary and analysis of the game, you can look here: https://gogameguru.com/alphago-defeats-lee-sedol-game-1/ It should be noted that this summary is done by another 9p, but it actually summarizes a lot of different Korean media/professional opinions on the match.
On March 10 2016 06:51 Yurie wrote: Sad the first game recording is so low quality, audio choppy and cuts to the wrong camera all the time.
Check out the American Go Association's stream VOD: https://www.youtube.com/watch?v=6ZugVil2v4w It starts an hour into the match; the commentary is quicker and less oriented toward beginners.
Interesting thing about that xkcd comic: in Arimaa a bot beat the best human players just last year (in April). So the field of AI in games appears to be advancing rapidly.
I'm personally excited by the possibility of AI becoming superhumanly good at other tasks, as well as games. Like how driverless cars + Show Spoiler +
I hope the term auto catches on
will be better than humans at driving, but also for medicine or scientific research.
Close match still, if I understood correctly. Some poor-quality news website says that Lee lost by only 2 points after a 5-hour game. Crazy shit. I believe that if Lee doesn't win the next one it will be a clean sweep, 5-0. Otherwise 4-1, but I don't see him defeating the AI twice. Damn.
I wonder if they're going to continue developing AlphaGo, or whether it was only a proof-of-concept type product. For instance, chess engines are no longer that interesting for research programs. I kind of hope they won't; I felt like chess was more mysterious before computers.
On March 10 2016 04:58 nepeta wrote: Idk about calvinball, but after the latest broodwar AI conference, broodwar got estimated to take another 5-20 years to be 'humanly solved'. We've sent them an invitation though, just in case their brute force methods won't work for broodwar.
Really? That's quite fascinating. What level do the current AIs play at? Years ago when I watched, I think there was an AI that got to D+ by cheesing. But then again, ICCup rankings aren't what they used to be, so I assume it's easier now.
As for Go, I only watched a little because I don't understand the game much. But is it true that some of the moves AlphaGo made in the midgame were considered 'bad', yet were instrumental in winning it the game later on?
Can the best AIs really not beat the best players in Starcraft? Intuitively you'd think the AI would have no problem because it could just abuse micro way beyond what any human is capable of, as we saw with marines perfectly splitting against banelings for example. Starcraft is an analogue game while Chess/Go are digital, but in this case that would be to the advantage of the AI.
There is just a lot more to Starcraft than micro. That siege-splash-avoiding micro reads the targeting info directly from the tanks, so it's not exactly micro the way a human handles the problem. Strategy, making decisions that affect the whole map and understanding all the implications, that kind of thinking is too much for an AI, at least for now. Furthermore, in Starcraft the player has incomplete information, which complicates matters even more. If Google poured resources into developing a Starcraft AI, I think it could be as successful as AlphaGo, but it would still take some time to pull that off.
On March 10 2016 04:58 nepeta wrote: Idk about calvinball, but after the latest broodwar AI conference, broodwar got estimated to take another 5-20 years to be 'humanly solved'. We've sent them an invitation though, just in case their brute force methods won't work for broodwar.
It is not an issue of time (or processing power). It is an issue of money. And the money is a function of how much promotion an AI beating a top human at Brood War would yield (which is near 0).
You can brute force/Monte Carlo Go. You can't do the same for Brood War. Well, you can try, but it makes even less sense. Then again, in a way you can brute force anything eventually. Just record the mouse movements of all top SC games ever played and data mine them.
I think computers will get pretty confused by all the click spamming that pros do just to stay warmed up. There's just too much noise to reasonably understand what pros are doing just by mining their inputs.
Why are people saying the computer uses brute force? It's nowhere near brute force in chess or Go. It uses very smart parameters to decide what move to make, making its calculations much more accurate than brute force. Computers would still get absolutely destroyed in chess if they used brute force, but they don't.
AlphaGo pulled off a move that hadn't been tried despite hundreds of years of progress in the game. Quite fascinating how quickly AI has been developing, from a game people believed incredibly difficult even to beat a mid-tier player, to now stumping a 9-dan pro.
Someone had to have played the move at some point, seeing as the game has existed for over 3,000 years, no?
Upon learning that Google DeepMind, Alphabet's artificial intelligence wing, had won the first of five matches against Lee Sedol, the 33-year-old grandmaster of the ancient Chinese game Go, with its AlphaGo program, Musk sent his congratulations via Twitter to the A.I. company, in which he was an early investor before Google bought it in 2014.
Go champion Lee Sedol predicted he'd sweep the machine 5-0 in a Tyson-style knockout, but he had to resign the first round following a three-and-a-half-hour stand-off. There are four more rounds to go, but this is the first time a computer program has bested such a skilled player in Go, a game conceived roughly 3,000 years ago and considered much harder to master than chess.
If this comes off as a sign of the impending robot apocalypse, don’t fret, Musk is worried about this too.
While the billionaire tech company mogul was quick to give praise, tweeting, “Experts in the field thought AI was 10 years away from achieving this,” he’s also highly concerned about the pitfalls of A.I. and the dystopian future it could breed.
On March 11 2016 05:05 PhoenixVoid wrote: Alpha Go pulled off a move that hasn't been tried yet despite hundreds of years of progress in the game. Quite fascinating how exponentially AI has been developing from a game people believed to be incredibly difficult to beat a medium tier player to now stumping a 9th dan pro.
It's somewhere between 5 and 7 points, given two equally skilled players. At the rate AlphaGo is improving, I'm sure our understanding of the first-move advantage will improve greatly too.
On March 11 2016 09:02 ejozl wrote: How big is the advantage of being the starting player in Go?
Pretty big, but Go is not chess: here you're not really playing to kill enemy pieces, but to control territory (points). At the end of the game, once both players agree it's over, they count their points, and White gets 7.5 extra points to address the inherent advantage of Black moving first (the extra points are called komi).
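A quick sketch of how komi enters the final count (the board totals are toy numbers, and this assumes simple area counting):

```python
# Scoring with komi, as described above: White receives 7.5 extra points to
# offset Black's first-move advantage (the half point also rules out draws).
KOMI = 7.5

def winner(black_points, white_points, komi=KOMI):
    margin = black_points - (white_points + komi)
    return ("Black", margin) if margin > 0 else ("White", -margin)

result = winner(185, 176)   # Black leads by 9 on the board, only 1.5 after komi
```

So a 9-point lead on the board for Black shrinks to a 1.5-point win after komi, which is why small endgame differences matter so much.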
On March 11 2016 04:52 sertas wrote: why are people saying that computer uses brute force? its not even near using bruteforce in chess or go. It uses very smart parameters for deciding on what move to make making its calculations much more accurate then bruteforce. Computers would still get absolutely destroyed in chess if they were using bruteforce but they dont,
I actually published a paper where I used Monte Carlo simulation for molecular modeling, so I think I know what it is. In layman's terms it can be called brute-forcing, especially if you compare human thinking to computer thinking. Imagine a human trying to run a Monte Carlo algorithm by hand. You'd go insane.
I can kind of see it: if the possibility space of Go is really that big, humans explore only a small segment of it.
If the AI finds a region of the possibility space that is alien to human players but solid in itself, whether by accident or by design, the human player will suddenly lose their usual game sense. I don't know if that's how Go works, but I can see how it may be possible.
AlphaGo is definitely not using brute force. MCTS was first developed with completely random playouts, which is why it was called Monte Carlo, but it turns out it works a lot better if you have intelligent playouts, so long as they're still fast enough.
Short summary of how AlphaGo works: it learns a deep neural net that takes board states as input and outputs a predicted move, trained on tens of thousands of recorded professional Go games. It actually learns two of these: one bigger and slower but more accurate, and one faster that it can use for the playouts of MCTS. With these (and a bit of retraining to incentivize winning rather than accurate prediction), it then plays itself millions of times to generate a huge dataset mapping board states to wins or losses, and learns another deep neural net that predicts the value of a board state. The DeepMind guy in yesterday's interview called these the "policy net" and the "value net" respectively.
All of that is trained offline before a game. Using those two networks, AlphaGo does game tree search (MCTS) during a game to decide the best move. But it prunes the game tree using its policy net, so it only explores moves that an expert would be likely to play, based on what it has learned. That's why it's not really brute force in the way the term is usually applied: it only thinks about reasonable moves. The final move selected is a balance between the results of the game tree search and the value net's evaluation of the position after the move. There are clearly a lot more complications than this, but that's the basic approach.
At a talk at a recent AI conference I attended, the CEO of DeepMind said that they actually evaluated fewer game states in their games against Fan Hui than Deep Blue did, and Deep Blue was running on 20-year-old technology. That means their search is most definitely intelligent, and not brute-forcing the game.
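The "balance between search results and the value net" is often written as an upper-confidence selection rule. The sketch below is a generic PUCT-style score with made-up Q values, priors, and visit counts; the exact formula and constants DeepMind used aren't claimed here.

```python
import math

# A generic PUCT-style selection score: exploit the averaged evaluation Q,
# but explore in proportion to the policy net's prior probability for the
# move. The constant, Q values, priors, and visit counts are all invented.

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.0):
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

# Three candidate moves: (Q from search, prior from policy net, visits so far).
candidates = {
    "A": (0.52, 0.60, 40),   # good evaluation, and the policy likes it
    "B": (0.55, 0.05, 10),   # slightly better Q, but an "inhuman" move
    "C": (0.30, 0.35, 30),   # the policy likes it, the search says it's bad
}
parent_visits = sum(v for _, _, v in candidates.values())
best = max(candidates, key=lambda m: puct_score(*candidates[m][:2],
                                                parent_visits, candidates[m][2]))
```

Move B has the best raw Q but a tiny prior, so the search keeps being steered back to A; that is the pruning effect being described, done softly through the score rather than by deleting branches.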
On March 11 2016 12:24 trulojucreathrma.com wrote: Well, Deep Blue had a human playing.
Btw, brute force doesn't mean it is stupid. As long as you randomly iterate, I think you can call it brute forcing.
Most of the time, "brute force" does mean stupid. Brute-forcing is what you do when you revert to exploring every possibility available until you reach your stop condition, abandoning anything but the most basic algorithm and relying on raw computing power to find a solution. In a game, that would be a tree search of possible moves without cutting any branches, for example.
If you tag as "brute force" any iterative analysis of future lines based on candidate moves, humans are also "brute forcing".
I'd say brute force is when you don't use heuristics. I.e. when you try to brute-force a PIN code and just enter all 10,000 possible 4-digit codes until you find the right one.
As soon as heuristics come into play, it's no longer brute forcing imo, but it's up for debate. You could argue the algorithm trying out all the possible moves, and picking the best one according to some heuristics, is similar to trying all pin codes until it works.
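That PIN example is easy to make concrete. Exhaustive search walks all 10,000 codes in order; a "heuristic" cracker just tries a short, made-up list of common PINs first. Both are sketches of the idea, not real attack tools.

```python
from itertools import product

# The PIN example made literal. Exhaustive ("brute force") search walks all
# 10,000 codes in lexicographic order; the "heuristic" cracker first tries a
# short, invented list of statistically common PINs.

def crack_exhaustive(check):
    for attempt, digits in enumerate(product("0123456789", repeat=4), start=1):
        pin = "".join(digits)
        if check(pin):
            return pin, attempt

COMMON_FIRST = ["1234", "0000", "1111", "1212", "2580"]   # illustrative list

def crack_heuristic(check):
    rest = ("".join(d) for d in product("0123456789", repeat=4))
    ordering = COMMON_FIRST + [p for p in rest if p not in COMMON_FIRST]
    for attempt, pin in enumerate(ordering, start=1):
        if check(pin):
            return pin, attempt

secret = "1212"
is_right = lambda pin: pin == secret
pin_a, tries_a = crack_exhaustive(is_right)   # 1,213 attempts
pin_b, tries_b = crack_heuristic(is_right)    # 4 attempts
```

Both find the code; the argument in the thread is about whether the second one still deserves the name "brute force" once the ordering is informed.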
On March 10 2016 05:11 {CC}StealthBlue wrote: I would imagine the real development will be when AlphaGo knows the Human has lost before the player has made his/her move or counter.
I wouldn't be so sure. The algorithm (to my understanding) is mostly "pattern finding". Being able to determine definitively the outcome of a game in a given situation would likely require an extremely expensive depth-first search over the remaining game states. By that I mean the AI could one day be able to say "99% chance of victory" at a certain point, but it could likely never determine the inevitable outcome of a game from a nontrivial starting state.
To all of you crying "brute force":
A brute force solution would look like this: given the current board state, determine all possible next states, then all the next states for those, etc. until you have computed every possible end state, then use that to inform you as to what next state to choose.
Obviously AlphaGo cannot do that: when the game is not near the end, Go simply has too many possible continuations to compute.
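For contrast, here is what that literal brute-force procedure looks like on a game small enough to allow it, a toy pile game used purely for illustration. Every continuation is explored, with no pruning and no heuristics.

```python
# A literal brute-force game solver: enumerate every line of play to the end
# and back the results up, with no pruning and no heuristics. The game is a
# toy (a pile of stones, take 1 or 2, last stone wins) precisely because for
# Go this enumeration is exactly what is impossible.

def mover_wins(pile):
    """True if the player to move can force a win. Explores every continuation."""
    if pile == 0:
        return False             # the previous player took the last stone
    return any(not mover_wins(pile - m) for m in (1, 2) if m <= pile)

# The losing positions for the player to move turn out to be multiples of 3.
losing = [n for n in range(1, 13) if not mover_wins(n)]
```

Even here the call tree grows exponentially with the pile size; a 19x19 Go position offers no analogous full enumeration, which is the point being made above.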
AlphaGo "knew" game 2 was won: in the late game it made some suboptimal moves just to settle the center. In Go, these kinds of plays are often made by humans when they know they have won for sure. You simplify the game so as not to give your opponent a chance to pull off a tricky sequence that could reverse the result.
So you can only brute force when you move through all of possibility space in a totally arbitrary manner, ignoring all information about where in the possibility space the solution is most likely to be found?
I disagree. Monte Carlo randomly picks something. That's not thinking; that's relying on sheer calculation power to evaluate so many positions. You force a solution through sheer calculation power.
It is like a human chess/Go player deciding what move to make by having a hundred trillion people play out all his candidate moves and then playing the move that wins most often.
Brute force isn't a technical term, so I can use it as I like. In many game AIs you have no truly smart algorithms, because there is no definite way to measure a position; you have to evaluate it. But in the physical sciences you can measure, and there Monte Carlo is brute force. In the mind of a layperson, it is too.
On March 11 2016 21:08 trulojucreathrma.com wrote: So you can only brute force when you move through all of possibility space in a totally arbitrary manner, ignoring all information about where in the possibility space the solution is most likely to be found?
Yes, that is the definition of brute force. Search by exhaustion. No reasoning.
You can disagree with the semantics if you like, most people won't.
That's a stupid thing to say. Also, many people don't agree. Go google "Monte Carlo" "brute force". If only exhaustive methods are brute force, then the term loses 99.999% of its meaning. When do you ever do an exhaustive search?
Dude, it is used all the time, and what you call "safe" encryption is never safe for long periods.
Some years ago the 'DES' algorithm was a standard for symmetric encryption. Then it got brute-forced. So Triple-DES and AES were introduced. With current computing power, they cannot be brute-forced. Yet. NIST estimates Triple-DES will be brute-forced by 2030. And of course who knows if the NSA has a supercomputer that can do it already.
Yes, and it was considered "secure" until 1998 or so. Just like you think our current encryption is safe. Wait some years and computational power has increased to the point where our current encryption can be brute-forced, and the cycle continues.
Hence my point that brute-force is used all the time. And calling AlphaGo brute force is an insult to the team behind it.
My name was randomly generated. What you see is your own bias.
So DES was declared no longer secure before you were born. Not sure why you bring it up. I can say that the electrical telegraph is no longer used. But then you bring up that it is, by hobbyists. It's disingenuous to bring that up considering the nature of the debate we were having.
The matches will be held at the Four Seasons Hotel, Seoul, South Korea, starting at 1pm local time (4am GMT; day before 11pm ET, 8pm PT) on March 9th, 10th, 12th, 13th and 15th.
On March 11 2016 22:03 Laurens wrote: I was born in 91, and in the context of our discussion it was very clear why i brought it up.
But I do believe you are trolling now, so I'll just stop responding.
Ahuh ahum, I am sorry for being so mature. I don't really care about your future publications on Monte Carlo anymore
No! You are a troll!! omgz
I guess you are allowed to call people trolls, but calling a Monte Carlo algorithm 'brute force' is a deep deeply cutting demeaning insult.
I like Monte Carlo. MCMC's are about the only real comp sci algorithms we use in our lab. Google is obviously using something much more advanced. But we are a simple lab where no one even has a comp sci degree.
Can we please stop mentioning brute force? Deep reinforcement learning with dual networks, the kind of technique used by DeepMind, specifically has one neural network dedicated to the task of 'guessing' which few moves to explore next. These chosen few are then sent to another part of the algorithm for evaluation. Effectively this is smart pruning of the tree of possibilities, and it is nothing like plain Monte Carlo sampling, whose accuracy improves only as O(1/sqrt(n)) in the number of samples.
If you want to find out how this was first exploited in chess, check out Matthew Lai's 'Giraffe' chess engine from September 2015; it's on arXiv.
Well, to be fair to those who brought up 'brute force' first (it wasn't me): they weren't referring to AlphaGo or DeepMind.
On March 12 2016 00:42 stuchiu wrote: It's a narrative thing. People want the computer to be about brute force.
No. It is about context. It is not a technical term. Apparently, in cryptography, they call anything that's not exhaustive 'brute force'.
In physical sciences, we call everything that's expensive computationally but simple to implement 'brute force'.
We can code for years and get all the laws of physics right, then get an answer with little computational cost. Or we can implement something simple and just run it for a relatively long time.
Of course, calculating everything from first principles is impossible. You already fail at that by the time you get to a single water molecule.
To briefly translate: a random lawyer who specializes in IT is complaining that AlphaGo vs Lee Sedol is a loaded game that is 100% impossible for the human to win, and that Google is insulting the entire Baduk community with chicanery of historic proportions.
He argues that since AlphaGo is connected to the internet, the AI can overpower a human through the sheer force of numbers. Bear with me here. More specifically, AlphaGo doesn't make its move by predicting Lee Sedol's future moves, but by calculating the ideal move after looking at the last play Lee Sedol made. Therefore, since AlphaGo uses brute force to analyze all the possibilities, it is not a true AI.
The fact that AlphaGo uses cloud computing is directly against the principles of Baduk, which is supposed to be a fair 1v1 with no external advice. "Google says AlphaGo does not use a brute-force algorithm, but it receives advice from another program that does. This is blatant cheating. Because AlphaGo can run thousands of AlphaGos at the same time over the internet, and can add more computers to its resource network when running out of time, it is impossible for it to lose on time, unlike Lee Sedol," says this lawyer, adding that "Google offered a million dollars, but if Google wins, it will make far higher profits by being the frontrunner in AI technology."
He concludes that Google should publicly apologize to Lee Sedol, Fan Hui, and the entire Baduk community, since the company is deceiving them with an AlphaGo that neither truly understands how Baduk works nor can be considered a true AI.
The true gold is the 1000+ netizen comments below, which unambiguously blame Google for being a lying piece of shit, claim that what AlphaGo is doing is the same as bringing the textbook to an exam, and demand that it be disconnected from the internet for the rest of the match so it becomes Lee Sedol vs one laptop program.
Funny how some ppl in here think a machine can "play" Go... But I guess it's normal when science and philosophy come close to each other and the scientist tries to be a philosopher, or vice versa...
On March 12 2016 03:47 Nakama wrote: Funny how some ppl in here think a machine can "play" GO...... But i guess its normal when science and philosophy come close to each other and the scientist tries to be a philosopher or visa verse....
Well the important part is that you managed to be pretentious without actually elaborating on your point
It almost seems to me that playing against this thing would be like playing against the luckiest idiot savant ever. Like, the program doesn't know HOW to play at all; all it knows is that X move is statistically the best given where the stones are on the board. Would personifying it be kind of like AlphaGo just playing where it "feels best" every time, without understanding a single intricacy of the game beyond where stones can legally be played and what it means to win?
Is what AlphaGo does really so different from what an actual human player does? Take a potential move, evaluate how it would play out, and discard it if it is not good enough? The difference is that as a computer, AlphaGo can do that same process far faster and more extensively than a human could, but the basic principle is the same.
On March 12 2016 03:47 Nakama wrote: Funny how some ppl in here think a machine can "play" GO...... But i guess its normal when science and philosophy come close to each other and the scientist tries to be a philosopher or visa verse....
Well the important part is that you managed to be pretentious without actually elaborating on your point
Yes, I have to admit that, but hey, it's the internet, and there is no way to discuss this topic reasonably in a forum like this without being so simplistic that it comes out wrong... I was just baffled by the reaction and arguments of some folks in here when some other dude called the method AlphaGo uses "brute force", so I expressed that =)
And for me the best way to show my opinion on this topic was to give the hint that we are talking about a "machine", and therefore words like "smart", "evaluation", "decision", "thinking", etc. can only be meant metaphorically; in the end AlphaGo uses "brute force" to achieve/mimic what a human being does by thinking.
I am sure there are light-years between trying out all possible options to solve a game or a code (what you call brute force) and the method AlphaGo uses, and that's why some of you got mad about it. But if you think about it, there is not much difference between those two methods, and I think brute force is an accurate way of describing the difference between the method AlphaGo uses and the one Lee Sedol is using.
Brute force is about as precise as saying "the primary technique it uses is programmed algorithms". Brute force 'might' be true by 'some definitions', but honestly it's such a basic and inaccurate description that it's like saying Flash uses his mouse to become a champion :S
For those who are interested in AI and statistics, the paper is definitely worth reading.
First and foremost, it's worth pointing out that according to the paper, their setup had only 40 search threads, 1,202 CPUs and 176 GPUs. I don't think this is even remotely close to a supercomputer by today's standards. The computing power probably isn't even as strong as Deep Blue, which was built decades ago.
One of the greatest challenges in Go is how to evaluate a given board state. The number of potential moves is large, and it is incredibly difficult to assign a score to any board position. The board gets easier and easier to evaluate the deeper you go down the tree (fewer moves are possible toward the endgame). The early/mid game, where the decision tree has many branches, makes any brute-force algorithm infeasible.
In layman's terms, neural networks allowed the program to develop a strong ability to predict moves by "guessing". This ability is reinforced by playing games against itself and by using a large library of recorded professional games, assigning a probabilistic score to each simulated situation. This is how the program "learns" on its own. The use of Bayesian conditional probability is what differentiates this program from other brute-force algorithms out there.
In a live game, it evaluates only the moves with high values/payoffs; this reduces the number of branches on the search tree and allows the program to analyze them to much greater depth, which ultimately produces board values that are much more accurate. I think this process is very similar to what a human does, which is to focus on only a handful of key possibilities. A brute-force approach would analyze all possible moves to a much shallower depth, resulting in less reliable value estimates.
The key here is that the program remembers its past "training" games when it plays, so it spends much less time evaluating situations that it has seen before.
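The pruning idea above can be shown in miniature: keep only the k moves a (hypothetical) policy scores highest and search just those branches. The move names and probabilities below are invented.

```python
import heapq

# Miniature version of the pruning described above: keep only the k moves a
# (hypothetical) policy net scores highest and spend the search budget there.
# Move names and prior probabilities are invented for illustration.

def top_k_moves(policy_scores, k=3):
    """policy_scores maps move -> prior probability from the policy net."""
    return heapq.nlargest(k, policy_scores, key=policy_scores.get)

scores = {"D4": 0.31, "Q16": 0.27, "K10": 0.14, "R4": 0.18,
          "C3": 0.09, "A1": 0.001, "T19": 0.002}
kept = top_k_moves(scores)   # the tree now branches 3 ways instead of 7
```

Cutting the branching factor from 7 to 3 at every ply shrinks the tree exponentially with depth, which is where the "greater depth for the same budget" comes from.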
The value network was trained for 30 million mini-batches of 32 positions, using 50 GPUs, for one week.
AlphaGo incorporates so many modern AI techniques, and the fact that it works this well is truly revolutionary.
On March 12 2016 03:47 Nakama wrote: Funny how some ppl in here think a machine can "play" GO...... But i guess its normal when science and philosophy come close to each other and the scientist tries to be a philosopher or visa verse....
Well the important part is that you managed to be pretentious without actually elaborating on your point
Yes, I have to admit that, but hey, it's the internet, and there is no way to discuss this topic reasonably in a forum like this without being so simplistic that it gets it wrong... I was just baffled by the reaction and arguments of some folks in here when some other dude called the method AlphaGo uses "brute force", so I expressed it =)
And for me the best way to show my own opinion on this topic was to hint that we are talking about a "machine", and therefore words like "smart", "evaluation", "decision", "thinking", etc. can only be meant metaphorically; so in the end AlphaGo uses "brute force" to achieve/mimic what a human being does by thinking.
I am sure there are light-years between trying out all possible options to solve a game or code (what you call brute force) and the method AlphaGo uses, and that's why some of you got mad about it. But if you think about it, there is not much difference between those two methods, and I think "brute force" is an accurate way of describing the difference between the method AlphaGo uses and the one Lee Sedol is using.
Your definition of "brute force" seems to be so broad as to encompass all of human and machine thinking. When it comes down to it no one understands how humans make decisions. There's no reason to consider AlphaGO's decision making process inferior to the human process if it can obtain better results in this context.
I feel the same way, but also that makes me respect human brains even more. Everything in Go happens in one dimension, even though it's on a 2-d board. Stones can only move in a 1-d fashion. In Starcraft, every unit on the map is represented in 2-d and has those degrees of freedom. So what if a computer can beat us at Go? It's a revolution, but I don't know enough about computers unfortunately to say how much harder it is for this type of AI to consider moves in more than 1 dimension.
How exactly is it in one dimension? The board is two dimensions, and stones don't move at all. They are just placed in an (x,y) location each turn and potentially removed if captured. Units in starcraft also exist in an (x,y) coordinate system and are removed if their HP reaches 0. Obviously starcraft is more complex in that the units move and fire projectiles and such but it's not a dimension higher than go.
You can consider the Go board to be one-dimensional because the two dimensions are both discrete. You can take the 19 rows, place them end to end, and reduce the board to a computationally equivalent single row of 361 points. This is called vectorization.
This process only applies when the dimensions are discrete and finite.
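The flattening described there is easy to make concrete. A minimal sketch (names are my own): a 19x19 board stored as one length-361 list, with helpers converting between (row, col) coordinates and the flat index.

```python
SIZE = 19  # standard Go board

def to_index(row, col):
    # Map a 2-D coordinate onto the flat vector.
    return row * SIZE + col

def to_coord(index):
    # Inverse mapping: recover (row, col) from the flat index.
    return divmod(index, SIZE)

board = [0] * (SIZE * SIZE)   # 0 empty, 1 black, 2 white
board[to_index(3, 15)] = 1    # black plays at row 3, column 15

print(len(board))                          # 361
print(to_coord(to_index(3, 15)))           # (3, 15)
```

Note this only changes the storage layout; adjacency on the board still has to be computed from the 2-D structure, which is part of what the thread goes on to debate.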
My point is that AlphaGo has no "decision-making process" that is even suitable to compare with what we as humans do... it's a machine, and if we talk about it as if it "makes decisions", "acts", etc., we mean it in a metaphorical way; otherwise our speech about it makes no sense.
Every StarCraft map is also finite and discrete. Maybe much bigger than a Go board, but it's finite, and there is a minimum x and y distance.
"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.
...
"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently.
Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come into conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move.
"You have to keep track of things happening off the screen," Dean says.
It means that Google's DeepMind would have a brand-new challenge of trying to outguess their opponent, and react if and when they come up with something totally crazy. It would test a new set of skills for artificial intelligence.
Though I wouldn't take it as an absolute promise until we get confirmation.
For those questioning the "brute force" part, I just checked Wikipedia to get some numbers.
"November 2006 match between Deep Fritz and world chess champion Vladimir Kramnik, the program ran on a personal computer containing two Intel Core 2 Duo CPUs, capable of evaluating only 8 million positions per second, but searching to an average depth of 17 to 18 plies in the middlegame thanks to heuristics"
Deep Fritz, in 2006, was superior to Deep Blue (from 1996!) and ran on a Core 2 Duo. Eight million positions per second is already completely out of human reach. Even in chess, heuristics were the key step, and in Go more is required. Anyway, a potato can evaluate millions of moves while a human is racing against time to check them one by one.
I don't even know why people make such a big deal. It's not a fair match, and I don't think it's supposed to be. Just don't be fooled into swallowing the idea that it's a battle of "minds" or that Google is actually challenging the man. The computer is not even emulating the human thought process. They know they will win; what is done is to showcase what they are capable of to the general public. But discussing the fairness of a match between a 1,202-CPU cloud-computing cluster and a human makes no sense.
Your cellphone can brute-force you in a simple arithmetic contest while running YouTube; the point is that, despite that, nobody ever did it with Go before because "the technology just wasn't there yet". You cannot calculate your way to a win at Go. Simple calculation is not enough for a human OR a computer (at least for now).
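Some back-of-the-envelope arithmetic makes the point concrete. The branching factors below are rough conventional estimates (my assumption, not figures from this thread): roughly 35 legal moves per chess position versus roughly 250 in Go, so an exhaustive d-ply search visits about b**d positions.

```python
def tree_size(branching, depth):
    # Positions visited by an exhaustive search: branching ** depth.
    return branching ** depth

chess_10ply = tree_size(35, 10)   # ~2.8e15 positions
go_10ply = tree_size(250, 10)     # ~9.5e23 positions

print(f"chess 10 ply: {chess_10ply:.2e}")
print(f"go 10 ply:    {go_10ply:.2e}")

# At 8 million evaluations/second (the Deep Fritz figure quoted above),
# even the chess tree at 10 ply takes on the order of a decade to
# exhaust without pruning; the Go tree is hopeless.
seconds = chess_10ply / 8_000_000
print(f"{seconds / (3600 * 24 * 365):.1f} years")
```

Which is exactly why heuristics carried chess engines, and why Go needed something stronger still.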
Almost all of the games that have AI beating humans are information symmetrical. I'll be impressed when AI can beat the best players of all-time heads up at asymmetric information games like poker, magic the gathering, etc. more than 50% of the time.
People questioning the use of the words "brute force" should note it has a specific meaning in computer science. How AlphaGo got very strong is its hyperbolic-time-chamber ability to train against itself and other AIs/emulated players very quickly in a short amount of time. The computing power AlphaGo has been given over time isn't sufficient to make the program one of the best in the world by simply, stupidly calculating all possible moves on the board.
And the point you are missing is that it really is pretentious to argue semantics about industry jargon as an outsider. I work in aerospace. We have our own acronyms and jargon and code words like every industry does. There are many terms that have a very specific meaning in the aerospace industry.
If I learned anything in reading 5 pages of this thread, it is that the term "brute force" has a specific meaning in the computer industry, a meaning that the people who sound like they work in the industry or follow it closely all use, a meaning that you are pointlessly trying to argue the semantics of.
It might be true that it is an industry-specific term, but it is still frustrating to see people calling AlphaGo brute force, because that just isn't doing it justice; it simply is not "brute force" in a computer-science context (which this is, since we're talking about AI/ML).
In a sense, AlphaGo does have a "decision-making process", since it decides that some moves will give a higher probability of victory than others. AlphaGo is basically doing what Lee Sedol's brain is doing, but on a far more precise level, and never to the point of brute force, since that would mean evaluating all possible variations, which it just isn't doing. So AlphaGo's algorithm is far more intelligent than a simple brute-force mechanism.
What makes the process of AlphaGo so different from a human's?
On March 12 2016 14:10 Wegandi wrote: Almost all of the games that have AI beating humans are information symmetrical. I'll be impressed when AI can beat the best players of all-time heads up at asymmetric information games like poker, magic the gathering, etc. more than 50% of the time.
Poker is a game of percentages. It is trivial for a computer to calculate its chance of winning at any single point in the game and react "perfectly" to the information available. Over a large enough sample size to even out the element of chance a computer will win, no doubt about it.
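The "game of percentages" claim is easy to illustrate with a standard toy example (my choice of example, not from the post above): the exact odds of completing a flush draw after the flop in Texas hold'em, computed by counting the ways to miss on both remaining cards.

```python
from math import comb

outs = 9       # remaining cards of the flush suit
unseen = 47    # 52 cards minus 2 hole cards and 3 flop cards

# Probability both the turn and river miss all outs: C(38,2) / C(47,2).
miss_both = comb(unseen - outs, 2) / comb(unseen, 2)
hit = 1 - miss_both

print(f"{hit:.1%}")  # 35.0%
```

A computer gets this number exactly and instantly at every decision point; the asymmetric-information part (modeling what the opponent holds and how they bluff) is the genuinely hard bit.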
very amazed that jeff dean of all people is talking about starcraft as the next target.
google trying to destroy korean esports???
Give Flash a couple of months to get back to form and a Bo5 against AlphaGo. Yes please.
Who am i kidding. Even EffOrt or Bisu would be enough
To be serious though, once DeepMind gets over the initial hurdle of limited information and studying build orders, it won't even be fair in either SC2 or BW because of the perfect-micro aspect. They'd have to give the AI a lot of handicaps.
Very true. Always reminds me of the Automaton2000 videos about the marine split micro. Regardless, i think it would be entertaining to see what would happen.
What I'd love to see is whether the AI can find different build orders and create new strategies, like the fast-corsair strategies in PvZ.
Definitely, and this would probably end up happening too. I noticed a comment on the Reddit thread about AlphaGo's third victory that I think sums this up well:
Just remember, this is not the end of Go. As it was in chess, computers will gradually go from our nemesis to part of Go culture, assisting us and enhancing the game for human play.
The one problem with the DeepMind-vs-SC-pro idea is that DeepMind should be limited to the input speed of a keyboard and mouse. It's incredibly dishonest to allow the computer to perform tasks a player simply isn't allowed to because of the interface. That's not really a competition at that point; it's simply allowing the computer to abuse parts of the game engine the human player doesn't have access to.
But there would also be some fairly serious technical issues to work through. It's one thing to have access to the direct API of SC:BW; it's wholly another to find a way to play a match near-instantly, which is what would be required for it to run the constant simulations it needs to "learn" the game.
Lastly, on the Go matches: considering they're throwing a parallelized supercomputer at the problem, even if Moore's law holds for the next decade, making that processing power available to the home user, there's still the tens of millions of dollars in engineering that went into the code to make this work. That's never going to be common.
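The input-speed handicap suggested above could be enforced with a simple rate limiter. This is a sketch of one common approach (a token bucket); the parameter values are illustrative, not anything DeepMind has proposed.

```python
class ActionLimiter:
    """Cap an agent's actions at a human-like APM via a token bucket."""

    def __init__(self, apm=300):
        self.rate = apm / 60.0   # tokens (actions) refilled per second
        self.capacity = 10.0     # small burst allowance
        self.tokens = self.capacity
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, then spend one token
        # if an action is affordable; otherwise the action is dropped.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = ActionLimiter(apm=300)  # 5 actions per second on average
# Simulate an agent trying to act every 10 ms for 2 seconds:
allowed = sum(limiter.allow(t * 0.01) for t in range(200))
print(allowed)  # an initial burst, then it settles near 5 actions/second
```

The burst allowance matters for fairness too: humans also spike well above their average APM for short moments.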
"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.
...
"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently.
Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come in to conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move.
"You have to keep track of things happening off the screen," Dean says.
It means that Google's DeepMind would have a brand-new challenge of trying to outguess their opponent, and react if and when they come up with something totally crazy. It would test a new set of skills for artificial intelligence.
Though I wouldn't take it as an absolute promise until we get confirmation.
very amazed that jeff dean of all people is talking about starcraft as the next target.
google trying to destroy korean esports???
Give Flash a couple of months to get back to form and a Bo5 against AlphaGo. Yes please.
Who am I kidding. Even EffOrt or Bisu would be enough.
to be serious though, once DeepMind gets over the initial hurdle of limited information and studying build orders, it won't even be fair in either SC2 or BW because of the perfect-micro aspect. They'd have to give the AI a lot of handicaps.
google would have to work something out with Blizzard to do it legally anyway, but if they really wanted to crack open BW to suit their needs they could certainly do it.
and lastly, google is making a huge advertisement for the power of cloud computing. Moore's law will not hold up, at least for now; however, companies have found it lucrative to sell processing power through cloud computing. perhaps one day big enough server farms and more efficient parallelism will make AlphaGo's improvements available to the average person.
They can use two robot arms with robot hands to operate the mouse and keyboard physically, and use a camera to watch the screen.
That's only fair: with chess or Go, moving the piece is immaterial, even for a robot (unless maybe it's speed chess). A chess or Go board would be very, very easy to interpret compared to a video game screen.
The eye-brain-hand coordination challenge is probably as hard a problem as the game itself. If you're going to pick a challenge just because it's hard to solve, to show you can do things never done before, why exclude half of the problem? Let's see if they can make a hand that can do 350 APM all across the keyboard, using any key combination.
Under the current approach, DeepMind for 2D video games would very likely be restricted to 60 or 120 APM (one keyboard and/or mouse action per rendered frame it processes).
On March 12 2016 03:47 Nakama wrote: Funny how some ppl in here think a machine can "play" Go... But I guess it's normal when science and philosophy come close to each other and the scientist tries to be a philosopher, or vice versa...
Well the important part is that you managed to be pretentious without actually elaborating on your point
Yes, I have to admit that, but hey, it's the internet and there's no way to discuss this topic in any reasonable way in a forum like this without being so simplistic that it gets things wrong... and I was just baffled by the reaction and arguments of some folks in here when some other dude called the method AlphaGo uses "brute force", so I expressed it =)
And for me the best way to show my own opinion on this topic was to point out that we are talking about a "machine", and therefore words like "smart", "evaluation", "decision", "thinking", etc. can only be meant metaphorically; so in the end AlphaGo uses "brute force" to achieve/mimic what a human being does by thinking.
I am sure there are light-years between trying out all possible options to solve a game or code (what you call brute force) and the method AlphaGo uses, and that's why some of you got mad about it. But if you think about it, there's not much difference between those two methods, and I think "brute force" is an accurate way of describing the difference between the method AlphaGo uses and the one Lee Sedol is using.
Your definition of "brute force" seems to be so broad as to encompass all of human and machine thinking. When it comes down to it, no one understands how humans make decisions. There's no reason to consider AlphaGo's decision-making process inferior to the human process if it can obtain better results in this context.
My point is that AlphaGo has no "decision-making process" that is even suitable to compare to what we as humans do... it's a machine, and if we talk about it as if it "makes decisions", "acts", etc., we mean it in a metaphorical way; otherwise our speech about it makes no sense.
And the point you are missing is that it really is pretentious to argue semantics about industry jargon as an outsider. I work in aerospace. We have our own acronyms and jargon and code words like every industry does. There are many terms that have a very specific meaning in the aerospace industry.
If I learned anything in reading 5 pages of this thread, it is that the term "brute force" has a specific meaning in the computer industry, a meaning that the people who sound like they work in the industry or follow it closely all use, a meaning that you are pointlessly trying to argue the semantics of.
I agree with you. But the problem arises when people from the industry forget that they use the words in a very specific sense and then carry them into another environment, for example when they try to explain the human brain by analogy to the method AlphaGo uses. Just look at some posts in here and you will see it's not even a rare case.
On March 12 2016 14:10 Wegandi wrote: Almost all of the games that have AI beating humans are information symmetrical. I'll be impressed when AI can beat the best players of all-time heads up at asymmetric information games like poker, magic the gathering, etc. more than 50% of the time.
Poker is a game of percentages. It is trivial for a computer to calculate its chance of winning at any single point in the game and react "perfectly" to the information available. Over a large enough sample size to even out the element of chance a computer will win, no doubt about it.
This explanation is wrong. Current AIs do not beat the best poker players in complex variations of poker (NLHE, PLO, etc.). While I'm pretty sure it's more a matter of resources than technology, the reason they're not winning is because poker is a game of balance. Your bluffs have to be balanced with strong hands. Your greedy value bets have to appear in spots you bluff a significant amount. Math for a single hand has almost nothing to do with it.
If a team with the resources of AlphaGo made a PokerAI, they would get there in a few years, at most, but meanwhile, a few dozen players still beat the best AIs.
Same goes for games like Magic or Hearthstone. This is the spot Chess was in ~25 years ago, and now your phones are GM level.
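To make the "single-hand math" point above concrete: the per-hand arithmetic really is the easy part. A call/fold decision by pot odds fits in a few lines of Python (the function and numbers are purely illustrative, not from any real poker bot):

```python
def should_call(pot: float, to_call: float, equity: float) -> bool:
    """Calling is profitable when our chance of winning (equity)
    exceeds the pot odds we are being offered."""
    required_equity = to_call / (pot + to_call)
    return equity > required_equity

# Facing a 50 bet into a 100 pot, we need more than 1/3 equity to call:
print(should_call(pot=100, to_call=50, equity=0.40))  # True
```

The hard part the posts above describe is balancing bluffs and value bets over many hands, which this kind of per-hand arithmetic says nothing about.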
On March 13 2016 11:30 BisuDagger wrote: Has anyone considered what a game of go would be like if it was AlphaGo vs AlphaGo?
This is how AlphaGo was trained. After an initial phase of learning through playing on an online human server, it has been playing millions of games versus itself in the cloud.
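As a tiny illustration of the self-play idea (and nothing like AlphaGo's actual pipeline, which pairs deep networks with tree search at vast scale), here is a toy agent that learns single-pile Nim purely by playing against itself; all names and constants are made up for the sketch:

```python
import random

def self_play_nim(pile=10, episodes=5000, eps=0.1, lr=0.1):
    """Toy self-play: learn a value table for one-pile Nim
    (take 1-3 stones; whoever takes the last stone wins)."""
    V = {0: 0.0}  # V[n] = est. win chance for the player to move with n stones
    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if random.random() < eps:           # explore occasionally
                m = random.choice(moves)
            else:                               # leave opponent the worst state
                m = min(moves, key=lambda m: V.get(n - m, 0.5))
            history.append(n)
            n -= m
        # The player who just moved took the last stone and won;
        # walk back through the game, alternating winner/loser targets.
        for i, s in enumerate(reversed(history)):
            target = 1.0 if i % 2 == 0 else 0.0
            v = V.get(s, 0.5)
            V[s] = v + lr * (target - v)
    return V

V = self_play_nim()
# Multiples of 4 are theoretically lost for the player to move:
print(V[4] < 0.5 < V[3])
```

After a few thousand games against itself it discovers the multiples-of-4 losing positions without ever being told the theory, which is the same loop (self-play, then learn from the outcome) scaled down enormously.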
The main advantage a human player would have is exploiting fog of war and strategies that the AI has never seen before. AI will optimize for everything it already knows, it can't optimize for things it doesn't know.
There are other ways to exploit the AI's weaknesses also, for example making seemingly illogical decisions (such as building a CC in the enemy's natural, only to cancel it later); it might trick the AI into believing something is happening that really isn't, causing it to do completely the wrong thing.
The main issue will be that any advantage the human gains has to overcome the incredible micro and macro advantage of the AI's perfect mechanics.
To make the competition fair, it would be sensible to limit the AI to hardware inputs and outputs, meaning it has to read from the monitor and play through a mouse and keyboard. You might then also want to set an APM limit, for example 300.
This will force the AI to work with what a human has available to him, and test the AI's strategic and tactical abilities, rather than raw mechanics.
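An APM cap like the one suggested above would be straightforward to enforce in software; a token-bucket sketch (class name, burst size, and numbers are all made up for illustration):

```python
class ApmLimiter:
    """Token bucket: allows at most `apm` actions per minute,
    with a small burst allowance; timestamps are in seconds."""
    def __init__(self, apm=300, burst=5):
        self.rate = apm / 60.0      # tokens refilled per second
        self.burst = float(burst)   # max tokens banked at once
        self.tokens = float(burst)
        self.last = 0.0

    def try_act(self, now):
        elapsed = now - self.last
        self.last = now
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = ApmLimiter(apm=300, burst=5)
# Five instant actions pass on the stored burst; the sixth is refused:
print([limiter.try_act(0.0) for _ in range(6)])
# [True, True, True, True, True, False]
```

Whether capping inputs this way actually makes the match "fair", or just arbitrarily handicaps the machine, is of course exactly the debate in this thread.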
On March 13 2016 18:00 Buddhist wrote: The main advantage a human player would have is exploiting fog of war and strategies that the AI has never seen before. AI will optimize for everything it already knows, it can't optimize for things it doesn't know.
There are other ways to exploit the AI's weaknesses also, for example making seemingly illogical decisions (such as building a CC in the enemy's natural, only to cancel it later); it might trick the AI into believing something is happening that really isn't, causing it to do completely the wrong thing.
But isn't the point of AlphaGo exactly the opposite? It would "know" that the cc-cancel is fake and would play accordingly because it learned it from previous games?
That's not how it works, though. The AI can optimize for things that could 'potentially' be happening in fog of war just like a human can (or better) because it would have access to all the replays of previously played games fed to it. Tricking AIs with seemingly illogical decisions is what people tried a lot in games like Chess and it doesn't work -- a properly built AI will be able to predict potential outcomes of whatever shenanigans you're doing and react accordingly.
It would consider "the best option" for its opponent. So if it's an obvious fake, I am pretty sure it would act accordingly.
Also since it can literally react within no time, it will be incredibly tough to keep it from having next to perfect map knowledge at all times.
Limiting the micro might be necessary, but it would set arbitrary limits. You could do that, but it is weird. The reason I feel this is such an amazing thing for Go is that it has the potential to show "us humans" the boundaries of what is possible. Putting arbitrary limits on it would completely destroy that point.
You can only play mindgames vs something that knows it can be tricked.
If you build a CC and cancel it, the only info you give it is that you have spent 400 minerals less than you normally would. This allows it to cut corners.
The point is exactly that an AI plays like a robot. It won't panic or overreact like a human. When a human makes a mistake, they overcompensate the next game, making another mistake.
An AI can more easily find the optimal path because it has no human bias.
I do agree you can adapt to a bad static AI in an RTS. You can see through its patterns very quickly. If the AI is static, you can do it in 3 games.
Hearthstone is child's play. I would love to watch an AI try to play a Vintage (MtG) Grixis control mirror against LSV. Even a Caw-Blade Standard mirror would be quite fun. It would have to be a mirror, since that makes it an even playing field, unless you had something close to 50/50 (like, say, BGx vs UWx control). You can't have the AI play Tron against UG Infect; even with perfect information and perfect play it would still lose 65%+ of the time.
In the more complex formats of Magic there are simply too many branching decisions and too much asymmetric information for the AI to "dominate" the way it does in perfect-information games like Chess and Go. There's a reason poker players like David Williams and Efro love playing Magic.
Lee resigns just a few moves away from the end (I think?), so we don't get an official count. It seems AlphaGo was winning by just a few points, but apparently that is how it plays: win small, lose big.
On March 15 2016 18:28 nayumi wrote: well Lee might go down in the history of mankind as the only human being to ever beat Alphago ... i guess that's an achievement
Good point. The last human to take a map from the best computer.
And in 2 years people will say: "But [insert current champion] could beat Alphago no problem. Lee just played bad."
Ke Jie in China is already claiming that he can beat it. I really hope Google takes up that challenge. My money is on AlphaGo.
Amazing. Completely self-taught? 100% artificial intelligence? I've had a fascination with Go since reading and watching HnG. Amazing that this is actually happening today, when all I could read back then was how it was still impossible for computers to beat humans at Go. So computers will be the first to achieve the Hand of God...
AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours.
The repurposed AI, which has repeatedly beaten the world’s best Go players as AlphaGo, has been generalised so that it can now learn other games. It took just four hours to learn the rules to chess before beating the world champion chess program, Stockfish 8, in a 100-game match up.
AlphaZero won or drew all 100 games, according to a non-peer-reviewed research paper published with Cornell University Library’s arXiv.
“Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi [a similar Japanese board game] as well as Go, and convincingly defeated a world-champion program in each case,” said the paper’s authors that include DeepMind founder Demis Hassabis, who was a child chess prodigy reaching master standard at the age of 13.
“It’s a remarkable achievement, even if we should have expected it after AlphaGo,” former world chess champion Garry Kasparov told Chess.com. “We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all.”
Computer programs have been able to beat the best human chess players ever since IBM’s Deep Blue supercomputer defeated Kasparov on 12 May 1997.
DeepMind said the difference between AlphaZero and its competitors is that its machine-learning approach is given no human input apart from the basic rules of chess. The rest it works out by playing itself over and over with self-reinforced knowledge. The result, according to DeepMind, is that AlphaZero took an “arguably more human-like approach” to the search for moves, processing around 80,000 positions per second in chess compared to Stockfish 8’s 70m.
After winning 25 games of chess against Stockfish 8 starting as white (with first-mover advantage), winning a further three starting as black, and drawing the remaining 72 games, AlphaZero also learned shogi in two hours before beating the leading program Elmo in a 100-game match-up. AlphaZero won 90 games, lost eight and drew two.
The new generalised AlphaZero was also able to beat the "superhuman" former version of itself, AlphaGo, at the Chinese game of Go after only eight hours of self-training, winning 60 games and losing 40.
While experts said the results are impressive, and have potential across a wide range of applications to complement human knowledge, professor Joanna Bryson, a computer scientist and AI researcher at the University of Bath, warned that it was "still a discrete task".
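The search-rate gap the article quotes (80,000 positions per second versus Stockfish 8's 70 million) is easy to put in perspective with a quick calculation:

```python
# Figures as quoted in the article above:
alphazero_nps = 80_000        # positions evaluated per second
stockfish_nps = 70_000_000

ratio = stockfish_nps / alphazero_nps
print(f"Stockfish searched ~{ratio:.0f}x more positions per second")
```

In other words, Stockfish looked at roughly 875 times as many positions and still lost, which is the sense in which DeepMind calls AlphaZero's search "more human-like": far fewer positions, each judged far better.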
After reading up on the research around Alphago and its subsequent iterations, it feels like the framework can be generalized to solve most discrete decision making problems.
It can also be used to evaluate if a game is balanced or not. I'd love to see it applied to other games such as Hearthstone, Poker, etc in future iterations.
On December 15 2017 04:54 Glacierz wrote: After reading up on the research around Alphago and its subsequent iterations, it feels like the framework can be generalized to solve most discrete decision making problems.
It can also be used to evaluate if a game is balanced or not. I'd love to see it applied to other games such as Hearthstone, Poker, etc in future iterations.
as far as my understanding goes, we might be getting much further than that, to the point of being able to solve *most* simulatable games. check out their Atari work, which is where it first got really interesting (inputs = pixels on screen only)
i'm 100% certain they could use the same framework to solve poker and a lot of similar games within that realm (Hearthstone included), though solving Hearthstone will of course be not quite as interesting haha
also, i use the word "solve" loosely, in the same way that AlphaZero is "solving" chess and Go
Documentary is now up. Just finished it, and it was amazing. Loved the footage of the Korean commentators, especially during the first game when Lee was blocked by AlphaGo for the first time and the woman was outraged: "How dare it interrupt him!" Hilarious.
What's absolutely incredible is how much simpler AlphaZero is compared to its previous iterations, and yet it is many orders of magnitude stronger (to give you an idea, it crushes the Lee version 100-0).
This step is just as huge an achievement as AlphaGo itself, and it is essentially a fair claim that DeepMind have an algorithm that "solves" a big portion of all conceivable games within reason.
No (man-made) computer will ever beat me in a game of Rocket League 1v1 hoops taking exclusively pixels as input (i.e., not hooked deep into the game logic itself, granting it obscene levels of predictive omniscience).
I submit readily to the idea that I currently exist within the framework of a quantum computer universe, and that it is simulating a fraction of the current population and that I have already lost games of Rocket League to such an advanced intelligence.
On January 03 2018 01:20 ItsFunToLose wrote: No (man-made) computer will ever beat me in a game of Rocket League 1v1 hoops taking exclusively pixels as input(ie, not hooked deep into the game logic itself granting it obscene levels of predictive omniscience.)
I submit readily to the idea that I currently exist within the framework of a quantum computer universe, and that it is simulating a fraction of the current population and that I have already lost games of Rocket League to such an advanced intelligence.
What makes you think this? Rocket League (regardless of which game mode) is quite simplistic with regards to physics, rules, and total complexity. Especially compared to a game like Starcraft.
Rocket League would take some visual recognition if it has to use the screen input, but the rest of it is relative child's play. Just basic physics calculations.
From an AI perspective, it would be 100% possible to make a bot that would be able to beat you in rocket league only using pixel screen inputs
Such a task however, would probably require the entire deepmind team and google's support working on the problem for a fairly significant amount of time
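"Just basic physics calculations" is fair for the core of it: for instance, predicting when the ball returns to the ground under constant gravity is a single quadratic. A toy sketch (the gravity constant here is a placeholder, not Rocket League's actual value, and the real game adds drag and bounces):

```python
import math

def time_to_ground(height, v_up, g=650.0):
    """Positive root of h + v*t - (g/2)*t^2 = 0: when the ball lands.
    Constant-gravity toy model with illustrative units."""
    disc = v_up * v_up + 2.0 * g * height
    return (v_up + math.sqrt(disc)) / g

# Kicked straight up at 650 u/s from the ground with g = 650 u/s^2:
print(time_to_ground(height=0.0, v_up=650.0))  # 2.0 seconds
```

The genuinely hard parts are the visual recognition from raw pixels and the continuous control, not this kind of arithmetic.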
On January 03 2018 01:20 ItsFunToLose wrote: No (man-made) computer will ever beat me in a game of Rocket League 1v1 hoops taking exclusively pixels as input(ie, not hooked deep into the game logic itself granting it obscene levels of predictive omniscience.)
I submit readily to the idea that I currently exist within the framework of a quantum computer universe, and that it is simulating a fraction of the current population and that I have already lost games of Rocket League to such an advanced intelligence.
yeah a computer will never beat me in a fist fight either
Seems the entire point of this research is to help humans with things like folding proteins and designing molecules. It's going to be used to assist in developing new materials and drugs. Solving turn-based games helps with that. Solving real-time games like StarCraft seems useless.
Was it a good documentary, or something only someone interested in comp sci or go would enjoy?
It was a good documentary. It focused more on the story and the context behind the match than anything else; no comp sci knowledge is required to enjoy this one.