|
On December 06 2017 05:08 fishjie wrote:
On December 04 2017 16:48 Pulimuli1 wrote: hello everyone im quite new to chess, been playing since may this year, im bouncing between 1300 and 1400 most of the time. Would be nice to have some practice partners and stuff
Hello, if you play on chess.com you can add me, jieyangh. I'm 1300 blitz and 1600 on daily chess. What I've learned is that the ratings (at least for daily chess) are badly overinflated. I just beat a 1500 who left his queen hanging to my knight. I wasn't even trying to win his queen, I was trying to chase her away. No fork, no tactics, nothing, just a bad blunder. No way even a 1000 would do that in a real live chess tournament. I'm guessing a lot of people just play daily chess like blitz, take a few seconds per move, and play on breaks.
I play pretty much only blitz games; the only time I play without a timer is with a friend on a real board. But will do
|
This AlphaZero crushing Stockfish is fascinating
|
On December 07 2017 14:02 don_kyuhote wrote: This AlphaZero crushing Stockfish is fascinating
That neural networks perform so well on a problem that isn't nearly as uniform as Go is pretty surprising. I have some reservations about the win over Stockfish though. They haven't been very specific about the hardware Stockfish was running on, but a 1 GB hash is minuscule, and even using the provided figure of 70,000 kN/s you still end up with only about a trillion FLOPS, which is hundreds of times less than you can achieve with 4x 180-teraflop TPUs. So without knowing further details it's possible that they ran AlphaZero on hardware a thousand times superior to what Stockfish was using (though it's kind of hard to compare).
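For a rough sense of scale, here's the back-of-the-envelope version of that comparison. The FLOPs-per-node figure is a pure assumption picked to match the "about a trillion" estimate; only the node rate and the TPU spec come from the discussion:

```python
# Rough compute-budget comparison: Stockfish's node rate vs AlphaZero's TPUs.
# All numbers are assumptions or quoted figures, not from the paper itself.

nodes_per_sec = 70_000_000    # the quoted ~70,000 kN/s for Stockfish
flops_per_node = 15_000       # rough guess at the work done per node searched

stockfish_flops = nodes_per_sec * flops_per_node   # ~1e12, "about a trillion"
tpu_flops = 4 * 180e12        # 4 first-gen TPUs at a claimed 180 TFLOPS each

ratio = tpu_flops / stockfish_flops                # several hundred times
print(f"Stockfish ≈ {stockfish_flops:.1e} FLOPS, TPUs ≈ {tpu_flops:.1e} FLOPS")
print(f"hardware ratio ≈ {ratio:.0f}x")
```

Even with generous assumptions for Stockfish, the raw arithmetic throughput gap is enormous, which is why the hardware question keeps coming up.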
|
There are also the smaller issues of no opening book/tablebases and the 1 minute per move time control. It sounds absurdly impressive, but the fact that they don't seem to address the hardware question in the paper makes me a little suspicious.
|
This looks really bad for humanity. One of the longstanding problems with AI has been the ability to generalize. Even when Watson beat Ken Jennings at Jeopardy, which was a ridiculously difficult task, it wasn't necessarily able to generalize to other tasks (although Watson is being used in the medical field now). But if the reports are true, AlphaZero did not have any training data, used reinforcement learning, and the only thing the programmers had to teach it was the rules of chess, nothing else. Quite different from the hand-crafted expert systems and evaluation functions that AI typically uses. The truly astonishing thing here is that it didn't need any training data.
|
Having played chess for 15 years and having worked with pretty much every engine that ever made top5 in computer chess, this sounds really impressive to me no matter the hardware tbh
|
On December 07 2017 20:59 Orome wrote: There are also the smaller issues of no opening book/tablebases and the 1 minute per move time control. It sounds absurdly impressive, but the fact that they don't seem to address the hardware question in the paper makes me a little suspicious.
Stockfish hasn't been playing with an opening book for years though. I'm not sure how big of an issue the hash size is. CPU power is probably irrelevant because traditional engines just cap out at a few dozen cores.
|
On December 08 2017 09:10 FO-nTTaX wrote: Having played chess for 15 years and having worked with pretty much every engine that ever made top5 in computer chess, this sounds really impressive to me no matter the hardware tbh
Yeah agreed. That neural networks can be applied so successfully to a non-uniform problem like this is quite amazing. All the vagueness about hardware does make the win over Stockfish feel less 'legitimate' though.
On December 08 2017 09:46 Nyxisto wrote: Stockfish hasn't been playing with an opening book for years though. I'm not sure how big of an issue the hash size is. CPU power is probably irrelevant because traditional engines just cap out at a few dozen cores.
What? Stockfish definitely uses an opening book normally.
|
On December 08 2017 12:49 ZigguratOfUr wrote: What? Stockfish definitely uses an opening book normally.
That's not true. Check for example the 'Rybkamura' (Rybka + Nakamura) games against Stockfish; that was 2014, and I'm pretty sure they haven't been using opening books since then.
You can even test it yourself and run Stockfish with and without an opening book against each other. No difference. In the recent TCEC tournaments they haven't played with opening books either, and just chose the first two opening moves.
|
On December 08 2017 14:09 Nyxisto wrote: That's not true. Check for example the 'Rybkamura' (Rybka + Nakamura) games against Stockfish; that was 2014, and I'm pretty sure they haven't been using opening books since then. You can even test it yourself and run Stockfish with and without an opening book against each other. No difference. In the recent TCEC tournaments they haven't played with opening books either, and just chose the first two opening moves.
TCEC doesn't use opening books because it's a contest where the organisers set the openings. And while computers nowadays play fine moves without opening books, people still use them normally to save time (which wouldn't have been relevant in this case) and also because computers do occasionally still make mistakes in the opening. In TCEC 9, iirc, one of the defeats in the French was attributed to a slightly inferior first move.
Though in any case, opening books aren't the most important thing to consider in AlphaZero vs Stockfish.
|
On December 08 2017 12:49 ZigguratOfUr wrote: Yeah agreed. That neural networks can be applied so successfully to a non-uniform problem like this is quite amazing. All the vagueness about hardware does make the win over Stockfish feel less 'legitimate' though.
Could you explain what a uniform / non-uniform problem is? When googling I mostly found discussions about school uniforms lol
|
On December 08 2017 09:46 Nyxisto wrote: Stockfish hasn't been playing with an opening book for years though. I'm not sure how big of an issue the hash size is. CPU power is probably irrelevant because traditional engines just cap out at a few dozen cores.
How much do you know about this (the hardware)? I've seen a lot of contradictory info on it since yesterday.
And yeah, I don't disagree that it's incredibly impressive no matter what, I just don't think this competition had Stockfish at its best (also didn't use the newest tournament version) and I'm curious what AlphaZero's edge (if any) would be in an official match with a booked-up Stockfish at normal time controls.
Also good to remember that apparently 3 days of computing time on the hardware Google uses (from what I read it was 3 days total; 4 hours is how long it took to reach Stockfish's level) equates to decades of computing time on an ordinary computer. It helps me cope with the feeling of inadequacy I get when I think of the grand game of chess getting 'figured out' in 4 hours. :D
|
On December 08 2017 18:46 GoloSC2 wrote: Could you explain what a uniform / non-uniform problem is? When googling I mostly found discussions about school uniforms lol
Go is a mostly uniform problem: every point either has a white stone, a black stone, or no stone, and the whole board is basically the same except for the edges, which are a bit different. If the neural net has some tactic or structure reinforced in one corner of the board, it can apply it to the rest of the board. Go is a problem that someone would look at and think is well suited to neural nets.
Shogi and chess, however, have a bunch of different pieces that all move and behave differently, which makes it harder for neural nets to "understand" a particular position and makes them less obviously suitable for the problem. This is why it's big news.
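One way to picture the difference is in how a position gets fed to a net. This is just an illustrative sketch, not DeepMind's actual input representation: a Go position can be encoded with essentially two binary planes (black stones, white stones), while a chess position needs one plane per piece type and colour, twelve planes, because every piece behaves differently and the net has to tell them apart.

```python
# Illustrative board encodings (an assumption for explanation, not the
# paper's exact input format).

def empty_planes(n_planes, size):
    """Build n_planes binary grids of size x size, all zeros."""
    return [[[0] * size for _ in range(size)] for _ in range(n_planes)]

go_planes = empty_planes(2, 19)            # plane 0: black, plane 1: white

PIECES = ["P", "N", "B", "R", "Q", "K",    # white pawn..king
          "p", "n", "b", "r", "q", "k"]    # black pawn..king
chess_planes = empty_planes(len(PIECES), 8)

def set_piece(planes, piece, rank, file):
    """Mark a (hypothetical) piece on its own plane."""
    planes[PIECES.index(piece)][rank][file] = 1

set_piece(chess_planes, "K", 0, 4)   # white king on e1
set_piece(chess_planes, "q", 7, 3)   # black queen on d8

print(len(go_planes), len(chess_planes))   # 2 12
```

A convolutional net can share the same learned filters across the whole Go board, whereas in chess the filters have to account for twelve distinct piece channels interacting, which is part of why chess looked like a harder fit.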
|
On December 09 2017 00:08 Orome wrote: How much do you know about this (the hardware)? I've seen a lot of contradictory info on it since yesterday. And yeah, I don't disagree that it's incredibly impressive no matter what, I just don't think this competition had Stockfish at its best (it also didn't use the newest tournament version), and I'm curious what AlphaZero's edge (if any) would be in an official match with a booked-up Stockfish at normal time controls.
Yeah, again, I don't think the opening books are that big of a deal; they can save you time in the opening, but if you let Stockfish run nowadays you almost always get out of the opening completely normally. The hash size is a potential issue: (time * knodes / 100) is usually given as optimal, which would be substantially more than just a gig on their setup.
But overall I don't think it matters that much. They should have been more thorough just so that it doesn't look like they're trying to fudge the results, but even with their hardware Stockfish plays pretty damn near its full strength, and the fact that AlphaZero won with ~30 wins, 70 draws and no losses indicates that it's substantially stronger already.
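Plugging the match settings into that (time * knodes / 100) rule of thumb, read as seconds per move times search speed in kN/s divided by 100 giving a hash size in MB (a common reading of the rule, and folklore rather than anything from the paper):

```python
# Rule-of-thumb hash size for the match settings. The formula's reading
# (seconds * kN/s / 100 = MB) is an assumption, not an official guideline.

seconds_per_move = 60        # the match's roughly 1-minute-per-move control
knodes_per_sec = 70_000      # the quoted ~70,000 kN/s search speed

recommended_hash_mb = seconds_per_move * knodes_per_sec / 100
print(f"rule-of-thumb hash ≈ {recommended_hash_mb / 1024:.0f} GB")  # ≈ 41 GB
# Stockfish was given 1 GB, i.e. roughly a fortieth of that.
```

On that reading, the 1 GB the match used really is far below what the rule suggests for a 64-thread engine at that speed.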
|
On December 09 2017 03:15 Nyxisto wrote: Yeah, again, I don't think the opening books are that big of a deal; they can save you time in the opening, but if you let Stockfish run nowadays you almost always get out of the opening completely normally. The hash size is a potential issue: (time * knodes / 100) is usually given as optimal, which would be substantially more than just a gig on their setup. But overall I don't think it matters that much. They should have been more thorough just so that it doesn't look like they're trying to fudge the results, but even with their hardware Stockfish plays pretty damn near its full strength, and the fact that AlphaZero won with ~30 wins, 70 draws and no losses indicates that it's substantially stronger already.
30 wins and 70 draws is only a ~100 Elo difference on paper, tbf. So handicapping Stockfish (and I don't share your opinion about it being nearly at full strength; 64 threads with only a 1 GB hash is a huge problem) certainly made a difference.
One of the original creators of Stockfish made quite an interesting statement about this: https://www.chess.com/news/view/alphazero-reactions-from-top-gms-stockfish-author.
Ultimately, the fact that AlphaZero trained itself from nothing but the rules, and used a neural network to do so, is much more important than how strong it is compared to Stockfish.
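The ~100 Elo figure falls straight out of the standard Elo expected-score formula; a quick sketch of the arithmetic:

```python
import math

# The standard Elo model maps an expected score s in (0, 1) to a rating gap:
#   s = 1 / (1 + 10**(-diff/400))   =>   diff = -400 * log10(1/s - 1)

def elo_diff(score):
    """Rating gap implied by an expected score in (0, 1)."""
    return -400 * math.log10(1 / score - 1)

# ~30 wins and 70 draws out of 100 means AlphaZero scored 65/100:
score = (30 * 1.0 + 70 * 0.5) / 100      # 0.65
print(f"implied gap ≈ {elo_diff(score):.0f} Elo")   # ≈ 108, i.e. roughly 100
```

A 100-point gap is large between top humans but fairly modest between engines, which is why the hardware handicap matters for interpreting the result.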
|
The point Tord makes about time management in the middlegame is valid, but then again AZ seemed to outperform SF on time. It'll probably be hard to even agree on a set of optimal conditions. I think Nakamura especially is way too defensive here. Without any optimisations and only 4 hours of training time, AZ was essentially able to compete with Stockfish. Given that the approach is inherently more scalable than Stockfish's feature engineering, I'm not sure Stockfish would be better off overall if both teams spent the same amount of time on optimisations.
|
I don't understand why they only let it run for 4 hours. As a proof of concept, even 12 hours to compete with Stockfish would be incredibly impressive, and it would actually give a better idea of AZ at full strength. For instance, I heard some comment about how it eventually settled on the Berlin Defence, and someone was using that to promote the opening, but that is somewhat arbitrary if you stop AZ in the middle of its learning process.
|
On December 10 2017 06:11 Grumbels wrote: I don't understand why they only let it run for 4 hours. As a proof of concept, even 12 hours to compete with Stockfish would be incredibly impressive, and it would actually give a better idea of AZ at full strength. For instance, I heard some comment about how it eventually settled on the Berlin Defence, and someone was using that to promote the opening, but that is somewhat arbitrary if you stop AZ in the middle of its learning process.
The number of hours trained isn't particularly relevant; the number of training steps is what matters. AlphaZero took 300k steps of training at chess. You could call any cut-off in the learning process arbitrary though; there's no such thing as AlphaZero at "full strength", and it isn't obvious when it would stop improving. Possibly Google chose four hours because it sounds good in the news and improvements start tapering off around that point.
About opening books, this analysis of Game 7 was quite interesting (https://www.reddit.com/r/chess/comments/7imhcw/a_fair_match_alphazero_stockfish_game_7_analyzed/) as Stockfish makes a mistake in the opening that it wouldn't make with an opening book (and also wouldn't make if Stockfish wasn't so poorly configured).
|
On December 10 2017 07:26 ZigguratOfUr wrote: The number of hours trained isn't particularly relevant; the number of training steps is what matters. AlphaZero took 300k steps of training at chess. You could call any cut-off in the learning process arbitrary though; there's no such thing as AlphaZero at "full strength", and it isn't obvious when it would stop improving. Possibly Google chose four hours because it sounds good in the news and improvements start tapering off around that point.
One of the DeepMind founders is a former chess talent afaik, so I just find it curious that they did such a sloppy show match. Chess is more prominent in the West than Go, and has much higher quality engines to the best of my knowledge. Any cut-off point is arbitrary, and the ability to defeat Stockfish seems like a decent place to stop. But it seemed like they did not defeat Stockfish fairly, and they probably did not let the AI develop to its true potential, unless I missed some info on its Elo development.
Because it would also be really interesting from a chess point of view to have a new type of engine that's better suited to closed positions, odd material evaluations and so on. I felt like the match they did was just a proof of concept that they could beat chess, but they didn't necessarily give back to the chess community. Maybe I'm whining, but I kinda feel like they could have done some other showmatches or events as well, and allowed limited access to the engine for a period, etc.
I mean, imagine they finish their SC2 AI, do a match where they beat Innovation 3-0 using a completely revolutionary strategy, and then disappear from the scene, off to conquer DotA and Hearthstone or whatever. It just seems a bit predatory on some level, like we are just some pebbles they have to step over on the road to world domination.
|
AlphaZero seems more like a player than an evaluation engine... you don't really know if what it's doing is better or worse until it secures the result. With a traditional engine, you can play a few moves from a position and read the evaluation (oh, +1.6, this is obviously a good spot), but with AlphaZero, I dunno, it seems you would have to play out the game against the best known alternative.
Chess enthusiasts were using Stockfish to analyze the positions in the games. Kinda funny, eh?
Also, I think the ball is now in Stockfish's court to call for a rematch.
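For what it's worth, there are informal logistic mappings between a centipawn evaluation and an expected game outcome that give a feel for how the two kinds of output compare; the scale constant below is a common assumption in such mappings, not anything official from either engine:

```python
# Hypothetical mapping from a centipawn eval to an expected score, using a
# logistic curve. The scale constant is an assumption for illustration only.

def expected_score_from_cp(cp, scale=400):
    """Expected score in (0, 1) implied by a centipawn eval."""
    return 1 / (1 + 10 ** (-cp / scale))

print(f"+1.6 pawns ~ {expected_score_from_cp(160):.0%} expected score")  # 72%
```

So a traditional "+1.6" and a win-probability-style output are at least roughly interconvertible, even if AlphaZero never reports centipawns itself.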
|