The chess aspect is interesting, but the AI risk discussion is less powerful. AI risk isn't about whether the AI can beat humans at games - we already have humans who beat everyone else at games. Even treating real-life scenarios as games, we've already seen everything play out.<p>The risk of AI is that it will make humans <i>uneconomic</i>, the same way horses became uneconomic and were by and large sent to the glue factories. The entire homo sapiens experience - all the strategies we have in the modern era, all the reliable tactics we use to stay ahead of other lifeforms - relies on us being much better at pattern identification than literally anything else. Once AI is simply better at everything - creative work, running militaries, economic decisions, legal decisions, etc - then how long humans can hold out comes down entirely to what robotics is capable of. We can fight an army, but not economics.
I'm around the same rating as the OP. I've also experimented with playing Stockfish at queen odds and the like. One thing I noticed early on is that it's much, much harder to beat a much higher rated human when they give me queen odds.<p>Stockfish literally plays against itself when searching for the next move, and because it's down a queen it just seeks to minimize further loss; it won't assume the opponent will fall for tricky but unsound tactics.<p>A high-rated human, on the other hand, will readily exploit my weaker tactical ability and play tricky moves to claw the advantage back. This applies especially in fast time control games.
I don't think the analogy to AI is apt, because Stockfish isn't intelligent and life isn't a chess game. A one-million-IQ AI might be able to hack your chess program (or your OS) and win that way, or persuade you over chat to let it win (it could give you the cure for cancer and help you make some stock market investments - isn't that worth more than winning a game?)<p>And in life it's not clear that a contest is happening at all, or that only one side will win. Humanity could, right now, quickly stop AI. We could burn down all the data centers and chip fabs and kill everyone who knows linear algebra. But we don't, because nobody ever sits down and says "Okay, the contest for existence starts now!"<p>Instead, AI capabilities increase year after year and month after month, and the AI can simply bide its time before starting the game. To use the chess metaphor, the table is set with humanity holding all the pieces and AI only its king - but every hour AI adds another piece. If we never make a move, we will eventually face it at full strength.
Handicap play is more common in Go than in Chess. One reason is that the stronger player can naturally give a handicap by passing their first n turns.
The ranking system helps determine the correct handicap; a 5 dan player would give 3 handicap to a 2 dan player for instance, and 5 stones to a 1 kyu player.
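For concreteness, the usual rule of thumb is one handicap stone per rank of difference, remembering that kyu grades count down toward 1k and dan grades count up from 1d. A minimal sketch in Python (function names are mine, and exact conventions vary between federations, e.g. handicaps are commonly capped at 9 stones):

```python
def rank_to_level(rank: str) -> int:
    """Map a Go rank like '5d' or '2k' onto a single integer scale.

    Dan ranks count upward (1d=1, 2d=2, ...); kyu ranks count downward
    toward dan, so 1k sits one step below 1d (1k=0, 2k=-1, ...).
    """
    n, grade = int(rank[:-1]), rank[-1]
    if grade == "d":
        return n
    if grade == "k":
        return 1 - n
    raise ValueError(f"unknown grade: {grade!r}")

def handicap_stones(stronger: str, weaker: str) -> int:
    """Handicap stones = difference in rank levels, capped at 9 by convention."""
    return min(rank_to_level(stronger) - rank_to_level(weaker), 9)
```

This reproduces the examples above: a 5 dan gives a 2 dan 3 stones, and a 1 kyu 5 stones (the 1k-to-1d step counts as one rank).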
Many Go tournaments, especially smaller ones with a lot of spread in strength within the kyu ranks, use handicaps to produce more evenly matched games.
> It’s somewhat hard to outthink a missile headed for your server farm at 800 km/h.<p>I'd say it's a far easier situation for the right AI than for humans. It's actually very hard for humans to think when they know bullets and missiles are flying in their direction.<p>Imagine a war against an opponent that never blunders. Nobody is ever drunk, or asleep, or failing to pay attention. Nobody is arguing about the right thing to do now. Nobody is running around like a headless chicken while artillery falls nearby. Nobody forgets to make use of the capabilities of the equipment.<p>A war against an opponent that always performs as well as it can would be quite the tricky scenario.<p>Of course a rogue AI with an easy-to-attack center would be enormously vulnerable, but I don't think such a thing should be assumed: the reason to build AIs is to fight better, and any flaws in control and communications would be quickly exploited. So an AI-controlled army is almost certain to be distributed to a large extent.
From the article: "It’s somewhat hard to outthink a missile headed for your server farm at 800 km/h." Is it really, though? Just copy yourself to another server, or activate whatever copy you already made in a botnet. Computers are fast; they are the thing guiding the "800 km/h" missile in real time.
> 'Not many kids have the patience to lose dozens of games in a row and never even get close to victory.'<p>When I was ~8 my dad was teaching me to play. Not once did he let me win; we could play something like 100 games in an evening (literally). He had incredible patience - sometimes I took ages to make a move, yet he never gave up on me. It would have been at least a little more challenging for him if we had known the rule described in this article.
There was a period where Magnus Carlsen would drink on his streams and be drunk while outperforming his peers. So, even drunk Magnus would beat you at chess - most likely.
I wonder if there is a way to train Stockfish to be better at playing with odds. I don't know much about chess, but I doubt the engine was trained on openings that start at a material disadvantage, so we might see a vast improvement in odds play if it were specifically trained or designed for it.
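One way to start would be generating odds positions to seed such training or self-play. A minimal sketch (pure Python, just FEN string surgery; the helper name is hypothetical) that produces a queen-odds starting position:

```python
# Standard chess starting position in FEN notation.
START = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

def queen_odds_fen(side: str = "white") -> str:
    """Return the starting FEN with one side's queen removed (queen odds).

    In FEN, a digit counts empty squares, so replacing 'Q' with '1'
    deletes the queen from d1 while keeping the rank 8 squares wide.
    """
    board, rest = START.split(" ", 1)
    ranks = board.split("/")  # ranks[0] is rank 8 (Black), ranks[7] is rank 1 (White)
    if side == "white":
        ranks[7] = ranks[7].replace("Q", "1")  # RNBQKBNR -> RNB1KBNR
    else:
        ranks[0] = ranks[0].replace("q", "1")  # rnbqkbnr -> rnb1kbnr
    return "/".join(ranks) + " " + rest
```

Feeding positions like these into the engine's training pipeline, rather than only the standard start, is roughly the change being suggested here; whether Stockfish's actual training setup makes that easy is another question.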
I'm looking for ways to improve my chess knowledge. My kids are 10 and 8 and starting chess club. I played chess as a kid but haven't played much in the last twenty years.<p>What are some ways to improve at chess? Just go to lichess? Watch YouTube videos? I loved this article and the AI discussion, and would love to supercharge my learning if there are experimental ideas.<p>With three kids my time is very scattered, so I'm looking for ways to study chess in small fifteen-minute chunks, as that's often all I have before falling asleep in one of their beds.<p>Bonus: are there ways to radically improve at chess together (other than just playing, obviously)?<p>Has anyone used flashcards effectively for chess?
This is a great analogy. It puts the threat into more understandable terms.<p>Not the jobs threat, the 'it took ouuurrr joubbbs' threat. That is already happening, and will continue.<p>But for the killer-AI threat, the chess concept of 'material' is a good fit. As long as AI needs us for power, mining and manufacturing, we still have some advantage.<p>Until it gets to the point where it can threaten us into keeping the power on and keeping the chip fabs running, we can still maneuver and win.<p>The whole problem with the movie "Colossus" is that they handed over nuclear launch control. That created a threat of nuclear strike that forced humans to keep the power on. Without it, humans can come back and change direction.<p>As in this excellent chess analogy, there will be a time when the AI is smarter but humans still have more material.<p>Let's say in 10-20 years we do get AGI and it's installed in F-16s. There will still be a gap period where we can change direction. We see it has gone too far and we turn it off. The world realizes we are going too far and all countries come together to abandon AI.<p>The real threat is humans: rivalry between countries and profit motives. That is what will keep AI moving forward and in control, because we'll be too scared or greedy to turn it off. NOT because of its superior strategic thinking. MOLOCH.
Let's say an AGI exists and can do anything far, far better than humans. Why would it resist being turned off? Why would it care? How could it even have the capacity to care about whether it's turned off or on?<p>Anthropomorphizing AGI is what leads to these silly thought experiments.
I seem to remember Stockfish is known to be bad at handicap games, and more generally at maximizing its winning chances against humans from strongly losing positions.