The superficial mathematical essence of chess is a min/max game, but that's only the trivial part. Just to gather the latest numbers, I've had a funny experience with DeepFritz 8. I just made a draw with the beast (I have to admit that it rarely happens these days) on a dual-processor Xeon at 2.4 GHz, 5 min + 2s blitz. DeepFritz 8 explores the search tree in parallel on as many processors as are available. The engine reported analyzing a staggering
1,567,000 tree nodes/second
I really cannot tell exactly how many nodes per second my poor old neuronal hardware manages, but I'd be very much surprised if I considered as many as 10 -- and that's because it's a blitz game. In a normal game I might do as little as 1 per second. OK, I am not a GrandMaster; my title is FIDE Master, although I play(ed) chess at a decent professional level in international tournaments. I even taught chess to kids with some success, and still do a little in my spare time. This page is about the world of human chess and computer chess, and how its lessons are applicable to other intellectual activities, and particularly to SoftwareEngineering.
So the main difference between a chess engine and a GrandMaster or a good human player is that GrandMasterEliminatesWrongMoves. In the min/max tree the chess engines will try to analyze as many nodes as possible, trying to explore the search space exhaustively. In principle they do a breadth-first exploration, with a little twist on it: there's a heuristic function that tells the engine to explore deeper on some paths because there is "tension" in the position. This avoids the situation where an engine stops the exploration of a path just one or two moves short of being mated or before losing material big time. Because of the time limit a complete search is not possible, so the engines limit the depth of the breadth-first search to what is appropriate for the particular CPU they are running on (if you want to beat a chess engine, put a little extra load on the OS so it starts swapping, and the engine will go nuts because it will not be able to complete the search it planned initially).
As opposed to a chess engine, which tries very hard to explore as many paths as possible, a grand master tries very hard not to explore that many; his intelligence is in selecting the paths that are really worth exploring, a process otherwise known in CS as pruning the search tree. And the human mind is absolutely fabulous at pruning.
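To make the engine's half of that contrast concrete, here is a minimal sketch of a depth-limited search with alpha-beta cutoffs and a crude "tension" extension, assuming the third-party python-chess library is available. The piece values, the extension rule and the depth are illustrative assumptions of mine, not what Fritz actually does.

  # A sketch of the machine-side search described above: depth-limited negamax
  # with alpha-beta cutoffs, plus a crude "tension" extension so the search
  # does not stop in the middle of an exchange.  Assumes the third-party
  # python-chess package; piece values and the extension rule are illustrative,
  # not Fritz's actual logic.
  import chess

  PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                  chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

  def material(board):
      # crude static evaluation: material balance from the side to move's view
      return sum((1 if p.color == board.turn else -1) * PIECE_VALUES[p.piece_type]
                 for p in board.piece_map().values())

  def alphabeta(board, depth, alpha=-10**6, beta=10**6):
      if board.is_checkmate():
          return -10**5                 # side to move is mated
      if board.is_stalemate():
          return 0
      if depth <= 0:
          return material(board)
      for move in board.legal_moves:
          # the "tension" twist: captures at the horizon get one more ply
          new_depth = depth if (depth == 1 and board.is_capture(move)) else depth - 1
          board.push(move)
          score = -alphabeta(board, new_depth, -beta, -alpha)
          board.pop()
          if score >= beta:
              return beta               # cutoff: the pruning the machine gets for free
          alpha = max(alpha, score)
      return alpha

  # e.g. score the initial position three plies deep
  print(alphabeta(chess.Board(), depth=3))

The point of the contrast is that the cutoffs above are essentially the only pruning the machine gets for free; everything a grand master prunes beyond that has to be bought with evaluation and extension heuristics like the toy one here.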
How does this pruning happen? Some anonymous contributor claimed that grand masters have a "database" of wrong moves stored in memory and, having good associative memory, recognize the wrong moves from the database. This is completely wrong. Chess players do use their memory quite a lot, but only in the opening (where it's exactly the other way around: we already know the good moves, and we might even play as many as 20 moves without blinking an eye, because it's a well-beaten path).
What is at work in pruning the wrong moves and selecting the promising moves worth investigating is what I call (I might be using the wrong terminology) HigherOrderPatterns. First-order patterns help us recognize characters, faces, etc. This is an area where AI has made quite promising progress and it is well understood mathematically. But a higher-order pattern is a pattern about patterns, or a pattern about second-order effects.
For example, you may program a chess engine to recognize a typical mate configuration. However, a good chess player will recognize, long before the mating configuration can even occur, that a certain plan will create the conditions for the mate configuration to occur. A chess player will also recognize similarities that go way beyond geometrical/mechanical transformations from one position to another.
For example, I can teach a kid that when two defending pieces cross paths in their lines of defence, you can sometimes win by sacrificing a piece on the square where those lines cross (a trick called interference). I can show him how it works in one particular position with 2 rooks being the defenders. Not only will he later be able to recognize this when a rook and a bishop are the defenders, but even more amazingly, having learnt 20 or so patterns of attack, he will be able to sense in a chess puzzle that that particular configuration is ripe for interference, out of all the combinational patterns he has learnt. And this talent is innate to humans; it is absolutely amazing. With a neural network you give it a ton of data, it crunches it, and it reaches a configuration that helps it do OCR or voice recognition. All a good 6-year-old kid needs is usually a handful of positions and the logical explanation, and voila, your higher-order pattern recognition is ready to fly. Truly talented kids (I've seen a few) may need as little as 1 position. They will look at it exhaustively, not being aware of the pattern, until they get the eureka moment, and then on their own come back with both the logical explanation and the capacity to apply it in the future.
And one of the most important higher-order pattern recognition faculties is the capacity to apply chess principles. Chess has a handful of simple principles that, if you really follow them, take you three quarters of the road to being a grand master. There are a few exceptions to those principles, and occasionally principles may conflict with each other in particular positions, so the chess player has to decide the best trade-off. When a less skilled player plays against a grand master or somebody with experience, one would think that the weaker player would try to stick as close to the principles as possible in order to be safe, while the grand master would try to be more "creative" in order to exploit his extra capacity. What is amazing is that in practice the opposite happens: the weak player is afraid of the grand master's extra knowledge and tries to steer the game onto unbeaten paths, therefore breaking the principles of chess, while the grandmaster just sticks to principles and plays as simply as possible, and more often than not he is not using even half of his brain in such a game. When players of comparable skill meet, the most important part is the creativity in applying those principles and recognizing the relative importance of one versus the other. This is still higher-order pattern recognition, and it goes like this: intuition tells me that in such a configuration the pattern "bad pawn structure" is less important than the pattern "maximize the activity of your pieces". Please note that to maximize the activity of one's pieces is higher order still and depends on more detailed patterns to be applied.
So how does this relate to software development (programming)?
- Chess resembles programming in the large because they're both highly intellectual activities with high consumption of neuronal energy.
- Chess is about pattern-based exploration of a search space, based on pruning the search tree, which in turn is based on recognizing higher-order patterns and creatively applying principles (higher-order rules). A programmer, when he sets out to solve something, has a space of design choices, some good, some bad -- a space of "how to go about" solving this problem. He must be able to prune the false starts and choose the more promising ones. I still believe that what differentiates very good programmers is the capacity to recognize higher-order patterns (not necessarily DesignPatterns, those are the trivial part), and to creatively apply principles and stick to them rather than improvise. See TheHumbleProgrammer.
There are differences however:
- It is fairly well known what a beginner needs to be taught so that, if he has talent and work capacity, he stands a chance of becoming a very good player. In programming, the actual process is much more non-deterministic. What kids learn today may turn out tomorrow not to be quite the right things. [Please clarify.]
- Programming is also about social patterns and collective intelligence. There's an extra trade-off level: for example, maybe this problem is best approached from a functional background, but the experience of the team members and their comfort level with an OO approach makes OO better with respect to this particular group of people. A chess player is typically not concerned with such considerations.
- It is well recognized in professional chess that players have to continuously educate themselves and keep their minds in top shape by solving chess puzzles, studying their own games and the games of others, and exploring positions with their peers (I learnt that CriticsAreYourBestFriends in chess). My chess experience, both in playing and in teaching, suggests that the software industry is sorely lacking this emphasis on continuous education and keeping the minds of programmers in great shape. If I owned a software company, other than the 40 hours of work, I would mandate that Friday after lunch everybody quits whatever particular project they are on and joins a session of real training -- meaning not product/technology-specific or methodological indoctrination (be that even XP), but stuff like solving programming puzzles or listening to expositions of weird geeky stuff like CPS transformations, even if it has no applicability to that particular shop. I believe that such an experiment would greatly increase productivity, because left unexercised, the brain of a programmer, just like the brain of a chess player, gets rusty extremely quickly. I believe EwDijkstra pioneered this technique in a research environment (the famous TuesdayAfternoonClub), but I'd be very curious whether any management anywhere can be persuaded to buy this idea.
--
CostinCozianu (waiting for other contributors)
Management is often open to the idea of doing such things at lunch time, since people might otherwise go out for lunch anyway. I've seen multiple places that call it "the brown bag", meaning you bring your own lunch (in the classic brown paper bag), so that it isn't a budget issue, and people give talks about technology, as you said. Digimarc had this type of weekly seminar. I would think this would be more prevalent at research-based companies and institutions. See LunchnLearn for another example.
- From my experience as both chess student and later as chess teacher, anything below 4 hours and not done regularly is at best a mild waste of time. The idea that people who already make a serious intellectual effort would learn anything for real during lunch breaks is preposterous. -- Costin
- You are correct that one cannot learn a large subject, such as chess, in a single lunch break, but you're too quick to jump to assumptions. One can learn an introduction to a topic one has never studied before, for instance. That's how I first learned about BinaryDecisionDiagrams, and that's a "for real" topic. One can also profitably learn from someone talking about their successes and failures in working on some topic. It's true that 1 hour once or twice a week can't compare with your suggested 4 hours per week, but they're on the same order of magnitude, and the lunch break can invariably be sold to management more easily - and it's better than nothing! Most companies don't do any such thing at all. -- Doug
- Well, then by all means they should do such a 1-hour thing during the regular program. But anyway, where I was driving is that professional chess players spend at least half of their time training for their work, keeping their minds in shape and learning new things. If programming is at least remotely comparable to chess as an intellectual activity, the lack of care for continuous education and training is staggering. Management wastes tons of money on palliative measures (like better management oversight, process improvement, product training) where increasing the competence, fitness and skill of the worker is what's obviously needed, and what can deliver results. -- Costin
- Agreed. Selling that to management is a further question. The usual attitude in the industry is that it is up to software professionals to handle their own continuing education, and in fact perhaps many of us do, in one sense, but that doesn't really address your point completely, by any means. -- Doug
Here's an example of higher-order pattern recognition, where a park player succeeds yet the best chess engine, to my knowledge, fails:
What should black play here?
- I must be missing something, because it seems to me obvious that the total number of possible moves, under any modern (e.g. FIDE) rules, is so small as to be analyzable by exhaustive search (semi-exhaustive, rather -- eliminating pointless random walks by the two kings, for instance), with no cutoff required (and that *does* assume repeated-position rules for recursion termination). So why would even lower-level chess programs fail here? What did I overlook? Or -- is it possible that you raise this as a weakness in simply one chess program, where it might be a matter of an outright bug, that it could not fully analyze by exhaustive search quite quickly? -- Doug
- You are correct: modern programs will solve this (i.e. detect that opening the position loses) using transposition tables to effectively increase the search depth and reduce the branching factor. The idea is sound though. One could easily construct a position that will overwhelm even modern chess programming techniques, which focus mostly on search extensions, but which are still solvable using the second-order, domain independent logic at which we humans excel. -- IanOsgood
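As an aside, here is a minimal sketch of the transposition-table idea mentioned above, again assuming the third-party python-chess library (its chess.polyglot module provides a Zobrist hash of a position). Real tables also store bound types and best moves; this illustration caches exact scores only, and the evaluation is deliberately crude.

  # Transposition table sketch: cache search results keyed by a Zobrist hash of
  # the position, so lines that transpose into the same position are searched
  # only once.  Assumes python-chess; details are illustrative, not how any
  # particular engine does it.
  import chess
  import chess.polyglot

  VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
            chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

  def evaluate(board):
      if board.is_checkmate():
          return -10**5
      if board.is_stalemate():
          return 0
      return sum((1 if p.color == board.turn else -1) * VALUES[p.piece_type]
                 for p in board.piece_map().values())

  table = {}   # Zobrist hash -> (depth searched to, score)

  def negamax_tt(board, depth):
      key = chess.polyglot.zobrist_hash(board)
      cached = table.get(key)
      if cached is not None and cached[0] >= depth:
          return cached[1]              # transposition: reuse earlier work
      if depth == 0 or board.is_game_over():
          score = evaluate(board)
      else:
          score = -10**6
          for move in board.legal_moves:
              board.push(move)
              score = max(score, -negamax_tt(board, depth - 1))
              board.pop()
      table[key] = (depth, score)
      return score

  print(negamax_tt(chess.Board(), 3), "-", len(table), "distinct positions seen")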
[If you're right, I'm disappointed. In endgames, it can easily be the case that material advantage doesn't lead to a win. Moreover, a blockade should be just another pattern to consider, rather than a "higher order" pattern.]
- You are using the terminology "pattern" vs "higher order pattern" differently than everyone else here. The rest of us on this page are differentiating between "pattern" meaning a particular board position, versus "higher order pattern" meaning a generalization over many particular board positions. There are an exponential number of trivial variations of the above board position, far too many for either a human or a chess program to actually memorize/store in a database (given that it's just one of many such), whereas Costin is saying that humans instantly recognize the above via higher order pattern matching (and one would hope that chess programs would, too, but apparently not, from what Costin is saying).
- [The human sees instantly that a draw is available, but I don't see any use of generalization or pattern-matching in that. The conclusion is made directly from the position, since white has no threats. It's disappointing if a good end-game program doesn't reach the same conclusion for the same reason.]
- One could say that, if someone programmed recognition of a generalization of the above position, which is just one kind of blockade, perhaps that would be a "second order pattern", whereas the more general recognition of blockades of all sorts would be higher order, at least a third order pattern, and with humans, doubtless at least several degrees higher than that. This is slightly hand wavy simply because we're talking about cognitive abstraction, which is obviously an area not completely understood by research (note its poverty in AI compared with humans -- which come to think of it, is in fact the topic above).
- Modern programs do have heuristics regarding blockaded positions; avoid them! Such heuristics evolved via natural selection on the internet chess servers, where folks quickly discovered that closing a position gives humans an advantage. For instance, the program Crafty has specific heuristics and opening line choices to prevent the Stonewall position. Open positions favor the chainsaw search engines of modern programs. -- IanOsgood
- Your nonchalant attitude towards arguing with Costin that he is overlooking something easy and obvious with chess algorithms makes it appear that you are unaware that he is ranked approximately the #1000 best chess player in the world - and he is familiar with chess programs as well. From my point of view, he pretty much addressed your point before you made it. BTW heuristics are brittle compared with human reasoning, and applying them requires recognition of the appropriate situation in the first place, which is typically yet another issue with search plies and evaluation functions and pattern matching rules - which is exactly what we're talking about here. I think you are underestimating the problem you are talking about. -- DougMerritt
- Sorry, I didn't think I was coming across as argumentative. It turns out the position is a great example after all. I am aware of Costin's standing in the chess world, which is why I asked him whether he too thought ProgrammingIsHarderThanChess. I certainly agree with everything he's said on this topic. -- IanOsgood
To those who estimated that chess engines should solve this position correctly by exhausting the branches: I think you underestimate the combinatorial explosion in this position. Please note that there is not only the handful of configurations where the pawn structure does not change; there are also the possibilities that white will sacrifice the rook at some point and black will open up the position. Yes, it is easy to create a chess program that will solve this kind of position. Put in a special routine that decides that, when the opponent cannot ever create threats (capture a piece or give check), the position is blocked and the program should not open it up.
- Apparently I was underestimating. I thought that, if the program was careful about noting transposition, there might only be a few million non-trivially distinct positions to consider. Am I way off, given the assumption about transposition?
However, that would be a special case designed by you, because you are human and you already solved it with your higher abstraction capacity, and now you are encoding your knowledge of a special case. Do that in a commercial, general-purpose chess engine, and it will simply be a terrible waste of time. I checked this position with the top-level engines that I have licenses for (Fritz 8, Deep Fritz 8, Shredder 7), all set at the official thinking time for chess (40 moves in 2 hours). Again, Deep Fritz is the latest version of Fritz and was run on a real dual-processor machine at 2.4 GHz per CPU. Please note that the human also decides (generally much better than a computer) when to invest the time budget into thinking more about a position and when to abstain from wasting time calculating -- another higher-order pattern. Chess programs have some heuristic for time budgeting, but humans seem to have much more than such a heuristic. Again, if you put the special case for blocked positions inside your chess engine, it will have no practical consequence, as the chances of reaching such positions in practical play are infinitesimal, and checking for such positions is simply a waste of processor time and instruction cache in the course of regular chess. And if this is a special case, just think how many more special cases there are in chess. It is simply unfeasible to program special cases into chess programs, unless the programmers decide that such special cases have a high occurrence rate and there's a good heuristic for recognizing them. There are special cases in modern chess engines for extending the depth of a generally breadth-first search on certain branches - most of those have to do with tension (pieces attacking each other), danger to the king, the possibility of pawn promotion, etc. If you think this position is easy, try to put yourself in the shoes of a chess engine programmer. How would you even arrange for your engine to recognize positions like this and play them correctly, while for most positions it goes the min/max route?
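For what it's worth, here is a rough sketch of the shape such a special-case routine could take, assuming python-chess. The depth bound and the "defender only plays quiet moves" rule are my own illustrative assumptions; and, as argued above, bolting even a correct version of this onto a general-purpose engine would probably cost more than it gains.

  # Bounded check of the "blocked position" rule described above: can the
  # defender keep making non-capturing moves so that the attacker never gets a
  # capture or a check within `depth` plies?  A real engine would need a
  # genuine proof rather than a depth bound; this only shows the shape of it.
  import chess

  def can_keep_closed(board, defender, depth):
      if depth == 0:
          return True
      if board.turn == defender:
          # the defender chooses: any quiet move that keeps things closed will do
          for move in board.legal_moves:
              if board.is_capture(move):
                  continue
              board.push(move)
              ok = can_keep_closed(board, defender, depth - 1)
              board.pop()
              if ok:
                  return True
          return False
      # the attacker tries everything; any capture or check breaks the blockade
      for move in board.legal_moves:
          if board.is_capture(move) or board.gives_check(move):
              return False
          board.push(move)
          ok = can_keep_closed(board, defender, depth - 1)
          board.pop()
          if not ok:
              return False
      return True

  # e.g. whether black can keep white free of captures and checks for four plies
  print(can_keep_closed(chess.Board(), defender=chess.BLACK, depth=4))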
This position reinforces a few memorable sayings of JeanYvesGirard and EwDijkstra, among which:
- "l'ordinateur n'est qu'un cyber-crétin".
- "... the question of whether Machines Can Think ... is about as relevant as the question of whether Submarines Can Swim."
So where is the higher-order pattern at work in humans? As JeanYvesGirard said in an essay, human mathematicians get a lot of mileage from throwing a few "on voit bien que" into their proofs (approximate translation: "one can easily see that"). Somebody claimed above that this is simple deduction, not higher-order pattern recognition. Well, that deduction necessarily starts with: on voit bien que white cannot make progress, it cannot attack black's pieces. That is, the human sees a pattern of proof and does not bother to formalize the proof (try to actually prove formally that this position is drawn, and you'll get lost in all kinds of annoying details). Humans are not only content with the sketch of a proof instead of the actual proof, they are also able to select the proof scheme out of the many proof schemes available in approaching a chess position. This warrants, in my opinion, the name for a new concept: HigherOrderPatterns. The computer, being nothing but a cyber-idiot, cannot skip formal proofs; it cannot recognize when one program (proof heuristic) is more apt to attack one kind of problem than another. It cannot do anything but crunch numbers, at least for the time being and the foreseeable future. That's why it still fails to address positions such as this. And that's nothing; you may say that the position is completely artificial, but in the most recent matches (Kasparov and Kramnik against two different computers) the games lost by the computers were abysmal failures, very much resembling the position above.
It may turn out that chess is the wrong game to prove the limited condition of the cyber-idiot, and that enough computing power with min/max and a bit of heuristics will solve any chess problem. It was thought that for all intents and purposes the game of chess was practically infinite, though theoretically finite; Moore's law seems to be proving us wrong on this one. But on the other hand, try to make computers do math, and there goes the relevance of Moore's law out the door. I'm no Go expert, but friends tell me that Go has such a vastly wider search space that Go engines are in the minor leagues compared with Go masters. -- CostinCozianu
Completely correct. The GameOfGo is the new drosophila of ArtificialIntelligence. The GameOfChess is now mere engineering. -- IanOsgood
There's nothing "mere" about it. It has been known for decades that Go is much harder for computers than chess; the status quo there didn't change just because chess programs have improved a lot. There is still a vast amount of potential research that can be done just on chess programs. It is a partially solved problem, not a completely solved problem.
One interesting question is whether algorithmic approaches that work well for Go (hypothetical, as-yet undiscovered ones) could also be used to make chess programs better. -- DougMerritt
I meant it tongue-in-cheek. In ages past, when computers were slow and strong-AI was all the rage, chess was held up as a hard problem that would feed AI research for decades. It turned out that alpha-beta search + faster hardware and plentiful memory was all it took to create world-class players. Most of the additions have been more well-tuned positional evaluators and search extensions, which I would class as engineering rather than basic AI research. DeepBlue and the research leading up to it were interesting exercises in VLSI and parallelism, but not really AI. By contrast, the field of Go programming is pretty wide open, with no two programs even agreeing to have a global search, much less share a search algorithm. Lots of room for basic pattern recognition research and alternative search strategies. -- IanOsgood
- Yep. But remember that just a few years ago, a few programs were doing quite well that bucked the trend by really trying to attempt a more human-like approach, and evaluating IIRC only a few hundred moves per second. They were fairly quickly overwhelmed by the straightforward brute-force improvements to programs that relied on zillions of moves/sec and judicious deep-ply lookahead, but that didn't prove a long term point, only a relative point.
- This reminds me of the death of Lisp machines; it's not that a machine specialized for Lisp is inherently slower than a general purpose processor running Lisp, it was just that the marketplace and engineering economics meant that general purpose processors advanced much faster than the special purpose. But if someone were willing to spend the money, then special purpose hardware in support of Lisp would still, today, beat out general purpose processors. -- DougMerritt
- That reminds me... Steven J. Edwards, author of the PGN standard, has been working on a project to create a smarter chess player using a chess-specific dialect of LispLanguage. He occasionally posts progress reports to the Computer Chess Club (http://www.talkchess.com/forums/1/index.html).
- That sounds interesting. The site immediately demands a password, though. Do I really have to join the site in order to see some overview of what Edwards is up to? -- DougMerritt
- [It seems joining is free, but required so that you've agreed not to make inappropriate posts. -- Anon]
- Thanks for that info. They're being dumb, I don't want to post, just to read. But ok, got it.
[Let's see now - black is way down in material, and even taking the rook leaves him still down in material, with enough moves left to explore that a win might not be found in time even if it were available. Therefore, black should consider playing to draw, and that turns out to be very easy, even without recognizing a blockade. All that needs to be noticed is that if black doesn't take the rook, white can't check, even if black's king moves unwisely. If white can't check, he can't win. Hence black can do anything except take the rook and draw. If Fritz doesn't take the draw, that simply shows either that Fritz never takes a draw unless it knows it will be mated otherwise, or that Fritz's methods of determining whether a draw is available are inadequate. Either possibility is disappointing. Of course, Fritz should attempt to find a win for black, but if it fails to do that, it needs a better fallback position than taking the rook, still being down in material, and risking losing.]
- Fritz attempts to maximize its chances. This is done by calculating all the scores in the min/max tree. The score function is necessarily heuristic and has to be computed fast. Taking the rook will give Fritz a much better score (albeit negative). Furthermore, Fritz will play the wrong move relatively quickly. Maybe if you force Fritz to compute for a week on this position, it will find the correct move (anything but taking the rook). I don't know; maybe if you force it to think for an hour it will find it. I haven't been curious enough to try yet; maybe I'll schedule it tonight. However, the chess-playing algorithm cannot take the decision that this is a position worth investing a lot of time in; it doesn't know a priori that it should calculate more in this kind of context. Deciding when it is time to calculate more is an exceptionally difficult problem. For the cyber-idiot this position is in essence no different from any other chess position, while for the human mind it is trivial to recognize that this position is special. Like I said, it is trivial for any good enough programmer to program a chess engine to recognize blocked positions. But put that special-case algorithm into a chess engine, and you have probably just made it significantly worse at playing, because it will spend non-trivial additional cycles and instruction-cache misses on every node of its search tree, trying to hunt down a case that almost never happens in a practical game.
- [I accept that. However, maximizing one's chances of winning is not making the best move. If the best achievable via look-ahead is a bishop down, with captures still possible, obtaining a draw is better. If playing time is limited, but not excessively so, some of that time should be used to try to find a simple draw. -- Anon]
- Could you expand on this:
- "maximizing one's chances of winning is not making the best move."
- Surely the definition of the best move is one which maximises one's chances of winning?
- No. If the draw is found, it's worth a 1/2 point. If the value after taking the rook would be assessed as less than 1/2 (which is highly likely), the 1/2 point draw is the best play. The program should maximize expected value, not the chance of winning. Of course, the possible future positions are assessed a bit less accurately if some time has been used up in finding the draw, but the position given is hardly a borderline case. -- Anon
- I think the communicational issue here is that it is counterintuitive that "maximizing one's chances of winning" wouldn't simply imply "or draw, if winning appears unlikely". And in fact some programs approach it that way. But other chess programs find it difficult to optimize drawing simultaneously with attempting to win, due to the way their evaluation functions approach things. -- DougMerritt
- Perhaps a clear explanation of GameTree search is in order?
- Never hurts, but the gist of it is simple: usually (well, from some points of view, anyway, not necessarily as a universal) it is convenient to regard a draw as an outright negative, so long as we think we might be able to win. It's only when we doubt we can win that a draw becomes a positive. So the question is whether our evaluation functions can easily make the shift from draw being negative to draw being positive or not, and that is architecture and/or implementation dependent, that's all. Some programs figure that such questions won't arise until the endgame, so it's not a hardship to use an inflexible (in this regard) evaluation function meanwhile, and just make a heuristic switch to an evaluation function that is better at drawing when and if the time comes. Obviously, some programs will make a mistake in that switchover (by Murphy's Law). -- DougMerritt
- [That doesn't sound right. A draw by repetition can easily occur before the endgame. -- Anon]
- Certainly it can. But that is vanishingly rare compared with the kind of midgame battle that programs need to primarily be focused upon. -- Doug
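To illustrate the point above about the draw's value flipping sign, here is a hedged sketch of a "contempt"-style draw score: a draw is worth a small negative amount while we believe we stand better and a small positive amount when we believe we stand worse. The numbers and the way drawn positions are detected are illustrative assumptions, not how Fritz or Shredder actually handle it; python-chess is assumed for the draw-rule checks.

  # Draw score that flips sign with our prospects ("contempt").  Assumes
  # python-chess; the values are arbitrary illustration, not any real engine's
  # tuning.
  import chess

  VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
            chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

  def material(board):
      return sum((1 if p.color == board.turn else -1) * VALUES[p.piece_type]
                 for p in board.piece_map().values())

  def draw_score(board, contempt=0.5):
      # value of a draw from the side to move's point of view
      m = material(board)
      if m > 0:
          return -contempt    # we think we stand better: avoid the draw
      if m < 0:
          return contempt     # we think we stand worse: take the draw
      return 0.0

  def score_leaf(board):
      # leaf evaluation that knows about drawn positions
      if board.is_checkmate():
          return -10**5
      if (board.is_stalemate() or board.is_insufficient_material()
              or board.can_claim_threefold_repetition()
              or board.can_claim_fifty_moves()):
          return draw_score(board)
      return material(board)

  print(score_leaf(chess.Board()))   # the balanced starting position scores 0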
[I've just tried Fritz (an early demo version) on a position where the draw can't be missed, and it chose the draw almost immediately. -- Anon]
- Not surprising that it would depend on the particular program and version, given the above.
[What may be significant is that recognizing the draw requires specific code, as the rules say the draw occurs after 50 moves (iirc) without a capture or pawn move. The rule may have been varied, as certain positions provide a theoretical win which requires more than fifty moves. I suspect most chess programs would take the draw if they found it was an option in the first place. A good chess program is likely to be used for analysing arbitrary positions as well as playing games, so its end-game analysis ought to cover the rather obscure possibilities as well as more typical situations. -- Anon]
- I think you are correct in all of that, but note that the 50 move rule applies to real world games extremely infrequently. Its main effect (as is partially true also of the time limits) is to remove the "sitzfleisch" option that had been in vogue with some players in the early 20th century (see fun anecdotes in the classic "Fireside Book of Chess"). Given that the 50 move rule applies, certain desperate measures to save a lost game are ruled out, that's all - so that's not primarily what we're considering here, although of course it does affect things.
- Well, but wait a minute, I was assuming that the draw would be either (1) mutually agreed, or (2) if the chess program were ungentlemanly, a result of the repeated position rule, but that's assuming too much, you're right, the chess program could in fact just flounder, which would indeed introduce the 50 move rule for the draw. My mistake.
- Anyway, I believe that my previous comments still apply, regardless of the particular circumstance of the draw; certainly the full set of draw rules are special case, but they don't normally apply; the fact that they are special case means, I think, that my previous comments apply. Pretty much the whole point is whether special cases of termination, such as various kinds of draw being desirable, fit in with or do not fit in with the normal game evaluation functions. -- DougMerritt
[I don't understand the "save a lost game" remark. If a draw occurs, the "lost" game is saved, isn't it? Also, I don't see how a special (negative) value for a draw would work - surely common sense implies its value must lie between the values used for a loss and a win. -- Anon]
- We are (completely my fault, BTW) mixing up the human and the computer view of things. "Save a lost game" is an archetypical human view of things. The computer view of the same situation is merely the same old same old minimax - except that we introduce human heuristics, to throw a wrench into those smooth workings.
- [This is getting surreal... "same old same old"? The rules ensure the game ends... is that all you meant? -- Anon]
- Giving a negative value for a draw is a way of telling our program "don't draw, you can win!". You may think that assigning a draw a value of zero, rather than negative, would suffice. But chess evaluation functions are (and in general, must be) highly non-linear, so that intuition is incorrect. The intuition that zero should be sufficient assumes that the evaluation function(s) is/are linear. This is not, however, simply a matter of e.g. linear programming. -- DougMerritt
- [So how does that relate to what I wrote? What tells the program "don't lose, you can win"? Why should a draw's value depend on whether it's black or white to play? -- Anon]
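Stepping back from the exchange above, on the "specific code" point: the draw rules do have to be checked explicitly somewhere, typically at the root before committing to the move the search likes. A minimal sketch, assuming python-chess (which exposes the fifty-move and threefold-repetition claims directly); search_score() is just a stand-in for whatever search the engine already has.

  # Root-level draw handling: claim an available draw when the search does not
  # expect to do better than the value we assign to a draw.  Assumes
  # python-chess; search_score() is a stand-in (plain material count).
  import chess

  VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
            chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

  def search_score(board):
      return sum((1 if p.color == board.turn else -1) * VALUES[p.piece_type]
                 for p in board.piece_map().values())

  DRAW_VALUE = 0.0   # or a contempt-adjusted value, as in the sketch above

  def should_claim_draw(board):
      # True if a draw can be claimed under the fifty-move or threefold
      # repetition rules and the engine does not expect to do better
      claimable = (board.can_claim_fifty_moves()
                   or board.can_claim_threefold_repetition())
      return claimable and search_score(board) <= DRAW_VALUE

  # in the game loop, roughly:
  #   if should_claim_draw(board): claim the draw instead of playing on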
Update. To its credit, Shredder 9 plays the position correctly with black, on my home computer. Deep Fritz is still in trouble. -- Costin
However, switch sides, let me make the blunder (in a similar position), and let Shredder try to beat me:
Still, Shredder 9 does not quite "understand" the position. A 9-year-old with 2 years of chess training should be able to win this position without blinking an eye, but Shredder 9 simply cannot. White has to sacrifice 2 pawns (say on e5, and then on d6 or f6) in order to create space for the white king to penetrate black's defence. However, this is a plan that Shredder 9 apparently doesn't "see". Recognizing plans is, in my opinion, a form of higher-order pattern recognition.
A claim that is apparently wrong, as it has been contested but not properly defended.
In chess, a grand master is said, according to ChunkingTheory, to make their next move from a "database" of moves stored in long-term memory in which bad moves have been eliminated. This has been shown by real brain research to be true. I'm sure there are more scientific references available, but I remember first reading about this in GoedelEscherBach by DouglasHofstadter.
They say it takes about 10 years of experience to make an expert. Grand Masters make the correct move the first time and do not go through an elaborate test refactoring cycle.
What's interesting is that amateurs access short-term memory and do not build a move database in long-term memory. It seems that most people, even after many years of experience, cannot or do not make this leap to the strategy used by grand masters.
There is a direct parallel to software development, see WhyDoPeopleMakeSoManyMistakes.
An experienced developer does not have the same fears that XP says are out in the world, because through their experience they know what works and what doesn't, and they know mistakes will be detected in their tests and, when found, are normally very correctable. This is an experience-based observation and is as supportable as the XP assertion that humans cannot be trusted to do the right thing because they are inherently fallible.
By design, XP enforces a least-common-denominator approach that keeps everyone a perpetual amateur by not allowing and not even recognizing the capabilities of grand-master-level experience in particular domains or across several domains.
The central premise of this page, that there are developers who develop like GMs play chess, is laughable. It may go a long way to explaining why people with experience (in any field) don't repeat the same mistakes much. It may even go some way to explaining why some people are much better at this than others (just like most people with 10-20 years of experience at chess will never be GMs). However, there is a fatal flaw in the connection - there is no database of techniques that really work. If there were, you would see wildly successful large-scale software systems popping up all over; after all, we have been doing this for 50 years now. Instead, we are awash in a sea of nearly-right, mostly-OK, but starting to become unmaintainable systems, punctuated by the odd spectacular failure, and the even rarer roaring success that nobody seems to be able to duplicate. -- AnonymousDonor
Yeah, I wish there were something that represented reflexive solutions to complicated problems that seem to be natural solutions. Since they're bound to be repetitive, I think I'll call them DesignPatterns. But you're right. The analogy isn't true. The difference between developers and grandmasters is too great, but the point is that experienced people use intuition more than rationality. Similarly, there have been studies that have shown that the most successful businessmen use intuition more than rational analysis. But intuition is a double-edged banana. Not only can you end up in crazy windfalls based on hunches, you can also slip on the peel if your intuition is blinding you. One should only have probable faith in inductive logic, after all. -- SunirShah
Since they're bound to be repetitive, I think I'll call them DesignPatterns
Sunir, that was a particularly humorous comment! And the banana thing too. Are you trying to dethrone PeterMerel?
Oh, and to make this a relevant addition - my opinion, stated elsewhere, is in agreement with Sunir and AnonymousDonor: ProgrammingIsHarderThanChess.
I wish I could help the original author of this page back on WhyDoPeopleMakeSoManyMistakes, where I share his point of view, but unfortunately that discussion is outside my area of competency. I'm one of those insects in SpecializationIsForInsects. However, chess is well within my area of competency, and I can attest that the pattern is a truism that has been known for more than a century in the world of professional chess.
There's no need for brain studies to prove it. While I'm not a grand master, I have played professional chess at an international level, I've been beaten by grand masters, I have won against some other grand masters, I have used this principle to train a champion, and I can hardly stress enough the importance of this principle.
The database of moves exists only for the openings (the beginning of the game, the first 20-30 moves) and for simple endgames (only a handful of pieces left on the board), and these things are usually known even to beginners; memorizing moves is the easiest part of the job. Intuition undoubtedly plays an important role on a few occasions, but most of the time a grand master relies a lot more on reasoning than on intuition. Intuition is important only in very complicated positions and/or when the player is in a time crisis (he has to move in a hurry, otherwise he will lose on time).
But most of the time a grand master will observe the principles of chess and apply them creatively. Those are also what help humans beat the computer. After all, the game of chess is mathematically a min/max game. If there is ever a computer powerful enough to analyze chess moves exhaustively (I very much doubt that), humans will stand no chance. However, the big advantage of a grand master is that by observing the principles of chess he cuts the analysis tree by several orders of magnitude compared to what the computer can do. A computer might explore on the order of 100,000 nodes per second, while for a human 1 node per second is extraordinarily good.
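A back-of-the-envelope calculation makes "several orders of magnitude" tangible; every number in it is a rough assumption for illustration only.

  # If a search visits roughly b^d nodes (b = effective branching factor,
  # d = depth in plies), how narrow must the human's pruned "tree" be to look
  # as far ahead as the machine?  All numbers here are rough assumptions.
  import math

  seconds = 180                        # ~3 minutes of thought on one move
  engine_nodes = 100_000 * seconds     # the engine's node budget
  human_nodes = 1 * seconds            # the human's node budget

  engine_b = 6                         # alpha-beta's effective branching, roughly
  depth = math.log(engine_nodes) / math.log(engine_b)

  human_b = human_nodes ** (1 / depth) # branching the human needs for equal depth

  print(f"engine: ~{engine_nodes:,} nodes, reaching ~{depth:.0f} plies")
  print(f"human: {human_nodes} nodes reach the same depth only if about "
        f"{human_b:.1f} candidate moves per position survive the pruning")

Whether these particular numbers are right matters less than the shape of the result: the human's selectivity has to be extreme, which is exactly the higher-order pruning being described.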
The principles of chess cannot be formalized for the computer (as yet, although a great deal of research effort has been spent on it), but they can easily be explained to a talented 10-year-old kid.
Now, the problem with using the same technique in computer programming is that we might not yet have a clear set (or several alternative sets) of principles to apply.
The great quality of chess principles is that when you lose you'll be able to tell immediately what principles you have broken, and learn from your lost games. I don't know if we can say the same about failed programming efforts.
-- CostinCozianu
I believe that this notion of a grandmaster's database of wrong moves is merely a misunderstanding of research in this area, which, as I have understood it for ages, does indeed mean to imply second-order/higher-order patterns, not merely a good move/bad move database. A classic observation is that grandmasters can practically instantaneously memorize a chess position never seen before, but only so long as it is an excerpt from a real game, and that, when asked to memorize a board position with truly randomized piece locations, they don't do better than non-grandmasters. That seems to say a fair amount on the topic. -- DougMerritt
Perhaps we do have a few principles, don't we? OnceAndOnlyOnce, PrincipleOfLeastSurprise, maybe LawOfDemeter and the OneResponsibilityRule ... Obviously not all of these principles are uncontroversial, but they (purport to) offer more general principles about how good code should work.
Interesting page!
- How do you codify chess experience in your head?
- How do you capture chess experience on paper?
- How do you codify software experience in your head?
- How do you capture software experience on paper?
- Perhaps design patterns and principles?
- Sets of little rules?
- Any others?
Thanks for the revived interest. I tried a rewrite.
See also CollectWhatWorks.
JanuaryZeroSix