Golf.

You play 'against the course' under the exact same conditions as every other competitor to see who can get the lowest score over the course of a week. Shoot low scores week after week, win tournaments, receive a ranking. Pretty straightforward.

Golf is unique among mass-market sports in that your performance is not subject to the effects of your opponents' play (other than psychologically). Nobody's blocking you or hitting shots at you that you must return. It's just your decisions, your ability, and the course in front of you.
A field does not evaluate skill. People do. People are subjective. Some skills permit more objective, quantitative evaluation than others. Chess, in your example, looks like a skill we can objectively measure and rank over a career of matches with scores kept. Even so, we're measuring a person's performance in chess matches as a proxy for chess skill. A mediocre player might get lucky and catch Garry Kasparov on a day he has a migraine.

Most skills aren't so easily evaluated, measured, and ranked. Part of the problem is deciding what to measure. For a programmer, do we look at lines of code per day? Commits? Bugs found? Profit earned from the code? We might have a subjective opinion about a programmer's skill relative to other programmers, but that's hard to quantify. I think most skills present this kind of problem.

I don't think any professional or technical skill lends itself to direct, objective, quantitative evaluation. Instead we usually look at results and consistency. How we rank those factors for multiple people is partly subjective.
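Chess ratings make that proxy concrete. Here's a minimal sketch of the standard Elo update (my example, not something from the thread): the rating is built from nothing but game results, so a single lucky win against a much stronger opponent still moves it, even though one result says little about underlying skill.

```python
# Minimal sketch of the standard Elo update: a rating is built only from
# game results, so it measures performance as a proxy for underlying skill.

def expected_score(r_a: float, r_b: float) -> float:
    """Expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 24.0):
    """Return new ratings after one game; score_a is 1, 0.5, or 0."""
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# A 1600-rated player upsetting a 2800-rated player (migraine day) gains
# nearly the full K, even though one result says little about true skill.
print(elo_update(1600, 2800, score_a=1.0))
```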
Skill is a very delicate topic.

For "quantitative", assuming we're talking about STEM, I don't think many people would strongly defend any of the established measures (job interviews? performance evaluations? corporate pay scales? academic publication metrics?). I think the consensus among tech workers (maybe not their managers) is: it's all bad.

Now, for "objective", I would say it's also all bad, but there is bad and much, much worse. In my limited experience, the larger and more experimental the field, the worse things get. Established mathematicians and theoretical computer scientists often agree on who is "strong" and who is not in their community (though they are at times all wrong). In software engineering, it's often even less clear. In astrophysics, you need to be in the right lab to do anything (maybe that is a skill?). In animal biology, you need lots of luck with your experiments (although poor ethics can sometimes help make luck happen). I'm joking here, but my very rough feeling is this: when success depends heavily on external factors, people underestimate those factors to varying degrees, adding tons of noise and bias to the consensus.
Measurement-based Olympic sports (the 100-meter dash, speed skating, the high jump, etc.). The winner is determined not only by pure physical capacity but also by all the skill in training and mental preparation before and during the event.
In some areas, like trading and even chess, there is an element of randomness that makes it impossible to isolate pure skill. Maybe some kinds of racing, whether in cars, on bicycles, or on foot (or the million other kinds, like kayaking), come closer to pure skill, or at least athletic prowess. Something like Guitar Hero (III is the only one I played), where the game is the same every time, is pretty much a pure test of skill.

Significantly, none of my examples are really real-world things; anything interesting always involves some element of chance that cannot be isolated from skill.
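To make the "chance cannot be isolated from skill" point concrete, here's a rough simulation with made-up numbers (purely illustrative, not from the thread): each competitor has a fixed "true skill", each event adds random luck, and the most skilled competitor still loses a large share of individual events; the skill order only shows up in aggregate.

```python
# Rough illustration (made-up numbers): outcome = true skill + random noise.
# Even the most skilled competitor loses often once luck is in the same
# range as the skill differences.
import random

random.seed(0)

true_skill = {"A": 10.0, "B": 9.0, "C": 8.0}  # hypothetical competitors
noise_sd = 2.0          # per-event luck, comparable to the skill gaps
events = 10_000

wins = {name: 0 for name in true_skill}
for _ in range(events):
    results = {name: skill + random.gauss(0, noise_sd)
               for name, skill in true_skill.items()}
    wins[max(results, key=results.get)] += 1

for name in sorted(wins, key=wins.get, reverse=True):
    print(f"{name}: won {wins[name] / events:.1%} of events")
# The skill order is recovered only in aggregate; any single event is noisy.
```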
Juggling.

It isn't more objective than the "measured sports" mentioned in other comments, but it does have the advantage that its measures are mostly integers (e.g., how many balls can you juggle? With one hand? Etc.), which means there is always a strict pass/fail test to "level up".