I agree with your premise. While scoring systems have helped increase interest in wine, ultimately it is a highly irresponsible way of imparting information because it implies there is an actual hierarchy of wines in respect to quality, when in reality nothing of the sort exists. I've always said that scores are like saying Bach is "better" than Beethoven, the Beatles are better than the Stones, or Lafite is always superior to Margaux or Lynch Bages. It's complete nonsense, yet this is exactly what scores are all about. The finest wines are appreciated for where they are grown and their artistry, not by quantitative measurements of sensory qualities disguised as objectivity. If anything, scores do a great disservice to the public by leading them on the wrong path, and in the end this will only hurt the industry.
Should be noted, though, that music hasn't found itself free from a scoring system, either. All music reviews "grade" albums or releases, either on a scale of 1-100 or a school grade of A-F. Pretty much no industry has discovered a way to relate its take on a product, in a form an average consumer is willing to engage with, without a summarizing score of some kind. The vast majority of consumers will not read a thorough write-up, let alone fully understand that write-up in relation to what they're looking for.
Dave, the operative term is "vast majority." I'd say the percentage of people who don't need a "review" to decide what music they prefer is more than 99.9%. Same with books. There are lots of book reviews, almost none with ratings or scores. People do read reviews, but primarily to find out what books are about, not to know which books are "better." Movies and TV shows have quick "ratings," but almost no one pays attention to ratings; they rely on their own instincts based upon what they read or word of mouth. I've always contended that the primary reason the wine industry depends so much on numerical scores is because so-called "critics" are lazy. They don't want to devote the brain power or space to inform readers of how a wine tastes. It's too much trouble to differentiate wines by their origins, their artistry, comparisons to other wines, etc., and so much easier to slap on numbers with cursory notes, which is why (with some exceptions) the notes in wine reviews all sound the same. In short, simple lack of effort and imagination on the part of writers and publications. Which is a shame because, clearly, the everyday reviews of books and movies show that it isn't difficult at all to differentiate works. More than anything, I'd ascribe that to the simple fact that the wine industry is barely out of its baby stage. People understand books, music and movies because they've grown up with them. Wine is more challenging. Yet someday wine, too, may become more "everyday." The industry will grow up and be able to communicate with consumers more like adults, with a good grasp of language and concepts. I'll probably be dead by then, but I'll die knowing that it's bound to happen just because people are, in fact, very intelligent, and eventually get around to things.
The snarky answer here is that book reviews are for people who actually *read*, but what about the rest of us? ;P
The more honest answer is: good point, though a lot of readers do still use Goodreads and Amazon, etc. to see 5-star rating reviews, aggregate scores, etc. We can argue how useful or relevant to the actual quality such scores are 'til the cows come home, but those scores are widespread and available, so it's difficult to say how useful consumers actually find them. (I would actually love a controlled poll on this topic some day - some already exist, but they lump ratings and reviews together as the same thing and report that "98% of consumers depend on ratings and reviews," which isn't helpful for determining the need for one but not the other.)
I do think consumers can (and do) eventually begin to translate reviews into their own tastes - someone might hate a movie or book but then the reasons they state for hating it sound like your cup of tea! With food and beverages, people have a much more tenuous grasp on *why* they like or dislike a product. This leads to a lack of confidence in translating talk of minerality, acidity, earthiness, etc. And at the end of the day, there isn't a lot of difference between a reviewer saying "I love this wine" and giving it a high point score. Neither statement is guaranteed to translate to another person's opinion/taste, but what people are looking for is simply: is this a recommendation, and if so to what extent? From there, they can choose to read further about the product/review or not. And they'll have to take the risk either way to see if they agree; it's the same result with or without a point score.
I get the argument that point scores suggest objective truth, but if we think consumers are smart enough to translate a write-up, why not credit them with the same intelligence to translate point scores against their own experience? After that, the fundamental question boils down to: do consumers prefer a point score and find it useful or not? If they do, why are we hand-wringing about it?
My original point is that scoring systems are irresponsible. They simply don't lead consumers to making the best choices for themselves; taste in wine, obviously, being extremely personal, like taste in movies, books, music, etc. If, however, you want consumers to become more sophisticated, you have to begin somewhere, which is by making obvious points differentiating wines without demanding "mastery" of terminology (appreciation of the arts, books or music, for instance, doesn't demand mastery, just increased familiarity with one's personal taste). Wine scores are a crutch which the industry and media have grown to rely on. Like I said, mostly because of laziness. Yet those of us in the industry all know what the finest wines are. They are the ones that express distinctive qualities, quite often a sense of place, and often the same kind of originality everyone appreciates in artistic mediums. If we want to help consumers come to a better understanding of this in the context of their personal preferences, we have to start by communicating these distinctions. Numbers simply don't do it. They might be helpful, like the "dots" with movie reviews. But everyone reads movie reviews the same way. You look at the list of actors you might find interesting, you read about the plot or storyline, descriptions of the cinematography, the basic gist of what the artists are trying to say, and then you make a decision about whether you want to see it or not. The "dots" are neither here nor there, in the same way that numbers in wine reviews are useless when it comes to imparting information that actually matters to consumers. Ultimately, wine consumers deserve to know and understand sensory distinctions, or the "stories" behind wines that help people appreciate artistry or places of origin. This more than anything is beneficial for decision making, and beneficial to the growth of the industry. The sooner we move towards that, the better.
The best rejoinder to what I call Score Madness is "I don't know what a point tastes like!" Unfortunately, we are stuck with scores because they sell. But that doesn't mean that sensible wine writers need to be part of the game. We just need to point out to our readers regularly that we don't think scores impart useful information about wine. There is also the point of context. On a hot day I will always prefer an '85-point' anonymous rosé to a '95-point' Bordeaux. Wine has many dimensions; it cannot be captured with one number.
For the last ten years or so, it feels like scoring has become increasingly generous, and wine publications are venturing further afield (Texas!) in an effort to capture more attention. The more wineries that can tout their ‘90+ point’ wines, the more visibility that publication gets, as its name gets plastered across winery marketing materials. It’s a mutually beneficial cycle—but one that has also inflated scores to the point where they don’t carry the same weight they once did. If everything is a 90+, does the number still mean anything?
A piece of analysis I'd like to do, if I ever get the time, is to compare scores across the last few decades to see if I can prove the score inflation argument.
Waiting for a very rainy day before I get busy with that!
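If anyone wants a head start on that rainy-day project, a first pass could be as simple as averaging published scores by decade. The sketch below is purely illustrative and assumes a hypothetical list of (year, score) pairs rather than any real dataset:

```python
# Sketch of the score-inflation comparison described above, assuming a
# hypothetical dataset of (year, score) pairs -- not real review data.
from statistics import mean

def mean_score_by_decade(reviews):
    """Group (year, score) pairs by decade and average the scores."""
    by_decade = {}
    for year, score in reviews:
        by_decade.setdefault(year // 10 * 10, []).append(score)
    return {decade: round(mean(scores), 2)
            for decade, scores in sorted(by_decade.items())}

# Toy data: if later decades trend higher, that is consistent with inflation,
# though better winemaking or review selection could also explain it.
sample = [(1995, 88), (1998, 87), (2005, 90), (2008, 91), (2015, 92), (2018, 93)]
print(mean_score_by_decade(sample))  # {1990: 87.5, 2000: 90.5, 2010: 92.5}
```

A real analysis would also need to hold the set of wineries and critics roughly constant across decades, or the comparison tells you more about who got reviewed than about inflation.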
I’ve been a wine drinker for 50 years. In my younger years, not knowing any better, scores/stars highly influenced my buying decisions; however, in the last 20 to 30 years I’ve come to see scores/stars as very much a marketing tool used by winemakers and wine stores to flog mass-production wines (in particular). “Best Cab Sav from the horrible region of outer Woop Woop, rated 5 stars by a team of reviewers who also happen to publish an annual wine guide where wineries have to pay to get featured.” Stars or points tell you nothing about the wine. One man’s ethereal, light, almost colourless but incredibly true-to-grape Pinot Noir is another man’s picked-too-early, acidic, airy-fairy natural winemaker’s mistake.
I quite like the concept of rating wine as follows:
- taste and spit out quickly
- one glass is enough
- a couple of glasses if there’s nothing better on the list
- let’s get a bottle
- keep the empty bottle as a reminder as it was so bloody wonderful
Disclosure: our family makes wine (“natural” for want of a better name), we also have a natural wine bar and shop and taste quite a lot of wine.
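For fun, a consumption-based scale like the one above is easy to encode as an ordered type; the member names below are my own shorthand, not the commenter's:

```python
# A toy encoding of the five-level "how much did we drink" scale above.
# Member names are invented shorthand for the original descriptions.
from enum import IntEnum

class ConsumptionScale(IntEnum):
    SPIT = 1            # taste and spit out quickly
    ONE_GLASS = 2       # one glass is enough
    COUPLE_GLASSES = 3  # a couple of glasses if nothing better is on the list
    BOTTLE = 4          # let's get a bottle
    KEEP_BOTTLE = 5     # keep the empty bottle as a reminder

# IntEnum preserves the ordering, so ratings compare naturally:
print(ConsumptionScale.BOTTLE > ConsumptionScale.SPIT)  # True
```

The nice property of a scale like this is that each level is an observable behaviour rather than a claim about objective quality.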
I've never understood the 100-point scoring system, nor why only the top 20 points are ever used (so thanks for explaining that to me). That said, I can easily live without scores. In my own notebooks, I use a 5-point score, but it's not something I ever share in my articles - I just use it as a personal reminder of how much I liked (or not) a wine.
I think the points in this article were extremely well made. To me, this feels like a question regarding the level of rigorous specificity that a person reading a review is looking for, or how much information they would like in order to try something new. I think the current, broad 100-point scoring regime exhibits the weaknesses named in the article, and a more accurate solution is a bigger lift for both the reviewer and the reader, with a certain subset of readers appreciating something more numerically precise.
In my work in beer I focus on adding structured scoring to certain dimensions of the flavors present. To translate that to the case of wine, I could envision a system where a particular note, Blackberry as an example, could be rated on the dimension of Intensity, Sweetness, Palate Position (Front, Mid, Finish) and Surprise (whether that note was expected as "standard" according to varietal and region). I personally find this type of scoring useful and have been advocating for its use within the brewing industry. This may be similar to the Tannic Panic method that was named in an earlier comment, but I'll have to read further to explore that. I think that sensory evaluation is still our best scientific method for determining the taste of a wine, and I think there are more rigorous and useful ways to express sensory scoring. I also think that a talented and thoughtful narrative description may be able to communicate the dimensions I've named above in the same clarity as a more mechanical numeric scale. It may simply come down to an optimal medium of communication established over time between the reviewer and the reader.
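To make that concrete, here is a rough sketch of how such a structured note might look as data. The field names and scales are my assumptions for illustration, not an established standard in either beer or wine:

```python
# Illustrative data shape for a per-note structured score, along the
# dimensions named above (Intensity, Sweetness, Palate Position, Surprise).
# All names and ranges here are assumptions, not an industry standard.
from dataclasses import dataclass

@dataclass
class FlavorNote:
    descriptor: str       # e.g. "blackberry"
    intensity: int        # 1 (faint) to 5 (dominant)
    sweetness: int        # 1 (dry) to 5 (sweet)
    palate_position: str  # "front", "mid", or "finish"
    surprise: bool        # True if unexpected for the varietal/region

note = FlavorNote("blackberry", intensity=4, sweetness=2,
                  palate_position="mid", surprise=False)
print(note.descriptor, note.intensity)
```

A full review would then be a list of such notes plus a narrative, which is exactly the tension the following replies pick up on: more structure for the taster, but more to digest for the reader.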
I am with your last point. I think breaking a score into smaller 'sub-scores' makes it even harder to digest and less meaningful than one overall number.
Words are just better!
The only exception I would make to the above is that it may help you as a taster to use a complex scoring mechanism. I see it as a kind of decision making matrix: I gave the tannins on that wine 90 and on that wine 92. So I will recommend the second wine for people who want something big and structured. But I just don't think the numbers need to be shared with the end consumer of your review.
If the difference in presumed quality between a wine which scores 93 points and another which scores 94 cannot be explained in words, then probably the difference does not exist. I think your title "Are Wine Scores a Waste of Time" understates the issue. Scores on a 100-point scale are more than a waste of time. The pseudo-objectivity of points scoring is actively misleading and cultivates a false idea about the nature of wine and wine criticism.
I don’t place much emphasis on scores. For me, Wine Critics’ ratings or descriptions serve mainly to reinforce my own opinions when needed. Since my palate tends to prefer lighter wines than those favored by Wine Advocate, I find that I usually enjoy wines rated between 88 and 93. Jancis Robinson’s tastes align more closely with my own.
Regarding your thoughts on Leoville Barton 1993, this perfectly illustrates what I aim to convey to my students: an average vintage can yield exceptional wines at great prices. If it received a lower rating from Wine Advocate, it likely indicates a lighter style that remains excellent, just maturing faster than its higher-rated counterparts.
I am still astonished by the younger generation that adheres to this rating system, but we must acknowledge that it assists some in navigating the vast world of wine.
A really interesting read. Thank you. I've always wondered how much scores (and prices) are inflated by the investment side of the wine industry. The only thing that makes any sense to me are lucidly written reviews that also carry somewhere nearby a statement explaining the types of wines/flavours/profiles that the writer tends to prefer.
After two decades of conspicuous resistance, I long ago made my peace with scores. A cynic could say, with truth: “Parker was making him an offer he couldn’t refuse.” But you can follow my reasoning and judge for yourself, via the relevant portions of an extensive piece that I wrote a decade ago for The World of Fine Wine. [ https://worldoffinewine.com/uncategorized/the-role-of-the-critic-an-untimely-assessment-4683894 ]
The most important points to bear in mind are, I believe, these:
• Any set of preferences can, if one desires, be numerically expressed. So, if a wine critic or that critic’s audience considers the critic’s preferences important, scores can be a useful shorthand means of expressing those preferences.
• Even so, and most especially for any attempt to invest scores with significance beyond that of conveying individual preferences, the same pitfalls and limitations apply as in matters of grading educational performance (whence the whole notion of numerical scoring first arose, as discussed in my aforementioned piece). As such, pushing points past the status of “useful shorthand” risks exposing oneself to circularity and/or justified ridicule.
• Just as in matters educational, rampant grade inflation has led not only to what was once known tongue-in-cheek as the “Lake Wobegon Paradox” (“all the students are above average”) but to the range of scores considered to reflect something more than damning with faint praise having become so narrow (whether “89-100,” “16.5-18.0” or whatever) as to seriously inhibit that ability to quantify one’s preferences which constituted scores’ primary raison d’être.
• To be sure, one can by fiat assign any intuitive significance to a given range of scores (i.e. “90-100 = has fully mastered the material,” “60-70 = has just barely mastered the material,” “0-60 = has failed to adequately master the material and should repeat the course/grade”). But such assignments can only make any sense when relativized to an individual or at most an institution, and in the realm of wine I can conceive of none that would be both useful and operationally definable. (“90-95 = had me salivating,” “95-100 = couldn’t spit,” “0-79 = ought never to have been bottled” “0-50 = gagged on it”?)
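Those by-fiat assignments are trivial to write down, which is partly the point: the bands and labels below are placeholders borrowed from the parenthetical examples above, not a proposal.

```python
# Assigning labels by fiat to score bands, as described above. The band
# boundaries and labels are placeholder choices for illustration only.
def score_band(score):
    """Map a 0-100 score onto one of a few arbitrary descriptive bands."""
    if not 0 <= score <= 100:
        raise ValueError("score must be 0-100")
    if score >= 95:
        return "couldn't spit"
    if score >= 90:
        return "had me salivating"
    if score >= 80:
        return "damning with faint praise"
    return "ought never to have been bottled"

print(score_band(93))  # "had me salivating"
```

The code makes the circularity visible: any such mapping "works" mechanically, but nothing in it connects the boundaries to anything operationally definable about the wine.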
Points are useful shorthand for people who are not well versed in wine but are just looking for a bottle they can feel confident bringing to a dinner party.
For the last seven years I wrote a varietal-of-the-month column for the winery members of the Wine Road in Sonoma. No scores. Just my opinion of the wine with regard to its varietal characteristics. I think the most honest and useful reviews are of bad wines, wines to avoid, but you never see those because, well, diplomacy.
I want to read the reviewers words not a score. I trust your opinion and judgment on wines I’m interested in, that’s why I’m here.
I know this is the pretext, but I still think as an industry we are selling a bit of a lie to those customers. Perhaps it's like placebo-based medicine or health supplements. It may not have any efficacy, but if it makes you feel good....
I totally agree with you about bad reviews. This is another massive issue in the wine industry. Diplomacy is a nice way to put it.
Well done on a fantastic bit of work. A robust analysis and critical commentary on an absurd evaluation mechanism that has negatively impacted wine appreciation for far too long was very long overdue.
I usually agree with your stance on wine scores. However, your academic analogy might actually push me in the other direction. Though I don't have in-depth analysis to offer, from talking to the handful of academics I know in arts and humanities, and having completed too many such degrees myself, I suspect the same issues and grade distributions apply in marking essays in arts and humanities against what can only be relatively loose marking rubrics. I don't think that makes them meaningless.
Can someone compare my degree mark with 100% reliability against a similar one from the same subject at another university? No. Is that comparison still useful? In some contexts, it seems so.
Do grades at many institutions cluster around certain markers? In my experience, yes. Does that make them "fitted" to some spurious extent? I don't think so. They are fitted to point to what the rubric says. So a wine score with a rubric has some limited value, maybe?
Personally, I'm not sure this persuades me to use scores. But it has made me wonder whether there is more to them than I previously thought.
A very fair point. And I guess the further one goes through education, the more subjective grading gets.
"Limited value" might be right. But as I mentioned elsewhere in the comments, it is also about which scoring system one adopts. Publishing big numbers in the 90s is meaningless when really you are using a 10-point rubric or less.
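To put a number on that observation: if only 90-100 ever gets published, the 100-point scale is a 10-point scale in disguise. A minimal sketch, with the floor and ceiling as assumptions:

```python
# If the usable range is really 90-100, rescale onto the 0-10 scale that
# is actually in play. The floor/ceiling values are assumptions.
def effective_scale(score, floor=90, ceiling=100):
    """Rescale a 'big number' onto the narrower range actually in use."""
    if score < floor:
        return 0.0
    return round((score - floor) / (ceiling - floor) * 10, 1)

print(effective_scale(95))  # 5.0 -- a "95-point" wine is mid-scale
```

Seen this way, the difference between a 93 and a 94 is a third of a "grade", which makes the precision claim even harder to defend.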
Well put Randy, I like the musical analogy!
I agree 100%
Yes this is a massive problem.
I think that is the best use of scores honestly.
Great article, thanks for your work!
This article is a 98/100.
Thanks Simon!
I couldn't agree more. Succinctly put!
I addressed this same issue in my nationally syndicated column in January, and completely agree with your premise. https://www.gusclemensonwine.com/wine-scores-1-8-2025/#more-19836
After two decades of conspicuous resistance, I long ago made my peace with scores. A cynic could say, with truth: “Parker was making him an offer he couldn’t refuse.” But you can follow my reasoning and judge for yourself, via the relevant portions of an extensive piece that I wrote a decade ago for The World of Fine Wine. [ https://worldoffinewine.com/uncategorized/the-role-of-the-critic-an-untimely-assessment-4683894 ]
The most important points to bear in mind are, I believe, these:
• Any set of preferences can, if one desires, be numerically expressed. So, if a wine critic or that critic’s audience considers the critic’s preferences important, scores can be a useful shorthand means of expressing those preferences.
• But scores only make sense if relativized to a particular critic, as I again argued not too long ago in a WFW column [ https://worldoffinewine.com/news-features/wine-competitions-points ] that was written in response to an insightful piece by Simon.
• Even so, and most especially for any attempt to invest scores with significance beyond that of conveying individual preferences, the same pitfalls and limitations apply as in matters of grading educational performance (whence the whole notion of numerical scoring first arose, as discussed in my aforementioned piece). As such, pushing points past the status of “useful shorthand” risks exposing oneself to circularity and/or justified ridicule.
• Just as in matters educational, rampant grade inflation has led not only to what was once known tongue-in-cheek as the “Lake Wobegon Paradox” (“all the students are above average”) but also to the range of scores considered to reflect something more than damning with faint praise becoming so narrow (whether “89-100,” “16.5-18.0” or whatever) as to seriously inhibit that very ability to quantify one’s preferences which constituted scores’ primary raison d’être.
• To be sure, one can by fiat assign any intuitive significance to a given range of scores (e.g. “90-100 = has fully mastered the material,” “60-70 = has just barely mastered the material,” “0-60 = has failed to adequately master the material and should repeat the course/grade”). But such assignments can only make sense when relativized to an individual or at most an institution, and in the realm of wine I can conceive of none that would be both useful and operationally definable. (“90-95 = had me salivating,” “95-100 = couldn’t spit,” “0-79 = ought never to have been bottled,” “0-50 = gagged on it”?)
Points are useful shorthand for people who are not well versed in wine but are just looking for a bottle they can feel confident bringing to a dinner party.
For the last seven years I wrote a varietal-of-the-month column for the winery members of the Wine Road in Sonoma. No scores. Just my opinion of the wine with regard to the varietal characteristics. I think the most honest and useful reviews are of bad wines, wines to avoid, but you never see those because, well, diplomacy.
I want to read the reviewers words not a score. I trust your opinion and judgment on wines I’m interested in, that’s why I’m here.
I know this is the pretext, but I still think as an industry we are selling a bit of a lie to those customers. Perhaps it's like placebo-based medicine or health supplements: it may not have any efficacy, but if it makes you feel good...
I totally agree with you about bad reviews. This is another massive issue in the wine industry. Diplomacy is a nice way to put it.
Thank you for being here!
Well done on a fantastic bit of work. A robust analysis and critical commentary on an absurd evaluation mechanism that has negatively impacted wine appreciation for far too long was very long overdue.
Thank you!
I usually agree with your stance on wine scores. However, your academic analogy might actually push me in the other direction. Though I don't have in-depth analysis to offer, from talking to the handful of academics I know in arts and humanities, and having completed too many such degrees myself, I suspect the same issues and grade distributions apply in marking essays in arts and humanities against what can only be relatively loose marking rubrics. I don't think that makes them meaningless.
Can someone compare my degree mark with 100% reliability against a similar one from the same subject at another university? No. Is that comparison still useful? In some contexts, it seems so.
Do grades at many institutions cluster around certain markers? In my experience, yes. Does that make them "fitted" to some spurious extent? I don't think so. They are fitted to point to what the rubric says. So a wine score with a rubric has some limited value, maybe?
Personally, I'm not sure this persuades me to use scores. But it has made me wonder whether there is more to them than I previously thought.
A very fair point. And I guess the further one goes through education, the more subjective grading gets.
"limited value" might be right. But as I mentioned elsewhere in the comments, it is also about which scoring system one adopts. Publishing big numbers in the 90s is so meaningless when really you are using a 10 point rubric or less.
Sure, and I think academia suffers from this compression too. Perhaps to a lesser degree. And I still agree with you overall.