Let’s start with a reality check: Medals in wine don’t matter much in the 21st century, save for the wineries that win them. It wasn’t always this way. {Insert old geezer voice here…} Back in the 1980s, wine competitions were actual news—press releases were issued (and picked up); stickers from various competitions adorned bottles on retail shelves; gold medals got displayed in tasting rooms, received respect, and medal-winning wines frequently experienced a bump in the marketplace.

Competitions tended to be based regionally (San Francisco, San Diego, Orange County, L.A., Dallas, Atlanta, Buffalo are cities that pop to mind), sometimes connected to specific fairs, groups or publications, and the entrants were predominantly American. Indeed, success at these competitions helped raise the quality profile of California wines in particular. And why not? Winning wines—awarded GOLD, SILVER or BRONZE medals—played right into the American mindset of fair, Olympic-style competition. Hundreds upon hundreds of wines were judged blind at these affairs, by panels, with wines assessed in peer groups. The cream rose to the top, right?

But the tide shifted in the 1990s. Scores ascended, driven in part by Wine Spectator courting retailers to use their scores, and Robert Parker offering no objection when they clipped his as well. Plus, copycats joined the 100-point fray, meaning there were simply more numbers being bandied about. At the same time, medals seemed to hit a saturation point, symbolized by the publication of an annual paperback guide that aggregated medal-winning wines. The guide helped draw attention to the fact that there were simply more of these competitions than most consumers even imagined.

Gold, silver and bronze came to be seen as less precise—and therefore less valuable—than numbers, and the mixed panels assembled for marathon-like competitions were considered less accurate and reliable than the individual critics devoting themselves full-time to the task of gauging wine quality. {For an interesting take on the nature of competition wine-judging, check this post at www.rebeccachapa.com.} The proliferation of medals—from diverse sources—compounded the perception of competition judges being softer than the numbers-wielding critics. There was also the lingering question of judging standards. What did silver or bronze mean anyway, and was one competition’s gold on par with another’s silver?

We are no closer today to standardizing definitions of gold, silver or bronze than we were 10 or 20 years ago. But the question came to my mind recently when I participated in the Hudson Valley competition {see post here, complete with “10 things not to say aloud at a judging”}. I was pleasantly surprised by the overall quality of the wines I tried, but except for the “best of” winners that were revealed to judges that day after a taste-off of previous-round winners, I had no idea how our group’s judgments translated into the medal hardware.

When the final results were announced, I reacted with a shrug and a sigh. A total of 80 wines had been entered and judged; 57 had earned medals. That’s a whopping 71%. Ah well, I thought, perhaps this is the Hudson Valley’s way of boosting its own stock in the big, bad wine universe. This thought stayed tucked away harmlessly until I received, a few weeks later, a press release about the inaugural edition of the Sonoma Valley Wine Competition. Guess how many wines had been entered… 131. Guess how many took home medals… 103. That’s an even whoppinger 79%.

That’s when I started to ask myself: is this a case of grade inflation, or are the wines really that good? And how does the grading curve for medals compare with those of current major wine magazines?

So I checked an issue of Wine Spectator (May 31, 2009). Alas, 54% of the 490 wines reviewed in this issue were rated 90 points or higher, and 37% were rated 85-89, meaning: in all, 91% of the wines published in the issue were deemed “very good; a wine with special qualities.”

Wine Enthusiast, curiously, demarcates 87-89 as “very good” and 83-86 as “good;” yet, like WS, the Enthusiast draws an effective cutoff at 85 points {one difference: Spectator actually publishes wines scoring 80-84, and puts those below 80 online; WE puts all sub-85 wines in the proverbial online attic, and never goes below 80}. Given this commitment to keeping the printed page essentially a pure, 84-or-below-point-free zone, the percentage of published scores rating 85 or higher in the February 2009 issue was a perfect 100% (354 wines in all). Digging online, I discovered that a total of 923 wines had actually been tasted and reviewed for that issue of Wine Enthusiast, and 709, or 77%, were rated 85 points or higher.
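For the arithmetic-minded, the award rates quoted above are easy to verify. A quick sketch in Python (using only the raw counts cited in this post):

```python
# (total wines entered or reviewed, wines medaled or rated 85+), per this post
sources = {
    "Hudson Valley competition": (80, 57),
    "Sonoma Valley Wine Competition": (131, 103),
    "Wine Enthusiast, Feb. 2009 (incl. online reviews)": (923, 709),
}

for name, (total, awarded) in sources.items():
    # ":.0%" multiplies by 100 and rounds to a whole percentage
    print(f"{name}: {awarded}/{total} = {awarded / total:.0%}")
```

Run it and the three rates come out to 71%, 79% and 77%—the same figures cited above.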

I can hear the argument now: So what? All this really shows is that the magazines dish out a lot of praise for middling wines that no one cares about, just like competitions dish out lots of bronze medals that no one cares about. In response, I would say that we are missing the ultimate irony here: merely talking about the abundance of bronze-medal and/or 85-point wines suggests that we are truly living in the Golden Age of Wine. Strip away the “points” and the “metals” alike; now what is the message? Let me put it a few ways…

  • If you coaxed a bull from the china shop next door into your average wine emporium, he’d be able to pick out a smashing nice mixed case, within which nine out of twelve wines had been “awarded.”
  • If all of the award-winning or well-rated wines in a shop were to be illuminated with a light bulb, you’d be covering your eyes in about 15 seconds.
  • A monkey could pick up a wine magazine, and more than three out of every four wines he points to would have been professionally critiqued as a fine example of its type.
  • If you assembled a tasting panel of ocelots and served them 50 wines, almost 40 of them would qualify for some grade of medal.

Ratings are reaching a saturation point. Just as people intuitively grew skeptical and weary of the proliferation of medals, so too are they growing skeptical of ratings. And they are also realizing that the idea of precision in ratings is fool’s gold; give one wine to five critics and you may well wind up with five different scores, just as giving the same wines to different panels may result in completely different medals awarded.

Here, 30+ years into the wine-grading era, the pertinent question is no longer whether the grades are inflated or the wines are that good. The question now is: When will Americans come to accept the truth that this thing called wine is immune to finite analysis and—more important—darned well-made? I think the answer is sooner than most cynics think. Indeed, arriving at that truth is as simple as approaching wine with style and personal taste and context in mind, not medals or points.
