
Automatically generating wine tasting notes with Markov chains

Creating pseudorandom wine back labels customized by price, rating, type, or region using data from Wine.com

Automatically generated tasting note:

If you've ever glanced at the back label of a bottle of wine, you've likely been greeted by an array of descriptors promising prospective purchasers flavors like "licorice, black cherry, cedar and spice". Others may have heard similar strings of wondrous scents uttered by their local wine snob. To the uninitiated, such descriptions can be intimidating - I can't count how many people have told me that they "couldn't taste" because they couldn't take a sip of wine and compose a daunting list of adjectives. However, the ability to write convincing tasting notes is not reserved for the gifted. There are plenty of "tricks" that make an experienced wino's observations sound far more sophisticated. For example, as I've explored in my earlier work, many descriptors are inherently tied to each other. Thus, if you can pick out even a single flavor - say, "blackberry" - you can safely append several others (black currant, plum, etc.) with virtually no one able to gainsay you.

However, as it turns out, even tricks like these are more elaborate than necessary. Human intelligence isn't required - a fairly simple computer algorithm can produce reasonably coherent tasting notes. Data scientist Tony Fischetti demonstrated last year that a Markov chain could produce remarkably realistic tasting notes. If you want to learn more about Markov chains, there's a fantastic interactive explanation here.

For our present purpose, however, the most accessible example is your smartphone's predictive text feature. When you're typing a message to someone, most modern phones will anticipate what your next word will be before you've even started typing it. The phone does this by remembering pairs (or longer sequences) of words that you've written in the past, and suggesting the most frequent ones. This process can actually proceed without human intervention - the system simply chooses the next word randomly, weighting its choice by how many times it has seen particular pairs, triplets, etc. in the past. By consecutively adding new words in this way, long pieces of text can be generated. The process is locally smart but globally dumb: small sets of words tend to make sense in isolation, but both grammar and meaning may be lost over longer runs. Nonetheless, Markov-generated text often achieves surprising verisimilitude.
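
To make this concrete, here is a minimal sketch in R (the language I ended up using) of one way to build and sample such a word-level chain. It is an illustration rather than my actual program, and the names are placeholders: notes stands for any character vector of tasting-note text.

    # Build a first-order chain: map each word to the multiset of words
    # observed to follow it anywhere in the notes.
    build_chain <- function(notes) {
      words <- unlist(strsplit(tolower(paste(notes, collapse = " ")), "\\s+"))
      split(tail(words, -1), head(words, -1))
    }

    # Walk the chain from a seed word. Sampling uniformly from the multiset
    # of successors weights each choice by how often that word pair appeared.
    generate_note <- function(chain, start, n_words = 30) {
      out <- start
      current <- start
      for (i in seq_len(n_words - 1)) {
        successors <- chain[[current]]
        if (is.null(successors)) break  # dead end: no observed successor
        current <- sample(successors, 1)
        out <- c(out, current)
      }
      paste(out, collapse = " ")
    }

Calling generate_note(build_chain(notes), "aromas") then strings together up to thirty words seeded with "aromas", each chosen only on the basis of the word immediately before it.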

When I read Fischetti's work, I immediately wanted to try something similar with the large dataset I had previously retrieved from Wine.com. While Fischetti scraped about 9,000 reviews directly from Wine Spectator, I had a considerably larger corpus of 63,012 usable winemaker's notes from Wine.com. Moreover, I had easy access to a considerable amount of metadata about each wine, such as its price, critical scores, varietal, and region of origin. Using these data to cross-section the reviews, I was able to train Markov chains separately on different types of wine (an approach sketched below). I implemented my program in R rather than Python, but it is otherwise essentially identical to Fischetti's approach. You can access tens of thousands of pre-generated Markov-based winemaker's notes using the menus above. The price and rating toggles refer to median splits on the full set of wines, and the available regions are those with enough notes to train the Markov chain once split by price, rating, and wine type. I've done my best to remove any references to particular wineries, but inevitably some will have slipped through, so apologies to anyone inadvertently featured.
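
For instance, cross-sectioning by price amounts to a median split followed by one chain per half; the sketch below shows the idea, reusing build_chain and generate_note from above. The data frame wines and its columns note and price are stand-ins for the real dataset's structure, not its actual schema.

    # Hypothetical data frame `wines` with one row per wine and columns
    # `note` (the winemaker's note) and `price`.
    wines$price_group <- ifelse(wines$price >= median(wines$price, na.rm = TRUE),
                                "above median price", "below median price")

    # Train one chain per price group, then generate a sample note from each.
    chains <- lapply(split(wines$note, wines$price_group), build_chain)
    sapply(chains, generate_note, start = "aromas")

The same split-then-train pattern extends directly to rating, varietal, and region.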

Overall, the results of the Markov generation process are mixed - some notes are nonsensical, self-contradictory, or ungrammatical, but many are quite coherent and convincingly human. Ultimately, perhaps they're best considered as the tasting notes a critic might write after finishing the bottle.