Rethinking the value of Scrabble tiles – When Alfred Butts invented Scrabble in 1938, he based the values and distribution of letters on the frequency of their appearance on the front page of the New York Times.
Today, Butts’ distribution is still the standard for English play.
What has changed in the intervening years is the set of acceptable words, the corpus, for competitive play.
As an enthusiastic amateur player I’ve annoyed several relatives with words like QI and ZA, and I think the annoyance is justified: the values for Scrabble tiles were set when such words weren’t acceptable, and they make challenging letters much easier to play.
So what would a modern distribution look like?
To find out, I’ve developed an open source package called Valett for determining letter valuations in word games based on statistical analyses of corpora.
In addition to calculating the frequency of each letter in a corpus, Valett calculates the frequency by word length and the incoming and outgoing entropy for each letter’s transition probabilities.
One can then weight these properties of the corpus based on the structure of the game and arrive at a suggested value for each letter.
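To make the statistics concrete, here is a minimal sketch in Python of the first two: overall letter frequency and frequency broken down by word length. This is illustrative only and does not reproduce Valett's actual API; the function name and the tiny word list are my own.

```python
from collections import Counter, defaultdict

def letter_stats(words):
    """Count overall letter frequency and frequency by word length.

    `words` is any iterable of lowercase dictionary words. Hypothetical
    helper for illustration -- not the Valett package's API.
    """
    freq = Counter()                  # letter -> count across the whole corpus
    freq_by_len = defaultdict(Counter)  # word length -> (letter -> count)
    for w in words:
        freq.update(w)
        freq_by_len[len(w)].update(w)
    return freq, freq_by_len

# Toy corpus; a real run would use the full TWL06 or SOWPODS word list.
freq, freq_by_len = letter_stats(["qi", "za", "jo", "quiz"])
```

With a real corpus, `freq_by_len[2]` is exactly the statistic that rewards letters like X for appearing in many two-letter words.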
For Scrabble, Valett provides three advantages over Butts’ original methodology.
First, it bases letter frequency on the exact frequency in the corpus, rather than on an estimate.
Second, it allows one to selectively weight frequency based on word length.
This is desirable because in a game like Scrabble, the presence of a letter in two- or three-letter words is valuable for playability (one can more easily play alongside tiles on the board), and the presence of a letter in seven- or eight-letter words is valuable for bingos.
Finally, by calculating the transition probabilities into and out of letters it quantifies the likelihood of a letter fitting well with other tiles in a rack.
So, for example, the probability distribution out of Q is steeply peaked at U, and thus the entropy of Q’s outgoing distribution is quite low.
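The Q example can be checked with a short sketch: count adjacent-letter transitions within each word, then take the Shannon entropy of each letter's outgoing distribution. Again this is a hypothetical illustration in Python, not Valett's implementation, and the word list is a toy.

```python
import math
from collections import Counter, defaultdict

def outgoing_entropy(words):
    """Shannon entropy (in bits) of each letter's outgoing transitions.

    Transitions are counted over adjacent letter pairs within each word.
    Illustrative sketch only, not the Valett package's API.
    """
    transitions = defaultdict(Counter)
    for w in words:
        for a, b in zip(w, w[1:]):
            transitions[a][b] += 1
    entropy = {}
    for letter, nexts in transitions.items():
        total = sum(nexts.values())
        entropy[letter] = -sum(
            (n / total) * math.log2(n / total) for n in nexts.values()
        )
    return entropy

# In a full English corpus nearly every Q is followed by U, so Q's
# outgoing entropy is close to zero; versatile letters like E score high.
h = outgoing_entropy(["quiz", "quay", "qi", "eta", "ear", "eel"])
```

A low outgoing entropy means a letter constrains what can follow it, which is exactly why Q is hard to fit into a rack.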
Intuitively, I’ve long felt that letters like Z and X were overvalued in Scrabble, especially X since it is prevalent in the two-letter word list: XI, XU, AX, EX, OX.
In contrast, V and C seem undervalued, with no two-letter words.
Using Valett with an even weighting of letter frequency, frequency by length, and transition entropy I’ve generated a new value distribution that roughly matches my intuition:
A: 1 B: 3 C: 2 D: 2 E: 1 F: 3 G: 3 H: 2 I: 1 J: 6 K: 4 L: 2 M: 2
N: 1 O: 1 P: 2 Q: 10 R: 1 S: 1 T: 1 U: 2 V: 5 W: 4 X: 5 Y: 3 Z: 6
Note: This distribution is calculated with TWL06. G drops from 3 to 2 using SOWPODS.
For the weighting of frequency by length, I weight two-letter words most heavily, then three- and seven-letter words, then eight-letter words.
Incoming and outgoing entropy are weighted evenly.
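One plausible way to blend these signals into point values is to normalize each one to [0, 1], invert it so that rare, poorly-connecting letters score high, and scale the weighted average onto a 1-to-10 range. This is my own sketch of that idea, not the weighting scheme Valett actually implements (which is configurable in the package):

```python
def suggest_values(freq, length_score, entropy, weights=(1.0, 1.0, 1.0)):
    """Blend three per-letter signals into suggested tile values.

    Illustrative sketch only. Each signal is min-max normalized to [0, 1];
    "harder" letters (low frequency, low entropy) get a hardness near 1,
    which is then scaled onto a 1-10 point range.
    """
    def norm(d):
        lo, hi = min(d.values()), max(d.values())
        return {k: (v - lo) / (hi - lo) if hi > lo else 0.0 for k, v in d.items()}

    f, l, e = norm(freq), norm(length_score), norm(entropy)
    wf, wl, we = weights
    total = wf + wl + we
    values = {}
    for c in freq:
        # Invert each signal: common, well-connected letters score low.
        hardness = (wf * (1 - f[c]) + wl * (1 - l[c]) + we * (1 - e[c])) / total
        values[c] = 1 + round(9 * hardness)
    return values

# Toy two-letter example: E is common and flexible, Q is rare and rigid.
values = suggest_values({"e": 100, "q": 1}, {"e": 50, "q": 1}, {"e": 4.0, "q": 0.1})
```

Varying the `weights` tuple is the analogue of the even weighting described above; favoring entropy over raw frequency, say, would push letters like Q and V further apart.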
There are several things I like about this new distribution.
Looking at the statistics, Q is clearly an outlier both in frequency and entropy, and in this distribution it is also an outlier in value.
V bumps up to five points to match X, close behind J and Z, which have dropped to six.
U, as the most challenging vowel, jumps up to two points.
Overall there is downward pressure on the valuations to keep the justified separation from Q at ten points.
Most mysterious to me, C drops to two points despite its absence from the two-letter word list.
G jumping to three points is also surprising, though it stays at two using the SOWPODS corpus instead of TWL06.
(As a side note, it’s nice that the distribution changes are minor from TWL06 to SOWPODS, as they should be for word lists based on the same language.)
While this distribution is interesting, I’m not suggesting that it’s the best one.
I’m an amateur player and my perspective on the relative importance of frequency vs. transition entropy and frequency at various word lengths is informed by my imperfect knowledge of the game.
By publishing the code, which easily allows one to set all the weights, I hope to enable a data-driven discussion around letter valuation in Scrabble.
More broadly, I think Valett can provide the foundation for answering other interesting questions in word games, such as how to quantify the difficulty of Boggle boards (perhaps useful in a tournament setting as a means of normalization).
To that end I would welcome any pull requests on GitHub that add to the statistics generated from corpora, or add game-specific analyses like the included Scrabble analysis.
If you’d rather not write code but have ideas regarding Valett, just drop me a line!