The time has come to address a concern/comment that comes up regularly in the meta-list universe. As those who read this site regularly know, I create meta-lists in a very simple way: First, I collect as many “Best of ___” lists as I can find. I favor critics’ lists over amateur lists, but I don’t discriminate based on the length of the list – a Top 1000 list is just as good as a Top 10 or Top 5 list. Second, I take each item on each list and give it one point. Then I add up all the points to see which items are on the most lists, and I arrange them accordingly.
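The method above is simple enough to sketch in a few lines of code. Here is a minimal illustration in Python (the critics' lists and titles are made up for the example; any real meta-list would use many more lists):

```python
from collections import Counter

def build_meta_list(lists):
    """Give every item on every list one point, then rank by total points."""
    points = Counter()
    for best_of_list in lists:
        for item in best_of_list:
            points[item] += 1  # one point per appearance, regardless of rank
    return points.most_common()  # highest point totals first

# Hypothetical critics' lists of different lengths --
# a Top 3 list counts the same as a Top 5 list
critic_a = ["Citizen Kane", "Vertigo", "Tokyo Story"]
critic_b = ["Vertigo", "Citizen Kane", "2001: A Space Odyssey",
            "The Godfather", "Tokyo Story"]
critic_c = ["The Godfather", "Citizen Kane"]

print(build_meta_list([critic_a, critic_b, critic_c]))
# "Citizen Kane" appears on all three lists, so it tops the meta-list with 3 points
```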
There are some in the list-verse who disagree with my methods. The issue arises in two contexts. First, some commenters (and meta-listers) believe in weighting the rankings on each list: for example, giving the number 1 item on a Top 10 list 10 points, the number 2 item 9 points, and so on. As I explain below, this is an example of "a little knowledge is a dangerous thing," and it usually skews meta-lists in horribly wrong directions. Second, some commenters believe that I shouldn't combine lists of different lengths, or shouldn't include the lower-ranked items from longer lists. For example, it's OK to give points to all 10 movies on a Top 10 list but not to all 1000 movies on a Top 1000 list. Once again, this is based on a misunderstanding of a mathematical truth.
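To see how rank-weighting can skew a meta-list, here is a toy illustration (hypothetical lists and made-up titles): under the one-point method, a film that appears on every list still beats a film only one critic loves, but the weighted variant can flip that order.

```python
def tally(lists):
    """One point per appearance, regardless of rank."""
    scores = {}
    for lst in lists:
        for item in lst:
            scores[item] = scores.get(item, 0) + 1
    return scores

def weighted(lists):
    """The rank-weighted variant: the top of an n-item list gets n points,
    the last item gets 1 point."""
    scores = {}
    for lst in lists:
        n = len(lst)
        for rank, item in enumerate(lst):
            scores[item] = scores.get(item, 0) + (n - rank)
    return scores

# Two critics both place "Consensus Classic" third;
# only one critic crowns "Pet Favorite"
lists = [
    ["Pet Favorite", "Filler A", "Consensus Classic"],
    ["Filler B", "Filler C", "Consensus Classic"],
]

print(tally(lists))     # Consensus Classic: 2 points, Pet Favorite: 1
print(weighted(lists))  # Pet Favorite: 3 points, Consensus Classic: 2 -- the order flips
```

A single critic's enthusiasm outvotes the consensus of two, which is exactly the kind of skew described above.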
Both complaints are based on a simple mathematical fallacy. The commenters believe that they are dealing with a universe that consists of the list and the items on it. If the universe of movies consisted of the 10 movies on a Top 10 list, then it would make sense to say that those 10 movies make up 100%. Since the movies are ranked 1 through 10, it would make sense in that universe to take that 100% and divide it up according to the rankings. So, the first movie on the list would get the highest number of points, and the rest of the movies would receive percentages based on their rank on the list. In such a case, the difference between the number 1 item on the list and the number 10 item would be HUGE. Likewise, if you believed that a Top 1000 list was the entire universe of that list, then the difference between item 1 and item 1000 would be even HUGER. In that case, I could see why people wouldn't want me to give equal points to the items on a Top 10 list (where 100% is divvied up among 10 items) and a Top 1000 list (where 100% is divided up among 1000 items). The items near the bottom of the Top 1000 list would seem hardly fit to share space with the big numbers of the Top 10 lists. BUT THIS IS ALL WRONG!!!!!
[NOTE: You’ll notice that I didn’t give exact percentages, even for the misguided theory that a list is a universe to itself. That’s because the math is beyond my meager capabilities. That practice of listers who give 10 points to the highest, 9 points to the next, etc., has no basis in math as far as I can tell – it’s the mathematical equivalent of winging it. To get the correct percentage score out of 100% for each ranked item in a top 10 list, you would need to do something like the following:
EQUATION 1: a + b + c + d + e + f + g + h + i + j = 100
“EQUATION” 2: a > b > c > d > e > f > g > h > i > j AND
EQUATION 3: a/100 – b/100 = b/100 – c/100 = c/100 – d/100 = d/100 – e/100 = e/100 – f/100 = f/100 – g/100 = g/100 – h/100 = h/100 – i/100 = i/100 – j/100
Forgive me if I don’t solve for the 10 variables.]
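For the curious, here is one way to see that the 10-9-8 scheme really is winging it: those points sum to 55, not 100, so they never divide up the full 100%. Meanwhile, the three conditions above are actually satisfied by any strictly decreasing sequence with equal gaps that sums to 100 (for example 19, 17, 15, ..., 1), so there isn't even a single "correct" weighting to choose. A quick check, purely for illustration:

```python
classic = list(range(10, 0, -1))   # 10, 9, 8, ..., 1 -- the common "winging it" scheme
print(sum(classic))                # 55, not 100: it never divides up the full 100%

# One sequence that DOES satisfy all three equations: equal gaps (Equation 3),
# strictly decreasing (Equation 2), and summing to 100 (Equation 1)
solution = list(range(19, 0, -2))  # 19, 17, 15, ..., 1
print(sum(solution))               # 100
gaps = {solution[i] - solution[i + 1] for i in range(len(solution) - 1)}
print(gaps)                        # {2}: every gap between neighbors is the same
```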
You may be asking now: what is wrong with weighting the ranked items on a list (besides the impossible math)? And how can you possibly give equal points to items on lists of different lengths? Physicists will understand when I say: for the same reason that Newtonian physics works in almost every situation you and I will ever encounter. In certain universes, you don't have to be exactly accurate. The fundamental flaws in Newtonian physics only reveal themselves in rarely encountered situations, such as at speeds approaching the speed of light.
The problem (really, the solution) is that a list is not a universe. Think of it more as the cream that rises to the top of the milk bottle. You wouldn't define milk based only on the cream, right? Well, you shouldn't measure the "best" of something by comparing it only to itself, but to the entire universe of items that exist. Taking movies as an example, it is estimated that more than 500,000 movies exist in the world. So when I see a list of the 10 best movies of all time, I am comparing it to those 500,000 movies.
[Some readers may object that the people making these lists haven't seen every movie, read every book, seen every work of art, etc. If we reject the objective standard, then (using movies as an example) I'd have to know how many movies each lister has seen, so that I'd know the universe we're dealing with. For example, I have rated 2,355 movies on IMDB.com. If I made a top 100 list, could I only compare it to lists by people who've seen exactly 2,355 movies, or could I expand it to people who've seen at least 2,355? Or, worst case scenario, would I only be able to compare myself with other listers who have seen the exact same 2,355 movies as I have? What would I do about lists made by groups of authors or editors? Would I need to know their specific, unique universe of movies? I believe this approach would make meta-listing impossible, and I'd rather not go there.]
If there are 500,000 movies, then a Top 10 list contains 0.002% of all movies, and a Top 1000 list contains 0.2% of all movies. While 0.002% and 0.2% are very different numbers when compared to each other, they are both well under 1% of all movies ever made, and so they are essentially equivalent. Maybe it would be better to say that I only include lists whose items constitute less than the top 1% of the total population of items being rated. In the real universe, the number 1 movie on a Top 10 list and the 999th movie on a Top 1000 list are equal for all relevant purposes because (assuming 500,000 total movies, which may be low) they both fall within the top 0.2% of all movies ever made. Sure, there may be slight percentage differences between the rankings on each list, or between lists, but none of those differences comes close to overcoming the fact that all the items on all the lists are within the top two-tenths of one percent of all movies ever made. I could repeat the exercise with works of art, photographs, musical recordings, works of literature, athletes, famous individuals, inventions, scientific discoveries and other lists, but I won't.
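The arithmetic here is easy to verify (assuming the 500,000-movie estimate; the true count may well be higher):

```python
TOTAL_MOVIES = 500_000  # the estimate used above; the real number may be higher

top_10_share = 10 / TOTAL_MOVIES * 100      # percent of all movies on a Top 10 list
top_1000_share = 1000 / TOTAL_MOVIES * 100  # percent of all movies on a Top 1000 list

print(f"Top 10:   {top_10_share}% of all movies")    # 0.002%
print(f"Top 1000: {top_1000_share}% of all movies")  # 0.2%

# Both lists are tiny slivers of the whole universe of movies: well under 1%
assert top_10_share < 1 and top_1000_share < 1
```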