Fixing Reviews: The Lying Score

Hey hey! Before you go ahead, help me out and answer this poll ( http://www.easypolls.net/poll.html?p=4f03227fc2e1b0e4e257c294 ), please.

Thanks! The poll will be a great help later, when we start talking about the Critic Score. But for now, let’s turn our eyes to the User Score and its main problem.

In the end, it all comes down to this: review scores are tricky; they are not for everybody. For a scoring system to have any worth, it must have consistency, and not everybody is ready for that. You can’t call a game a masterpiece only to call it a disappointment at the end of the year. Review scores must also be honest and, believe it or not, even fewer people are ready for that. Here, I’m not talking about the flawed notion some outlets have that the average between 0 and 10 is 8. That’s just being mathematically deprived. Instead, I’m talking about Metacritic, Amazon, the App Stores and whatever other place aggregates scores from users in order to present a single piece of information: that the cosmos has voted and decided that game X is an 8.6 out of 10.

Guess what? They are all lying.

They are lying because they encourage users to lie in their reviews. Yes, that means the liar is ultimately you, Mr. User. It works like this:

Take a game you feel strongly about, like Super Mario Galaxy 2 (8.9 out of 10) or Child of Eden (7.8 out of 10). How could the Metacritic user score be so high/low? How dare they? This could be dangerous! Someone might buy/not buy the game because of this! Not to worry! You can fix this!

I can do it!

As in any other voting process, people don’t want their votes to go to waste. That’s why a gamer will write a review with a 2.0 score rather than their true belief, a 4.5. By adopting an extreme position, you are more likely to influence the final average. If you want to lower that Super Mario Galaxy 2 score, it’s more effective to rate the game absurdly low. That is the problem with averages: they take into account both the direction (I like it/I dislike it) and the intensity (how much I like/dislike it) of your vote. Voters, however, only have an incentive to tell the truth about the direction, not the intensity.

The solution is simple. Screw the average. Just use the median instead. The median is determined by the direction of the voters’ tastes alone, not their intensity: each vote only counts for which side of the middle it falls on, not for how extreme it is. Here, voting according to your true belief is the dominant strategy, as no voter has any incentive to distort their preference. If the voters who hated the game exaggerate downward, the result does not change; if they exaggerate upward, the result only moves away from what they want. The same goes for voters who loved the game: an honest vote already does everything an exaggerated one could, so there is nothing to gain by lying.

Time for a practical example! Let’s say a game was rated by 10 users with the following scores:

5; 5; 6; 6; 7; 7; 8; 9; 9; 9

The average is 7.1. You, however, believe the game is worth an 8.0. But if you give the game an 8.0, the average only climbs to about 7.2. If you say the game is a 10.0 instead – in other words, if you lie – the average becomes roughly 7.4, which is closer to your ideal score. In the median scenario, the original score is 7 and it stays 7 whether you vote 8, 10 or anything else: with eleven votes the median is the sixth value, and the votes already cast lock that value at 7. Exaggerating upward gains you nothing, and exaggerating downward could only ever push the result AWAY from your ideal score, so you have no incentive whatsoever to lie.
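If you want to check the arithmetic yourself, here is a quick sketch in Python (nothing beyond the standard library’s statistics module; the scores are just the hypothetical ones from the example above):

```python
from statistics import mean, median

# The ten scores already submitted in the example above.
existing = [5, 5, 6, 6, 7, 7, 8, 9, 9, 9]

print(f"before your vote: average {mean(existing):.1f}, median {median(existing):g}")

# Add one more vote: honest, exaggerated upward, exaggerated downward.
for label, vote in [("honest 8.0", 8.0), ("exaggerated 10.0", 10.0), ("lowball 1.0", 1.0)]:
    votes = existing + [vote]
    print(f"{label:>16}: average {mean(votes):.2f}, median {median(votes):g}")
```

Note that even the dishonest lowball vote leaves the median sitting at 7 while it drags the average all over the place, which is exactly the difference in incentives described above.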


Want to join the fight to save reviews from incompetent reviewers, whining gamers and stupid review systems? Stick it to the Man with a suggestion in the comments below!

2 Comments

  1. Casey

    That’s a very interesting idea! I think that’d be a very clever way to go about it 🙂

  2. Devin

    Yes! This is so correct. I want this now.