Happy It’s Still Winter Day, readers! Get snug and comfy in this cold weather* and curl up with a good website as I discuss the weighty subject of review scores. Weight might be the operative term, because these little numbers (or stars, or whatever) arguably carry more importance than the sum of the review text. The opinion of the reviewer must needs be boiled down to a single-character summation of every concept, nuance, and exception detailed in the review process. The simplicity of reading a review score, versus the time and effort required to read and digest the review text, means most people will effectively place the score (often found at the end of the review) above the body of the review, letting it color their perspective. Everything is reviewed on a cursory level: reviews are themselves reviewed by readers, judged often on the appropriateness of the review score as it compares to expectations and to the text of the review. To say contention arises because of this, both among consumers and developers, is an understatement. The subject has jumped to the fore again with popular games site Joystiq choosing to follow in our illustrious footsteps and leave their future reviews unscored.
When I was still only a reader of Lusipurr.com, one of the aspects I appreciated most was the scoreless reviews. Scores had always struck me as reductive, to use a recently popular term, and as attempting to boil down a mostly subjective process into an objective declaration. Aside from whatever difficulty the reviewer might have in finding a good fit for their review in score form, the real issue is the collateral damage these scores produce. The point at which it becomes most worrisome is when the scoring process infects the review process before a score is even applied. Likewise, when the review is defended by working backward from the originally posted score, something is being lost in the application of that score. Consumer outrage focused purely on the number attached to a review is a comically frequent occurrence, familiar to anyone who pays attention to game reviews posted on mainstream sites. Often it is not that the review and the score are deemed “wrong” or disagreeable, but that the review and score do not match. Claims of reviews “reading like an eight” but getting a seven, or some similar scenario, pettifog the hard work of both the reviewer and the game creators. Instead of consumers being informed about the product, they are only informed about a number the reviewer tied to that product. The review process, intended to make readers better consumers, is instead used to arm arguments about the reviewer or their process.
Other industries, famously the movie industry, also offer scores for their reviews, and all of these are filtered through score aggregators, the best known of which is Metacritic. This tool on its own is simply that: a tool collating data from around the internet (and what remains of print media). But it is not always so innocent, as the site must apply a weighted average to scores drawn from outlets that use different scales (scales spanning from five points to one hundred), and must even assign scores to reviews that give no clear score, or no score at all. In the process, even an outlet that abstains from scoring finds its work interpreted in a scored manner. Never mind the loss of control this represents for the reviewer; it proves an industry-wide insistence on these scores. With stories about Metacritic averages being wielded as a cudgel to withhold benefits and bonuses from developers, it is not difficult to see why that insistence is not entirely healthy.
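For the curious, here is a minimal sketch of what that sort of arithmetic might look like. To be clear, Metacritic does not publish its outlet weights or conversion rules, so every scale, weight, and number below is an illustrative assumption rather than the site’s actual method; the point is only that a four-star rave and an 8/10 both come out the other end as a number on a hundred-point scale.

```python
# Purely illustrative sketch: Metacritic's real outlet weights and conversion
# rules are not public, so every number below is an assumption for demonstration.

def normalize(score, scale_max):
    """Convert a score on an arbitrary scale (4/5, 8/10, 75/100) to 0-100."""
    return score / scale_max * 100

def weighted_average(reviews):
    """Each review is (score, scale_max, outlet_weight); returns a 0-100 aggregate."""
    total_weight = sum(weight for _, _, weight in reviews)
    weighted_sum = sum(normalize(score, scale_max) * weight
                       for score, scale_max, weight in reviews)
    return weighted_sum / total_weight

# A 4/5, an 8/10, and a 75/100 review, with made-up outlet weights:
print(weighted_average([(4, 5, 1.0), (8, 10, 1.5), (75, 100, 0.5)]))  # ~79.2
```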
Anything that can be done to get more people to read more of the review text, instead of skipping to the score and then maybe skimming with that number in mind, can only help games development. As more people become familiar with the actual critical opinions of these products, games will be bought and developed more intelligently. So too will the reviewing process be done in a smarter and more honest fashion, as any need to wedge a game’s critique into a certain score (or even the other way around) will be eliminated. Joystiq staff have described the scoring process as arduous: one they thankfully would not consider until the very end of a review, but one that was nevertheless difficult and seemingly arbitrary. Concerns over the unintended impact of scores on the development process were also mentioned, and they are concerns I wholly agree with. A review should not have more impact than, or even an impact similar to, the consumer purchasing practices involving that game. A review can influence those purchasing practices, but in that case it is still the money from sales being accounted for and not the numbers from scores. And in a time when more readers than ever are sensitive to industry collusion, I feel any steps to avoid it should be seriously entertained and examined.
The pressure to be a part of the scoring process all but assures that most outlets will be apprehensive about ditching it, considering that little number is so vital not only to game sales but to page views. So when a larger site decides to take a stand against such a practice, even to its own detriment, I take a degree of notice. It is the current position of Lusipurr.com that our reviews will go unscored, in any sense however loose, and it is this editor’s intent to see that remain the case for as long as I remain in place.
Now it is your turn, scoreless readers. Averagely weigh in on the matter. Give me a thumbs up or a thumbs in the butt down on this stance as it applies here and elsewhere. Are you perhaps more neutral regarding review scores? Do you firmly favor them? Comment, or I will give you an F!
*Does not apply to SiliconNooB