Lusipurr has ruined RPGamer’s podcast one too many times, and the staff of RPGamer finally take matters into their own hands, turning the tables on Lusi and his minions. Sabin hosts, with guest panelists Paws and Firemyst, in this CatFancy-Crossover-Cast!
-
That was a little strange hearing Chris start up the show.
I think I have 4 different copies of the original Final Fantasy. I might have a problem.
What is it about Namco’s name that makes it so easy to Spoonerize? I always want to call them Bamco Nandai.
The interpretation of the review system has become broken. So many sites now grade on an academic curve, so a 7/10 is seen as mediocre when that should still be seen as an above-average game. This is the same problem that sealed the fate of Alpha Protocol. Unfortunately, I don’t see this getting fixed anytime soon.
-
@Dan (and before listening to the podcast): I was recently thinking about reviews and scores, and I came up with an idea. Instead of huge 10-point or even 5-point scales, what if there were only 3 points? 1 for below-average games, 2 for average games, 3 for above-average. This gives the reader something to see at a glance if they wish, but not SO much info that they may as well skip the reviewer’s actual REVIEW, which clarifies and justifies the score (not to mention exemplifies their hard work reviewing the game). It would be intentionally vague. I also like the idea of simply NO review score, but I think this might be a happy medium. Perhaps it’s too simplistic? Or maybe it risks lumping outright unplayable games in the same category as okay-but-sub-par efforts?
-
I don’t know if I’d call scoring laziness on the part of the reviewer so much as fear on the part of the person or company publishing or hosting the review. Fear that their review will get passed over because it’s not quick enough to digest. Fear and conformity: if they don’t measure a game numerically, their review won’t be ranked on the aggregation sites, which is what gets their name freely advertised.
Review scores and their constant re-tooling and re-thinking are perhaps people attempting to build a better mousetrap in a world where mice don’t exist. In general, classification of things like movies and games is a limiting factor. More often, people turn things down based on their classifications than take them up. OC Remix, which you link to at the bottom of the site, doesn’t offer a way to search their remixes by genre because the guy who runs the site (DJ Pretzel) thinks this would only lead to people listening to fewer remixes, not more.
-
@Mel: A 3-point scale could be used effectively, but I admit I’m really a shades-of-grey kind of person, so I’d often find myself wanting to waffle between two scores. My site uses a 10-point scale and I still find myself wanting 0.5s. There are times I really hate putting scores on reviews, but I know that there are some people who just won’t read a whole review, and that’s my compromise. I won’t put up the pro/con lists like some sites do, because that’s a guarantee that most of your readership will skip over the wall of text.
@Lusi: I don’t think the numbers themselves are necessarily the problem, and I’m not sure flat-out scrapping the system is the best answer. Much like nuclear energy, it’s all in how you use it, and right now review scores aren’t being used productively; they’ve been weaponized. Unfortunately, human beings have an innate ability to find the worst way to use a good idea. (This is why we can’t have nice things!) If things were standardized from site to site and reviewer to reviewer, Metacritic would be a great resource, because it would show whether a game is truly divisive or whether everyone agrees that it is terrible, mediocre, or great. Sadly, that’s just a pipe dream.
-
You don’t link to any of the funny images mentioned in the podcast, Lusi. I give this episode’s post a 3.11 out of 2.
-
LOL
I’ve seen the Sony and MS units before, but they bear reposting. That 3DS XLpro mockup, however, is pretty damn funny. Put that d-pad right next to that other d-pad!
-
@Lusi: I think having a tool that allows rough comparison between games is very important to the industry. I also think that a single review score by itself really doesn’t hold a ton of merit, but the ability to look at two games and gauge their relative quality is a nice option. That being said, people only report averages; they never report the standard deviation (SD) of the scores, and I think that kind of information would further enlighten people, if they were willing to look at it.
As a thought exercise, say Game A had an average score of 7.2/10 and an SD of 0.6 (indicating a fairly wide spread of review scores, assuming a large sample size), while Game B had a mean of 8.1/10 but an SD of 0.1 (meaning everyone roughly agrees on the score). The easiest thing to see is that virtually everyone agrees on how good Game B is, while Game A only strikes a chord with a certain segment of reviewers.
Looking a little deeper into the stats, you would find that the games are not significantly different by research standards. Many times, people use 2 times the SD as the threshold for significance, so:
7.2 + (2*0.6) = 8.4
8.1 - (2*0.1) = 7.9
And because those two ranges overlap, by this definition the games aren’t that different in score. If the numbers were, say, 3.8/0.4 and 7.1/0.6, you’d get:
3.8 + (2*0.4) = 4.6
7.1 - (2*0.6) = 5.9
And because they don’t overlap, you could safely say that one game is better than the other.
Of course, sites like Metacritic would never go through enough effort for something like this, and only a small segment of the population would probably even give a crap. But to me THAT is really where review scores are helpful. Not as a floating data point that some fanboy can trumpet in a forum somewhere.
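For anyone who wants to play with that rule of thumb, here is a minimal Python sketch of it (the function name and the fixed k = 2 multiplier are just illustrative choices to match the worked examples, not an established significance test):

def scores_distinguishable(mean_a, sd_a, mean_b, sd_b, k=2.0):
    """Rule of thumb from the comment above: treat two games' average
    review scores as meaningfully different only if their mean +/- k*SD
    ranges do not overlap (k = 2 here, matching the worked examples)."""
    # Sort so `low` is the lower-scoring game and `high` the higher-scoring one.
    (low_mean, low_sd), (high_mean, high_sd) = sorted(
        [(mean_a, sd_a), (mean_b, sd_b)]
    )
    # Compare the top of the lower range against the bottom of the higher range.
    return low_mean + k * low_sd < high_mean - k * high_sd

# Reproducing the worked examples above:
print(scores_distinguishable(7.2, 0.6, 8.1, 0.1))  # False: 8.4 > 7.9, ranges overlap
print(scores_distinguishable(3.8, 0.4, 7.1, 0.6))  # True: 4.6 < 5.9, no overlap

Strictly speaking, a formal comparison would use the standard error (the SD divided by the square root of the number of reviews) rather than the raw SD, but as a back-of-the-envelope check the overlap test gets the idea across.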
-
@DCS: I think this is a pretty intelligent use of review scores, but I take issue with the idea that (even when accounting for standard deviations) one game’s average score being higher than another’s means that the first game is “better”. It’s still up to the subjective likes and dislikes of the player.
How I tend to utilize reviews is by asking, “What does this reviewer NOT like about the game, and can I personally overlook enough of those things, or do enough of those things not bother me, to justify playing the game?” And of course I read multiple reviews when I do this. Usually, all the good points a game has to offer are things I already know about, or else I wouldn’t have been interested enough to read the review in the first place.
-
@Mel: Perhaps “better” isn’t the most appropriate word for me to use there, but it does provide some measure for analyzing general opinion. And just because something gets a low score doesn’t mean you can’t derive enjoyment from it. Mega Shark vs. Giant Octopus is a wretched movie by any metric, but there are people who absolutely love it because of how ridiculous it is. I completely agree with you that scores alone should never be the deciding factor in a person’s decision of whether to watch, read, or play anything; that’s why we have words with our reviews.
-
Ethos, you touched upon a real evil in this industry with the Metacritic-related bonuses. It puts everyone, from the player to the reviewer to the developer, in a terribly unprofessional position at times.