Many Game Review(er)s Have Precarious Rating Systems

Have you ever read or listened to a game review, or a review of anything for that matter, and wondered how on Earth the reviewer arrived at their verdict? Maybe they gave it their highest rating, but you’re baffled as to how they could have reasoned their way there in the first place. Perhaps you saw a mod with a “Best Of” award even though the mod didn’t seem particularly outstanding. In the overwhelming majority of cases I’ve seen, mod reviewers in particular rely on judgment and reasoning that is only as sound as the reviewer’s own judgment and reasoning, which isn’t saying much: a review built on unaided intuition does little more than quietly display the reviewer’s biases and preferences. On top of that, they often never disclose the methodology they used to reach their conclusion. Taken together, these problems make reviews genuinely confusing to interpret.

Firstly, since there is no rigorous framework for what defines quality, a reviewer can simply “feel” that one mod is better than the next. If they can’t articulate the framework through which they formed that feeling, there is no reason to believe they judged both works by the same standard. The result, in effect, is two ratings derived from two different (and equally unaccountable) conclusions. It would be the equivalent of looking at two different works, judging the first solely on its visual quality and the second solely on its auditory quality, and then giving the first a far better rating. That isn’t a fair rating, because what the reviewer actually did was score one work under System A and the other under System B, two systems whose criteria are completely independent. There is no common scale on which to compare the two, yet one still came out ahead.

What many reviewers do is much the same. They have no rubric, and as a result they can stumble into all sorts of mistakes that skew their review process. It isn’t particularly credible. If you’re comfortable with that kind of thing, fine, but recognize it for what it is: the “Person’s Opinion Awards,” not a genuine effort at quality control, measurement, and comparison.
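To make the comparison problem concrete, here is a minimal sketch, in Python, of what a shared rubric could look like. Every criterion, weight, and score in it is hypothetical and purely illustrative; the point is only that when both works pass through the same system, “A rated higher than B” actually means something.

```python
# Hypothetical rubric: criteria and weights are made up for illustration.
# The weights sum to 1.0, so the result is a weighted average on a 0-10 scale.
RUBRIC = {
    "visuals": 0.25,
    "audio": 0.25,
    "gameplay": 0.35,
    "stability": 0.15,
}

def rate(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-10) into one weighted rating.

    Every work must be scored on every criterion, so two ratings
    produced by this function are actually comparable.
    """
    missing = RUBRIC.keys() - scores.keys()
    if missing:
        raise ValueError(f"unscored criteria: {sorted(missing)}")
    return sum(RUBRIC[c] * scores[c] for c in RUBRIC)

# Two mods scored under the *same* system, not System A vs. System B:
mod_a = rate({"visuals": 9, "audio": 4, "gameplay": 7, "stability": 8})
mod_b = rate({"visuals": 5, "audio": 9, "gameplay": 7, "stability": 8})
print(mod_a, mod_b)  # 6.9 vs. 7.15: now the comparison is meaningful
```

Notice that a reviewer judging only visuals would crown the first mod, and one judging only audio would crown the second; the shared rubric is what removes that ambiguity.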
