What is different about Realmworks?

Typically, review outlets for level and game design follow no quantifiable standard by which others can validate or approximately replicate their critiques. How they score a work of art is therefore often esoteric. Because their analyses need not rest on any defined substructure, their valuation of a work may vary not only from critic to critic but also between their own reviews. Readers are left with only a low-resolution summary of how the critic arrived at the work’s value. Critics may also be swayed by personal opinion or bias to weigh some attributes of a work over others for no reason beyond their subjectivity. For instance, averaging the scores of critics who used no particular rubric, or who used different rubrics, yields a poorly engineered overall score, since there is no way to trace the contribution of each element individually.

How are scores calculated?

Scores for reviews are calculated using a comprehensive rubric composed of various elements. Each element contributes to the overall score. Depending on the scoring framework, elements may be weighted or unweighted.

Why are there different scoring methodologies?

We want to give reviewers some leeway by allowing them to use an alternative rubric that we have approved. We do this when reviewers are better able to articulate values for different elements based on the work they are grading. An element may also be omitted because the game in question does not use it. Usually the standard rubric is used; when a different rubric is used, reviewers must undergo an approval process so that we can verify the validity of the proposed rubric. Omitted elements result in normalized review ratings. The goal is to distinguish the different aspects of a work of art and isolate them for analysis, so each element must be distinguishable from every other element. If one element “collapses” into another (in effect, it is the other element expressed in a different way), we consider both the rubric and any rating given through it erroneous.
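The FAQ does not spell out the normalization formula. A minimal sketch, assuming normalization means rescaling the earned points by the maximum points of the elements actually included (the function name and the 10-point scale are hypothetical):

```python
def normalized_rating(scores, max_points, scale=10.0):
    """Rescale a review so omitted elements do not drag the total down.

    scores: points earned for each included element
    max_points: maximum points for each included element
    scale: the full rating scale (assumed 10-point; hypothetical)
    """
    return scale * sum(scores) / sum(max_points)

# Example: Storyline omitted, so only four categories (2 points max each)
# are rated; the result is still expressed on the full scale.
print(normalized_rating([1.5, 1.75, 1.0, 1.25], [2, 2, 2, 2]))  # 6.875
```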

What does your standard rubric look like?

At the moment, the standard rubric for level design is as follows:

  • Architecture (2)
    • Structure (1)
    • Innovation (1)
  • Atmosphere (2)
    • Visual Immersion (0.75)
    • Auditory Immersion (0.75)
    • Detail (0.5)
  • Gameplay (2)
    • Entertainment (0.5)
    • Intensity (0.5)
    • Novelty (0.5)
    • Flux (0.5)
  • Visual Impact (2)
    • Concept Impression / Grandness (1)
    • Visual Awe (0.5)
    • Visual Creativity (0.5)
  • Storyline (2)
    • Character Development (0.75)
    • Plot Development (0.75)
    • Depth (0.5)
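Under this rubric, each category’s subcomponent weights sum to 2 points, for a 10-point maximum overall. A sketch, assuming the overall score is simply the sum of awarded subcomponent points (the function and dictionary names are illustrative, not the site’s actual implementation):

```python
# Standard level-design rubric: category -> subcomponent -> maximum points.
RUBRIC = {
    "Architecture": {"Structure": 1.0, "Innovation": 1.0},
    "Atmosphere": {"Visual Immersion": 0.75, "Auditory Immersion": 0.75,
                   "Detail": 0.5},
    "Gameplay": {"Entertainment": 0.5, "Intensity": 0.5,
                 "Novelty": 0.5, "Flux": 0.5},
    "Visual Impact": {"Concept Impression / Grandness": 1.0,
                      "Visual Awe": 0.5, "Visual Creativity": 0.5},
    "Storyline": {"Character Development": 0.75, "Plot Development": 0.75,
                  "Depth": 0.5},
}

def overall_score(awarded):
    """Sum awarded points across all categories and subcomponents."""
    return sum(sum(sub.values()) for sub in awarded.values())

# Awarding every subcomponent its maximum yields the 10-point ceiling.
print(overall_score(RUBRIC))  # 10.0
```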

How do you score each subcomponent?

It’s very difficult to quantify complicated information that doesn’t have a clear mathematical framework, but we can at least divide subcomponent values in a way that distinguishes different levels of quality. Hence we use the following general rule:

3/4 = Baseline, 1/2 = Sufficiency, 1/4 = Necessity

In other words, a score of 3/4 of the full points means the subcomponent is equal in quality to the comparison baseline. A score of 1/2 means the subcomponent does the bare minimum to maintain adequacy. A score of 1/4 means it does just enough to not be completely broken or absent. The baseline used is almost always the original game on which the mod was created.
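The rule above can be expressed as fixed fractions of a subcomponent’s maximum points. A sketch with hypothetical names (the mapping follows the 3/4, 1/2, 1/4 rule stated above):

```python
# General rule: qualitative level -> fraction of a subcomponent's maximum.
LEVEL_FRACTION = {
    "baseline": 0.75,    # equal in quality to the comparison baseline
    "sufficiency": 0.5,  # bare minimum to maintain adequacy
    "necessity": 0.25,   # just enough not to be broken or absent
}

def subcomponent_points(max_points, level):
    """Convert a qualitative judgment into points for one subcomponent."""
    return max_points * LEVEL_FRACTION[level]

# A "Visual Immersion" judged equal to the base game (max 0.75 points):
print(subcomponent_points(0.75, "baseline"))  # 0.5625
```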

What does the star rating mean?

The star rating is not the same as the review rating; it signifies the tier in which the work was placed.

What are tiers?

In addition to the review system, we separate works into five tiers, numbered 1 to 5. The table below shows how a review rating converts to a tier rating. Our review ratings tend to follow a Pareto distribution, so the tier approximates the percentile in which a work falls based on the review rating it received.

Review Rating | Tier Rating

What is the point of the scoring method?

The goal of the scoring method is twofold. First, we want to move as far as possible away from individual opinion and focus on what can be determined objectively from the work. Second, we aim to maintain and improve a scoring method that can be largely replicated and has tractable empirical validity, as opposed to pure reasoning based on individual assumptions. We seek to isolate level design into distinguishable elements so that a review can determine the quality of each one individually, thereby calibrating to some degree the author’s or authors’ skill and creativity in developing each. We also aim to engineer a standardized rubric with empirical utility behind it: eventually, we hope to mold our standard method to capture the aspects of level design that stimulate the human psyche independently of opinion, and to draw on existing research in data visualization and psychology to create sub-elements that quantify visual quality.

Why did the reviewer score different parts of the mod or game?

For very large mods and games, a reviewer may partition the levels into groups, score each group, and use the average of those scores as the overall rating.
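For instance, if a reviewer scores three groups of levels, the overall rating is their mean. A minimal sketch (the function name is illustrative; the FAQ does not specify whether the average is weighted by group size, so an unweighted mean is assumed):

```python
def partitioned_rating(group_scores):
    """Average the per-group ratings of a partitioned mod or game."""
    return sum(group_scores) / len(group_scores)

# Three level groups rated 7.5, 8.0, and 6.5 on a 10-point scale:
print(round(partitioned_rating([7.5, 8.0, 6.5]), 2))  # 7.33
```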

Can I apply to become a reviewer?

If you are interested in writing reviews, contact design@psychomech.com. Ideally we want individuals with a long history of gaming experience in the games they seek to review. We also want reviewers to demonstrate a history in visual design, so send your portfolio or references to your most recent work as well. If we are interested, we will ask you to write a review of particular level(s) in a game using our scoring method. If we are satisfied with the review you submit, you will be invited to write reviews. You can find more information on the reviewers page.

Why are some reviews not in video form?

We give reviewers the freedom to present their review either in video or in written form.

Do you review games as well?

Our scoring method is aimed at critical reviews of level design, so while it may still apply to whole games, it is more appropriate for individual levels or sets of levels. Some elements also require a baseline in order to attribute a value to them, and we typically use the game’s own collection of levels (or a portion thereof) as that baseline.
