Given the problems raised in part one, some readers are likely asking: how can we fix these issues? A few of them clearly cannot be changed. Companies release games to make money, and there's only so much money to go around; time is just as limited. So for three of our five problems, there is very little we can do. What we can focus on are the reviews themselves and how we perceive them.
A major hurdle is the focus on review scores. Whether we like it or not, review scores are some of the most important numbers in the video game world. Obsidian famously missed out on a bonus because Fallout: New Vegas scored an 84 instead of an 85 on Metacritic. The most radical idea is to remove scores altogether and let the review stand on its content without any number attached; a few sites actually do this. The problem is that review scores have been around for so long that changing this would be quite difficult. Another drawback is that there would no longer be any quantitative data to examine. Additionally, a small section of a review, or a single issue in a game, could have an outsized impact if the reviewer doesn't adequately explain why it doesn't ruin the game, or if the gamer doesn't read that far.
A much more practical option is to limit review scales to 10 or 20 points. This allows delineation without trying to differentiate by 0.1. It would also let sites expand the range of the scale they actually use. Most game sites treat a 7.0 or 7.5 as the low end of their good range, and by the time you get down to a 5, there is usually little left to like. This can be tricky, since a bland but technically sound game is different from a game that simply doesn't work as it should. However, true 1.0 and 2.0 games don't come around as often as 7-to-9 games. Extending the range of scores considered good could be an interesting initiative. Of course, gamers who visit the site would have to adjust.
To give a bit more of an example: say 7.0 to 10 covers various levels of good to amazing games, and the 6s are various levels of mediocre. Currently, 5.9 down to 1.0 or 0 is used for what most sites consider bad games. Now imagine expanding the good side of the scale even a little: 6.0 to 10 is considered good to amazing, the 5s are mediocre, and once you drop into the 4s you are in dangerous territory. The difference may be small, but it theoretically gives reviewers more room to divide titles than the typical 3 points, and psychologically it could help.
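The expanded scale described above boils down to a simple set of cutoffs. As a hypothetical illustration, here is that mapping sketched in Python; the 6.0 and 5.0 thresholds follow the example in this paragraph, not any actual site's policy:

```python
def categorize(score: float) -> str:
    """Map a 0-10 review score to a verdict under the expanded scale.

    Cutoffs are the hypothetical ones from the article's example:
    6.0+ is good to amazing, the 5s are mediocre, below 5 is bad.
    """
    if score >= 6.0:
        return "good to amazing"
    elif score >= 5.0:
        return "mediocre"
    else:
        return "bad"

# Under this scheme, a 6.2 counts as good, whereas on the typical
# scale it would sit in mediocre territory.
```

Shifting the cutoff from 7.0 down to 6.0 is the whole change: one threshold moves, and the "good" band grows from 3 points to 4.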
Obviously you could simplify the scale even further, to something like "buy it" or "skip it," perhaps with a third option, at least in the US: "rent it." Some sites use systems like this, but they limit granularity. They also make it harder for a gamer to figure out where their gaming dollars should go. Five "buy it" scores don't tell you which games are the best of the group; the gamer may want them all but only have the cash for one right now. Ultimately "best" is subjective, but more variance than 2 or 3 possible scores can help a gamer figure out which game is for them.
The biggest step can be taken by gamers themselves: stepping back from the fanboy wars that argue Game X is better because it got a 9.3 rather than a 9.2, and reading the content of the review rather than just the score. Refer to my comparison of Just Cause 3 reviews on IGN in part one. The first patch on the PS4 version of Just Cause 3 fixed the loading issues. There are still times when the game could run better, but what if that's not as important to the gamer? What if the PS4 is their only option to game? Should they completely avoid Just Cause 3 just because of those issues? Without actually reading the review, the player wouldn't know that was the problem. Consider, too, that you're buying a game based on a purely subjective number tacked onto the end of an opinion you may or may not agree with, and you are inherently trusting that the score matches the content of the review. Sometimes a review's tone is very different from what its score would indicate. This may not happen often, but it is something to keep in mind.
What are your thoughts? What are your ideas on how reviews can be fixed? Sound off in the comments below.
Originally posted by me on Examiner.