Responding to an article published yesterday detailing how it weights review scores by importance, Metacritic has come forward claiming that the research is “wholly inaccurate”. With the website having come under fire in the past over how it scores games, many outlets were more than willing to accept the research by Adams Greenwood-Erickson, a course director at Full Sail University.

As detailed on Gamasutra, in a talk titled “A Scientific Assessment of the Validity and Value of Metacritic” at the Game Developers Conference in San Francisco, Greenwood-Erickson claimed to have discovered that the website applies a number of weighted values to reviews from major websites. Basing his research on information pulled from Metacritic itself and months of work comparing modelled scores against the actual scores found on the website, Greenwood-Erickson and his students concluded that there were supposedly six tiers of weighting applied to publications contributing to a game’s overall metascore.
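For readers unfamiliar with what such a weighting scheme would look like, the sketch below shows a minimal tiered weighted-average model of the kind the research describes. The publications, tier multipliers, and scores are placeholders invented purely for illustration; Metacritic has not disclosed its actual weights and, as covered below, disputes the figures from the talk.

```python
# Illustrative sketch of a tiered weighted-average metascore model.
# The tier assignments and multipliers are placeholder values for this
# example only; they are NOT Metacritic's actual (undisclosed) weights.

PUBLICATION_TIERS = {
    "Publication A": 1.5,   # hypothetical top tier
    "Publication B": 1.0,   # hypothetical middle tier
    "Publication C": 0.75,  # hypothetical lower tier
}

def modelled_metascore(reviews):
    """Weighted average of (publication, score) pairs.

    Scores are assumed to already be normalised to a 0-100 scale;
    publications not listed in the tier table default to a weight of 1.0.
    """
    weighted_total = 0.0
    total_weight = 0.0
    for publication, score in reviews:
        weight = PUBLICATION_TIERS.get(publication, 1.0)
        weighted_total += weight * score
        total_weight += weight
    return round(weighted_total / total_weight) if total_weight else None

if __name__ == "__main__":
    sample = [("Publication A", 90), ("Publication B", 80), ("Publication C", 70)]
    print(modelled_metascore(sample))  # -> 82 with the placeholder weights above
```

Greenwood-Erickson’s approach reportedly involved comparing the output of models along these lines against the scores actually published on the site.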

Metacritic has in turn responded by claiming that the research does not reflect its actual methods.

In a post on its Facebook page, the website claimed that:

It uses far fewer tiers than Greenwood-Erickson suggested,

The disparity between tiers is far less extreme than the research states,

And the article “overvalues some publications and undervalues others” while ignoring some altogether.

The post stated that “this isn’t anywhere close to reality” and that parts of the research were off to the point of being “comically so”. It did not go into detail, but a number of backers similarly remarked upon the supposed inaccuracy of the analysis. Amongst them was Niels Keurentjes, former owner of the now defunct Xboxic, who stated that “I laughed my ass off at the claim that ‘our’ reviews would be in the highest ranked tier with a 1.5 multiplier”, noting that the site’s last serious review had been published over four years ago. He went so far as to describe the article as a “steaming pile of bollocks”.

While these points suggest that Greenwood-Erickson’s research was wrong, there is no real way to confirm how inaccurate it actually is. Because Metacritic refuses to reveal how it weights reviews, or even how it arrives at an overall metascore, its rebuttal rests purely on its own word. Without verifiable counterpoints grounded in fact, the article can neither be confirmed nor conclusively refuted.

With Metacritic alleged to continually change the algorithm governing how each website’s review counts towards a game’s score, and with suspicions over how accurately the site combines vastly different methods of rating games, few people are willing to simply accept the response. Many replies to the Facebook post demanded that Metacritic back its claims with actual proof and reveal how its scores are calculated. Others, including colleagues of Greenwood-Erickson, have defended his article in responses to Metacritic’s post, citing his previous reliability. Further replies noted how the website’s scores adversely affect parts of the industry, with payment for development teams on occasion being tied to metascores. One such example is Fallout: New Vegas, whose team missed out on a desperately needed bonus after the game came in one mark short of the agreed score of 85, partly as a result of a questionable negative review.

It is currently unknown how, if at all, Metacritic will respond to these comments or attempt to back up its claims; for the time being, it simply maintains that the research into its methods is false.
