The Verge’s Review Scale

Product reviews can be tricky. Every reviewer has a different style and a different way of assessing a product, which is why no two reviews of the same product will ever read the same. At The Verge, we have built a reviews program that strives to standardize our reviews without abandoning the individuality of each of our reviewers. Below is a short guide to our methods and an explanation of our rating practices.

Our reviews are, first and foremost, centered on real-life experience with the product. They are based on the reviewer using the product for a substantial amount of time; they are never written off a spec sheet or from a fleeting hands-on. Whatever the product (a phone, laptop, TV, app, and so on), we strive to work it into our everyday lives and give the reader a picture of what it’s like to use in the real world.

In many cases, we combine that anecdotal experience with systematic (or synthetic) benchmarks, especially when it comes to performance. Most of the tests we run are industry-standard; the notable exception is how we evaluate battery life.

Our current battery testing method is entirely based on real-world usage. We use devices as we would in the real world and then evaluate how each one performs compared to devices we’ve tested previously. In the case of laptops, we set the screen to 200 nits of brightness (or as close as we can get using the laptop’s brightness controls) and then note what activities we did on the machine while evaluating its battery life.

This form of testing can produce different results for different users (which is why we include how we used the device during the test), but we feel that our real-world evaluation of battery life is more indicative of how a device will actually perform once you buy it than rundown tests that don’t reflect actual device usage.

Every reviewed product (unless otherwise noted) is given a score. We score a product based on a variety of performance, value, and subjective criteria. The score is not a weighted average; the editor reserves the right to adjust it to better reflect the overall assessment of the product, including the price and other qualities that aren’t always captured by rigid rubrics. A score is best viewed as a snapshot in time, measured against other devices available when the review is published; a device would likely not receive the same score if it were reviewed six months later, for example.

It’s possible we may change the score in a review after it is published due to software updates or other changes. We’ve only had to do that in exceedingly rare cases, and we will always explain why a score changed if and when it happens.

We assume the 10-point scale is relatively straightforward, but below is a short guide to how we view the numbers. All review scores are given in whole points; we no longer use half points or decimals when scoring a product.

Last updated August 1st, 2022, by Dan Seifert.


