Data on score development
Printed From: Progarchives.com
Category: Site News, Newbies, Help and Improvements
Forum Name: Help us improve the site
Forum Description: Help us improve the forums, and the site as a whole
URL: http://www.progarchives.com/forum/forum_posts.asp?TID=42565
Printed Date: June 07 2025 at 11:24
Software Version: Web Wiz Forums 11.01 - http://www.webwizforums.com
Topic: Data on score development
Posted By: Sofagrisen
Subject: Data on score development
Date Posted: October 13 2007 at 08:54
Does anyone have data on score development for albums here on PA? I am trying to gather some myself, but if someone already has data, I don't need to. What I am trying to prove is my point that as albums get more votes, their score goes down. I have to wait a few months before my data gets any good. Anyhow, the only useful data so far is:
2nd Hands
8 votes: 4,59
26 votes: 4,12
Voice in the Light
3 votes: 4,75
30 votes: 4,16
This is the typical development of album scores. They start high, drop pretty fast at first, then ever more slowly … My point is, this must be the main consideration in the rating algorithm. At the moment it is not taken enough into consideration, so albums with many votes are effectively punished. I have also noticed this effect is considerably larger here than on, for example, Rate Your Music, though scores tend to be lower there because of the greater diversity of the people voting.
|
Replies:
Posted By: Easy Livin
Date Posted: October 13 2007 at 13:12
Mike is your man for this. I'm sure he will be able to reassure you that this is not the case.
|
Posted By: Sofagrisen
Date Posted: October 13 2007 at 13:36
Haha, this is the case. I have observed this effect for months, but I only started writing down my observations recently. The effect is very logical. People who vote early on an album are more likely to be fans and so on, and fans give better grades …
|
Posted By: Easy Livin
Date Posted: October 13 2007 at 15:55
Ah, well that would be a different reason, not simply because there are more ratings. What you say from that perspective does make sense though, an album will not find its true position for a while after it is added.
|
Posted By: Sofagrisen
Date Posted: October 13 2007 at 16:15
Easy Livin wrote:
Ah, well that would be a different reason, not simply because there are more ratings. What you say from that perspective does make sense though, an album will not find its true position for a while after it is added.
|
Well, that is correct in one sense and wrong in another. What is correct is that the more votes an album has, the less variation there is in its rating; it practically stabilises. However, it is not as if half of all albums start low and climb while the other half start high and decline. I have mostly studied high-scoring albums, so I am just going to say what happens to them, because I am not equally certain in the case of low-scoring albums, and anyhow high-scoring albums are the most important. They generally start by getting very good ratings. After 10 votes, they might have an almost perfect score. As time passes, more balanced reviews come in. Over the first 50 votes, the score of an album usually drops quite a lot. Over the next 50 votes the score drops again, but not nearly as much. The score continues to drop less and less, and almost stabilises. That's how it goes over and over again, and it needs to be taken into consideration, because it is basically what happens to every high-scoring album …
|
Posted By: rileydog22
Date Posted: October 18 2007 at 23:47
The ranking algorithm, as MikeEnRegalia will confirm if he checks this part of the site, incorporates both the rating and the number of ratings (I believe the formula is log(ratings)*rating, but I'm not sure). Things like the P.A. top 100 and a band's "Key Albums" are thus somewhat stable despite the volatile activity of under-reviewed albums.
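For illustration, here is a minimal sketch of a weighting along these lines, taking the guessed log(ratings)*rating form at face value; the actual site formula is not confirmed anywhere in this thread:

import math

def ranking_score(avg_rating: float, num_ratings: int) -> float:
    # Hypothetical ranking score: the average rating weighted by the
    # logarithm of the vote count, per the guess above. Illustration only.
    if num_ratings < 1:
        return 0.0
    return math.log10(num_ratings) * avg_rating

# Under this guessed form, more votes lift the score even when the
# average is slightly lower:
print(round(ranking_score(4.46, 54), 2))   # 7.73
print(round(ranking_score(4.33, 237), 2))  # 10.28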
-------------

|
Posted By: Sofagrisen
Date Posted: October 20 2007 at 20:54
rileydog22 wrote:
The ranking algorithm, as MikeEnRegalia will confirm if he checks this part of the site, incorporates both the rating and the number of ratings (I believe the formula is log(ratings)*rating, but I'm not sure). Things like the P.A. top 100 and a band's "Key Albums" are thus somewhat stable despite the volatile activity of under-reviewed albums.
|
I know, but the formula still undervalues albums with many votes. When people read this, they tend to think what I mean is that popularity should count more. But it is not about popularity at all; it is about how albums tend to drop in rating as they get more votes. But yeah, I get it, you people won't understand what I am talking about until I give you some "hard evidence", some numbers, and even then I am not sure you are going to get it ...
|
Posted By: rileydog22
Date Posted: October 20 2007 at 20:56
Yes, and as the rating element of the album drops, the log(ratings) element increases. That's the idea of the algorithm.
-------------

|
Posted By: Sofagrisen
Date Posted: October 20 2007 at 20:59
rileydog22 wrote:
Yes, and as the rating element of the album drops, the log(ratings) element increases. That's the idea of the algorithm.
|
And I totally get it, I know it works that way, but the effect is not large enough; it should be bigger, and it used to be bigger. This is all about finding the right balance, and you haven't found it ...
I can find countless examples in the top list. Look at the A.C.T. album Last Epic compared to Tool's Lateralus. The first album has 54 votes and a rating of 4,46. The second has 237 votes and a rating of 4,33. Which album is higher in the top 100 list?
Last Epic, of course, even though there is no chance in hell the album would have had 4,33 or more in rating at 237 votes.
Lateralus is punished because it has so many votes.
|
Posted By: Sofagrisen
Date Posted: December 13 2007 at 08:04
I have collected some more data now. Does anyone see the pattern?
album title
votes: rating
Fear of a Blank Planet
215: 4,25
221: 4,22
226: 4,23
231: 4,23
238: 4,21
240: 4,20
250: 4,16
259: 4,15
Remember That Night: Live At The Royal Albert Hall
14: 4,74
17: 4,60
18: 4,55
24: 4,33
27: 4,28
Colors
18: 4,50
21: 4,33
26: 4,27
29: 4,34
32: 4,30
35: 4,28
Nil Recurring
68: 4,39
71: 4,23
78: 4,16
82: 4,00
84: 3,91
87: 3,93
91: 3,93
Night
48: 4,35
51: 4,26
53: 4,24
56: 4,21
60: 4,20
64: 4,18
A Time of Day
46: 4,23
48: 4,21
52: 4,19
54: 4,18
Shadows of the Sun
10: 4,68
13: 4,59
17: 4,46
21: 4,34
25: 3,84
27: 3,71
30: 3,97
Frames
9: 4,78
16: 4,52
19: 4,44
22: 4,27
24: 4,33
28: 4,30
Voice in the Light
3: 4,75
30: 4,16
36: 4,22
40: 4,32
43: 4,27
46: 4,22
2nd Hands
8: 4,59
26: 4,12
29: 4,13
32: 4,05
34: 4,06
37: 4,01
In Rainbows
7: 4,09
15: 3,88
26: 3,97
34: 4,01
39: 3,94
42: 3,84
46: 3,92
49: 3,94
51: 3,94
55: 3,93
59: 3,86
|
Posted By: clarke2001
Date Posted: December 13 2007 at 08:38
The descending pattern is obvious, though somewhat noisy because of the different ratings people give an album. Now, if you were to compute the "usual" average (a plain arithmetic mean, insensitive to the number of ratings) and compare those figures to the ones you observed, that would be nice.
(I have overlooked the different weights given to reviewer and non-reviewer ratings.)
------------- https://japanskipremijeri.bandcamp.com/album/perkusije-gospodine - Percussion, sir!
|
Posted By: Easy Livin
Date Posted: December 13 2007 at 10:17
Indeed, it would be useful if you could show in the list what the star ratings were each time for the additional review.
|
Posted By: Sofagrisen
Date Posted: December 13 2007 at 10:59
Easy Livin wrote:
Indeed, it would be useful if you could show in the list what the star ratings were each time for the additional review.
|
Why would that be interesting? The point is that the ratings of highly rated albums almost exclusively decrease. And my point then is, when top lists are made, this should be very heavily taken into account. I think the whole algorithm should be based around it, to be honest.
|
Posted By: MikeEnRegalia
Date Posted: December 13 2007 at 11:25
^ how could this phenomenon be taken into account? It's only natural that as an album "ages", reality starts kicking in. Another possibility is that once an album reaches a critical number of reviews which puts it in the top 100 or at the spot of "album of the week", bashers begin submitting 1 star ratings. Regardless of the reasons, how can this be taken into account ... are you suggesting that we should artificially tune album averages depending on how old they are in relation to the release date?
------------- https://awesomeprog.com/release-polls/pa - Release Polls
|
Posted By: Seyo
Date Posted: December 13 2007 at 12:24
Yes, the pattern of declining average ratings, as shown in those examples, is clear. But is it something we should be worried about?
|
Posted By: Logan
Date Posted: December 13 2007 at 12:51
I understand what's being said, but I don't really understand the problem with this. Initial reviewers commonly rate too high (especially when an album is new), and others play games with the ratings to bump one album over another. Perhaps early ratings could be weighted down... No, I don't believe in that.
Anyway, I'd prefer a straight average rating while the number of ratings affects the placement on the list (and ratings without reviews could not count for much in terms of placement).
To go completely on a tangent:
One big problem, though, is that there is no standard rating methodology being used by enough people (I think there should be more detailed criteria for doing a rating; I know that reviews can justify one's rating, but...). Perhaps a table with various criteria, each given its own rating, which are then averaged: originality, creativity, technicality, progressiveness, how it rates versus other albums by the artist (if there are any and the person has heard others), how it rates against other artists in the category, how it rates generally compared to the rest of Prog, a subjective rating, an "objective" rating, or whatever... and then an average of those. Whatever criteria one chooses (these are just ideas, with some redundancy). One can justify the rating in a review, but the rating process itself should be meaningful.
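As a rough sketch of that table idea, assuming every criterion is scored on the same 1-5 scale; the criterion names are just the examples from the post, and nothing like this exists on the site:

def composite_rating(criteria):
    # Average the per-criterion scores into one rating. Purely
    # illustrative; the criteria and equal weighting are assumptions.
    return sum(criteria.values()) / len(criteria)

print(composite_rating({
    "originality": 4.0,
    "creativity": 4.5,
    "technicality": 3.5,
    "progressiveness": 4.0,
    "versus_rest_of_prog": 4.5,
    "subjective": 5.0,
}))  # 4.25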
------------- Watching while most appreciating a sunset in the moment need not diminish all the glorious sunsets I have observed before. It can be much like that with music for me.
|
Posted By: Easy Livin
Date Posted: December 13 2007 at 15:34
Sofagrisen wrote:
Easy Livin wrote:
Indeed, it would be useful if you could show in the list what the star ratings were each time for the additional review.
|
Why would that be interesting? The point is that the ratings of highly rated albums almost exclusively decrease. And my point then is, when top lists are made, this should be very heavily taken into account. I think the whole algorithm should be based around it, to be honest. |
It would be of interest because, as Mike suggests, it may actually be a wave of deliberately low ratings which is reducing the average. Showing what the ratings were would help you to prove otherwise (if that is the case).
|
Posted By: Sofagrisen
Date Posted: December 14 2007 at 05:25
The thing is, you have to take this into account to realistically compare an album with 50 votes to an album with 500. A 4,40 at 500 votes is better than a 4,50 at 50, for example. With 450 extra votes, the album that now has only 50 is expected to drop to maybe 4,00. The differences are pretty huge. Personally I would like to use some kind of regression analysis to determine the expected score development as a function of the number of votes, and then the expected score at maybe 100 votes, given by that function, should be what the top lists are based on. But I am not an actual mathematician. Hopefully there are easier ways to do it.
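For illustration, a minimal sketch of that regression idea in Python, fitting an assumed decay model S(n) = a + b*c^n to the Fear of a Blank Planet figures from the data post above and reading off the predicted score at a common vote count; the model form, the starting guesses and the choice of 100 votes are all assumptions:

import numpy as np
from scipy.optimize import curve_fit

def decay(n, a, b, c):
    # Assumed model: the rating approaches a floor 'a' as votes accumulate.
    return a + b * np.power(c, n)

# Fear of a Blank Planet (votes, rating) from the data post above:
votes  = np.array([215, 221, 226, 231, 238, 240, 250, 259])
rating = np.array([4.25, 4.22, 4.23, 4.23, 4.21, 4.20, 4.16, 4.15])

(a, b, c), _ = curve_fit(decay, votes, rating, p0=(4.0, 1.0, 0.995), maxfev=20000)
# Extrapolating far outside the observed vote range is shaky; sketch only.
print("predicted score at 100 votes:", round(decay(100, a, b, c), 2))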
|
Posted By: MikeEnRegalia
Date Posted: December 14 2007 at 05:51
^ suppose we were to "correct" the score of newly released albums based on our expectation of how the score is affected by the number of votes ... wouldn't users then adapt to the corrected score? Usually people are motivated to submit their rating because they see the current average, don't agree with it and want to change it. So by "correcting" the average you always influence the voters ...
------------- https://awesomeprog.com/release-polls/pa - Release Polls
|
Posted By: Sofagrisen
Date Posted: December 14 2007 at 06:12
MikeEnRegalia wrote:
^ suppose we were to "correct" the score of newly released albums based on our expectation of how the score is affected by the number of votes ... wouldn't users then adapt to the corrected score? Usually people are motivated to submit their rating because they see the current average, don't agree with it and want to change it. So by "correcting" the average you always influence the voters ...
|
I wouldn't correct the score you see, but I would like the top lists to be based on, for example, the expected score at 100 votes, given by a function. In a sense it is like that now too: the score that is shown is simply the score at this moment ...
|
Posted By: Dean
Date Posted: December 14 2007 at 06:23
Logan wrote:
To go completely on a tangent:
One big problem, though, is that there is no standard rating methodology being used by enough people (I think there should be more detailed criteria for doing a rating; I know that reviews can justify one's rating, but...). Perhaps a table with various criteria, each given its own rating, which are then averaged: originality, creativity, technicality, progressiveness, how it rates versus other albums by the artist (if there are any and the person has heard others), how it rates against other artists in the category, how it rates generally compared to the rest of Prog, a subjective rating, an "objective" rating, or whatever... and then an average of those. Whatever criteria one chooses (these are just ideas, with some redundancy). One can justify the rating in a review, but the rating process itself should be meaningful.
|
I agree with what you're saying, but I don't think it will make a huge amount of difference - people will still use their own criteria when rating an album. How many who give a 5* rating really mean "Essential: a masterpiece of progressive music" rather than "I love this band/album to bits"? The only thing you can really compare a rating to is other ratings by the same reviewer.
Perhaps what is needed is a two-level rating system like the one on http://www.planetmellotron.com/reviews.htm - Planet Mellotron - one for the Goodness of the album and one for its Proginess.
------------- What?
|
Posted By: MikeEnRegalia
Date Posted: December 14 2007 at 06:31
^ not only on Planet Mellotron, but also at Ratingfreak.com. I'll also add one tag for "relevance" shortly:
1. Rating (0-100%)
2. Progressiveness (0-5)
3. Relevance (0-5)
The relevance can be used to say "this is an important album" (level 3), "this is a milestone" (level 5) or "this isn't really an important album" (level 1). High relevance + high rating would indicate a masterpiece, or an "essential album for every collection". But you could still assign low ratings together with high relevance to indicate that you simply don't like an album but recognize its importance, or you could assign very high rating to a favorite album of yours but give it a low relevance to indicate that it's really just for fans, but nevertheless you like it very much. And if you add the progressiveness rating you can also distinguish "prog masterpieces" from "non-prog masterpieces", or list both as "masterpieces regardless of prog status".
In short: Rating Freaks will love this, others probably won't.
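A rough sketch of that three-tag rating as a plain data structure; the field names and ranges follow the post, while the class, the helper rule and its thresholds are invented for illustration:

from dataclasses import dataclass

@dataclass
class TaggedRating:
    rating: float          # 0-100 (%)
    progressiveness: int   # 0-5
    relevance: int         # 0-5

    def essential(self) -> bool:
        # High relevance + high rating = "essential album", per the post.
        # The threshold values are assumptions.
        return self.rating >= 90 and self.relevance >= 4

# "I don't like it much, but I recognise its importance":
print(TaggedRating(rating=40.0, progressiveness=4, relevance=5).essential())  # False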
------------- https://awesomeprog.com/release-polls/pa - Release Polls
|
Posted By: Sofagrisen
Date Posted: December 14 2007 at 07:08
Anyhow, I was playing around, and found out you can use geometric series. The formula S(n) = 4,9 - 0,005*(0,995^n - 1)/(0,995 - 1) fits pretty well with the rating development of Fear of a Blank Planet, for example, where S is the score and n is the number of votes. For other albums you can find the best-fitting k (here 0,995) and a1 (here 0,005). The 4,9 is supposed to indicate the level at which the album was expected to begin scoring.
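A quick check of that formula against the Fear of a Blank Planet figures from the data post, with decimal commas written as points:

def S(n, start=4.9, a1=0.005, k=0.995):
    # Geometric-series model from the post: the summed term approaches
    # a1/(1-k) = 1.0, so the score decays from 'start' towards start - 1.
    return start - a1 * (k**n - 1) / (k - 1)

for n, observed in [(215, 4.25), (240, 4.20), (259, 4.15)]:
    print(n, "votes: model", round(S(n), 2), "observed", observed)
# 215 votes: model 4.24; 240 votes: model 4.20; 259 votes: model 4.17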
|
Posted By: MikeEnRegalia
Date Posted: December 14 2007 at 08:21
^ I think if you explore this further then you'll discover that the development varies a lot for different genres ... for example Prog Metal albums tend to get bashed more heavily than Symphonic Prog albums.
------------- https://awesomeprog.com/release-polls/pa - Release Polls
|
Posted By: Sofagrisen
Date Posted: December 14 2007 at 09:24
MikeEnRegalia wrote:
^ I think if you explore this further then you'll discover that the development varies a lot for different genres ... for example Prog Metal albums tend to get bashed more heavily than Symphonic Prog albums.
|
Yeah, I am sure of it, but theoretically you could find the function that fits each album best. I don't know if there is a relatively easy way, or if it will be very demanding. I am not a mathematician, but that being said, I think geometric series could be applied to basically any album. That was the main idea, not the details of the formula. And just because all of this isn't figured out and it is difficult doesn't mean we should just ignore it and keep what we have. What we know is that ratings decline. When we make top lists, we compare albums with different numbers of voters. But because ratings decline, it's not enough to look exclusively at the rating; you have to take the number of votes into account, or else it would be unfair to the albums with more votes. And I know it is already taken into account on this site, but being taken into account is not the same as being taken into account well enough.
Part of the problem is that the whole issue is considered from a bad perspective. It is often considered from the perspective that one can be more certain about the "true level" of an album with many votes, while an album with fewer votes is less stable, and you can't really know if it will go up or down; it has yet to prove itself. To be honest, this is ridiculous, because the rating of the album will decline. There really is no uncertainty about that. Then there is yet another perspective, that the number of votes is an indicator of popularity, and that this popularity should be rewarded. The last perspective gives formulas that counteract the effect of declining ratings, but not nearly as precisely as I would like. The focus is wrong.
|
Posted By: Sofagrisen
Date Posted: December 14 2007 at 19:56
I just have to mention this: now both Frames by Oceansize and Doomsday Afternoon by Phideaux are ahead of Fear of a Blank Planet on the 2007 top list. And it's really quite provoking, because neither of these albums would even be close to having the score of FoaBP at equally many votes. Rating-wise, FoaBP is just one level above, and yet this is not reflected by the top list. Is it strange this annoys the crap out of me? Honestly it's just maddening to see; there is obviously something very wrong with the rating algorithm. How can anyone defend it?
|
Posted By: MikeEnRegalia
Date Posted: December 14 2007 at 20:34
Nobody should take top N lists *that* seriously ... especially not the topmost positions - it doesn't really matter whether an album is #1, #2, #3 etc. Ratings will always fluctuate ... and like I said above: if you try to compensate for these fluctuations, people will simply adapt their voting behaviour.
BTW: I don't think there's anything wrong with the algorithm ... people can see that FoaBP has many more votes than the other albums and decide for themselves what to make of it. The simple truth is that there is *no* way to combine two separate/independent numbers (# of ratings, avg rating) into one number/"score".
------------- https://awesomeprog.com/release-polls/pa - Release Polls
|
Posted By: Easy Livin
Date Posted: December 15 2007 at 16:19
After discussion with the thread starter, this thread has enjoyed a Groundhog Day. In other words, it has been wound back a bit to allow the debate to continue.
|
Posted By: Ghandi 2
Date Posted: December 16 2007 at 04:28
MikeEnRegalia wrote:
^ how could this phenomenon be taken into account? It's only natural that as an album "ages", reality starts kicking in. Another possibility is that once an album reaches a critical number of reviews which puts it in the top 100 or at the spot of "album of the week", bashers begin submitting 1 star ratings. Regardless of the reasons, how can this be taken into account ... are you suggesting that we should artificially tune album averages depending on how old they are in relation to the release date? |
I don't think it's 1 star ratings (unless it could be statistically demonstrated). It's just that the super fans are the ones who are going to get and review the album first, and since they're die-hards they're going to give it a higher rating. Then the people who are less interested come along and rate it lower/more fairly. I don't think it's anything we need to correct; it's the natural way things go. The early adopters are always going to be more enthusiastic, and we can't punish them for being first (or for being fanboys).
The top lists don't matter at all. I'll never see a Henry Cow album on the top 100, but it doesn't bother me.
------------- "Never forget that the human race with technology is like an alcoholic with a barrel of wine."
Sleepytime Gorilla Museum: Because in their hearts, everyone secretly loves the Unabomber.
|
Posted By: Sofagrisen
Date Posted: December 16 2007 at 05:37
Ghandi 2 wrote:
MikeEnRegalia wrote:
^ how could this phenomenon be taken into account? It's only natural that as an album "ages", reality starts kicking in. Another possibility is that once an album reaches a critical number of reviews which puts it in the top 100 or at the spot of "album of the week", bashers begin submitting 1 star ratings. Regardless of the reasons, how can this be taken into account ... are you suggesting that we should artificially tune album averages depending on how old they are in relation to the release date? |
I don't think it's 1 star ratings (unless it could be statistically demonstrated). It's just that the super fans are the ones who are going to get and review the album first, and since they're die-hards they're going to give it a higher rating. Then the people who are less interested come along and rate it lower/more fairly. I don't think it's anything we need to correct; it's the natural way things go. The early adopters are always going to be more enthusiastic, and we can't punish them for being first (or for being fanboys).
The top lists don't matter at all. I'll never see a Henry Cow album on the top 100, but it doesn't bother me. |
Does it really matter why ratings are declining? Although I agree with your theory, which has been my theory all along, it does call for interference, contrary to what you write. Just because something is natural doesn't mean you should let it go on exactly as it always has. You make a strong argument for why the ratings of albums with different numbers of votes are not directly comparable, and the approach I argue for is that we should convert ratings so that we can directly compare them. Comparing albums in a top list well is kind of important, or else the list won't reflect the opinion of the people. Whether these top lists are important in themselves is not really relevant; it's not at all what we are discussing in this thread. But I guess people have the need to point out how they are above such things.
|
Posted By: Dean
Date Posted: December 16 2007 at 05:45
Sofagrisen wrote:
I have collected some more data now. Does anyone see the pattern? ::snip::
|
All data will do that; even randomly generated data will demonstrate a similar trend, as the calculated average score tends towards the predicted average as the population increases. All this shows is that the more ratings an album gets, the more accurate the average score is.
To gauge the effect of fanboyism and album bashing you need to look at the distribution of star ratings. Comparing the top and bottom albums from the 2007 Top 100 shows, to my eye, no irregularities in the distribution of ratings for either album, and no evidence of a surfeit of 1* and 5* ratings at all. What it shows is that many of the people who voted for Fear of a Blank Planet liked it, and those who voted for Systematic Chaos were more divided in their opinions.
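To illustrate the convergence point, a quick simulation sketch; the ratings are drawn uniformly from 1-5 stars, so this shows only the settling of a running average, not the downward drift real voters produce:

import random

random.seed(1)  # fixed seed so the run is reproducible
total = 0.0
for n in range(1, 301):
    total += random.randint(1, 5)   # a random 1-5 star vote
    if n in (5, 10, 50, 100, 300):
        print(f"{n:3d} votes: running average {total / n:.2f}")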
------------- What?
|
Posted By: Sofagrisen
Date Posted: December 16 2007 at 05:58
darqdean wrote:
Sofagrisen wrote:
I have collected some more data now. Does anyone see the pattern? ::snip:: |
All data will do that; even randomly generated data will demonstrate a similar trend, as the calculated average score tends towards the predicted average as the population increases. All this shows is that the more ratings an album gets, the more accurate the average score is. |
I mentioned this perspective earlier, and I do agree the rating over time will tend towards the predicted average. That being said, what you see is that how this happens is not random. It's not 50-50 between the ratings going up and down. The ratings are basically always going down, but at a decreasing speed. Sure, the data does not conclusively prove anything, but it is all the data I have, and none of these albums starts out low and then climbs, and I really think that is no coincidence at all. I would like to have more data, and I am pretty sure Prog Archives themselves could generate it, but that was never a possibility for me. That being said, I do not base my opinions solely on this data. I have observed this effect myself over and over again in many albums I never collected any data for. The only reason I collected this data was to make a stronger argument for my case in a thread like this, because obviously, people won't believe my observations, they have to see it for themselves.
|
Posted By: Dean
Date Posted: December 16 2007 at 06:20
Sofagrisen wrote:
darqdean wrote:
Sofagrisen wrote:
I have collected some more data now. Does anyone see the pattern? ::snip:: | All data will do that; even randomly generated data will demonstrate a similar trend, as the calculated average score tends towards the predicted average as the population increases. All this shows is that the more ratings an album gets, the more accurate the average score is. |
I mentioned this perspective earlier, and I do agree the rating over time will tend towards the predicted average. That being said, what you see is that how this happens is not random. It's not 50-50 between the ratings going up and down. The ratings are basically always going down, but at a decreasing speed. Sure, the data does not conclusively prove anything, but it is all the data I have, and none of these albums starts out low and then climbs, and I really think that is no coincidence at all. I would like to have more data, and I am pretty sure Prog Archives themselves could generate it, but that's not an option for me. That being said, I do not base my opinions solely on this data. I have observed this effect myself over and over again in many albums I never collected any data for. The only reason I collected this data was to make a stronger argument for my case in a thread like this, because obviously, people won't believe my observations, they have to see it for themselves. |
Voting data is not random and therefore has a bias or skew applied to it - as people have said, that bias is positive because of the way people vote for new albums. However, this bias is the sum of many different biases that are applied at different times for different reasons, and it changes with time. It is impossible to predict what those biases are and when they occur, so it will be impossible to create a general model that fits all data. You have demonstrated that you can create a best-fit formula on historical data, but that does not predict future data, since future votes will be based on the anti-bias of the formula.
Any formula would have to take into account the specific distribution of ratings (i.e. the standard deviation) for each individual album, and you can only do that with a statistically viable population. Unfortunately it is not practical to apply a "correction" to low populations because there is insufficient statistical evidence (i.e. confidence) in that small amount of data to do it. In statistical terms, even a population of 100 is too small to give accurate results, and most albums have fewer than 30 ratings.
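To illustrate the point about small populations, a sketch of a rough 95% confidence interval for an album's mean rating, using the normal approximation; the 30 star ratings below are invented:

import statistics

# 30 invented star ratings for one hypothetical album
ratings = [5, 4, 5, 3, 4, 5, 2, 4, 5, 4, 3, 5, 4, 4, 5, 1, 4, 5, 4, 3,
           5, 4, 4, 2, 5, 4, 3, 5, 4, 4]
n = len(ratings)
mean = statistics.mean(ratings)
sem = statistics.stdev(ratings) / n ** 0.5   # standard error of the mean
# Rough 95% interval under the normal approximation; with only 30 votes
# the uncertainty is large compared to the score gaps argued over above.
print(f"mean {mean:.2f} +/- {1.96 * sem:.2f}")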
------------- What?
|
Posted By: Sofagrisen
Date Posted: December 16 2007 at 06:36
darqdean wrote:
You have demonstrated that you can create a best-fit formula on historical data, but that does not predict future data, since future votes will be based on the anti-bias of the formula.
|
But I firmly believe future scores could be predicted, for example through geometric series. I believe the formula I came up with for the score development of FoaBP could also predict its future scores. I don't know if it is actually possible to use geometric series in a rating algorithm, since I imagine finding the best-fit geometric series must lead to a very complicated and extensive algorithm. Anyhow, perhaps there are other ways of doing this, and again, even the old rating algorithm was better than the new one. One way of dealing with the whole issue is to just let the number of votes have a larger positive effect in the rating algorithm, which is basically what the old rating algorithm was doing. I don't think it is a perfect solution, but the way it is now, the rating algorithm is CLEARLY biased towards albums with few votes.
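For what it's worth, finding a best-fit geometric series need not be complicated; a crude grid search over a1 and k, holding the starting level at 4,9 as in the FoaBP formula and using the Nil Recurring figures from the data post, is only a few lines. Everything below is a sketch under those assumptions:

def S(n, start, a1, k):
    # Same geometric-series model as in the earlier post.
    return start - a1 * (k**n - 1) / (k - 1)

votes  = [68, 71, 78, 82, 84, 87, 91]               # Nil Recurring, from the data post
rating = [4.39, 4.23, 4.16, 4.00, 3.91, 3.93, 3.93]

best_err, best = float("inf"), None
for a1_milli in range(1, 51):          # a1 from 0.001 to 0.050
    for k_milli in range(950, 1000):   # k from 0.950 to 0.999
        a1, k = a1_milli / 1000, k_milli / 1000
        err = sum((S(n, 4.9, a1, k) - r) ** 2 for n, r in zip(votes, rating))
        if err < best_err:
            best_err, best = err, (a1, k)
print("best-fit (a1, k):", best, "squared error:", round(best_err, 4))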
|
Posted By: Ghandi 2
Date Posted: December 16 2007 at 06:47
Why does it require interference? People are being honest. Now, if a specific person is trying to abuse the system, then we can usually tell, but I don't understand why we would or how we could accurately "combat" it.
The algorithm has changed quite a bit since it was first introduced. It used to weight the number of votes more heavily, and nothing could unseat CTTE, because as the most famous prog album ever it had way more votes than anything else. Thick as a Brick had the highest average of the top ten by far, but it was stuck at 5 because it didn't have enough votes. Now it's number one, and it has the highest score in the top ten except for Quadrephonia at 7, which only has 50 votes... You may have a point.
But somebody's going to be unhappy no matter what the algorithm is.
------------- "Never forget that the human race with technology is like an alcoholic with a barrel of wine."
Sleepytime Gorilla Museum: Because in their hearts, everyone secretly loves the Unabomber.
|
|