
PJ Kwong: What is wrong is the failure of some to learn how the sport is scored

Discussion in 'The Trash Can' started by Maofan7, May 4, 2013.

Is PJ Kwong right?

Poll closed Jun 4, 2013.
  1. Yes

    78 vote(s)
  2. No

    60 vote(s)
  3. Don't Know

    0 vote(s)
  1. gkelly

    gkelly Well-Known Member

    Here's what the ISU has to say about what is 1, what is 2, etc.:

    I figure 5 = average is in the context of international-level competition. I.e., 5 is clearly senior level but nothing special beyond that. For younger skaters or those at lower skill levels, 5.00 could be an excellent score.

    This document also relates the numbers to the percentage of the program time that the skater demonstrates each of the criteria.

    I don't find it as useful to think of in those terms. One skater may be pretty good at performance or at interpretation for 100% of the program with no real breaks and no real highlights. Another might be excellent for 80% of the program and only average for the other 20%. How do you translate that difference into actual scores?

    In theory, no. In practice, if skater A is a stronger all-around skater than B, then even when A has a bad day and B has a good day, A's score will and should still be higher. That's the system working correctly.

    If there is not much difference between them to begin with, then when A has a bad day and B a good one, B should score higher, at least in the areas where B does well and A does poorly.

    Reputation effects might come in and prevent that from happening, which would be the system not working correctly.

    The trick is to recognize the difference between occasions when A is still better and occasions where B deserved to score higher. How much do A's struggles on this occasion cancel out superior basic skills, superior program construction, etc.? Often it's a judgment call and different judges, different fans, would disagree with good reasons on both sides. Under 6.0 in those situations, we'd probably see mixed ordinals.
  2. Aussie Willy

    Aussie Willy Hates both vegemite and peanut butter

    I can do an upright spin in a public session and I get wows from those around me. Those who attend public sessions are generally impressed with anyone who can actually stand up.

    I want to see the video of your "iron lotus". :)
  3. Susan M

    Susan M Well-Known Member

    IJS is pretty much only "more popular" among some fans from countries whose skaters have enjoyed more success under it than they did previously, and among some Canadians, who keep getting blamed for foisting it onto the world and so have to pretend it's a good thing. (In other words, Canadians aside, it is generally defended by people who haven't been watching skating for very long, and seen as fairly flawed by folks who remember what skating used to look like, and how beautiful it could be, back in the 70s, 80s and 90s.)

    But as for which is simpler, I think you have it backwards. The 6.0 system was really pretty simple, especially during the period of majority ordinals. People could understand looking at two numbers and seeing who got more points, and they could understand the idea of winning 6 judges to 3. (The vast majority of results did end up with pretty clear panel majorities.) It did get a little more obscure when they went to the OBO comparisons, but still most results came down to the winner getting the most points from more judges than anyone else.

    The current system is way more complicated, and the proof is very simple: there is no way they ever would have entertained this system before computers. You have element values, then you have plus or minus 1-3 GOE, except that a +3 GOE isn't in fact always worth 3 points. Then sometimes the element is factored to reduce it by some fraction or to increase it by some fraction. Then you get to add up all that mess with 45 other numbers (some of which are also multiplied by two and 10 of which don't count anyway). How can you seriously argue this is simpler than 9 judges with two marks each? I think what you are really saying is that it is mathematically linear. That's not the same thing as simpler.

    I suppose for fans whose skating comprehension is satisfied by looking only at the skaters' total points, then it is pretty simple to see that one number is bigger than the other, but that isn't what audiences find confusing. What makes the system seem complicated is exactly that ridiculously over-thought numerical minutiae producing results that bear little relation to what audiences think they just saw on the ice. Explaining the minutiae is still not going to make people think the right skater won when that doesn't fit with their own subjective perception.
    Last edited: May 7, 2013
  4. Aussie Willy

    Aussie Willy Hates both vegemite and peanut butter

    The marks given under 6.0 had nothing to do with a skater getting more points. It was a placement and ranking system. It had absolutely no accountability or transparency. And who knows how judges came up with their marks.

    Give me IJS over 6.0 any day. And I don't come from a successful skating country, nor am I Canadian. In fact, Australia only had one competitor at Worlds this year because of the system. And I have been skating for 20 years and been a fan of the sport for longer than that.
    Last edited: May 7, 2013
    algonquin and (deleted member) like this.
  5. VIETgrlTerifa

    VIETgrlTerifa Well-Known Member

    One thing I miss about 6.0 is that the scores were fun to see. I used to imagine the nine judges were like the Justices of the Supreme Court and they were determining which way the competition should go. You'd see a unanimous decision, a majority-minority split, a plurality with splits with more than two skaters with a result coming out of that, etc.

    I can see how IJS will be fun for accounting majors and such. It is fun to try to calculate scores and add them up to see how skaters exploit the system to their benefit or how their failure to do so costs them, with the actual execution sometimes not mattering so much. It's an interesting observation to see how humans are trying to fit into a rigid, mechanical system that doesn't give much leeway outside GOE. Of course there's PCS, which many think is actually being used as a pseudo-ranking system by the judges. If you think about it, IJS is a more interesting study because it has a lot of elements one can pick apart and critique or praise.
  6. Andrushka

    Andrushka Well-Known Member

    IMO the problem is the judging system and those who manipulate it to give medals to whomever they favor, even when they don't deserve them.
  7. spikydurian

    spikydurian Well-Known Member

    When we design a policy, we think about the reasons why we need it in the first place. I think professional skating competitions will serve some casual fans better, since some fans prefer to look at skating as an art form rather than a technical form. Any chance of bringing professional skating competitions back to serve the needs of some fans?

    If it were that easy, there wouldn't be bitching and whinging on skating boards. ;)

    Same here. But then it looks like not all viewers are the same. I just think that when one is too emotionally attached, whether as a fan or for other reasons, these emotions inevitably mar one's judgement or conclusions about what one sees.
  8. lala

    lala Well-Known Member

    OK. Look at the figure skating's popularity...
  9. rayhaneh

    rayhaneh Well-Known Member

    This ^

    This isn't about what's best for my countrymen/women. Being Swiss, there is no one from my country competing at the top at the moment, and even if there were, I've never really cared about nationalities anyway, only about the skating produced. And I don't feel that some of my favorite skaters being fleeced in PCS, on occasion or more systematically for some, is the fault of the current system: this was also true under 6.0, in fact maybe even more so, and at least IJS gives you a better, more detailed insight into the marks, increasing the transparency in that respect.

    I do believe however that there's this aspect of the 6.0 system, the thrill of seeing athletes get the perfect 6.0 mark, that appealed to audiences and that we have now lost. And let's be honest, the larger audience doesn't care about knowing the details and the reasoning behind the marks and won't check the protocols on the isuresults website. But as a long-time fan of the sport, and someone who cares about its development, I think overall the IJS is a positive development, although it certainly needs tweaking and some fundamental work with judges, as they seem, at least at high-level competitions, to get their CH and IN mixed up with their SS and their TR, or to have no clue what performance/execution is really about.
  10. os168

    os168 Active Member

    What the ISU should really do is adopt the motion-capture SFX technology used for Gollum in The Hobbit and the Na'vi in Avatar, and keep the COP as it is.

    They should have a screen between the judges and the rink, and a projector playing in front of them with multiple camera views calibrated to bring the most accurate representation of what went on on the ice to the skater's avatar.

    Each skater is mapped beforehand to a random character so the judges don't know who's who. As the skater skates, their avatar is replayed on the screen in front of the live judging panel, complete with mistakes, falls, edge issues, body movement, musical interpretation and all.

    The audience will react to performances as they normally would. The judges wouldn't have the luxury of seeing whether it is Patrick Chan or Denis Ten skating, whether the skater is from Canada or Kazakhstan, reputation, history and all.

    Then they can judge with a totally unbiased view, purely on the skating. The audience then judges the judges like on a blind-date program: thumbs up or thumbs down, complete with canned laughter, boo, jeer and cheer buttons, and a bucket of mysterious substance over a judge's head in the case of jeers. Oh yeah, I'd pay a month's salary to see that, even if I have to travel to Latvia. I really think this would attract more viewers and grow the sport.
    Last edited: May 7, 2013
    alilou and (deleted member) like this.
  11. dorianhotel

    dorianhotel Banned Member

    The judges would score many of Chan's performances up to 50 points lower if the same skate was done by someone they thought was a junior skater from Australia or Hungary. Even if they thought it was say Denis Ten it would be 30 points lower in many cases.
  12. gkelly

    gkelly Well-Known Member

    I doubt that. If they can see the speed and edge depth and complexity of the steps, they'll reward those no matter who they think is skating.

    And if the technical panels can see which jumps were executed and rotated, which turns and steps and positions were achieved, then the technical content would get full credit, and that accounts for about half the total score.

    In reality, there is undoubtedly some reputation effect, but I can't imagine it would amount to more than 10 points even in the most extreme cases.

    Now, if you intend this avatar technology to hide the quality of the skating as well as the identities, and only give a vague representation of what the skater did, then there would be no possible way to judge accurately no matter who the judges thought they were watching. In which case there would be no point in holding competitions to begin with.
  13. os168

    os168 Active Member

    Not really. The technology must be used to accurately depict the exact quality of the skating AND hide the identity of the skater, e.g. cameras zooming in on the actual skating details, similar to what the tech panel sees. The advantage of getting everything digitized is that the movements are entirely trackable, measurable and traceable, and make for useful data analysis of things like distance coverage, speed, height, power, degree of rotation, under-rotation etc.

    The point is, the judges should have no preconceived notion of who the skater is and what they are capable of, but focus entirely on what they actually did on the day. The panel should still be able to see the quality and detail with which the skater performed and make judgements purely on that. The technology needs to be sophisticated enough that the emotional aspect of the performance can be translated onto the avatar while still hiding the skater's identity. Such technology might not exist yet, but it sounds way better than putting a paper bag with two holes cut out over the skater's head. (that is perhaps for the B events :D)
  14. gkelly

    gkelly Well-Known Member

    This would be interesting as a test. Digitize several performances by skaters at different general skill levels but with the same jump content, and see whether the scores for the higher level/more successful skaters are consistently higher without the judges having a clue who is who.

    Another way to test the theory would be to look at JGP results, especially the first-time competitors, because most of those skaters have no international reputation yet -- the only real confounding factor is federation affiliation and any lobbying that may go with it.
  15. os168

    os168 Active Member


    This would certainly make an interesting academic study and would probably validate many of the problems with COP that many have already identified. However, it does not resolve the matter at hand, which is how to prevent biased judging from happening even when judges themselves are conscious of it and try their best not to.

    Bad decisions can easily happen due to things like peer pressure and preconceived knowledge, with little time allowed to think clearly, logically and carefully, and to process and make a good analysis. A lack of visual aids, data and facts does not help things. Add political and federation pressure, expectations, personal interests... and suddenly all sorts of variables are created on top of what the rule book allows. COP, being a tick-box system, will create tick-box results. A literal essay format like 6.0, with room for intellectual and emotional realization, would create more well-rounded views rather than merely recited facts and yes/no answers. How would the judging change if the scores were only entered AFTER every skater had skated, and the results then announced all at once? Or how about allowing a secondary stage where the judges can go back and correct a score if they feel they made a mistake? Any change in marking would be noted and openly declared. This might add a second round of drama to the sport, and give the judges room to update their marks.

    The biggest failing of the COP is that it treats the judges like mathematical robots with no emotions or feelings. It assumes they just do as they are told and are 100% accurate all the time, even though this is not humanly possible. It fails to take into account the emotional and psychological aspects of human judging, or how social, political, cultural and environmental effects can all affect decision-making in a pressurized environment. Let alone that things like artistry are really unquantifiable, and there should not be a ceiling on them. Or that somehow all PCS categories should be awarded on the same scale out of 10 (are all triples the same?). A more evolved model of COP would take all these things into account and find ways to deal with inconsistency in marking between different competitions, between judges and between skaters, by using measurable, quantifiable hard data in support. Carolina is fast, but how fast is she really compared to Yuna Kim in the first half, or compared to her own past performances? How about acceleration, speed, height, power, coverage, distance traveled etc.?

    I don't think this will ever be resolved unless the ISU really wants to tackle 'true' fairness. And actually I don't think the ISU wants the sport to be truly fair without at least being able to dictate its direction as it sees fit, since it has its own economic interests to take care of, even though the sport itself already seems rather broken economically these days. It sucks, but CPR is completely useless unless the patient wants to live.
  16. gkelly

    gkelly Well-Known Member

    My hypothesis, which we can't validate unless we actually perform the study and replicate it several times, is that

    1) such a study would validate that there are some systematic effects of reputation, politics, and skate order, which means that anyone who supports the objectivity of the IJS needs to qualify such claims with appropriate caveats
    but also
    2) the study would also validate that the majority of variance in scores between different skaters executing the same content is in fact attributable to real, visible differences in the skating that trained observers can see irrespective of the skaters' identity.

    If, say, the results showed that 90% of the scoring was attributable to factors inherent in the skating itself (signal) and 10% attributable to outside factors such as reputation and skate order (noise), would that satisfy critics who guess the noise percentage is higher because they haven't themselves trained to perceive as much of the signal as the officials?
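    The hypothetical 90/10 split above can be illustrated with a toy simulation. All the numbers here are invented purely for illustration: "signal" (skating quality) and "noise" (reputation, skate order) standard deviations are chosen so that roughly 90% of the total score variance comes from the skating itself.

    ```python
    # Toy simulation of a hypothetical 90/10 signal/noise split in scoring.
    # Skating quality ("signal") has standard deviation 9, outside factors
    # like reputation ("noise") have 3, so signal should account for about
    # 81 / 90 = 90% of total score variance. Illustrative numbers only.
    import random

    random.seed(1)

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    def simulate(n_skaters=1000, signal_sd=9.0, noise_sd=3.0):
        """Fraction of score variance attributable to the skating itself."""
        signal = [random.gauss(0, signal_sd) for _ in range(n_skaters)]
        noise = [random.gauss(0, noise_sd) for _ in range(n_skaters)]
        scores = [s + n for s, n in zip(signal, noise)]
        return variance(signal) / variance(scores)

    print(simulate())  # close to 0.9 with these parameters
    ```

    The actual study would of course have to estimate those two variances from real judging data rather than assume them.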

    As to whether that makes IJS better or worse than holistic ordinal judging, for comparative purposes we'd have to conduct similar experiments using traditional 6.0 scoring or some similar system, with and without judges knowing the skaters' reputations.

    But just anecdotally, and logically, it seems to me that although code-of-points style scoring is somewhat subject to effects of reputation and skate order, ordinal judging is much more so.

    Ordinal judging is, however, easier for lay observers to identify with because it doesn't require detailed technical knowledge.

    So deciding between the two approaches comes down to a decision between fairness for the athletes and pleasure for the spectators (and the subset of competitors who consider the technique merely a means to artistic ends rather than an end in itself).

    As I say, I think this is even more true of 6.0 than IJS. So to solve the problems in the existing system, it would be better to move forward to even more objective -- probably technology-dependent -- approaches.

    The appropriate technology might not exist at all yet, or not yet in affordable form. But it's fun to imagine what such a more evolved model might look like.

    How can the objective details of the performance best be quantified? And how could judges evaluate the various global, subjectively perceived aspects that make up most of the PCS criteria in ways that measure the various aspects appropriately to their importance (which may differ from one skater to the next)? How should those assessments be combined with the objective data to produce results? How can they best be communicated to the skaters and spectators?
  17. Aussie Willy

    Aussie Willy Hates both vegemite and peanut butter

    Sorry, but as a judge who uses the system, I cannot even put into words how I feel about this comment (and it is not positive). A judging system should provide a set of tools that helps judges apply the rules as written, and it should aim to provide an objective outcome. It should also aim to provide a consistent approach. 6.0 was meant to do the same, but it just had no transparency, so it gave the impression that it was based on judges' feelings and emotions. Is it such a bad thing that a system takes out emotion and tries to provide more objective criteria? That is actually why the system works so much better than what came before.

    Try judging a field of 30 young skaters at a Preliminary level. These skaters are all about the same technical level; some will be worse and some better. As you watch skater after skater, you need something that is going to help you get to a result for that competition. I am not sure how judges managed to judge that many skaters under 6.0, because I became a judge as IJS was being introduced, but as we still use 6.0 for beginner level competitions, even trying to rank a field of 10 skaters can be a pain in the arse. Because as you go along you are not giving skaters credit for what they do but rather pigeonholing them into a place.

    However you talk to any judge and they will talk about performances in terms of how it made them feel and what they liked about the performances pretty much as any skating fan.
    Last edited: May 8, 2013
  18. alilou

    alilou Crazy Stalker Lady

    This is just nonsense to me. I've been watching skating since the 70s. What I saw then is what I see now: a lot of fairly pedestrian programs, and a few outstanding ones. The judging system doesn't make much difference. I'm frankly puzzled that anyone thinks skating was so beautiful back in the 70s, 80s and 90s. Much of it was literally nothing more than crossovers and elements, and some of it was extremely beautiful. Just as much of it now is fridge-break material (except I love to watch everyone because I'm thrilled when I see someone's improvements) and some of it is extremely beautiful: Takahashi's best, Abbott's best, heck even Chan at his best, Kozuka, Czisny (an exquisite skater), S&S's Pina, and in ice dance I could create a list a mile long of beautiful programs. To imply IJS has taken the beauty out of skating I find bewildering. What are you watching?
    I'm aware that the system has some flaws that could be addressed, but I much prefer it to 6.0
  19. Zemgirl

    Zemgirl Well-Known Member

    A few months ago I ran a few searches for skating articles on some academic databases and Google Scholar. As it turns out, Shirra Kenworthy (I imagine it's the same one who competed for Canada in the 1960s) wrote a dissertation (2009) on decision-making and cognitive processes of skating judges. I also found this 2004 article, which examined reputation bias in judging (under 6.0, though), and this 2010 study found what the authors termed a "difficulty bias" in gymnastics.

    I haven't read any of these in full - just saved them in my future reading file, so I can't comment about the rationale, methodology etc.

    Replication studies are very difficult to publish ;)
  20. vodkashot

    vodkashot Active Member

    Well, that's one way to frame it. Looking at it another way, 6.0 was way more transparent and accountable than IJS. You knew exactly who was giving what mark. You may not have known how judges came up with their marks, but it's not like we know why or how judges come up with their marks under IJS either. You can see where the marks are distributed, but that's about it. Who knows why one judge gives an element a 0 in GOE, but another gives the exact same element a +2? Ditto for wide point spreads in PCS as well. And in some ways IJS is even worse than 6.0 because of the anonymous judging, which really is the epitome of zero accountability and transparency.
  21. Aussie Willy

    Aussie Willy Hates both vegemite and peanut butter

    Sorry but I thought not identifying judges came in before IJS and was a result of previous controversies. So how can that be blamed on IJS?

    Actually, at most competitions judges are identified, just not at the top international events. You actually have to go into the software and adjust the parameters so that judges are shown randomly on the protocols. So it is not a problem with IJS but rather a decision that has been made about its application. So don't blame IJS; blame those who made the decision not to identify the judges at that level of competition.

    So for the majority of events in the world, the judges are identified and everyone knows who has given what marks.
    Last edited: May 8, 2013
  22. Marco

    Marco Well-Known Member

    Corrupt / irresponsible / reputational / plain unreasonable judging isn't just an IJS problem. It's the same under 6.0. But at least under IJS there are more published and even quantified criteria for everything, and any clear deviation becomes obvious (not that they are going to do anything about it).

    After a seemingly clean performance, you used to get a 5.6/5.8 afterwards. You left the building not knowing exactly what you needed to work on or why you didn't score higher. Now you get a TES and a PCS and then a protocol full of details afterwards. You know you received fewer points because your jumps were downgraded, or your spin did not count, or you did not show sufficient skating skills or transitions. It's definitely more transparent now.

    Values and rules can always be tweaked (and judging will always be a problem in a subjective, judged sport - which isn't as black and white as who crossed the line first) but the concept of IJS is much better if you ask me.

    Back to the original question, I think PJ's point is valid but not full. The failure to learn the system could be due to:

    - its complexity (I personally think the system is simple enough),
    - the details being overly mathematically focused (except if you only focus on the final numbers, a 5.6/5.8 or 70/80 isn't all that different),
    - mass media not properly explaining things (how hard is it to explain that every element is scored based on difficulty and quality, and the skater's qualities and programs are also scored based on quality and complexity - at the end, the person with the most points wins) or
    - a simple reluctance to learn (the thrill of the 6.0, reluctance to change, 6.0 being more predictable etc).
    gkelly and (deleted member) like this.
  23. Aussie Willy

    Aussie Willy Hates both vegemite and peanut butter

    I think the thing I see so much of here is people applying their criticism only to the international events. They really don't understand that this system goes right down to the grass roots of the sport, so they have no knowledge of its day-to-day application at the little club competitions that happen every week.
  24. umronnie

    umronnie Well-Known Member

    Transparent? All the complexity of IJS, the rows after rows of numbers that make up the final score, makes it so much easier to cheat.

    Two judges working together can dramatically change the result, and not only can you not tell who gave which mark, you cannot even see that they were doing it. All they have to do is give GoEs one degree higher than is "warranted" and PCS scores only 0.25 more. Since there is a "marks corridor" anyway, and judges' individual scores can vary from -1 to +2, this is undetectable. But what is the result?

    Suppose those two judges give the highest score on everything. One of their scores will be thrown out, but the other will count (and since two judges "agreed" on the high score, they certainly don't look out of place). With an average GoE factor of 0.5 per element, the additional +1s will result in 3.5 points in the SP and 6-6.5 points in the FS (12/13 elements). An additional 0.25 in PCS is worth 1/1.25 points in the SP and 2/2.5 in the FS. A total point difference of 12.5-13.75.
    Divide the additional points by the 7 remaining judges and you get nearly 2 points per skater. 2 points may not seem like much, but the men's gold in London was decided by 1.3, the ladies' silver/bronze by 0.93 and the pairs' silver/bronze by 1.0...

    Now, to be more effective, all our duo needs to do is do the same for their favorite's closest rival, but in reverse - 1 less GoE degree, 0.25 less PCS. The net difference is 3.5-4 points, and that is a considerable margin, decided in effect by one judge's marks (the other's marks get thrown away).

    There can be many arguments for IJS, but transparency and reliability are not among them.
    Cherub721 and (deleted member) like this.
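    The arithmetic in the post above can be sketched in a few lines. This is a hedged illustration using the post's own assumptions: a 9-judge panel trimmed to 7 counted marks, one inflated set of marks surviving the trim, an average GOE step worth about 0.5 points, 5 program components, and PCS factors of 1.0 (SP) and 2.0 (FS) as for senior men; actual values vary by discipline and season.

    ```python
    # Sketch of the two-judge collusion arithmetic: two judges on a 9-judge
    # panel inflate every GOE by +1 and every PCS mark by +0.25; trimming
    # drops one inflated set of marks, the other survives among the 7
    # counted judges. All factors below are the post's illustrative values.

    def collusion_swing(n_elements, goe_step_value, n_components, pcs_factor,
                        counted_judges=7):
        """Approximate score gain from ONE surviving inflated set of marks."""
        extra_goe = n_elements * 1 * goe_step_value    # +1 GOE per element
        extra_pcs = n_components * 0.25 * pcs_factor   # +0.25 per component
        # The inflated marks are averaged with the other counted judges.
        return (extra_goe + extra_pcs) / counted_judges

    sp = collusion_swing(7, 0.5, 5, 1.0)    # short program: 7 elements
    fs = collusion_swing(13, 0.5, 5, 2.0)   # free skate: 13 elements

    print(round(sp, 2))       # 0.68
    print(round(fs, 2))       # 1.29
    print(round(sp + fs, 2))  # 1.96 -- the "nearly 2 points per skater"
    ```

    Applying the same trick in reverse to the favorite's closest rival roughly doubles the swing, giving the 3.5-4 point net margin the post describes.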
  25. gkelly

    gkelly Well-Known Member

    Well, I guess it depends what you mean by "transparent."

    Do we want to know the reasons for the results -- both what the judges thought about the elements and the program as a whole, and how much each part of the program was worth? In that case, a system that breaks down separate scores for each element and separate scores for each global aspect of the program is more transparent than one that gives a single number for technical merit and a single score for presentation.

    Do we want to know which individual official gave which set of marks? In that case, we want columns of scores to line up with the judges' names.

    These are two separate questions. It's possible to have one of these kinds of transparency but not the other, or both at the same time, or neither. In the past decade, we've seen all four possible permutations.

    With neither approach do we know exactly what judges (or technical panels) were thinking. But we have a lot more information with IJS than we do with 6.0.

    Neither of those kinds of transparency has anything to do with how easy it is to cheat. Officials who actively want to cheat will find a way to do so regardless of the system. It does make sense to try to build in safeguards to discourage people from trying, to catch them if they do try, and to minimize the effects on the final results if they succeed.

    However, when designing a system in the first place and deciding whether to implement it, the first question shouldn't be "How hard will it be to cheat, or to catch the cheaters?"

    The first questions should be "How well will this system work to evaluate the skating effectively when used honestly and to communicate those evaluations to the competitors and to the public?"

    As Aussie Willy notes, the vast majority of competitions below the senior international level use both forms of transparency: detailed marks AND all judges identified.

    If the detailed marks tell us more about the reasons behind the results, then it's worth using detailed marks.

    OK, so then, after we have a system that works well and communicates well when used honestly, we need to address the question of how to deal with the cases when people actively try to manipulate the system dishonestly. These cases are a minority, the exception rather than the rule, but they do occur more often at the highest levels where the stakes are highest, and so they gain more attention.

    Would minimizing the instances or effect of active cheating be more effective by identifying the judges or by obscuring their identities? The source of the dishonesty (individual judges' initiative vs. pressure from federations) may make a difference. Most of us here think identifying individuals would be better. Some decisionmakers within the ISU think otherwise.

    If we were all to agree on naming the judges and making responsibility for each mark transparent to anyone who reads the protocols, then we come back to the question of which approach to scoring is more effective for evaluating the skating and for communicating the reasons for the evaluations to the skaters and other observers when used honestly.

    Then we can get into questions of reliability in terms of honest evaluations. But "transparency" is a separate issue.
  26. leafygreens

    leafygreens Well-Known Member

    My iron lotus consists of :rolleyes: and skating off.
  27. Seerek

    Seerek Well-Known Member

    Interesting point about the regional levels, though total raw scores are generally lower at the club level, so the point gaps between skaters are smaller, making any fluctuations in applying GOE, under-rotations and component scores more influential on the final placements (even with lower-level spins and jumps).
  28. Aussie Willy

    Aussie Willy Hates both vegemite and peanut butter

    Cheating comes down to an individual's actions. So that is based on human decision making.

    Those who argue that IJS is the problem are still pointing to problems based on human decision making to say the system is flawed.

    It is not the system; it is the application, and the decisions that people make about its application, that are the issue.

    Sorry but people need to stop confusing the actual system with the application of it.

    If anyone wants a copy of the software, PM me and I will tell you how to get it.
  29. Susan M

    Susan M Well-Known Member

    Doh. And how do you think those placements were decided? By adding up the points on that judge's score card: 5.8 + 5.9 beats 5.8 + 5.8. What they did not do is sum points across the whole panel, but yes, points decided the placements for each judge. As I said originally, you won the event by winning a majority of the judges.

    There actually was a time when placements were summed, but they dumped that system when judges from places like East Germany kept giving inappropriately low placements to their skaters' rivals to bring down their totals. Then they went to the majority ordinals system where one or two out of line judges could not alter the result as easily.

    That said, I am not one who pines for the 6.0 system and thinks the sport will wander in the wilderness until they return to it. I think any system will have its advantages and disadvantages and that pretty much any system could be made to work. In the case of the current judging system, it needs to be fixed to work better.

    They need to start by taking a hard look at what passes for choreography these days and figure out what in the current point values and Program Component scoring has led us to these messes.
    Last edited: May 9, 2013
  30. Aussie Willy

    Aussie Willy Hates both vegemite and peanut butter

    It might be points, but at the end of the day you are still trying to put skaters into places. And before IJS, if you had a field of 30-40 skaters, judges had to remember all the skaters who had come before to work out where that last skater might fit into the placings. Not exactly a fair system, is it? It was relying on memory of what had come before, not on what the skater did on the ice.