That's an interesting proposal.
Effects of national bias would probably be no worse than is currently the case, since judges from different countries could cancel out each other's biases across the different components.
However, if only one or two judges are judging each component, then the scores for that component would end up heavily dependent on those individual judges' personal preferences, pet peeves, and weighting of the different criteria for that component -- and also on whether each judge tends to score high or low in general, or to use a particularly wide or narrow range between skaters of similar but not identical ability on those criteria.
There would be no dropping of outlier scores -- or even averaging if only one judge is assigned -- to mitigate any of the "noise" introduced by how any individual judge interprets particular criteria or tends to use numbers.
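For concreteness, here's a minimal sketch of the trimmed-mean idea the current system relies on (the exact ISU trimming rules have varied by era and score type; this version just drops the single highest and lowest mark):

```python
def panel_score(marks):
    """Average a panel's marks after dropping the highest and lowest.

    This is the general trimmed-mean idea used to damp the effect of
    any one judge's scoring habits; actual ISU rules have varied.
    """
    if len(marks) < 3:
        # With one or two judges there is nothing to trim --
        # a single outlier-prone judge moves the score directly.
        return sum(marks) / len(marks)
    trimmed = sorted(marks)[1:-1]
    return sum(trimmed) / len(trimmed)

# Nine judges, one of whom scores unusually low (made-up numbers):
print(panel_score([8.25, 8.0, 8.5, 8.25, 6.5, 8.0, 8.25, 8.5, 8.0]))  # ~8.18
print(panel_score([6.5]))  # a single judge: the outlier IS the score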
So I don't think we could say that the scores would be "more accurate" as a result.
What they would be, most likely, is less similar to the scores for the other components. But how much of that dissimilarity is due to separating the evaluation of the different components from each other, and how much to the fact that different human beings are doing the scoring for each, would be difficult to identify.
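One way to see the confound: even under a toy model where each mark is a "true" component value plus a per-judge tendency plus noise, a single judge's mark can't separate the two. A sketch with entirely made-up numbers:

```python
import random

random.seed(1)

# Hypothetical "true" values for two components:
TRUE_SCORE = {"Skating Skills": 8.0, "Interpretation": 7.5}

def mark(component, judge_bias, judge_spread):
    """Toy model: mark = true value + judge's general tendency + noise."""
    return TRUE_SCORE[component] + judge_bias + random.gauss(0, judge_spread)

# One judge per component: the difference we observe between components...
ss = mark("Skating Skills", judge_bias=+0.4, judge_spread=0.3)
interp = mark("Interpretation", judge_bias=-0.3, judge_spread=0.3)
print(ss - interp)  # ...mixes the true 0.5 gap with 0.7 of judge bias plus noise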
What might be an interesting experiment:
Get a large group of expert judges together and divide them into three or more groups of at least 9 judges each.
Assemble a group of diverse performances (preferably performed live by skaters of different skill levels and emphases).
Assign one group to judge these performances normally, scoring GOEs (with input from a technical panel) and all components.
Have the remaining groups each score a different component and only that one component. If there are only two remaining groups, I'd suggest assigning them Skating Skills and Interpretation, since these have the least to do with each other.
Then look at the results.
If the averaged component scores of the normally judging panel are all close to each other, as is usually the case, and if the averaged scores of the single-component panels differ significantly more from the other components' scores while staying fairly similar between judges on the same component, that would argue that separating the scoring of each component produces more accurate component scores. I.e., different experts will converge on approximately the same score for a component if that component is all they need to focus on.
However, if there is still a wide range between the top and bottom scores for each component for each skater (i.e., the judges for that component disagreed with each other about what number to assign, for any of a variety of reasons), and a narrower range between the average scores for one component and a completely different component scored by a different panel, that would suggest that differences between individual judges' thought processes have more effect than any benefit from letting them avoid thinking about GOEs and the other components.
In the latter case, assigning different judges to different components would lead to less rather than more accuracy.
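If anyone wanted to sanity-check that comparison, the two diagnostics above are easy to compute from the raw marks. A rough sketch, assuming the marks for one skater arrive as a dict of component name to a list of per-judge marks (all numbers invented):

```python
def diagnostics(scores):
    """scores: {component_name: [marks from that component's judges]}.

    Returns (within, between):
      within  -- per-component top-to-bottom range across its judges
                 (large => judges on the same component disagree)
      between -- range among the component averages
                 (large => the components genuinely separate)
    """
    averages = {c: sum(m) / len(m) for c, m in scores.items()}
    within = {c: max(m) - min(m) for c, m in scores.items()}
    between = max(averages.values()) - min(averages.values())
    return within, between

# Made-up marks for one skater from two single-component panels:
scores = {
    "Skating Skills": [8.5, 8.25, 8.5, 8.75, 8.25],
    "Interpretation": [7.0, 8.5, 7.75, 6.75, 8.25],
}
within, between = diagnostics(scores)
print(within)   # Interpretation panel spans 1.75 points: its judges disagree
print(between)  # averages differ by 0.8: the components separate somewhat
```

A large within-component spread combined with a small between-component gap would point to the second (judge-noise) pattern; the reverse would point to the first.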
I'd be curious what the results would be.