Because some observations exhibit problems that only an experienced person can isolate, a reliable system could not be built without human inspection. Once a person is involved in making decisions, however, strict objectivity is sacrificed. Moreover, the software systems are installed at five separate sites, each employing several different operators.
To minimize the effects of subjectivity, we chose a number of training sequences and discussed at length the appropriate flag settings for each source in these datasets. In addition, we occasionally conduct ``dispersion'' tests in which all of the operators perform their VI on the same three sequences. The results of these tests are then tabulated and discussed in order to improve our documentation and instructions to checkers. Most of the dispersion amongst operators has come from just a few flags, such as `w' (within extended emission), which understandably involve a subjective decision.
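The tabulation step of a dispersion test can be sketched as follows. This is a minimal illustration, not our actual pipeline: the operator names, source names, and flag assignments are all hypothetical, and we assume each operator records a short string of single-character flags per source. The sketch counts, for each flag, how many sources drew disagreement among operators.

```python
from collections import Counter

# Hypothetical flag strings assigned by three operators to the same
# five sources during a dispersion test (all names are illustrative).
flags_by_operator = {
    "op1": {"src1": "w", "src2": "",  "src3": "c", "src4": "w", "src5": ""},
    "op2": {"src1": "w", "src2": "w", "src3": "c", "src4": "",  "src5": ""},
    "op3": {"src1": "",  "src2": "w", "src3": "c", "src4": "w", "src5": ""},
}

def disagreement_by_flag(flags_by_operator):
    """For each flag, count the sources on which operators disagree
    about whether that flag applies."""
    sources = next(iter(flags_by_operator.values())).keys()
    counts = Counter()
    for src in sources:
        # Union of all flags any operator set on this source.
        all_flags = set().union(
            *(set(op[src]) for op in flags_by_operator.values())
        )
        for flag in all_flags:
            votes = [flag in op[src] for op in flags_by_operator.values()]
            # Disagreement: some, but not all, operators set this flag.
            if any(votes) and not all(votes):
                counts[flag] += 1
    return counts

print(disagreement_by_flag(flags_by_operator))
# → Counter({'w': 3})
```

In this toy example the subjective `w' flag accounts for all of the disagreement, mirroring the pattern we observe in practice.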