#### appealing eyes == powerful method

To claim that results are statistically powerful, astronomers rely heavily on eyeballing techniques (skills that take an apprenticeship to acquire, but that look subjective to me without such training). In some cases, I know of actual statistical tests that could support or refute those claims, so I believe astronomers are well aware of such tests. I suspect they are afraid that those statistics might reject their claims, or might not look powerful enough in numeric metrics. Instead, they spend their efforts making the graphics more appealing.

One thing I want to point out is the phrase “statistically powerful method” (used in contexts where statistical data analysis is performed and claimed to be a powerful method), which often misguides me because I cannot find a __powerful test statistic__ in the study. Please do not tell me eyeballing is a powerful test: a single outlier could change the result. I do understand that the concept of a “powerful test” is different here, and also that, in the name of good analysis and visual appeal, astronomers provide very elaborate but non-mathematical procedures for getting rid of outliers, which is by itself quite scientific.
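To illustrate how fragile an eyeball judgment can be, here is a minimal Python sketch (toy data of my own, not from any astronomical study) in which a single wild point flips the sign of a Pearson correlation:

```python
import numpy as np

# Ten points lying exactly on y = x: the correlation is perfect.
x = np.arange(1.0, 11.0)
y = x.copy()
r_clean = np.corrcoef(x, y)[0, 1]

# Add one wild point and recompute.
x_out = np.append(x, 11.0)
y_out = np.append(y, -100.0)
r_out = np.corrcoef(x_out, y_out)[0, 1]

print(f"r without outlier: {r_clean:+.3f}")  # +1.000
print(f"r with one outlier: {r_out:+.3f}")   # negative
```

A scatter plot of the eleven points would still "look" strongly correlated to the eye, yet the numeric summary has already reversed sign.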

I’m not applying this story to all publications in astronomy. There are plenty of papers that describe statistical data analysis in depth. However, there are more that claim to have performed powerful statistical data analysis and produced statistically superior results without any statistical inference procedure. Before discarding unpleasant data points, polishing graphics, and claiming excellent results from statistics, I wish they would give a second thought to outliers and use the statistical lexicon carefully. Also, instead of saying “statistically powerful method,” phrasings such as “the graphics show improved results” or “the result of the data analysis is well confined within the predicted region and favors model A” would make a statistician feel more comfortable reading astronomical journals. Otherwise, it would be better not to put “statistical” or “statistics” in the abstract.

By the way, there is a field called **Exploratory Data Analysis,** which offers statistical eyeballing methods and diagnostic tools. (Papers and books by John Tukey, co-inventor of the Fast Fourier Transform algorithm, would be useful, despite the time elapsed, to people in any discipline that requires eyeballing data in a statistical fashion. He passed away at the turn of this century, but I think he had foreseen the direction statistical data analysis would take.)
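For a concrete taste of Tukey-style diagnostics, here is a short Python sketch of his well-known 1.5 × IQR “fences” rule for flagging outliers, the same rule that draws boxplot whiskers (the data are made up for illustration):

```python
import numpy as np

data = np.array([10, 11, 12, 13, 14, 15, 16, 17, 18, 100.0])

# Tukey's fences: points beyond Q1 - 1.5*IQR or Q3 + 1.5*IQR
# are flagged as potential outliers (the boxplot whisker rule).
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = data[(data < lower) | (data > upper)]
print(f"fences: [{lower:.2f}, {upper:.2f}]")
print("flagged:", outliers)
```

The appeal of the rule is that it turns an eyeball impression ("that point looks far away") into a reproducible, quartile-based criterion.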

## vlk:

Ah, I don’t think that is a correct characterization, Hyunsook. Astronomers do *not* use eyeballs to claim that this or that test works well. I’m afraid that I may be to blame for this wrong impression, because I keep saying that there is no statistical test that can match the human eye. By which I actually mean the human brain, in its ability to recognize patterns. The converse is that the brain is easily fooled into seeing patterns that don’t exist, and yes, we are well aware of that. In fact, that is why we ask for more and better statistical analyses, and rely on those to gain confidence in our results.

I do admit, however, that the concept of statistical power is not all that familiar to astronomers. We think of it usually in the context of probability of detection, and not much else. However, it is also true that most statistical techniques that currently exist are far too general and simpleminded to deal with the complexities encountered daily in astronomical data. It would be foolish to expect, say, that a local detection method would work as well as a human at tagging and identifying coronal loops in a Hinode image. So naturally we develop a bs detector to tell whether some test or technique is not working. But don’t mistake that for rigorous analysis!

09-13-2008, 11:03 am

## vlk:

You also make a very good point when you say

> Before discarding unpleasant data points, polishing graphics, and claiming excellent results from statistics, I wish they would give a second thought to outliers and use the statistical lexicon carefully.

I have found that astronomers are generally unable to judge the comparative powers of different tests. This is not surprising, because we aren’t trained for it. So, for instance, when two tests give apparently contradictory answers, we can’t tell which one to believe. The answer, of course, is to always go with the more powerful one. But that’s easier said than done.
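To make "comparative power" concrete, here is a hedged Python sketch (a toy Monte Carlo of my own, assuming SciPy ≥ 1.7 for `stats.binomtest`) estimating the power of a one-sample t-test versus a sign test when the data really are shifted Gaussians — a setting where the t-test is known to be the more powerful of the two:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, shift, alpha, n_trials = 20, 0.5, 0.05, 2000

hits_t = hits_sign = 0
for _ in range(n_trials):
    # True mean/median is shifted away from 0 by half a sigma.
    sample = rng.normal(loc=shift, scale=1.0, size=n)

    # One-sample t-test of H0: mean = 0.
    if stats.ttest_1samp(sample, 0.0).pvalue < alpha:
        hits_t += 1

    # Sign test of H0: median = 0 (binomial test on positive signs).
    if stats.binomtest(int(np.sum(sample > 0)), n, 0.5).pvalue < alpha:
        hits_sign += 1

power_t, power_sign = hits_t / n_trials, hits_sign / n_trials
print(f"t-test power:    {power_t:.2f}")
print(f"sign-test power: {power_sign:.2f}")
```

Here "more powerful" simply means a higher probability of rejecting a false null at the same significance level, which is the operational version of vlk's advice: when two tests disagree, trust the one with higher power under the relevant alternative.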

09-13-2008, 11:10 am