The AstroStat Slog » Eddington
http://hea-www.harvard.edu/AstroStat/slog
Weaving together Astronomy+Statistics+Computer Science+Engineering+Instrumentation, far beyond the growing borders

a century ago
Thu, 07 May 2009, by hlee
http://hea-www.harvard.edu/AstroStat/slog/2009/a-century-ago/

Almost 100 years ago, A.S. Eddington stated in his book Stellar Movements (1914) that

…in calculating the mean error of a series of observations it is preferable to use the simple mean residual irrespective of sign rather than the mean square residual

In other words, this eminent astronomer was already advocating least absolute deviation over chi-square, if one maps the simple mean residual and the mean square residual to their modern methodological counterparts, in order.

I suppose there is a reason everything is done via chi-square, although a small fraction of the astronomy community is aware that chi-square minimization is not the only utility function for finding best fits. The assumption that the residuals “(Observed – Expected)/sigma”, as written into chi-square methods, are (asymptotically) normal – Gaussian – is taken for granted with astronomical data (by the astronomers who analyze those data), mainly because of their high volume. Worse, even though checking procedures for Gaussianity are available in the statistical literature, applying them to astronomical data is either difficult or simply skipped. To be fair, if one is sure that the data/parameters of interest are sampled from a normal distribution, Eddington’s statement should be reversed because of sufficiency. We also know that the sample standard deviation is asymptotically efficient when the probability density function satisfies regularity conditions more general than the Gaussian density.
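Such a Gaussianity check need not be heavy machinery. As a rough sketch (with simulated draws standing in for real residuals; the data here are entirely made up), the sample skewness and excess kurtosis of standardized residuals should both sit near zero if the normal assumption holds, while heavier-than-Gaussian tails push the excess kurtosis well above zero:

```python
import random
import statistics

def skew_kurtosis(residuals):
    """Sample skewness and excess kurtosis; both should be near 0 for Gaussian data."""
    n = len(residuals)
    mu = statistics.fmean(residuals)
    s = statistics.pstdev(residuals)
    skew = sum((r - mu) ** 3 for r in residuals) / (n * s ** 3)
    kurt = sum((r - mu) ** 4 for r in residuals) / (n * s ** 4) - 3.0
    return skew, kurt

random.seed(0)
gauss = [random.gauss(0.0, 1.0) for _ in range(5000)]
# symmetric exponential = double-exponential (Laplace) draws: heavier tails
heavy = [random.expovariate(1.0) * random.choice([-1, 1]) for _ in range(5000)]

print(skew_kurtosis(gauss))   # both close to 0
print(skew_kurtosis(heavy))   # excess kurtosis far from 0 (near 3 for Laplace)
```

This is only a moment-based screen, not a formal test, but it already flags residuals that would quietly break a chi-square error analysis.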

As a statistician, it is easy to say, “assume the data are iid standard normal, without loss of generality,” and then develop a statistic, investigate its characteristics, and compare it with other statistics. If the statistic does not show promising results in that comparison and suffers badly under the normality assumption, statisticians will try to make it robust by checking and relaxing the assumptions and the math. On the other hand, I am not sure how comfortable astronomers are with the Gaussianity assumption in their data, most of which are wedded to statistics or statistical tools built on the normal assumption. How often has the effort of devising a statistic, or trying different statistics, actually taken place?

Without imposing the Gaussianity assumption, I think Eddington’s comment is extremely insightful. Commonly cited statistical methods in astronomy, like chi-square methods, are built on the Gaussianity assumption, under which the sample standard deviation is used for σ, the scale parameter of the normal distribution: one σ maps to 68% coverage, and multiples of the sample standard deviation correspond to the well-known percentiles given in Numerical Recipes. In the end, I think statistical analysis in the astronomy literature suffers from a dilemma: “which came first, the chicken or the egg?” I also feel it is a setback that such an insightful comment from one of the most renowned astrophysicists didn’t gain much weight over so many decades. My impression that Eddington’s suggestion was ignored comes from reading only a fraction of the publications in major astronomical journals, so I might be wrong. Perhaps astronomers use LAD and do robust inference more often than I think.

Unless one is sure about the Gaussianity in the data (covering the sampling distribution, the residuals between observed and expected, and any transformations), the sample standard deviation may not be appropriate for getting error bars with matching coverage in inference problems. Estimating posterior distributions is a well-received approach among some astronomers, and there are good tutorials and textbooks on Bayesian data analysis for astronomers. For those familiar with the basics of statistics, PyMC and its tutorial (or another link from python.org) will be very useful for proper statistical inference. If Bayesian computation sounds too cumbersome, then for simplicity follow Eddington’s advice: instead of the sample standard deviation, use the mean absolute deviation (the simple mean residual, in Eddington’s words) to quantify uncertainty. One may also want to compare the best fits and error bars from both strategies.
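Eddington’s simple mean residual is easy to adopt in practice. A minimal sketch comparing the two strategies (with simulated measurements standing in for real data; all numbers here are invented): for Gaussian data the mean absolute deviation, rescaled by sqrt(pi/2), recovers σ just as the sample standard deviation does, while a small admixture of outliers inflates the standard deviation far more:

```python
import math
import random
import statistics

def mean_abs_dev(xs):
    """Eddington's 'simple mean residual': mean of |x - xbar|."""
    mu = statistics.fmean(xs)
    return statistics.fmean(abs(x - mu) for x in xs)

random.seed(1)
clean = [random.gauss(0.0, 1.0) for _ in range(10000)]
# same sample, but with 2% of the points replaced by wildly scattered measurements
dirty = clean[:9800] + [random.gauss(0.0, 10.0) for _ in range(200)]

# For a Gaussian, sigma = sqrt(pi/2) * (mean absolute deviation)
scale = math.sqrt(math.pi / 2.0)
print(statistics.stdev(clean), scale * mean_abs_dev(clean))  # both near 1
print(statistics.stdev(dirty), scale * mean_abs_dev(dirty))  # stdev inflated much more
```

The rescaling factor sqrt(pi/2) only matters when one wants the two error bars on the same footing under the Gaussian model; the robustness advantage of the mean absolute deviation holds either way.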

——————————————————
This story was inspired by Studies in the History of Probability and Statistics. XXXII: Laplace, Fisher, and the discovery of the concept of sufficiency by Stigler (1973) Biometrika v. 60(3), p.439. The quote of Eddington was adapted from this article. Another quote from this article I’d like to share:

Fisher went on to observe that this property of σ2[1] is quite dependent on the assumption that the population is normal, and showed that indeed σ1[2] is preferable to σ2, at least in large samples, for estimating the scale parameter of the double exponential distribution, providing both estimators are appropriately rescaled

By assuming that each observation is normally (Gaussian) distributed with mean (mu) and variance (sigma^2), and that the object was to estimate sigma, Fisher proved that the sample standard deviation (or mean square residual) is more efficient than the mean deviation from the sample mean (or simple mean residual). Laplace had proved it as well. The catch is that the assumptions come first; the sample standard deviation is not automatically the estimator of error (or sigma) for an unknown distribution.

  1. sample standard deviation
  2. mean deviation from the sample mean
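Fisher’s comparison is easy to reproduce by Monte Carlo. A sketch under invented settings (sample size 200, 4000 repetitions; not from the original papers): the relative spread (coefficient of variation) of the two scale estimators flips depending on whether the data are Gaussian or double-exponential, exactly as the quote says:

```python
import random
import statistics

def relative_spread(draw, n=200, reps=4000, seed=2):
    """Coefficient of variation of the SD and mean-deviation scale estimators,
    estimated by repeated sampling from the distribution `draw`."""
    rng = random.Random(seed)
    sd_vals, md_vals = [], []
    for _ in range(reps):
        xs = [draw(rng) for _ in range(n)]
        mu = statistics.fmean(xs)
        sd_vals.append(statistics.stdev(xs))
        md_vals.append(statistics.fmean(abs(x - mu) for x in xs))
    cv = lambda v: statistics.pstdev(v) / statistics.fmean(v)
    return cv(sd_vals), cv(md_vals)

gauss = lambda rng: rng.gauss(0.0, 1.0)
laplace = lambda rng: rng.expovariate(1.0) * rng.choice([-1, 1])

print(relative_spread(gauss))    # SD has the smaller relative spread here
print(relative_spread(laplace))  # mean deviation has the smaller spread here
```

Comparing coefficients of variation sidesteps the rescaling bookkeeping: it measures each estimator’s spread relative to what it is estimating, so the two are on equal footing under either distribution.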
Eddington versus Malmquist
Thu, 13 Mar 2008, by vlk
http://hea-www.harvard.edu/AstroStat/slog/2008/eddington-versus-malmquist/

During the run-up to his recent talk on logN-logS, Andreas mentioned how people are sometimes confused about the variety of statistical biases that afflict surveys. They usually know what the biases are, but often tend to mislabel them, especially the Eddington and Malmquist types. Sort of like using “your” and “you’re” interchangeably, which to me is like nails on a blackboard. So here’s a brief summary:

Eddington Bias: What you get because of statistical fluctuations in the measurement (Eddington 1913). A set of sources with a single luminosity will, upon observation, be spread out due to measurement error. When you have two sets of sources with different luminosities, the observed distributions will overlap. If there are more objects of one luminosity than the other, you are in danger of underestimating the fraction in the more numerous set, because more of those “scatter” into the other’s domain than the reverse. Another complication: if the statistical scatter bumps up against some kind of detection threshold, then the inferred luminosity based only on the detected sources will end up being an overestimate.
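A toy simulation (all numbers invented) shows both faces of the Eddington bias at once: with many more faint sources than bright ones, measurement scatter moves more sources up across a classification cut than down, and a detection threshold biases the mean flux inferred from detected faint sources upward:

```python
import random
import statistics

random.seed(3)
# Toy population: many faint sources (true flux 1.0), few bright ones (true flux 2.0)
true_fluxes = [1.0] * 9000 + [2.0] * 1000
observed = [f + random.gauss(0.0, 0.3) for f in true_fluxes]  # measurement scatter

# Classify with a cut halfway between the two true fluxes: the observed "bright"
# count exceeds the true 1000, because more faint sources scatter up than bright
# sources scatter down
n_bright_obs = sum(1 for f in observed if f > 1.5)
print(n_bright_obs)

# Detection threshold at 1.2: among the faint sources alone, the mean observed
# flux of the detected ones overestimates the true flux of 1.0
detected_faint = [f for f, t in zip(observed, true_fluxes) if t == 1.0 and f > 1.2]
print(statistics.fmean(detected_faint))  # well above the true value 1.0
```

The 9:1 population ratio and the 0.3 scatter are arbitrary choices; any steeply falling source-count distribution with symmetric measurement errors produces the same asymmetry.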

Malmquist Bias: What you get because you can see brighter sources out to farther distances. This means that if your survey is flux limited (as most are), then the intrinsically brighter sources will appear to be more numerous than they ought to be, because you are seeing them in a larger volume. This is the reason, for instance, that there are 10 times more A stars in the SAO catalog than there are M stars. This is a statistical effect only in the sense that a “true” dataset is filtered by a detectability threshold. Anyone working with a volume limited sample does not need to worry about this at all.
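The Malmquist effect is just as easy to see in a toy flux-limited survey (again, all numbers invented): two populations with equal space densities but luminosities differing by a factor of 100, placed uniformly in a sphere; the flux cut leaves the bright population hugely overrepresented:

```python
import math
import random

random.seed(4)
R = 10.0  # radius of the toy survey volume

def sources(lum, n):
    # distances uniform in volume: CDF (d/R)^3  =>  d = R * U^(1/3)
    return [(lum, R * random.random() ** (1.0 / 3.0)) for _ in range(n)]

# Equal true numbers (and space densities) of faint and bright sources
population = sources(1.0, 50000) + sources(100.0, 50000)

# Flux-limited survey: keep sources whose flux L/(4*pi*d^2) exceeds the limit
flux_limit = 0.05
detected = [(L, d) for L, d in population if L / (4 * math.pi * d * d) > flux_limit]

n_faint = sum(1 for L, _ in detected if L == 1.0)
n_bright = sum(1 for L, _ in detected if L == 100.0)
print(n_faint, n_bright)  # bright sources vastly outnumber faint, despite equal truth
```

With this flux limit the faint sources are only visible out to a small fraction of R, so the detected ratio is set by the cube of the two visibility distances, not by the true space densities.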
