The AstroStat Slog » LAD
http://hea-www.harvard.edu/AstroStat/slog
Weaving together Astronomy+Statistics+Computer Science+Engineering+Instrumentation, far beyond the growing borders

a century ago
Thu, 07 May 2009, by hlee

Almost 100 years ago, A.S. Eddington stated in his book Stellar Movements (1914) that

…in calculating the mean error of a series of observations it is preferable to use the simple mean residual irrespective of sign rather than the mean square residual

If one matches "simple mean residual" and "mean square residual" to the corresponding modern methodologies, this eminent astronomer was already advocating least absolute deviation over chi-square.

I guess there is a reason everything is done based on the chi-square, although a small fraction of the astronomy community is aware that chi-square minimization is not the only utility function for finding best fits. The assumption that the residuals "(Observed – Expected)/sigma", as written in chi-square methods, are (asymptotically) normal, i.e. Gaussian, is taken for granted by astronomers analyzing these data, mainly because of their high volume. Worse, even though checking procedures for Gaussianity are available in the statistical literature, applying them to astronomical data is either difficult or simply skipped. Of course, if one is sure that the data/parameters of interest are sampled from a normal distribution, Eddington's statement should be reversed, on the grounds of sufficiency. We also know the asymptotic efficiency of the sample standard deviation when the probability density function satisfies regularity conditions more general than the Gaussian density.
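Such a Gaussianity check is in fact only a few lines away. As a minimal sketch (the residuals below are simulated stand-ins drawn from a heavy-tailed distribution, not from a real fit, and scipy is assumed to be available), one can apply the Shapiro–Wilk test to the standardized residuals:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical standardized residuals, (Observed - Expected)/sigma,
# simulated here from a heavy-tailed t distribution instead of a real fit.
residuals = rng.standard_t(df=3, size=200)

# Shapiro-Wilk test of the null hypothesis that the residuals are Gaussian.
stat, pvalue = stats.shapiro(residuals)
print(f"W = {stat:.3f}, p = {pvalue:.2g}")
if pvalue < 0.05:
    print("Gaussianity is doubtful; chi-square error bars may be off.")
```

A small p-value is evidence against the normality assumption underlying the chi-square machinery; it does not by itself say which alternative is appropriate.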

As a statistician, it is easy to say, "assume that the data are iid standard normal, wlog," then develop a statistic, investigate its characteristics, and compare it with other statistics. If this statistic does not show promising results in the comparison and suffers badly under the normality assumption, statisticians will attempt to make it robust by checking and relaxing the assumptions and the math. On the other hand, I am not sure how comfortable astronomers are with this Gaussianity assumption, since most of their data are married to statistics or statistical tools built on the normal assumption. How often have such efforts of devising a statistic and trying different statistics taken place?

Even without questioning the Gaussianity assumption, I think that Eddington's comment is extremely insightful. Commonly cited statistical methods in astronomy, like chi-square methods, are built on the Gaussianity assumption, under which the sample standard deviation is used for σ, the scale parameter of the normal distribution that maps to 68% coverage; multiples of the sample standard deviation correspond to well-known percentiles, as given in Numerical Recipes. In the end, I think statistical analysis in the astronomy literature suffers from a "which came first, the chicken or the egg?" dilemma. It is also disheartening that such an insightful comment from one of the most renowned astrophysicists gained so little weight over many decades. My impression that Eddington's suggestion was ignored comes from reading only a fraction of the publications in major astronomical journals, so I might be wrong. Perhaps astronomers use LAD and do robust inference more often than I think.

Unless one is sure about the Gaussianity in the data (covering the sampling distribution, the residuals between observed and expected, and some transformations), the sample standard deviation may not be appropriate for error bars with matching coverage in inference problems. Estimating posterior distributions is a well-received approach among some astronomers, and there are good tutorials and textbooks on Bayesian data analysis for astronomers. For those familiar with the basics of statistics, pyMC and its tutorial (or another link from python.org) will be very useful for proper statistical inference. If Bayesian computation sounds too cumbersome, then for simplicity follow Eddington's advice: instead of the sample standard deviation, use the mean absolute deviation (the simple mean residual, in Eddington's words) to quantify uncertainty. One may then want to compare the best fits and error bars from both strategies.
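As a minimal sketch of that comparison (the data here are simulated Gaussian draws, my own illustrative choice, so the two scale estimates should agree; on heavy-tailed data they would not):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=10_000)

# Sample standard deviation: the usual chi-square-based scale estimate.
s = x.std(ddof=1)

# Eddington's "simple mean residual": mean absolute deviation from the mean.
mad = np.mean(np.abs(x - x.mean()))

# Under Gaussianity, E|x - mu| = sqrt(2/pi) * sigma, so the two scales
# can be put on the same footing for comparison.
sigma_from_mad = mad / np.sqrt(2.0 / np.pi)

print(f"s              = {s:.3f}")
print(f"mean |resid|   = {mad:.3f}")
print(f"sigma from MAD = {sigma_from_mad:.3f}")
```

When the two rescaled estimates disagree markedly on real residuals, that disagreement is itself a warning about the Gaussianity assumption.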

——————————————————
This story was inspired by Studies in the History of Probability and Statistics. XXXII: Laplace, Fisher, and the discovery of the concept of sufficiency by Stigler (1973), Biometrika v. 60(3), p. 439. The quote of Eddington was adapted from this article. Here is another quote from the article that I would like to share:

Fisher went on to observe that this property of σ2[1] is quite dependent on the assumption that the population is normal, and showed that indeed σ1[2] is preferable to σ2, at least in large samples, for estimating the scale parameter of the double exponential distribution, providing both estimators are appropriately rescaled

Assuming that each observation is normally (Gaussian) distributed with mean (mu) and variance (sigma^2), and that the object is to estimate sigma, Fisher proved that the sample standard deviation (or mean square residual) is more efficient than the mean deviation from the sample mean (or simple mean residual). Laplace had proved it as well. The catch is that the assumptions come first; the sample standard deviation is not automatically the right estimator of the error (or sigma) of an unknown distribution.

  1. sample standard deviation
  2. mean deviation from the sample mean
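Fisher's comparison is easy to reproduce by simulation. The sketch below is mine, not from the article (the sample size, trial count, and rescaling constants are illustrative choices): it pits the rescaled sample standard deviation against the rescaled mean deviation on Gaussian data and on double exponential (Laplace) data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 100, 20_000

mse = {}
for dist, draw in [("normal", rng.standard_normal),
                   ("laplace", lambda size: rng.laplace(size=size))]:
    x = draw(size=(trials, n))
    # s: sample standard deviation; d: mean deviation from the sample mean.
    s = x.std(ddof=1, axis=1)
    d = np.mean(np.abs(x - x.mean(axis=1, keepdims=True)), axis=1)
    if dist == "normal":
        # True sigma = 1 and E|x - mu| = sqrt(2/pi): rescale d to target 1.
        s_hat, d_hat = s, d / np.sqrt(2.0 / np.pi)
    else:
        # Laplace(scale=1): sd = sqrt(2) and E|x - mu| = 1: rescale s.
        s_hat, d_hat = s / np.sqrt(2.0), d
    mse[dist] = (np.mean((s_hat - 1.0) ** 2), np.mean((d_hat - 1.0) ** 2))
    print(dist, "MSE(s) = %.5f, MSE(mean dev) = %.5f" % mse[dist])
```

On Gaussian data the standard deviation should come out more efficient (Fisher's point); on double exponential data the mean deviation should win, exactly the reversal Stigler recounts.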
[MADS] plug-in estimator
Tue, 21 Apr 2009, by hlee

I asked a couple of astronomers if they had heard the term plug-in estimator, and none of them gave me a positive answer.

When we compute the sample mean (xbar) and sample variance (s^2) to form the interval (xbar-s, xbar+s) and claim that this interval covers 68%, the sample mean, the sample variance, and the interval are all plug-in estimators. Once the form of the sampling distribution is clarified, or once it is verified that the formulas/estimators for the sample mean and sample variance truly match the true mean and true variance, I can drop the "plug-in" part, because asymptotically such an interval (estimator) will cover 68%.
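A quick simulation makes the plug-in caveat concrete (the distributions and sample size here are my own illustrative choices): the interval (xbar - s, xbar + s) delivers roughly 68% population coverage for Gaussian data, but not for skewed data such as an exponential.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, trials = 30, 5_000

def plugin_coverage(draw, cdf):
    """Average population mass inside the plug-in interval (xbar-s, xbar+s)."""
    x = draw(size=(trials, n))
    xbar = x.mean(axis=1)
    s = x.std(ddof=1, axis=1)
    return np.mean(cdf(xbar + s) - cdf(xbar - s))

# Gaussian data: the plug-in interval does deliver roughly 68% coverage.
cov_norm = plugin_coverage(rng.standard_normal, stats.norm.cdf)
# Exponential data: the same recipe misses the nominal 68% badly.
cov_expo = plugin_coverage(rng.exponential, stats.expon.cdf)
print(f"normal:      {cov_norm:.3f}")
print(f"exponential: {cov_expo:.3f}")
```

The recipe is identical in both cases; only the unstated distributional assumption decides whether the claimed 68% is honest.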

When the sample size is lacking, or falls short of satisfying the (theoretical) assumptions, instead of saying 1-σ one should say s, a plug-in error estimator. Without knowing the true distribution (asymptotically, the empirical distribution), the label 1-σ misleads one into believing that the best fit and error bar assure 68% coverage, which is not necessarily true. What is computed/estimated is s, a plug-in estimator, defined for instance via Δchi-square=1. In statistics, the Greek letter σ generally indicates a parameter, not a function of the data (an estimator) such as the sample standard deviation (s), the root mean square error (rmse), or the solution of Δchi-square=1.

Oftentimes I see extended uses of statistics and related terms in the astronomical literature that lead to unnecessary comments and creative interpretations to account for unpleasant numbers. Because of the plug-in nature, the interval may not cover the value expected from physics. This is due to chi-square minimization (the best fit can be biased) and to data quality (data may contain outliers, or be modified by instruments and processing). Unless robust statistics are employed (outliers can shift best fits, and robust statistics are less sensitive to outliers) and calibration uncertainty or other correction tools are suitably implemented, strange intervals need not be followed by creative comments, nor be discarded. Those intervals are by-products of employing plug-in estimators whose statistical properties are unknown at the astronomers' data analysis stage. Instead of imaginative interpretations, one should investigate those plug-in estimators and try to devise/rectify them in order to make sure they lead close to the truth.

For example, instead of the simple average (xbar=f(x_1,…,x_n): the average is a function of the data, just as the chi-square minimization method is another function of the data), whose breakdown point is asymptotically zero and which can be far from the truth, the median (another function of the data) could serve better (its breakdown point is 1/2). We know that chi-square methods are based on the L2 norm (e.g., variations of least square methods). Instead, one can develop methods based on the L1 norm, as in quantile regression or least absolute deviation (LAD, in short; link from wiki). Many statistics are available to work around the shortcomings of popular plug-in estimators when the sampling distribution is not (perfectly) Gaussian or an analytic solution does not exist.
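A tiny illustration of the breakdown-point contrast (the contamination level and outlier values are invented for the example; a full LAD fit would use, e.g., quantile regression at the median):

```python
import numpy as np

rng = np.random.default_rng(3)

# Clean Gaussian sample around 5, plus a handful of gross outliers.
x = np.concatenate([rng.normal(5.0, 1.0, size=95),
                    np.full(5, 100.0)])  # 5% contamination

# The mean (breakdown point 0) is dragged far from the truth;
# the median (breakdown point 1/2) barely moves.
print("mean:  ", x.mean())
print("median:", np.median(x))
```

The same contrast carries over to fitting: least squares inherits the mean's fragility, while LAD inherits the median's resistance to outliers.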
