Cross-validation for model selection

One of the most frequently cited papers in model selection is probably "An Asymptotic Equivalence of Choice of Model by Cross-Validation and Akaike's Criterion" by M. Stone, Journal of the Royal Statistical Society, Series B (Methodological), Vol. 39, No. 1 (1977), pp. 44-47.
(Akaike's 1974 paper, which introduced the Akaike Information Criterion (AIC), is the most often cited paper on the subject of model selection.)

The popularity of AIC comes from its simplicity: by penalizing the maximum log likelihood with the number of model parameters (p), one can choose the model that best describes/generates the data. Nonetheless, we know that AIC has its shortcomings: it assumes all candidate models are nested in each other and come from the same parametric family. For an exponential family, the trace of the product of the score covariance matrix and the inverse Fisher information matrix equals the number of parameters, which naturally raises the question, "What happens when this trace cannot be obtained analytically?"
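For reference, the criterion in its usual form (standard, though not written out above) is

\mathrm{AIC} = -2\ln L(\hat\theta) + 2p,

where L(\hat\theta) is the maximized likelihood and p the number of free parameters; among candidates, the model with the smallest AIC is selected.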

The general form of AIC is TIC (Takeuchi's information criterion; Takeuchi, 1976), in which the penalty term is written as the trace of the product of the score covariance matrix and the inverse Fisher information matrix. Still, I haven't answered the question above.
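In symbols (my notation, following the standard presentation of TIC rather than Takeuchi's original):

\mathrm{TIC} = -2\ln L(\hat\theta) + 2\,\mathrm{tr}\big(\hat{J}\,\hat{I}^{-1}\big),
\qquad
\hat{J} = \frac{1}{n}\sum_{i=1}^{n} s(x_i;\hat\theta)\,s(x_i;\hat\theta)^{\top},
\quad
\hat{I} = -\frac{1}{n}\sum_{i=1}^{n} \frac{\partial^2 \ln f(x_i;\theta)}{\partial\theta\,\partial\theta^{\top}}\Big|_{\theta=\hat\theta},

where s is the score function. When the model is correctly specified, the information identity gives tr(J I^{-1}) = p and TIC reduces to AIC; when the trace has no analytic form it must be estimated, which is exactly the dilemma above.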

I personally think that the trick for avoiding this dilemma is the key content of Stone (1977): cross-validation. Stone proved that choosing a model by its cross-validated log likelihood is asymptotically equivalent to choosing it by AIC, without computing the score function and Fisher information or getting an exact estimate of the number of parameters. Cross-validation makes it possible to obtain penalized maximum log likelihoods across models (penalization is necessary because the parameters are estimated from the same data), so that comparison among models for selection becomes feasible, and it alleviates worries about getting the proper number of parameters (the penalty term).
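As a minimal sketch of the idea (my own toy example, not from Stone's paper; the data and the candidate families are hypothetical), one can score each model by its leave-one-out log likelihood and pick the largest:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.gamma(2.0, 1.5, size=200)  # toy data

def loo_loglike(x, fit, logpdf):
    # Fit the model on n-1 points, score the held-out point, and sum.
    total = 0.0
    for i in range(len(x)):
        params = fit(np.delete(x, i))
        total += logpdf(x[i], params)
    return total

# Candidate models from different parametric families (hypothetical choices)
models = {
    "gamma":   (lambda d: stats.gamma.fit(d, floc=0),
                lambda xi, p: stats.gamma.logpdf(xi, *p)),
    "lognorm": (lambda d: stats.lognorm.fit(d, floc=0),
                lambda xi, p: stats.lognorm.logpdf(xi, *p)),
}
for name, (fit, logpdf) in models.items():
    print(name, loo_loglike(x, fit, logpdf))

No penalty term appears explicitly: holding out each point penalizes overparametrized models automatically, which is the content of Stone's equivalence.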

Numerous tactics are available for the purpose of model selection. Although variable selection (where the candidate models are generally nested) is a very hot topic in statistics these days, and tons of publications can be found, there are not many works that apply resampling methods to model selection. As Stone proved, cross-validation relieves the difficulties of calculating the score function and Fisher information of a model. Until last year I was working on non-nested model selection (selecting the best model from different parametric families) with the jackknife, together with Prof. Babu and Prof. Rao at Penn State (the paper hasn't been submitted yet), based on the finding that the jackknife yields an unbiased estimate of the maximum log likelihood. Despite its high computational cost compared to cross-validation and the jackknife, the bootstrap has also occasionally appeared in model selection.
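For illustration only, here is a textbook jackknife bias correction applied to the per-observation maximized log likelihood (a sketch of the general idea; the actual estimator in the unsubmitted paper may well differ):

import numpy as np
from scipy import stats

def jackknife_loglike(x, fit, mean_loglike):
    # Standard jackknife bias correction, T_jack = n*T - (n-1)*mean(T_(i)),
    # applied to T = per-observation maximized log likelihood, which is
    # biased upward because the parameters are fit to the same data.
    n = len(x)
    t_full = mean_loglike(x, fit(x))
    t_loo = np.array([mean_loglike(np.delete(x, i), fit(np.delete(x, i)))
                      for i in range(n)])
    return n * t_full - (n - 1) * t_loo.mean()

# Toy usage with a gamma fit (hypothetical example)
rng = np.random.default_rng(1)
x = rng.gamma(2.0, 1.5, size=100)
fit = lambda d: stats.gamma.fit(d, floc=0)
mll = lambda d, p: stats.gamma.logpdf(d, *p).mean()
print(jackknife_loglike(x, fit, mll))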

I'm not sure whether cross-validation or the jackknife is a feasible approach to implement in astronomical software packages when they compute fit statistics. They certainly have advantages when it comes to calculating likelihoods, as with the Cash statistic.
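For readers unfamiliar with it, the Cash statistic (Cash 1979) is the Poisson maximum-likelihood fit statistic; up to a term that depends only on the data, it is

C = 2\sum_i \left( m_i - n_i \ln m_i \right),

where n_i are the observed counts in bin i and m_i the counts predicted by the model. Since it is twice a negative log likelihood, it plugs directly into cross-validation-style comparisons.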

[ArXiv] Data-Driven Goodness-of-Fit Tests, Aug. 1, 2007

From arxiv/math.st:0708.0169v1
Data-Driven Goodness-of-Fit Tests by L. Mikhail

Goodness-of-fit tests have been essential in astronomy for validating a chosen physical model against observed data, yet the limits of these tests are rarely considered carefully when the same observed data are also used to estimate the model parameters. Therefore, I thought this paper would be helpful for thinking about the different points of view between astronomers' practice of goodness-of-fit testing and statisticians' construction of such tests. (Warning: the paper is abstract and theoretical.)

The paper begins by presenting two approaches to constructing test statistics: 1. some measure of distance between the theoretical and empirical distributions, like the Cramér-von Mises and the Kolmogorov-Smirnov statistics, and 2. score test statistics, constructed in such a way that the test is asymptotically normal. As the second approach is preferred, the author confines his study to generalizing the theory of score tests. The notion of the Neyman type (NT) test is introduced, with very minimal assumptions shaping the statistics.
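To make the first, distance-based approach concrete (a toy illustration using SciPy, not an example from the paper):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=500)  # toy "observed" data

# Distance between the empirical CDF and a hypothesized normal CDF
print(stats.kstest(x, stats.norm.cdf))          # Kolmogorov-Smirnov
print(stats.cramervonmises(x, stats.norm.cdf))  # Cramer-von Mises

Note that both p-values assume the null distribution is fully specified in advance; estimating parameters from the same data, as in the astronomers' practice mentioned above, invalidates the tabulated null distributions.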

The author discusses statistical inverse problems, or deconvolution problems, arising in physics, seismology, optics, and imaging, where signals are observed through noisy measurements. Under appropriate regularity assumptions, these inverse problems induce Neyman type statistics.
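Schematically (my paraphrase of the standard deconvolution setup, not the paper's notation), one observes

X_i = Y_i + \varepsilon_i, \qquad p_X = p_Y * p_{\varepsilon},

where the noise density p_\varepsilon is known, and one wishes to test a hypothesis about the unobserved density p_Y, of which only the convolved observations X_i are available.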

Other types of NT tests, expressed in terms of score functions, and their consistency are presented in an abstract fashion.
