The AstroStat Slog » hypothesis testing
http://hea-www.harvard.edu/AstroStat/slog
Weaving together Astronomy+Statistics+Computer Science+Engineering+Instrumentation, far beyond the growing borders
Fri, 09 Sep 2011 17:05:33 +0000

[ArXiv] Particle Physics
http://hea-www.harvard.edu/AstroStat/slog/2009/arxiv-particle-physics/
Fri, 20 Feb 2009 23:48:39 +0000, by hlee

[stat.AP:0811.1663]
Open Statistical Issues in Particle Physics by Louis Lyons

My recollection of meeting Prof. L. Lyons is that he is very kind and a good listener. I was delighted to see his introductory article on particle physics and its statistical challenges through an [arxiv:stat] email subscription.

Descriptions of various particles from modern particle physics are briefly given (I like such brevity and conciseness while still delivering the necessities. If you want more physics, look for famous bestselling books like The First Three Minutes, A Brief History of Time, The Elegant Universe, Feynman's books, or undergraduate textbooks on modern physics and particle physics). The Large Hadron Collider (LHC hereafter; LHC-related slog postings: LHC first beam, The Banff challenge, Quote of the week, Phystat-LHC 2008) is introduced along with its statistical challenges from the data collecting/processing perspective, since it is expected to collect 10^10 events. Visit the LHC website to find out more about the LHC.

My one-line summary of the article: solving particle physics problems through hypothesis testing or, more broadly, classical statistical inference. I most enjoyed reading sections 5 and 6, particularly the subsection titled Why 5σ? Here are some excerpts from the article I'd like to share with you:

It is hoped that the approaches mentioned in this article will be interesting or outrageous enough to provoke some Statisticians either to collaborate with Particle Physicists, or to provide them with suggestions for improving their analyses. It is to be noted that the techniques described are simply those used by Particle Physicists; no claim is made that they are necessarily optimal (Personally, I like such openness and candidness.).

… because we really do consider that our data are representative as samples drawn according to the model we are using (decay time distributions often are exponential; the counts in repeated time intervals do follow a Poisson distribution, etc.), and hence we want to use a statistical approach that allows the data “to speak for themselves,” rather than our analysis being dominated by our assumptions and beliefs, as embodied in Bayesian priors.

Because experimental detectors are so expensive to construct, the time-scale over which they are built and operated is so long, and they have to operate under harsh radiation conditions, great care is devoted to their design and construction. This differs from the traditional statistical approach for the design of agricultural tests of different fertilisers, but instead starts with a list of physics issues which the experiment hopes to address. The idea is to design a detector which will provide answers to the physics questions, subject to the constraints imposed by the cost of the planned detectors, their physical and mechanical limitations, and perhaps also the limited available space. (My personal belief is that what separates the physical sciences from other sciences requiring statistical thinking is that uncontrolled circumstances are quite common in physics and astronomy, whereas many statistical methodologies were developed under assumptions of controllable circumstances, traceable subjects, and the possibility of collecting additional samples.)

…that nothing was found, it is more useful to quote an upper limit on the sought-for effect, as this could be useful in ruling out some theories.

… the nuisance parameters arise from the uncertainties in the background rate b and the acceptance ε. These uncertainties are usually quoted as σb and σε, and the question arises of what these errors mean. … they would express the width of the Bayesian posterior or of the frequentist interval obtained for the nuisance parameter. … they may involve Monte Carlo simulations, which have systematic uncertainties as well as statistical errors …

Particle physicists usually convert p into the number of standard deviations σ of a Gaussian distribution, beyond which the one-sided tail area corresponds to p. Thus, 5σ corresponds to a p-value of 3e-7. This is done simply because it provides a number which is easier to remember, and not because Gaussians are relevant for every situation.
Unfortunately, p-values are often misinterpreted as the probability of the theory being true, given the data. It sometimes helps to clarify for colleagues the difference between p(A|B) and p(B|A) by reminding them that the probability of being pregnant, given the fact that you are female, is considerably smaller than the probability of being female, given the fact that you are pregnant.
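Since the excerpt defines nσ via a one-sided Gaussian tail area, the conversion is easy to reproduce. A minimal sketch using only the standard library (the function names are mine):

```python
import math

def p_from_sigma(z):
    """One-sided Gaussian tail area beyond z standard deviations."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def sigma_from_p(p, lo=0.0, hi=40.0, tol=1e-12):
    """Invert p_from_sigma by bisection (p must lie in (0, 0.5])."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p_from_sigma(mid) > p:   # tail area still too big: need larger z
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(p_from_sigma(5.0))       # ~2.87e-7, usually quoted as "3e-7"
print(sigma_from_p(2.87e-7))   # ~5.0
```

The round-trip makes the point of the excerpt: σ is just a memorable re-labeling of a tail probability, not a claim that anything is Gaussian.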

… the situation is much less clear for nuisance parameters, where error estimates may be less rigorous, and their distribution is often assumed to be Gaussian (or truncated Gaussian) by default. The effect of these uncertainties on very small p-values needs to be investigated case-by-case.
We also have to remember that p-values merely test the null hypothesis. A more sensitive way to look for new physics is via the likelihood ratio or the differences in χ2 for the two hypotheses, that is, with and without the new effect. Thus, a very small p-value on its own is usually not enough to make a convincing case for discovery.

If we are in the asymptotic regime, and if the hypotheses are nested, and if the extra parameters of the larger hypothesis are defined under the smaller one, and in that case do not lie on the boundary of their allowed region, then the difference in χ2 should itself be distributed as a χ2, with the number of degrees of freedom equal to the number of extra parameters. (I have seen many papers in astronomy ignoring these warnings when applying likelihood ratio tests.)
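When all of the conditions in this excerpt do hold, the recipe is a one-liner. A sketch for the common case of a single extra parameter (the χ2 values below are made up for illustration):

```python
import math

def chi2_sf_1dof(delta_chi2):
    """Tail probability P(X >= delta_chi2) for X ~ chi^2 with 1 dof.
    Valid only under the nested/asymptotic/non-boundary conditions
    stated in the excerpt."""
    return math.erfc(math.sqrt(delta_chi2 / 2.0))

# e.g. chi2 drops from 112.3 to 103.4 when one extra parameter is freed
p = chi2_sf_1dof(112.3 - 103.4)
print(p)  # ~0.003: suggestive, but far short of the 5-sigma convention
```

For one degree of freedom the χ2 tail reduces to a Gaussian erfc, which is why no special function library is needed; with k extra parameters one would use the regularized incomplete gamma function instead.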

The standard method loved by Particle Physicists (and astronomers alike) is χ2. This, however, is only applicable to binned data (i.e., in a one or more dimensional histogram). Furthermore, it loses its attractive feature that its distribution is model independent when there are not enough data, which is likely to be so in the multi-dimensional case. (High energy astrophysicists deal with low-count data over multi-dimensional parameter spaces; the total number of bins is larger than the number of parameters, but to me, binning/grouping often seems to be done so aggressively to reach a good S/N that detailed information about the parameters in the data gets lost.)

…, the σi are supposed to be the true accuracies of the measurements. Often, all that we have available are estimates of their values (I have also noticed astronomers confusing the true σ with the estimated σ). Problems arise in situations where the error estimate depends on the measured value a (the parameter of interest). For example, in counting experiments with Poisson statistics, it is typical to set the error as the square root of the observed number. Then a downward fluctuation in the observation results in an overestimated weight, and a best-fit is biased downward. If instead the error is estimated as the square root of the expected number a, the combined result is biased upward – the increased error reduces S at large a. (I think astronomers are aware of this problem but have not yet taken action to rectify it. Unfortunately, not all astronomers take the problem seriously, and some blindly apply 3*sqrt(N) as a threshold for 99.7% (two-sided) or 99.9% (one-sided) coverage.)
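The downward bias from sqrt(observed) weights is easy to see in a toy Monte Carlo. The sketch below combines pairs of Poisson counts with inverse-variance weights σi² = ni; all numbers are hypothetical, and the mean is chosen large enough that zero counts (which would break the 1/n weights) effectively never occur:

```python
import math, random

random.seed(42)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for moderate lam."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

mu = 50.0   # true Poisson mean (hypothetical)
m = 5000    # number of pseudo-experiments, each combining two counts
pairs = [(poisson(mu), poisson(mu)) for _ in range(m)]

def combine_sqrt_n(a, b):
    """Inverse-variance combination with sigma_i = sqrt(n_i): the 1/n_i
    weights turn the weighted mean into a harmonic mean."""
    wa, wb = 1.0 / a, 1.0 / b
    return (wa * a + wb * b) / (wa + wb)

biased = sum(combine_sqrt_n(a, b) for a, b in pairs) / m
plain = sum(0.5 * (a + b) for a, b in pairs) / m
print(plain - biased)  # the sqrt(observed) weighting sits ~0.5 counts low
```

Because the weighted combination reduces to a harmonic mean, it sits systematically below the true mean (by about half a count for two measurements of this size), while the plain average does not: exactly the downward bias described in the excerpt.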

Background estimation, particularly when the observed n is less than the expected background b, is discussed in the context of upper limits derived from both statistical streams – Bayesian and frequentist. The statistical focus of particle physicists' concerns is classical inference problems such as hypothesis testing or estimating confidence intervals (these intervals need not be closed) under extreme physical circumstances. The author discusses various approaches, with modern touches from both statistical disciplines, to obtaining upper limits with statistically meaningful and defensible quantification.
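As one concrete example from the Bayesian stream (my illustration, not necessarily the author's preferred recipe): with a flat prior on the signal s ≥ 0 and a known background b, the posterior upper limit reduces to Helene's formula, which needs only Poisson partial sums and a bisection:

```python
import math

def poisson_cdf(n, lam):
    """P(N <= n) for N ~ Poisson(lam)."""
    term, total = math.exp(-lam), 0.0
    for k in range(n + 1):
        total += term
        term *= lam / (k + 1)
    return total

def bayes_upper_limit(n, b, cl=0.90):
    """Bayesian upper limit on a Poisson signal s, with known background b
    and a flat prior on s >= 0 (Helene's formula): solve for the s_up where
    P(n, s_up + b) = (1 - cl) * P(n, b)."""
    target = (1.0 - cl) * poisson_cdf(n, b)
    lo, hi = 0.0, 100.0 + 10.0 * (n + b)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n, mid + b) > target:  # limit still too low
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(bayes_upper_limit(0, 3.0))  # ~2.30 even when n < b
```

For n = 0 the 90% limit is 2.30 events regardless of b — a well-known (and sometimes criticized) feature of the flat prior, and one illustration of why the n < b regime generates so much discussion between the two schools.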

As described, many physicists labor at the grand challenge of finding a new particle, but this challenge is put concisely in statistical terms: p-values, upper limits, null hypotheses, test statistics, and confidence intervals with peculiar nuisance parameters, or rather a lack of straightforward priors, all of which lead to lengthy discussions among scientists and produce various research papers. In contrast, the challenges astronomers face are not just establishing the existence of new objects but going beyond that, or juxtaposing problems. Astronomers like to parameterize their observations by selecting suitable source models, from which the collected photons are the result of modifications caused by their journey and the obstacles in their path. Such parameterization allows them to explain the sources driving photon emission/absorption. It also enables them to predict other important features: temperature to luminosity, magnitudes to metallicity, and many other rules of conversion.

Due to their different objectives (one is finding a needle that looks like hay in a haystack, and the other is defining photon-generating mechanisms, which may lead to finding a new kind of celestial object), this article may not interest astronomers. Yet, given the common ground of physics and statistics, it offers a dash of enlightenment on the various statistical methods applied to physical data analysis toward the goal of refining physics. My posts on coverage and the references therein might be helpful: interval estimation in exponential families and [arxiv] classical confidence interval.

From their papers, I felt that some astronomers are not aware of the problems with χ2 minimization, nor of the underlying assumptions of the method. This paper conveys some of the dangers of χ2 with real examples from physics, which are more convincing for astronomers than statisticians' hypothetical examples built on controlled Monte Carlo simulations.

And there are more reasons to check this paper out!

Prof. Brad Efron visits Harvard
http://hea-www.harvard.edu/AstroStat/slog/2008/prof-brad-efron-visits-harvard/
Tue, 25 Mar 2008 00:03:49 +0000, by hlee

Bradley Efron, Stanford University
11:00 AM, Friday, April 4, 2008
Sever Hall Rm. 103
Title: SIMULTANEOUS INFERENCE: WHEN SHOULD HYPOTHESIS TESTING PROBLEMS BE COMBINED
Its abstract and other information are at http://www.stat.harvard.edu/Colloquia_Content/Efron08.pdf

Recently awarded the National Medal of Science, Statistics Professor Efron of Stanford University has played a major role in many groundbreaking interdisciplinary collaborations, including astronomy. A quote from his website:

I like working on applied and theoretical problems at the same time and one thing nice about statistics is that you can be useful in a wide variety of areas. So my current applications include biostatistics and also astrophysical applications. The surprising thing is that the methods used are similar in both areas. I recently gave a talk called Astrophysics and Biostatistics–the odd couple at Penn State that made this point.

This seminar will help in grasping his brilliant insights on applying statistics to other disciplines, as well as his ingenious statistical mind. In particular, multiple testing is a growing interest among high energy physicists.
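Since the talk is about simultaneous inference, one standard device in that area (my illustration only; not necessarily what the talk covers) is the Benjamini-Hochberg step-up procedure, which controls the false discovery rate across many hypothesis tests:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Indices of hypotheses rejected at false-discovery rate alpha
    (Benjamini-Hochberg step-up procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank  # step-up: keep the largest qualifying rank
    return sorted(order[:k])

print(benjamini_hochberg([0.01, 0.02, 0.03, 0.04, 0.5]))  # [0, 1, 2, 3]
```

Unlike a Bonferroni correction, which divides alpha by the full number of tests, the step-up comparison against alpha·rank/m rejects more hypotheses while still bounding the expected fraction of false discoveries.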

Click the Harvard Statistics Colloquium Series for more information.

Non-nested hypothesis tests
http://hea-www.harvard.edu/AstroStat/slog/2008/non-nested-hypothesis-tests/
Wed, 20 Feb 2008 03:15:38 +0000, by hlee

I was reading [1]. I must say that I do not know of Bayesian methods to cope with model misspecification, tests with an unknown true model, or tests for non-nested hypotheses, apart from the Bayes factor (which depends heavily on the choice of priors). Nonetheless, the zeal among economists for testing non-nested models might help astronomers move beyond testing nested hypotheses with the F statistic.

Knowing that photons follow a Poisson distribution, that there are limited numbers of non-nested candidate models (one does have physics to constrain models), that generalized linear models (GLMs) could extend the regression models discussed in [1] to account for Poisson behavior, and that there has not been much collaboration between economists and astronomers, citing some papers from economics journals may help astronomers handle non-nested models and test them (and obtain parameter estimates with proper error bars).

References

[1] MacKinnon, J.G. (1983). Model Specification Tests Against Non-Nested Alternatives, Queen's Economics Department Working Paper, No. 573

[2] Cox, D.R. (1961). Tests of separate families of hypotheses, Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, 1, pp. 105-123

[3] Cox, D.R. (1962). Further results on tests of separate families of hypotheses, JRSS, B, 24, pp. 406-424

[4] White, H. (1982). Maximum likelihood estimation of misspecified models, Econometrica, 50, pp. 1-25

[5] Nishii, R. (1988). Maximum likelihood principle and model selection when the true model is unspecified, Journal of Multivariate Analysis, 27(2), pp. 392-403

[6] Vuong, Q.H. (1989). Likelihood ratio tests for model selection and non-nested hypotheses, Econometrica, 57(2), pp. 307-333

[7] Sin, C. & White, H. (1996). Information criteria for selecting possibly misspecified parametric models, Journal of Econometrics, 71, pp. 207-225

The non-nested hypothesis testing problem evolved from [2] & [3]. [4], [5], [6], & [7] are well-cited papers on the topic, or ones I have read. :) Please advise me if you have more information regarding non-nested hypothesis tests in astronomy.

[ArXiv] 3rd week, Jan. 2008
http://hea-www.harvard.edu/AstroStat/slog/2008/arxiv-3rd-week-jan-2008/
Fri, 18 Jan 2008 18:24:23 +0000, by hlee

Seven preprints were chosen this week and two mentioned model selection.

  • [astro-ph:0801.2186] Extrasolar planet detection by binary stellar eclipse timing: evidence for a third body around CM Draconis H. J. Deeg (it discusses model selection in section 4.4)
  • [astro-ph:0801.2156] Modeling a Maunder Minimum A. Brandenburg & E. A. Spiegel (it could be useful for those who do sunspot cycle modeling)
  • [astro-ph:0801.1914] A closer look at the indications of q-generalized Central Limit Theorem behavior in quasi-stationary states of the HMF model A. Pluchino, A. Rapisarda, & C. Tsallis
  • [astro-ph:0801.2383] Observational Constraints on the Dependence of Radio-Quiet Quasar X-ray Emission on Black Hole Mass and Accretion Rate B. C. Kelly et al.
  • [astro-ph:0801.2410] Finding Galaxy Groups In Photometric Redshift Space: the Probability Friends-of-Friends (pFoF) Algorithm I. Li & H. K. C. Yee
  • [astro-ph:0801.2591] Characterizing the Orbital Eccentricities of Transiting Extrasolar Planets with Photometric Observations E. B. Ford, S. N. Quinn, & D. Veras
  • [astro-ph:0801.2598] Is the anti-correlation between the X-ray variability amplitude and black hole mass of AGNs intrinsic? Y. Liu & S. N. Zhang