From Terence’s Stuff: You want proof?

For the whole article, please see p.11 of this pdf file (IMS Bulletin, v.38 (10)).

It is widely believed that under some fairly general conditions, MLEs are consistent, asymptotically normal, and efficient. Stephen Stigler has elegantly documented some of Fisher’s troubles when he wanted a proof. You want proof? Of course you can pile on assumptions so that the proof is easy. If checking your assumptions in any particular case is harder than checking the conclusion in that case, you will have joined a great tradition.
I used to think that efficiency was a thing for the theorists (I can live with inefficiency), that normality was a thing of the past (we can simulate), but that—in spite of Ralph Waldo Emerson—consistency is a thing we should demand of any statistical procedure. Not any more. These days we can simulate in and around the conditions of our data, and learn whether a novel procedure behaves as it should in that context. If it does, we might just believe the results of its application to our data. Other people’s data? That’s their simulation, their part of the parameter space, their problem. Maybe some theorist will take up the challenge, and study the procedure, and produce something useful. But if we’re still waiting for that with MLEs in general (canonical exponential families are in good shape), I wouldn’t hold my breath for this novel procedure. By the time a few people have tried the new procedure, each time checking its suitability by simulation in their context, we will have built up a proof by simulation. Shocking? Of course.
Some time into my career as a statistician, I noticed that I don’t check the conditions of a theorem before I use some model or method with a set of data. I think in statistics we need derivations, not proofs. That is, lines of reasoning from some assumptions to a formula, or a procedure, which may or may not have certain properties in a given context, but which, all going well, might provide some insight. The evidence that this might be the case can be mathematical, not necessarily with epsilon-delta rigour, simulation, or just verbal. Call this “a statistician’s proof”. This is what I do these days. Should I be kicked out of the IMS?

After reading a good deal of the astronomy literature, I have developed the notion that astronomers like to use maximum likelihood as a robust alternative to chi-square minimization when fitting astrophysical models with parameters. I’m not sure it is truly robust, because few astronomy papers list the assumptions and conditions behind their MLEs.
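
As a toy illustration (my own sketch, not taken from any particular paper), consider fitting a constant Poisson rate to low-count bins. The Poisson MLE of the rate is simply the sample mean, while Neyman’s chi-square, which weights each bin by its observed count, is known to be biased low when counts are small. The zero-count handling below is just one common convention, and the numbers are only indicative, but a short simulation makes the difference, and the role of the assumptions, visible:

```python
import numpy as np

rng = np.random.default_rng(42)
mu_true = 3.0            # true mean counts per bin (low-count regime)
n_bins, n_sims = 50, 2000

mle_est, chi2_est = [], []
for _ in range(n_sims):
    counts = rng.poisson(mu_true, size=n_bins)
    # Poisson MLE of a constant rate: the sample mean of the bin counts
    mle_est.append(counts.mean())
    # Neyman chi-square: minimize sum_i (n_i - mu)^2 / n_i, i.e. weight bins by 1/n_i;
    # zero-count bins get weight 1 here, which is only one of several conventions
    w = 1.0 / np.clip(counts, 1, None)
    chi2_est.append(np.sum(w * counts) / np.sum(w))

print(f"true rate        : {mu_true:.3f}")
print(f"Poisson MLE      : {np.mean(mle_est):.3f}")
print(f"Neyman chi-square: {np.mean(chi2_est):.3f}   (biased low at low counts)")
```

Run under conditions like one’s own data, a check of this sort says more than simply calling either fit “robust”; it also makes plain what the MLE’s good behavior depends on, namely the Poisson assumption itself.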

I am often confused by their target parameters. They are not parameters in statistical models, and they do not necessarily satisfy the properties required by probability theory. I often fail to find the statistical properties of these parameters that the estimation relies on. It is rare to see statistical modeling procedures checked against their assumptions in the way Prof. Speed describes. Even the derivations fall a bit short of what could be called “rigorous statistical analysis.” (At the least, I wish to see a sentence such as “It is trivial to derive the estimator with such-and-such properties.”)

A common claim I encounter in the astronomical literature is that the authors’ strategy is statistically rigorous, superior, or powerful, without any demonstration of why or how it is rigorous, superior, or powerful. I have tried to convey these pitfalls and the general restrictions on the statistical methods they employ. Without such a demonstration, their strategy is not “statistically robust,” nor “statistically powerful,” nor “statistically rigorous.” Statisticians have their own measures of “superiority” for discussing improvements in their statistics, analysis strategies, and methodology.

This has not been easy, since I never intend case-specific fault-picking every time I see these statements. A method believed to be robust can turn out not to be robust with your data and models. With simulations and derivations, together with a sufficient description of the conditions, your excellent method can be presented with statistical rigor.
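
As one minimal sketch of what such a simulation check might look like (the models and sample sizes here are my own arbitrary choices, not drawn from any paper), one can ask whether a nominal 95% normal-theory interval for a mean keeps its coverage both under the assumed model and under a heavier-tailed alternative with the same mean:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sims, mu_true = 30, 5000, 1.0

def coverage(sampler):
    """Fraction of nominal 95% intervals  xbar +/- 1.96*s/sqrt(n)  covering mu_true."""
    hits = 0
    for _ in range(n_sims):
        x = sampler(n)
        half = 1.96 * x.std(ddof=1) / np.sqrt(n)
        hits += abs(x.mean() - mu_true) <= half
    return hits / n_sims

# Assumed model: exponential with mean 1.  Alternative: lognormal, also with mean 1.
print("coverage under the assumed model :", coverage(lambda m: rng.exponential(mu_true, m)))
print("coverage under the alternative   :", coverage(lambda m: rng.lognormal(-0.5, 1.0, m)))
```

If the coverage falls well below its nominal level under a plausible alternative, “robust” is probably not the right word; if it holds up, the simulation at least documents the conditions under which the claim was checked.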

Under similar circumstances for statistical modeling and data analysis, there is a trade-off among statistical methodologies between robustness and the conditions they require. Before a particular method is declared robust or rigid, powerful or insensitive, efficient or inefficient, and so on, a derivation, a proof, or simulation studies are expected; only then can the analysis and procedure be called statistically excellent.

Before this gets too long, I’d like to say that statistics has traditions for establishing that methods work, via proofs, simulations, or derivations. Each has its foundations: the assumptions and conditions under which a method may be called “robust,” “efficient,” “powerful,” or “consistent.” When new statistics are introduced in the astronomical literature, I hope to see some additional effort to match the statistical conditions to the properties of the target data, along with some statistical rigor (derivations or simulations), before they are called “robust,” “powerful,” or “superior.”

One Comment
  1. Raoul LePage:

    I’ve long been cautious about any model employing hypotheses of randomness in model components, particularly the backbone method of multiple linear regression, popular accounts of which may have encouraged the use of such hypotheses elsewhere. One particular liability of models is they may suggest particular classes of sample points are all that are needed. Examples include regression models positing iid errors that are much the same no matter which sample points are included in the regression design. Something similar is seen in time series and elsewhere. If the model is wrong on that point it may have encouraged a laxity in data selection from which there is no recovery once the data are in. I call this “brittle” modeling. Statistics is yet young. Stay tuned for models that are more in the spirit of sampling theory where inference is rooted in randomness introduced by the experimenter rather than hypothesized (a different sample space) and no model is correct although some may be estimated to provide better population description than others. Simpler may for some purposes prove better in the long run.

    08-12-2011, 1:44 pm