Posts tagged ‘Bayesian’

coin toss with a twist

Here’s a cool illustration of how to use Bayesian analysis in the limit of very little data, when inferences are necessarily dominated by the prior. The question, via Tom Moertel, is: suppose I tell you that a coin always comes up heads, and you proceed to toss it and it does come up heads — how much more do you believe me now?

He also has the answer worked out in detail.

(h/t Doug Burke)
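A minimal sketch of the update (my own toy version, not necessarily Moertel's exact setup): treat the claim as a hypothesis with P(heads | claim true) = 1, and assume that if the claim is false the coin is fair. A single observed head then doubles the odds in favor of the claim.

```python
def updated_belief(prior, p_heads_if_false=0.5):
    """Posterior probability that the claim ("always heads") is true,
    after observing one head. P(heads | claim true) = 1 by assumption."""
    num = prior * 1.0                             # P(heads|true) * P(true)
    den = num + (1.0 - prior) * p_heads_if_false  # total P(heads)
    return num / den

# With a 50/50 prior, one head takes you from 0.5 to 2/3:
print(updated_belief(0.5))   # 0.666...
```

The answer depends entirely on what the alternative hypothesis says about the coin, which is the point of the exercise: with almost no data, the prior (and the choice of alternatives) dominates.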

[Q] Objectivity and Frequentist Statistics

Is there an objective method to combine measurements of the same quantity obtained with different instruments?

Suppose you have a set of N1 measurements obtained with one detector, and another set of N2 measurements obtained with a second detector. And let’s say you wanted something as simple as an estimate of the mean of the quantity (say the intensity) being measured. Let us further stipulate that the measurement errors of each of the points are similar in magnitude and that neither instrument displays any odd behavior. How does one combine the two datasets without appealing to subjective biases about the reliability or otherwise of the two instruments? Continue reading ‘[Q] Objectivity and Frequentist Statistics’ »
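One standard frequentist recipe (a sketch of one possible answer, not the post's): compute each detector's sample mean and its standard error, then take the inverse-variance weighted average. The simulated numbers below are mine.

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(10.0, 2.0, size=50)   # N1 points from detector 1
x2 = rng.normal(10.0, 2.0, size=80)   # N2 points from detector 2

m1, m2 = x1.mean(), x2.mean()
v1 = x1.var(ddof=1) / x1.size         # variance of the mean, detector 1
v2 = x2.var(ddof=1) / x2.size         # variance of the mean, detector 2
w1, w2 = 1.0 / v1, 1.0 / v2           # inverse-variance weights

combined = (w1 * m1 + w2 * m2) / (w1 + w2)   # weighted mean
err = (w1 + w2) ** -0.5                       # its standard error
```

Note that when the per-point errors really are equal, this reduces to simply pooling all N1 + N2 points; the "subjective" part hides in the error model, not in the arithmetic.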

Quote of the Date

Really, there is no point in extracting a sentence here and there, go read the whole thing:

Why I don’t like Bayesian Statistics

- Andrew Gelman

Oh, alright, here’s one:

I can’t keep track of what all those Bayesians are doing nowadays–unfortunately, all sorts of people are being seduced by the promises of automatic inference through the “magic of MCMC”–but I wish they would all just stop already and get back to doing statistics the way it should be done, back in the old days when a p-value stood for something, when a confidence interval meant what it said, and statistical bias was something to eliminate, not something to embrace.

Continue reading ‘Quote of the Date’ »

Books – a boring title

I have been noticing misconceptions about statistics and the evolution of statistical nomenclature in astronomy, which I believe can be attributed to the lack of statistical references within the astronomical community. There are textbooks designed for junior/senior science and engineering students that are likely unknown to astronomers; to my knowledge, however, these are not suitable examples. Although I never expect astronomers to work through standard graduate (mathematical) statistics textbooks, I do wish astronomers would go beyond Numerical Recipes (W. H. Press, S. A. Teukolsky, W. T. Vetterling, & B. P. Flannery) and Data Reduction and Error Analysis for the Physical Sciences (P. R. Bevington & D. K. Robinson). Here are some good ones written by astronomers, engineers, and statisticians: Continue reading ‘Books – a boring title’ »

[ArXiv] 3rd week, Jan. 2008

Seven preprints were chosen this week; two of them discuss model selection. Continue reading ‘[ArXiv] 3rd week, Jan. 2008’ »

[Quote] Bootstrap and MCMC

The Bootstrap and Modern Statistics, Brad Efron (2000), JASA, Vol. 95 (452), pp. 1293-1296.

If the bootstrap is an automatic processor for frequentist inference, then MCMC is its Bayesian counterpart.

Continue reading ‘[Quote] Bootstrap and MCMC’ »
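To make the "automatic processor" idea concrete (my sketch, not an example from Efron's article): a percentile bootstrap confidence interval for a mean requires no analytic error propagation at all, only resampling.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(2.0, size=100)   # some skewed sample

# Resample with replacement and recompute the statistic each time:
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])   # 95% percentile interval
```

The same loop works for a median, a correlation, or any other statistic, which is what makes the bootstrap "automatic" in Efron's sense.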

[ArXiv] 3rd week, Dec. 2007

The paper about the Banff challenge [0712.2708] and the statistics tutorial for cosmologists [0712.3028] are my personal recommendations from this week’s [arXiv] list. In particular, I’d like to quote from Licia Verde’s [astro-ph:0712.3028]:

In general, Cosmologists are Bayesians and High Energy Physicists are Frequentists.

I thought it was the opposite. By the way, if you crave more papers, click Continue reading ‘[ArXiv] 3rd week, Dec. 2007’ »

[ArXiv] 4th week, Oct. 2007

I hope a paper or two from arXiv catches your attention and stimulates your thoughts on astrostatistics.
Continue reading ‘[ArXiv] 4th week, Oct. 2007’ »

ab posteriori ad priori

A great advantage of Bayesian analysis, they say, is the ability to propagate the posterior. That is, if we derive a posterior probability distribution function for a parameter using one dataset, we can apply that as the prior when a new dataset comes along, and thereby improve our estimates of the parameter and shrink the error bars.

But how exactly does it work? I asked this of Tom Loredo, in the context of some strange behavior of sequential applications of BEHR that Ian Evans had noticed: when BEHR was run on a series of datasets, using as prior the posterior from the preceding dataset, the result appeared to depend on the order in which the datasets were considered. (As it happens, this arose from approximating the posterior distribution before passing it on as the prior to the next stage, a feature that has since been corrected.) This is what he said:

Yes, this is a simple theorem. Suppose you have two data sets, D1 and D2, hypotheses H, and background info (model, etc.) I. Considering D2 to be the new piece of info, Bayes’s theorem is:

[1]

p(H|D1,D2) = p(H|D1) p(D2|H, D1)            ||  I
             -------------------
                    p(D2|D1)

where the “|| I” on the right is the “Skilling conditional” indicating that all the probabilities share an “I” on the right of the conditioning solidus (in fact, they also share a D1).

We can instead consider D1 to be the new piece of info; BT then reads:

[2]

p(H|D1,D2) = p(H|D2) p(D1|H, D2)            ||  I
             -------------------
                    p(D1|D2)

Now go back to [1], and use BT on the p(H|D1) factor:

p(H|D1,D2) = p(H) p(D1|H) p(D2|H, D1)            ||  I
             ------------------------
                    p(D1) p(D2|D1)

           = p(H, D1, D2)
             ------------      (by the product rule)
                p(D1,D2)

Do the same to [2]: use BT on the p(H|D2) factor:

p(H|D1,D2) = p(H) p(D2|H) p(D1|H, D2)            ||  I
             ------------------------
                    p(D2) p(D1|D2)

           = p(H, D1, D2)
             ------------      (by the product rule)
                p(D1,D2)

So the results from the two orderings are the same. In fact, in the Cox-Jaynes approach, the “axioms” of probability aren’t axioms, but get derived from desiderata that guarantee this kind of internal consistency of one’s calculations. So this is a very fundamental symmetry.
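The symmetry is easy to check numerically. Here is a toy version (my sketch, using a conjugate Beta-Binomial model so the sequential updates are exact, with no approximation of the intermediate posterior — which is precisely what tripped up the sequential BEHR runs):

```python
# Beta(alpha, beta) prior on a coin's heads probability; each Bayes step
# just adds the observed counts (conjugacy), so nothing is approximated.
def update(alpha, beta, heads, tails):
    return alpha + heads, beta + tails

prior = (1.0, 1.0)                 # flat Beta(1,1) prior
D1, D2 = (7, 3), (2, 8)            # (heads, tails) in each data set

post_12 = update(*update(*prior, *D1), *D2)   # D1 first, then D2
post_21 = update(*update(*prior, *D2), *D1)   # D2 first, then D1
print(post_12 == post_21)   # True
```

Conditional independence of D1 and D2 given the heads probability is what lets each step use p(D2|H) alone, per the caveat about dependence above.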

Note that you have to worry about possible dependence between the data (i.e., p(D2|H, D1) appears in [1], not just p(D2|H)). In practice, separate data are often independent (conditional on H), so p(D2|H, D1) = p(D2|H) (i.e., if you consider H as specified, then D1 tells you nothing about D2 that you don’t already know from H). This is the case, e.g., for basic iid normal data, or Poisson counts. But even in these cases dependences might arise, e.g., if there are nuisance parameters that are common for the two data sets (if you try to combine the info by multiplying *marginalized* posteriors, you may get into trouble; you may need to marginalize *after* multiplying if nuisance parameters are shared, or account for dependence some other way).

What if you had 3, 4, …, N observations? Does the order in which you apply BT affect the results?

No, as long as you use BT correctly and don’t ignore any dependences that might arise.

If not, is there a prescription on what is the Right Thing [TM] to do?

Always obey the laws of probability theory! 9-)
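The shared-nuisance-parameter caveat deserves a numerical illustration (my own toy model, not from the exchange): two Poisson counts n1 and n2 sharing a common background b. Multiplying the b-marginalized posteriors gives a different, and wrong, answer from marginalizing after multiplying.

```python
import math
import numpy as np

def pois(n, mu):
    """Poisson pmf on an array of means mu."""
    return np.exp(-mu) * mu**n / math.factorial(n)

n1, n2 = 3, 9
s = np.linspace(0.0, 20.0, 401)        # source intensity grid
b = np.linspace(0.0, 10.0, 201)        # shared background grid
S, B = np.meshgrid(s, b, indexing="ij")

# Right: multiply the likelihoods, THEN marginalize over b.
p_joint = (pois(n1, S + B) * pois(n2, S + B)).sum(axis=1)
p_joint /= p_joint.sum()

# Wrong: marginalize each posterior over b, THEN multiply.
p_wrong = pois(n1, S + B).sum(axis=1) * pois(n2, S + B).sum(axis=1)
p_wrong /= p_wrong.sum()

print(np.abs(p_joint - p_wrong).max())   # clearly nonzero
```

If the backgrounds were independent between the two datasets, the two computations would agree; it is the shared b that makes the order of marginalization matter.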

When you observed zero counts, you didn’t not observe any counts

Dong-Woo, who has been playing with BEHR, noticed that the confidence bounds quoted on the source intensities seem to be unchanged when the source counts are zero, regardless of what the background counts are set to. That is, p(s|NS,NB) is invariant when NS=0, for any value of NB. This seems a bit odd because, naively, one expects that as NB increases, it ought to become more and more likely that s is close to 0. Continue reading ‘When you observed zero counts, you didn’t not observe any counts’ »

[ArXiv] Bayesian Star Formation Study, July 13, 2007

From arxiv/astro-ph:0707.2064v1
Star Formation via the Little Guy: A Bayesian Study of Ultracool Dwarf Imaging Surveys for Companions by P. R. Allen.

I would rather skip the technical details on ultracool dwarfs and binary stars, the reviews of star formation studies (such as the initial mass function, IMF), and the astronomical survey studies, all of which Allen explains fairly well in arxiv/astro-ph:0707.2064v1. Instead, I want to emphasize that, based on simple Bayes’ rule and careful set-ups of the likelihoods and priors according to the data (ultracool dwarfs), quite informative conclusions were drawn:
Continue reading ‘[ArXiv] Bayesian Star Formation Study, July 13, 2007’ »

Is 3sigma the same as 3*1sigma?

Sometime early this year, Jeremy Drake asked this innocuous sounding question in the context of determining the bounds on the amplitude of an absorption line: Is the 3sigma error bar the same length as 3 times the 1sigma error bar?

In other words, if he measured the 99.7% confidence range for his model parameter, would it always be thrice as large as the nominal 1sigma confidence range? The answer is complicated and depends on whom you ask: Frequentists will say “yes, of course!”, Likelihoodists will say “Maybe, yeah, er, depends”, and Bayesians will say “sigma? what’s that?” So I think it would be useful to expound on this a bit more for astronomers, whose mental processes are generally Bayesian but whose computational tools are mostly Frequentist.
Continue reading ‘Is 3sigma the same as 3*1sigma?’ »
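A quick numerical illustration of the "depends" answer (my sketch, not from the post): for a Gaussian likelihood the Δχ² = 9 bound is exactly three times the Δχ² = 1 bound, but for a skewed likelihood, such as a Poisson mean with 3 observed counts, it is not.

```python
import numpy as np

n = 3                                         # observed counts
mu = np.linspace(0.01, 20.0, 20000)           # grid for the Poisson mean
chi2 = 2.0 * (mu - n + n * np.log(n / mu))    # -2 ln(L/L_max) for Poisson

def bound(delta):
    """Upper edge of the {mu : -2 ln(L/L_max) <= delta} interval."""
    return mu[chi2 <= delta].max()

up1 = bound(1.0) - n     # "1 sigma" upper half-width
up3 = bound(9.0) - n     # "3 sigma" upper half-width
print(up3, 3 * up1)      # noticeably different
```

When -2 ln L is a symmetric parabola (the Gaussian case), the ratio is exactly 3, which is why Frequentists with Gaussian habits answer "of course".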