*Although the given expected background count is 0.1, the actual number of background counts should be an integer (am I thinking right?), and I believe the law of total probability is needed: P(X >= 5 | Background) = \sum_{BC=0}^{\infty} P(X >= 5 | B, BC) P(BC).*

This is along the lines of what a Bayesian calculation does. You can do it either with a known background rate or while accounting for uncertainty in it; either way, the marginal likelihood for the signal strength ends up being a sum over terms that condition on a particular number of b.g. counts, appropriately weighted. (This is one of the examples I present in my CASt summer school lectures.) I think the “bayes” option for the fit statistic in Sherpa implements this (it is based on code I provided to Peter Freeman), but I don’t know whether it is separately exposed for source-significance calculations.
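As a concrete illustration of that sum (a minimal pure-Python sketch, not the Sherpa implementation; the function names and the signal mean `s` are mine, for illustration only): if the observed counts n are the sum of independent Poisson signal (mean s) and background (mean mu_b) contributions, then conditioning on each possible number of background counts b and weighting by its Poisson probability recovers the familiar single-Poisson likelihood with mean s + mu_b:

```python
import math

def pois_pmf(k, mu):
    # Poisson probability mass function P(X = k | mu).
    return math.exp(-mu) * mu**k / math.factorial(k)

def likelihood_by_bg_partition(n, s, mu_b):
    # Total-probability sum over the (unobserved) number of background
    # counts b = 0..n, each term weighted by its Poisson probability.
    return sum(pois_pmf(b, mu_b) * pois_pmf(n - b, s) for b in range(n + 1))

# The sum reproduces Pois(n; s + mu_b), since the sum of independent
# Poisson variables is Poisson with the summed mean.
direct = pois_pmf(5, 2.0 + 0.1)
partitioned = likelihood_by_bg_partition(5, 2.0, 0.1)
```

With an uncertain background rate you would additionally integrate each term over a prior for mu_b, but the conditioning structure stays the same.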

There is some frequentist work with a similar flavor (conditioning on the b.g. counts being no larger than the observed total number of counts); Michael Woodroofe did work along these lines.

I run the simulations in Python, using numpy.random.poisson() to draw the simulated background counts and the incomplete gamma function to calculate the significance for each simulation. I also plotted the distribution of my p-values…

Another trick here, in addition to the background fluctuations, is to also use the prior knowledge that the source sits at the location of the detected photons. How do we include this prior information in the calculated p-values?

PS: I used a PINTofALE IDL function called detect_limit() ( http://hea-www.harvard.edu/PINTofALE/doc/PoA.html#DETECT_LIMIT ) to get those numbers.
