#### my first AAS. VI. Normalization

One realization of mine during the meeting was related to a cultural difference, so this post is not tied to any particular presentation at the 212th AAS. Please correct me if you find wrong statements. I cannot cover every perspective from both disciplines, but I think there are two distinct fashions in practicing normalization.

$$\frac{1}{N(\theta)}\int_{\Omega} f(x;\theta)dx=1$$
$$\frac{1}{N(x)}\int_{\Theta} f(x;\theta)d\theta=1$$
If you are Bayesian, Θ is the focus; otherwise, Ω. Regardless, finding the N that satisfies the relations above is called normalization. And the difference between astronomers and statisticians is how Θ or Ω is treated.

For astronomers, in general, the integration runs from the observed minimum (or zero, depending on what physics tells you) to the observed maximum. Statisticians, generally, integrate over the whole space on which the measure is defined; for example, if f(x;θ) has the form of a Gaussian distribution, that is (-∞, ∞).
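As a toy numerical sketch of these two conventions (my own illustration; the "observed" range [0.5, 3.0] is hypothetical, not from any dataset), the normalization constant N for a Gaussian-shaped f depends on where you integrate:

```python
# Toy sketch of the two normalization conventions (my own illustration;
# the observed range [0.5, 3.0] below is hypothetical).
import math

def gaussian(x, mu=0.0, sigma=1.0):
    # Gaussian-shaped f(x; mu, sigma)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def trapezoid(f, a, b, n=20000):
    """Simple trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# Statistician's convention: integrate over the whole real line
# ([-10, 10] is numerically indistinguishable from (-inf, inf) here).
N_full = trapezoid(gaussian, -10.0, 10.0)

# Astronomer's convention: integrate from observed minimum to maximum.
N_obs = trapezoid(gaussian, 0.5, 3.0)

print(N_full)   # ~1.0
print(N_obs)    # ~0.307: dividing f by this N yields a different density
```

The two constants differ by a factor of roughly three here, so f/N is a genuinely different function under the two conventions, not just a rescaled copy.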

This difference arises from the two viewpoints toward f(x;θ)/N (hereafter, f). For statisticians, the integration occurs over a properly defined measure space and f is a proper density function. For astronomers, the integration happens over a physically meaningful space and f is a viable model subject to the laws of physics.

However, I want to warn astronomers about carrying an f defined on a physically meaningful space over into statistical inference. Sometimes statistical inference is performed on f even though this f is the result of fitting, not a probability density function, because the astronomer's f was not defined on a measure-theoretic footing. It is clear that the variance of a truncated normal distribution differs from that of the regular normal distribution. This gives a different confidence interval at a given confidence level, and the size of the interval, which astronomers care so much about, will vary.
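To put a number on the truncated-versus-regular variance point (my own sketch, using the standard closed-form expression for the moments of a truncated standard normal; the truncation range [-1, 1] is hypothetical):

```python
# Variance of a standard normal truncated to [a, b] (closed-form result),
# compared with the untruncated variance of 1.  My own illustration;
# the truncation range [-1, 1] is a hypothetical example.
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncated_var(a, b):
    """Variance of N(0,1) truncated to [a, b]."""
    Z = Phi(b) - Phi(a)                      # probability mass kept
    m = (phi(a) - phi(b)) / Z                # mean of the truncated law
    return 1.0 + (a * phi(a) - b * phi(b)) / Z - m * m

print(truncated_var(-1.0, 1.0))   # ~0.291, versus 1.0 for the full normal
```

With this truncation the variance drops to about 0.29, so an interval built from the truncated density is markedly narrower than the full-normal one at the same confidence level.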

The astronomers' f need not be a pmf or pdf unless f is defined on a proper probability measure space. Without checking whether the normalized f is measurable, often the second derivative of f is taken to obtain a Fisher information or a covariance matrix, from which error bars at a given confidence level are built. Because astronomers' f may not satisfy the basic probability axioms, the actual coverage can fall short of the nominal level that one likes to compare with other results (in my opinion, this is the primary reason for claims of improved results in astronomical papers thanks to narrow intervals; however, one cannot claim such a victory because the underlying assumptions are inconsistent).
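A Monte Carlo sketch of that undercoverage (my own toy, not from the post): data come from N(0,1) but only values inside the hypothetical observed range [-1, 1] are kept; an analyst who fits a full normal to the kept values gets a σ that is too small, and the resulting nominal 95% interval undercovers badly for new, untruncated draws.

```python
# Toy undercoverage demo (my own illustration; the truncation to
# [-1, 1] is a hypothetical "observed range").
import random
import statistics

random.seed(42)

# Observed, truncated sample: draws from N(0,1) kept only inside [-1, 1].
obs = []
while len(obs) < 20000:
    x = random.gauss(0.0, 1.0)
    if -1.0 <= x <= 1.0:
        obs.append(x)

mu_hat = statistics.fmean(obs)
sigma_hat = statistics.pstdev(obs)    # ~0.54, far below the true sigma = 1

# Nominal 95% interval under the (wrong) full-normal model.
lo, hi = mu_hat - 1.96 * sigma_hat, mu_hat + 1.96 * sigma_hat

# Actual coverage for new, untruncated draws from N(0,1).
trials = 20000
hits = sum(lo <= random.gauss(0.0, 1.0) <= hi for _ in range(trials))
print(hits / trials)   # around 0.71, well below the nominal 0.95
```

The quoted interval looks impressively tight, but its true coverage is roughly 71%, not 95%: exactly the kind of narrow-interval "improvement" warned about above.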

Since astronomers are so keen on error bars and coverage, I wish they would mind, in their normalization process, the fundamentals of probability theory on which statistical inference is built.

A disclaimer of mine: I often see that astronomers are well aware of the properties of pdfs in general. Narrow error bars from other kinds of statistical analysis are most likely legitimate.

— This is my last posting on my first AAS