Comments on: Is 3sigma the same as 3*1sigma?
http://hea-www.harvard.edu/AstroStat/slog/2007/3-times-sigma/
Weaving together Astronomy+Statistics+Computer Science+Engineering+Instrumentation, far beyond the growing borders
Fri, 01 Jun 2012 18:47:52 +0000

By: vlk (Mon, 12 Feb 2007 16:05:07 +0000)
http://hea-www.harvard.edu/AstroStat/slog/2007/3-times-sigma/comment-page-1/#comment-42

Couple of points worth clarifying here:
1. It is not that the parameters are estimated via linear approximation, but rather that the χ^2 is estimated via linear deviations of the function around the best fit; and
2. The issue is how far from the best fit the linear approximation remains valid: is the 1sigma error bar that comes out of this process reliable, overestimated, or underestimated?

btw, while Numerical Recipes gives a handy summary of the process, it wasn’t invented by them. And they do say (NumRec in C, p. 695, 2nd Ed.) that one must use “Monte Carlo simulations or detailed analytic calculation in determining which contour Delta χ^2 is the correct one for [the] desired confidence level.”
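As an illustration of that NumRec caveat (my addition, not part of the original comment), here is a minimal Monte Carlo sketch for the one toy case where the Delta χ^2 = 1 recipe is exact — a single location parameter with known Gaussian errors — where the simulated coverage should land near the nominal 68.3%:

```python
import numpy as np

# Toy check of the Delta chi^2 = 1 ("1 sigma") recipe in the one case where
# it is exact: a single location parameter, Gaussian errors, known sigma.
rng = np.random.default_rng(42)
sigma = 1.0            # known measurement error
mu_true = 5.0          # true parameter value (assumed for the toy)
n, n_sim = 20, 5000    # points per dataset, number of simulated datasets

covered = 0
for _ in range(n_sim):
    y = rng.normal(mu_true, sigma, size=n)
    mu_hat = y.mean()  # least-squares best fit for a constant model
    # chi^2(mu) = sum((y - mu)^2)/sigma^2 is quadratic in mu, so the
    # Delta chi^2 = 1 contour is exactly mu_hat +/- sigma/sqrt(n)
    if abs(mu_hat - mu_true) <= sigma / np.sqrt(n):
        covered += 1

coverage = covered / n_sim
print(f"empirical coverage of the Delta chi^2 = 1 interval: {coverage:.3f}")
```

For a nonlinear model the same simulation, with the fit replaced accordingly, is how one would check whether a given Delta χ^2 contour actually delivers its nominal confidence level.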

By: hlee (Mon, 12 Feb 2007 15:16:06 +0000)
http://hea-www.harvard.edu/AstroStat/slog/2007/3-times-sigma/comment-page-1/#comment-41

When the model parameters are estimated via linear approximation, the χ^2 function defined in Numerical Recipes may not follow a χ^2 distribution. However, the name suggests that the sum of squared errors after fitting always follows a χ^2 distribution, upon which statistical inference, such as constructing confidence intervals, is performed. Unless the fitted model is linear in its parameters, the name “χ^2 function” should be changed to prevent misleading conclusions. A new name would also help in finding breakdowns of the rule mσ ~ m*1σ, where m is a positive number, or the degree to which that rule holds approximately.
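A sketch of how such a breakdown can show up (my addition, using a hypothetical one-parameter model y = exp(a*x), not anything from the original comments): when the model is nonlinear in its parameter, the Delta χ^2 = 1 interval read off the χ^2 profile need not be symmetric about the best fit, so scaling it by m to get an mσ interval is only an approximation:

```python
import numpy as np

# Hypothetical one-parameter model nonlinear in its parameter: y = exp(a*x).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 15)
a_true, sigma = 1.0, 2.0
y = np.exp(a_true * x) + rng.normal(0.0, sigma, size=x.size)

# chi^2 profile over a parameter grid (one model row per grid value of a)
a_grid = np.linspace(0.5, 1.5, 4001)
chi2 = ((y - np.exp(np.outer(a_grid, x))) ** 2).sum(axis=1) / sigma**2

i_min = chi2.argmin()
a_hat, chi2_min = a_grid[i_min], chi2[i_min]

# Delta chi^2 = 1 region: where the profile stays within chi2_min + 1
inside = a_grid[chi2 <= chi2_min + 1.0]
lo, hi = inside.min(), inside.max()
print(f"best fit a = {a_hat:.4f}")
print(f"lower error = {a_hat - lo:.4f}, upper error = {hi - a_hat:.4f}")
```

When the lower and upper errors printed here differ, a single symmetric “1σ” number is already an approximation before any multiplication by m.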

By: hlee (Mon, 12 Feb 2007 05:18:13 +0000)
http://hea-www.harvard.edu/AstroStat/slog/2007/3-times-sigma/comment-page-1/#comment-43

There were different viewpoints:
Instead of maximum likelihood estimators, they described least squares estimators via linear approximation (or linearization, where the Hessian is mentioned). Most optimization (fitting) problems adopt some level of linearization. Here, the 1sigma error is the error of the parameter estimator, not of the model (i.e., not an error quantity related to measurement errors). That χ^2 function does not describe the error of the parameter estimator unless the model is linear; it does describe the residuals of the fitted model.

Some simulation studies comparing bootstrapping against their χ^2 function for spectrum line fitting could be done. Perhaps some nonlinear model fitting, with analytic/numerical solutions for the variance estimates, might be included. vlk’s quote from NumRec confirms it’s worthwhile to try.
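One possible shape for such a comparison (a sketch under assumed toy data — the exponential model below is hypothetical, not an actual spectral line model): a residual bootstrap of the fitted parameter set against the linear-approximation (Hessian-type) 1sigma for the same fit:

```python
import numpy as np

# Hypothetical toy data (not a real spectral line): y = exp(a*x) + noise.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0, 30)
a_true, sigma = 1.0, 2.0
y = np.exp(a_true * x) + rng.normal(0.0, sigma, size=x.size)

# precompute model values on a parameter grid, one row per candidate a
A = np.linspace(0.5, 1.5, 2001)
MODELS = np.exp(np.outer(A, x))

def fit_a(y_obs):
    """Grid-search least-squares estimate of a."""
    chi2 = ((y_obs - MODELS) ** 2).sum(axis=1)
    return A[chi2.argmin()]

a_hat = fit_a(y)
resid = y - np.exp(a_hat * x)

# residual bootstrap: refit on synthetic datasets built from resampled residuals
boot = np.array([fit_a(np.exp(a_hat * x) + rng.choice(resid, size=x.size))
                 for _ in range(500)])
sigma_boot = boot.std(ddof=1)

# linear-approximation ("Hessian") error: sigma / sqrt(sum (df/da)^2)
dfda = x * np.exp(a_hat * x)
sigma_lin = sigma / np.sqrt(np.sum(dfda ** 2))
print(f"bootstrap 1sigma = {sigma_boot:.4f}, linear-approx 1sigma = {sigma_lin:.4f}")
```

The gap between the two numbers, as a function of how nonlinear the model is around the best fit, is exactly the kind of thing such a simulation study would map out.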
