Jun 3rd, 2008| 02:53 am | Posted by vlk

It is somewhat surprising that astronomers haven’t cottoned on to Lowess curves yet. That’s probably a good thing because I think people already indulge in smoothing far too much for their own good, and Lowess makes for a very powerful hammer. But the fact that it is semi-parametric and is based on polynomial least-squares fitting does make it rather attractive.

And, of course, sometimes it is unavoidable, or so I told Brad W. When one has too many points for a regular polynomial fit, and they are too scattered for a spline, and too few to try a wavelet “denoising”, and no real theoretical expectation of any particular model function, and all one wants is “a smooth curve, damnit”, then Lowess is just the ticket.

Well, almost.

There is one major problem — *how does one figure what the error bounds are on the “best-fit” Lowess curve?* Clearly, each fit at each point can produce an estimate of the error, but simply collecting the separate errors is not the right thing to do because they would all be correlated. I know how to propagate Gaussian errors in boxcar smoothing a histogram, but this is a whole new level of complexity. Does anyone know if there is software that can calculate reliable error bands on the smooth curve? We will take any kind of error model — Gaussian, Poisson, even the (local) variances in the data themselves.
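For what it's worth, one brute-force route to approximate pointwise bands — not from the post, just a common workaround — is to bootstrap the whole fit: resample the (x, y) pairs, re-run Lowess each time, and take percentiles of the replicate curves. The correlations between neighboring points are handled automatically, at the cost of assuming the data pairs are exchangeable. A minimal sketch using statsmodels' `lowess` (all names and settings here are illustrative):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(42)

# Synthetic stand-in data: a smooth trend plus Gaussian scatter.
x = np.linspace(0, 10, 200)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

# Common grid so bootstrap replicate curves can be compared pointwise.
grid = np.linspace(0.5, 9.5, 100)

def smooth(xs, ys):
    """Lowess fit evaluated on the common grid (via interpolation)."""
    out = lowess(ys, xs, frac=0.3)  # returns sorted (x, yhat) pairs
    return np.interp(grid, out[:, 0], out[:, 1])

fit = smooth(x, y)

# Bootstrap: resample (x, y) pairs with replacement and refit.
nboot = 300
curves = np.array([
    smooth(x[i], y[i])
    for i in (rng.integers(0, x.size, x.size) for _ in range(nboot))
])

# Pointwise 68% band from the percentiles of the replicate curves.
lo, hi = np.percentile(curves, [16, 84], axis=0)
```

This gives pointwise bands only; simultaneous bands for the whole curve are a harder problem.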

Tags: Brad Wargelin, error bands, error bars, Fitting, least-squares, Loess, Lowess, polynomial, question for statisticians, smoothing
Category: Algorithms, Fitting, Methods, Stat, Uncertainty
11 Comments
Jun 4th, 2007| 05:42 pm | Posted by vlk

Leccardi & Molendi (2007) have a paper in A&A (astro-ph/0705.4199) discussing the biases in parameter estimation when spectral fitting is confronted with low-count data. Not surprisingly, they find that the bias is larger at lower counts, for the standard chi-square statistic than for C-stat, and for grouped than for ungrouped data. Peter Freeman talked about something like this at the 2003 X-ray Astronomy School at Wallops Island (pdf1, pdf2), and no doubt part of the problem also has to do with the (un)reliability of the fitting process when the chi-square surface gets complicated.

Anyway, they propose an empirical method to reduce the bias by computing the probability distribution functions (*pdf*s) for various simulations, and then *averaging the pdfs* in groups of 3. Seems to work, for reasons that escape me completely.

[**Update:** links to Peter's slides corrected]

Tags: bias, C-stat, chi-square, chisq, Cstat, Fitting, parameter estimation, Peter Freeman, spectra
Category: arXiv, Bad AstroStat, Fitting, Methods, Spectral, Stat
1 Comment
May 25th, 2007| 04:30 pm | Posted by vlk

Despite some recent significant advances in Statistics and its applications to Astronomy (Cash 1976, Cash 1979, Gehrels 1984, Schmitt 1985, Isobe et al. 1986, van Dyk et al. 2001, Protassov et al. 2002, etc.), there still exist numerous problems and limitations in the standard statistical methodologies that are routinely applied to astrophysical data. For instance, the basic algorithms used in non-linear curve-fitting in spectra and images have remained essentially unchanged since the 1960s: the downhill simplex method of Nelder & Mead (1965) modified by Powell, and methods of steepest descent exemplified by Levenberg-Marquardt (Marquardt 1963). All non-linear curve-fitting programs currently in general use (Sherpa, XSPEC, MPFIT, PINTofALE, etc.), with the exception of Monte Carlo and MCMC methods, are implementations based on these algorithms and thus share their limitations.
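To make the point concrete: scipy's `curve_fit` wraps the same Levenberg-Marquardt machinery (MINPACK), and a toy spectral-line fit shows both why it is ubiquitous and why it is only a *local* optimizer. This sketch is illustrative, not taken from any of the packages named above; the model and starting values are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# A single Gaussian line on a flat continuum, fit with
# Levenberg-Marquardt (method="lm" selects the MINPACK routine).
def model(x, amp, mu, sigma, bkg):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + bkg

x = np.linspace(0, 10, 400)
truth = (5.0, 4.0, 0.5, 1.0)
y = model(x, *truth) + rng.normal(scale=0.1, size=x.size)

# Started near the truth, LM converges quickly and accurately...
popt, pcov = curve_fit(model, x, y, p0=[4, 3.5, 1, 0.5], method="lm")

# ...but like all gradient-based local optimizers, it only finds the
# minimum nearest its starting point: start mu far from the line and
# the fit can stall on the flat continuum or latch onto noise instead.
```

This starting-point sensitivity is exactly the limitation shared by all the fitters built on these algorithms.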

Continue reading ‘On the unreliability of fitting’ »

Tags: best-fit, chi-square, chisq, Fitting, levenberg-marquardt, MCMC, steepest descent
Category: Bad AstroStat, Data Processing, Fitting, Frequentist, Methods, Spectral, Stat, Uncertainty
2 Comments