Comments on: Q: Lowess error bars? http://hea-www.harvard.edu/AstroStat/slog/2008/question-lowess-error-bars/ Weaving together Astronomy+Statistics+Computer Science+Engineering+Instrumentation, far beyond the growing borders Fri, 01 Jun 2012 18:47:52 +0000

By: hlee http://hea-www.harvard.edu/AstroStat/slog/2008/question-lowess-error-bars/comment-page-1/#comment-248 Mon, 09 Jun 2008 00:27:49 +0000

It is not a preview button, but now one can see what one’s comment looks like. Please let us know of any inconvenience with slogging. Thanks again.

By: hlee http://hea-www.harvard.edu/AstroStat/slog/2008/question-lowess-error-bars/comment-page-1/#comment-247 Mon, 09 Jun 2008 00:25:19 +0000

I don’t think regression analysis is only for straight lines, although the given examples are the simplest cases. I understood quantile regression as a versatile, robust, and nonparametric method compared to traditional regression analysis, which is typically built under normal errors. Given thousands of data points to fit, I thought the bootstrap is not economically viable and that quantile regression can be an alternative. I could be wrong, but for the objective of fitting, lowess does not appeal to me. It’s time to dust off the book.
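To illustrate the quantile-regression idea mentioned here (this is my own sketch, not code from the thread): a straight-line model can be fit at a chosen quantile by minimizing the pinball (check) loss. The subgradient-descent routine, the toy data, and all names below are illustrative assumptions; a real analysis would use a dedicated solver such as the one in statsmodels or R's quantreg.

```python
import numpy as np

def pinball_grad(u, tau):
    # Subgradient of the pinball (check) loss with respect to the
    # prediction, where u = y - prediction.
    return np.where(u >= 0, -tau, 1.0 - tau)

def quantile_line(x, y, tau, lr=0.02, iters=20000):
    # Fit y ~ a + b*x at quantile tau by subgradient descent.
    a, b = 0.0, 0.0
    for _ in range(iters):
        g = pinball_grad(y - (a + b * x), tau)
        a -= lr * g.mean()
        b -= lr * (g * x).mean()
    return a, b

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=x.size)

a25, b25 = quantile_line(x, y, 0.25)
a50, b50 = quantile_line(x, y, 0.50)
a75, b75 = quantile_line(x, y, 0.75)
# The band between the 25% and 75% fits brackets roughly the central
# half of the data -- the "50% error range" idea, with no bootstrap.
```

Because each fit is a single optimization over the full data set, the cost grows linearly with the number of points, which is the economy argument made above for thousands of data points.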

By: vlk http://hea-www.harvard.edu/AstroStat/slog/2008/question-lowess-error-bars/comment-page-1/#comment-245 Sat, 07 Jun 2008 23:09:51 +0000

Thanks for the link to Gelman’s post, Nick (btw, I fixed that hyperlink!).

Hyunsook, could you explain how quantile regression helps to generate smooth curves? I was under the impression that they are just another way to fit straight lines.

Alex, you bring up another bugaboo: when one bootstraps loess curves, it is easy to get them braided up like a frayed rope. In such cases, a density plot tells only half the story. What kind of strategies do statisticians use to deal with that?

By: awblocker http://hea-www.harvard.edu/AstroStat/slog/2008/question-lowess-error-bars/comment-page-1/#comment-244 Fri, 06 Jun 2008 17:51:20 +0000

Typically, loess analyses are accompanied by a plot showing the original fit with a large number of bootstrap replications, produced by resampling the original loess residuals. However, these are also quite difficult to read. I favor some type of shaded density plot for the bootstrap replications. If the program you’re plotting in supports transparency, this can be done quickly by increasing the line width and dropping the opacity when plotting the bootstrapped loess curves.
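A rough sketch of this residual-resampling recipe, with a hand-rolled tricube local-linear smoother standing in for a real loess implementation (in practice one would use R's loess() or statsmodels' lowess); the function names, bandwidth choice, and toy data are all my own assumptions:

```python
import numpy as np

def local_linear(x, y, x0, frac=0.25):
    # Tricube-weighted local linear fit evaluated at x0 (loess, degree 1).
    n = x.size
    k = max(2, int(frac * n))
    d = np.abs(x - x0)
    h = np.sort(d)[k - 1]                 # bandwidth = k-th nearest distance
    w = np.clip(1.0 - (d / h) ** 3, 0.0, 1.0) ** 3
    sw = np.sqrt(w)                       # weighted least squares via lstsq
    A = np.column_stack([np.ones(n), x - x0]) * sw[:, None]
    beta, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
    return beta[0]                        # fitted value at x0

def loess_curve(x, y, frac=0.25):
    return np.array([local_linear(x, y, xi, frac) for xi in x])

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 80)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

fit = loess_curve(x, y)
resid = y - fit

# Residual bootstrap: resample residuals, add them back to the fit, re-smooth.
boot = np.array([
    loess_curve(x, fit + rng.choice(resid, size=resid.size, replace=True))
    for _ in range(100)
])

# Overplotting all rows of `boot` with a wide, low-opacity line (e.g.
# matplotlib's alpha near 0.05) gives the shaded-density effect described
# above; pointwise percentiles give a plain band instead.
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
```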

By: hlee http://hea-www.harvard.edu/AstroStat/slog/2008/question-lowess-error-bars/comment-page-1/#comment-242 Wed, 04 Jun 2008 20:36:40 +0000

I was going to suggest quantile regression, but was surprised that there are already many comments. Via quantile regression, you’ll get best-fit regression results at a given quantile. The 25% and 75% quantiles will give regression fits bracketing a 50% error range. On the other hand, I learned lowess as a diagnostic tool, much as astronomers add error bars and a straight line to show how good the fit is, not as a best fit.

By the way, thank you, Nick, for pointing out a technical improvement for the slog. I’m not sure whether it’s due to WordPress, the current theme, or my laziness that I wasn’t able to find a plug-in. I’ll definitely look into it and will do my best to include a preview button.

By: Nick http://hea-www.harvard.edu/AstroStat/slog/2008/question-lowess-error-bars/comment-page-1/#comment-241 Wed, 04 Jun 2008 08:43:00 +0000

Oops. Sorry about the hyperlink. This is the reason why I need a preview button! I’ll be more careful next time.

By: Nick http://hea-www.harvard.edu/AstroStat/slog/2008/question-lowess-error-bars/comment-page-1/#comment-240 Wed, 04 Jun 2008 08:41:46 +0000

vlk, I don’t think it’s super hard to do the bootstrap. Also not, imho, super enlightening. I myself would love to become a pure Bayesian, even in nonparametric settings, and in this case there may be some Bayesian alternatives which give results similar to loess.

You might check out Gelman’s post on it (http://www.stat.columbia.edu/~cook/movabletype/archives/2005/03/lowess_is_great.html), but he says that there are no Bayesian versions of it. The comments back in 2005 do mention some Bayesian alternatives.

One such alternative, I would think, is Gaussian processes. If you google Gaussian processes, you’ll see that there is even a webpage devoted to them. The difficult part is choosing a prior for the covariance function. This choice could give a wide range of alternatives (you could even get ARIMA/ARMA-type fits, or probably a wide range of splines). It’s extremely general. Since posteriors only give confidence intervals in parameter space, I guess I’d use predictive distributions to get the confidence intervals in data space.
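For concreteness, here is a minimal Gaussian-process regression sketch using the standard predictive equations with a squared-exponential covariance; the kernel choice, its hyperparameters, and the toy data are my own assumptions, not anything from the thread. The predictive standard deviation includes the noise variance, which is what gives the confidence band in data space rather than parameter space.

```python
import numpy as np

def sqexp(a, b, length=1.0, amp=1.0):
    # Squared-exponential (RBF) covariance between two 1-D point sets.
    return amp * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 40)
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)
xs = np.linspace(0.0, 10.0, 100)                 # prediction grid

noise = 0.2 ** 2                                 # assumed known noise variance
K = sqexp(x, x) + noise * np.eye(x.size)
Ks = sqexp(x, xs)
Kss = sqexp(xs, xs)

# Standard GP posterior predictive equations via a Cholesky factor.
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mu = Ks.T @ alpha                                # posterior mean curve
V = np.linalg.solve(L, Ks)
var = np.diag(Kss) - np.sum(V * V, axis=0)       # posterior variance of f
sd = np.sqrt(np.clip(var, 0.0, None) + noise)    # predictive sd in data space
band = (mu - 1.96 * sd, mu + 1.96 * sd)          # ~95% predictive band
```

Here the smooth curve and its band come out of one linear-algebra pass, with no bootstrap loop; the cost is the O(n³) Cholesky factorization of the n×n covariance matrix.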

BTW, it would be great to have a “preview” button for comments on this blog.

By: vlk http://hea-www.harvard.edu/AstroStat/slog/2008/question-lowess-error-bars/comment-page-1/#comment-239 Tue, 03 Jun 2008 22:24:46 +0000

Thanks, Nick. I was afraid of that — no alternative to brute force bootstrap or Monte Carlo then!

By: Nick http://hea-www.harvard.edu/AstroStat/slog/2008/question-lowess-error-bars/comment-page-1/#comment-238 Tue, 03 Jun 2008 21:38:39 +0000

I don’t know about measured data errors or similar. Loess by itself doesn’t really come equipped with a standard error calculator, since, if you’re a frequentist, just how should the loess “parameters” be distributed according to the sampling distribution?

Rather, people tend to use the bootstrap to find standard errors (like they use cross-validation to find “best fits”). For an example of bootstrapped standard errors in loess, check out http://www-stat.stanford.edu/~susan/courses/s208/node20.html, toward the middle of the page under the heading “Curve Fitting Example, Efron & Tibshirani, 7.3”.

By: vlk http://hea-www.harvard.edu/AstroStat/slog/2008/question-lowess-error-bars/comment-page-1/#comment-237 Tue, 03 Jun 2008 18:55:30 +0000

Yes, R does have a lowess function, but it doesn’t produce an estimate of reliability. It doesn’t matter (at this time) what the assumptions are about the underlying error distribution of the data. Lowess produces a curve based on fitting polynomials separately at each point (so that’s why I called it a “best-fit” curve), and the question is, how robust is that curve, given that the data have scatter and/or that the data have measurement uncertainty?

I suppose it is always possible to run a thousand Monte Carlo simulations based on the measured data errors, but I was looking for a faster, hopefully analytical, way to get the confidence band on the curve.
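The brute-force Monte Carlo route described here can be sketched in a few lines: perturb each point by its own measurement error, re-smooth, and take pointwise percentiles over the replicate curves. The Gaussian kernel smoother below is only a crude stand-in for lowess, and the bandwidth, error sizes, and names are all illustrative assumptions:

```python
import numpy as np

def kernel_smooth(x, y, bandwidth=0.8):
    # Gaussian kernel smoother: a crude stand-in for lowess.
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 60)
sigma = np.full(x.size, 0.3)          # known per-point measurement errors
y = np.sin(x) + rng.normal(scale=sigma)

# Monte Carlo: jitter each point within its own error bar and re-smooth.
curves = np.array([
    kernel_smooth(x, y + rng.normal(scale=sigma)) for _ in range(500)
])
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)   # pointwise 95% band
```

Unlike the residual bootstrap, this propagates the *stated* measurement errors rather than the empirical scatter, so the two bands answer slightly different questions.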
