Archive for the ‘Jargon’ Category.

Yes, please

[Book] The Elements of Statistical Learning, 2nd Ed.

This was written more than a year ago, and I forgot to post it.
Continue reading ‘[Book] The Elements of Statistical Learning, 2nd Ed.’ »

An Instructive Challenge

This question came to the CfA Public Affairs office, and I am sharing it with y’all because I think the solution is instructive.

A student had to figure out the name of a stellar object as part of an assignment. He was given the following information about it:

  • apparent [V] magnitude = 5.76
  • B-V = 0.02
  • E(B-V) = 0.00
  • parallax = 0.0478 arcsec
  • radial velocity = -18 km/s
  • redshift = 0 km/s

He looked in all the stellar databases but was unable to locate it, so he asked the CfA for help.

Just to help you out, here are a couple of places where you can find comprehensive online catalogs:

See if you can find it!
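If you want a head start: the numbers above already pin down two derived quantities with nothing but the standard distance-modulus arithmetic (E(B-V) = 0, so no extinction correction is needed). A quick sketch:

```python
import math

# Derived quantities from the numbers given in the assignment.
m_v = 5.76          # apparent V magnitude
parallax = 0.0478   # parallax in arcsec

d = 1.0 / parallax                     # distance in parsec
M_v = m_v + 5 - 5 * math.log10(d)      # absolute V magnitude via the distance modulus

print(round(d, 1), round(M_v, 2))      # → 20.9 4.16
```

A distance of about 21 pc and an absolute magnitude near that of a Sun-like star narrows the field considerably.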

Continue reading ‘An Instructive Challenge’ »

Everybody needs crampons

Sherpa is a fitting environment in which Chandra data (and really, X-ray data from any observatory) can be analyzed. It has just undergone a major update and now runs on python. Or allows python to run. Something like that. It is a very powerful tool, but I can never remember how to use it, and I have an amazing knack for not finding what I need in the documentation. So here is a little cheat sheet (which I will keep updating as and when I learn more): Continue reading ‘Everybody needs crampons’ »

A short note on Probability for astronomers

I am often irked when I see a function normalized over a feasible parameter space and then used as a probability density function (pdf) for further statistical inference. To be a proper pdf, the normalization has to be done over a measurable space, not over a feasible space. Such practice often yields biased best fits (biased estimators) and improper error bars. On the other hand, validating a measurable space on physical grounds seems complicated. To be precise, we often get lost in translation. Continue reading ‘A short note on Probability for astronomers’ »
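A toy illustration of the point (my own, not from the post): take a Gaussian likelihood, renormalize it only over an off-center "feasible" window, and the mean of the resulting "pdf" is biased away from the true value. The window [0.5, 5] here is an arbitrary assumption for the demonstration.

```python
import numpy as np

mu, sigma = 1.0, 1.0
x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]
pdf = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

feasible = (x >= 0.5) & (x <= 5.0)      # an assumed "feasible" window
trunc = np.where(feasible, pdf, 0.0)
trunc = trunc / (np.sum(trunc) * dx)    # renormalized over that window only

mean_full = np.sum(x * pdf) * dx        # ~ 1.0, the true mean
mean_trunc = np.sum(x * trunc) * dx     # pulled upward, away from 1.0
```

The renormalized curve integrates to one, so it looks like a perfectly good pdf, yet any estimator built from it inherits the shift.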

From Quantile Probability and Statistical Data Modeling

by Emanuel Parzen in Statistical Science 2004, Vol. 19(4), pp. 652-662 (JSTOR)

I teach that statistics (done the quantile way) can be simultaneously frequentist and Bayesian, confidence intervals and credible intervals, parametric and nonparametric, continuous and discrete data. My first step in data modeling is identification of parametric models; if they do not fit, we provide nonparametric models for fitting and simulating the data. The practice of statistics, and the modeling (mining) of data, can be elegant and provide intellectual and sensual pleasure. Fitting distributions to data is an important industry in which statisticians are not yet vendors. We believe that unifications of statistical methods can enable us to advertise, “What is your question? Statisticians have answers!”

I couldn’t help liking this paragraph because of its bittersweetness. I hope you appreciate it as much as I did.

some python modules

I was told to stay away from python and I’ve obeyed the order sincerely. However, I collected the following material several months back upon hearing about import inference, and I hate to see it become obsolete. At that time, collecting these modules and working through them helped me complete the first step toward the quest Learning Python (the first posting of this slog). Continue reading ‘some python modules’ »

[ArXiv] classifying spectra

[arXiv:stat.ME:0910.2585]
Variable Selection and Updating In Model-Based Discriminant Analysis for High Dimensional Data with Food Authenticity Applications
by Murphy, Dean, and Raftery

Classifying or clustering (or semi-supervised learning of) spectra is a very challenging problem, from collecting statistical-analysis-ready data to reducing the dimensionality without sacrificing the complex information in each spectrum. The challenge lies not only in estimating spiky (non-differentiable) curves via statistically well-defined estimating-equation procedures, but also in transforming the data so that they meet the regularity conditions of statistics.
Continue reading ‘[ArXiv] classifying spectra’ »

Scatter plots and ANCOVA

Astronomers rely on scatter plots to illustrate correlations and trends among pairs of variables more than scientists in any other field[1]. Pages of scatter plots with regression lines are common, in which the slope of the regression line and its error bars serve as indicators of the degree of correlation. Sometimes so many scatter plots make me think that, overall, the resources spent on drawing nice scatter plots and the pages where they are printed are wasted. Why not just compute the correlation coefficient and its error, and publicize the processed data used to compute it (rather than the full data) so that others can verify the result for the sake of validation? A couple of scatter plots are fine, but when I see dozens of them, I lose my focus. This is another cultural difference. Continue reading ‘Scatter plots and ANCOVA’ »

  1. This is not an absolute statement but a personal impression formed from reading articles in various fields in addition to astronomy. My reading in other fields suggests that many rely on correlation statistics rather than on scatter plots with straight lines drawn through the data to impose relationships between variable pairs.
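The alternative suggested above fits in a few lines. A sketch with synthetic data (the 0.7 slope and 0.5 noise scale are arbitrary assumptions): compute Pearson's r, its large-sample standard error, and a Fisher-z confidence interval, all of which are publishable without a single plot.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.7 * x + rng.normal(scale=0.5, size=n)  # synthetic correlated pair

r = np.corrcoef(x, y)[0, 1]                  # Pearson correlation coefficient
se_r = np.sqrt((1 - r**2) / (n - 2))         # large-sample standard error of r

# Fisher z-transform gives a better-behaved 95% interval
z, se_z = np.arctanh(r), 1 / np.sqrt(n - 3)
ci = np.tanh([z - 1.96 * se_z, z + 1.96 * se_z])
```

Reporting (r, its error, n) lets a reader reproduce and compare results across papers far more easily than reading slopes off a printed figure.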

[MADS] logistic regression

Although a bit of time has elapsed since my post on space weather, where I noted that logistic regression is used for prediction, it still seems true that logistic regression is rarely used in astronomy. Where it is used, it appears not under that statistical name but within Bayesian modeling procedures serving a similar purpose. Continue reading ‘[MADS] logistic regression’ »
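For the curious, the method itself is small enough to fit here. A minimal sketch of logistic regression fit by gradient ascent on the log-likelihood, on synthetic binary outcomes (the coefficients 0.5 and 2.0, and the flare/no-flare framing, are illustrative assumptions, not from any data set):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(0.5 + 2.0 * x)))       # true event probability
y = (rng.random(n) < p_true).astype(float)        # binary outcomes (e.g. flare / no flare)

X = np.column_stack([np.ones(n), x])              # design matrix with intercept
beta = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))               # current predicted probabilities
    grad = X.T @ (y - p) / n                      # gradient of the mean log-likelihood
    beta += 0.5 * grad                            # ascent step

# beta should land near the true coefficients (0.5, 2.0)
```

In practice one would use a packaged routine (with Newton/IRLS and standard errors), but the model is just this: a linear predictor pushed through a logistic link.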

SINGS

[MADS] Kalman Filter

I decided a while ago to discuss the Kalman filter on the slog after finding out that this popular methodology is rather underrepresented in astronomy. However, it is not completely missing from ADS: full-text and all-bibliographic-source searches show more results. Their use of the Kalman filter, though, looks similar to the usage of “genetic algorithms” or “Bayes theorem.” Probably the broad notion of the Kalman filter makes it difficult to find its applications by name in astronomy, since wheels are often reinvented (algorithms under different names share the same objective). Continue reading ‘[MADS] Kalman Filter’ »
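To make the method concrete, here is a minimal one-dimensional Kalman filter for the simplest possible state-space model, a constant signal observed with noise. All the numbers (true level, noise variances, initial guess) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
true_level = 4.0
z = true_level + rng.normal(scale=1.0, size=50)   # noisy measurements

q, r = 1e-4, 1.0      # assumed process and measurement noise variances
x_hat, p = 0.0, 1e3   # initial state estimate and its (large) variance

estimates = []
for zk in z:
    p += q                        # predict: variance grows by process noise
    k = p / (p + r)               # Kalman gain: trust in the new measurement
    x_hat += k * (zk - x_hat)     # update state with the measurement residual
    p *= (1 - k)                  # shrink the state variance
    estimates.append(x_hat)
```

With a vague prior (large initial p), the estimate quickly converges toward the running mean of the measurements; richer dynamics only change the predict step.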

More on Space Weather

Thanks to a Korean solar physicist[1], I was able to gather the following websites and some relevant information on Space Weather Forecasting in action, not limited to the literature or toy data.

Continue reading ‘More on Space Weather’ »

  1. I must acknowledge him for his kindness and patience. He was my Wikipedia for questions while I was studying the Sun.

[MADS] compressed sensing

Soon it will no longer qualify for [MADS], because I have seen some abstracts on arxiv.org with the phrase “compressed sensing.” Nonetheless, there is only one refereed publication in ADS so far.

http://adsabs.harvard.edu/abs/2009MNRAS.395.1733W
Title: Compressed sensing imaging techniques for radio interferometry
Authors: Wiaux, Y. et al. Continue reading ‘[MADS] compressed sensing’ »
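The core idea is easy to demonstrate on a toy problem (a sketch of my own, far simpler than real interferometric imaging): a sparse signal can be recovered from far fewer linear measurements than its length by L1-regularized reconstruction, here via iterative soft-thresholding (ISTA). All sizes and amplitudes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 40                            # signal length, number of measurements
x_true = np.zeros(n)
idx = rng.choice(n, 3, replace=False)
x_true[idx] = [5.0, -3.0, 8.0]            # a 3-sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x_true                            # m << n linear measurements

lam = 0.05                                # L1 penalty weight
step = 1.0 / np.linalg.norm(A, 2) ** 2    # safe gradient step size
x = np.zeros(n)
for _ in range(2000):
    g = x + step * A.T @ (y - A @ x)                        # gradient step on ||y - Ax||^2 / 2
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # soft-threshold (promotes sparsity)
```

Despite having only 40 equations for 100 unknowns, the sparsity prior pins down the solution; this is the same principle the paper applies to radio-interferometric visibilities.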

[ArXiv] component separation methods

I have noticed a surge of principal component analysis (PCA) and independent component analysis (ICA) applications in astronomy. PCA and ICA are used for separating mixed components under certain assumptions. For PCA, the decomposition assumes that the original sources are orthogonal (uncorrelated) and that the mixed observations are approximately multivariate normal. For ICA, the assumption is that the sources are independent and non-Gaussian (though it permits at most one source component to be Gaussian). Such assumptions allow one to define dissimilarity measures, and the algorithms work to maximize them. Continue reading ‘[ArXiv] component separation methods’ »
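The PCA half of the story is a one-liner on centered data. A generic sketch (synthetic two-source mixture, not from any cited paper): singular value decomposition of the centered data matrix yields uncorrelated component scores and the variance fraction each component explains.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
s1 = rng.normal(size=n)                  # two uncorrelated "source" components
s2 = rng.normal(size=n)
X = np.column_stack([s1 + 0.1 * s2, s1 - 0.1 * s2, s2])  # mixed observations

Xc = X - X.mean(axis=0)                  # center each column
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)          # fraction of variance per component
scores = Xc @ Vt.T                       # data projected onto the principal axes
```

The score columns are exactly uncorrelated by construction, which is the sense in which PCA "separates" components; ICA instead optimizes for statistical independence and so needs the non-Gaussianity assumption noted above.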