A student had to figure out the name of a stellar object as part of an assignment. He was given the following information about it:

- apparent [V] magnitude = 5.76
- B-V = 0.02
- E(B-V) = 0.00
- parallax = 0.0478 arcsec
- radial velocity = -18 km/s
- redshift = 0 km/s

He looked in all the stellar databases but was unable to locate it, so he asked the CfA for help.

Just to help you out, here are a couple of places where you can find comprehensive online catalogs:

See if you can find it!

Answer next ~~week~~ month.

**Update (2010-aug-02):**

The short answer is, I could find no such star in any commonly available catalog. But that is not the end of the story. There does exist a star in the Hipparcos catalog, HIP 103389, that has approximately the right distance (21 pc), radial velocity (-16.1 km/s), and *V* magnitude (5.70). It doesn’t match exactly, and the *B-V* is completely off, but that is the moral of the story.

The thing is, catalogs are not perfect. The same objects often have very different numerical entries in different catalogs, for a variety of reasons: different calibrations, different analysts, or even intrinsic variations in the source. And you can bet your bottom dollar that the quoted statistical uncertainties in the quantities do not account for the observed variance. Take the *B-V* value, for instance. It is 0.5 for HIP 103389, but the initial problem stated that it was 0.02, which makes it an A-type star. But if it were an A-type star at 21 pc, it should have had a magnitude of *V*~1.5, much brighter than the required 5.76!
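The distance and magnitude arithmetic above follows directly from the parallax. Here is a minimal sketch of that consistency check in Python; only the parallax and apparent magnitude from the problem statement are used, everything else is basic photometric bookkeeping:

```python
import math

# Distance from parallax: d [pc] = 1 / parallax [arcsec]
parallax = 0.0478            # arcsec, from the problem statement
d = 1.0 / parallax           # ~20.9 pc, consistent with HIP 103389

# Distance modulus: m - M = 5 log10(d) - 5
mu = 5.0 * math.log10(d) - 5.0   # ~1.6 mag

# Absolute magnitude implied by the quoted apparent magnitude
M_V = 5.76 - mu                  # ~4.2
```

With a distance modulus of only ~1.6 mag, the quoted *V* = 5.76 implies an absolute magnitude around *M_V* ≈ 4.2, far fainter than any early A-type dwarf; that is the inconsistency the paragraph above is pointing at.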

I think this illustrates one of the fundamental tenets of science as it is practiced, versus how it is taught. The first thing that a practicing scientist does (especially one not of the theoretical persuasion) is to try and see where the data might be wrong or misleading. It should only be included in analysis after it passes various consistency checks and is deemed valid. The moral of the story is, don’t trust data blindly just because it is a “number”.

From SINGS (Spitzer Infrared Nearby Galaxies Survey): Isn’t it a beautiful Hubble tuning fork?

As a first-year graduate student of statistics, I enrolled in Prof. C. R. Rao’s “multivariate analysis” class without much thought, partly because of the rumor that he would not be teaching much longer and partly because of his fame as the most famous statistician alive. Everything was smooth and easy for him, and he had an incredible memory for equations and proofs. I, however, only grasped the intuitive concepts, such as why a method works, not the details of the mathematics, the theorems, and their proofs. Almost immediately, I began to think about how these methods could be applied to astronomical data, and after a few lessons I desperately wanted to try out multivariate analysis methods to classify galactic morphology.

The dream died shortly afterward, because there was no data set that could be properly fed into statistical classification methods. I spent quite some time searching astronomical databases, including ADS; this was before SDSS or VizieR became as popular as they are now. Then I thought about applying the methods to classify supernovae, because understanding the patterns of their light curves tells us a lot about the history of our universe (Type Ia SNe are standard candles) and because I knew of some publicly available SN light curves. I quickly realized, however, that individual light curves are biased from a sampling perspective, and I did not know how to correct for that in a multivariate analysis. I also thought about applying multivariate methods to stellar spectral types and to stars in different dynamical systems (single, binary, association, etc.). I thought about how to apply the newly learned methods to every astronomical object I knew, from sunspots to AGNs.

Regardless of the target objects to be scrutinized under this fascinating subject, “multivariate analysis,” two factors kept discouraging me. One was that I did not have enough training to develop, in a couple of weeks, new statistical models reflecting the unique challenges embedded in the data: missing values, irregularities, non-iid sampling, outliers, and other features that are hard to transcribe into a statistical setting. The other, more critical, factor was that there was no accessible astronomical database repository suited to statistical learning. Without deep knowledge of astronomy and trained skills for handling astronomical data, catalogs are generally useless; the catalogs and data sets in astronomical archives are quite different from the intuitive, ready-to-use data sets in machine-learning repositories.

Astronomers may think that analyzing toy or mock data sets is not scientific, because it does not lead to the new discoveries they are always making. From a data analyst’s viewpoint, however, scientific advances include finding tools that summarize data in an optimal manner. As I argued in Astroinformatics, methods for retrieving information can be attempted and validated on well-understood data sets. The Pythagorean theorem was not proved just once; there are 39 different ways to prove it.

Seeing this nice poster image (the full-resolution, 56 MB image is available from the link) brought back memories of my enthusiasm for applying statistical learning methods to better knowledge discovery. As you can see, there are many different types of galaxies, and often there is no clear boundary between them; consider classifying blurry galaxies by eye, where a spiral can easily be classified as an irregular. Although I wish for automatic classification of these astrophysical objects, composing a training set for classification, or collecting data from distinct manifold groups for clustering, is difficult: a machine-learning procedure would have to be as complicated as the complexity this tuning fork displays. The complex topology of astronomical objects seems to be the primary reason why statistical learning applications lag here compared to other fields.

Nonetheless, multivariate analysis can be useful for viewing relations from perspectives other than known physics models. It may help develop more finely tuned physical models by taking into account latent variables found through statistical learning. Such attempts, I believe, can assist astronomers in designing telescopes and in inventing efficient ways to collect and analyze data, by revealing which features matter most for understanding the morphological shapes of galaxies, patterns in light curves, spectral types, and so on. As such experience accumulates, different physical insights can kick in, much as scientists once scrambled and assembled galaxies into a tuning fork that led to the development of various evolution models.
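As a concrete (toy) illustration of ranking features by how much variation they carry, here is a minimal principal-component sketch with numpy. The three “galaxy” features and their distributions are entirely hypothetical, invented only to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical galaxy feature table: concentration, asymmetry, color.
# Asymmetry is built to correlate with concentration, so one latent
# direction should dominate the first two features.
concentration = rng.normal(3.0, 1.0, n)
asymmetry = 0.2 * concentration + rng.normal(0, 0.05, n)
color = rng.normal(0.7, 0.3, n)
X = np.column_stack([concentration, asymmetry, color])

# PCA via SVD of the centered data: singular values rank the latent
# directions by the fraction of variance they explain.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # descending fractions summing to 1
```

The `explained` fractions are exactly the kind of summary that could tell an instrument designer which measured features are informative and which are nearly redundant.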

To make a long story short, you have two choices: one, simply enjoy these beautiful pictures and appreciate the complexity of our universe; or two, let this picture of Hubble’s tuning fork inspire you toward advances in astroinformatics. Whichever path you choose, it will be worth your time.

With my limited knowledge, I cannot lay out all the important aspects of solar physics, of climate changes due to solar activity (not limited to our lower atmosphere, but covering the space between the Sun and the Earth), and of the most important space-weather issues of recent years. I can only emphasize that, compared to Earth climate and meteorology, the contribution of statisticians to space weather is almost nonexistent. I have frequently witnessed crude eyeballing, instead of statistics, used for analyzing data and quantifying images in solar physics. Luckily, a few articles discussing statistics can be found, and my discussion focuses on these papers, while leaving room for solar physicists to chip in on how space weather is treated statistically and how they might collaborate with statisticians.

By the way, I have no intention of degrading “eyeballing” in astronomers’ data analysis. The statistical methods under EDA (exploratory data analysis, whose counterpart is CDA, confirmatory data analysis, or statistical inference) are basically eyeballing equipped with technical jargon and the basics of probability theory. EDA is important for doubting every step in astronomers’ chi-square methods; without such diagnostics and visualization, choosing the right statistical strategy for a real data set is almost impossible. I used the word “crude” because, instead of using edge-detection algorithms, edges are often drawn by hand via eyeballing. My other disclaimer is that there are brilliant image-processing and computer-vision strategies developed by astronomers, which I am not going to present; I am focusing on the small areas of statistics related to space weather and its forecasting.

Statistical Assessment of Photospheric Magnetic Features in Imminent Solar Flare Predictions by Song et al. (2009) SoPh. v. 254, p.101.

Their forte is “logistic regression,” a statistical model not often seen in astronomy. It is used when modeling binary responses (or categorical responses such as head or tail; agree, neutral, or disagree) against a collection of predictors, i.e., classification with multiple features or variables (astronomers might prefer to call these “parameters”). The issue of variable selection is also discussed, e.g., *L_{gnl} to be the most powerful predictor*. Their training set was carefully discussed from the solar-physics perspective. Contrary to their claim to be the first to use logistic regression to predict solar flares, there was another paper a few years earlier that used logistic regression to predict geomagnetic storms or coronal mass ejections; their claim of priority holds only if flares and CMEs are treated as exclusive events.
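For readers unfamiliar with the model: logistic regression ties a binary response (flare / no flare) to predictors through the log-odds. Here is a minimal numpy sketch fitted by gradient ascent on synthetic data; the single predictor, its true slope of 1.5, and every number below are made up for illustration, not taken from the paper:

```python
import numpy as np

def fit_logistic(X, y, iters=2000, lr=0.5):
    """Fit p(y=1|x) = 1 / (1 + exp(-(b0 + b.x))) by gradient ascent
    on the log-likelihood (average gradient per step)."""
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))        # current fitted probabilities
        w += lr * Xb.T @ (y - p) / len(y)        # gradient of mean log-likelihood
    return w

# Synthetic data: one standardized predictor with true slope 1.5
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 1000)
p_true = 1.0 / (1.0 + np.exp(-(0.2 + 1.5 * x)))
y = (rng.random(1000) < p_true).astype(float)

w = fit_logistic(x[:, None], y)   # w[1] should recover a positive slope near 1.5
```

The fitted `w[1]` plays the role of a predictor’s “power”: variable selection in these papers amounts to asking which such coefficients are significantly nonzero.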

The Challenge of Predicting the Occurrence of Intense Storms by Srivastava (2006) J.Astrophys. Astr. v.27, pp.237-242

The probability of storm occurrence is the response in a logistic regression model whose predictors are CME-related variables, including the latitude and longitude of the CME’s origin, and interplanetary inputs such as shock speed, ram pressure, and solar-wind measures. Cross-validation was performed. A comment is made that the initial speed of a CME might be the most reliable predictor, but there is no extensive discussion of variable or model selection.

Personally speaking, both publications^{[1]} could be more statistically rigorous, discussing the various challenges of logistic regression from the statistical-learning/classification perspective and from the model/variable-selection aspect, in order to define better-behaved and statistically rigorous classifiers.

Often we plan our days according to the weather forecast (although we grumble that forecasts are wrong, almost everyone relies on the numbers and predictions from the weather people). Even if not 100% reliable, those forecasts make our lives easier, and more reliable models are under development. Forecasting space weather with the help of statistics, on the other hand, is as yet unthinkable. Scientists and engineers understand, however, that reliable space-weather models would help in planning space missions and in switching satellites into safe mode. At the least, I know that with flare or CME forecasting models in place, fewer scientists and engineers would need to wake up in the middle of the night because of otherwise unforeseen storms from the Sun.

- I thought I had collected more papers under “statistics” and “space weather,” not just these two; a few more are probably buried somewhere. It is hard to believe such a rich field remains untouched by statisticians. I would very much appreciate your forwarding relevant papers, and I will gradually add them.

After meeting Prof. Herman Chernoff unexpectedly (I didn’t know he was Professor Emeritus at Harvard), the urge revived, but I still didn’t have data. Alas, another bout of absent-mindedness: I don’t understand why I didn’t realize until today that I already had the data for trying Chernoff faces: XAtlas. The data and its full description can be found at the XAtlas website (click). For Chernoff faces, the references suggested in Wiki:Chernoff face are good. I believe some folks are already familiar with Chernoff faces from a New York Times article last year, listed in the Wiki entry (or from a subset characterized by baseball lovers?).

Capella is an X-ray-bright star observed multiple times for Chandra calibration. In the figures below, I labeled each face with one of 16 ObsIDs, out of the 18+ Capella observations available (the last time I checked the Chandra Data Archive, there were 18). These 16 are high-resolution observations from which various metrics, such as interesting line ratios and line-to-continuum ratios, can be extracted. I was told that optically it is hard to find any evidence that Capella experienced catastrophic changes during the Chandra mission (about 10 years in orbit), but the story in X-rays could well be different: even over a dismally short period (10 years is a flash or less for a star), Capella could have revealed short-timescale high-energy activity to Chandra. I just wanted to illustrate that Chernoff faces could help visualize such changes, or any peculiarities, through interpretation-friendly facial expressions (studies have confirmed even babies’ ability to recognize facial expressions). So, what do you think? Do the faces look similar or different to you? Can you offer me astronomical reasons why a certain face (ObsID) differs from the rest?

**p.s.** To draw these Chernoff faces, check the descriptions of the R functions faces() (which yields the left figure) and faces2() (which yields the right figure) by clicking on the function of your interest. There are other variations, and other data-analysis systems offer differently fashioned tools for drawing Chernoff faces to explore multivariate data. Requests for plots in pdf are welcome; these jpeg files look too coarse on my screen.

**p.p.s.** The variables used for these faces are line ratios and line-to-continuum ratios. The order of the input variables can change a face’s countenance, but the overall impression will not: a face with distinctive features will still look different from the others even after the order of the metrics is scrambled or a different Chernoff-face tool is used. The mapping, say, from an astronomical metric to the length of the lips was not studied in this post.

**p.p.p.s.** Some data points are statistical outliers, and I am not sure how to explain the strange numbers (unrealistic values for line ratios). I hope astronomers can help me interpret those peculiar line and continuum ratios. My role is to show that statistics can motivate astronomers toward new discoveries and to offer different graphics tools for enhancing visualization. I hope these faces motivate some astronomers to look into Capella in XAtlas (and beyond) in detail, through different spectacles, and to find the reasons for the different facial expressions across the Capella X-ray observations. ObsID 1199, in particular, is the most questionable to me.

Personally, it was a highly anticipated symposium at the CfA, because I was fascinated by the contributions of the female computers (or astronomers) made here about a century ago, even though at that time women were not considered scientists but mere assistants for tedious jobs.

I learned more history, particularly about Ms. Henrietta Leavitt, who first discerned the period-luminosity relation in Cepheid stars. Hers was painstaking work that cannot even be compared to finding a needle in a haystack. It was more like finding needles from one particular manufacturer among countless haystacks, each of which may or may not contain such a needle. The worst part is that needles are needles: few carry tags, like your clothing does, for identification.

However, I was disappointed for two reasons. The first is minor but valuable. The author (George Johnson) of the book *Miss Leavitt’s Stars* (which I haven’t read; actually, I didn’t know it existed until today) answered my question by saying that he does not think Ms. Leavitt was exposed to statistical research. Finding a relationship between period and luminosity is closely related to simple regression analysis, and I had assumed she knew enough statistics to connect her discovery to what is now called Leavitt’s law. This disappointment led me to wonder when statistical analysis first kicked in in astronomy, particularly in studies of relationships involving the standard candle, aimed at correctly estimating the Hubble constant.

The second reason for my disappointment was very poorly executed statistics. Obviously, it is not Ms. Leavitt who imposed this strange tradition of statistical malpractice (or carelessness) in regression analysis among astronomers. Whenever speakers brought out scatter plots with regression lines and data points with error bars, I kept murmuring silently, “Oh my, how could they do that so blindly?” There were statistical issues to be addressed before stating that the results support a certain hypothesis, rather than drawing a straight line and claiming (“see how good the slope is”) that the Hubble constant is # plus or minus $. With a high-leverage point on the right and fewer than a dozen points clumped in the left corner, and without the various diagnostic tools of regression analysis, one cannot claim that the straight line is a good fit, nor that the analysis backs up the hypothesis. Perhaps these statistical diagnoses would only have supported their concluding estimates and uncertainties, and so were omitted; but my feeling on seeing the plots tells me that a simple bootstrap could show that their estimates are not as accurate as they think. Until you try, though, you don’t know. I may politely email those speakers to ask for the data points behind their scatter plots. Unfortunately, I suspect no one will give me their data for such an unjust cause; even for good causes I have experienced indifference (I might do the same in their position, so no complaints!).
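The bootstrap check mentioned above is cheap to run. Here is a toy numpy sketch on fabricated data shaped like the plots described, a clump of points plus one high-leverage point; none of these numbers come from the talks:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated data: 10 points clumped at small x, plus one
# high-leverage point far to the right (true slope = 2).
x = np.append(rng.uniform(0, 1, 10), 8.0)
y = 2.0 * x + rng.normal(0, 0.5, 11)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]   # least-squares slope

# Case resampling bootstrap of the fitted slope
boot = []
idx = np.arange(len(x))
for _ in range(2000):
    s = rng.choice(idx, size=len(idx), replace=True)
    if np.ptp(x[s]) > 0:            # need at least two distinct x values
        boot.append(slope(x[s], y[s]))
boot = np.array(boot)
lo, hi = np.percentile(boot, [2.5, 97.5])   # bootstrap 95% interval
```

Roughly a third of the resamples omit the leverage point entirely, and in those the slope is set by the noisy clump alone, so the bootstrap interval comes out far wider than the naive least-squares error bar. That is exactly the honesty check the speakers skipped.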

Regardless of these disappointments of my statistical instincts, it was a scientifically very interesting symposium, and I would like to thank those who made the great effort to put it together. It helped resolve some of my craving to know about Ms. Leavitt and satisfied one of my old wishes: that her work be recognized under her name. If there were one, I would wish to attend a symposium commemorating the centennial of Student’s t this year as well. It is always good to know the history better in order to move forward.

As an aside, during G. Johnson’s talk he showed pictures of the apartment building, which I see every day, where Ms. Leavitt resided until her death, and of Mount Auburn Cemetery, a very beautiful, calming place, where she was buried. I wish she had lived longer to see a glimpse of her great contribution to astronomical science.

The CfA is celebrating the 100th anniversary of the discovery of the Cepheid period-luminosity relation on Nov 6, 2008. See http://www.cfa.harvard.edu/events/2008/leavitt/ for details.

**[Update 10/03]** For a nice introduction to the story of Henrietta Swan Leavitt, listen to this Perimeter Institute talk by George Johnson: http://pirsa.org/06050003/

**[Update 11/06]** The full program is now available. The symposium begins at Noon today.

The pictures come from space and ground observatories: SoHO, TRACE, Hinode, STEREO, etc. It goes without saying that the images are stunning, and some are even animated. The real kicker is that images such as these are being acquired by the hundreds, every hour upon the hour, 24/7/365.25. It is like sipping from a firehose: nobody can sit there and look at them all, so who knows what we are missing? Can statistics help? Can we automate a statistically robust “interestingness” criterion to filter the data stream for humans to follow up on?
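One crude way to make “interestingness” operational: reduce each image to a summary number (total flux, say) and flag frames that deviate from a running baseline by several sigma. A minimal sketch, with an entirely synthetic stream and one injected anomaly:

```python
import numpy as np

def interesting(stream, k=3.0, window=50):
    """Flag frames whose summary statistic deviates by more than k sigma
    from the mean of a trailing window; a crude 'interestingness' filter.

    `stream` is a sequence of per-image summary numbers (e.g. total flux)."""
    flags = []
    for i, v in enumerate(stream):
        past = stream[max(0, i - window):i]
        if len(past) >= 10:              # need a minimal baseline first
            mu, sd = np.mean(past), np.std(past)
            flags.append(bool(sd > 0 and abs(v - mu) > k * sd))
        else:
            flags.append(False)
    return flags

# Synthetic stream of 200 frame summaries, with one anomalous frame injected
base = list(np.random.default_rng(3).normal(100.0, 1.0, 200))
base[150] += 25.0                        # the "flare" we hope to catch
flags = interesting(base)                # flags[150] should come up True
```

A real pipeline would use robust statistics (median and MAD rather than mean and sigma) and a physically motivated summary, but even this toy version turns “nobody can look at them all” into “look only at the flagged ones.”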

The flux at Earth due to an atomic transition *u → l* from a volume element *δV* at a location *r*,

I_{ul}(r) = (1/4π) (1/d(r)^{2}) A(Z,r) G_{ul}(n_{e}(r),T_{e}(r)) n_{e}(r)^{2} δV(r)

where *n_{e}* is the electron density, *T_{e}* is the electron temperature, *d* is the distance to the source, *A(Z)* is the abundance of element Z, and *G_{ul}* is the emissivity of the transition.

We can combine the flux from all the points in the field of view that arise from plasma at the same temperature,

I_{ul}(T_{e}) = (1/4π) ∑_{r|T_{e}} (1/d(r)^{2}) A(Z,r) G_{ul}(n_{e}(r),T_{e}) n_{e}(r)^{2} δV(r)

Assuming that *A(Z,r)* and *n_{e}(r)* do not vary over the points in the summation,

I_{ul}(T_{e}) ≈ (1/4π d^{2}) G_{ul}(n_{e},T_{e}) A(Z) n_{e}^{2} (ΔV/Δlog T_{e}) Δlog T_{e}

and hence the total line flux due to emission at all temperatures,

I_{ul} = ∑_{T_{e}} (1/4π d^{2}) A(Z) G_{ul}(n_{e},T_{e}) DEM(T_{e}) Δlog T_{e}

The quantity

DEM(T_{e}) = n_{e}^{2} (ΔV/Δlog T_{e})

is called the Differential Emission Measure and is a very useful summary of the temperature structure of stellar coronae. It is typically reported in units of **[cm^{-3}]** (or **[cm^{-5}]** when the emitting volume is defined per unit area).

The expression for the line flux is an instance of a Fredholm Equation of the First Kind, and the *DEM(T_{e})* solution is thus unstable and subject to high-frequency oscillations. A whole industry has grown up around trying to derive DEMs from often highly unreliable datasets.
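Discretized over temperature bins, the line-flux sum above is just a weighted dot product. A toy numerical sketch, in which the Gaussian contribution function and the power-law DEM are illustrative assumptions, not values from any atomic database:

```python
import numpy as np

# Grid in log10 T_e and bin width
logT = np.arange(5.5, 7.5, 0.1)
dlogT = 0.1

# Toy contribution function G_ul(n_e, T_e): sharply peaked near logT = 6.8
G = np.exp(-0.5 * ((logT - 6.8) / 0.15) ** 2)

# Toy DEM(T_e) [cm^-3]: a rising power law in temperature
DEM = 10.0 ** (21.0 + 1.5 * (logT - 6.0))

# Constant lumping together A(Z) / (4 pi d^2); arbitrary scale here
C = 1.0e-25

# Discretized line flux: I_ul = sum_T C * G_ul(T) * DEM(T) * dlogT
I_ul = np.sum(C * G * DEM * dlogT)
```

Because *G* is sharply peaked, wildly different DEM values away from the peak produce nearly the same *I_{ul}*; that degeneracy is the instability of the inverse problem in a nutshell.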

The flux [ergs s^{-1} cm^{-2} sr^{-1}] from an optically thin emission line that arises due to a transition between energy levels j and i in an ionic species Z^{+I} is simply written: it is the product of the probability of the transition, *A_{ji}(Z,I)* (aka the Einstein coefficient), and the number of particles of the species that exist in the upper level of the transition.

But this apparently purely atomic calculation can be reformed and rewritten, after some algebra, in terms of quantities that are astrophysically more meaningful. The equations below walk you through the transformation from atomic physics to quantities that can be separated into different hierarchies of astrophysical source properties: from things that do not change at all from one source to another, to things that are likely not the same even along the line of sight.

All of the quantities that depend only on the atomic physics can be pulled together into the emissivity of the transition, *e_{ji}(N_{e},T_{e},Z,I)*. This is (mostly) independent of the physical conditions at the source and is generally treated as invariant except for changes due to the electron number density. It can therefore be calculated beforehand, and indeed codes such as CHIANTI, SPEX, and APEC do just that. The abundance, by contrast, is a property of the astrophysical source.

It is important to note that each of the terms listed above has associated model or measurement uncertainties. Often the Einstein coefficients and the energies of the emission are not experimentally verified, and the level populations are approximate calculations owing to the complexity of the level structure of the species in question. Typical ion-balance calculations assume that the plasma is in thermodynamic equilibrium, which is often not a good assumption. Abundances are known to vary radically (by factors greater than 2) across a source. And finally, except at high temperatures and low densities (such as in stellar coronae), the assumption of zero opacity (i.e., that any emitted photon escapes to infinity without any scatterings) is not applicable, and radiative-transfer effects must be included.

A brief word about the units. Astronomers tend to use cgs, not SI, so the flux usually has units of [ergs s^{-1} cm^{-2} sr^{-1}], and the emissivity *e_{ji}* is in [ph cm^{3} s^{-1}].

The emission measure is a story by itself, one best left alone for another time.
