Last Updated: 2008apr07
AstroStatistics Special Session
Methods and Techniques
High-Energy Astrophysics Division Meeting, Los Angeles
Monday, 12:45-2:00pm, Mar 31, 2008
Talks
Posters from the AstroStatistics and Data Analysis and Modeling Techniques sessions
- [04.02] Aldcroft, T.L., Green, P., Kashyap, V., Kim, D., & Connelly, J., Robust Source Detection Limits for Chandra Observations
- Abstract
We present a novel method for estimating the source detection limits in Chandra observations using the count threshold map
produced by the CIAO wavdetect tool. This is particularly useful for multiwavelength analysis of X-ray non-detections at the
positions of previously known sources, for instance optically selected AGN from the SDSS that are covered by the ChaMP survey.
Because the Chandra PSF and detector characteristics are highly position dependent, a robust estimate of the detection limit at a
particular location is not easily obtained. However, the CIAO wavdetect tool can produce a count threshold map at each wavelet
scale that explicitly accounts for such effects. Taking advantage of a large body of source detection simulations previously
done for the ChaMP effective area calculation, we derive an empirical correlation that uses the threshold map to predict the
spatially dependent count limit at which 50% and 90% of sources are detected. We have verified this algorithm using the 2 Msec
Chandra Deep Field South data. (A schematic sketch of such a correlation follows this entry.)
- Haiku [.pdf]
- poster [.pdf]
- See also: #03.02, Kashyap et al.
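- Illustrative sketch
The abstract does not give the functional form of the empirical correlation, so the following is a minimal sketch assuming a
power-law mapping from the wavdetect threshold-map value to the completeness count limit; the function name and coefficients
are placeholders, not the authors' ChaMP calibration.

    import numpy as np

    # Placeholder power-law coefficients (norm, slope) per completeness level;
    # the real values would come from the ChaMP detection simulations.
    COEFF = {0.5: (1.3, 0.95), 0.9: (2.1, 0.95)}

    def count_limit(threshold_map_value, completeness=0.9):
        """Predict the count limit at which `completeness` of sources are
        detected, given the wavdetect threshold-map value at that position."""
        norm, slope = COEFF[completeness]
        return norm * np.asarray(threshold_map_value) ** slope

    # e.g. the threshold map reads 4.2 counts at the position of an SDSS AGN
    print(count_limit(4.2, completeness=0.9))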
-
- [04.05] Boone, L.M., Simulations of Radiative Signatures from Detailed Particle Dynamics
- Abstract
We present current work on detailed simulations of radiative signatures from relativistic particles. In particular,
inverse-Compton spectra for a variety of scenarios are derived from the perspective of individual interactions between photons
and a leptonic scattering population. Spectra are compared with those of more traditional modeling approaches in an effort to
better quantify assumptions employed by such models, and to provide an assessment of the current simulation's integrity.
-
- [03.01] Connors, A., van Dyk, D., & Chiang, J., Quantifying Doubt and Confidence in Image "Deconvolution"
- 2008HEAD...10.0301C [ADS]
- Abstract
How many times have you viewed a strikingly presented astronomical image --- and, in doubt, asked, "Where are the error bars?"
Such doubt can come from several places: (1) the purely statistical uncertainty of the final measurements (e.g. Poisson
statistics); (2) possible mismatch between one's astrophysical sky model and "sky truth" (source model uncertainty); (3)
relatedly, the additional effect of an uncertain background shape on measurements of the source of interest (background model
uncertainty); and (4) mismatch between one's current best instrument model and "instrument truth" (instrument or calibration
uncertainty). Additionally, there can be doubt about the methods used to process the data -- e.g. "How did they choose their
stopping or smoothing parameters?"
A number of different groups attack these kinds of problems quite generally by embedding very flexible, non- or semi-parametric
models (from wavelets to Bayesian Blocks to Markov random fields) in full probabilistic frameworks (i.e. including "forward
fitting"). These probability frameworks incorporate sky and instrument models and model mismatch, as well as "fitting" any
smoothing or stopping parameters. The CHASC group has used multiscale and MRF models of diffuse sky emission, plus
astrophysics-based sky models, together with a hierarchical Bayesian probability structure. We use MCMC to sample the full,
high-dimensional posterior probability, which explicitly incorporates all the doubts mentioned above. We then choose two (or
more) physically based summary statistics to quantify (and display) the breadth of the uncertainties. But our "credible regions"
have an unusual shape: in one example, the "error contours" span two spatial dimensions and one intensity dimension. We have
fun analyzing and comparing differing CGRO/EGRET viewing periods, including those of the 3C273/3C279 region, using our
multiscale model to capture varying and unexpected emission. (A toy version of such posterior sampling follows this entry.)
The authors acknowledge NASA AISRP and NSF interdisciplinary funding.
- Haiku [.pdf]
- poster [.pdf]
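- Illustrative sketch
A toy version of the kind of posterior sampling described above, under the simplest possible assumptions: a single point
source with a Gaussian PSF on a flat background in a Poisson counts image, sampled with random-walk Metropolis. The real
analysis uses multiscale/MRF sky models and full instrument responses; everything here is a placeholder.

    import numpy as np

    rng = np.random.default_rng(42)

    # Toy Poisson counts image: flat background plus a Gaussian-PSF source
    ny, nx, bkg, psf_sig = 32, 32, 0.5, 1.5
    yy, xx = np.mgrid[0:ny, 0:nx]

    def model(x0, y0, flux):
        psf = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * psf_sig**2))
        return bkg + flux * psf / (2 * np.pi * psf_sig**2)

    data = rng.poisson(model(15.0, 17.0, 200.0))

    def log_post(theta):
        x0, y0, flux = theta
        if flux <= 0:
            return -np.inf
        mu = model(x0, y0, flux)
        return np.sum(data * np.log(mu) - mu)   # Poisson log-likelihood, flat priors

    # Random-walk Metropolis over (x0, y0, flux)
    theta = np.array([16.0, 16.0, 150.0])
    lp = log_post(theta)
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, [0.1, 0.1, 5.0])
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    chain = np.array(chain[5000:])              # discard burn-in

    # Joint credible summaries: two spatial dimensions plus one intensity
    print(np.percentile(chain, [5, 95], axis=0))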
-
- [03.04] Chernenko, A., Pushing The Limits Of Low Count Rate Gamma-ray Spectroscopy With Global Fits: Detection, Estimation, Classification
- Abstract
With the launch of highly sensitive gamma-ray experiments, thousands of highly variable and transient sources, such as
gamma-ray bursts, have been recorded. For a number of reasons, such as the very broad source brightness distribution, strong
and fast spectral variability, the tremendous diversity of light curves, and the cosmological redshifts that are frequently
involved, it has been very difficult to establish robust, unbiased, cosmologically invariant parameters that characterize the
spectral variability of a source regardless of its light curve and brightness.
As a solution, we propose a general method for the analysis of time-resolved gamma-ray spectra of astrophysical transients:
the Global Fit Analysis (GFA), which is based on a spectral evolution model that is global for a given transient. Since each
Global Fit draws on the entire source fluence, together with prior information such as the assumed internal correlation of the
spectral parameters, it is very robust and allows spectroscopy of much fainter sources than could normally be studied by
traditional spectroscopy of individual, independent spectra along the light curve. Alternatively, for a given transient,
Global Fits allow far better time resolution.
The robustness of Global Fits also makes batch Global Fits quite feasible, so that the parameters can be estimated without any
human intervention for a large number of sources, such as the entire set of sources observed by a given experiment. In this
paper we illustrate such an analysis with gamma-ray bursts recorded by the BATSE experiment.
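- Illustrative sketch
A schematic of the Global Fit idea under stated assumptions: one joint Poisson likelihood over all time bins, with a
placeholder evolution law (a power-law photon index drifting linearly with the accumulated fluence fraction) standing in for
the authors' actual spectral evolution model.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    energies = np.linspace(30.0, 300.0, 16)    # keV channel centers (toy)
    exposure = np.full(10, 0.5)                # seconds per time bin (toy)

    def model(p, fluence_frac):
        """Placeholder evolution law: the photon index is a linear function
        of the fraction of the total fluence accumulated so far."""
        norm, a0, a1 = p
        index = a0 + a1 * fluence_frac[:, None]
        return exposure[:, None] * norm * energies[None, :] ** (-index)

    # Simulate a toy burst, then fit ALL time bins with one joint likelihood
    counts = rng.poisson(model([2e4, 1.5, 0.8], np.linspace(0.05, 0.95, 10)))

    def neg_loglike(p):
        if p[0] <= 0:
            return np.inf
        frac = np.cumsum(counts.sum(1)) / counts.sum()  # observed fluence fraction
        mu = model(p, frac)
        return -(counts * np.log(mu) - mu).sum()        # joint Poisson -log L

    fit = minimize(neg_loglike, x0=[1e4, 1.0, 0.0], method="Nelder-Mead")
    print(fit.x)   # three global parameters describe the whole time series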
-
- [04.03] Davis, J.E., A Pile-up Model For The Chandra HETGS
- Abstract
The observation of a bright X-ray source by the Chandra X-ray Observatory's High Energy Transmission Grating Spectrometer
(HETGS) can be compromised by photon pile-up in the CCDs, leading to false absorption features in the dispersed spectrum. A
statistical method for handling pile-up in non-dispersive Chandra spectra has been widely available for several years. Here, a
new model of photon pile-up in dispersed spectra will be presented. The effectiveness of the model will be illustrated by its
application to HETGS observations of several bright X-ray binaries, including Cygnus X-1, Circinus X-1, and GX 5-1.
This work is supported under contract SV3-73016 between MIT and the Smithsonian Astrophysical Observatory, which operates the
Chandra X-ray Center on behalf of NASA under contract NAS8-03060.
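- Illustrative sketch
A back-of-the-envelope Poisson estimate of the piled fraction, not the model presented here (which treats grade migration and
the dispersed-spectrum geometry in detail): pile-up occurs when two or more photons land in one detection cell within a single
CCD frame.

    import numpy as np

    def pileup_fraction(lam):
        """Fraction of detected events affected by pile-up, crudely estimated
        as P(>=2 photons)/P(>=1 photon) for a cell receiving `lam` photons
        per frame on average (Poisson)."""
        lam = float(lam)
        p_ge1 = 1.0 - np.exp(-lam)
        p_ge2 = p_ge1 - lam * np.exp(-lam)
        return p_ge2 / p_ge1 if p_ge1 > 0 else 0.0

    # e.g. 0.2 photons per 3x3-pixel detection cell per 3.2 s frame
    print(pileup_fraction(0.2))   # ~0.10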
-
- [04.06] Kaastra, J.S., Lanz, T., Hubeny, I., & Paerels, F., White Dwarf Spectra and Calibration of X-ray Grating Spectrometers
- Abstract
White dwarf spectra have been widely used as calibration sources for X-ray and EUV instruments. The in-flight effective area
calibration of the RGS on XMM-Newton and of the LETGS on Chandra depends upon the availability of reliable calibration sources.
We calculate a grid of model atmospheres for Sirius B and HZ 43A, and adjust the parameters using several constraints until the
ratio of the spectra of the two stars agrees with the ratio observed with the LETGS on Chandra. This ratio is independent of
any errors in the effective area of the LETGS. We quantify how accurately the effective area of the LETGS can be determined
with our method, and find interesting constraints on the parameters of both stars. We discuss the role of the Lyman
pseudo-continuum in the calculation of the spectrum of Sirius B. The treatment of that pseudo-continuum appears to play a
fundamental role in the ultimate accuracy that can be reached. With the proper treatment of the pseudo-continuum, the soft
X-ray flux of both stars, and thereby the absolute effective area of the LETGS, can be determined with an uncertainty of less
than 5% (the area-cancellation argument is sketched below).
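- Illustrative sketch
Schematically, the reason the ratio method is insensitive to effective-area errors (ignoring, for this sketch, redistribution
and higher spectral orders): with C the observed count spectrum, A the effective area, and f the model flux,

    \frac{C_{\rm SirB}(\lambda)}{C_{\rm HZ43A}(\lambda)}
      = \frac{A(\lambda)\, f_{\rm SirB}(\lambda)}{A(\lambda)\, f_{\rm HZ43A}(\lambda)}
      = \frac{f_{\rm SirB}(\lambda)}{f_{\rm HZ43A}(\lambda)} ,

so A(lambda) cancels; once the stellar parameters are pinned down by this ratio (and external constraints), the absolute area
follows from either star alone as A(lambda) = C(lambda) / f_model(lambda).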
-
- [03.02] Kashyap, V., van Dyk, D., Connors, A., Freeman, P., Siemiginowska, A., Zezas, A., & the SAMSI-SaFeDe Collaboration, What Is An Upper Limit?
- 2008HEAD...10.0302K [ADS]
- Abstract
When a known source is undetected at some statistical significance during an observation, it is customary to state the upper
limit on its intensity. This limit is taken to mean the largest intrinsic intensity that the source can have and yet remain
undetected. (Or equivalently, the smallest intrinsic intensity it can have before its detection probability falls below a
certain threshold.)
Note that this definition differs from that of the parameter confidence bounds which are in common usage and are statistically
well understood; the shared nomenclature has led to a confusing literature trail.
Here we describe the upper limit, or the detection limit, as used by astronomers, in a statistically coherent fashion. We show
that it follows naturally from the calculation of statistical power, and describe examples that illustrate how it works and
how to calculate it in real-world cases (see also Aldcroft et al., this conference; a minimal worked example follows this
entry).
This work was supported by the Chandra X-ray Center NASA contract NAS8-39073 and NSF grant DMS 04-06085.
- Haiku [.pdf]
- poster [.pdf]
- See also: #04.02, Aldcroft et al.
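- Illustrative sketch
A minimal Poisson version of the power-based definition, under simple assumptions (known expected background; detection
declared when the observed counts exceed a threshold set by a false-detection probability alpha): the upper limit at power
beta is the smallest source intensity whose detection probability reaches beta.

    from scipy.stats import poisson

    def detection_threshold(bkg, alpha=0.001):
        """Smallest k such that background alone gives N > k with
        probability <= alpha (controls the false-detection rate)."""
        k = 0
        while poisson.sf(k, bkg) > alpha:   # sf(k, mu) = P(N > k)
            k += 1
        return k

    def upper_limit(bkg, alpha=0.001, beta=0.5, ds=0.01):
        """Smallest source intensity s with detection probability >= beta,
        i.e. P(N > threshold | bkg + s) >= beta."""
        k = detection_threshold(bkg, alpha)
        s = 0.0
        while poisson.sf(k, bkg + s) < beta:
            s += ds
        return s

    # 50%-power upper limit for an expected background of 3 counts
    print(upper_limit(bkg=3.0))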
-
- [03.03] Loredo, T., & Wasserman, I., Modeling Populations Using Heterogeneous Data, with Application to GRBs
- Abstract
Astronomers often follow up survey observations with supplemental observations that provide additional information about a
subset of surveyed sources. A motivating example is gamma-ray bursts (GRBs): survey missions such as CGRO and Swift provide
basic information (e.g., direction, peak flux) for all bursts, but for a subset of bursts with counterparts at other
wavelengths, other data are available, such as host galaxy redshifts or isotropic energy estimates. This heterogeneity
significantly complicates global (population-level) analyses. Building on our earlier Bayesian framework for analyzing GRB and
other survey data, we have developed a "data fusion" methodology that can optimally combine survey data from various sources.
Using analysis of the GRB spatial and luminosity distribution as a concrete example, we describe the overall approach, present
preliminary results for GRBs, and highlight the benefits of Bayesian data fusion over more conventional approaches (the fused
likelihood is sketched below). This work is partially funded by the NASA Swift GI program.
- Haiku [.pdf]
- poster [.pdf]
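- Illustrative sketch
The heart of the fusion idea, in schematic form with placeholder densities: every burst enters one population likelihood, with
redshift-measured bursts contributing the joint (flux, z) density and the rest contributing the flux density marginalized over
redshift.

    import numpy as np

    zgrid = np.linspace(0.01, 10.0, 500)

    def log_population_like(theta, flux_z_pairs, flux_only, pdf_fz):
        """pdf_fz(F, z, theta): population density of (peak flux, redshift).
        Bursts with measured z use it directly; flux-only bursts are
        marginalized over the redshift grid."""
        ll = sum(np.log(pdf_fz(F, z, theta)) for F, z in flux_z_pairs)
        for F in flux_only:
            ll += np.log(np.trapz(pdf_fz(F, zgrid, theta), zgrid))
        return ll

    # Toy density: log-normal in flux, flat in z (placeholders, no physics)
    def toy_pdf(F, z, theta):
        mu, sig = theta
        lognorm = (np.exp(-(np.log(F) - mu)**2 / (2 * sig**2))
                   / (F * sig * np.sqrt(2 * np.pi)))
        return lognorm * np.ones_like(z, dtype=float) / zgrid[-1]

    print(log_population_like((0.0, 1.0), [(1.2, 0.8), (0.5, 2.1)],
                              [2.0, 0.3], toy_pdf))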
-
- [41.15] Lee, H., Kashyap, V., Drake, J., Ratzlaff, P., Siemiginowska, A., Zezas, A., Connors, A., van Dyk, D., Park, T., & Izem, R., Incorporating Effective Area Uncertainties Into Spectral Fitting
- 2008HEAD...10.4115L [ADS]
- Abstract
We have developed a fast, robust, and general method to incorporate effective area calibration uncertainties in model fitting of
low-resolution spectra. Because such uncertainties are ignored during spectral fits, the error bars derived for model parameters
are generally underestimated. Incorporating them directly into spectral analysis with existing analysis packages is not possible
without extensive case-specific simulations, but it is possible to do so in a generalized manner in a Markov-Chain Monte Carlo
framework. We describe our implementation of this method here, in the context of recently codified Chandra effective area
uncertainties. We develop our method and apply it to both simulated and actual Chandra ACIS-S data, and estimate the posterior
probability densities of power-law model parameters that include the effects of such uncertainties. We describe a file format
based on the HEASARC ARF standard that will allow these uncertainties to be included during analysis in any astronomical
spectral fitting package (a schematic sketch follows this entry).
This research was supported by NASA/AISRP grant NNG06GF17G and NASA contract NAS8-39073 to the Chandra X-Ray Center.
- poster [.pdf]
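- Illustrative sketch
One simple way to approximate the described scheme, with everything a placeholder: the "calibration ensemble" is just the
nominal ARF times a smooth random wiggle, refreshed periodically inside a Metropolis sampler so that the parameter chain mixes
over plausible areas. The actual method uses the codified Chandra uncertainties and a proper hierarchical treatment.

    import numpy as np

    rng = np.random.default_rng(0)
    energies = np.linspace(0.5, 8.0, 64)                      # keV
    dE = energies[1] - energies[0]
    arf_nominal = 600.0 * np.exp(-(energies - 1.5)**2 / 8.0)  # cm^2, toy
    counts = rng.poisson(arf_nominal * 0.01 * energies**-1.8 * 1e4 * dE)

    def sample_arf():
        """Placeholder draw from the calibration-uncertainty ensemble."""
        wiggle = 1.0 + 0.05 * np.sin(energies * rng.uniform(0.5, 2.0)
                                     + rng.uniform(0.0, 2 * np.pi))
        return arf_nominal * wiggle

    def log_like(p, arf):
        norm, gamma = p
        if norm <= 0:
            return -np.inf
        mu = arf * norm * energies**-gamma * 1e4 * dE
        return np.sum(counts * np.log(mu) - mu)

    theta, arf = np.array([0.01, 1.8]), sample_arf()
    lp = log_like(theta, arf)
    chain = []
    for step in range(30000):
        if step % 10 == 0:               # refresh the ARF draw periodically
            arf = sample_arf()
            lp = log_like(theta, arf)
        prop = theta + rng.normal(0.0, [3e-4, 0.02])
        lp_prop = log_like(prop, arf)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    chain = np.array(chain[10000:])
    print(chain.std(axis=0))   # parameter scatter now includes ARF uncertainty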
-
- [04.01] Miyaji, T., Griffiths, R.E., & C-COSMOS Team, CSTACK: A Web-Based Stacking Analysis Tool for Deep/Wide Chandra Surveys
- Abstract
Stacking analysis is a powerful tool for probing the average X-ray properties of a class of X-ray faint objects, each of which
is fainter than the detection limit as an individual source. This is especially the case for deep/wide surveys with Chandra,
given its superb spatial resolution and the existence of survey data on fields with extensive multiwavelength coverage. We
present an easy-to-use web-based tool (http://saturn.phys.cmu.edu/cstack), which enables users to perform a stacking analysis
on a number of Chandra survey fields. Currently supported are C-COSMOS, the Extended Chandra Deep Field South (proprietary
access, password protected), and the Chandra Deep Fields South and North (guest access, user=password=guest). For an input
list of positions (e.g. galaxies selected from an optical catalog), the web tool returns stacked Chandra images in soft and
hard bands, along with statistical analysis results including bootstrap histograms (the essential steps are sketched below).
We present examples run on the C-COSMOS data. The next version will also include off-axis-dependent aperture sizes, automatic
exclusion of resolved sources, and histograms of stacks at random positions.
- Haiku [.pdf]
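- Illustrative sketch
The core of any stacking analysis, reduced to a few lines with toy data: sum counts in small boxes at the input positions and
bootstrap the source list for the significance histogram. CSTACK itself additionally handles exposure maps, off-axis-dependent
apertures, and exclusion of resolved sources.

    import numpy as np

    rng = np.random.default_rng(0)

    def stack(image, positions, r=3):
        """Total counts in (2r+1)x(2r+1) boxes around each (x, y)."""
        return np.array([image[y - r:y + r + 1, x - r:x + r + 1].sum()
                         for x, y in positions])

    # Toy image: flat Poisson background, no real sources
    image = rng.poisson(0.05, size=(512, 512))
    positions = rng.integers(10, 502, size=(200, 2))

    per_src = stack(image, positions)

    # Bootstrap the source list to get the stacked-signal distribution
    boot = np.array([per_src[rng.integers(0, 200, 200)].sum()
                     for _ in range(2000)])
    print(per_src.sum(), boot.mean(), boot.std())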
-
- [03.05] Ptak, A., Bayesian Analysis of X-ray Luminosity Functions
- Abstract
Often only a relatively small number of sources of a given class are detected in X-ray surveys, requiring careful handling of
the statistics. We previously addressed this issue in the case of the luminosity function of normal/starburst galaxies in the
GOODS area by fitting the luminosity functions using Markov-Chain Monte Carlo simulations. We are expanding on this technique to
include uncertainties in the redshifts (often photometric) and uncertainties in the spectral energy distributions of the
sources. We will also perform a Bayesian analysis of the correlations between X-ray emission and fluxes in other bands
(particularly radio, IR, and optical/NIR), as well as between X-ray luminosity and estimates of star-formation rate and
stellar mass. We will discuss our current results and progress to date, including the impact of the additional 1 Ms of CDF-S
data (a schematic sketch follows this entry).
- poster [.ppt]
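- Illustrative sketch
A schematic of one ingredient mentioned above, the photometric-redshift marginalization: each source's likelihood term
integrates a (placeholder) power-law luminosity function against that source's photo-z probability density. The distance and
LF forms are toys, not the analysis actually used.

    import numpy as np

    zgrid = np.linspace(0.1, 3.0, 200)

    def lum(flux, z):
        """Toy luminosity from flux and z, with a crude linear-in-z
        distance (no real cosmology; illustration only)."""
        d_cm = z * 1.3e28            # roughly c*z/H0 in cm
        return 4.0 * np.pi * d_cm**2 * flux

    def log_like(theta, fluxes, pz_pdfs):
        """Power-law LF phi(L) ~ L**-g on [Lmin, Lmax]; each source is
        marginalized over its photo-z PDF on zgrid."""
        g, Lmin, Lmax = theta
        norm = (1 - g) / (Lmax**(1 - g) - Lmin**(1 - g))
        ll = 0.0
        for F, pz in zip(fluxes, pz_pdfs):
            L = lum(F, zgrid)
            phi = np.where((L > Lmin) & (L < Lmax), norm * L**-g, 0.0)
            ll += np.log(np.trapz(phi * pz, zgrid) + 1e-300)
        return ll

    pz = np.exp(-0.5 * ((zgrid - 0.8) / 0.2)**2)
    pz /= np.trapz(pz, zgrid)
    print(log_like((1.6, 1e40, 1e44), [2e-15, 5e-16], [pz, pz]))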
-
- [04.04] Zoglauer, A.C., Boggs, S.E., Collmar, W., Kippen, M., Novikova, E., Weidenspointner, G., & Wunderer, C.B., List-Mode Likelihood Imaging Applied to COMPTEL Data
- Abstract
Eight years after the de-orbit of CGRO, COMPTEL's 1-30 MeV all-sky imaging performance, as well as its sensitivity to continuum
sources, remains unsurpassed. Moreover, no official successor mission currently exists that might challenge COMPTEL's
performance --- only GLAST is expected to improve upon COMPTEL above ~20 MeV. Since the time when the original COMPTEL data
analysis techniques were developed in the 1990s, the performance of state-of-the-art computers has increased by more than a
factor of 100, allowing for new analysis techniques that were unthinkable at that time. These encompass detailed orbital
background simulations including detector activation, Bayesian event selection techniques, and list-mode imaging.
In this work we concentrate on the list-mode maximum likelihood expectation maximization (ML-EM) imaging method. It allows all
measured information to be included in the imaging response of the instrument. As a consequence, this approach has the
capability to produce improved images compared to those from the original techniques applied to COMPTEL data. We are currently
adapting the list-mode approach to COMPTEL, determining the imaging response via simulations, and reanalyzing the data with
the ML-EM method.
We will show first results for the Galactic anti-center region and compare them to previous COMPTEL results. (The EM update at
the core of the method is sketched below.)
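- Illustrative sketch
The list-mode ML-EM update at the heart of the method, with a random toy response in place of a simulated COMPTEL response:
each event i carries a response row T[i, j] (the probability that image pixel j produced event i), and the image is updated
multiplicatively.

    import numpy as np

    rng = np.random.default_rng(0)

    def list_mode_mlem(T, sens, n_iter=50):
        """List-mode ML-EM: lam_j <- (lam_j / sens_j) * sum_i T_ij / (T lam)_i,
        where sens_j is the total detection probability of pixel j."""
        lam = np.ones(T.shape[1])
        for _ in range(n_iter):
            denom = T @ lam                       # expected rate per event
            lam *= (T.T @ (1.0 / denom)) / sens   # multiplicative EM update
        return lam

    # Toy problem: 1000 events, 64 image pixels, random response matrix
    T = rng.random((1000, 64))
    sens = T.mean(axis=0) + 0.1                   # placeholder sensitivities
    print(list_mode_mlem(T, sens)[:8])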
-
- * This is the sequel to the AstroStatistics Workshop held at HEAD-2004 in New Orleans
-
- * Question from the floor to DvD: "I have heard of something called BLoCXS -- is it available?"
- Answer: yes, but it is not robust enough for public release, so if you want to use it, contact the authors.
-
- * A general point made by EDF, and later reinforced by DvD in response to a question on why the F-test behaves so differently for emission and absorption lines (paraphrased): we can usually only tell when something is correct; when it isn't, nobody knows how wrong it can be.
-
- * Comment from the floor: the Protassov et al. method of calibrating the LRT via posterior predictive p-values (ppp) has been implemented as an XSPEC script (... by whom? ...)
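For reference, a schematic of the Protassov et al. (2002) calibration, independent of any particular fitting package (the
callables are placeholders the user must supply): simulate datasets from the posterior under the simpler model, refit both
models to each, and locate the observed likelihood-ratio statistic in that reference distribution.

    import numpy as np

    def ppp_lrt(data, fit_null, fit_alt, simulate_null, null_draws, rng=None):
        """Posterior predictive p-value for the LRT statistic.
        fit_null/fit_alt: callables returning the maximum log-likelihood of
        each model on a dataset; simulate_null(theta, rng) draws a fake
        dataset; null_draws: posterior draws of the null-model parameters."""
        rng = rng or np.random.default_rng(0)
        t_obs = 2.0 * (fit_alt(data) - fit_null(data))
        t_sim = [2.0 * (fit_alt(fake) - fit_null(fake))
                 for fake in (simulate_null(th, rng) for th in null_draws)]
        return np.mean(np.asarray(t_sim) >= t_obs)   # the ppp-value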
-
- * There is an AstroStatistics Weblog run by the CHASC group called the AstroStat Slog
-
- * Comment by Tom Loredo on MCMC being "just one of a suite of tools for Bayesian computation (the simplest building on commonly available results; i.e., the Laplace approximation)", also discussed in his 1999 ADASS paper on Computational Technology for Bayesian Inference
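For reference, the Laplace approximation mentioned there replaces the marginal-likelihood integral by a Gaussian expansion
about the posterior mode (d = number of parameters):

    Z \;=\; \int p(D \mid \theta)\, \pi(\theta)\, d\theta
      \;\approx\; p(D \mid \hat\theta)\, \pi(\hat\theta)\,
      (2\pi)^{d/2}\, \lvert \Sigma \rvert^{1/2},
    \qquad
    \Sigma^{-1} \;=\; -\nabla\nabla \ln\!\bigl[p(D \mid \theta)\,\pi(\theta)\bigr]\Big|_{\theta=\hat\theta} .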
-
CHASC