Presentations |
Brian Vegetabile (UC Irvine) & Tom Aldcroft (CfA/CXC) / Hong Jae Sub (Harvard) 12 Aug 2014 Noon EDT SciCen 706 |
- [Brian and Tom] Chandra X-Ray Observatory Aspect Camera Assembly Star Acquisition Analysis
- Abstract:
The Chandra X-ray Observatory celebrated its 15th year of service this
past July, a feat of engineering that can be attributed to a team of
talented engineers. Chandra was initially designed as a 5-year mission,
and as such, robust models of performance are necessary to ensure the
future success of the spacecraft. This talk gives an overview of the
Chandra Aspect Camera Assembly (ACA) and the current challenges it
faces with regard to star acquisition. We then present the current
methodology that attempts to capture the probability that the ACA
will acquire an individual star. Finally, we present two methods that
are planned for assessing this probability, along with preliminary
results. The methods discussed are logistic regression based on
maximum likelihood estimation, compared against the Bayesian probit
regression model from the 2006 paper by Holmes and Held.
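- As a rough illustration of the first method, here is a minimal Python
sketch of maximum-likelihood logistic regression fit by Newton-Raphson;
the magnitude covariate, parameter values, and simulated outcomes are
hypothetical stand-ins, not ACA data.

    import numpy as np

    rng = np.random.default_rng(0)
    mag = rng.uniform(6.0, 11.0, size=500)               # hypothetical star magnitudes
    p_true = 1.0 / (1.0 + np.exp(-(20.0 - 2.0 * mag)))   # assumed true acquisition curve
    acquired = rng.binomial(1, p_true)                   # 1 = acquired, 0 = failed

    X = np.column_stack([np.ones_like(mag), mag])        # design matrix with intercept
    beta = np.zeros(2)
    for _ in range(25):                                  # Newton-Raphson (IRLS) iterations
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (acquired - p)                      # score vector
        hess = X.T @ (X * (p * (1.0 - p))[:, None])      # observed information
        beta += np.linalg.solve(hess, grad)

    print("MLE (intercept, magnitude slope):", beta)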
-
- [Jaesub] Source Detection in Mosaicked NuSTAR Images Using a 'Trial Number' Map
- Abstract:
The Nuclear Spectroscopic Telescope ARray (NuSTAR), launched in June
2012, is the first focusing hard X-ray telescope, covering X-rays
from 3 to 80 keV. As part of the Galactic Plane Survey, we have
observed the Galactic Center region with a series of overlapping
pointings. It is desirable to perform source detection on the
mosaicked images in order to take full advantage of the photon
statistics. However, the relatively large point spread function and
dramatic changes of the exposure across the field make wavdetect and
other conventional methods unreliable. We introduce a new approach
for source detection using simple Poisson statistics. This approach
has been used in the past to validate selected faint sources detected
by wavdetect and other methods. Here we apply the technique to every
pixel in the entire image in search of new sources, by generating a
map of the trial numbers needed to produce the observed counts
through random fluctuations.
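- A minimal sketch of the trial-number idea in Python (simulated counts and
a flat assumed background; the injected source and threshold are illustrative):

    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(1)
    bkg = np.full((64, 64), 0.5)                   # assumed mean background counts/pixel
    counts = rng.poisson(bkg)
    counts[32, 32] += 12                           # inject a hypothetical point source

    # P(X >= observed) under the local background; sf(k-1, mu) = P(X >= k)
    pval = poisson.sf(counts - 1, bkg)
    trial_map = 1.0 / np.clip(pval, 1e-300, None)  # trial number ~ inverse p-value

    # flag pixels whose trial number far exceeds the number of trials available
    candidates = np.argwhere(trial_map > 100 * counts.size)
    print("candidate source pixels:", candidates)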
-
|
Aneta Siemiginowska (CfA) / Vinay Kashyap (CfA) 2 Sep 2014 Noon EDT SciCen 706 |
- AstroStatistics: What is it good for?
- Abstract:
We introduce astrostatistics concepts to Statistics students. We
describe the nature of high-energy X-ray and gamma-ray data and walk
through some examples of high-energy astronomical sources. We then
review the work done by CHASC graduate students.
- Presentation slides:
[.pdf] ;
[.pptx] ;
[.key '09]
- Video:
[YouTube] ;
[UCDavis]
-
|
Victor Pankratius (MIT/Haystack) 16 Sep 2014 12:30pm EDT SciCen 706 |
- Big Computing and Computer-Aided Discovery in Astronomy
- Abstract:
Next-generation astronomy needs to handle rapidly growing data
volumes from ground-based and space-based telescope networks. In
radio astronomy, for instance, the current generation of antenna
arrays produces data at rates of Tbits per second, and forthcoming
instruments will push these rates much higher. Human scientists are thus
becoming increasingly overwhelmed when attempting to opportunistically
explore Big Data.
As real-world phenomena at various wavelengths are digitized and
mapped to data, the scientific discovery process essentially becomes
a search process across multidimensional data sets. The extraction
of meaningful discoveries from this sea of data therefore requires
highly efficient and scalable machine assistance to enhance human
contextual understanding.
Computer-Aided Discovery uses automation in a new way to match
models and observations and support scientists in their search. The
NSF-supported computational infrastructure currently being developed
at MIT Haystack opens up new possibilities to answer questions such
as: What inferences can be drawn from an identified feature? What
does a finding mean and how does it fit into the big theoretical
picture? Does it contradict or confirm previously established models
and findings? How can hypotheses and ideas be tested effectively? To
achieve this, scientists can programmatically express hypothesized
scenarios, constraints, and model variations. Using programmable
crawlers in a cloud computing environment, this approach helps
delegate the automatic exploration of the combinatorial search space
of possible explanations in parallel on a variety of data sets.
- Bio:
Victor Pankratius is a computer scientist at MIT Haystack Observatory. He is the principal investigator of the NSF-supported computer-aided discovery project and currently leads efforts to advance astroinformatics at Haystack. He is also involved in projects such as ALMA Phasing, which enhances the ALMA Observatory with Very Long Baseline Interferometry capabilities, the Event Horizon Telescope, and the Radio Array of Portable Interferometric Detectors (RAPID). His track record includes research on parallel multicore systems and software engineering, as well as industry collaborations with partners such as Intel, Sun Labs, and Oracle. Contact him at pankrat [at] mit.edu, victorpankratius.com, or Twitter @vpankratius.
- Slides are in Dropbox/astrostat folder
-
|
Hyungsuk Tak (Harvard) 23 Sep 2014 12:30pm EDT SciCen 705 |
- Bayesian approach to time delay estimation
- Abstract:
Light rays from a quasar take different routes toward the Earth
when deflected by a strong gravitational field that obstructs their
way. The arrival times of these rays vary depending on the lengths
of the paths and the gravitational potentials that they traverse,
which leads to several images of the same source with lagged times.
These differences in arrival time are called time delays, and they
are used to calculate cosmological parameters, e.g., the Hubble
constant, H0. Though various grid-optimization methods for finding
time delay estimates have dominated this field, a fully Bayesian
model is promising because it turns a computationally expensive
grid-optimization problem into a simple posterior sampling scheme.
The model is based on a state-space representation for irregularly
observed time series data, with an Ornstein-Uhlenbeck process for
the unobserved underlying process. I present simulated data of a
doubly-lensed quasar with known delays, and one real quasar data
set, to show the method's effectiveness and accuracy.
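- A minimal Python sketch of the data-generating picture behind the model:
one Ornstein-Uhlenbeck latent light curve observed at irregular times, with
the second image a time-delayed, magnitude-shifted copy. All parameter
values are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0.0, 1000.0, 200))   # irregular observation times (days)
    tau, sigma, mu = 200.0, 0.05, 18.0           # assumed OU timescale, variability, mean
    delay, beta = 50.0, 0.3                      # hypothetical time delay, magnitude offset

    def ou_path(times):
        """Exact OU recursion between irregular, sorted time stamps."""
        x = np.empty(len(times))
        x[0] = rng.normal(mu, sigma * np.sqrt(tau / 2.0))    # stationary draw
        for i in range(1, len(times)):
            a = np.exp(-(times[i] - times[i - 1]) / tau)
            sd = sigma * np.sqrt(tau / 2.0 * (1.0 - a * a))  # conditional std dev
            x[i] = mu + a * (x[i - 1] - mu) + rng.normal(0.0, sd)
        return x

    # one latent curve on the union of times t and t - delay
    all_t = np.concatenate([t, t - delay])
    order = np.argsort(all_t)
    x = ou_path(all_t[order])
    latent = np.empty_like(x)
    latent[order] = x
    image_a = latent[:len(t)]            # image A: latent curve at times t
    image_b = latent[len(t):] + beta     # image B: delayed copy plus offset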
- Slides [.pdf]
-
|
Laura Brenneman (CfA) 30 Sep 2014 12:30pm EDT SciCen 706 |
- Measurement of a Black Hole Spin: X-ray spectroscopy of the active galaxy NGC 4151: an example of spectral model fitting and caveats
- Abstract:
I will present our X-ray spectral analysis of a 150,000-s X-ray
observation of the Seyfert 1.5 galaxy NGC 4151 taken with the NuSTAR
and Suzaku observatories. We show that our broadband observations can
disentangle the coronal emission, reflection, and absorption
properties of the active galactic nucleus (AGN). Despite spectral
complexity added by absorption variability during the observations, we
find strong evidence for relativistic reflection from an ionized inner
accretion disk. We compare our results with an alternative model
composed only of absorption and reprocessing of continuum emission in
several zones of material relatively far from the supermassive
black hole (SMBH). We measure an
increase in the total line-of-sight column density during a four-hour
time interval, which is the shortest timescale for X-ray absorption
variability yet observed in this source. These results demonstrate the
power of employing NuSTAR in conjunction with lower-energy X-ray
observatories such as Suzaku to measure the fundamental physical
properties of AGNs with the greatest accuracy and precision ever
achieved.
-
|
John Johnson (CfA) 07 Oct 2014 1pm EDT SciCen 706 |
- A way to teach Bayesian statistical methods to novices
- Abstract:
When a student first hears "Bayesian analysis," the words immediately
sound foreign and the concept can be daunting. I will present my method of teaching
Bayesian statistics to novices that leaves them with a set of fundamentals that can
serve as a starting point for solving any analysis problem (at least in principle).
When applying these basic fundamentals becomes computationally
intractable, the student is led to more advanced concepts.
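- A tiny worked example of the kind of fundamentals the talk refers to (toy
numbers, not the speaker's example): Bayes' rule applied by brute force on
a grid, with no advanced machinery.

    import numpy as np

    heads, flips = 7, 10                     # hypothetical data: 7 heads in 10 flips
    p = np.linspace(0, 1, 1001)              # grid over the unknown probability
    prior = np.ones_like(p)                  # flat prior
    likelihood = p ** heads * (1 - p) ** (flips - heads)
    posterior = prior * likelihood
    posterior /= posterior.sum()             # normalize on the grid
    print("posterior mean:", (p * posterior).sum().round(3))   # ~ 0.667 = 8/12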
-
|
Ryan Christopher Lynch (MIT) 21 Oct 2014 1pm EDT SciCen 706 |
- Bayesian approaches to the detection and analysis of unmodeled gravitational wave signals
- Abstract:
With the Advanced LIGO detectors scheduled to begin their first science runs in 2015, data analysts are preparing for the first gravitational wave detections. While the gravitational waveforms emitted by compact binary coalescence events are well-modeled theoretically, efforts are also being made to prepare for unmodeled, burst-like signals emitted by events such as supernovae. In this talk, I will discuss two Bayesian approaches to the unmodeled burst problem. The first is an MCMC-based approach to parameter estimation, known as LALInference Burst (LIB), which I will show can also be used in the context of signal detection. The second is a semi-analytic sky-localization pipeline that is currently under development, which ideally could be used as a low-latency precursor to the full parameter estimation produced by LIB.
- Slides [.pdf]
-
|
Fan Min Jie 04 Nov 2014 1pm EST/10am PST SciCen 706 remotely from UCD |
- Separating image structures via graph-based seeded region growing
- Detecting source structure in 2-D images is of
great importance in astronomy. This talk reports our
ongoing work on using a graph-based seeded region growing
(G-SRG) method for clustering and detecting source
structures. The Delaunay triangulation and the Voronoi
estimator are utilized in the process. The computation is
fast and easy to implement. Our numerical experiments on a
typical Chandra X-ray Observatory image show that this
method achieves visually reasonable results.
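- A schematic Python sketch of the ingredients (photons as points, the
Delaunay triangulation as a neighbor graph, a crude local density, and a
greedy region grown from the densest seed). The density proxy and the
threshold are illustrative simplifications, not the G-SRG algorithm itself.

    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(3)
    bkg = rng.uniform(0, 10, size=(400, 2))        # background photons
    src = rng.normal([5, 5], 0.3, size=(100, 2))   # a compact source
    pts = np.vstack([bkg, src])

    tri = Delaunay(pts)
    nbrs = [set() for _ in range(len(pts))]        # neighbor graph from the edges
    for simplex in tri.simplices:
        for i in simplex:
            nbrs[i].update(simplex)

    # density proxy: inverse mean distance to Delaunay neighbors
    dens = np.array([1.0 / np.mean(np.linalg.norm(
        pts[list(nbrs[i] - {i})] - pts[i], axis=1)) for i in range(len(pts))])

    seed = int(np.argmax(dens))
    region, frontier = {seed}, [seed]
    thresh = 2.0 * np.median(dens)                 # grow while neighbors are dense
    while frontier:
        i = frontier.pop()
        for j in nbrs[i] - {i}:
            if j not in region and dens[j] > thresh:
                region.add(j)
                frontier.append(j)
    print("photons assigned to the source region:", len(region))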
- slides [.pdf]
-
|
Meng Xiao-Li (Harvard) 25 Nov 2014 1pm-2:30pm EST Sanders Theater |
- The Magic of MCMC and Statistics: A Live Performance
Markov chain Monte Carlo (MCMC) methods, which originated in computational physics more than half a century ago, have had a magical impact on quantitative scientific investigations. This is mainly due to their ability to simulate from the very complex distributions needed by all kinds of statistical models, in fields from bioinformatics to financial engineering to astronomy. This talk provides an introductory tutorial on the two most frequently used MCMC algorithms: the Gibbs sampler and the Metropolis-Hastings algorithm. Using simple yet non-trivial examples, we demonstrate, via live performance, the good, the bad, and the ugly implementations. Along the way, we reveal the statistical thinking underlying their designs, including the secret behind the greatest statistical magic ...
(Audience participation is required, though no prior experience is needed.)
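- For readers who want to try the simpler of the two algorithms at home, a
minimal random-walk Metropolis-Hastings sketch in Python (the bimodal
target and step size are illustrative choices, not from the talk):

    import numpy as np

    rng = np.random.default_rng(4)

    def log_target(x):
        # unnormalized mixture of two normals centered at -3 and +3
        return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

    x, step, chain = 0.0, 2.5, []
    for _ in range(20000):
        prop = x + step * rng.normal()                    # random-walk proposal
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop                                      # accept
        chain.append(x)                                   # else keep current state

    chain = np.asarray(chain[2000:])                      # discard burn-in
    print("target mean is 0; chain mean:", chain.mean().round(3))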
-
|
Giri Gopalan (Harvard) 02 Dec 2014 1pm EST SciCen 706 |
- A Bayesian Model for the Detection of X-ray Binary Black Holes
- (joint work with Saku V. and Luke B.)
In X-ray binary systems consisting of a compact object
that accretes material from an orbiting secondary star,
there is no simple means to determine whether the compact
object is a black hole or a neutron star. To assist in this
process we develop a Bayesian statistical model, which makes
use of the fact that X-ray binary systems appear to cluster
based on their compact object type when viewed in a
particular 3-dimensional coordinate system derived from
spectral data. In particular we utilize a latent variable
model in which the latent variables follow a Gaussian
process prior, and hence we are able to induce the spatial
correlation we believe exists between systems of the same
type. The key parameters of this model are the probabilities
that an observation comes from a black hole, a pulsar, or a
non-pulsing neutron star. A benefit of this approach is of a
computational nature: the assumption of a prior which
follows a multivariate normal distribution allows for the
implementation of elliptical slice sampling for performing
inference, a fast and stable alternative to standard
Metropolis-Hastings or Gibbs sampling (Murray et al. 2010).
Our model is fit to 13 years' worth of spectral data from 30
X-ray binary systems. Its predictive power is evidenced by
the accurate prediction of system types using the inferred
probabilities.
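- A self-contained sketch of one elliptical slice sampling update (after
Murray et al. 2010) for a latent vector with a zero-mean Gaussian process
prior; the toy likelihood and covariance below stand in for the model in
the talk.

    import numpy as np

    rng = np.random.default_rng(5)

    def ess_step(f, chol, log_lik):
        """One elliptical slice sampling transition for f ~ N(0, Sigma)."""
        nu = chol @ rng.normal(size=f.shape)         # prior draw defining the ellipse
        log_y = log_lik(f) + np.log(rng.uniform())   # slice level
        theta = rng.uniform(0.0, 2.0 * np.pi)
        lo, hi = theta - 2.0 * np.pi, theta
        while True:
            f_new = f * np.cos(theta) + nu * np.sin(theta)
            if log_lik(f_new) > log_y:
                return f_new                         # point is on the slice: accept
            if theta < 0.0:                          # shrink bracket toward current f
                lo = theta
            else:
                hi = theta
            theta = rng.uniform(lo, hi)

    # toy usage: squared-exponential GP prior, Gaussian likelihood
    xg = np.linspace(0, 1, 30)
    Sigma = np.exp(-0.5 * ((xg[:, None] - xg[None, :]) / 0.2) ** 2) + 1e-8 * np.eye(30)
    chol = np.linalg.cholesky(Sigma)
    y = np.sin(6 * xg) + 0.1 * rng.normal(size=30)
    log_lik = lambda f: -0.5 * np.sum((y - f) ** 2) / 0.1 ** 2
    f = np.zeros(30)
    for _ in range(500):
        f = ess_step(f, chol, log_lik)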
-
- A version of this talk was presented at the 225th
meeting of the AAS at Seattle. A transcript, and associated
animations are on github.
[github]
-
|
Jiao Xiyun (Imperial) 27 Jan 2015 1pm EST / 6pm GMT remotely from Imperial |
- Embedding Supernova Cosmology into a Bayesian Hierarchical Model
- Abstract:
The 2011 Nobel Prize in Physics was awarded for the discovery that
the expansion of the universe is accelerating. We embed a big
bang cosmological model into a Bayesian hierarchical Gaussian
model to quantify the acceleration. This problem has motivated
our work, and the complexity of the model makes it an ideal
testbed for developing new algorithmic strategies. The Data
Augmentation (DA) algorithm and the Gibbs sampler are widely
used for sampling from highly structured models, and numerous
algorithms have been developed to improve their convergence. We
pay special attention to Marginal Data Augmentation (MDA), the
Partially Collapsed Gibbs (PCG) sampler, and the
Ancillarity-Sufficiency Interweaving Strategy (ASIS). We propose
combining MDA, PCG, and ASIS with the Metropolis-Hastings
algorithm to simplify the implementation and further improve the
convergence of Gibbs-type samplers. We use both the cosmological
hierarchical model and a factor analysis model to illustrate our
combining strategy. Moreover, we introduce the idea of a
surrogate distribution, which shares the same marginals as the
target distribution but has a different correlation structure;
MDA, PCG, and ASIS are unified by this idea. If we are only
interested in a subset of the parameters, surrogate-distribution
samplers promise further efficiency gains. In the end, we
describe some extensions of the cosmological model.
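- A toy Python sketch of the interweaving idea (ASIS) on the simplest
hierarchical model, Y_i ~ N(theta_i, 1) with theta_i ~ N(mu, V), V known,
and a flat prior on mu; this illustrates the strategy only, not the
cosmological model.

    import numpy as np

    rng = np.random.default_rng(6)
    n, V = 50, 0.1                                  # small V: centered Gibbs mixes poorly
    Y = rng.normal(2.0, np.sqrt(1.0 + V), size=n)   # data from the marginal model

    mu, draws = 0.0, []
    for _ in range(5000):
        # (1) sufficient augmentation: theta | mu, Y (centered parameterization)
        var = 1.0 / (1.0 + 1.0 / V)
        theta = rng.normal(var * (Y + mu / V), np.sqrt(var))
        # (2) mu | theta under the centered parameterization
        mu = rng.normal(theta.mean(), np.sqrt(V / n))
        # (3) switch to the ancillary augmentation eta = theta - mu ...
        eta = theta - mu
        # (4) ... and redraw mu | eta, Y (non-centered parameterization)
        mu = rng.normal((Y - eta).mean(), np.sqrt(1.0 / n))
        draws.append(mu)

    print("posterior mean of mu:", np.mean(draws[500:]).round(3))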
- Slides [.pdf]
-
|
Si Shijing (Imperial) 10 Feb 2015 1pm EST / 6pm GMT remotely from Imperial |
- Empirical and Fully Bayesian Hierarchical Models:
Two Applications in the Study of Stellar Evolution
- Abstract:
Fitting complex statistical models often requires sophisticated
computing that for practitioners takes the form of black-box
codes. While introducing hierarchical structures can have
important statistical advantages (e.g., borrowing strength to
reduce mean square error), it poses significant computational
challenges, especially if one does not want to "open the black
box". In this project we develop a wrapper for such black-box
computer code that iteratively calls the black-box code in order
to fit a hierarchical model via Empirical Bayes. Specifically,
an EM-type algorithm is employed to obtain the MLEs of the
hyper-parameters in a way that takes advantage of the available
black-box code. We compare the results with a fully-Bayesian
approach and deploy our methods on two problems in stellar
evolution: estimating the population distribution of the ages of
halo white dwarf stars, and exploring the variability of the
Initial-Final Mass Relationship among stellar clusters.
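- A toy Python sketch of the wrapper idea: an EM-type loop that repeatedly
calls a "black box" returning posterior draws for each object under a
given prior, then updates the hyperparameters by maximum likelihood. The
Gaussian black box below is a stand-in for the real stellar-evolution code.

    import numpy as np

    rng = np.random.default_rng(7)
    truth = rng.normal(5.0, 0.5, size=40)     # e.g., ages of 40 white dwarfs (Gyr)
    Y = rng.normal(truth, 1.0)                # noisy measurements

    def black_box(y, prior_mean, prior_var, ndraw=500):
        """Stand-in black box: posterior draws for one object's parameter
        under a N(prior_mean, prior_var) prior and N(theta, 1) likelihood."""
        post_var = 1.0 / (1.0 + 1.0 / prior_var)
        post_mean = post_var * (y + prior_mean / prior_var)
        return rng.normal(post_mean, np.sqrt(post_var), size=ndraw)

    gamma, tau2 = 0.0, 10.0                   # initial hyperparameter guesses
    for _ in range(50):                       # Monte Carlo EM iterations
        draws = np.array([black_box(y, gamma, tau2) for y in Y])
        gamma = draws.mean()                  # M-step: population mean
        tau2 = ((draws - gamma) ** 2).mean()  # M-step: population variance
    print("empirical Bayes estimates:", round(gamma, 3), round(tau2, 3))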
- Presentation Slides [.pdf]
-
|
Irina Udaltsova (UCDavis) and Andreas Zezas (Crete) 17 Feb 2015 10am PST / 1pm EST / 6pm GMT / 7pm EET remotely from UC Davis and Crete |
- Colorful logN-logS
- Abstract:
The logN-logS relation gives the number of sources (e.g., galaxies, stars, etc.) as a function of their intensity. It is one of the key tools we have for characterizing and studying populations of astrophysical objects.
Unfortunately, its derivation is subject to several biases arising, for example, from the source detection process and from the Poissonian nature of the measured source intensities and the observed number of sources.
Over the past few years we have made several advances in the direction of accounting for these biases in a principled way.
We developed a hierarchical Bayesian model for inferring the logN-logS distribution for a source population, corrected for the non-ignorable missing-data mechanism. The method also allows the joint estimation of breakpoint(s) in the logN-logS and of the unknown number of observed sources.
However, an additional complication, namely that not all sources have the same spectral parameters, which influences the inferred source intensities, is rarely included in the formulation of the logN-logS analysis.
The goal of this talk is to present the problem, and discuss different possibilities for addressing it, which may ultimately lead to the combination of two major CHASC endeavors: the logN-logS analysis with BLoCKS or BEHR.
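- A minimal Python sketch of the data-generating side of the problem: fluxes
drawn from a single power law N(>S) = (S/S_min)^(-alpha), Poisson counts,
and a crude detection threshold producing the non-ignorable missingness
mentioned above. All values are illustrative.

    import numpy as np

    rng = np.random.default_rng(8)
    alpha, S_min, expo = 1.0, 1e-14, 2e14            # slope, flux limit, counts per flux
    S = S_min * (1.0 + rng.pareto(alpha, size=2000)) # Pareto fluxes above S_min
    counts = rng.poisson(S * expo)                   # Poisson source counts
    detected = counts >= 10                          # crude detection criterion
    print("sources:", len(S), " detected:", int(detected.sum()))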
- AZ slides [.ppt]
- IS slides [.pdf]
-
|
Lazhi Wang (Harvard) 03 Mar 2015 1pm EST |
- Bayesian Model for Detection of X-ray "Dark" Sources
- Abstract:
The goal of source detection is to obtain the luminosity
function, which specifies the relative number of sources at
each luminosity for a population of sources. Of particular
interest is the existence of "dark" sources with zero
luminosity. In this talk, we first introduce the
hierarchical Bayesian model we build for the source
intensities. To capture the possible existence of X-ray
"dark" sources, we assume the intensities are zero with
probability $\pi_d$, and follow a gamma distribution with
probability $1-\pi_d$. We then discuss a hypothesis testing
procedure to examine the existence of X-ray "dark" sources.
Results of simulation studies are provided to show the
performance of the model, and the level and power of the
testing procedure under a variety of simulation settings.
Finally, we apply our method to real data.
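- A minimal sketch of the marginal likelihood this kind of model implies
(omitting background for simplicity): each source is "dark" with
probability pi_d, otherwise Gamma-distributed, so Poisson counts follow a
zero-inflated negative binomial. The fit below is by maximum likelihood on
simulated counts, only to illustrate the mixture structure.

    import numpy as np
    from scipy.stats import nbinom
    from scipy.optimize import minimize

    rng = np.random.default_rng(9)
    true_pi, a, rate = 0.3, 2.0, 0.5
    dark = rng.uniform(size=300) < true_pi
    lam = np.where(dark, 0.0, rng.gamma(a, 1.0 / rate, size=300))
    y = rng.poisson(lam)

    def nll(params):
        pi_d = 1.0 / (1.0 + np.exp(-params[0]))      # logit / log transforms
        a, rate = np.exp(params[1]), np.exp(params[2])
        p = rate / (1.0 + rate)                      # Poisson-Gamma = NegBin(a, p)
        lik = (1.0 - pi_d) * nbinom.pmf(y, a, p) + pi_d * (y == 0)
        return -np.sum(np.log(lik + 1e-300))

    fit = minimize(nll, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
    print("estimated pi_d:", round(1.0 / (1.0 + np.exp(-fit.x[0])), 3))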
- Presentation slides [.pdf]
-
|
Hyungsuk Tak (Harvard) 10 Mar 2015 1pm EST |
- Time Delay Challenge
-
|
Gwendolyn Eadie (McMaster) 24 Mar 2015 1pm EDT remotely from McMaster |
- Bayesian Mass Estimates of the Galaxy: incorporating incomplete data
- Abstract: The total mass and cumulative mass
profile M(r) of the Milky Way Galaxy are two of the most
fundamental properties of the Galaxy. To estimate these
properties, we rely on the kinematic information of
satellites which orbit the Galaxy, such as globular clusters
and dwarf galaxies. However, transforming these data
accurately into a mass profile is not a trivial problem,
because the complete 3D velocity and position vectors of
objects are sometimes unavailable. We have developed a
Bayesian method to deal with incomplete data effectively.
Our method uses a hybrid Gibbs sampler that treats the
unknown velocity components of satellites as parameters in
the model. We explore the effectiveness of this method using
simulated data, and then apply our method to the Milky Way
using velocity and position data of globular clusters and
dwarf galaxies. We find that in general, missing velocity
components have little effect on the total mass estimate for
each of four different models.
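- A generic Python sketch of the treat-missing-components-as-parameters
idea, on a deliberately simplified toy model (iid N(0, sigma^2) velocity
components, line-of-sight always observed, a Jeffreys prior on sigma^2);
the actual model in the talk is far richer.

    import numpy as np

    rng = np.random.default_rng(10)
    n, sigma_true = 80, 120.0                        # km/s, illustrative
    v = rng.normal(0.0, sigma_true, size=(n, 3))
    observed = np.zeros((n, 3), dtype=bool)
    observed[:, 0] = True                            # line-of-sight always measured
    observed[rng.uniform(size=n) < 0.3, 1:] = True   # proper motions for ~30%

    v_work = np.where(observed, v, 0.0)
    draws = []
    for _ in range(3000):
        # update sigma^2 | all components: scaled inverse chi-square draw
        sigma2 = np.sum(v_work ** 2) / rng.chisquare(v_work.size)
        # impute the missing components | sigma^2
        v_work[~observed] = rng.normal(0.0, np.sqrt(sigma2), size=(~observed).sum())
        draws.append(np.sqrt(sigma2))
    print("posterior mean of sigma:", round(float(np.mean(draws[300:])), 1))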
- Presentation slides [.pdf]
-
|
Ian Czekala (CfA) 14 Apr 2015 1pm EDT |
- Robust Spectroscopic Inference with Imperfect Models
- Abstract:
We present a modular, extensible framework for the
spectroscopic inference of physical parameters based on
synthetic model spectra. The subtraction of an imperfect
model from a continuously sampled spectrum introduces
covariance between adjacent datapoints (pixels) into the
residual spectrum. In the limit of high signal-to-noise
data with large spectral range, which is common in stellar
parameter estimation, this covariance structure can bias the
parameter determinations. We have designed a likelihood
function formalism to account for the structure of the
covariance matrix, utilizing the machinery of Gaussian
process kernels. We specifically address the common problem
of mismatches in model spectral line strengths (with respect
to data) due to intrinsic model imperfections (e.g., in the
atomic or molecular data, or radiative transfer treatment)
by developing a novel local covariance kernel framework that
identifies and self-consistently downweights pathological
spectral line "outliers." By fitting multiple spectra in a
hierarchical manner, these local kernels provide a mechanism
to learn about and build data-driven corrections to
synthetic model spectral libraries. The application of this
method, implemented as a freely available open source code,
is demonstrated by fitting the high resolution optical
(V-band) spectrum of WASP-14, an F5 dwarf with a transiting
exoplanet, and the moderate resolution near-infrared
(K-band) spectrum of Gliese 51, an M5 dwarf.
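- A minimal Python sketch of the covariance structure described above: a
per-pixel noise term, a global stationary kernel for correlated residuals,
and one local Gaussian kernel inflating the variance around a badly
modeled line. Kernel forms and amplitudes are illustrative, not the
Starfish defaults.

    import numpy as np

    wl = np.linspace(5000.0, 5100.0, 400)            # wavelength grid (Angstroms)
    dw = wl[:, None] - wl[None, :]

    noise = 0.01 ** 2 * np.eye(len(wl))                       # per-pixel noise
    global_k = 0.005 ** 2 * np.exp(-0.5 * (dw / 2.0) ** 2)    # correlated residuals

    mu_line, w_line, amp = 5040.0, 0.8, 0.05                  # one outlier line
    r = np.exp(-0.5 * ((wl - mu_line) / w_line) ** 2)
    local_k = amp ** 2 * np.outer(r, r)                       # local downweighting kernel

    C = noise + global_k + local_k                   # full residual covariance
    sign, logdet = np.linalg.slogdet(C)
    print("log|C| =", round(logdet, 1))              # enters the Gaussian log-likelihood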
- arXiv:1412.5177
- Starfish
- Presentation Slides [url]
-
|
David Stenning (UCIrvine) 21 Apr 2015 1pm EDT / 6pm GMT remotely from UCI |
- Astrostatistical Analysis in Solar and Stellar Physics
- Presentation slides
- Abstract:
This talk focuses on developing statistical models and
methods to address data-analytic challenges in
astrostatistics---a growing interdisciplinary field
fostering collaborations between statisticians and
astrophysicists. The astrostatistics projects we tackle can
be divided into two main categories: modeling solar activity
and Bayesian analysis of stellar evolution. These
categories form Part I and Part II of this talk,
respectively.
The first line of research we pursue involves classification
and modeling of evolving solar features. Advances in
space-based observatories are increasing both the quality
and quantity of solar data, primarily in the form of
high-resolution images. To analyze massive streams of solar
image data, we develop a science-driven dimension reduction
methodology to extract scientifically meaningful features
from images. This methodology utilizes mathematical
morphology to produce a concise numerical summary of the
magnetic flux distribution in solar "active regions" that
(i) is far easier to work with than the source images, (ii)
encapsulates scientifically relevant information in a more
informative manner than existing schemes (i.e., manual
classification schemes), and (iii) is amenable to
sophisticated statistical analyses.
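As a schematic illustration of this kind of morphological reduction (a
sketch on a synthetic magnetogram; the thresholds and summary statistics
are illustrative, not the talk's actual features):

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(11)
    img = rng.normal(0.0, 10.0, size=(128, 128))    # quiet-Sun noise (Gauss)
    img[40:60, 40:55] += 400.0                      # positive-polarity patch
    img[70:90, 80:95] -= 350.0                      # negative-polarity patch

    # clean strong-field masks with a morphological opening
    pos = ndimage.binary_opening(img > 100.0, iterations=2)
    neg = ndimage.binary_opening(img < -100.0, iterations=2)

    unsigned_flux = np.abs(img[pos]).sum() + np.abs(img[neg]).sum()
    separation = np.linalg.norm(np.array(ndimage.center_of_mass(pos)) -
                                np.array(ndimage.center_of_mass(neg)))
    print("unsigned flux:", round(unsigned_flux),
          "polarity separation (px):", round(separation, 1))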
In a related line of research, we perform a Bayesian
analysis of the solar cycle using multiple proxy variables,
such as sunspot numbers. We take advantage of patterns and
correlations among the proxy variables to model solar
activity using data from proxies that have become available
more recently, while also taking advantage of the long
history of observations of sunspot numbers. This model is
an extension of the Yu et al. (2012) Bayesian hierarchical
model for the solar cycle that used the sunspot numbers
alone. Since proxies have different temporal coverage, we
devise a multiple imputation scheme to account for missing
data. We find that incorporating multiple proxies reveals
important features of the solar cycle that are missed when
the model is fit using only the sunspot numbers.
In Part II of this talk we focus on two related lines of
research involving Bayesian analysis of stellar evolution.
We first focus on modeling multiple stellar populations in
star clusters. It has long been assumed that all star
clusters are comprised of single stellar populations---stars
that formed at roughly the same time from a common molecular
cloud. However, recent studies have produced evidence that
some clusters host multiple populations, which has
far-reaching scientific implications. We develop a Bayesian
hierarchical model for multiple-population star clusters,
extending earlier statistical models of stellar evolution
(e.g., van Dyk et al., 2009; Stein et al., 2013). We also
devise an adaptive Markov chain Monte Carlo algorithm to
explore the complex posterior distribution. We use
numerical studies to demonstrate that our method can recover
parameters of multiple-population clusters, and also show
how model misspecification can be diagnosed. Our model and
computational tools are incorporated into an open-source
software suite known as BASE-9. We also explore statistical
properties of the estimators and determine that the
influence of the prior distribution does not diminish with
larger sample sizes, leading to non-standard asymptotics.
In a final line of research, we present the first-ever
attempt to estimate the carbon fraction of white dwarfs.
This quantity has important implications for both
astrophysics and fundamental nuclear physics, but is
currently unknown. We use a numerical study to demonstrate
that assuming an incorrect value for the carbon fraction
leads to incorrect white-dwarf ages of star clusters.
Finally, we present our attempt to estimate the carbon
fraction of the white dwarfs in the well-studied star
cluster 47 Tucanae.
-
|
Vasileios Stampoulis (Imperial) 28 Apr 2015 1pm EDT / 6pm GMT remotely from Imperial |
- Classifying Galaxies using a Data-driven approach
- Abstract: [.pdf]
- Spectroscopy has been utilised in identifying the main
power source in active galaxies. Based on the different
mechanisms that excite the gas inside the galaxies (and
which, as a result of those mechanisms, glows at different
wavelengths), galaxies may be separated into four
categories: the AGN (Active Galactic Nuclei), which are
divided into LINERs and Seyferts, the HII region-like
galaxies (star-forming galaxies), and the Composite galaxies
(whose spectra contain significant contributions from both
AGN and star formation).
Four emission-line intensity ratios are used as the means to
classify different galaxies: log(NII/Halpha), log(SII/Halpha),
log(OI/Halpha) and log(OIII/Hbeta). Both physical and
empirical models have been developed in order to propose a
classification scheme based on those ratios. However, the
exact demarcation between star-forming galaxies and AGN is
subject to considerable uncertainty, and the increasing flow
of data from massive new surveys shows the inadequacy of the
existing scheme.
In this project we utilise a data-driven approach in order to
build a density estimation model that accurately describes
the distributions of the four different classes of galaxies.
Identifying and parametrizing the distributions of the pure
star-forming galaxies and the pure AGN would provide a solid
quantitative tool for exploring further scientific
problems.
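- A minimal Python sketch of the data-driven idea: fit a mixture model in
the line-ratio space and read off soft class memberships. The data below
are synthetic blobs standing in for survey measurements, and two
components stand in for the full four-class model.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(12)
    # synthetic [log NII/Ha, log SII/Ha, log OI/Ha, log OIII/Hb] ratios
    star_forming = rng.normal([-0.5, -0.4, -1.5, 0.0], 0.15, size=(500, 4))
    agn = rng.normal([0.1, 0.0, -0.8, 0.8], 0.2, size=(300, 4))
    X = np.vstack([star_forming, agn])

    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=0).fit(X)
    membership = gmm.predict_proba(X)        # soft assignment to each component
    print("component weights:", gmm.weights_.round(2))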
- Presentation slides [.pdf]
|
Murray Aitkin (Melbourne) Monday 04 May 2015 4pm EDT SciCen 705 |
- Superclusters and voids in the galaxies (revisited)
- Presentation slides [.pdf]
- Abstract: The 1990 JASA paper by Kathryn
Roeder on the analysis of the recession velocities of 82
galaxies led to a major sequence of papers on Bayesian
methods for determining the number of components in a finite
mixture of normal densities. Most methods computed the
integrated likelihoods for each number of components and
converted the integrated likelihoods to posterior model
probabilities. Different analyses of the galaxy velocity
data by major groups gave mystifyingly different
conclusions.
This talk revisits the data and gives a graphical and a new
(and controversial) Bayesian analysis which concludes that
there is strong evidence for three components, weak evidence
for four, and no evidence for more than four. The new
analysis replaces the integrated likelihood for each model
by the posterior distribution of the model likelihood.
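- A minimal Python sketch of the conventional starting point for this
problem: fit normal mixtures with varying numbers of components and
compare them (here crudely by BIC, not by the posterior model-likelihood
approach of the talk). The velocities are simulated stand-ins for the
82-galaxy data set.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(13)
    v = np.concatenate([rng.normal(9.7, 0.5, 7),     # roughly mimicking the
                        rng.normal(21.0, 2.0, 70),   # galaxy-velocity structure
                        rng.normal(33.0, 1.0, 5)])[:, None]   # (10^3 km/s)

    for k in range(1, 7):
        gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(v)
        print(f"K={k}  BIC={gmm.bic(v):8.1f}")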
- References
- Postman, M.J., Huchra, J.P. and Geller, M.J. (1986) Probes of large-scale structures in the Corona Borealis region. The Astronomical Journal 92, 1238-1247. [The astronomical data]
- Roeder, K. (1990) Density estimation with confidence sets exemplified by superclusters and voids in the galaxies. JASA 85, 617-624. [The first statistical analysis]
- Aitkin, M. (2001) Likelihood and Bayesian analysis of mixtures. Statistical Modelling 1, 287-304. [A comparison of the different frequentist and Bayesian conclusions]
- Aitkin, M. (2010) Statistical Inference: an Integrated Bayesian/Likelihood Approach. CRC Press, Boca Raton. [The model comparison approach, with the galaxy application pp. 210-221]
- Aitkin, M. (2011) How many components in a finite mixture? pp. 277-292 in Mixtures: Estimation and Applications, eds. K.L. Mengersen, C.P. Robert and D.M. Titterington. Wiley, Chichester.
- Gelman, A., Robert, C.P. and Rousseau, J. (2013) Inherent difficulties of non-Bayesian likelihood inference, as revealed by an examination of a recent book by Aitkin. Statistics and Risk Modeling 30, 105-120. [A confused attack on the approach]
- Aitkin, M. (2013) Comments on the review of Statistical Inference. Statistics and Risk Modeling 30, 121-132. [A spirited defense of the approach]
-
|
Sara Algeri (Imperial) 19 May 2015 1pm EDT / 6pm BST remotely from Imperial |
- Statistical Issues in the Search for Particle Dark Matter
- Abstract:
Non-standard hypothesis tests commonly arise in the search
for new physics. For example, parameters may lie on the
boundary of the parameter space, nuisance parameters may only
be defined under the alternative model, or researchers may
want to compare non-nested models. Although these issues have
been addressed since the early days of modern statistics, they
pose significant challenges in practice. Testing separate
families of hypotheses, in particular, has not yet found a
theoretically satisfactory solution that is easy to implement
and does not require any prior assumption.
This talk proposes, validates, and compares a set of new
methods that aim to address these issues. In particular, we
show that an opportune reformulation of a non-nested models
hypothesis test allows us to use well-known results in
asymptotic theory and provides a simple and ready-to-use
solution. The proposed methods rely on the Likelihood Ratio
Test (LRT) and an alternative test introduced in 2005 by Pilla
et al. that is based on the Score function. Both approaches
reduce the problem of testing non-nested hypotheses to finding
an approximation for excursion probabilities of the form
P(sup_t Y_t > c), with Y_t being either a chi-square or a
Gaussian process. The main difference between the two
solutions is that the method based on the LRT formalizes the
problem in terms of excursion sets, whereas the Score-based
method provides an approximation based on tube formulae. It
will be shown that both methodologies exhibit advantages and
suffer limitations, both in terms of computation and, more
importantly, in terms of the specific conditions associated
with the models being tested.
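- A small Monte Carlo illustration of the quantity at the heart of both
methods, the excursion probability P(sup_t Y_t > c), for a chi-square
process built by scanning a Gaussian process over a nuisance-parameter
grid; the kernel, grid, and threshold are illustrative.

    import numpy as np

    rng = np.random.default_rng(14)
    t = np.linspace(0, 1, 100)                       # grid over the nuisance parameter
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.1) ** 2) + 1e-9 * np.eye(100)
    L = np.linalg.cholesky(K)

    c, nsim = 9.0, 20000                             # threshold and Monte Carlo size
    Z = L @ rng.normal(size=(100, nsim))             # Gaussian-process realizations
    sup_Y = (Z ** 2).max(axis=0)                     # sup of the chi-square(1) process
    print("P(sup Y_t > c) ~", (sup_Y > c).mean())    # vs the pointwise chi2 tail, 0.0027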
- Presentation slides [.pdf]
|
Anna Barnacka (CfA) 16 Jun 2015 1pm EDT SciCen 706 |
- Resolving the High Energy Universe with Strong Gravitational Lensing
- Gravitational lensing is a potentially powerful tool for
elucidating the origin of gamma-ray emission from distant
sources. Cosmic lenses magnify the emission and produce time
delays between mirage images. Gravitationally-induced time
delays depend on the position of the emitting regions in the
source plane. Well sampled gamma-ray light curves provide a
measure of the time delay and thus a new route to resolving
the sources. We have investigated three methods of time-delay
estimation from unresolved light curves; the Autocorrelation
Function, the Double Power Spectrum, and the Maximum Peak
Method. As a prototypical example of the power of lensing
combined with long, uniformly sampled light curves provided by
the Fermi satellite, we investigated the spatial origin of
gamma-ray flares from PKS 1830-211.
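- A minimal Python sketch of the simplest of the three estimators, the
autocorrelation function: a light curve containing a delayed, scaled echo
of itself shows a secondary ACF peak near the lens delay. The flare shape,
delay, and flux ratio are illustrative.

    import numpy as np

    rng = np.random.default_rng(15)
    n = 1000                                         # uniform, Fermi-like sampling (days)
    flares = np.zeros(n)
    flares[rng.choice(n, 15, replace=False)] = rng.uniform(1, 5, 15)
    kernel = np.exp(-np.arange(30) / 5.0)            # flare decay profile
    intrinsic = np.convolve(flares, kernel)[:n]

    delay, ratio = 27, 0.5                           # hypothetical delay and flux ratio
    lc = intrinsic + ratio * np.roll(intrinsic, delay) + 0.05 * rng.normal(size=n)

    x = lc - lc.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:]    # lags 0 .. n-1
    lag = int(np.argmax(acf[10:200])) + 10           # skip the zero-lag peak region
    print("recovered delay (days):", lag)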
- Presentation slides: [.key.zip] [.pdf]
-
|