ISSI Workshop 2016 April: meeting notes

Team's Stated Goal

"During the past several decades the amount of high quality solar and stellar data has exploded. These data, however, must be interpreted in the context of complex atomic processes to derive insights into the underlying physics. It is unclear how robust our inferences can be once the uncertainties in the atomic data and instrument calibration are properly accounted for. The goal of this team is to develop and apply the robust statistical techniques needed to fully understand the limits of our ability to interpret remote sensing observations."

Broadly speaking, this project entails a chain of specialists in astrophysical atomic data, statistics, and astrophysical data analysis working together to understand and carefully propagate uncertainties in the atomic physics up to the analysis of spectral observations.


Team Members

HW: Harry Warren (PI) NRL
AF: Adam Foster SAO
CB: Connor Ballance QU Belfast
CG: Chloe Guennou Columbia
DS: David Stenning UCI
DVD: David van Dyk Imperial Coll. London
FR: Fabio Reale Palermo
FA: Frederic Auchere Inst. Astr. Spatiale
GDZ: Giulio del Zanna Cambridge
IA: Inigo Arregui Inst. Astr. de Canarias
JC: Jessi Cisewski CMU
MW: Mark Weber SAO
NS: Nathan Stein UPenn
RS: Randall Smith SAO
VD: Veronique Delouille Royal Obs. Belgium
VK: Vinay Kashyap SAO

Day 1 --- 2016.04.13, Wed

D1.1. HW: Introduction

  • Went around room introducing ourselves to 2 new members (Connor Ballance, Adam Foster).
  • Recapping the basic problem.
  • We must infer solar (or astrophysical) properties using atomic physics models, which have large (or unknown) errors.
  • There are Fe XIII lines in the wavelength range around 200 A.
  • Restricting the analysis to a narrow wavelength range within a single ion avoids some problems.
  • Also, count rates are high (in the Sun), which helps the statistics.
  • So using these particular density-sensitive line pairs is about the simplest, cleanest version of this problem available.
  • If one uses only a single pair, then the density can be cancelled out and a simple T ratio can be analyzed. But we decided last year to look at a slightly more complicated version of the problem, solving several line pairs together, which necessarily means considering both the density and T. (Assuming T is isothermal.)
  • Q: How quickly does this run? A= For CHIANTI to turn some Fe XIII ne and Te into line intensities is only a few secs.
  • Q: Used default CHIANTI? A= Yes.
  • Some discussion trying to understand why the chi-squares are so large.
    • HW points out that part of it is the fit of the Gaussians to the spectral data: the counts in the lines are very high, so even small deviations are statistically significant. The simple model seems to fit the data well, but this is subjective. Another possible contribution to the large chi-squares is how well the ne, Te solutions actually fit the data.
  • The atomic model is essentially a set of transition rates. We can manipulate these rates to sample estimated errors.
    • HW tried naive sampling by just multiplying every rate by some random perturbation (see the sketch after this list). This generates a large set of alternate emissivities. Each run takes ~10 secs.
      • Applying this with 10% errors leads to significantly different results than using the CHIANTI default emissivities.
      • Observation that all of the runs taken together lead to a wide histogram distribution for density. Furthermore, this does not appear to be sensitive to the chi-square weighting of the individual runs.
      • Left with questions:
        • What are the errors in atomic data? (GDZ has done this by hand for these lines.)
        • How do we sample parameter space with discrete atomic data? ("Discrete" means HW only made 1000 samples of emissivities.)
        • How do we analyze many obs?
        • How do we scale to more complicated problems? (Eg, uncertainties in excitation rates, ionization fractions, abundances, calibration.)
      • Q: Could we dig into the guts of CHIANTI and control the errors on specific quantum values? A= Probably yes, although need a GDZ or RS.
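
A minimal sketch of the kind of naive rate perturbation HW described, assuming independent multiplicative Gaussian errors on each rate; the function names, the 10% width, and the placeholder rate values are illustrative, and the real calculation runs each perturbed rate set through CHIANTI:

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb_rates(rates, frac_err=0.10, n_samples=1000):
    """Multiply every rate by an independent random factor drawn from a
    Gaussian centred on 1 with width frac_err (the naive top-down scheme).

    rates : 1-D array of nominal transition rates (e.g. from CHIANTI).
    Returns an (n_samples, n_rates) array of perturbed rate sets.
    """
    factors = rng.normal(loc=1.0, scale=frac_err, size=(n_samples, rates.size))
    factors = np.clip(factors, 0.0, None)        # rates must stay non-negative
    return rates[None, :] * factors

# Illustrative nominal rates; in practice these come from the atomic database.
nominal = np.array([1.2e9, 3.4e7, 8.8e5, 2.1e8])
replicates = perturb_rates(nominal)
print(replicates.shape)   # (1000, 4); each row would feed one emissivity run
```

Each row of `replicates` would then go through the level-population / emissivity calculation (~10 s per run, per the notes) to build the set of alternate emissivity curves.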

D1.2. VK: Oxygen lines (astrophysical data)

  • Astrophysical data are different from solar data in that individual photons are recorded.
  • HW: Perhaps intensities in physical units could be provided?
    • VK: They are, via the plots, but the data are left in basic form to make it easy for the statisticians to model--- the less pre-processing, the less biased the estimation will be.
  • Time-wavelength plot of photon spectra.
  • Plots of T sensitivity of Fe XIII and O VII lines.
  • The O VII lines have a very different T sensitivity. (Point made that O VII responses are more separable, ie, you might be able to tell which subset of curves apply.)
  • Also, you can calculate the T first, independent of the density, and then later bootstrap to the density.
  • On GitHub there is a Capella dataset. VK reviewed the format of the datafile.
    • The +1 and -1 orders have slightly different effective areas, but otherwise could be "combined".
      • VK: Re "combined"... The numbers in the RDB file refer to counts integrated across certain wavelength ranges dominated by specific lines. The original data are in the form of a list of events with wavelengths and effective areas attached, and may be put together in different ways depending on what is needed.
  • To analyze this dataset, would need the sampling of atomic physics emissivities to be in both T and dens, contra Fe XIII analysis.
  • Some discussion with CB re how much work it is to rerun the atomic models for variations in both T and dens. He answered that it is not much worse, but that you aren't going to redo the calculation from step one every time.
  • Some discussion of the impact of ionization balances for Fe versus O.
    • VK: upon further discussions, it is unclear just how much of an issue this is. CB's method self-consistently introduces variations to the ion balance while messing with the orbitals, and may be sufficient.

D1.3. DS: Stats approach on Fe XIII problem.

  • HW: This approach is similar to importance sampling.
    • VK: this just means that the sample at hand has more entries where the probability density is higher, and vice versa. So, yes, it is exactly like importance sampling.
  • Used GDZ's sampling of atomic models.
  • Method #1 = Pragmatic Bayes ==> Assume P(atomic | data) = P(atomic). This is a simplifying assumption.
  • Method #2 = Full Bayes ==> Effectively, this allows the likelihood of a particular atomic model matching the data to be used as a weight on that model's contribution to the posterior for the desired parameters (n, ds). The result is to shrink the histogram distribution range on the estimated parameters. (See the sketch contrasting the two methods after this list.)
  • Initially skipped over some of the mathematical details, but then circled back and delved into it. [[Sorry, couldn't follow details. --MW]]
  • The Full Bayesian method gives relatively robust results for estimating n, ds. The "disturbing result" is that it does not give robust results on finding the best atomic models from the data. The MCMC approach involves some sampling, and the process is landing on a different subset of "best" models every time. In this respect, they look different even if they still produce very similar parameter estimates.
    • VK: This appears to have been due to a bug where a default curve was being picked all the time at a second step, and has been fixed now. (See Section 3 on Day 2.)
  • Q: Would it be useful to parameterize the set of atomic models by a handful of collision and decay rates, instead of making up a direct discrete index? A= The statisticians think that would be useful if you were not greatly increasing the number of parameters.
    • CB: Those two categories of rates are highly correlated by T, so have to be careful.
  • HW asked for this code to be put into our team repository. DS agreed; he just wants to clean it up a bit first.
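
A minimal sketch of the distinction between the two methods for a discrete library of pre-computed atomic replicates. The Gaussian likelihood, the flat prior over the (n, ds) grid, the uniform prior over replicates, and all array names are assumptions for illustration, not DS's actual code:

```python
import numpy as np

rng = np.random.default_rng(1)
M, G, L = 1000, 200, 5            # atomic replicates, (n, ds) grid points, lines
obs = np.ones(L)                  # observed line intensities (placeholder)
sig = 0.05 * obs                  # measurement errors (placeholder)
pred = rng.uniform(0.8, 1.2, size=(M, G, L))   # predicted intensities per replicate/grid point

# Gaussian log-likelihood of the data for every (replicate, grid point)
loglike = -0.5 * np.sum(((obs - pred) / sig) ** 2, axis=-1)    # shape (M, G)

# Pragmatic Bayes: P(atomic | data) = P(atomic).  Every replicate keeps its
# prior weight 1/M regardless of the data, so the conditional posteriors over
# the (n, ds) grid are averaged with equal weights.
cond = np.exp(loglike - loglike.max(axis=1, keepdims=True))
cond /= cond.sum(axis=1, keepdims=True)        # p(theta | data, m) on the grid
post_pragmatic = cond.mean(axis=0)

# Full Bayes: replicates that explain the data better carry more weight in the
# marginal posterior for (n, ds), which typically shrinks it.
joint = np.exp(loglike - loglike.max())        # proportional to p(data, theta, m)
post_full = joint.sum(axis=0) / joint.sum()
```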

D1.4. NS: Different Stats approach.

  • Need to be careful with priors in Full Bayesian. So ds --> Cauchy, not uniform. Discrete uniform on atomic models. Uniform on log n. Treated n differently from ds because the atomic models were already sampled and are functions of n but not ds.
  • NS approach differs from DS approach in a technical detail of how the model likelihoods are being averaged.
  • Use "STAN" code to do calculations.
    • Hamiltonian Monte Carlo.
    • But it cannot handle the discrete model index, so he does an analytic marginalization over the models by hand (see the sketch after this list).
    • VK: This treats it as a mixture of 1000 replicate curves and computes an overall likelihood that includes all the possible atomic replicates, then marginalizes -- sums over -- the indices of the replicates.
  • FR: All of these estimates of ds look too big for moss. MW: Yeah, that's about 10,000 km thick... HW: I had to do some tricks with elemental abundances, etc. to get the intensities to come out right. Maybe I introduced a systematic error.
  • NS finds two-mode distributions in n, ds, contra DS who found single-mode results.
    • VK: Both have now converged to give the same answer.
  • Three atomic replicate curves account for almost all of the weight, with one of them alone accounting for 83% of it.
  • HW asked for this code to be put into our team repository. NS agreed, with caveats on items like STAN which need to be installed for R.
  • DVD: DS and NS should calculate the contour plots for their results, which should show up the nature of the difference they are getting in their results.
  • NS and DS had compared in detail the results for using one particular atomic model curve, and they verified that they got the same result.
  • DVD: Q= Can we have more pixels? HW: A= I have put 1000 pixel data in the repository. They are from an EIS raster of an active region. I selected 1000 pixels with the largest intensities. They are not guaranteed to be spatially uncorrelated. They are not all pure moss pixels.
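
A minimal sketch of marginalizing the discrete atomic-model index analytically, which is what Stan requires since it cannot sample discrete parameters. The Gaussian likelihood, the uniform prior over models, and the `replicate_curves` callables are placeholders, not NS's actual program:

```python
import numpy as np
from scipy.special import logsumexp

def log_marginal_likelihood(theta, obs, sig, replicate_curves):
    """log p(data | theta) with the discrete model index summed out:
    log sum_m p(m) p(data | theta, m), assuming a Gaussian likelihood.

    replicate_curves : list of callables; each returns the predicted line
                       intensities at parameters theta for one atomic replicate.
    """
    M = len(replicate_curves)
    loglik_m = np.array([
        -0.5 * np.sum(((obs - curve(theta)) / sig) ** 2)
        for curve in replicate_curves
    ])
    # discrete uniform prior p(m) = 1/M; logsumexp keeps the sum numerically stable
    return logsumexp(loglik_m - np.log(M))
```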

D1.5. JC: Another approach. Approx Bayesian Computation (ABC).

  • Could incorporate all of the CHIANTI code into the calculations, and still get Bayesian posteriors.
  • Compare calculated data against simulated data to power a rejection process.
    • VK: this is just describing how ABC works. When you don't have an easy way to compute likelihoods, e.g., because your models are computer generated randomly *cough* atomic replicates *cough* then you generate the model, compute some measure of goodness, and reject the obviously bad ones.
  • This is useful for sampling likelihoods when you are unable to write them down (see the sketch after this list).
  • Also, it incorporates any systematic errors you don't know about.
    • VK: I don't think this is correct --- one can't account for anything that hasn't been explicitly put into the process.
  • Q= Could we be trying to fit the whole spectrum instead of a handful of line intensities? GDZ, HW: A= Nah, not really with EIS; there are many problems with that. (Eg, background levels, multithermal plasmas.) VK: Chandra has code to predict spectra from n, T models that includes instrumental effects like QE and line response.
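
A minimal sketch of ABC rejection sampling in the spirit of JC's description. The `simulate`, `prior_draw`, and `distance` callables and the tolerance `eps` are placeholders; in our setting `simulate` would wrap a CHIANTI-like forward calculation plus a noise model:

```python
import numpy as np

def abc_rejection(observed, simulate, prior_draw, distance, eps, n_keep=1000):
    """Keep prior draws whose simulated data land within eps of the observation.

    simulate(theta) -> synthetic line intensities for parameters theta
    prior_draw()    -> a random draw of (n_e, ds, atomic replicate, ...) from the prior
    distance(a, b)  -> scalar mismatch between two data sets (or their summaries)
    """
    accepted = []
    while len(accepted) < n_keep:
        theta = prior_draw()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)        # survives the rejection step
    return np.array(accepted, dtype=object)
```

The accepted draws approximate the posterior without ever writing down the likelihood, which is the appeal when the models are generated by a black-box code.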

D1.6. CB: Propagating uncertainties w.r.t. O VII lines.

  • Baseline versus Sensitivity
    • Baseline: vary the rates by assumed top-down errors (see HW).
    • OR baseline = just sample over existing databases and methods in the literature; gives generous errors. VK: This is the method followed by GDZ.
    • Sensitivity: Bottom-up is actually recalculating the orbitals, etc. For one method, sample 100s of rates, etc. Does not determine absolute uncertainty between methods.
  • "R-matrix" = setting up Hamiltonian for all transitions and then diagonalizing.
  • To run the full problem, the parallel codes exist and have been tested, but they would require you to get a grant and access to a supercomputer.
  • Bottom-up method:
    • For a He-like atom, assume the orbitals are bounded by the H-like and Li-like cases. Also, allow larger variation for the higher orbitals.
    • Since H-like is pretty unique, this is probably over-generous in estimating the errors.
      • VK: Which ends up producing large variations in line locations, most of which can be ruled out simply by comparing with the measured line locations.
  • HW: Q= What is happening when people come out with "improved" atomic data?
    • CB: A= They are either adding more lines, levels, and/or they are optimizing the structure to incorporate new or improved experimental measurements.
  • Some discussion of ionization processes. Changing a low orbital increases the ionization potential for the low levels, but better shields the high levels and might decrease the ionization potential at higher levels, so there are two competing tendencies here and the effect is dependent upon both T and dens.
  • Q= Aren't the wavelengths really important? Don't we want to reproduce the well-observed line energies? GDZ: A= Yes, they're important, because they can have an effect on level populations in principle.
    • VK: If wavelength information is available (in both atomic replicates and in data), the statistics methods can use that to discriminate among the atomic replicates.
  • HW: Q= How well do these calculations scale? CB: A= Depends on your resources. If you only need to do a few 100 levels, can do in a few hours. Goes as n^3 scaling. Can do Fe XIII on local cluster (48 nodes), but aren't going to do all Fe ions.

D1.7. FR: Requesting Loci curves.

  • FR: I suggest it would be a simple and useful plot to see loci curves for each of the 5 EIS Fe XIII lines (see the sketch after this exchange).
  • HW: You mean like the EIS T-EM curves, you want to see ds-density loci curves?
  • FR: Yes.
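
A minimal sketch of what such a ds--density loci plot amounts to for a single line, assuming the usual isothermal, constant-density column so that I = G(n_e, T) * n_e^2 * ds and each observed intensity traces out ds(n_e) = I / (G(n_e) n_e^2). The contribution functions and intensities below are placeholders for the CHIANTI values:

```python
import numpy as np
import matplotlib.pyplot as plt

def ds_locus(I_obs, n_e, G):
    """Column depth implied by one observed line at each trial density."""
    return I_obs / (G(n_e) * n_e ** 2)

n_e = np.logspace(8, 11, 200)                      # trial densities [cm^-3]
# Placeholder density-dependent contribution functions for two lines.
G1 = lambda n: 1e-24 * (1.0 + n / 1e9)
G2 = lambda n: 1e-24 * (1.0 + n / 1e10)

for I_obs, G, label in [(120.0, G1, "line 1"), (80.0, G2, "line 2")]:
    plt.loglog(n_e, ds_locus(I_obs, n_e, G), label=label)
plt.xlabel("n_e [cm^-3]")
plt.ylabel("ds [cm]")
plt.legend()
plt.show()
```

If the five lines are mutually consistent, their loci curves should cross near a common (n_e, ds) point, analogous to the T-EM loci plots.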

D1.8. GDZ: Making ratio curves with CB's results.

O VII:

  • CB generated 12 new calculations for O VII.
  • GDZ used these and calculated ratios for each of them.
  • The default CHIANTI ratio curve is consistent with the 12 CB runs, albeit at their edge. CB: The default CHIANTI curve is based on a similar calculation with the same codes, but the CHIANTI people probably did some optimizations, which I did not do in generating my runs. GDZ: Yes, this indicates some systematic difference, but it is not a significant discrepancy for this analysis.

Fe XIII:

  • For this collision data, GDZ only had to calculate the level populations for the first 4 levels, to effectively completely explain the transitions. (At 2 MK.)
  • Recalculated the effective collision strengths. He got a small percentage difference from Peter Storey's nominal values. (At 2 MK.)
  • The difference is because GDZ counted 749 levels up to n=4. Storey only went part way into n=3.
  • GDZ compared his A-values to Peter Young's (2004) A-values.
  • MW: Q= Is GDZ using these discrepancies as "uncertainties"?
    • VK: A= Yes, but this could be "wrong"--- the new calculations could simply be much more accurate.
  • GDZ applied the uncertainties as the 1-sigma width of a normal distribution. A "10% uncertainty" therefore puts most randomizations within +/- 20% (2 sigma), i.e., a total spread of about 40%.
  • So then GDZ applied 100 random variations to both the excitation rates and A-values.
  • HW: Q= Will your method scale to more complicated or larger problems? GDZ: A= It's easy to do, but it requires a lot of art, and is not as well-founded as what CB is doing. Authors could complain about why you haven't compared to them, for example.
  • VK: Q= Could you use your relationship between uncertainty and effective collision strength generically?
    • GDZ: A= possibly.
    • [VK post-comment: uh.. I don't remember what I was asking for, but this doesn't sound like it!]

Day 2 --- 2016.04.14, Thu

D2.1. AF: Looking for correlations when varying the orbitals (a la CB).

  • Looking for correlations in CB's runs.
  • If one can find correlations in the emissivities, that would require the least storage, ie, amount of info. If instead one has to go back to, say, correlations in A-values, then that would require saving much more info.
  • Repeated the emissivity analysis for different densities; the results did not change much, but more runs are needed to investigate.
  • CB: Looks like the most correlated transitions are the strongest dipole ones.
  • Also looked at correlations for level populations.
  • Closely spaced levels turn out to be anti-correlated.
  • Correlations are stronger at higher densities.
    • HW: p-values appear to be significant (<0.05) for highly correlated transitions but weaker correlations have less significant p-values (e.g., r~0.5 p~0.25).
    • VK: Uhm.. I don't understand the significance of this point? p-value is a measure of whether the observed correlation can be generated from a random association, so a larger correlation is likely to be more significant, i.e., have smaller p-values.
  • AF proposed to follow earlier work by someone who varied the orbitals, similar to CB, but not randomly; instead they took regular points in the parameter space.
  • That work further supports that there are correlations and anti-correlations in line-ratios when varying the orbitals, so that has to be handled carefully.
  • HW: This is disturbing, because it somewhat invalidates the HW and GDZ naive approach, where we independently vary all the errors.
    • VK: But not surprising, since we expected this to be the case, and the independent random variations were the first step towards achieving a better estimate of the variations.
  • Discussion of the importance of these correlations.
    • It matters how far through the process we want to introduce the random variations.
    • If one starts at the very beginning as CB has done, then the correlations show up naturally.
    • If one just wants to vary the emissivities, then it is simple, but the correlations are not as clear and one is being a bit more ad hoc.
    • If one starts in the middle of the process, where one is varying the collision strengths and A-values, then knowing these correlations is imperative.
      • VK: I like HW's idea of building a covariance matrix using CB's calculations and using a multivariate Normal to draw random samples in the HW/GDZ way. Then the former can run slowly, and the latter will give quicker results that everybody can start using right away.
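
A minimal sketch of that covariance idea: estimate a covariance matrix from CB's relatively few bottom-up runs and then draw many cheap, correlated perturbations from a multivariate normal. Working in log rates, the placeholder `cb_runs` array, and the small diagonal ridge are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# cb_runs: one row per bottom-up run, one column per rate (or emissivity).
# The lognormal draw here is only a placeholder for CB's actual output.
cb_runs = rng.lognormal(mean=0.0, sigma=0.1, size=(12, 50))

log_runs = np.log(cb_runs)
mu = log_runs.mean(axis=0)
cov = np.cov(log_runs, rowvar=False)           # captures rate-rate correlations
cov += 1e-8 * np.eye(cov.shape[0])             # ridge: few runs => rank-deficient covariance

# Draw many correlated replicates in the HW/GDZ spirit, but preserving the
# correlation structure estimated from the expensive bottom-up calculations.
samples = np.exp(rng.multivariate_normal(mu, cov, size=1000))
```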

D2.2. CB: Line wavelengths for varying He-like orbitals between H-like and Li-like.

  • Showed plot of line positions for his set of 11 runs.
  • T ~ 3--4 MK
  • Strong dipole lines.
  • 10% variation in A-values ==> 1 part in 200 in wavelength positions.
  • Long discussion of what to do about the fact that CB's approach causes relatively significant variation in the wavelength locations of lines, which we know are not correct for our strong lines.
    • End point:
      • CB can evaluate what the line wavelengths are at an early stage in his calculations.
      • HW et al will provide to CB a list of lines we care about and a cutoff window (or VARIANCE) for each one.
      • In CB's calculations, there is an early stage (just after calculating orbitals) where CB can reject a run if the wavelengths do not fall within their windows (see the sketch after this list).
      • NS says that this sort of two-stage rejection approach would still permit the principled statistical analysis that follows on.
        • VK: The main issue that I see here is importance sampling. Pragmatic Bayes takes the sample of atomic replicates as a fully defined prior, and allows the calculations to wander around this space. If this space is somehow biased, then the prior will be skewed and give unexpected results. However, after the discussions following this point, I think it is not a big problem if CB censors the structures to be consistent with known uncertainties in line wavelengths.
    • Alternate approach:
      • CB could keep all runs, but at an intermediate step force all wavelengths to go back to their known values.
      • This would keep the full variation in A-values, but would imply level populations that are a little inconsistent with the energy levels.
        • VK: I do not like this at all, because this means the A-values do not form a representative sample.
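
A minimal sketch of the early-stage wavelength screening discussed above, assuming a table of measured line wavelengths and allowed windows supplied by HW et al; the dictionary layout and names are illustrative:

```python
def passes_wavelength_check(run_wavelengths, reference, windows):
    """Reject a structure run early if any line of interest falls outside
    its allowed window around the measured wavelength.

    run_wavelengths : dict, line label -> computed wavelength [Angstrom]
    reference       : dict, line label -> measured wavelength [Angstrom]
    windows         : dict, line label -> allowed half-width [Angstrom]
    """
    return all(
        abs(run_wavelengths[line] - reference[line]) <= windows[line]
        for line in reference
    )

# Illustrative use: keep only runs consistent with the observed line positions.
# kept = [run for run in runs
#         if passes_wavelength_check(run_wavelengths[run], reference, windows)]
```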

D2.3. DS: Statisticians have converged.

  • DS found a bug in his code.
  • As a result, DS has now converged to NS results, including the two-mode distributions on both ds and density.
  • Interestingly, the Pragmatic Bayesian approach does not show the two modes, which is a real qualitative difference.
    • VK: this is because the pragmatic Bayes method is not trying to pick out the best atomic replicate curves, but simply takes it on faith that all the given curves are equally feasible. So the multi-modal structure washes out.
  • HW: Q= If you want to add more pixels, can you use the first results as a starting point, or should you start over and analyze multiple pixels jointly?
    • NS: A= You can do either, but you should probably try both and confirm that results are consistent.
    • DVD: Two other methods. NS did brute force that also agreed. And then JC's approach.
    • JC: I'm running something now.
    • DVD: STAN was 15 mins. Other NS thing was 45 mins. DS was several hours, although could be cut down a lot. And JC's takes about 15--20 mins.
    • DVD: These methods are embarrassingly parallelizable.
    • HW: How to extend now to different data points.
    • MW: I would like to see the predicted observations from the "best value" solution (whatever that means to the statisticians). NS: You have to be careful to specify which model you are using to make that calculation.
    • VK: What do we want to know ds, density for? HW: It is a very nice constraint on the TR of these hot loops, very clean of other features. Fabio could use this as input to his MHD models.

D2.4. FR: MHD model Fe observations.

  • FR recapped his MHD loop model.
  • Loop is given a complicated twisting motion at the footpoints and therefore heated by resistivity.
  • Loop model is rooted in the chromosphere.
  • Run on a supercomputer with 1000s of cores.
  • Can take a slice through the TR of the loop, and fold through passbands to generate predicted observations.
  • Since yesterday, they have generated predicted observations in Fe XIII.
  • They find densities between about log ne = 9.5--10.
  • But the thickness of their TR is about a few thousand km, maybe. Certainly not the 10,000 km that the stats are coming up with.
  • The field strength is between 10 and 15 Gauss.
  • The avg T drifts up to about 3.0 to 3.5 MK. The max T is more like 4 MK, give or take.
  • Would like to try stronger fields and maybe we can initiate a flare.
  • HW: Q= What could be done to reduce the computation time?
    • FR: A= It is a highly parallel problem. We've already optimized the time stepping. If we increase the magnetic field then the time steps have to be smaller.

D2.5. GDZ: Why does CHIANTI default model lie on periphery of CB's runs?

  • Following a question from yesterday, GDZ investigated why the CHIANTI default model gives an emissivity curve on the edge of all of CB's curves, even though the codes are nominally the same.
  • GDZ looked at the collision strengths each used for the relevant lines. z = forbidden line; w = resonance line; x, y = intercombination lines.
  • He found some small differences in the collision strengths at the few percent level, which explain the discrepancy in results.
  • Some of the differences (like the x line) he can explain due to how someone did the calculations for CHIANTI. Some of them (like the z line) he is not yet clear on how they are different. (Could be due to the resolution of the energy grid.)

D2.6. FA: Recapping CG's work on uncertainties in DEM inversions.

  • Do a test where a true DEM is folded through the responses into observations, random noise is added, and then DEM models parameterized with only 3 or 4 parameters are fit, such that the parameter space can be easily scanned (see the sketch after this list).
  • Considered a true DEM that was a power law with slope alpha up to some peak temperature Tp, and then a Gaussian dropoff at higher T.
  • Generated EIS observations in about 20 lines, and then did fits of parameterized DEM models to the observations.
  • Was able to analyze how well slope alphas can be found for different peak T.
  • Did a survey of AR cores that have been analyzed in the literature, and found that the derived uncertainties on slope alphas were pretty broad.
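
A minimal sketch of that forward test: build a CG-style "true" DEM (power law with slope alpha up to a peak temperature Tp, Gaussian dropoff above), fold it through line response functions, and add noise. The response functions, grids, normalizations, and noise level are placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)

def dem_model(logT, alpha, logTp, sigma, norm=1.0):
    """Power-law rise with slope alpha up to the peak temperature Tp,
    then a Gaussian dropoff at higher temperatures."""
    return np.where(
        logT <= logTp,
        norm * 10.0 ** (alpha * (logT - logTp)),
        norm * np.exp(-0.5 * ((logT - logTp) / sigma) ** 2),
    )

logT = np.linspace(5.5, 7.0, 151)
true_dem = dem_model(logT, alpha=3.0, logTp=6.6, sigma=0.1, norm=1e22)

# Placeholder response functions for ~20 EIS-like lines on the same logT grid.
responses = rng.random((20, logT.size)) * 1e-24
dT = 10.0 ** logT * np.log(10.0) * (logT[1] - logT[0])     # dT on the log grid
intensities = responses @ (true_dem * dT)                  # I_i = sum_j G_i(T_j) DEM(T_j) dT_j

noisy = intensities * (1.0 + 0.05 * rng.standard_normal(intensities.size))
# The test then fits dem_model back to `noisy` (e.g. a grid scan over
# alpha, logTp, sigma) and asks how well the slope alpha is recovered.
```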

D2.7. VK: DEM model proposal.

  • VK is posing a "simple" DEM problem using Chandra data with the O VI and VII lines.
  • Use 6 lines plus the continuum itself.
  • Want to fit a DEM (3 parameters) + atomic replicates. dV (the volume element analogous to the line-of-sight element ds) is included in the DEM. Density is a bit tricky, has to be interpolated in from replicate curves. VK is still polishing this.
  • DEM model is power law rise and exponential drop off.
    • VK: There are many possible parameterized DEM models available that can be used to approximate "known" Capella DEM: log-Normal, gamma-distrib, scaled gamma-distrib, power-law+Normal a la CG, etc.

D2.8: Working time.

  • Remainder of afternoon was taken for subgroups and working.

Day 3 --- 2016.04.15, Fri

D3.1. NS: New Fast Approx Method.

  • Reporting on work done during the mtg.
  • Using the Laplace approximation to speed up the calculations by making an assumption of a Gaussian distribution.
    • HW'S INTERPRETATION: This is the "MAP" (maximum a posteriori) solution; this is also brute force, evaluating all of the combinations of intensities and atomic physics curves. Does this scale to larger problems?
    • VK: I don't think that is quite correct. A MAP estimate/solution basically finds the mode of a posterior probability density function and calls it a day. The Laplace method approximates an integral by replacing the integrand with a Gaussian matched to its peak and curvature. They might end up with very similar modes in most cases, but the latter still preserves the profile of the posterior. (See the sketch after this list.)
  • Can analyze posteriors for each pixel separately, or jointly.
  • Computing time for 1000 pixels = 6.5 hours.
  • Comparison of full HMC (STAN) method with the Laplace approximation. Results look pretty close.
  • Procedure:
    • Do a random sampling of pixels, and solve that set with both approaches.
    • According to some criterion, check whether a sufficient number of the pixels are solved well enough by the Laplace Method.
    • If OK, then can confidently use Laplace Method for all pixels.
  • For this dataset, doing the separate pixel analysis, atomic model indexed #800 is obviously the most consistently preferred model across all pixels.
    • VK: This strength of selection of just one replicate has bothered me. I think I have made peace with it by looking at it as the range of atomic replicates provided is too large and the number too small and thus insufficient to cover small deviations from #800.
  • Entropy plot shows that pixels for low-density plasma have higher "entropy" and do not tell us much about which atomic models are preferred. The high-density pixels have low entropy and dominate the preference for particular atomic models.
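
A minimal sketch of a Laplace approximation for one pixel's posterior. The negative log-posterior is a placeholder (the real one combines the line likelihood, priors, and an atomic replicate); the point is the mode-plus-Hessian Gaussian and the resulting approximate log evidence that can be used to weight models:

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_evidence(neg_log_post, theta0):
    """Laplace approximation: replace the posterior with a Gaussian at its mode.

    neg_log_post(theta) -> -log p(theta, data) up to a constant (placeholder).
    Returns the mode, the Gaussian covariance, and the approximate
    log int exp(-neg_log_post) d(theta).
    """
    res = minimize(neg_log_post, theta0, method="BFGS")
    cov = np.atleast_2d(res.hess_inv)            # BFGS's approximate inverse Hessian
    d = np.atleast_1d(res.x).size
    _, logdet = np.linalg.slogdet(cov)
    log_evidence = -res.fun + 0.5 * d * np.log(2.0 * np.pi) + 0.5 * logdet
    return res.x, cov, log_evidence

# Illustrative use: a 2-parameter (log n_e, ds) quadratic placeholder posterior.
mode, cov, logZ = laplace_log_evidence(
    lambda th: 0.5 * np.sum((th - np.array([9.5, 1.0])) ** 2), np.zeros(2))
```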

D3.2. HW: Designing the Fe Paper.

  • Have put the manuscript on Overleaf for communal editing.
  • HW: GDZ, do you want to use the 2% or 5% errors? It looked like the 5% were a bit too conservative.
    • GDZ: Referees probably won't be happy with 2%, so prefer 5%.
    • DVD: Should err on side of more conservative, ie, broader priors, ie, use 5%. A little extra big is the better way to lean.
  • DVD: Inference section should distinguish between Computing methods and proper Inference.
    • HW: I.e., between mathematics and pragmatic calculation.
  • Test case #1: Blind test with specific ne, ds, with default CHIANTI model. Should get back the CHIANTI default model as the most preferred one.
    • HW: Also try other curves to compute the intensities and see if these can also be recovered.
  • Pragmatic Bayes approach informs one of how ne, ds results change as models are perturbed. Full Bayes is for finding preferred atomic models.
  • Need a statement somewhere to say that we might be uncovering issues with the calibration or line blending, but we are not dealing with that here.
    • HW: These results suggest that there could be problems with the atomic data but don't prove anything. The atomic data have to be investigated to find a physically plausible explanation for why a likely curve fits the data better.
  • Test case #2: Use FR's MHD model to blind test the solver. This is a good thing to try, because (a) we have T info, and (b) the model has very different ne, ds pairs than what we were getting out of the EIS obs.
    • GDZ: We must be very careful to see how well we recover the right results on the MHD model before we commit to including the results in the paper. Mention the Testa et al. paper where they did something similar with IRIS models, and their results suggest it is not possible to meaningfully do those inversions.
    • The point is that we want to do some sort of goodness of fit check of the parameter estimates (ne, ds) to see how well they predict the data.
      • This is complicated since the errors on the atomic models are folded in.
  • NS, DS: There are still a handful of cases that are causing problems with the Hessian calc. We are working on that. But after that, the codes should be ready for use by HW.
    • HW will also check that all 10^6 combinations have LS solutions.
  • DVD: If you don't think that this data set is significant for telling you something about atomic models, then use the Pragmatic Bayesian. If this is the first time something like this has shed light on the atomic errors, and the data is thought to be good for it, then look to the Full Bayesian.
  • MW: Why don't we replace the Covariance section with a broader Discussion section and absorb it there?
    • The Discussion section should cover all of the speculative issues, but the Conclusion section should be kept to underline the definite results so far and make sure they don't get lost amidst all the other discussion.
  • The Intro section should include a description of the full context of this Team's objectives, what our strategy is, and why we are addressing Fe and (later) O lines.

D3.3. VK: Designing the O-Capella-DEM Paper.

  • Discussion between VK and GDZ over whether O VIII is used. GDZ is concerned about the complication of dealing with the ion fractionation.
    • GDZ: Tricky to set the errors on the ionization balance.
    • AF: Yeah, but the perturbations to the ion balance are going to significantly change these particular lines.
      • [VK: Hmm.. I think Adam said that ion balance uncertainties will not be significant for O VII/O VIII.]
      • [VK: From what I understood, GDZ was concerned about adding extra sources of uncertainties into the mix, and wanted to try doing an analysis with just O VII, a la Fe XIII. The limitation we have with O is that the density sensitive ratio is also sensitive to temperature, and we know for a fact (with Capella) that the plasma is multithermal and the peak EM is far away from the peak of O VII. So I think there is no alternative but to incorporate some kind of EMD information into the analysis.]
  • VK: One of the goals of this paper is to take us one step closer to DEM inferences.

D3.4. HW: Future Plans.

  • HW reports that ISSI has agreed that we can extend to June 2017, and thus have one more mtg.
    • We are right on the edge of our budget, so we might have to juggle individual travel plans.
    • Some people volunteered to self-support if necessary.
  • There was a consensus that we want to have the 3rd mtg.
  • Expression of expectation that the Fe paper can be drafted in the next couple of months.
  • Expect that the O paper is on the timescale of about a year.
  • Want to bring in effective area curves?
    • First as simple functions, then with errors to learn something about instrument calibrations?
      • [VK: I was puzzled when HW brought this up. Effective area curves are usually never simple functions, and have all sorts of edge structures. Could be simple over the Fe XIII wavelength region, but if so I would be quite (pleasantly) surprised. Perhaps HW meant simple multiplicative functional modifications of existing effective area curves?]
  • Need to bring in ion fractionation and abundances if want to do DEMs.
    • VK: Eventually.
  • To start with:
    • Try to incrementally increase the complexity of the solver we have so far.
    • We can parameterize the DEMs as a set of amplitudes on pseudo-isothermal components (radial basis functions); see the sketch after this list. (The DEM replaces the density, ds pair.)
    • Keep effective area as just a constant function.
    • Need to include ion fractions, but could sample errors as we do for the atomic data.
  • DVD: Give us [statisticians] a functional form for all of these pieces put together.
  • MW: Do we need to talk about proposing for grants to support work past June 2017? If we need support to be continuous from June 2017, then there is urgency for us to work on any such proposal. If that is not a concern, then we could punt on thinking about proposals for a while.
    • DVD: Would be nice to support a grad student or post doc.
    • HW: Complicated because there aren't any relevant international grants. Our group would have to split up nationally on finding proposal opportunities.
    • HW: Is it possible to have a budget for hosting a workshop in a grant.
      • MW: I think it should be possible.
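
A minimal sketch of the pseudo-isothermal / radial-basis-function DEM parameterization mentioned above, assuming Gaussian basis functions in log T; the node spacing, width, and grid are illustrative choices:

```python
import numpy as np

def rbf_dem(logT, amplitudes, nodes, width=0.15):
    """DEM(logT) as a sum of pseudo-isothermal Gaussian components in log T.

    amplitudes : non-negative weights, one per node (the parameters to fit)
    nodes      : centres of the basis functions in log T
    """
    basis = np.exp(-0.5 * ((logT[:, None] - nodes[None, :]) / width) ** 2)
    return basis @ np.asarray(amplitudes)

logT = np.linspace(5.8, 7.2, 141)
nodes = np.arange(6.0, 7.01, 0.2)          # pseudo-isothermal components
dem = rbf_dem(logT, amplitudes=np.ones(nodes.size), nodes=nodes)
# Line intensities then follow as I_i = sum_j G_i(T_j) * DEM(T_j) * dT_j,
# replacing the single (density, ds) pair used in the Fe XIII analysis.
```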

D3.5: Misc conversation in the afternoon.

  • Let's have a mtg at CfA in Autumn 2016, probably after Hinode-10 and before AGU / holidays. (Ie, mid-Sept to late-Nov.)
    • This might be a Stats-oriented meeting.
    • Expect Fe paper to be finished by then...
  • Let's have the next ISSI mtg in Spring 2017.
    • JC: I teach classes in the Spring, so I am not available starting circa January 2017.
    • GDZ: If we have it in January, we could plan for another ISSI proposal. The deadlines are in Feb and Mar.

D3. Post-script A: GDZ/HW conversation AFTER the ISSI meeting.

  • Also look at EIS data from earlier in the mission 20061225_225013 [full CCD].
  • Add Fe XIII 196.5 and 209.6 A lines to the list of intensities.
  • Save perturbations so that we can recover the collisional rates for most-likely curve.
    • Q= Can this be done by using a fixed seed or do we need to save the rates?
    • A= Probably not that much data to be saved.
  • To proceed we need to:
    • Compute all line intensities for both data sets of interest.
    • Recompute perturbed atomic data assuming >5% and saving seed or rates.
    • Compute LS fits to all combinations of intensities and curves.
    • Run DS/NS routines to find results from Pragmatic Bayes and Full Bayes.

Future Meetings

CfA Meeting (Autumn 2016, tentative):

3rd ISSI Meeting (Spring 2017):