So, under the assumption that I understand the problem, let N = n1 + n2, and let pi1 = n1/N and pi2 = n2/N (or some other weighting scheme). Then the finite mixture distribution is:

f(y|theta) = pi1*f1(y) + pi2*f2(y) (where theta is a parameter vector containing the parameters of both f1 and f2)

and the expected value of the mixture is just the same weighted sum of the components' expected values.
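As a minimal sketch of that weighted-sum property, assuming (hypothetically) that f1 and f2 are normal with the means from the brightness example below, the mixture mean can be computed directly and checked by simulation:

```python
import numpy as np

# Hypothetical subsample sizes and sample-size weights
n1, n2 = 100, 10
N = n1 + n2
pi1, pi2 = n1 / N, n2 / N

# Assumed component means (and spreads) for f1 and f2
mu1, mu2 = 5.0, 7.0

# Expected value of the mixture = weighted sum of component means
mixture_mean = pi1 * mu1 + pi2 * mu2
print(mixture_mean)

# Sanity check: draw from the mixture and compare the sample mean
rng = np.random.default_rng(0)
z = rng.random(100_000) < pi1                  # component indicator
draws = np.where(z,
                 rng.normal(mu1, 0.3, 100_000),
                 rng.normal(mu2, 1.0, 100_000))
print(draws.mean())
```

The simulated mean agrees with pi1*mu1 + pi2*mu2 to within Monte Carlo error, regardless of the component variances.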

Does this help? If not I can discuss a system reliability example.

A simple example that Hyunsook came up with to explain the problem: suppose you ask a hundred men and ten women how bright the day was, and suppose the hundred men said it was 5+-0.3 (on a scale of 0-10) while the ten women said it was 7+-1.0. How bright *was* the day, then?
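One standard (but not necessarily correct) answer is inverse-variance weighting, the fixed-effect meta-analysis rule, under the assumption that the +- values are standard errors of the two group means:

```python
# Hypothetical group estimates; treating +- as standard errors is an assumption
mean_men,   se_men   = 5.0, 0.3   # the hundred men
mean_women, se_women = 7.0, 1.0   # the ten women

# Inverse-variance weights: more precise estimates count for more
w_men   = 1.0 / se_men**2
w_women = 1.0 / se_women**2

pooled    = (w_men * mean_men + w_women * mean_women) / (w_men + w_women)
pooled_se = (w_men + w_women) ** -0.5

print(round(pooled, 3), round(pooled_se, 3))
```

Here the men's tighter error bar dominates, so the pooled answer lands near 5.17 rather than near the sample-size-weighted 5.18; with other weighting schemes (or a hierarchical model) the answer would differ, which is exactly the point of the example.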

Sutton, A.J., and Higgins, J.P.T. (2007) Recent developments in meta-analysis. Statistics in Medicine, 27(5), 625-650

seems like a decent place to start. It has some basics, some history, and plenty of references for anyone interested. There is also a section in the red book (Bayesian Data Analysis by Gelman et al.) with an introduction and an example. Meta-analysis is essentially hierarchical modelling, though; the meta-analysis literature is just more concerned with the specific issues involved in combining different studies. Hope that helps…

Anyway, after considerable offline discussion, it came down to exactly that point. If you have some reason to trust one dataset over the other, find a weight that uses that information. Without that information, the only thing to do is lump them all together and pray.
