(a) The Nyquist limitation can be considerably mitigated if the sampling is non-uniform (see, for instance, Bretthorst, G. L., Bayesian Spectrum Analysis and Parameter Estimation, Lecture Notes in Statistics, Vol. 48, eds. Berger et al., Springer, 1988).
(b) Each run comprises a count of the surviving capture products. Since the products decay exponentially, the measurement derived from each run is not a uniform average of all captures during the run, but is heavily weighted toward the end of the run. Both factors (the non-uniform sampling and the end-of-run weighting) play an important role in greatly enhancing the capabilities of power-spectrum analysis of radiochemical data.
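As a minimal sketch of point (a), not the actual analysis: with irregular sample times, a least-squares periodogram can recover a frequency well above the nominal Nyquist frequency implied by the mean sampling rate. All times, frequencies, and noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example: 100 runs at irregular times over 10 time units.
# The mean-rate Nyquist frequency would be ~5 cycles per unit time,
# but the signal frequency is nearly three times higher.
t = np.sort(rng.uniform(0.0, 10.0, 100))
f_true = 13.7
y = np.sin(2 * np.pi * f_true * t) + 0.3 * rng.normal(size=t.size)

def ls_power(t, y, freqs):
    """Least-squares sinusoid power (squared amplitude) at each trial frequency."""
    y = y - y.mean()
    power = np.empty(freqs.size)
    for i, f in enumerate(freqs):
        # Design matrix with cosine and sine columns at frequency f.
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        power[i] = coef @ coef
    return power

freqs = np.linspace(0.5, 20.0, 2000)
p = ls_power(t, y, freqs)
print(freqs[np.argmax(p)])   # peak falls near f_true = 13.7, well above ~5
```

With uniform sampling at the same mean rate, 13.7 would alias onto a lower frequency; irregular sampling breaks that degeneracy.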
No, it is not weak. Any one test, on any one data set, may yield results at a confidence level of order 99%. However, we have analyzed two data sets (Homestake and GALLEX-GNO) in some detail, carrying out several tests on each data set, so the cumulative significance is quite impressive.
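To make "cumulative significance" concrete, here is a hedged sketch (with invented p-values, not the actual results) of combining independent tests by Fisher's method:

```python
import math

# Illustrative p-values only: suppose four independent tests each reach
# roughly the 99% confidence level (p ~ 0.01) on its own.
p_values = [0.01, 0.01, 0.01, 0.02]

# Fisher's method: -2 * sum(ln p) follows a chi-squared distribution with
# 2k degrees of freedom under the null hypothesis (k = number of tests).
stat = -2.0 * sum(math.log(p) for p in p_values)
k = len(p_values)

# Closed-form survival function of chi-squared with an even number (2k)
# of degrees of freedom.
combined_p = math.exp(-stat / 2) * sum(
    (stat / 2) ** i / math.factorial(i) for i in range(k)
)
print(combined_p)   # a few times 1e-5: far beyond any single test's 0.01
```

Several modest individual significances thus compound into a combined probability orders of magnitude smaller, which is the sense in which the cumulative result is strong.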
(a) It is impossible to tell whether or not such data contain a periodicity simply by inspection. Press et al. (Numerical Recipes in Fortran, Cambridge University Press, 2nd ed., 1992, p. 571) give a relevant example of a time series that looks like a scatter diagram, but is in fact strongly periodic.
(b) We find that we can re-arrange the GALLEX-GNO data in such a way that it still looks like a scatter diagram, but is in fact strongly periodic.
(c) The reality of any claimed signal can be established only through appropriate statistical tests. The validity of the tests that we have used has been verified by applying them to synthetic data.
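A hedged sketch of points (a) and (c) together, on invented data: an irregularly sampled pure sinusoid that would plot as an apparent scatter diagram, with its periodicity confirmed by a permutation (shuffle) test rather than by eye.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented example: 60 irregular sample times of a noiseless sinusoid.
# A plot of (t, y) looks like a scatter diagram, yet the data are
# perfectly periodic.
t = np.sort(rng.uniform(0.0, 50.0, 60))
y = np.sin(2 * np.pi * 1.37 * t)

freqs = np.linspace(0.1, 3.0, 1500)

def peak_power(t, y, freqs):
    """Maximum of the classical (Schuster) periodogram over a frequency grid."""
    y = y - y.mean()
    arg = 2 * np.pi * np.outer(freqs, t)
    return np.max((np.cos(arg) @ y) ** 2 + (np.sin(arg) @ y) ** 2)

observed = peak_power(t, y, freqs)

# Permutation test: shuffling y against t destroys any periodicity, so
# the shuffled peak powers sample the null distribution.
null = [peak_power(t, rng.permutation(y), freqs) for _ in range(200)]
p_value = np.mean([n >= observed for n in null])
print(p_value)   # essentially 0: the "scatter" is in fact strongly periodic
```

This is the spirit of verifying a test on synthetic data: the test must flag data built to be periodic and must not flag the shuffled (null) versions.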
(a) The amount of data is not that small. For instance, we estimate that each GALLEX-GNO run conservatively contains about 0.3 bits of information. All 84 runs therefore represent about 25 bits, enough information to specify a decimal number accurate to 7 digits!
(b) The experimenters have taken account of systematic errors in the data analysis leading up to their published results. If there were additional systematic errors to be taken into account, they would depend on experimental parameters in the same way for all runs, and therefore would not introduce spurious periodicities.
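The information estimate in (a) above is simple arithmetic (a sketch using the article's own conservative figure of 0.3 bits per run):

```python
import math

# 84 runs at ~0.3 bits each, converted from bits to decimal digits.
bits_per_run = 0.3
runs = 84
total_bits = bits_per_run * runs              # about 25 bits
decimal_digits = total_bits * math.log10(2)   # 1 bit = log10(2) decimal digits
print(round(total_bits, 1), round(decimal_digits, 1))   # 25.2 7.6
```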
(a) Here again, irregular sampling has a huge advantage over regular sampling. Aliases of a real signal typically have much smaller power than the real signal and, if one has determined the spectrum of the sampling process, one can determine whether it is possible that two or more peaks are related by the aliasing.
(b) One can also check to see whether two or more peaks are related by removing one of the peaks (by subtracting the appropriate oscillation), and determining whether or not the others go away.
(c) Another point is that, if one is analyzing two data sets, one can determine whether a candidate frequency turns up in both data sets, or only in one.
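A hedged sketch of the check described in (b), on invented data: fit and subtract the oscillation at the strongest peak, then see whether a second peak survives (a real second signal) or vanishes (an alias of the first).

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented two-frequency signal at irregular times, plus mild noise.
t = np.sort(rng.uniform(0.0, 40.0, 120))
y = (np.sin(2 * np.pi * 0.8 * t)
     + 0.6 * np.sin(2 * np.pi * 1.9 * t)
     + 0.2 * rng.normal(size=t.size))

def periodogram(t, y, freqs):
    y = y - y.mean()
    arg = 2 * np.pi * np.outer(freqs, t)
    return (np.cos(arg) @ y) ** 2 + (np.sin(arg) @ y) ** 2

def subtract(t, y, f):
    """Least-squares fit of a sinusoid at frequency f, removed from y."""
    A = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, y - y.mean(), rcond=None)
    return y - A @ coef

freqs = np.linspace(0.1, 3.0, 3000)
f1 = freqs[np.argmax(periodogram(t, y, freqs))]          # strongest peak (~0.8)
resid = subtract(t, y, f1)
f2 = freqs[np.argmax(periodogram(t, resid, freqs))]      # next peak (~1.9)
print(f1, f2)
```

Here the second peak persists after the first is removed, as a genuine second periodicity should; an alias of the first peak would largely disappear with it.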
(a) Caldwell and Pulido conclude, from their independent analyses of all published neutrino data, that the RSFP process gives at least as good a fit to the time-averaged data as does the MSW effect.
(b) If the flux varies, the MSW effect alone cannot provide an answer to the neutrino problem.
If neutrinos are Dirac particles, the RSFP effect converts them into right-handed sterile neutrinos, which no experiment can detect. If neutrinos are Majorana particles, the RSFP effect converts them into antineutrinos of a different flavor, which radiochemical experiments cannot detect but water experiments can detect with low efficiency.
Another way to identify the RSFP process is by the energy dependence of the surviving electron neutrinos, which are detected efficiently by all experiments. In particular, if the RSFP effect is in operation, the survival probability of intermediate-energy neutrinos (pep, 7Be, CNO) will be in the range 0.05 to 0.15, whereas the LMA, LOW, and VAC processes can reduce the survival probability only to the range 0.4 to 0.5. The upcoming Borexino experiment should give an improved measurement of the suppression factor in this range.
Maintained by Mark Weber.
Last modified: 2002 April 18.