With various systems to simulate realistic observations at hand – Chris’ noise simulator and Hans’ ideal image simulator – it becomes possible to pose some questions, such as:
1) Is it better to reduce every blessed image and then average the derived results, or should the images be coadded first and then reduced? (A toy illustration of why the order matters for nonlinear reductions is sketched just after this list.)
2) Knowing that the bias has a 1-count amplitude and a 20-minute period, is it then a sensible strategy to scale a low-noise ‘superbias’ to the observed but noisy bias? (A least-squares version of such a scaling is also sketched below.)
3) What is the relationship between precision in alfa and scatter in the DS/BS ratio, in the presence of realistic noise?
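A toy illustration of question 1, in Python: when the reduction step is nonlinear (here a hypothetical ratio of two image patches, since a purely linear reduction would make the two pipelines identical by construction), the order of reducing and averaging matters, because the mean of a ratio is not the ratio of the means. All numbers below are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model: N noisy 'images', each summarised by the mean counts in
    # two patches, a and b; the 'reduction' is the nonlinear ratio a/b.
    N = 10000
    true_a, true_b = 100.0, 50.0
    a = true_a + rng.normal(0.0, 5.0, N)   # patch-A means, one per image
    b = true_b + rng.normal(0.0, 5.0, N)   # patch-B means, one per image

    reduce_then_average = np.mean(a / b)          # reduce each image, then average
    coadd_then_reduce = np.mean(a) / np.mean(b)   # coadd (average) first, reduce once

    print(f"true ratio:          {true_a / true_b:.4f}")
    print(f"reduce then average: {reduce_then_average:.4f}")  # biased: E[1/b] > 1/E[b]
    print(f"coadd then reduce:   {coadd_then_reduce:.4f}")

For this toy case the reduce-then-average pipeline carries a small multiplicative bias of order (sigma_b/b)^2, while coadding first suppresses the noise before the nonlinearity is applied.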
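And a minimal sketch of the superbias-scaling strategy in question 2, assuming the simplest possible implementation: a single least-squares scale factor between a hypothetical low-noise superbias and one noisy observed bias frame. The bias level, noise levels and drift factor are all invented for illustration.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical frames: a low-noise 'superbias' (e.g. the mean of many
    # bias frames) and a single noisy bias whose level has drifted slightly
    # (the 1-count, 20-minute cycle would show up as such a drift).
    superbias = 400.0 + rng.normal(0.0, 0.05, size=(256, 256))
    true_scale = 1.0021
    observed = true_scale * superbias + rng.normal(0.0, 3.0, size=superbias.shape)

    # Least-squares scale factor k minimising |observed - k * superbias|^2:
    k = np.sum(observed * superbias) / np.sum(superbias**2)
    print(f"fitted scale: {k:.5f}  (true: {true_scale:.5f})")

    # The scaled superbias is far less noisy than the single observed frame.
    residual = observed - k * superbias
    print(f"residual RMS: {residual.std():.2f} counts")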
A partial, idealized answer to 3) is hinted at by considering the change in the DS and BS intensities as a function of a change in alfa:
DS change: -15.9935 %
BS change: 0.0893807 %
alfa change: 0.995023 % [Note that the PSF was normalized]
I.e., if we change alfa by 1% we get a 16% change in a typical DS point and a 0.1% change in a typical BS point. This suggests that we need to know alfa to an accuracy of about 0.6% (a 10% error budget on DS implies 10/16 ≈ 0.6% on alfa); with alfa at 1.7, that is roughly 0.01 in absolute terms. This would be the typical step size of a grid search, for instance, and the tolerance on any downhill-descent search. Perhaps it is easy to get such accuracy with methods that ‘fit the sky’ – such as both the BBSO method and our own forward method.
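The step-size argument can be written out explicitly. In the sketch below the sensitivities are the numbers quoted above; the 10% error budget on DS is an assumption made purely for illustration.

    # Sensitivity of DS to alfa, from the numbers above: ~16% in DS per 1% in alfa.
    sensitivity_ds = 15.9935 / 0.995023
    alfa = 1.7                          # fiducial value quoted above

    target_ds_error = 10.0              # assumed error budget on DS, in percent
    alfa_rel_tol = target_ds_error / sensitivity_ds   # ~0.6 percent
    alfa_abs_tol = alfa * alfa_rel_tol / 100.0        # ~0.01 in alfa

    print(f"relative tolerance on alfa: {alfa_rel_tol:.2f} %")
    print(f"absolute tolerance on alfa: {alfa_abs_tol:.4f}")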
How does the above change in the presence of realistic noise?
I am not sure about the alfa sensitivity described above – it is much larger than I would have expected from tests here, and it doesn’t square with my experience. I’ll look into this!