In this post: http://iloapp.thejll.com/blog/earthshine?Home&post=297 we investigated the effects of alignment on the noise. We measured noise in squares on the final image and therefore picked up some ‘cross-talk’ from variations due to surface structure.
Now we consider the variance-over-mean along the stack direction – that is, for each image pixel we calculate the variance and the mean down the stack and look at their ratio, which we call ‘VOM’. In a perfect world, this image should be 1 everywhere.
We look first at unaligned images, then at aligned images. The ‘superbias’ was subtracted (but not scaled to the adjacent dark-frame average). A RON of 8.3 electrons was allowed for, VOM being calculated as (SD − RON)²/mean. The gain used was 3.8 e/ADU. We plot VOM in a slice across the final images:
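For reference, the per-pixel calculation can be sketched as below. This is Python rather than the IDL actually used; the gain of 3.8 e/ADU, the RON of 8.3 electrons and the (SD − RON)²/mean expression follow the post, while the toy pixel data are made up.

```python
import random

GAIN = 3.8   # e/ADU (from the post)
RON = 8.3    # read-out noise in electrons (from the post)

def vom(stack_adu):
    """Variance-over-mean for ONE pixel, given its values down the stack.

    Uses the expression from the post, (SD - RON)^2 / mean, with
    everything converted to electrons first.
    """
    n = len(stack_adu)
    e = [v * GAIN for v in stack_adu]               # ADU -> electrons
    mean = sum(e) / n
    var = sum((x - mean) ** 2 for x in e) / (n - 1)
    return (var ** 0.5 - RON) ** 2 / mean

# Toy pixel: 100 exposures of a roughly 2000-electron signal with
# photon noise plus read-out noise (the levels are made up).
random.seed(0)
pix = [(random.gauss(2000.0, 2000.0 ** 0.5) + random.gauss(0.0, RON)) / GAIN
       for _ in range(100)]
print(round(vom(pix), 2))
```

For a pure photon-noise signal in electrons, variance/mean would be 1, which is why the ‘perfect world’ image above should be 1.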
Top is VOM in the unaligned image; below is VOM in the aligned image. Several things can be noted. First, the surface is obviously not constant. Second, the effect of alignment is not just the uniform lowering of the VOM that we would expect (same mean, less cross-talk between pixels).
In the top image, VOM is near 0.1 (dotted line) on the DS and over most of the sky. On the BS, VOM is near 10, apart from the peaks that occur at the intersection with the lunar disc edge. There the variance rises because of jitter between images and the consequent mixing of disc and sky pixels.
In the aligned image, VOM is near 2 or 3 on the BS disc and higher at the peaks (so there is still an edge effect, but a smaller one). On the DS and the sky, a spatial structure has formed, slanting inwards towards the BS.
What is going on? Effects of sub-pixel shifting? What does the interpolation do? Why is it spatially dependent? Strange non-conservation of flux?
The sub-pixel shifting used is based on the IDL ‘INTERPOLATE’ function. The manual suggests enabling cubic convolution interpolation by setting the keyword cubic=-0.5. I did this and recalculated the above; the effect was slight.
A test was performed to see how much variance the interpolation itself induces. We shifted an image by a non-integer number of pixels in x and y, shifted the result back, and calculated the difference from the original. Using INTERPOLATE without ‘cubic’ induces 2–3 times more residual S.D. than using INTERPOLATE with cubic=-0.5. Both interpolations lost variance compared to the original image; with cubic interpolation the loss of standard deviation relative to the mean is at the 1% level – quite a lot, actually. [Note: conservation of flux is not the same as conservation of variance.]
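The round-trip test can be sketched in one dimension. This is Python rather than IDL: the linear kernel and the Keys cubic-convolution kernel below are the standard textbook forms (a = −0.5 corresponds to IDL’s cubic=-0.5), the shift amount and noise level are made up, and the sketch illustrates the comparison rather than reproducing the exact 2–3× ratio quoted above.

```python
import random

def cubic_kernel(x, a=-0.5):
    # Keys cubic-convolution kernel; a = -0.5 matches IDL's cubic=-0.5
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def shift_cubic(sig, s):
    # Shift a 1-D signal by s pixels using 4-tap cubic convolution
    n = len(sig)
    out = []
    for i in range(n):
        x = i - s                        # source coordinate
        j = int(x // 1)                  # floor
        acc = 0.0
        for k in range(j - 1, j + 3):    # four nearest samples
            kk = min(max(k, 0), n - 1)   # clamp at the edges
            acc += sig[kk] * cubic_kernel(x - k)
        out.append(acc)
    return out

def shift_linear(sig, s):
    # Same shift with plain linear interpolation (INTERPOLATE's default)
    n = len(sig)
    out = []
    for i in range(n):
        x = i - s
        j = int(x // 1)
        f = x - j
        j0 = min(max(j, 0), n - 1)
        j1 = min(max(j + 1, 0), n - 1)
        out.append((1 - f) * sig[j0] + f * sig[j1])
    return out

def residual_sd(orig, round_trip):
    # S.D. of (round-trip minus original), ignoring edge samples
    diff = [b - a for a, b in zip(orig, round_trip)][5:-5]
    m = sum(diff) / len(diff)
    return (sum((d - m) ** 2 for d in diff) / len(diff)) ** 0.5

# Noisy test signal: mean 100, sigma 10 (made-up levels)
random.seed(1)
sig = [100.0 + random.gauss(0.0, 10.0) for _ in range(2000)]

# Shift by +0.3 pixels, then back by -0.3, with each method
rt_lin = shift_linear(shift_linear(sig, 0.3), -0.3)
rt_cub = shift_cubic(shift_cubic(sig, 0.3), -0.3)

sd_lin = residual_sd(sig, rt_lin)
sd_cub = residual_sd(sig, rt_cub)
print(sd_lin, sd_cub)
```

On white noise the linear round trip leaves a larger residual S.D. than the cubic one, because the two-tap linear kernel smooths (and so destroys) more of the pixel-to-pixel variance than the sharper four-tap cubic kernel.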
Could it be some effect of bias-subtraction? VOM has dropped most in the halo near the BS. Why?
Since we are looking at variance divided by mean, we can explain low values of VOM in two ways: the variance is too small, or the mean is too large, or both. The mean is affected by bias subtraction, and the effect will be largest where the mean is small to begin with. Above, we see a very low VOM where the mean is actually ‘picking up’ – we are in the halo, moving towards the BS. Errors in bias subtraction should be smaller there than on the sky. So we turn to the variance – apparently it is too small in the halo part. We influence the variance when we subtract RON – perhaps we are subtracting too much? But again, that effect should be largest where the variance is small – i.e. not on the brightening halo. The mystery persists.
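The last argument can be made numerical. A small sketch, with made-up halo and sky levels in electrons, of how an over-estimated RON would move VOM = (SD − RON)²/mean at the two locations:

```python
RON_TRUE = 8.3    # electrons (from the post)
RON_USED = 9.0    # hypothetical over-estimate

def vom(sd, ron, mean):
    # VOM as computed in the post, everything in electrons
    return (sd - ron) ** 2 / mean

# Made-up illustrative levels: faint sky vs. brightening halo
sky  = dict(mean=150.0,  sd=14.0)   # SD close to the RON
halo = dict(mean=2000.0, sd=46.0)   # photon noise dominates

for name, p in (("sky", sky), ("halo", halo)):
    v_true = vom(p["sd"], RON_TRUE, p["mean"])
    v_used = vom(p["sd"], RON_USED, p["mean"])
    drop = 1 - v_used / v_true      # fractional drop in VOM
    print(name, round(drop, 3))
```

This supports the point above: over-subtracting RON depresses VOM far more where the SD is close to the RON (the sky) than in the brightening halo, so RON over-subtraction alone does not explain a VOM deficit that is strongest in the halo.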