In several posts we have considered the behaviour of variance-over-mean (VOM) images. Ideally these should be 1 everywhere, because the noise (once the RON contribution is subtracted) is Poisson-distributed. We have seen that this is hardly the case in observed images.

We now consider a test using ideal models, with and without imposed image drifts. We generate 100 synthetic images containing Poisson noise, RON, and a bias level.
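For concreteness, here is a minimal sketch (in Python/numpy) of how such a synthetic stack and its VOM image can be built. The frame size, bias level, RON value and scene levels are illustrative assumptions, and a gain of 1 is taken so that the Poisson variance equals the mean:

```python
import numpy as np

rng = np.random.default_rng(1)

n_frames, ny, nx = 100, 128, 256   # assumed stack geometry
bias = 400.0                       # assumed bias level (counts, gain of 1)
ron = 3.0                          # assumed read-out noise (electrons)

# An illustrative 'scene' in electrons: bright-side, dark-side and sky levels.
x = np.linspace(0.0, 1.0, nx)
scene = np.where(x < 0.4, 20000.0, np.where(x < 0.7, 150.0, 20.0))
scene = np.tile(scene, (ny, 1))

# Each frame: Poisson noise on the scene, plus Gaussian RON, plus the bias.
stack = np.empty((n_frames, ny, nx))
for k in range(n_frames):
    stack[k] = rng.poisson(scene) + rng.normal(0.0, ron, (ny, nx)) + bias

# Per-pixel variance-over-mean along the stack: subtract the bias from the
# mean and the RON variance from the variance; ideally the ratio is 1.
vom = (stack.var(axis=0, ddof=1) - ron**2) / (stack.mean(axis=0) - bias)
```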

Below, each set of three panels shows a cut across one image in the synthetic stack, then a slice of the VOM image before image alignment, and then the same slice after alignment. On the left are images that were not drifted, while on the right the images were allowed to ‘jiggle around’ by a few pixels.

We see that VOM is 1 on the BS in un-jiggled images, but that DS and sky values fall below 1.

We see a HUGE effect on images that were allowed to drift.
Some of it we understand. Before alignment, VOM rises on structured areas of the drifted images because surface albedo variations are introduced at a given pixel along the stack direction. The effect on the DS and sky is much smaller – perhaps because the Poisson noise is so large compared to the signal variations. After alignment, VOM falls to slightly below 1 on the BS, except near the edge. On the DS and the sky, though, a large drop is seen. So far it is not understood how this comes about.
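For reference, a sketch of the jiggle-and-realign step, continuing the synthetic stack above. The drift amplitude and the use of scipy.ndimage.shift with linear interpolation are assumptions for illustration; the figures may have been produced with a different shift and interpolation scheme:

```python
from scipy.ndimage import shift as subpixel_shift

# Random drifts of a few pixels per frame (assumed amplitude of up to 3 pixels).
drifts = rng.uniform(-3.0, 3.0, size=(n_frames, 2))

jiggled = np.empty_like(stack)
aligned = np.empty_like(stack)
for k in range(n_frames):
    # Drift the frame, then 'realign' it by shifting back; both steps use
    # (assumed) linear interpolation, which is where sub-pixel resampling
    # enters the stack.
    jiggled[k] = subpixel_shift(stack[k], drifts[k], order=1, mode='nearest')
    aligned[k] = subpixel_shift(jiggled[k], -drifts[k], order=1, mode='nearest')

# VOM before and after alignment, as in the panels described above.
vom_before = (jiggled.var(axis=0, ddof=1) - ron**2) / (jiggled.mean(axis=0) - bias)
vom_after = (aligned.var(axis=0, ddof=1) - ron**2) / (aligned.mean(axis=0) - bias)
```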

Any strange effects seen in observed images will be all the larger since the images do not just drift but also ‘shimmy’ because of atmospheric turbulence.

The effect of aligning images by sub-pixel shifts, which requires interpolation, is part of what we see above.

Let us learn from this that a noise model is probably hopeless to build in the presence of image shifts – even after realignment – and that sub-pixel interpolation adds an unwelcome complication of its own. We could instead omit the individual images with the most drift and use a straight average of the remaining stack, as sketched below. In real images we have the option of not using stacks that show a lot of drift – but we do not know the extent of the ‘shimmy’ for the remainder.
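One way to realise the ‘omit the worst drifters’ idea, continuing the sketches above: estimate each frame's drift (here, purely for illustration, from the peak of an FFT cross-correlation against the first frame), drop the frames that moved most, and take a straight mean of the rest without any interpolation:

```python
# Estimate each frame's integer-pixel drift relative to frame 0 from the peak
# of an FFT-based cross-correlation (only the drift magnitude is used below).
ref = jiggled[0] - jiggled[0].mean()
est = np.empty((n_frames, 2))
for k in range(n_frames):
    img = jiggled[k] - jiggled[k].mean()
    cc = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # Peak indices in the upper half of the range correspond to negative offsets.
    est[k] = [p if p <= s // 2 else p - s for p, s in zip(peak, cc.shape)]

# Keep only frames with small estimated drift and take a straight mean,
# avoiding any sub-pixel interpolation of the kept frames.
keep = np.hypot(est[:, 0], est[:, 1]) <= 1.0   # assumed tolerance of 1 pixel
straight_mean = jiggled[keep].mean(axis=0)
```

The 1-pixel tolerance and the choice of frame 0 as reference are arbitrary here; the point is only that frame selection plus a straight average avoids the interpolation step entirely.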

These realizations bear most strongly on our ability to interpret results that discuss the effect of alignment: alignment reduces some problems but probably introduces others.