We are interested in just how good our method is at measuring B-V in small areas on the DS and the BS of observed images. In this entry we found an error as low as 0.005 (once we stopped measuring incorrectly). Is this the lower limit?
We now use synthetic images with realistic levels of Poisson noise added to find out. We use ‘syntheticmoon_new.f’ from Chris Flynn and input the ideal image from JD2455814 (i.e. the almost Full Moon), then coadd 100 images, each with realistic noise added and folded with a realistic PSF (alfa=1.7). We do this twice, with different noise seeds, convert to instrumental magnitudes, and then subtract the two images, pretending they are B and V images; we call the result our ‘synthetic B-V image’.
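For reference, a minimal sketch of this procedure (in Python, rather than the actual Fortran of ‘syntheticmoon_new.f’) might look like the code below; the flat input patch, the count level and the Gaussian stand-in for the alfa=1.7 PSF are illustrative assumptions only:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize(ideal, n_coadd=100, psf_sigma=1.7, seed=0):
    """Coadd n_coadd realizations of the ideal image, each with Poisson
    noise added and folded with a PSF.  A Gaussian is used here purely as
    a stand-in for the alfa=1.7 profile in syntheticmoon_new.f."""
    rng = np.random.default_rng(seed)
    frames = rng.poisson(ideal, size=(n_coadd,) + ideal.shape).astype(float)
    frames = np.array([gaussian_filter(f, psf_sigma) for f in frames])
    return frames.mean(axis=0)

# A flat patch stands in for the JD2455814 ideal image (assumed count level)
ideal = np.full((64, 64), 1.0e4)

# Two independent noise seeds play the roles of the B and V frames
img_b = synthesize(ideal, seed=1)
img_v = synthesize(ideal, seed=2)

# Instrumental magnitudes, subtracted: the 'synthetic B-V image'
synth_bv = -2.5 * np.log10(img_b) + 2.5 * np.log10(img_v)
print(synth_bv.std())   # scatter inside the patch, i.e. the lower error limit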
The resulting image is then measured in the areas designated by the agreed selenographic coordinates (those used in Table 1 of our little paper). The errors found are:
0.0004 in the synthetic image
0.001 in the case of the real observed images
So, something is generating extra noise for us in the observed images, increasing the noise by more than a factor of two. This could be due to many things: real images are bias subtracted and flat fielded while the synthetic ones are not; there is no readout noise in the synthetic images; image registration does not have to be performed on the synthetic images (though it could be simulated), and we have seen the variance do strange things during the interpolations that necessarily take place during image alignment. There may be more issues to think of here.
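As an aside, the ‘strange things’ interpolation can do to the variance are easy to demonstrate. The little Python sketch below (not part of our pipeline, just an illustration) shifts a pure-noise image by half a pixel with bilinear interpolation: the per-pixel variance drops because neighbouring pixels get averaged, and the noise becomes spatially correlated.

```python
import numpy as np
from scipy.ndimage import shift

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(512, 512))   # unit-variance noise field

# Sub-pixel shift, as happens during image registration/alignment.
# Bilinear (order=1) interpolation averages neighbouring pixels, so the
# per-pixel variance drops and the noise becomes spatially correlated.
aligned = shift(noise, (0.5, 0.5), order=1)

print(noise.var())     # close to 1.0
print(aligned.var())   # clearly below 1.0 (about 0.25 for a half-pixel shift)
```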
This was for a situation with full illumination on both patches. What happens when one is in Earthshine only?
We will simulate this but, for now, assuming the DS is 1000 times fainter than the BS, we would expect the error level to be sqrt(1000) (roughly 32) times larger in a DS-BS image, since the Poisson error on a magnitude scales as one over the square root of the counts: about 0.012 mags. This is close to what we see (0.015).
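The sqrt(1000) factor is just the usual Poisson scaling of a magnitude error with count level, and is easy to check numerically. In the sketch below the two count levels are arbitrary assumptions, not our actual exposure levels; only their ratio of 1000 matters.

```python
import numpy as np

rng = np.random.default_rng(3)

def mag_scatter(counts, npix=200_000):
    """Scatter of the instrumental magnitude of a single pixel with
    Poisson noise at a given mean count level."""
    pix = rng.poisson(counts, npix)
    return (-2.5 * np.log10(pix)).std()

err_bs = mag_scatter(1.0e6)   # BS-like count level (assumed)
err_ds = mag_scatter(1.0e3)   # 1000 times fainter, DS-like
print(err_ds / err_bs)        # close to sqrt(1000), i.e. about 32
```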
Added later: On a synthetic image of our lucky night (JD2455945) we find that the lower limit on the B-V BS-DS error is
0.0021
and, for the observed image (see this entry)
0.005
So, again, we have a factor of two or so more noise in reality than the best-case synthetic world predicts.
There is potential for improving our technique! For now we should report the above in the little paper. Fine-tuning of bias subtraction, flat fielding and image alignment, and a better understanding of the role of image ‘wigglyness’ in alignment issues, is needed.