
Earthshine blog

"Earthshine blog"

A blog about a telescopic system at the Mauna Loa Observatory in Hawaii, used to determine the terrestrial albedo from earthshine observations. Feasible thanks to sheer determination.

Laplacian method applied to all good observations

From flux to Albedo Posted on Dec 30, 2012 22:41

The Laplacian method has been discussed here.

We now extend the method with an identical analysis step applied to synthetic images generated for the moment of each observation. We then take the ratio of the results from observations to the results from the models. This eliminates the effects of geometry and reflectance (as long as the reflectance model used for the synthetic images is correct), leaving only the effects of changes in earthshine intensity. So this is another ‘ratio of ratios’ result. The ratios involved are, in summary:

(Laplacian signature at DS / Laplacian signature at BS; in observations) divided by
(Laplacian signature at DS / Laplacian signature at BS; in models).
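As a minimal sketch of this quantity (variable names are mine and purely illustrative; this is not the actual pipeline code), the ratio of ratios reduces to:

```python
def ratio_of_ratios(ds_obs, bs_obs, ds_mod, bs_mod):
    """Relative albedo: (DS/BS in the observation) divided by (DS/BS in the model).

    ds_* and bs_* are the Laplacian edge signatures on the dark side (DS)
    and bright side (BS) of the lunar disc. Names are illustrative.
    """
    return (ds_obs / bs_obs) / (ds_mod / bs_mod)

# If the observed DS/BS ratio is twice the modelled one, the relative
# albedo is 2.0:
print(ratio_of_ratios(2.0, 4.0, 1.0, 4.0))  # 2.0
```

Because both numerator and denominator are DS/BS ratios, common factors such as exposure time and atmospheric extinction cancel before the observation/model comparison is made.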

We use only the ‘good images’ identified by Chris in this entry:

We start by inspecting the dependency of (Laplacian signature at DS / Laplacian signature at BS; in observations) on phase [attention: this is not yet the promised ratio of ratios – just a ratio! :)], in each of the available filters:

We show the morning and evening branches folded to the same side, for comparison. We see an offset between the branches in each filter, and a clear dependency on lunar phase. We next look at the same ratio, but from the models. Since our models contain no colour information, the per-filter plots are redundant in the sense that the model itself is colour-independent; the set of points available in each filter differs, however. Recall that a model is generated for each observation:

We see a very similar pattern – dependency on phase (but steeper this time). We also notice that the branches are closer together than in the observations.

What does this mean? The branches are separated in the observations because of the different distributions of light and dark areas on the eastern and western halves of the lunar disc facing Earth. Our Laplacian method samples pixels right at the edge of the lunar disc and has evidently met areas of different albedo. The separation seen in the observations is not reproduced in the models. This could imply that the model albedos are incorrectly distributed (i.e. maria, craters etc. in the wrong places), but that is somewhat unlikely, as we use one of the most detailed lunar albedo maps, from the Clementine mission. However, our model does not sample colour – the map used, and thus the ratio between ‘light’ and ‘dark’, is taken from the 750 nm Clementine image [I think; must check!]. That map was stretched to match the older Wildey albedo map [see here: ], which was made in such a way that the ‘filter’ the Wildey map corresponds to is a combination [see: ] of the Johnson B and V filters. The branch spacing is best reproduced, relative to the models, in the V-band observations. This implies we have some colour information about the two halves of the Moon – or a tool for scaling the lunar albedo map when different colours are to be considered.

We next inspect the ratio of the observations and the models – the ratio of ratios:

For each filter, the derived albedo is shown. It is the ‘ratio of ratios’ described above and is identical to the quantity used in the BBSO literature: the albedo relative to a Lambertian, Earth-sized sphere. The model albedo used was 0.31, so the ‘actual’ albedo derived is the above times 0.31.
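For example, using the stated model albedo (the relative-albedo reading of 1.10 below is purely illustrative, not taken from the plots):

```python
MODEL_ALBEDO = 0.31      # Bond albedo assumed in the synthetic images

relative_albedo = 1.10   # hypothetical ratio-of-ratios value read off a plot
actual_albedo = relative_albedo * MODEL_ALBEDO
print(round(actual_albedo, 3))  # 0.341
```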

We see that there is a phase dependence in this – particularly in one branch – with the observations being relatively brighter than the models at phases nearer full Moon than at phases nearer new Moon.

Since we have seen the EFM method produce less phase-dependent albedos [see here: ], we think the Laplacian method needs further development and investigation before it can be used.

It is worth listing why the Laplacian method might be useful:

1) It does not require careful alignment of model and observation. The signal is extracted from a robustly defined location in each image.
2) It is a ‘common-mode-rejecting’ method and is not dependent on image resolution.
3) It is relatively fast.

More CPU/GPU tests

Post-Obs scattered-light rem. Posted on Dec 30, 2012 00:07

I have been looking at the use of GPUs versus CPUs for our scattered light analysis. We need to be able to convolve artificial Lunar images (outside the atmosphere) with the instrument PSF. GPUs offer a considerable speed advantage.

First look (CPU versus GPU):

Upper left panel: artificial lunar image outside the atmosphere.

This artificial image is then convolved with a 2-D Gaussian-like PSF which has fat (power-law) tails, and which closely reproduces what we see in real data.
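A hedged sketch of such a convolution, in Python/NumPy rather than the Fortran/CUDA codes actually used; the PSF parameters (sigma, power-law index, core fraction) are illustrative, not the instrument's fitted values:

```python
import numpy as np

def psf_gauss_powerlaw(n, sigma=2.0, alpha=3.0, core_frac=0.9):
    """Radially symmetric PSF: Gaussian core plus a fat power-law tail.

    sigma, alpha and core_frac are illustrative, not fitted values.
    """
    y, x = np.indices((n, n)) - n // 2
    r = np.hypot(x, y)
    core = np.exp(-0.5 * (r / sigma) ** 2)
    tail = (1.0 + r) ** (-alpha)          # softened power law, finite at r = 0
    psf = core_frac * core / core.sum() + (1.0 - core_frac) * tail / tail.sum()
    return psf / psf.sum()                # normalise to unit volume

def fft_convolve(image, psf):
    """Circular convolution via the 2-D FFT, as in the CPU/GPU tests."""
    kernel = np.fft.ifftshift(psf)        # move the PSF centre to pixel (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))
```

Note that FFT convolution is circular; a real pipeline would pad the frames so that the bright crescent cannot wrap around the image edges.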

Upper right panel: convolution using 2-D FFT code running on a CPU

Lower left panel: convolution using 2-D FFT code running on a GPU

Lower right panel: the ratio of the two methods, i.e. the ratio of the two previous panels

There is a lot of structure in there, mainly images of the lunar
crescent turning up in different places — at a level of about 0.1% of
the intensity.

IMPORTANT: the CPU code was written in double precision, whereas the GPU code was in single precision.

(The above reproduces, with more explanation, an earlier post.)

Notes: The CPU code calls the FFTW3 libraries from Fortran (Intel's ifort compiler is used), just using the standard Fortran-to-C wrappers provided with FFTW3. The GPU code is written in CUDA.

Second look (CPU only, single versus double precision):

The plot above shows the ratio of the single-precision CPU result to the double-precision CPU result (i.e. no GPU results are shown in this plot).

There is similar structure in the ratio — and at about the
same level as the GPU tests gave, i.e. discrepancies at the level of a
few x 0.1% of the intensity.

Third look (CPU in double precision, renormalisation)

In this plot we compare CPU double-precision convolutions of the ideal lunar image with and without “min/max renormalisation”. (Min/max renormalisation means scaling the input image so that the smallest value in the frame is 0.0 and the largest value is 1.0.)
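In code, min/max renormalisation is simply (a sketch; the function name is mine):

```python
import numpy as np

def minmax_renorm(img):
    """Scale an image so the frame minimum maps to 0.0 and the maximum to 1.0."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

a = np.array([[2.0, 4.0], [6.0, 10.0]])
print(minmax_renorm(a))   # values span exactly 0.0 to 1.0: [[0, 0.25], [0.5, 1]]
```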

The ratio panel of the two convolutions (bottom right) shows noise only, and at a
very low level — 1 part in 1E7. Highly acceptable!

Fourth look (CPU, single precision, renormalisation)

This plot shows the same as the previous one — but with single precision rather than double. The artefacts are back, at the same old level of a few x 0.1%!


We might already be able to conclude from the above that double-precision FFT on a CPU is robust (negligible artefacts), but that single precision, on either CPU or GPU, produces similarly sized (a few x 0.1%) and thus slightly worrying artefacts.
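One back-of-envelope way to see why artefacts at the few x 0.1% level are plausible in single precision (this reasoning is my illustration, not a measurement): rounding errors in an FFT scale with the largest values in the frame, so relative to the faint earthshine signal they are amplified by the image's dynamic range. Assuming a bright-to-dark intensity ratio of order 1e4:

```python
import numpy as np

eps32 = np.finfo(np.float32).eps   # ~1.2e-7, single-precision machine epsilon
eps64 = np.finfo(np.float64).eps   # ~2.2e-16, double-precision machine epsilon

dynamic_range = 1.0e4              # assumed bright-side / dark-side intensity ratio

# Error relative to the faint signal, amplified by the dynamic range:
print(eps32 * dynamic_range)       # ~1e-3, i.e. the few x 0.1% level
print(eps64 * dynamic_range)       # ~2e-12, negligible
```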

But I need access to a double-precision GPU to test this. I hope to do so next week!
The acid test will be comparing double precision on a CPU against double precision on a GPU.