Earthshine blog

A blog about a telescopic system at the Mauna Loa Observatory on Hawaii, built to determine the terrestrial albedo from earthshine observations. Feasible thanks to sheer determination.

Estimates of earthlight B-V color.

Error budget Posted on Aug 11, 2013 11:13

We have some observationally based estimates of B-V for earthshine. We are working on various models of the same that would allow us to check our assumptions and set error limits.

A somewhat sophisticated model of the color (B-V) of earthshine has been made. It consists of a Monte Carlo simulation in which scattering events are decided by random number generators and probabilities based on albedos.

I draw photons from the wavelength distribution given by the Wehrli 1985 Solar spectrum. For each photon we allow a Rayleigh scattering event to occur with a probability depending on the wavelength. If the photon is not Rayleigh scattered, its further fate is given by factors describing the fraction of the surface covered by land (0.3), and the albedos of clouds, land (either soil or vegetation) and ocean. Some photons are absorbed while others are scattered into space. The scattered photons are collected and the emergent spectrum built up. Repeated for a large number of Monte Carlo trials, this yields a smooth spectrum. The B-V colour of this spectrum is then calculated, using Bessell transmission curves for the Johnson B and V filters. The trials are done assuming that all land is either covered by vegetation or is bare soil. Cloud albedos are wavelength-independent, while ocean albedos are zero above 750 nm. Tables of albedos for land types and ocean are given in "Spectral signatures of soil, vegetation and water, and spectral bands of LANDSAT 7. Source: Siegmund, Menz 2005 with modifications.", found at this link: http://bit.ly/14jD3qm.
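
To make the decision chain concrete, here is a minimal Monte Carlo sketch of a single trial, in Python. The albedo values, cloud and land fractions and the Rayleigh optical-depth scaling below are illustrative placeholders, not the numbers actually used, and the real model uses wavelength-dependent albedo tables rather than scalars:

import numpy as np

rng = np.random.default_rng(0)

def emergent_spectrum(wl_solar, flux_solar, n_photons=2_000_000,
                      f_cloud=0.6, f_land=0.3, a_cloud=0.55,
                      a_land=0.25, a_ocean=0.06, tau550=0.1):
    # Draw photon wavelengths (nm) with probability proportional to the solar flux.
    p = flux_solar / flux_solar.sum()
    wl = rng.choice(wl_solar, size=n_photons, p=p)

    # Rayleigh scattering probability, scaled as lambda^-4 and normalised at 550 nm.
    rayleigh = rng.random(n_photons) < tau550 * (550.0 / wl) ** 4
    # For single scattering, half of the Rayleigh-scattered photons go back to space;
    # photons scattered downwards are simply dropped in this sketch.
    ray_up = rayleigh & (rng.random(n_photons) < 0.5)

    # Photons not Rayleigh scattered hit cloud, land or ocean and are reflected
    # with the corresponding (placeholder) albedo, otherwise absorbed.
    u = rng.random(n_photons)
    albedo = np.where(u < f_cloud, a_cloud,
             np.where(u < f_cloud + (1 - f_cloud) * f_land, a_land, a_ocean))
    reflected = ~rayleigh & (rng.random(n_photons) < albedo)

    # Collect the escaping photons into the emergent spectrum.
    counts, edges = np.histogram(wl[ray_up | reflected], bins=wl_solar)
    return edges, counts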

The B-V of the incident solar spectrum is calculated as a check. The expected solar value of 0.623 (+/- 0.001) is recovered, where the uncertainty is the S.D. of the mean over the trials. The B-V values of the emergent spectra are subject to stochastic noise, so a number of trials (20, each with 2 million photons) is performed and the mean of the derived photometric values, with the standard deviation of the mean, is calculated.
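
For reference, the synthetic-photometry step can be sketched as follows; the filter-curve arrays and the zero point are assumptions (the zero point would be calibrated so that a reference spectrum, e.g. Vega, comes out at B-V = 0):

import numpy as np

def band_counts(wl, flux, filt_wl, filt_tr):
    # Photon counts through a filter curve: interpolate the transmission onto the
    # spectrum grid; the extra factor wl converts energy flux to photon counts.
    t = np.interp(wl, filt_wl, filt_tr, left=0.0, right=0.0)
    return np.trapz(flux * t * wl, wl)

def b_minus_v(wl, flux, bessell_B, bessell_V, zero_point=0.0):
    fB = band_counts(wl, flux, *bessell_B)
    fV = band_counts(wl, flux, *bessell_V)
    return -2.5 * np.log10(fB / fV) + zero_point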

The results for earthlight are:

B-V of earthlight, assuming all land is vegetation-covered:
0.5692 +/- 0.00057
where the uncertainty is the S.D. of the mean.

B-V of earthlight, assuming all land is bare soil:
0.5759 +/- 0.00068
where the uncertainty is the S.D. of the mean.

The difference is 0.0067 +/- 0.00088, the earthshine from the soil-only model being redder than that from the vegetation-only model. The difference in B-V is small, but significant.

A validating test is to look at how much flux is taken out at 550 nm – we know that extinction at that wavelength is near k_V = 0.1, so about 10% of the flux is removed. Since Rayleigh scattering is symmetric in the forward and backward directions (for single scattering events), we estimate that half of the light removed from the beam is scattered into space. Thus we expect the flux of Rayleigh-scattered light to be about 5% of the incoming flux. We measure the Rayleigh/incoming flux ratio to be 0.0521 +/- 0.0003. That is, as expected, about 5% of the sunlight is scattered into space by Rayleigh scattering, a similar amount is scattered downwards and gives us the blue sky, and the total corresponds to an extinction at 550 nm of near k=0.1. This assumes that all extinction is due to Rayleigh scattering; at 550 nm that is not a bad approximation.
For the B band at 445 nm we find that on average 2.3 times more flux is taken out by Rayleigh, corresponding to an extinction coefficient 2.3 times larger than in V. That is a bit steep, surely? Chris has measured k_B=0.15 from extinction data.
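
For what it is worth, a factor of 2.3 is just what the pure lambda^-4 Rayleigh cross-section predicts between the two band centres:

# Ratio of Rayleigh scattering at the B and V band centres under a lambda^-4 law:
print((550.0 / 445.0) ** 4)   # ~2.33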

So, the model suggests that when you switch all land areas from vegetation-only to soil-only, the B-V reddens by less than 0.01. The colour of both models is near B-V=0.57, with realistic land-ocean and cloud fractions, as well as albedos. The model indicates a colour very close to that of the cruder model from the paper (B-V=0.56 at k=0.1). We use this exercise to estimate that errors due to omission of land and ocean albedos are of the order of 0.01 mag, and that the B-V of earthlight is near 0.56.

Now, what did we find observationally?

We found that the DS B-V is 0.83. We also know that the sunlight has B-V=0.64 but appears as moonlight at B-V=0.92, so we can infer that one reflection off the Moon reddens the light by 0.28 in B-V. The earthlight is also reddened by striking the Moon, so our observation of the DS implies an earthlight that is 0.28 bluer, or B-V = 0.83 - 0.28 = 0.55. That is very close to what our model suggests!

Note that all uncertainties here are governed by photon statistics – not by underlying physics or observational skill. Note also that we use the phrase 'earthlight' when we mean the light scattered by Earth and observed in space. By 'earthshine' we shall mean the color of the DS of the Moon, which is illuminated by the earthlight. We have not even tried to model the effects of hemispheric geometry, or any angular dependence of the reflectance.

——— Added later ———

This was done under an assumption that corresponds to 'airmass is 1 everywhere' – this is wrong, of course, since the airmass for surface elements near the edge of the terrestrial disc is larger than 1. It turns out that if you assume that airmass goes as sec(z), then the area-weighted effective airmass for a hemisphere viewed from infinity is 2. We repeat the above trials with twice as much Rayleigh scattering.
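
A quick numerical check of that area-weighted figure, assuming airmass = sec(z) and weighting each surface element of the hemisphere by its projected area:

import numpy as np

# Effective airmass of a hemisphere viewed from infinity: weight sec(z) by the
# projected-area element cos(z) sin(z) dz and normalise.
z = np.linspace(0.0, np.pi / 2, 100_000, endpoint=False)
w = np.cos(z) * np.sin(z)
print(np.trapz(w / np.cos(z), z) / np.trapz(w, z))   # -> 2.0 (analytically exact)
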
The colours we get now are:

Assuming all land is plants : B-V = 0.5063 +/- 0.0009
Assuming all land is bare soil: B-V = 0.5152 +/- 0.00075

The difference is: 0.0089 +/- 0.0012

The earthlight is now bluer than under the airmass=1 assumption (good), and the effect of turning land pixels from bare soil to vegetation is still small.

If we assume complete ignorance of the soil/vegetation mix we get:

B-V earthlight, for airmass=1 : 0.57 – 0.58
B-V earthlight, for airmass=2 : 0.51 – 0.52.

The difference between airmass=1 (not realistic) and airmass=2 (realistic) is 0.06 mags.

I conclude that the model indicates a B-V of earthlight near 0.515, under the most realistic assumption for the effective (or average) airmass involved in Rayleigh scattering, with a small dependence – of order 0.01 at most – on whether the land is vegetation or bare soil.

I should repeat the above with VE1 and VE2 filters to see what change in NDVI vegetation index we might expect.




Effect of alignment, II

Error budget Posted on Feb 10, 2013 13:01

In this post: http://iloapp.thejll.com/blog/earthshine?Home&post=297 we investigated the effects of alignment on the noise. We measured noise in squares on the final image and therefore had some ‘cross-talk’ from the variations due to surface structure.

Now, we consider the variance-over-mean along the stack direction – that is, for each image pixel we calculate the variance and the mean and look at the ratio of these, which we call 'VOM'. In a perfect world, this image should be 1.

We look at unaligned images, and then at aligned images. The 'superbias' was subtracted (but not scaled to the adjacent dark-frame average). A RON of 8.3 electrons was subtracted before VOM was calculated as (SD-RON)^2/mean. The gain used was 3.8 e/ADU. We plot VOM in a slice across the final images:
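
For concreteness, the per-pixel VOM computation along the stack axis can be sketched like this (assuming the bias-subtracted frames are loaded as a 3-D array in ADU; the formula follows the definition above):

import numpy as np

GAIN = 3.8   # e-/ADU
RON = 8.3    # electrons

def vom_image(stack):
    # stack: (N, ny, nx) array of bias-subtracted frames in ADU.
    e = stack * GAIN                    # convert to electrons
    sd = e.std(axis=0)                  # per-pixel S.D. along the stack
    mean = e.mean(axis=0)
    return (sd - RON) ** 2 / mean       # VOM as defined in this post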

Top is VOM in the unaligned image, and below is VOM in the aligned image. Several things can be noted: First, the surface is not constant, obviously. Second, the effect of alignment is not just the uniform lowering of the VOM we would expect (same mean, less cross-talk between pixels).

In the top image we have VOM near 0.1 (dotted line) on the DS and most of the sky. On the BS the VOM is near 10, apart from the peaks that occur at the intersection with the lunar disc edge. There the variance rises because of jitter between images and the mixing of disc and sky pixels.

In the aligned image VOM is near 2 or 3 on the BS disc, higher at the peaks (so there is still an edge-effect, but less). On the DS and the sky a spatial structure has been formed, slanting inwards towards the BS.

What is going on? Effects of pixel-subshifting? What does the interpolation do? Why is it spatially dependent? Strange flux-nonconservation?

The sub-pixel shifting used is based on the IDL ‘interpolate’ function. In the manual for this function it is suggested to allow for cubic interpolation by setting a keyword “cubic=-0.5”. I did this and recalculated the above. The effect was slight.

A test was performed to see what variance is induced by the interpolation. By shifting an image by a non-integer number of pixels in x and y, then shifting it back and calculating the difference, we see that using INTERPOLATE without 'cubic' induces 2-3 times more residual S.D. than does INTERPOLATE with cubic=-0.5. The interpolations lost variance compared to the original image. With cubic interpolation the loss of standard deviation relative to the mean is at the 1% level – quite a lot, actually. [Note: conservation of flux is not the same as conservation of variance.]
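
The shift-and-back test can be sketched as follows. The post used IDL's INTERPOLATE; this Python analogue uses scipy.ndimage.shift as a stand-in (order=1 for bilinear, order=3 for a cubic spline), so the numbers will differ in detail but the trend is the same:

import numpy as np
from scipy.ndimage import shift

def roundtrip_sd_loss(img, dx=0.37, dy=-0.58, order=1):
    # Shift by a sub-pixel amount and back; return the relative loss of S.D.
    back = shift(shift(img, (dy, dx), order=order), (-dy, -dx), order=order)
    return 1.0 - back.std() / img.std()

rng = np.random.default_rng(1)
img = rng.poisson(1000, size=(256, 256)).astype(float)
print(roundtrip_sd_loss(img, order=1))   # bilinear: larger S.D. loss
print(roundtrip_sd_loss(img, order=3))   # cubic spline: smaller loss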

Could it be some effect of bias-subtraction? VOM has dropped most in the halo near the BS. Why?



Repeatability of finding center and radius of the lunar disc.

Error budget Posted on Aug 21, 2012 12:33

A required image analysis step is the determination of lunar disc centre and radius.

We currently use a hybrid method: first we fit circles to points on the BS rim of the lunar disc image (found as an image by using edge-detection image-analysis methods: Sobel filters!). This is done many times using different points, and the median is then taken for x0, y0 and the radius. These values are then used as starting guesses for a more refined method that searches a range of possible values near the starting guesses and determines a 'winner' based on how well a synthetic circle matches the 'rim image' generated above.
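
A rough sketch of the starting-guess stage; the edge threshold, subset size and number of repeats are placeholders, not the values used in the pipeline:

import numpy as np
from scipy.ndimage import sobel

def circle_fit(x, y):
    # Algebraic least-squares circle fit: x^2 + y^2 = 2*x0*x + 2*y0*y + (r^2 - x0^2 - y0^2).
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (x0, y0, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, y0, np.sqrt(c + x0**2 + y0**2)

def start_guess(img, n_trials=200, n_points=30, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    ys, xs = np.nonzero(edges > 0.5 * edges.max())        # bright-rim pixels
    fits = []
    for _ in range(n_trials):
        idx = rng.choice(len(xs), n_points, replace=False)
        fits.append(circle_fit(xs[idx].astype(float), ys[idx].astype(float)))
    return np.median(np.array(fits), axis=0)               # median x0, y0, radius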

We compare the statistics of the starting guesses and the final adopted values. In two separate runs on 88 different images we assemble these values:

Run 1:
x0 start guess: 311.82352 +/- 2.7447506
x0 final guess: 310.44694 +/- 2.3127377
y0 start guess: 248.49938 +/- 1.1339389
y0 final guess: 248.70556 +/- 1.0812740
radius start guess: 136.32425 +/- 0.98834739
radius final guess: 136.90833 +/- 0.44461002

Run 2:
x0 start guess: 311.78076 +/- 2.9802184
x0 final guess: 310.46767 +/- 2.3079720
y0 start guess: 248.51968 +/- 1.0747726
y0 final guess: 248.70327 +/- 1.1017349
radius start guess: 136.42162 +/- 1.2634192
radius final guess: 136.90870 +/- 0.40265642

x0 is found slightly more precisely with the final method (2.3 pixels vs 2.8).
y0 is found with similar precision in the two methods (about 1.1 pixels).
radius is found better with the final method (0.4 vs about 1 pixels).

We seem to be able to determine the centre of the lunar disc with image analysis techniques to the 1-2 pixel level, and radius to better than half a pixel.



Effect of registration errors on measuring the flux in a darkside patch

Error budget Posted on Aug 21, 2012 11:53

We have looked at the error in measuring the flux in a patch on the earthshine side if the position of the patch is uncertain by a few pixels.

A circular patch (aperture) was chosen, as shown by the green circle below:


The aperture is 31 pixels in radius, and is in a not particularly uniform luminosity area of the lunar disc.

The amount of light scattered into this area is rather small (less than ~10% of the flux) and has been ignored.

The flux in this aperture was computed for the correct registration (no offset in x or y) and for offsets of 1 and 2 pixels in both x and y (i.e. a rectangular grid of -2 to 2 pixels offset in x and the same in y).

Simply summing the flux in this aperture yields a 1.5% error in the flux, if the registration error is up to 2 pixels. The error reduces to 1.0%, if the registration error is just 1 pixel.

We tried rolling off the edge of the aperture with a cosine function (i.e. from 1.0 to 0.0) — it starts 6 pixels inside the edge of the aperture. We weight the flux in each pixel by this function. Pixels interior to the rolloff zone are weighted with 1.0.
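
The weighting mask can be sketched like this (a cosine roll-off over the outer 6 pixels of a 31-pixel aperture; names and array conventions are illustrative):

import numpy as np

def soft_tophat(shape, x0, y0, radius=31.0, rolloff=6.0):
    yy, xx = np.indices(shape)
    r = np.hypot(xx - x0, yy - y0)
    w = np.zeros(shape)
    w[r <= radius - rolloff] = 1.0                         # interior: full weight
    edge = (r > radius - rolloff) & (r <= radius)
    w[edge] = 0.5 * (1.0 + np.cos(np.pi * (r[edge] - radius + rolloff) / rolloff))
    return w

# Weighted patch flux: np.sum(soft_tophat(img.shape, x0, y0) * img)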

The improvement is rather small. For this “soft tophat” aperture, registration errors of order 2 pixels lead to a flux error of ~1.4%. This is to be compared to the error of 1.5% for the hard edged aperture. If we can achieve registration errors of 1 pixel, the soft edged aperture yields a flux error of 0.8%, compared to 1.0% for a hard edged aperture.

Next we tried a much more uniformly illuminated region of the moon:


The results are much better. In this region, a registration error of up to 2 pixels yields flux errors in the patch of order 0.2% (31 pixel hard edge aperture) and 0.15% (31 pixel soft edge aperture). This is very acceptable!

Conclusions:

Using a soft edged aperture helps ameliorate registration errors, but not as much as we had thought.

Selection of uniformly illuminated patches helps much more! Doing both is a good thing.



Good Questions

Error budget Posted on Jun 01, 2012 08:20

At Henriette’s very successful Master’s thesis defense yesterday a few good questions were asked by the opponents and the audience. Some of them we were prepared for, but a few were novel:

1) A question about the quality of FFs was a little mixed up, but the essence was 'are they good because they are almost overexposed'? Now that we know about the CCD nonlinearity, which starts already at 15000 counts and is not obviously in the 'blooming phase' until you pass 55000, we might consider the impact of non-linearity on various quality estimates again. However, the high quality of the hohlraum FFs is due to the hohlraum being a good uniform source.

2) If we had a way to rotate the CCD we could indeed investigate field gradients. This need only be done once. As far as I know the CCD can no longer be rotated because it is now held by the alignment of two steel pins. Let us ask Ben to comment on whether a 180 degree turn of the camera is possible in the present setup.

3) After which interval do lunar libration and sunrise on the Moon cause so many changes in the illumination of the lunar surface that stacking images is pointless? This can be investigated with our forward model system. A long time ago I tested this and found that half an hour was the limit, but it should be redone with a focus on areas of the Moon – there are obvious changes near the terminator after just a few minutes, but what is the effect near the disc edge, and so on? It takes some 3-4 minutes to get two bias frames and a stack of 100 images.

Added later: A quick investigation using synthetic images of the quarter Moon suggests that the maximum change in a single pixel's intensity occurs on the terminator and is roughly 1% per minute. Away from the terminator – in typical DS and BS pixel positions – the change in the DS/BS ratio is more like 0.03% per minute. This is thus an issue mainly for BBSO, who do not expose DS and BS simultaneously.

4) What would be the error due to alignment problems alone? We can investigate that with the forward model.

5) There is no WCS in the file headers. Could alignment be aided with such a system? WCS is the World Coordinate System – this is what we can find by using astrometry.net – the pointing of the telescope is found with sub arcsecond precision and written into the FITS header. We may be able to do that if there are stars in the frame – since we are using coAdd mode at the moment it seems impossible to get stars in the frame except occasionally (e.g. tau Tauri).

6) In the precision error budget independence is assumed – how much dependence is OK for the budget still to be valid? This is a good question about statistics fundamentals! We sought to separate the dependent and independent errors into two budgets, but in general the question can be handled with formal methods or with simulations where synthetic examples are investigated.

7) Scale the bias frame better by following the cooler periodicity. This was countered by a warning: Beware that the periodicity seen may depend on camera load.
The counter was good because the nice periodicity is obtained when taking long sequences of bias-only images, right? Under observing conditions the heat load on the camera due to taking science frames may be different – and may vary depending on exposure time and number of frames taken. We can investigate this by studying bias frames from regular sequences of image-taking: are there signs of a periodic bias level in those images? We know the period and we know the time of exposure so it should be possible to perform a signal analysis. A worst-case, ‘upper limit’ analysis could be performed by simply studying how much the mean value of scaled superbiases varies from regular observing sequences: If the variability of the mean level is more than that seen in the bias-only sequences then we know that we have additional heat load on the cooler during science frame taking.



JD2456072 – weather log

Error budget Posted on May 25, 2012 08:10

Clear, RH just 5%, wind 15 mph, 50 F.



Term Independence in Error Budget

Error budget Posted on Mar 18, 2012 09:28

In modelling the error budget in terms of the relative uncertainty on an intensity

delta_I/I

we are looking at something different than the error on the terrestrial albedo

delta_albedo / albedo
or
delta_(DS/BS) / (DS/BS)

We seem to be finding large delta_I/I’s and small delta_albedo/albedo. This can be because an error on BS could be compensating an error on the DS – i.e. the errors are not independent. In writing the error budget as

(d_I/I)^2= Sum_i [ (d_a(i)/a(i))^2],

where the a(i)'s are the terms in the error budget, we have made the assumption that the a(i)'s suffer errors that are independent, so that the various cross-terms in the derivatives are zero in the mean.

In model-fitting, such as ours, where some of the factors to be fitted are dependent (alfa and offset are an example), the error budget should be written differently, and we should look at this. Henriette's estimate of d_DS/DS remains correct, I think, but it should be understood that some other terms appear in the full d_albedo/albedo error budget, which cancel. An example of this is seen in the post here.
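
A toy illustration of the point, with entirely made-up numbers: when the DS and BS share a common (correlated) error, the scatter of the DS/BS ratio is far smaller than the quadrature sum of the individual relative errors would suggest.

import numpy as np

rng = np.random.default_rng(0)
common = rng.normal(0, 0.05, 100_000)                      # 5% error shared by DS and BS
ds = 1.0 * (1 + common + rng.normal(0, 0.01, common.size)) # plus 1% independent error
bs = 50.0 * (1 + common + rng.normal(0, 0.01, common.size))
ratio = ds / bs

print(ratio.std() / ratio.mean())                          # ~0.014: the common term cancels
print(np.sqrt(2 * (0.05**2 + 0.01**2)))                    # ~0.072 if all errors were independent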



PSF smooths

Error budget Posted on Mar 17, 2012 08:09

Henriette has shown that there is a bias in the EFM method (on one synthetic image corresponding to one lunar phase with one setting of alfa).

To understand this – and to ultimately understand how to avoid the problem – we perform some tests on synthetic images.

First I compare the intensity of pixels in two images – one is a synthetic image and the other is generated by folding it with a PSF (with alfa=1.6 and 1.8). No noise is added. Plotting all pairs of pixels from the two images against each other we get:


In the upper left panel we compare pixel intensities in the original synthetic image with the same pixels' intensities in the 1.6-folded image. To the right in the upper row we see the results for 1.8. We note that bright pixels have moved to the right side of the red line, indicating they have been weakened in intensity; we note that pixels with originally very low intensities have increased; and we note that intermediate pixels have suffered both fates. The panel to the left in the middle row shows the percentage changes in pixels between images 1 and 2 in the row above, and we see, as expected, that folding with 1.8 lessens the intensity less than folding with 1.6 does – this is of course because the 1.8 PSF is narrower than the 1.6 one and thus influences fewer pixels in the farther parts of the DS. In the rightmost image in the middle row, and the last image at the bottom, we repeat panels 1 and 2 but now on a log scale to show the faint end of the sequences more clearly. We have also coloured in red those pixels that in image 3 suffered a change in intensity larger than 1.000%; these are all pixels at the faint end – probably pixels on the DS right next to the BS, which had their intensities enormously increased when the halo spread out.

The above result was for single pixels. We next investigate what happens to the mean intensities in squares placed across the image at various places and compared between the two images.


Boxes that were 7×7 were placed randomly all over the images and mean intensities extracted. Upper row shows the original intensity vs. that found in the folded image, for alfa=1.6 and 1.8. Bottom row shows the changes of those boxes’ intensities in percent of original value (except 0 values). Red dots indicate negative values folded up to be shown in this log-axis display. We see what we expect – large means suffer the least; low values suffer the most.

The above results help us understand what happens to intensities in folded images. We also understand that if the halo is not completely removed from the DS then the DS may be left with a positive residual flux. We also clearly understand that if the BS is used as the reference then it will always be an underestimate of the true BS intensity in real observed images – by on the order of 1%.

Since we will always only have observed images to estimate the DS/BS ratio from we will always be making a small mistake in using the observed BS intensity (in addition to whatever error we make in using ‘corrected’ DS intensities). What is the change in error on the BS if we have an alfa=1.6 and an alfa=1.8 PSF?

The answer is – about 80% – that is, if the BS was in error (compared to the correct value) by 1 percentage point at alfa=1.6, then it may well be in error by about 0.5 percentage points (compared to what is true) when alfa is 1.8. That is – the influence of nightly variations in the PSF (say from 1.6 to 1.8, which is realistic) may cause the DS/BS ratio to vary by 1/2% solely through the influence via the BS – never mind what problems we may have on the DS under such seeing changes.

As far as I know the BBSO does not address this problem. It is not a problem that is resolved by ‘extinction corrections’ since these always apply to all pixels in an image equally (apart from the field size issue).

Does using ‘the total image flux’ work? Chris has in this blog analyzed what happens to the total flux when an image is convolved with PSFs of various alfas. We revisit those results now:


For a large set of random values of lunar phase a synthetic image of the Moon was generated. It was folded with a PSF with a random value of alfa between 1.5 and 1.85. The total flux of the original image and of the folded image was then calculated and the difference converted to a percentage of the original total flux, and plotted above. Even if the PSF is flux-conserving, the flux is not conserved for small values of alfa (broad PSFs) because the flux in the halo starts reaching the edge of the 512×512 image frame and is lost. The results above show that for all lunar phases the flux is conserved inside the +/- 0.1% limits for alfa larger than 1.57 or thereabouts. Such a small value corresponds to a fairly bad night, so as long as we stick to good nights we can count on the total flux in the image being conserved.
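
The flux-conservation test can be reproduced roughly as below; the exact PSF family of the forward model is not reproduced here, so a softened power-law wing with exponent alfa is assumed, which means the crossover value of alfa will not match the plot exactly:

import numpy as np
from scipy.signal import fftconvolve

def powerlaw_psf(size, alfa, core=1.0):
    y, x = np.indices((size, size)) - size // 2
    psf = (np.hypot(x, y) + core) ** (-alfa)     # softened power-law wings (assumption)
    return psf / psf.sum()                        # flux-normalised kernel

img = np.zeros((512, 512))
yy, xx = np.indices(img.shape)
img[np.hypot(xx - 256, yy - 256) < 140] = 1000.0  # crude lunar disc

for alfa in (1.5, 1.6, 1.7, 1.8):
    folded = fftconvolve(img, powerlaw_psf(512, alfa), mode='same')
    print(alfa, 100 * (folded.sum() - img.sum()) / img.sum())   # % flux lost off the frame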



Stdev and bias in EFM

Error budget Posted on Mar 16, 2012 15:35

I have worked with the EFM method for removing scattered light, testing it on a synthetic moon image from night JD 2455923.

The “true” scattered light is represented by an image:
S_true = observed – pedestal – usethisidealimage
The best guess scattered light is represented by the end result from the fit minus the pedestal:
S_efm = trialim – pedestal

I have investigated the mean value of S in the 3 boxes shown below for 100 S_true and corresponding S_efm images. It is the same synthetic image that is the basis of all the "observed" images, but they have slightly different noise added when they are convolved with the PSF. The results:

It is clear that we have significant bias. Not all the stray light is removed with the EFM method. On the positive side, the standard deviations of the fits are small. The counts here can be compared to a typical DS signal of 5 counts.



Error on determination of linearly changing albedo

Error budget Posted on Mar 11, 2012 12:41

I have run simulations like Peter’s of our ability to recover a changing albedo with time.

The sims are over a period of a decade with 100 observations of (global) albedo per year (randomly chosen nights). 1000 sims were run and the slope of the albedo change with time was measured using least squares on the monthly means. One sim was for no change in the average albedo with time, the other for a linearly increasing albedo with a slope of 0.1 percent of the albedo per year — i.e. the mean yearly albedo increases from 0.300 to 0.303 over a decade.

It’s important to start each sim on a different day of the year — if not, there is a bias in the slope caused by the seasonal pattern being the same in all sims.
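
One way to set up such a simulation is sketched below; the seasonal cycle here is a generic sinusoid rather than the Bender et al. curve, and the amplitudes are placeholders, so the resulting scatter will not match exactly:

import numpy as np

rng = np.random.default_rng(42)

def one_sim(trend_per_year=0.001, years=10, n_per_year=100, scatter=0.01, a0=0.30):
    days = np.sort(rng.choice(365 * years, years * n_per_year, replace=False))
    t = days / 365.0
    season = 0.01 * np.sin(2 * np.pi * (t + rng.uniform()))    # random start day
    a = a0 * (1 + trend_per_year * t + season + rng.normal(0, scatter, t.size))
    month = days // 30                                          # crude monthly bins
    tm = np.array([t[month == m].mean() for m in np.unique(month)])
    am = np.array([a[month == m].mean() for m in np.unique(month)])
    return 100 * np.polyfit(tm, am, 1)[0] / a0                  # slope in % of albedo per year

slopes = np.array([one_sim() for _ in range(1000)])
print(slopes.mean(), slopes.std())   # mean near 0.1 %/yr; the scatter depends on the assumptions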

The plot shows the histograms of the slope determinations.


Black histogram: no change to the mean albedo with time. Green histogram: change of 1 percent over a decade – i.e. 0.1% per year.

The natural variation in the albedos was set at 1 percent in addition to the seasonal variations.

The histograms are centered on the correct slope — but the width of the histograms is quite broad and backs up Peter’s sims of last week and reported in the blog.

For the green curve the mean of the 1000 simulations is 0.100 (i.e. an excellent match to the input albedo rise of 0.1 %/year) and the scatter is 0.025.

This setup (10 years, 100 albedo measurements per year, 1% natural variation in the albedo) gives a 4-sigma detection of a rising albedo of 0.1%/year.



Albedo variations in CERES data

Error budget Posted on Mar 09, 2012 10:25

What is the variability in global-mean daily-average values of albedo?

We need this number in order to argue for the levels of precision we are striving for: A main counter-argument to very high instrumental and method precision could be 'the natural variability is much higher, so you need not work so hard'. A much worse counter-argument would be 'you must have much higher precision to catch the very small variations expected'. The worst is probably 'natural variability is much higher than all climate change variations of interest, so you will never be able to say anything interesting using your – or satellite – albedo data'.

We estimate what the daily global-mean albedo variability is. We necessarily have to do this from satellite data, and use the CERES data product.

Downloading 1×1 degree daily-average CERES albedo for 'all sky' conditions and averaging all data points not flagged as 'bad data', we get a daily series spanning several years with seasonal variations – like Figure 1 in Bender et al. Extracting and removing the seasonal cycle, and removing all obvious glitches in the data, we get the daily albedo anomaly series. We calculate the standard deviation of this series and express it as a percentage of the mean albedo.

We find that the CERES-based global-mean albedo varies by 1% from day to day (1SD is 1.07% of the mean).

We did not weight the data with a cosine law towards the poles, but the effect of this is minor. The 1SD on albedo is therefore near 0.003.

We simulated daily observations of albedo data drawn from a population with the Bender et al seasonal cycle and daily noise added with the 1% SD from above. This is an experiment in sampling a constant albedo. We simulated 10, 20 and 30 years of data. We fitted straight lines to the data using 365 points a year (unrealistic) or 100 points a year (realistic) and expressed the change in albedo over the 10, 20 or 30 year period, according to the straight-line fit, as a percentage of the mean albedo.

The plot shows histograms from 1000 realisations of the above process. To the left are results for 365 samples a year, to the right for 100 samples a year, and each row is for 10, 20 and 30 years of data, respectively. The histograms show what the chances are of observing an albedo slope of a certain value when the noise is realistic and the actual underlying albedo is constant. For 10 years of data and 100 observations per year the chances of observing an albedo slope between -0.35% and +0.35%, when the slope is really 0, are thus about 68 in 100 (expressed that way to avoid using "%" twice in the same sentence … ). For 20 years of data the same odds apply to the +/- 0.24% range, and for 30 years to the +/- 0.2% range.

These results differ from those in the previous post on this subject (below) in the value used for the standard deviation of the variability. In that post we had used the values for the scatter in Bender, which are valid for monthly means and are about 3 times smaller than what we used here. If daily values were independent, the monthly-mean scatter would be sqrt(30), or about 5.5, times smaller than the daily scatter; since it is only about 3 times smaller, we learn that daily albedo values are not independent – there are about 3 days between independent data points for global-mean albedo!

What have we learned? If our instrument and methods can really give us 1 datapoint each night with 0.1% precision then we are slightly on the luxury side – natural variability will falsely show us slopes of +/- 3 times that number in 10 years of data, and even for 30 years of data we are still a factor of 2 better than we ‘need to be’.

However, if the instrument and methods are not quite up to the 0.1% level we have aimed for (a realistic situation, actually … ) then we are not being over-perfectionists.

Likewise, reported and discussed changes in albedo slope are quite large. Bender et al show in their Figure 1 a change in observed albedo between the CERES data and the ERBE data of 4% over about 15 years. Nobody believes this change is real – it is due to instrument calibration issues – but the numbers serve to show what we are up against: We should ask, can we rule out a 4% change in albedo over 15 years with our earthshine data? The answer would seem to be – yes, we will be able to verify changes in global-mean albedo to the +/- 0.35% level with 10 years of data.

So we are doing all right with our idea!

We should repeat the above with an analysis of what can be done with 1, 2 etc. instruments, or a global network of instruments with the occasional dropout due to realistic bad weather, and so on.



measuring linearly changing albedo

Error budget Posted on Mar 09, 2012 05:41

Simulation of 100 albedo measurements per year of a linearly rising albedo of 1 percent per decade. The simulation is based on the mean monthly global albedo estimates over several decades from a suite of models, as in Bender et al. 2006 (Tellus, 58A, 320), figure 3.

grey crosses : individual measurements

green line — monthly averages

red circles — annual averages

the black lines are the fits to the monthly averages — they both come out with the correct slope – so it does work!


the two panels show the effect of 1 percent (lower) and 0.1 percent scatter (upper) on
the individual albedo measurements due to natural variability.

the 0.1% case allows the underlying model to be seen clearly.

1% scatter in the daily albedo makes things look a deal noisier — but the right slope comes out and one can get a pretty good seasonal pattern from the monthly means, and better still by folding the data into a year.

we are looking further into what daily variations naturally arise — the Bender paper is not clear on this.



comparison of alignment procedures

Error budget Posted on Mar 08, 2012 18:30

I have compared two different methods for aligning images: Chae’s IDL procedure and Kalle’s / my center of mass Python procedure.

The example given here is the result for a stack of 11 images obtained within 0.7 minutes on JD2455864 in the V-filter, and it is representative for many other similar stacks.

I have aligned the stack with each of the two methods, determined the standard deviation of the stack and divided this by the coadded image of the stack. Finally the image has been multiplied by 100, so that it directly shows the relative error on the raw observation in percent. The images are basically the dO/O in the error budget equations.
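
In code, the error image is simply (a sketch, assuming the aligned frames are stacked in a 3-D array):

import numpy as np

def relative_error_percent(aligned):
    # aligned: (N, ny, nx) array of aligned frames; result is dO/O in percent.
    return 100.0 * aligned.std(axis=0) / aligned.mean(axis=0)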

The left images are aligned with Chae's method and the right images are aligned with the center of mass method. The only difference between top and bottom images is the scaling. The top ones are shown on a square root scale that emphasizes the bright side of the Moon, and the bottom images are shown on a histogram-equalized scale that emphasizes the dark side of the Moon.

The center of mass method out-performs Chae’s method for both the bright and the dark side. This is especially true for the earthshine. Both methods have difficulties with the bright limb.

The difference between the two methods is not so much the method for calculating the required offset. Instead it is the interpolation technique used in the subpixel move that matters. Chae’s method (shift_sub) uses bilinear interpolation, whereas the CM method uses exponential functions as the interpolating functions. With the CM method there is visible blurring of the lunar features, and it is this smoothing that lowers the standard deviation of the individual pixels.

In the form the methods have now, I would recommend Chae's alignment whenever we wish to show a nicely centered, pretty Moon image, and the CM method whenever we wish to determine the intensity of a box of pixels on the Moon. Of course we can play with the parameters of both methods, and this might increase or decrease the blurring.



Can it be done, we ask!

Error budget Posted on Mar 06, 2012 14:36

Our instrument design and data reduction goals have been driven by a 0.1% error requirement.

In light of the considerable natural variability in Earth's albedo, from year to year and week to week, we ask what limits the natural variability sets on the achievable accuracy.

We illuminate this problem by modelling natural albedo variability, based on Figure 3 in Bender et al (2006). The Figure gives an annual cycle and the 1 S.D. variability limits around this cycle.

We use that data to generate 10, 20 and 30 years of daily simulated albedo values. We fit a straight line to the entire dataset as well as to a dataset made up of 100 annual samples (like realistic observing conditions – one data point per night). We calculate the change in albedo over the period simulated and express this change in percent of the average albedo. We do that for the daily sample and for the 100-times-a-year sample. We next show the histograms of these accumulated errors. To the left in each row is the histogram of errors in albedo for the ‘all-nights period’ and to the right is the histogram for the ‘100 annual samples’.

The S.D. of these errors are:

years   all days   100 days
10      0.102      0.184
20      0.077      0.132
33      0.060      0.105

The above suggests that natural variability will impose an uncertainty on estimates of global albedo at about the 0.1% level if just 10 years of nightly data are analysed, but that 30 years of data are needed if only 100 samples a year are made.

Any errors our measurements have above the 0.1% level will thus only make the problem worse.

The above analysis assumes global coverage of our data; all sampling effects due to lunar phases are ignored by the random selection of 100 points per year.

Overall the analysis suggests that we may be able to achieve climate-change related data of interest in a decade, as long as we do not allow errors in our observations or data-reduction much above the 0.1% level.
On shorter time-scales the high accuracy of our method will allow us to track such phenomena as the daily or weekly changes in cloud masses, as well as more subtle variations such as seasonal changes. A volcanic eruption would be yum-yum!



S/N image

Error budget Posted on Dec 07, 2011 16:20

From night JD2455864 V-filter moon images I have selected images with similar count levels in the bright part of the Moon. This resulted in 23 images. These I have aligned and coadded (mean-half-median method) thus obtaining a coadded object image O, and a standard deviation image, delO. From these two images (and images with estimates of B, delB, F, delF) I constructed a S/N image calculated pixel by pixel. I found the DS to have S/N ~2-3 and the BS to have S/N ~30-40. The S/N image looks like this

While the DS S/N value might be realistic, the BS S/N value is much lower than expected. This is likely due to the fact that co-add mode is necessarily used close to a rising or setting Moon. On this particular night the Moon was setting, and the 23 selected images were obtained over a 30-minute interval – in which time the Moon went from an altitude of 22 degrees to 17 degrees. The typical BS count level falls in this period from 27900 to 26400, a decrease of about 5.4%.
Next I tried scaling the images after alignment before coadding. I have scaled them using an 11×11 area in Mare Crisium as reference. This area had a mean in the unscaled coadded image of 10300, and I have scaled so that each image has the value 10500 in that area.
scale_factor = 10500.0/AVG(im[360:371,230:241])
This gave a DS S/N ~1.5 and a BS S/N ~100+. The BS S/N seems pretty stable regardless of the exact scaling factor, but the DS S/N increases dramatically (S/N ~12) if I use, for instance, 15000 instead of 10500. I simply don't think we can count on the S/N values calculated on a per-pixel basis after scaling.

Conclusions and thoughts: If we scale the moon images to counter the setting/rising moon we seem to be able to achieve a S/N on the bright side similar to the Poisson expectation. It is much more troublesome to estimate the S/N on the dark side. The S/N on the dark side is very sensitive to the method of scaling and the scaling factor. But perhaps a S/N of 2-3 (as determined without scaling, using only images selected to have at least somewhat similar bright-side count levels) is a decent estimate.



Simple Errorbudget Co-add

Error budget Posted on Dec 01, 2011 16:06

I have aligned and co-added the V-filter moon images from night JD2455864. 16 frames were rejected due to poor correlation when aligning (turned out they were overexposed on the bright limb). This left me with a total of 110 frames.
I have selected an 11×11 pixels area on both the bright and the dark side. The selected areas are shown in the figure below.

The errorbudget formula used is for co-add mode, so exposure time cancels out, and in this first simple version, scattered light is ignored.

For the earthshine I find S/N = 1.7, and for the moonlight I find S/N 128.2.

I have experimented with two different approaches to improve the S/N of the dark side. One way is to increase the number of mean counts, O, in the selected area. For night 864 V-filter O was in the range 398-405. Unfortunately the bright limb was overexposed in the images with higher values than this.
The other approach is to have more images from the same night in the same filter. I have investigated N from 1-1000.
Finding the right exposure time, where the bright limb is well-exposed near saturation, is very important. We can achieve a S/N higher than 3, if we have about 400 frames and a mean dark side count level of 405.
Hopefully we can achieve an even better S/N closer to the new moon. To be continued…



Flatfield errors

Error budget Posted on Nov 02, 2011 18:11

I have compared four central 30×30 pixel areas in all the available sky master flatfields. A minimum of 3 masters were available in each filter. The selected regions a (blue), b (red), c (pink) and d (green) are shown in the figure. The flatfield used in the figure is the B master for night JD2455827.

For each filter and region, I have checked the difference between the largest and smallest mean value, expressed as a percentage of the smaller value. These (worst case) changes lie in the range 0.03-0.36%. It seems we can count on the error in a master flatfield to be very low!
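
The comparison itself is a one-liner per region (a sketch; the region slice is a placeholder for the a-d boxes in the figure):

import numpy as np

def worst_case_change(master_flats, region=np.s_[241:271, 241:271]):
    # Spread of the region means over the available master flats, in percent of the smallest.
    means = np.array([flat[region].mean() for flat in master_flats])
    return 100.0 * (means.max() - means.min()) / means.min()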

The best master flatfields seem to be those in the B and V filters, with only a single case of a change above 0.1%.

The worst region of the four is the green region. This is true for all filters. Perhaps these worst case changes in the worst region can be used as an estimate for the error in a master flatfield?



Photon noise in Canon camera

Error budget Posted on Nov 01, 2011 04:19

I have measured the pixel noise in the Canon EOS 35 mm camera. This is the same one as used to measure the halo of the moon at large distances (up to 15 degrees) from my Sydney backyard.

60 exposures were taken of a flat white surface, in groups of 10, with 5 exposure times from 1 second down to 1/200th of a second. (A 6th exposure time was discarded due to saturation.)

Frames converted from CR2 format to fits format using dcraw and convert:

ls *.CR2 | awk '{print "dcraw -T -4 " $1}' | bash
ls *.tiff | awk '{print "convert " $1, $1".fits"}' | sed -e s/.tiff.fits/.fits/ | bash

The tiff files this produces still have colour information, which I wasn't expecting. The fits files are of course grey scale, but I suspect they are averaged over the three colours (RGB), so that the apparent photon counts are three times smaller than the actual photon counts.

The standard deviation and mean flux in 9 randomly chosen individual pixels were computed across the 10 images obtained for each of the 5 exposure times. The square root of the mean (times 3) is then plotted versus the standard deviation of the values, yielding 45 noise measurements (i.e. 9 pixels x 5 exposure levels = 45). If the noise is anything like Poisson noise, these should lie on the 1:1 line.
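
A sketch of this per-pixel comparison, assuming each exposure group is loaded as a (10, ny, nx) array:

import numpy as np

def poisson_points(groups, n_pixels=9, seed=0):
    rng = np.random.default_rng(seed)
    ny, nx = groups[0].shape[1:]
    pix = list(zip(rng.integers(0, ny, n_pixels), rng.integers(0, nx, n_pixels)))
    pts = []
    for g in groups:                              # one group per exposure time
        for y, x in pix:
            vals = g[:, y, x]
            pts.append((np.sqrt(3.0 * vals.mean()), vals.std()))   # factor 3 undoes RGB averaging
    return np.array(pts)                          # columns: predicted Poisson sigma, measured sigma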

This proves to be approximately the case as seen below.


It’s pretty noisy though! (i.e. the noise measurement is noisy).

Inspection of the fits files in ds9 shows that there are clear correlations on various scales in both the x and y directions in the images — so the camera is surely introducing smoothing of some sort, so there is only so far one can push this analysis!

Conclusion : Poisson noise is a fair first order approximation for the behaviour of a commercial camera CCD chip.



Comparison of flats within a single twilight session

Error budget Posted on Oct 28, 2011 18:37

I have compared two well-exposed flatfields in the B-filter from the dusk session night JD2455856 with a time difference of about 28.5 minutes.

Each flatfield was bias-subtracted with the proper scaled superbias, had a fitted surface subtracted, and was then normalized. A percent difference image was created as the difference compared to the earlier flatfield.
perc = ((late-early)/early)*100

The mean of this image is of course very close to zero (0.0014), but more interesting is the standard deviation (0.53).

The strongest part of the familiar diagonal pattern is still visible in the percent image (see left part of the figure). I have used ImageJ to rotate the percent image and plot the horizontal profile of the yellow box (top right part of the figure), selected to be perpendicular to the diagonal darker areas. This profile is plotted in the bottom right part of the figure.
It can be seen that the diagonal structure is 0.2 percentage points darker than the immediate neighborhood.

The diagonal structure does change within a relatively short time-frame, and this will result in an uncertainty in the science frames when they are flatfield corrected. So far it doesn't seem to be a large change, but more investigation is necessary. The most likely explanation for the change is temperature fluctuations. Perhaps it is possible to investigate whether the diagonal structure changes with a period comparable to the change in bias level…



Periodic bias level

Error budget Posted on Oct 07, 2011 11:50


I have checked that the periodic behaviour of the bias level is the same over the whole area of the CCD. I have chosen night JD2455745 for the investigation because all the dark frames have the same exposure time (60 sec).

I found the mean of five small areas of the CCD and plotted them together with the mean value obtained from the full area of the CCD as a function of time since first frame. The selected areas represent the four corners of the CCD (1 pixel away from the edge) and the center of the CCD. I therefore expected that each area would have its own mean level, but the same period and amplitude as the whole frame.

I have tried this for two different area sizes, 8×8 pixels and 16×16 pixels. The periodic behaviour can be seen in both cases, but it is quite muddy for the small area size. The above image shows the 16×16 case. The red stars are the mean values of the whole frame and each colour represents one of the five 16×16 areas. All five areas follow the same period with roughly the same amplitude as the whole CCD.

From this I conclude that it is safe to assume the periodic behaviour is a shift in the bias-level of the whole CCD area. It is likely to have to do with the thermostatic control of the temperature of the CCD.



Poisson nature of detectors

Error budget Posted on Oct 24, 2009 19:29

Understanding the nature of the statistics of imaging devices.

We characterize a detector from a Canon 350D camera.



Optical Density

Error budget Posted on Sep 14, 2009 18:59

Finding the Optical Density of a neutral density filter.

Tests were performed to simulate the practical problem of determining the optical density of a neutral density filter using Moon observations.

Experiment 1: A real image of the Moon was divided by 100.0 over half the illuminated disc, resulting in an image that resembled what an image of the Moon seen through a focal-plane ND filter would look like. The image was one that Tom Stone of the USGS donated, from the ROLO project. A synthetic image of the Moon for the time of observation was then generated using the DMI lunar simulator. Two types of images were generated – one using the Lambertian BRDF and one using Hapke's BRDF. The synthetic image was then rotated and scaled so that it aligned with the real image of the Moon. This procedure included the application of a multiplicative factor over that part of the image which coincided with the 'filtered' part of the observed image. The factor was one of in total 6 parameters that were determined, using least squares techniques, by minimising the residual between the 'observed' image and the synthetic image.

Results: The values found for the 'filter factor' in the minimisation were 99.57 and 100.332 for the Lambertian image and the Hapke image, respectively. Since the actual factor was 100 we see that we are able to determine the filter OD to within 0.6 and 0.3%. A second experiment employed a factor of 1000, and we determined 999.57 and 1005.9 – i.e. 0.04% and 0.6%.

Notes on the procedures used: The minimisation method requires a good starting guess for all parameters – i.e. image shifts, image scaling, image rotation, image intensity and the factor for the OD of the simulated filter. This starting guess was obtained by first performing the minimisations to align and intensity-scale the images, then fixing these and determining the OD factor, and finally letting all parameters be determined from the previous best guesses. To align the images, only the image half that was not 'under the filter' was used. The residuals' RMSE is very sensitive to small shifts in the image placements, due to the amount of detail in the images of the lunar disc – small shifts soon place dark areas over bright ones. It is speculated that smoothing of both images before the above procedure is applied could solve this problem. This is analogous to suggesting that the Moon is not observed in focus, or at least not in perfect focus. The above was not performed with realistic Poisson noise on the observed filtered half, nor anywhere in the synthetic image. Scattered light from the lunar disc is not modelled in the synthetic image, but could be. Use of the residual RMSE as the target of minimisation may be a bad idea – the numbers are large on the bright side of the filter and low on the dark side: any contribution to the RMSE from the dark side should be weighted higher. Perhaps the RMSE of the residual relative to the observed image should be minimised.
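
As a much-reduced sketch of the final step only – the full procedure also fits shifts, rotation, scale and intensity, which are omitted here – the filter factor can be recovered like this:

import numpy as np
from scipy.optimize import minimize_scalar

def fit_filter_factor(observed, synthetic, filter_mask):
    # Least-squares fit of the ND factor over the 'filtered' half of the frame,
    # assuming the synthetic image is already aligned and intensity-scaled.
    obs = observed[filter_mask]
    syn = synthetic[filter_mask]
    rmse = lambda factor: np.sqrt(np.mean((obs - syn / factor) ** 2))
    return minimize_scalar(rmse, bounds=(1.0, 1e4), method="bounded").x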

Experiment 2: Realistic Poisson noise was included in the 'observed' image on the dark, filtered side. At first a filter OD of 10 was used, and the minimisation recovered 10.04492 and 10.045737 – i.e. 0.4% and 0.5% errors. For larger filter factors the initial results were dismaying: a large bias was apparent in the results – must consider again.



Linearity of CCD/CMOS detectors

Error budget Posted on Mar 22, 2008 07:31

A discussion of detector linearity: http://turtle.as.arizona.edu/im/mdr16/linearity.html

Here are results of testing the Canon EOS 350D linearity (section 1.3):

Nice article about Canon DSLR dark current and gain issues: http://www.stark-labs.com/craig/resources/Articles-&-Reviews/CanonLinearity.pdf



Error budget version 1

Error budget Posted on Mar 17, 2008 17:14

Here is a first shot at starting to estimate the error budget for earthshine observations, and the resulting terrestrial albedo. See the Qiu et al. paper (below) for the theoretical background for the relationship between the observed earthshine/moonshine ratio and albedo.

Version 1 of notes:

Qiu et al paper: OK, there is a problem, send me an email instead.