
Earthshine blog


A blog about a telescopic system at the Mauna Loa Observatory in Hawaii to determine terrestrial albedo from earthshine observations. Feasible thanks to sheer determination.

What next, for Earthshine observations?

Links to sites and software, Observation Resources, Optical design, Post-Obs scattered-light rem. Posted on Jul 21, 2021 10:53

We now take the next steps, after building the Mauna Loa system and operating it: into Space with NASA!

The reason is simple: from space we avoid the variability due to observing through an atmosphere, and we do not need a global network of earth-bound telescopes – one suitable instrument in orbit can do it all.

Even from NOAA’s Mauna Loa Observatory (MLO) — one of the best observatories anywhere on Earth — we could easily see variability in our results, due mainly to very thin high-altitude clouds. Extinction at MLO is low, of course, but variable. Sure, the ‘bad seeing nights’ can be eliminated by detecting the variability each night, but then you end up with very few good data!

When we noticed a small Earth-observing student satellite ( @flying_laptop on Twitter) from the University of Stuttgart, we asked if they would try to catch some images of the Moon for us — and they did! The images were interesting and taught us the importance of optimized optical design — optimization for the sake of driving down ‘scattered light’ (really, a phrase covering aperture diffraction as well as various internal scattering processes). We worked with the students and reported on what we found at the 2019 annual European Geosciences Union (EGU) meeting in Vienna, in the “Earth radiation budget, radiative forcing and climate change” session that Martin Wild, and others, organize each year. See our poster here.

Building on that experience, we are now looking at more ways to go into space, and also to improve on the earthshine instruments we can orbit.

One such effort is with the SAIL (Space and Atmospheric Instrumentation Lab) at Embry-Riddle Aeronautical University in Daytona Beach, Florida. With three friends there we are putting together a NASA Instrument Incubator Proposal (IIP) for development of an optical system optimized for the task at hand: observing high-contrast targets with quite extreme requirements for performance.

In our present ground-based approach we are never really observing the Moon without a contribution from the atmosphere, and must resort to various ‘subtraction schemes’ to get rid of either a ‘halo around the Moon’ (which is light scattered along the path to the image sensor) or a ‘flat sky level’ which can be due to such things as airglow, or the Moon-light scattering up from the ground onto particles in the atmosphere (this is not a ‘halo contribution’). Both kinds of contributions have to be removed before the faint earthshine can be used for terrestrial albedo studies. From space, both of these contributions would be omitted automatically, leaving only a faint contribution due to aperture diffraction — our goal is therefore to study how to build a telescope that has as little diffraction as possible.

With our group of experts in optics and satellite payloads at Embry-Riddle we are considering refractive optics, advanced sensors and ‘baffling’ to minimize unwanted light reaching the image sensor. Our IIP proposal is being submitted this week!

We have tried NASA proposals before; this is the second attempt, building on the reviews of our first. The opportunities — the aims of the proposal calls — vary, so the emphasis is different this time: the first time we focused on the sensors, now we are working on the optics.

Stdev and bias in EFM

Error budget Posted on Mar 16, 2012 15:35

I have worked with the EFM method for removing scattered light, testing it on a synthetic moon image from night JD 2455923.

The “true” scattered light is represented by an image:
S_true = observed – pedestal – usethisidealimage
The best guess scattered light is represented by the end result from the fit minus the pedestal:
S_efm = trialim – pedestal

I have investigated the mean value of S in the 3 boxes shown below for 100 S_true and corresponding S_efm images. The same synthetic image is the basis of all the “observed” images, but each has slightly different noise added when it is convolved with the PSF. The results:

It is clear that we have significant bias. Not all the stray light is removed with the EFM method. On the positive side, the standard deviations of the fits are small. The counts here can be compared to a typical DS signal of 5 counts.
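The per-box bias and scatter can be sketched in Python (a minimal sketch with stand-in images and hypothetical box coordinates; the real S_true and S_efm images come from the EFM fits):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)

# Stand-ins for the 100 (S_true, S_efm) image pairs; S_efm is given a
# deliberate 0.3-count offset to mimic the bias left by the EFM fit.
s_true = [rng.normal(0.0, 0.1, shape) for _ in range(100)]
s_efm = [s + 0.3 + rng.normal(0.0, 0.02, shape) for s in s_true]

# One of the three measurement boxes (hypothetical coordinates).
box = (slice(20, 31), slice(20, 31))

true_means = np.array([im[box].mean() for im in s_true])
efm_means = np.array([im[box].mean() for im in s_efm])

bias = efm_means.mean() - true_means.mean()  # systematic offset in the box
stdev = efm_means.std()                      # scatter of the fits
```

With the real images, the bias is the stray light the EFM method fails to remove, and the stdev is the (small) fit-to-fit scatter quoted above.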

Comparison of alignment procedures

Error budget Posted on Mar 08, 2012 18:30

I have compared two different methods for aligning images: Chae’s IDL procedure and Kalle’s / my center of mass Python procedure.

The example given here is the result for a stack of 11 images obtained within 0.7 minutes on JD2455864 in the V-filter, and it is representative of many other similar stacks.

I have aligned the stack with each of the two methods, determined the standard deviation of the stack, and divided it by the coadded image of the stack. Finally the image has been multiplied by 100, so that it directly shows the relative error on the raw observation in percent. The images are basically the dO/O in the error budget equations.
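The dO/O image can be sketched as follows (numpy, with a stand-in noise stack in place of the real aligned images):

```python
import numpy as np

# Stand-in aligned stack: 11 frames of a flat 1000-count field with
# 3% noise, shape (n_frames, ny, nx).
rng = np.random.default_rng(1)
stack = 1000.0 + rng.normal(0.0, 30.0, (11, 128, 128))

coadd = stack.mean(axis=0)                # coadded image O
std_im = stack.std(axis=0)                # standard deviation of the stack
rel_err_percent = 100.0 * std_im / coadd  # dO/O per pixel, in percent
```

For the stand-in stack the per-pixel relative error comes out near the injected 3%; on real stacks the map shows where alignment residuals inflate it.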

The left images are aligned with Chae’s method and the right images are aligned with the center of mass method. The only difference between top and bottom images is the scaling. The top ones are shown on a square-root scale that emphasizes the bright side of the Moon, and the bottom images are shown on a histogram-equalized scale that emphasizes the dark side of the Moon.

The center of mass method out-performs Chae’s method for both the bright and the dark side. This is especially true for the earthshine. Both methods have difficulties with the bright limb.

The difference between the two methods is not so much the method for calculating the required offset. Instead it is the interpolation technique used in the subpixel move that matters. Chae’s method (shift_sub) uses bilinear interpolation, whereas the CM method uses exponential functions as the interpolating functions. With the CM method there is visible blurring of the lunar features, and it is this smoothing that lowers the standard deviation of the individual pixels.
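The role of the interpolation kernel can be illustrated with a hand-rolled bilinear subpixel shift (a sketch of the kind of kernel shift_sub uses; the CM method’s exponential kernel is not reproduced here):

```python
import numpy as np

def bilinear_shift(im, dy, dx):
    """Shift an image by a subpixel offset using bilinear interpolation,
    with periodic edges via np.roll (a sketch, not shift_sub itself)."""
    iy, ix = int(np.floor(dy)), int(np.floor(dx))
    fy, fx = dy - iy, dx - ix
    a = np.roll(im, (iy, ix), axis=(0, 1))
    b = np.roll(im, (iy, ix + 1), axis=(0, 1))
    c = np.roll(im, (iy + 1, ix), axis=(0, 1))
    d = np.roll(im, (iy + 1, ix + 1), axis=(0, 1))
    return (a * (1 - fy) * (1 - fx) + b * (1 - fy) * fx
            + c * fy * (1 - fx) + d * fy * fx)

# Any interpolation smooths: shifting a noise field by half a pixel and
# back lowers the per-pixel standard deviation, exactly the effect that
# makes the smoother kernel look better in the dO/O images.
rng = np.random.default_rng(3)
im = rng.normal(0.0, 1.0, (64, 64))
smoothed = bilinear_shift(bilinear_shift(im, 0.5, 0.5), -0.5, -0.5)
```

An integer shift reduces to a plain roll and leaves the pixel values untouched; only fractional shifts blur.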

In the form the methods have now, I would recommend Chae’s alignment whenever we wish to show a nicely centered, pretty Moon image, and the CM method whenever we wish to determine the intensity of a box of pixels on the Moon. Of course we can play with the parameters of both methods, and this might increase or decrease the blurring.

Coordinates of bad pixels

Bias and Flat fields Posted on Jan 08, 2012 12:24

Bad Pixels

Bias and Flat fields Posted on Jan 06, 2012 21:59

Our CCD has a number of bad pixels with lower than average sensitivity. They are a result of imperfect manufacturing, and most of them are collected in small groups. Their numbers and locations are stable, and almost all of them are completely removed with even a mediocre flatfield.

But the most severe of the bad pixels seem to have different strengths in different master flatfields, and therefore they show up in diminished form in difference images between two master flatfields. The worst pixels may be 6-7% different from one master to the next, and therefore these pixels should be avoided. Luckily most of the bad pixels are close to the edge of the CCD. There is one semi-severe group in the central part of the CCD; I have seen examples of 1.7% variation in a couple of pixels from this group.
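Flagging the variable bad pixels between two master flats can be sketched like this (stand-in flats, with hypothetical pixel coordinates and a hypothetical 1% threshold):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two stand-in master flatfields; a handful of "bad" pixels change
# sensitivity between them by 6%, like the worst cases seen on the CCD.
master1 = np.ones((128, 128)) + rng.normal(0.0, 0.001, (128, 128))
master2 = master1.copy()
bad = [(10, 10), (10, 11), (64, 64)]
for y, x in bad:
    master2[y, x] *= 1.06

# Percent change between the masters; flag anything above 1%.
change = 100.0 * np.abs(master2 - master1) / master1
flagged = np.argwhere(change > 1.0)
```

Applied to real master pairs, the flagged coordinates would form the avoid-list of variable bad pixels.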

The figure is the twilight master flatfield in the IRCUT filter from session JD 2455827, shown in power-scale to emphasize the small dark specks that are the bad pixels. The three most severe groups of bad pixels are circled in green, and the two second-worst ones are circled in blue. These five groups of pixels should be avoided if possible – just to be on the safe side.

Improved periodic bias plot

Bias and Flat fields Posted on Dec 30, 2011 20:21

I have improved the plot showing the periodic bias level for multiple areas on the CCD. I have added arbitrary offsets to separate the different datasets from each other, and I have included a small figure showing the location of the five 16×16 areas on the CCD.

Colour filters

Optical design Posted on Dec 27, 2011 14:52

The figure shows the filter transmission as measured by Rodrigo for the B, V, VE1 and VE2 filters up to 800 nm, and for the IRCUT filter over a much wider range, as given by the manufacturer.

S/N image

Error budget Posted on Dec 07, 2011 16:20

From the night JD2455864 V-filter moon images I have selected those with similar count levels in the bright part of the Moon. This resulted in 23 images. These I have aligned and coadded (mean-half-median method), thus obtaining a coadded object image O and a standard deviation image, delO. From these two images (and images with estimates of B, delB, F, delF) I constructed a S/N image calculated pixel by pixel. I found the DS to have S/N ~2-3 and the BS to have S/N ~30-40. The S/N image looks like this:

While the DS S/N value might be realistic, the BS S/N value is much lower than expected. This is likely because co-add mode is necessarily used close to a rising or setting Moon. On this particular night the Moon was setting, and the 23 selected images were obtained over a 30-minute interval – in which time the Moon went from an altitude of 22 degrees to 17 degrees. The typical BS count level falls over this period from 27900 to 26400, a decrease of about 5.4%.
Next I tried scaling the images after alignment and before coadding. I have scaled them using an 11×11 area in Mare Crisium as reference. This area had a mean of 10300 in the unscaled coadded image, and I have scaled each image so that it has the value 10500 in that area.
scale_factor = 10500.0/AVG(im[360:371,230:241])
This gave a DS S/N ~1.5 and BS S/N ~100+. The BS S/N seems pretty insensitive to the exact scaling factor, but the DS S/N increases dramatically (S/N ~12) if I use for instance 15000 instead of 10500. I simply don’t think we can count on the S/N values calculated on a per-pixel basis after scaling.
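The scaling step can be sketched in Python (stand-in images and hypothetical box coordinates; the real reference is the 11×11 Mare Crisium area):

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in stack of 23 aligned frames whose level decays as the Moon
# sets, plus noise.
stack = np.array([1000.0 * (1 - 0.002 * k) + rng.normal(0.0, 5.0, (64, 64))
                  for k in range(23)])

# Scale each image so a reference box hits a fixed target level.
ref = (slice(30, 41), slice(30, 41))
target = 1000.0
scaled = np.array([im * (target / im[ref].mean()) for im in stack])

coadd = scaled.mean(axis=0)
snr = coadd / scaled.std(axis=0, ddof=1)  # per-pixel S/N of the stack
```

The scaling removes the setting-Moon trend from the stack, which is why the BS S/N jumps; the same operation is what makes the per-pixel DS S/N untrustworthy.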

Conclusions and thoughts: If we scale the moon images to counter the setting/rising Moon, we seem to be able to achieve a S/N on the bright side similar to the Poisson noise. It is much more troublesome to estimate the S/N on the dark side, which is very sensitive to the method of scaling and the scaling factor. But perhaps a S/N of 2-3 (as determined without scaling, using only images selected to have at least somewhat similar bright-side count levels) is a decent estimate.

Simple Errorbudget Co-add

Error budget Posted on Dec 01, 2011 16:06

I have aligned and co-added the V-filter moon images from night JD2455864. 16 frames were rejected due to poor correlation when aligning (turned out they were overexposed on the bright limb). This left me with a total of 110 frames.
I have selected an 11×11 pixels area on both the bright and the dark side. The selected areas are shown in the figure below.

The error-budget formula used is for co-add mode, so exposure time cancels out, and in this first simple version scattered light is ignored.

For the earthshine I find S/N = 1.7, and for the moonlight I find S/N = 128.2.

I have experimented with two different approaches to improve the S/N of the dark side. One way is to increase the number of mean counts, O, in the selected area. For night 864 V-filter O was in the range 398-405. Unfortunately the bright limb was overexposed in the images with higher values than this.
The other approach is to have more images from the same night in the same filter. I have investigated N from 1-1000.
Finding the right exposure time, where the bright limb is well-exposed near saturation, is very important. We can achieve a S/N higher than 3, if we have about 400 frames and a mean dark side count level of 405.
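The frame count behind the "about 400 frames" estimate follows from the sqrt(N) behaviour of co-add mode (a simplification of the full error budget, which also carries bias and flatfield terms):

```python
import numpy as np

# Co-add S/N grows as sqrt(N): S/N(N) = S/N(1) * sqrt(N).
# The measured dark-side S/N of 1.7 at N = 110 fixes the per-frame value.
snr_110 = 1.7
snr_1 = snr_110 / np.sqrt(110)

# Frames needed for a target dark-side S/N of 3.
n_needed = int(np.ceil((3.0 / snr_1) ** 2))
```

This simple scaling lands in the mid-300s of frames, consistent with the roughly 400 frames quoted above.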
Hopefully we can achieve an even better S/N closer to the new moon. To be continued…

No significant dark current

Bias and Flat fields Posted on Nov 20, 2011 14:05

I have plotted the mean dark value as function of exposure time from the dark frames obtained night JD2455883. The range of exposure times is 10-200 seconds and thus covers all higher exposure times we might be interested in. No dark current is observed, only the usual scatter due to the 20 minutes period. The period was clearly seen in a plot from the same data of dark count as function of time since first frame.
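The "no dark current" conclusion can be checked with a linear fit of dark level against exposure time (stand-in data with the ~0.5 ADU periodic scatter; the fitted slope bounds the dark current in ADU per second):

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-in dark-frame means: 100 exposures of 10-200 s, no real dark
# current, only the bias-level scatter from the 20-minute period.
t = rng.uniform(10.0, 200.0, 100)
dark_mean = 400.0 + rng.normal(0.0, 0.5, 100)

# The slope of a straight-line fit is the dark current in ADU/s;
# consistent with zero means darks add nothing beyond the bias.
slope, intercept = np.polyfit(t, dark_mean, 1)
```

With the real JD2455883 data the same fit gives a slope consistent with zero, which is what justifies dropping dark frames in favour of bias frames.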

I think it is safe to say that we no longer have to make dark frames – only bias frames. This will save observing time, and therefore there should be time to ALWAYS obtain a bias frame before and after each science frame or flatfield. This will improve the scaling of the superbias.

The horizontal line in the plot is the mean value of the superbias. It can be seen that it is not in the middle of the scattered dark values. I have seen this before, as well as the opposite, with dark values generally being higher than the mean of the superbias. However, in both cases values are within plus/minus 0.5 ADU, and the scaling of the superbias means it is not a problem.

JD2455883 darks

Observing log Posted on Nov 18, 2011 16:02

I have obtained 100 dark frames with random exposure times in the interval 10-200 sec. This will allow me to determine with certainty if there is any substantial dark current.

The telescope performed as it should and all images were saved without trouble.

Flatfield errors

Error budget Posted on Nov 02, 2011 18:11

I have compared four central 30×30 pixel areas in all the available sky master flatfields. A minimum of 3 masters were available in each filter. The selected regions a (blue), b (red), c (pink) and d (green) are shown in the figure. The flatfield used in the figure is the B master for night JD2455827.

For each filter and region, I have checked the difference between the largest and smallest mean value, expressed as a percentage of the smaller value. These (worst case) changes lie in the range 0.03-0.36%. It seems we can count on the error in a master flatfield to be very low!
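The worst-case change per region can be computed like this (hypothetical region means standing in for the measured 30×30 values):

```python
# Stand-in mean values of one 30x30 region in three master flats,
# per region; the real numbers come from the measured masters.
region_means = {"a": [1.0002, 1.0001, 1.0004],
                "b": [0.9998, 1.0000, 0.9996],
                "c": [1.0010, 1.0005, 1.0008],
                "d": [1.0020, 0.9990, 1.0015]}

def worst_case_change(values):
    """Largest minus smallest mean, as a percent of the smallest."""
    return 100.0 * (max(values) - min(values)) / min(values)

changes = {name: worst_case_change(v) for name, v in region_means.items()}
```

Taking the maximum over regions and filters gives the single worst-case number that could serve as the flatfield error estimate.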

The best master flatfields seem to be those in the B and V filters, with only a single case of a change above 0.1%.

The worst region of the four is the green region. This is true for all filters. Perhaps these worst case changes in the worst region can be used as an estimate for the error in a master flatfield?

Comparison of flats within a single twilight session

Error budget Posted on Oct 28, 2011 18:37

I have compared two well-exposed flatfields in the B-filter from the dusk session night JD2455856 with a time difference of about 28.5 minutes.

Each flatfield was bias-subtracted with the proper scaled superbias, had a fitted surface subtracted, and was then normalized. A percent difference image was created as the difference compared to the earlier flatfield.
perc = ((late-early)/early)*100

The mean of this image is of course very close to zero (0.0014), but more interesting is the standard deviation (0.53).

The strongest part of the familiar diagonal pattern is still visible in the percent image (see left part of figure). I have used ImageJ to rotate the percent image and plot the horizontal profile of the yellow box (top right part of the figure), selected to be perpendicular to the diagonal darker areas. This profile is plotted in the bottom right part of the figure. It can be seen that the diagonal structure is 0.2 percentage points darker than its immediate neighborhood.

The diagonal structure does change within a relatively short time-frame, and this will result in an uncertainty in the science frames when they are flatfield corrected. So far it doesn’t seem to be a large change, but more investigation is necessary. The most likely explanation for the change is temperature fluctuations. Perhaps it is possible to investigate whether the diagonal structure changes with a period comparable to the change in bias level…

Periodic bias level

Error budget Posted on Oct 07, 2011 11:50

I have checked that the periodic behaviour of the bias-level is the same over the whole area of the CCD. I have chosen night JD2455745 for the investigation because all the dark-frames have the same exposure time (60 sec).

I found the mean of five small areas of the CCD and plotted them together with the mean value obtained from the full area of the CCD as a function of time since first frame. The selected areas represent the four corners of the CCD (1 pixel away from the edge) and the center of the CCD. I therefore expected that each area would have its own mean level, but the same period and amplitude as the whole frame.

I have tried this for two different area sizes, 8×8 pixels and 16×16 pixels. The periodic behaviour can be seen in both cases, but it is quite muddy for the small area size. The above image shows the 16×16 case. The red stars are the mean values of the whole frame, and each colour represents one of the five 16×16 areas. All five areas follow the same period with roughly the same amplitude as the whole CCD.
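Extracting the five area means alongside the full-frame mean can be sketched as follows (a stand-in frame with a common offset plus read noise; hypothetical CCD geometry):

```python
import numpy as np

rng = np.random.default_rng(8)
ny, nx = 512, 512

# One stand-in bias frame: a global offset shared by the whole CCD
# (mimicking the periodic level shift) plus independent read noise.
offset = 2.0
frame = 400.0 + offset + rng.normal(0.0, 3.0, (ny, nx))

# Five 16x16 areas: four corners (1 pixel from the edge) and the center.
s = 16
areas = {"tl": (slice(1, 1 + s), slice(1, 1 + s)),
         "tr": (slice(1, 1 + s), slice(nx - 1 - s, nx - 1)),
         "bl": (slice(ny - 1 - s, ny - 1), slice(1, 1 + s)),
         "br": (slice(ny - 1 - s, ny - 1), slice(nx - 1 - s, nx - 1)),
         "c":  (slice(ny // 2 - s // 2, ny // 2 + s // 2),
                slice(nx // 2 - s // 2, nx // 2 + s // 2))}

box_means = {name: frame[a].mean() for name, a in areas.items()}
full_mean = frame.mean()
```

If the periodic shift is truly global, every box mean tracks the full-frame mean to within the read-noise scatter of a 16×16 box, which is what the plot shows.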

From this I conclude that it is safe to assume the periodic behaviour is a shift in the bias-level of the whole CCD area. It is likely to have to do with the thermostatic control of the temperature of the CCD.