Starting in August 2023 the ISS will have onboard the Danish astronaut Andreas Mogensen, who will perform several experiments specifically for Danish scientists.
We suggested that the impact of scattered light from the lunar bright side onto the dark side could be investigated in an experiment based on comparing images of the Moon taken from space with simultaneous images taken from the ground.
Old friend, and Bishop, Nick Knisely talks about the ISS earthshine experiment on his blog: https://entangledstates.org/2023/08/22/calling-all-citizen-scientists-lunar-photos-needed-for-climate-research/
We now take the next steps, after building the Mauna Loa system and operating it: into Space with NASA!
The reason is simply that, from space, we can avoid the variability due to observing through an atmosphere and do not need a global network of earth-bound telescopes – one suitable instrument will do it all from orbit.
Even from NOAA’s Mauna Loa Observatory (MLO) — one of the best observatories anywhere on Earth — we could easily see variability in our results due mainly to very thin high-altitude clouds. Extinction at MLO is low of course, but variable. Sure, the ‘bad seeing nights’ can be eliminated by detecting the variability each night, but then you end up with not very many good data!
When we noticed a small Earth-observing student satellite (@flying_laptop on Twitter) from the University of Stuttgart, we asked if they would try to catch some images of the Moon for us – and they did! The images were interesting and taught us the importance of optimized optical design – optimization for the sake of driving down ‘scattered light’ (really, a phrase covering aperture diffraction as well as various internal scattering processes). We worked with the students and reported on what we found at the 2019 annual European Geosciences Union meeting in Vienna, in the “Earth radiation budget, radiative forcing and climate change” session that Martin Wild, and others, organize each year. See our poster here.
Building on that experience, we are now looking at more ways to go into space, and also to improve on the earthshine instruments we can orbit.
One such effort is with the SAIL (Space and Atmospheric Instrumentation Lab) at Embry-Riddle Aeronautical University in Daytona Beach, Florida. With three friends there we are putting together a NASA Instrument Incubator Proposal (IIP) for development of an optical system optimized for the task at hand: observing high-contrast targets with quite extreme requirements for performance.
In our present ground-based approach we are never really observing the Moon without a contribution from the atmosphere, and must resort to various ‘subtraction schemes’ to get rid of either a ‘halo around the Moon’ (which is light scattered along the path to the image sensor) or a ‘flat sky level’ which can be due to such things as airglow, or Moon-light scattering up from the ground onto particles in the atmosphere (this is not a ‘halo contribution’). Both kinds of contributions have to be removed before the faint earthshine can be used for terrestrial albedo studies. From space, both of these contributions would be absent automatically, leaving only a faint contribution due to aperture diffraction – our goal is therefore to study how to build a telescope that has as little diffraction as possible.
With our group of experts in optics and satellite payloads at Embry-Riddle we are considering refractive optics, advanced sensors and ‘baffling’ to minimize unwanted light reaching the image sensor. Our IIP proposal is being submitted this week!
We have tried NASA proposals before; this is the second try, building on reviews of our first attempt. The opportunities – the aims of the proposal calls – vary, so the emphasis is different this time: the first time we focused on the sensors, now we are working on the optics.
The recent total lunar eclipse gave an opportunity to apply photometric methods. It is possible that the light from the Sun, passing through the Earth’s atmosphere, is coloured by processes that ultimately also have a climate effect on Earth. Recent volcanic eruptions may cause the eclipse to appear redder than usual.
Can this be measured accurately?
The Internet was overflowing with images and image sequences of the eclipse. Among them this composite image by K. Lewis (www.photosbykev.com):
Using the method described in the paper by Park et al., we can convert the RGB values from the image file above to Johnson B and V colours (the possibilities are limited to these two bands – more could be done if the NIR-blocking filter were taken out of the camera).
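As a rough sketch of what such a transformation looks like – the colour-term coefficients below are placeholders for illustration, not the published Park et al. values – each Johnson magnitude is formed as a linear combination of instrumental magnitudes derived from the channel fluxes:

```python
import numpy as np

def rgb_to_bv(r, g, b):
    """Approximate Johnson B-V from linear RGB channel fluxes.

    The colour-term coefficients are placeholders, NOT the published
    Park et al. values; the point is only the form of the transformation.
    The R channel is not needed for B and V themselves.
    """
    m_b = -2.5 * np.log10(b)          # instrumental 'blue' magnitude
    m_g = -2.5 * np.log10(g)          # instrumental 'green' magnitude
    B = m_b + 0.25 * (m_b - m_g)      # placeholder colour term
    V = m_g - 0.05 * (m_b - m_g)      # placeholder colour term
    return B - V

# A patch with relatively less blue flux comes out redder (larger B-V):
print(rgb_to_bv(40.0, 15.0, 5.0) > rgb_to_bv(200.0, 180.0, 160.0))  # True
```

With the real Park et al. coefficients, the same per-patch channel means extracted from the composite frames would yield the B-V sequence plotted below.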
We extracted the mean and median B-V colour from the sub-frames above, and plot these values in a sequence, starting from upper left and ending in lower right frame:
We see the reddening of the Moon as it is eclipsed. The dotted horizontal line is a nominal value for B-V=0.92 based on literature values for the Full Moon, and the measured values from the composite photo were adjusted up by about 0.6 in B-V to roughly coincide with the 0.92 level. We see that the eclipse was redder than the Full Moon by about 2.5 in B-V.
The jagged outline of the curve, and the evident jumps in brightness in the main photo, seem to indicate that the detector used was not linear – this could be because some images were obtained outside the linear part of the CMOS device’s transfer curve.
What are published B-V values from eclipses, studied with professional equipment and with attention paid to preserving linearity? Figure 4 of the paper “Wideband Photoelectric Photometry Of The Jan. 20/21, 2000 Lunar Eclipse” by Schmude et al. (1999), International Amateur-Professional Photoelectric Photometry Communications, Vol. 76, p. 75 (http://adsabs.harvard.edu/abs/1999IAPPP..76…75S), shows a change in B-V of about 2.5 magnitudes during an eclipse – like the value above.
We obtained other images of the 2015 eclipse and also converted their RGB values to B-V, but found eclipse B-V values very different from the above. Attention to obtaining Full Moon images just before and after the eclipse is required, as is some attention to exposure levels – trying always to capture the various stages of the eclipse at the mid-sensitivity range of the device used.
Also important is that there are gradients in the shadow of the Earth on the Moon; the geometry of the eclipse, the times of observation and the location on the lunar surface used all influence the results obtained.
This is a trial video of all our good V-band images. On the left is the linear image, on the right a somewhat histogram-equalized image. We want to embed this video; until then, here is a link.
NASA recently released a set of images showing the Moon crossing the disc of Earth, as seen from the DSCOVR satellite orbiting the L1 Lagrange point. From that point the satellite can keep constant watch on Earth, with the Earth disc always fully lit. NASA image, from the DSCOVR satellite.
This image is interesting because it brings together so many different pieces of information, and possibilities. With the Moon – an essentially unchanging object (surface cratering changes only on timescales of hundreds of millions of years) – in the same frame as Earth, it becomes possible to very accurately calibrate photometry of the Earth.
Why do we need good photometry of the Earth? Well, weather satellites already give us a flood of images of Earth, but the images – especially from older satellites – are mainly suited for weather forecasting – “Here is a cloud mass, and it is moving this way, so we think it will rain or be cloudy at such and such a place a little later”. This sort of information is of enormous value to society on a day-to-day planning basis, but for climate research the data have to be absolutely calibrated – in other words, it is no good if the pretty pictures of Earth are taken with different instruments that have different sensitivities, or with instruments that slowly alter their properties. The resulting data would not be inter-comparable.
Climate research needs data for Earth that tell the story of what happens over time – this is the difference between ‘climate’ and ‘weather’: weather is tomorrow’s events, but climate is the mean of what things were like 20, 50 or 1000 years ago. If changes in climate can be detected, then we can start thinking about what causes the change. A lot is known already, but every bit of new, reliable information can be put to good use.
Looking at the DSCOVR images we note the cloud masses on Earth – if you have seen the animated film showing the Moon passing across the disc you will realize that the Earth is turning as we watch – so new cloud masses are being brought into view while others pass beyond the limb of the Earth causing variability in the average brightness of Earth. Link to the animation is here.
Using image-analysis software it is possible to pick out the parts of the image that are Earth and the parts that are Moon, and look at the statistics of the numbers. The images are ‘RGB’ images, and each RGB image contains one image for the red light, one for the green light and one for the blue light. Because the Moon is in the image there is the huge benefit that even if the camera settings or properties varied slightly in time, the ratio between Moon and Earth brightnesses in the three colour bands would be independent of these errors or slow changes.
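A minimal sketch of that separation, on a synthetic RGB frame (toy discs and toy brightness values, not the real DSCOVR data): mask the two bodies, take per-channel means, and note that the Moon/Earth ratio is immune to any overall gain change of the camera.

```python
import numpy as np

# Synthetic stand-in for a DSCOVR-like RGB frame: a bright 'Earth' disc
# and a darker 'Moon' disc on a dark sky.
h, w = 200, 200
yy, xx = np.mgrid[0:h, 0:w]
earth = (xx - 130) ** 2 + (yy - 100) ** 2 < 50 ** 2
moon = (xx - 40) ** 2 + (yy - 60) ** 2 < 15 ** 2
img = np.zeros((h, w, 3))
img[earth] = [0.30, 0.32, 0.35]   # toy Earth: brighter, bluish
img[moon] = [0.12, 0.11, 0.10]    # toy Moon: darker, slightly red

def moon_earth_ratio(rgb, moon_mask, earth_mask):
    """Per-channel ratio of mean Moon brightness to mean Earth brightness."""
    return rgb[moon_mask].mean(axis=0) / rgb[earth_mask].mean(axis=0)

r1 = moon_earth_ratio(img, moon, earth)
r2 = moon_earth_ratio(1.7 * img, moon, earth)   # simulate a 70% gain drift
print(np.allclose(r1, r2))                       # the drift cancels: True
```

Because both bodies are measured through the same optics at the same moment, any multiplicative change in sensitivity divides out of the ratio exactly.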
So, with these wonderful properties of the NASA image, how useful is it, in reality?
The amount of cloud on Earth relates directly to the ‘radiative balance’ that drives climate. More clouds mean less light gets into the climate system to warm it up. Therefore, variations in the amount of cloud will alter the temperature. We all know it is cooler when a cloud draws in front of the Sun, but over larger scales and longer times, how much will mean temperature change for a given change in cloud cover?
Estimates indicate that a 1% change in global mean cloud cover will change global mean temperature by 1 degree C – roughly (could be half that, could be twice that), so if we could be sure to measure mean cloud cover to an accuracy of 1/10th of a percent we could expect to have data that gave us important information in the clouds-vs-temperature question.
In the NASA images the variations in the clouds seen, due to rotation, amount to some 3% in the short segment of images provided. The images are taken about 15 minutes apart from the DSCOVR satellite, so in that short film (containing 12 images with a full Moon disc in view) we cover almost a quarter turn of the Earth. Aha – so we now realize that the amount of cloud on Earth varies by several percent from hemisphere to hemisphere, and probably day to day. The next day the amount of variability is some other number, and some cloud masses have dissolved while new ones have formed – so on a daily basis cloud variability on Earth is high. Uncertainty is therefore high as to the actual amount of cloud present – weather forecasting may care where the clouds are and how many there are, but climate investigations need more precise (less scattered) data.
If we average many images of Earth the variability would average out and the uncertainty would go down. How many pictures of Earth, like the ones here, would you need for the average cloudiness to be of use in the clouds-vs-temperature question?
The answer turns out to be that with about 1000 independent images the average would have an uncertainty of 0.1%.
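That number follows from the 1/sqrt(N) averaging of independent measurements; the back-of-envelope version, using the ~3% single-image scatter quoted above:

```python
import math

# If a single image of Earth carries ~3% cloudiness scatter and
# independent images average down as 1/sqrt(N), the N needed to reach
# 0.1% uncertainty is (sigma/target)**2.
sigma_single = 3.0     # percent scatter per independent image (from the text)
target = 0.1           # percent uncertainty wanted
n_needed = math.ceil((sigma_single / target) ** 2)
print(n_needed)        # 900, i.e. roughly 1000 independent images
```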
How do we get ‘independent’ images? The phrase refers to ‘statistical independence’ and has to do with how long you must wait for a fresh cloud-scape to be present over the visible disc of the Earth. Weather systems can last several days so you can perhaps get an independent image of Earth every 2 or 3 days.
It therefore looks like observations gathered over a decade could produce cloudiness data that could be used for investigations into climate-change mechanisms via clouds. This answer holds equally well for ordinary satellite data, except those data have the added problem of instrument drift.
Of course, the Moon only crosses the Earth’s disc as seen from the DSCOVR satellite every so often – you certainly do not get a sequence of images like the above every day, or even every month. So are these data useless for the climate-change purpose, then?
Well, all satellites have a basic problem – they do not last long in deep space due to radiation damage from cosmic-ray particles and outbursts from the Sun, so they get replaced every few years. This will probably also have to be done with the DSCOVR satellite, if there is funding for it. The point I make here is that the presence of the Full Moon in the image frame every so often allows a re-calibration of the DSCOVR satellite instruments that gives unprecedented accuracy. Geostationary satellites are between the Moon and Earth and do not capture the Moon in-frame very often, and when they do it is as a small object peeking out from behind the Earth in a corner of the picture. These chance occurrences are, however, used nowadays to calibrate satellite instruments, by EUMETSAT and others. Classically, satellites were calibrated on bright white surfaces on Earth, such as salt deserts – this gives about 1% precision and accuracy, while the lunar methods are able to reach a few tenths of a percent accuracy.
Another comment that can be made based on the NASA image is that the Moon appears dark against the Earth. In principle this is well understood, because the lunar soil is dark while Earth is full of bright clouds, icecaps and a scattering atmosphere. Deserts are also very bright, but forests are dark, as is the cloud-free ocean. With the images at hand we can readily measure how bright the Moon is relative to the Earth in each of the R, G and B colour bands. The ratio should give the ratio of albedos (reflectivities) of Moon and Earth, because the geometry is the same – the Sun is behind the DSCOVR satellite and both Earth and Moon are ‘Full’ at the time the images were taken. I did this and find a ratio about 2 or 3 times higher than the standard ratio expected. The Moon’s albedo is in the range 0.05 to 0.10 while Earth’s is near 0.3, so the ratio should be 1/6 to 1/3, roughly, but actually comes out as 0.8.
Why is this? I think there may be two reasons. First, the far side of the Moon – usually hidden, but seen here – is brighter than the near side, because it has more bright highlands, while the near side has the lunar maria, which are darker. Second, we are seeing the Moon under Full Moon conditions, and albedo, or reflectivity, depends on the angle of viewing, just as for any surface you look along. Near vertical incidence the lunar soil appears brighter than at other phases. This is known as the ‘opposition surge’ and explains why the Full Moon, as seen also from Earth, appears brighter than at other times – not just because a larger fraction of the Moon is illuminated then, but because a lit region simply reflects more of the incident light when the incidence and viewing angles are 0 degrees – straight down and straight back out again.
Why does albedo depend on the viewing angle? Any rough surface has shadows behind boulders and mountains, and only when the angle of incidence is the same as the angle of reflection are the shadows hidden from the observer – look at a lit, rough surface from any other angle and the shadows can be seen, leading to a lowered average reflectivity. The lunar surface also has small beads of glass and crystalline material which contribute to the enhanced reflectivity during Full Moon. Is there no ‘opposition surge’ for Earth? After all, the geometry for Earth and Moon is the same here, so a surge for the Moon might also cause brightening of Earth. I don’t know yet – but note the pretty ‘sunglint’ seen in the centre of Earth during the animation.
The images used here are not ‘science grade’ but have been optimized for viewing online, so colour transformations have been applied in order to yield a pleasing and realistic image – perhaps the scaling has caused an offset in the brightness ratios, explaining the large Moon/Earth albedo ratio. The raw science data will be released by NASA later, and consist of images taken through 10 very narrow band-pass filters.
We might pause to wonder why all this effort is worthwhile. The answer is simple and has to do with reproducibility of observations – we need many estimates of all properties of the climate system in order to understand it fully, and to winnow out the erroneous data – a point made brilliantly by Ruth Mottram in her excellent blog sternaparadisaea.net.
As they say on Mythbusters “If it’s not reproducible, it ain’t Science!”
A very clever alternative to watching Earth from Space in order to determine its reflectivity, is to stay on Earth, or near it …, and watch Earth from there – by using Earthshine observations! And the rest of this blog is about our attempts to do just that.
We have generated ‘super flatfields’ by considering all the flat fields we have – many exposures through each filter, taken on the sky, on the inside of the dome, and on the hohlraum lamp. While the patterns are the same for the three types (mainly a roof-tile-like diagonal pattern), we have dips in the corners of especially IRCUT and VE2 (and VE1). It is therefore not obvious which flat field is the ‘correct’ one. Assuming that the ones with the least vertical relief must be the ones closest to reality, we have adopted one superflat for each filter.
We need to investigate whether there are other ways of testing which flat field is best – and we have the possibility of using ‘scamp’ and ‘sextractor’ to generate background fields, which we assume are some sort of smoothed residual after sources are removed. Using frames from NGC6633 we generate these background fields both from ‘raw’ frames and from ‘bias-subtracted and flat-divided’ cluster images.
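The idea behind such background fields can be sketched in a few lines (synthetic frame; real SExtractor additionally clips, filters and interpolates the grid): tile the frame and take a median per tile, so that compact sources are rejected and only the smooth background survives.

```python
import numpy as np

# SExtractor-style background estimate, in rough sketch form.
rng = np.random.default_rng(4)
frame = rng.normal(18.0, 1.0, size=(64, 64))   # sky near 18 counts
frame[20:24, 30:34] += 500.0                    # a bright star

def background_grid(img, tile=16):
    nrow, ncol = img.shape[0] // tile, img.shape[1] // tile
    tiles = img.reshape(nrow, tile, ncol, tile)
    return np.median(tiles, axis=(1, 3))        # per-tile median rejects the star

bg = background_grid(frame)
print(np.all(np.abs(bg - 18.0) < 1.0))          # star does not bias the map: True
```

Because only a handful of pixels per tile belong to the star, the tile medians stay at the sky level, which is the property we rely on when comparing ‘raw’ and ‘flat-divided’ backgrounds.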
A sample of the background fields, shown as contour plots, for one science frame is shown here:
The upper field is for ‘raw’ and the lower for ‘bias-subtracted and flat-divided’. Both seem to have a mean near 18 counts, but the contours evidently are not the same. The differences between the two backgrounds amount to 0.1–0.2 counts.
We should center the backgrounds on sky coordinates, average them, and see whether the fields are consistent for a given method (we assume so). TBD.
If the patterns are stable we have learned that applying bias-subtraction and flat-division can alter the levels by a few tenths of a count. This may be important for photometry of the DS. Presumably it is better to correct for the flatfield than not to, but that depends on whether the flat-processed fields are consistent. Perhaps warp can be used to generate a stacked field, and its background?
Doing just that we get the average background frame for ‘raw’ and ‘bias-subtracted, flat-divided’ processing:
The relief in that field (we took the most-overlapping 512×512 region) runs from +0.1 to -0.02 – so some averaging has occurred, but there is still a one-tenth-of-a-count effect of applying the flat (in B, when 14 images could be stacked).
So, we cannot rule out that applying flats has a large effect relative to DS photometry – i.e. 0.1 out of 1 is 10%, and at best 0.1 out of 10 is 1%.
The effect is regional inside the frame, and so it may well affect the limited-region fittings we perform on the DS.
We might also want to consider the effect on the BS halo, evaluated at the DS, if the BS is likewise affected by flat-fielding.
Basically, applying flats is necessary, as long as it does not bias results through poor production of the flat field itself.
Here is an example of a two-way smear. The red area is the overexposed region, and it has bloomed along vertical columns in the image. To the right of this bloom is a smear extending hundreds of pixels. What is that?
Read-out direction is top-down.
What is causing this smear to the right? Is it related to the ‘depression of the sky in a broad band left-right’ that we see in some images?
The intensity of the smear is at the 500-count level – i.e. it acts just like part of the halo from the BS. This surely has consequences for data reductions. Do images that have not bloomed have this halo? Is this why the bright edge of the BS halo is always hard to model?
Can we see if the fitting of the BS halo is easier in images that have been modestly exposed, compared to those exposed right up to 50,000 or 55,000 counts?
Atmospheric extinction is caused by absorption and scattering of light.
Usually photometry is performed on images of sources by collecting the flux from an area near the source and relating it to the airmass of that particular observing moment. In a Langley plot the relationship between magnitudes and airmass will be an (almost) straight line, and the extinction coefficient is the slope of that line.
As light passes through more and more atmosphere there will be more absorption as well as more scattering. Both remove intensity from the direct beam. Absorption simply takes the light away (at least at optical wavelengths), while scattering redistributes the light near the source. If the method of measuring light from the source allows recapture of the scattered light – for instance by using large collection areas in the image – then the effect of scattering will be reduced and a smaller extinction coefficient should result, compared to the extinction calculated when scattering is allowed to remove light from the measurement.
Thus, when considering an extended source like the Moon, we have an opportunity to test the magnitude of scattering vs absorption by explicitly collecting all light near the Moon (for instance, by summing the flux in the whole image frame) and comparing to what we find when light is collected only from a small patch on the lunar bright side.
We have done this for 5 wavelength bands and a score of observing nights where conditions were right. We measured the flux in each image in two ways – either as the total flux in the image frame, or as the flux inside a small patch on the lunar bright side defined by selenographic longitude and latitude. Langley plots were constructed and slopes measured using regression, along with error estimates of the slopes.
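The slope-fitting step can be sketched on synthetic data (the airmass range, extinction coefficient and noise level below are invented for the demo): instrumental magnitude grows linearly with airmass, m(X) = m0 + k·X, and the extinction coefficient k (mag/airmass) is the regression slope.

```python
import numpy as np

# Langley-plot regression on a synthetic night of observations.
rng = np.random.default_rng(1)
airmass = np.linspace(1.0, 3.0, 20)           # airmass range covered in a night
k_true, m0_true = 0.15, -10.0                  # invented 'true' values
mag = m0_true + k_true * airmass + rng.normal(0.0, 0.005, airmass.size)

k_fit, m0_fit = np.polyfit(airmass, mag, 1)    # slope = extinction coefficient
print(round(k_fit, 3))                         # close to the input 0.15
```

The same regression is run once with whole-frame fluxes and once with small-patch fluxes, and the two fitted slopes are what the figure below compares.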
Results are plotted here:
Langley extinction for whole-image fluxes plotted against extinctions determined from a small bright-side patch in each image.
If the image is rotated and inspected (sorry about that), we see that for the five filters the extinction determinations often agree, despite the expectation that extinction determined from the small patch (allowing scattered light to escape) would be larger than the extinction determined from the whole image. Only a few points deviate significantly from the diagonal line.
This implies that scattering at MLO was very low compared to absorption – since scattering includes Rayleigh scattering, the plot for B should be considered carefully.
In a few cases the plots allow us to identify ‘bad nights’, when the extinction determinations do not follow the diagonal.
From the International Space Station (ISS) the Moon can of course be seen, and has been photographed with hand-held cameras. Someone took an image of the Moon with exposure settings chosen to highlight the earthshine. Below, such an image is discussed:
At the bottom the JPEG image is shown. A line has been drawn on the image, and the slice of the image along it is plotted at the top. The R channel in the RGB file was used – G and B are similar but have a higher sky level than R. More details on this image are available at http://eol.jsc.nasa.gov/scripts/sseop/photo.pl?mission=ISS028&roll=E&frame=20073#
On the plot we clearly see the dark sky above the Moon, the entry onto the lunar disk, the extensive dark side, the bright side and the transit of the bright atmospheric layer and then entry onto the dark Earth below.
We note that the bright side is saturated as the intensity reaches 255.
We do not know whether astronauts on the ISS have high-dynamic-range cameras available. We think the above image was snapped with a standard 8-bit camera.
Using 16-bit hand-held cameras – such as a Hasselblad fitted with a 16-bit digital back – images of the earthshine could be obtained that show the bright side and the dark side at the same time without bright-side saturation, allowing the DS/BS ratio to be calculated. That ratio is proportional to terrestrial albedo.
Such images could be of educational value and could highlight the role that earthshine studies can have in climate change research.
Since the atmospheric profile is quite uniform horizontally in the above image, we estimate the mean atmospheric profile and subtract it from the slice across the Moon, thus revealing the profile across the Moon, without atmospheric effects:
The top panel shows, in black, the profile across Moon plus atmosphere; the red graph shows the profile of the atmosphere adjacent to the Moon; the bottom panel shows the difference.
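The subtraction itself is simple; a toy 1-D version (synthetic numbers, not the actual ISS slice) makes the assumption explicit: the atmospheric glow is nearly uniform horizontally, so it can be estimated from a parallel slice just beside the Moon and subtracted.

```python
import numpy as np

# Toy profile subtraction: Moon signal plus a horizontally-uniform glow.
n = 100
atmo = 50.0 * np.exp(-np.linspace(0.0, 4.0, n))   # smooth atmospheric profile
moon = np.zeros(n)
moon[30:60] = 120.0                                # Moon occupies samples 30..59

slice_on_moon = moon + atmo                        # line crossing the Moon
slice_off_moon = atmo.copy()                       # adjacent line: atmosphere only

moon_only = slice_on_moon - slice_off_moon
print(np.allclose(moon_only, moon))                # atmosphere removed: True
```

In the real image the off-Moon slice is noisy, so an average over several adjacent columns would be used rather than a single one.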
We note that the atmosphere-free profile in the bottom panel above shows almost no ‘halo’ from the bright side – unlike our own images, taken from MLO, through the atmosphere.
That piece of information is quite interesting: as we understand it, our own images have a contribution to the halo from the optics of our telescope, along with a variable halo due to atmospheric scattering. From the ISS a camera with a lens was also used, and one wonders why there is no sign of this optics halo. Are the ISS camera optics vastly better than ours? This is not likely, since probably an ordinary hand-held camera was used. On the other hand, modern lens coatings (such as in the Canon ‘Dragon-eye coating’ lenses) are of a high standard and are designed to suppress lens scattering and internal multiple reflections.
So, we wonder if the ISS has some special lenses at hand?
We note that the above image is for an extreme ‘almost new Moon’ situation, with little flux from the bright side to scatter and interfere – we should see whether more ISS images of the Moon showing earthshine at phases nearer 1/4 are available. If those images also show no appreciable halo due to optics, we might be on to something.
Note that the image of the Moon is probably taken through a clear spherical dome or window mounted on the side of the ISS – what are the implications of this for observing the Moon through a dome, here on Earth? Is distance between dome surface and camera lens a factor?
1) Extinction not accounted for – this may cause the ‘slope to the left’ of the DS seen above.
2) The image was taken in August 2011 using a Nikon D3S camera. That is a 14-bit camera producing .NEF frames. We now have this frame from NASA.
For a given stack of 100 images, how large is the uncertainty on derived albedo?
We investigate this using one of the ‘good’ stacks of 100 images for the V band. We call the stack ‘good’ because it seems that repeated fits to it with slightly different settings of the fitting algorithm give similar results.
We split the stack into 33 3-image averages, 16 6-image averages, 8 12-image averages, 4 25-image averages, and 2 50-image averages. We also fit the 100-image average several times. We fit each of these multi-image averages and plot the results:
Top left: log-log plot of how the RMSE depends on the number of frames in each average. The RMSE is calculated in the areas used for fitting the model to the observed image. The fitted line has slope minus one third. Top right: log-log plot of how the estimated uncertainty of the albedo fit (given by the routine LMFIT) depends on the number of frames averaged. The fitted line has slope minus one third. Middle left: log-log plot of how RMSE and albedo uncertainty depend on each other. The line has slope near 0.9. Middle right: fitted albedo against number of frames. The error bars are the formal estimates of the albedo-fit uncertainty (given by LMFIT). Bottom left: how the standard deviation of the albedo determinations (in the middle right plot) depends on the number of frames averaged. To avoid overplotting we have offset N slightly by adding small random numbers, but still maintain a clustering around the actual N value.
We see some interesting things in this evaluation of ‘intra-stack variability’: the uncertainty (both RMSE and albedo uncertainty) ought to drop as one over the square root of the number of frames averaged, if the noise causing the scatter were normally distributed and independent and there were no biases. As it is, the errors drop with a slope of minus 1/3, indicating that something is holding back error-averaging. We also see a hint of this in the middle right panel: albedo seems to depend on how many frames we average – the frames were averaged sequentially (3-averages formed from frames 1,2,3, then 4,5,6, and so on). We can test whether the order of averaging is important by doing some bootstrapping and randomizing (TBD), but there is also the possibility that increased averaging allows some subtle biasing factor to emerge. Finally, we see that the intrinsic error (i.e. that not due to the choice of image stack but due to factors like noise within a given stack) can be brought below 0.2% by averaging just 20 frames. We also see that the formal estimates of albedo uncertainty are not far from the scatter of the albedo determinations themselves (but might be off by a factor of 2 or so).
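As a sanity check on the expected behaviour, here is a toy computation (synthetic Gaussian noise, not our real stacks) confirming that the scatter of n-frame averages of independent noise falls with a log-log slope near -1/2 – the benchmark against which our measured -1/3 is anomalous:

```python
import numpy as np

# Scatter of n-frame averages for purely independent Gaussian noise.
rng = np.random.default_rng(0)
frames = 1.0 + rng.normal(0.0, 0.01, 10000)    # toy 'albedo per frame', 1% noise

ns, scatter = [], []
for n in (1, 4, 16, 64):
    groups = frames[: (len(frames) // n) * n].reshape(-1, n).mean(axis=1)
    ns.append(n)
    scatter.append(groups.std())

slope = np.polyfit(np.log(ns), np.log(scatter), 1)[0]
print(round(slope, 2))   # close to -0.5 for independent noise
```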
So, in summary, we have found that scatter can be brought down to the few-tenths of a percent level by averaging and using our fitting method and observed images. [Wahey! That is actually a major point for the Big Paper! Rejoice!] We find that there is some sign of bias causing a monotonic evolution of the fitted albedo with increased averaging, as well as a decrease in scatter that does not drop as expected for averaged independent noise.
We can test if the order of averaging is important by bootstrapping, and this will be done. It could help rule out effects due to some time-dependence of the properties of the images in the stack.
We have now tested some more ideas: the effect of alignment, as well as the effect of the order in which the images are selected for averaging inside the 100-image stack.
Upper left: red line and black points show the old result – sequentially selected frames are averaged, without alignment. Green points and line – the images are selected non-sequentially as in ‘bootstrap with replacement’ and are also not aligned. We see that the slopes of the lines are the same – about -1/3 but that the bootstrapped results have about 25% higher error than the sequentially selected.
This gave the idea that ‘something changes’ along the stack which results in increased error. It could be that the Moon drifts slowly across the sky so that mis-alignment is more evident in sums of frames selected far apart as opposed to in sequence. There could also be ‘other time dependent’ things going on – such as various CCD-related effects.
Upper right: to test the drift hypothesis above we next aligned the selected frames when bootstrapping. As we see there is no effect of doing this! So drift is not the problem – or is it? Alignment consists of sub-pixel shifts that may induce variance-changes due to non-conservative interpolation in the image frame. We have considered this in this blog (see discussion and further blog references HERE). Since the green data look the same in upper left and upper right we conclude that alignment (i.e. possible flux non-conservation) cannot be the issue.
Bottom right: this shows that the scatter on bootstrapped albedo determinations is now larger than for sequential access, and that levels between 0.2 and 0.3% can be reached, at best.
So why is it ‘bad’ to pick the images in random order, rather than sequentially?
Speculation department: If it turns out that alignment simply isn’t the issue then what? We have the shutter and we have the bias. In the above we assume that the shutter works fine and that the bias is the same in all frames. We test here the shutter, by looking at the total flux of each image in the 100-image stack:
Top panel shows percentage deviation of the total flux in each bias-subtracted frame relative to the average of the whole stack. The standard deviation is 0.46%. Flux is not constant from frame to frame. Why? Second frame shows the maximum flux in each frame after smoothing. The two graphs are very similar. The last frame shows the mean of a dark corner in each image. The graph does not look like the other two – so it is unlikely that strong sky variations or changes in bias caused the changes in total flux.
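The frame-by-frame flux diagnostic can be computed along these lines. A sketch on a synthetic stack – the frame size, count level and corner-box size are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a 100-frame, bias-subtracted stack (frames shrunk).
stack = rng.poisson(50.0, size=(100, 32, 32)).astype(float)

# Percent deviation of each frame's total flux from the stack average:
totals = stack.sum(axis=(1, 2))
pct_dev = 100.0 * (totals / totals.mean() - 1.0)
print("std of total flux: %.3f%%" % pct_dev.std())

# Mean of a 'dark corner' in each frame, as a bias/sky control:
corner = stack[:, :10, :10].mean(axis=(1, 2))
```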
We guess that the changes in flux are due either to changes in atmospheric transmission or to shutter variability.
But how could flux in-constancy cause the problems we see with increased errors when bootstrapping? Apart from a few big dips in the total flux, most points seem to be randomly scattered wrt the neighbours. In that case sequential access as opposed to random access of image sequence would not matter, would it?
There is the ‘wobble’ of the image as seen in a film we have on this blog – waves of ‘something’ wash across the lunar disc causing small transmission changes. Sequential images may then be more alike than images selected far apart, and alignment, being whole-image shift based, would not repair the problem due to ‘wobble’. A test suggests itself – look at fits to single images in a stack and the scatter these have. Then consider whether that is more or less than the scatter seen between n-frame averages of the two types.
CONTINUED EVEN LATER:
It was tested whether albedo fit scatter increased or decreased when integer-pixel shifts and sub-pixel shifts were used: It turns out that sub-pixel shifts are better than integer-pixel shifts in most cases, but that a straight sum (i.e. un-aligned) is best! NB: This is the case for the present choice of 100-image stacks. YMMV.
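The flux-conservation point behind this comparison can be illustrated with a toy shift routine – our own linear-interpolation sketch, not the alignment code actually used:

```python
import numpy as np

def shift_linear(img, dy, dx):
    """Sub-pixel shift by separable linear interpolation.
    Interpolation splits flux between neighbouring pixels; with the
    wrap-around used here totals are conserved, but with edge clipping
    (as in real frames) flux conservation can be broken."""
    iy, fy = int(np.floor(dy)), dy - np.floor(dy)
    ix, fx = int(np.floor(dx)), dx - np.floor(dx)
    out = np.roll(img, (iy, ix), axis=(0, 1))
    out = (1 - fy) * out + fy * np.roll(out, 1, axis=0)
    out = (1 - fx) * out + fx * np.roll(out, 1, axis=1)
    return out

img = np.zeros((8, 8))
img[4, 4] = 100.0
whole = np.roll(img, (1, 0), axis=(0, 1))  # integer-pixel shift: flux untouched
sub = shift_linear(img, 0.5, 0.0)          # sub-pixel: flux split 50/50
print(whole.sum(), sub.sum())
```

A point source shifted by half a pixel ends up spread over two pixels – exactly the smoothing that can alter per-pixel variance even when the total flux survives.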
Below, we have investigated the phenomenon of ‘Mickey Mouse ears’ – i.e. the tendency for theoretical lunar images to have too-bright cusps so that the residual image ends up with a bipolar structure resembling Mickey Mouse’s ears. We have investigated what happens to the ears if a semi-empirical BS model image is used, based on scaling and clip-selecting the BS from the observed image. We showed that the ears went away.
Now we investigate what happens to the quality of the fit – for our goal is to derive terrestrial albedos with minimal error (both scatter and bias; bias we can live with as long as it is constant; scatter is a pain).
We selected 10 images that we fit with both semi-empirical BSs and theoretical BSs and tabulated the values for fit-parameters as well as some of their formal fit errors (output from LMFIT).
In this plot we show the changes, in percent, of some key parameters and their errors, in the sense that the purely theoretical result is subtracted from the semi-empirical result:
Change, in %, of some key fitting parameters. (semi-empirical – theoretical)/(½*(semi+theor))*100.
A: albedo, is larger by several percent in the semi-empirical (S) fit, than in the purely theoretical (T) fit.
dA: error on albedo reported by the fitting routine, is mainly smaller in S compared to T.
alfa: the power to which the PSF is raised is less by several % in S compared to T. This implies a broader PSF in the final fit in S.
ped: the pedestal added to the whole image has both positive and negative changes.
dX: the shift of the model image in X direction needed to align model and observed image. Mainly smaller in S than in T.
cf: core factor on the PSF – has larger values in S than in T. So the PSF peak is ‘pulled up’ in S relative to the wings.
contrast: lunar albedo map contrast, has smaller values in S than in T. This is consistent with the ‘pulled up’ core which ‘copies’ detail better.
RMSE: the root mean square error in the area of the image used for minimization, is mainly smaller in S than in T.
So, what have we learned? The two formal estimates of errors – dA and RMSE are both smaller in S than in T, so S would appear to be a better way to go.
Taken together – the smaller errors (on A, and RMSE itself) in S compared to T is encouraging, but we really do not know how much closer to the ‘true value’ for A we are, just that we are probably closer – at least in terms of scatter, not necessarily in terms of bias.
Using the semi-empirical method compared to the fully synthetic method adds very little in terms of computational overhead, so we will use S henceforth.
We note that the width of the distribution of albedos in units of the mean is 8 percent smaller in S than in T. The S method gives tighter distributions of albedo. Of course, the intrinsic (geophysical) scatter in albedo is present in this number too.
It seems our synthetic model images of the Moon have an intensity distribution across the lunar disc that does not match observations. When we fit such an image to the ‘band’ near the photo-equator and subtract the fitted image from the observed image we get a residuals image with ‘Mickey Mouse ears’ – i.e. the residuals in intensity are bipolar.
We are testing a semi-empirical method where the BS of the synthetic image is replaced by the observed BS, scaled to the right value. When such an image is fitted to the observed image we get no Mickey Mouse ears problem:
Left image: Residuals after a well-fitting synthetic image has been subtracted from the observed. Note the polar nature of the residuals – the cusps are clearly too bright while the area between the cusps is too dark. Right image: residuals when a well-fitting image, based on the observed BS, is subtracted. There is very little large-scale structure now.
The two fits deliver terrestrial albedo as one of the fitting parameters.
For the left image (i.e. the entirely synthetic model image) we get A=0.37298; for the semi-empirical image we get A=0.36311 – that is a difference of 2.7%, which is considerable, given our science goal of 0.1%.
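As a check of the arithmetic, the difference quoted above, expressed symmetrically relative to the mean of the two albedos:

```python
def pct_change(semi, theor):
    """Symmetric percent difference: (semi - theor) / (0.5*(semi + theor)) * 100."""
    return (semi - theor) / (0.5 * (semi + theor)) * 100.0

print(round(pct_change(0.36311, 0.37298), 2))   # -2.68, i.e. about 2.7% in size
```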
Whether the fit has been improved where it counts is a different matter. The RMSE for the two fits are identical (RMSE=0.026).
The large change in fitted albedo is a (regrettable) feature of our fitting method – change something a little bit and the fits change by several percent.
We suggest that a meaningful test of fit quality will be to fit the ensemble of images with the two methods above and see if the scatter is less in one than in the other. TBD.
We have commented earlier that for an otherwise well-fitted model image the ‘horns’ or cusps of the fitted image appear relatively too bright compared to middle-of-the-disc pixels. We investigate this now.
For an observed image and the corresponding best-fitted model image we extract certain pixels defined by the incoming and outgoing angle wrt to local normal on the lunar disc and compare the intensities in the observations to those in the model image. Here is an example:
The upper panel shows the observed image with certain pixels picked out (the even dark band along the terminator). These pixels all see the Sun under angles between 55 and 65 degrees wrt the local normal. In the lower panel we show the intensities of those pixels in the observed image (black) and the model image (red) plotted against the angle between local normal and the direction towards the Earth.
We see that the observed values (black) ‘dip’ for large angles (i.e. when the direction towards Earth is almost parallel with the local surface) while the model values (red) remain unwaveringly high. We also notice that the model values start low at low angles and rise to a modest plateau starting at 75 degrees or so.
If we repeat the above for other sun-angle bands we see the same pattern – model is ‘flat’, observations ‘dip’ when the angle to Earth is large, leaving the cusps always brighter in the model than in the observations.
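The pixel selection behind these plots can be sketched as follows; the angle arrays here are random placeholders for the per-pixel incidence and emission angles that would come from the lunar geometry:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
inc = rng.uniform(0, 90, n)     # angle between Sun direction and local normal
emi = rng.uniform(0, 90, n)     # angle between Earth direction and local normal
obs = rng.uniform(0.5, 1.0, n)  # observed pixel intensities (placeholder)
mod = rng.uniform(0.5, 1.0, n)  # model pixel intensities (placeholder)

band = (inc >= 55) & (inc <= 65)   # the 55-65 degree sun-angle band
order = np.argsort(emi[band])
emi_b = emi[band][order]
obs_b, mod_b = obs[band][order], mod[band][order]
# Plotting obs_b (black) and mod_b (red) against emi_b gives the lower panel.
```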
This is an image of the Moon occulting Saturn. Saturn is evidently brighter than Earthshine – however, this is a phase near Full Moon so the Earth is near New, and thus faint. Still, eh?
From APOD. Image by Carlos Di Nallo.
With a 1-degree occulter I took an image of the Sun. A slice across that image, through the darkest spot on the occulter gives this profile, plotting from darkest spot and outwards:
A log-log plot of the radial profile of the occulted Sun, on August 1 2014, using f22 and 1/500 s exposure, and a 1-degree occulter. Pixel scale is 0.04 deg/pixel, so 1 degree is at 25 pixels and the 200 pixels are at 8 degrees. The image is from the R-plane, G and B are not shown.
The occulter is the dark part at left stopping near pixel 20. The drop-off after pixel 200 is some roof or something like that, while the curve between pixels 20 and 200 are what the halo around the Sun looks like. The CCD is 12-bit so saturation starts near 4000, and non-linear response, perhaps, before that. However, we see clearly that the dropoff due to scatter in the optics is faster than dropoff of halo intensity. This implies that optical scatter plays a smaller role in defining the halo than does atmospheric scattering.
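The radial profile itself is straightforward to compute. A sketch – the bin count and the test power law are our own choices:

```python
import numpy as np

def radial_profile(img, cy, cx, nbins):
    """Mean intensity versus radius from (cy, cx), in equal-width radial bins."""
    y, x = np.indices(img.shape)
    r = np.hypot(y - cy, x - cx)
    edges = np.linspace(0, r.max(), nbins + 1)
    idx = np.clip(np.digitize(r.ravel(), edges) - 1, 0, nbins - 1)
    sums = np.bincount(idx, weights=img.ravel(), minlength=nbins)
    cnts = np.bincount(idx, minlength=nbins)
    return 0.5 * (edges[:-1] + edges[1:]), sums / np.maximum(cnts, 1)

# A power-law halo is a straight line in a log-log plot of this profile:
y, x = np.indices((101, 101))
halo = (np.hypot(y - 50, x - 50) + 1.0) ** -2.5
radius, profile = radial_profile(halo, 50, 50, 20)
```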
Note that this is with a multi-element SIGMA telelens – albeit on a rather hazy day.
Of interest for our own little earthshine telescope is to repeat the above on some crystal-clear day (hopefully approximating conditions on Mauna Loa) and see if the optics drop-off is also faster than atmospherics then.
On Mauna Loa, of course, the effects of optics are what they are, independent of altitude while the atmosphere is MUCH clearer there – and there is less of it than in Frederiksberg.
In our fits of model-images to observed images we fit a model with several parameters. One of them is a ‘sky pedestal’ which is a constant that is added to the model image before fitting in order to accommodate sky brightness that is not due to the scattered-light halo.
We include this term because we figure that the sky might be uniformly bright for several reasons:
1) moonlight could be reflected from the ground up to the sky and back into the telescope
2) airglow could brighten the sky
3) unresolved stars and the zodiacal light shine from the sky behind the Moon
4) light could be scattered inside the telescope and contribute a more or less uniform ‘haze’
We show here the magnitude plot of that pedestal term against lunar phase for our 5 filters:
Error: axis labels have been switched!
In order to interpret this plot you should know that the pedestal term is ADDED to the model image, so a positive value of the pedestal implies that the model is representing a sky brightness, and that a negative value of the pedestal implies that the model needs to subtract something before it can fit the observation well.
Notice that the pedestal term is negative for all but the VE2 filter. That is, only in VE2 is a well fitting model one that has a sky brightness contribution. For the remaining filters the model needs a subtracted term for the fit to be good.
Notice also that the VE2 pedestal is about 4-6 times larger than the other filter’s, in absolute terms.
Notice also that the term goes to zero for large phases (i.e. near New Moon).
So – turning to speculation now – what is going on?
Only VE2 behaves as expected: As the Moon gets brighter more light has to be removed. This could be Moonlight scattered from the ground, onto the sky, and back to the telescope [See the paper by Bernstein, R. A., “The Optical Extragalactic Background Light: Revisions and Further Comments.” The Astrophysical Journal 666, 663–673 (2007). They model the moonlight reflected from the ground near a telescope.]. Or light scattered inside the telescope – but not airglow (why would it depend on lunar phase?). Nor can it be zodiacal light because, although it does depend on lunar phase when you observe near the Moon, it has the wrong phase-dependence – towards Full Moon the ZL near the Moon is low; towards New Moon the ZL near the Moon is stronger (because the Moon is nearer to the Sun and the ZL is strongest near the Sun).
So – the pedestal for VE2 behaves as we expect widely scattered light from the Moon should behave.
By the way, let us call this light ‘ADL’ from now on – ambient diffuse light, to tell it apart from the light scattered in the halo, which has a strong profile with distance from the Moon.
But for the other filters the pedestal behaviour is opposite!
I think this could be a sign of the fitting process having a problem – the method has to specify negative pedestals for something else to work. That could perhaps also be the case for VE2 but to a smaller extent than the conventionally expected scattering: if it is bigger for VE2 a contribution could still be taken out leaving a positive pedestal for that filter.
Do we have any reasons to believe in stronger ADL for the VE2 wavelengths than the rest? The VE2 filter is of a different nature than the rest – perhaps it scatters more light? This would match the phase-behaviour.
What could the role of airglow be? While the zodiacal light and the galactic background and stars come from behind the Moon and are blocked by this, the airglow is all over the image. What happens when we fit such images with just one pedestal term? Can the procedure go wrong in the observed way?
What do we know about airglow? Does it depend on wavelength?
Let us assume that: In the VE2 filter there is a lot of scattering, and the method of fitting deals with this by specifying a large positive pedestal. For reasons not yet understood, there is a tendency to make the pedestal a little too small – but this is not seen in VE2 because the scattering is so large. In the other filters the scattering is less and the error comes through as a small negative pedestal.
Work on understanding why the pedestal needs to be a small negative number for the other filters. Perhaps it is a consequence of some of the choices of frozen parameters in the model. We do freeze ‘core factor’ and ‘rlimit’, for instance. Optimally, the factors that need to be frozen should be done so at values that do not induce strange problems such as the present one.
Secondarily, find out if the small negative pedestal influences albedo determinations significantly. With the DS intensity being proportional to albedo we can see that the small negative pedestal values directly correspond to albedo biases in the 0.2% range.
In this entry we estimated extinction coefficients for all data from each filter – that is, we did not study each night individually, but took all B, V etc data separately.
We found, at that time:
B 0.15 mag/airmass
V 0.10 mag/airmass
VE1 0.08 mag/airmass
VE2 0.06 mag/airmass
IRCUT 0.12 mag/airmass
Since then we have eliminated some observations that we now know had problems of one sort or the other, and have the opportunity to re-estimate extinction coefficients. We now find:
B 0.18 mag/airmass
V 0.11 mag/airmass
VE1 0.06 mag/airmass
VE2 0.09 mag/airmass
IRCUT 0.05 mag/airmass
kB is 0.03 higher; kV 0.01; kVE1 0.02 lower; kVE2 is 0.03 higher and kIRCUT is 0.07 lower. The changes of +/- 0.02 are as expected given Chris’ analysis of extinction from single nights, but the change in kIRCUT is large – however, it is now more in line with kVE1: the two filters are almost identical, so that is a step in the right direction.
We found these extinction coefficients by plotting extinction-corrected flux against lunar phase and fitting a third-order polynomial. For trial values of the extinction coefficient we computed the scatter of the corrected fluxes around the polynomial fit, and adopted the value that minimized this scatter.
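That trial-value search can be sketched on synthetic data. The magnitude model, noise level and k grid below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
k_true = 0.10                                   # assumed 'true' coefficient
phase = rng.uniform(30, 120, 200)               # lunar phase, degrees
airmass = rng.uniform(1.0, 3.0, 200)
mag = 2.0 + 1e-4 * (phase - 90) ** 2 \
      + k_true * airmass + rng.normal(0, 0.01, 200)

def scatter_for_k(k):
    """Residual scatter around a cubic in phase, after correcting with k."""
    corrected = mag - k * airmass
    fit = np.polyval(np.polyfit(phase, corrected, 3), phase)
    return np.std(corrected - fit)

trial_k = np.arange(0.0, 0.31, 0.01)
best_k = trial_k[np.argmin([scatter_for_k(k) for k in trial_k])]
print(best_k)   # lands near k_true
```

The minimum is well defined because an extinction error leaves a residual proportional to airmass, which does not correlate with phase.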
In some of our images sky conditions were such that the halo obviously was ‘too large’ for good results to be expected. We seek to eliminate these images now.
One simple measure of a halo that is “too large” is obtained by looking at the BS and finding those images where the halo drops more slowly than in other same-phase images. We calculate an index of halo narrowness based on the ratio of the brightness 20 pixels beyond the BS edge to the maximum brightness in the image. We call this the Q20 index.
On nights with ‘large halos’ we also often see a bright sky quite far away from the lunar disc, on the DS part of the sky. We quantify this by the mean brightness of the 10×10 box in the lower DS corner of the bias-subtracted image.
We plot now the phase vs Q20 and the sky brightness vs Q20:
Each row is for one of the 5 filters, left column is Q20 against the illuminated fraction of the lunar disc (i.e. phase). Right column shows sky brightness on the DS vs Q20. Q20 is the ratio of the BS intensity 20 pixels from BS edge on the sky and maximum brightness of the image.
We see that Q20 is quite constant against lunar phase, implying that we are correctly normalizing the halo. We see only a small drop in Q20 for (perhaps) V and VE2 filters as phase becomes very small.
We see some outliers where Q20 is high compared to most of the data at same phase.
We see (right column) that sky brightness and Q20 are related – for large values of both, but otherwise Q20 is a constant.
In fitting images to models – discussed abundantly elsewhere in this blog – we realize the need to concentrate mainly on images with as little halo as possible, so we wish to eliminate the images with a large Q20 value. Where shall we set the cutoff?
On the plots the dashed line shows 1.3 times the median Q20. We (arbitrarily) choose to get rid of all images with Q20 above this line.
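The Q20 selection can be summarized in a few lines. Names and the toy numbers are ours; in practice the BS edge position comes from the disc fit:

```python
import numpy as np

def q20_index(image, bs_edge_col, row):
    """Brightness 20 pixels beyond the BS edge divided by the image maximum."""
    return image[row, bs_edge_col + 20] / image.max()

img = np.zeros((5, 40))
img[2, 5] = 100.0     # peak of the BS
img[2, 25] = 2.0      # sky, 20 pixels beyond the 'edge' at column 5
print(q20_index(img, 5, 2))

def keep_mask(q20_values):
    """Keep images with Q20 at or below 1.3 times the median Q20."""
    q = np.asarray(q20_values)
    return q <= 1.3 * np.median(q)

print(keep_mask([0.010, 0.011, 0.012, 0.011, 0.030]))  # last one rejected
```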
We also note that Q20 is largest for VE2 and smallest for B – light is scattered further the longer the wavelength is. Does this tell us anything about the scattering mechanism? I’d venture that if the scattering was based on the Rayleigh mechanism we should see more scattering in the blue – wonder how Mie scattering goes? At least some of the scattering is in the optics and is fixed and unrelated to the atmospheric effects – perhaps that can be confirmed by considering some optics?
What of the sky brightness?
We see that VE2 and V have higher sky brightness values than the other filters – VE2 more so than V. Most B, VE1 and IRCUT images have sky brightness below 1 (count), while VE2 has most between 1 and 10. V has most between 0.1 and 1 but a fair fraction above 1.
For VE2 we speculate that the sky brightness is related to long-wavelength flux from the sky – this could be from water vapour – so we suggest that VE2 sky brightness is compared to MLO weather data for humidity. Since the H2O may be high up in the atmosphere and not at ground level where the hygrometer is, this may not be easy. Satellite imagery for the nights in question could be brought to play on this issue. [Student Project!].
We are uncertain why V should have sky brightness issue if nearby B and VE1/IRCUT do not.
We arbitrarily suggest to also eliminate all images with sky brightness above 2 counts (10 for VE2).
Removing such images from ‘Chris list of best images’ we are left with 494 in number, and the list itself is available on request.
At the MLO, weather is monitored by automatic instruments. The data are available here http://www.esrl.noaa.gov/gmd/dv/data/index.php?site=mlo.
For our observing times we extract all available met data and inspect them:
Left: (top) Vertical temperature gradient (T2m – Ttop) in degrees C against fraction of Julian Day (fJD). fJD=0 at midnight, so each night’s observing starts at 0.8 or so and proceeds to the right, wrapping around at midnight. T2m and Ttop are temperatures measured on the MLO met tower at 2 m above ground level and at tower top (much more than 10 m up!). (middle) Relative Humidity in % against fJD. Ignore bottom panel.
Right: (top) Histogram of vertical temperature gradient for all observing times (black) and for every hour of 2012 (red). (bottom) Histogram for RH.
We note that the vertical temperature gradient for our observing times is almost always negative – it is warmer higher up than near the ground – which implies vertical stability. The opposite condition certainly occurs during 2012 but does not correspond to observing times – probably because conditions were bad (convectively unstable or cloudy?). We also see that the gradient becomes less negative through the night, implying cooling and equilibration of temperatures. Changes in RH are small. RH during observations shows a skewed distribution relative to what was available – so whatever selects observing nights does not simply pick out the driest ones.
We shall see if the few nights with large RH (say, above 40%) tend to correspond to particularly bad conditions, by inspecting observed images and checking for ‘drifting wiggles’ and plain old ‘large halo’.
An example of what our fit residuals look like. Note the structure around the BS (right, on the lunar disc). Also the straight-edge structure on the BS sky … do we have internal reflections going on here?
Structure on the DS is in the tenths of counts range.
Here is the same image, but now in % of the local value … don’t scream! We have +/- 50% errors on the BS – but less elsewhere …
If you fit each image in a 100-image stack with our model, and plot the histogram of albedos, you get this:
The dashed line shows the fitted V-band albedo of the 101st image, which is the average image made up of the aligned and averaged 100 images in the stack.
In fitting the above 100 images we found residuals that were ‘wavy’ – probably the waves of ‘something’ that is seen in the movie here. The residuals look like this:
If you fit the 100-image average image, you get this:
The residuals are much smaller – but also ‘wiggly’. The RMSE of the 100-image average fit is about 3.7 times smaller than the single-image fit’s. Can this, and the width of the histogram (1.8%), be used to form a scaling argument so that the uncertainty on the 100-image fit can be deduced?
I’d like to think that
error_fit_100_average = SD_histogram*(RMSE_100/RMSE_1)
which would give us an uncertainty on the fit of the 100-image average of 0.5%, which is quite a lot. In contrast, the standard deviation of the mean, SD_m=SD/sqrt(N-1), is 1/10th of the 1.8%, or 0.18%. This is more in keeping with the experience we have had with fitting ideal images with added noise.
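Spelled out with the numbers above:

```python
import math

sd_hist = 1.8         # % width of the 100 single-image albedo histogram
rmse_ratio = 1 / 3.7  # RMSE_100 / RMSE_1 from the two fits

err_scaled = sd_hist * rmse_ratio       # proposed scaling: about 0.49%
sd_mean = sd_hist / math.sqrt(100 - 1)  # SD of the mean: about 0.18%
print(err_scaled, sd_mean)
```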
The advantage of the fit of the 100 individual images is that you get the histogram and thus a measure of the actual uncertainty – when you fit the 100-average image you just get the fit – formal uncertainty estimates depend on knowing (independently) the noise on each pixel. One way around is to consider Markov Chain Monte Carlo fitting using the Metropolis-Hastings algorithm – this iterates the fit in a Bayesian framework and generates histograms of all fitted parameters, but is very, very slow because MANY iterations are needed. Another way would be to bootstrap the single images, somehow.
Here are the similar histograms for the same night from V, B and VE2. Again, the dashed line is the albedo found in the 100-image average image, while the histogram is made up of 100 single-image determinations:
We now have 3 stacks for which the mean of the histogram and the albedo of the mean image are very close – this encourages us to think that the uncertainty of the determination based on the mean image is similar to the standard deviation of the mean for the histogram. Let us test this by fitting even more stacks and their component images along with the mean image. It would be a way towards using the mean-image determinations as the ‘real thing’ and designate the variations we see from stack to stack as geophysical and not based in analysis method bias.
Here is a movie of what 100 images in a stack, on a VERY CLEAR NIGHT, looks like if you look at frame-to-frame changes, and histogram equalize the images:
If you monitor 4 patches of 20×20 pixels each (one on the dark side of the Moon, one on the dark side sky, one on the bright side of the Moon and one on the bright side sky), you get these mean series:
There is some food for thought here. Consider the DS patch – it varies by tenths of counts (this is mean over a 20×20 patch) – the mean of that patch is typically several counts – so we are looking at several percent to ten percent changes with time here. Of course, we do not fit individual images, but only the stack average.
Notice how BS sky patch is dropping in intensity in this sequence – this is consistent with BS patch getting brighter – less light is being scattered to sky here.
If the sequence of images is scaled to the total flux of one of the images in the sequence we get a small drop in BS patch variability – it goes from 0.22% to 0.12%. So either the shutter is variable and causing this, or the total amount of light in the 1×1 degree frame is variable for other reasons: scattering or extinction may remove light from the beam entering the telescope. But certainly, the light in the small 20×20 BS patch varies on its own even in flux-normalized images, so non-shutter variability is present.
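The flux-normalization test can be sketched like this, on a synthetic stack with an injected common ‘transmission’ wobble (sizes and levels are placeholders, not our real frames):

```python
import numpy as np

rng = np.random.default_rng(4)
trans = 1.0 + rng.normal(0, 0.004, 100)   # common multiplicative wobble
stack = trans[:, None, None] * rng.poisson(1000.0, (100, 64, 64))

def patch_variability(s):
    """Percent std of the mean of a 20x20 patch across the stack."""
    p = s[:, 10:30, 10:30].mean(axis=(1, 2))
    return 100.0 * p.std() / p.mean()

raw = patch_variability(stack)
totals = stack.sum(axis=(1, 2))
scaled = stack * (totals[0] / totals)[:, None, None]  # scale to frame 0's flux
norm = patch_variability(scaled)
print(raw, norm)   # normalization removes the common part of the variability
```

What is left after normalization is the patch’s own (here Poisson) variability – the analogue of the residual 0.12% seen in the real data.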
If we take a stack of 100 images, calculate the total counts in each frame and tabulate this, and then take the list of totals and compare the mean of the list to its variance we find that variance is MUCH more than the mean. This should not be the case if the totals were Poisson-distributed.
We find that variance is 100s to 1000s times bigger than the mean of the list.
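The overdispersion is easy to demonstrate: a small multiplicative jitter on Poisson totals inflates the variance-to-mean ratio far above the Poisson value of 1 (the count level and jitter amplitude below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
pure = rng.poisson(5.0e6, 100).astype(float)     # Poisson-only frame totals
wobbly = pure * (1 + rng.normal(0, 0.005, 100))  # + 0.5% transmission jitter

fano_pure = pure.var() / pure.mean()
fano_wobbly = wobbly.var() / wobbly.mean()
print(fano_pure, fano_wobbly)   # near 1 versus orders of magnitude larger
```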
This could possibly be due to shutter variations – the shutter is … less than fantastic … after all.
Let us inspect the problem and see if it was less at the beginning of operations at MLO. Perhaps wear made it worse as time went on? We did collect 50000 images or so.
Perhaps atmospheric transmission changes on short time scales are this big? Each of the 100 images in a stack took about a second to capture. Does sky transparency change by a lot over that timescale, on a 1×1 degree field?
In this entry below we suggested 4 things to try to improve the accuracy and precision of our fitting method. A fifth idea is that by fitting small, sharp-edged regions on the DS we are allowing small shifts in the image to ‘pull’ the fit. A similar comment applies to the effects of atmospheric turbulence. We have elsewhere shown that the lunar disc ‘wiggles’ when you look at an animation of 100 images. Some regions (5-10 arcminutes in diameter) shift sideways by what looks like a large fraction of a pixel, relative to other parts. If we use small patches to fit on, quality will suffer if we fit across a wiggly patch’s edges.
Using larger patches might therefore be an idea, and we test that here, while keeping all other things fixed. That is, we use the exact same pixels as before but instead of concatenating 11 row-averaged bands we average the 11 bands into one profile. Again, we use the set of 10 ideal images to which realistic noise had been added.
mean albedo: 0.28031400
std. dev. of 10 fits:0.00021
SD expressed as percentage of mean: 0.08 %
Compare this to what we found before, using concatenated strips:
mean albedo: 0.28051
std. dev. of 10 fits: 0.00030
SD expressed as percentage of mean: 0.11%
It would seem we have about 30% less spread in the values found. As before the known fixed albedo of this ideal image was near 0.2808, so we have moved slightly away from this, and the bias is now about 2 SDs.
The use of averaging the 11 bands into one (they follow the contour of the lunar disc) does seem to ensure that flux is not lost outside a narrow band. We still use radial bins (5 pixels wide) so flux can be moved from bin to bin that way. We hesitate to make larger bins, as the degrees of freedom (surely?) would suffer. We have about 28 radial bins.
So, we have less scatter but more bias! Not sure this is a road worth taking. Also recall that these are ideal images – we are not sure what happens with real images. One day we might fit all images using the concatenated and averaged versions of the fitting method and see which has the smaller scatter.
We have more suggestions from the last blog entry to try:
1) Use more sky pixels
2) Use more DS pixels
3) Use BS edge
4) Average several images
Number 4 we can always do, and it is independent of other method choices. Number 3 is scary, so I will try 1 and 2 next.
We conduct two experiments, to assess the abilities of the fit-a-model-image method.
Experiment 1: Same image, with differing noise.
We pick an ideal image, generated from JD2455945, and to this one image we apply a PSF convolution, and 10 times generate images with varying Poisson noise. We then fit these 10 images, and look at the distribution of results.
mean albedo: 0.28051
std. dev. of 10 fits: 0.00030
SD expressed as percentage of mean: 0.11%
The actual value for albedo used to generate the ideal image was 0.28083594, which is 1 SD above the mean value found.
So, we can (again) confirm that in principle, we can achieve fits with small bias and small errors, as per our original science goal.
Experiment 2: Same image with noise, different fit starting guesses.
We pick now one of the noisy artificial images and refit it 10 times – each time starting at a random guess near the supposed best guess. We draw our variation from a normal distribution with amplitude 0.03 and multiply the fixed starting guesses by 1+0.03*N(0,1) for the 8 starting parameters (there are 8 now – 7 before – we have added a shift of the image along the y-axis as well). These guesses were then allowed to converge and we collected the results. We found:
SD in % of mean=0.0051
We see that the scatter is very small, i.e. that convergence is almost always to the same value. However, there is a bias in that the known albedo (see above) was not found.
The bias is (0.280183-0.28083594)=0.0006529 which is 0.23% of the correct value.
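The starting-guess perturbation described above amounts to one line; the nominal parameter values below are purely illustrative, not the real fit parameters:

```python
import numpy as np

rng = np.random.default_rng(6)
# Nominal starting guesses for the 8 fit parameters (illustrative values):
p0 = np.array([0.28, 1.7, 0.05, 0.1, 1.0, 0.02, 0.5, 0.5])

# Ten perturbed starting points: each parameter times 1 + 0.03*N(0,1).
starts = p0 * (1 + 0.03 * rng.standard_normal((10, p0.size)))
```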
From this we learn that
a) convergence may not always depend on the starting point – the same best solution will be arrived at, and that
b) the noise added to the image itself will have enough influence, even if small (i.e. realistic for averaging 100 images in a stack, and coadding 20 ‘lines’ in each of 11 ‘bands’) to cause a bias in this perfect-case example, where noise is only Poissonian.
The bias is a mere 0.2% of the correct value which we could live with given our science goal of 0.1%, I guess, but reality adds more problems to our images than just Poisson noise. These effects appear to give us a typical spread of 1% – about 10 times what we saw in experiment 1, above.
As it is, we are using 1/10 of the pixels in the image for our fits so we have at most a factor of 10 more pixels to work on. This could in principle give us a factor of 3 reduction in the uncertainty, if all pixels are equally effective at driving the solution. If all went well, our 1% errors could become 0.3% errors, which would be acceptable (just), in the absence of a bias.
What can we do along this line?
1) Many of the DS sky pixels are not being put to use – they could help set the sky-level of the fitted model better.
2) Not all DS disc pixels are being put to use, but there is a limit to how far onto the disc we can go before the BS halo becomes too important a factor; perhaps we could double the number of pixels, though.
3) We only use the BS disc and BS sky to calculate the total flux which is used to ensure the same flux in model and observation. We might be able to also fit some of the edge near the BS, or just try to match the level of the BS sky near the disc – but the levels are much higher and our current use of errors based on differences between absolute levels would become dominated by the BS pixels in driving the fit.
4) We can use more images from the same night – at least inside some period of time where the geometry of Earth and Moon does not change significantly – perhaps half an hour or so. In that time some 10-30 images could be gathered in reality. If several filters are wanted then perhaps 10 images in each filter (in practise it seems we could reach that on some nights, inside a half hour) and then repeat for other filters.
In summary, if errors were due to Poisson noise only, we could hope to double the number of DS disc pixels, use a lot more of the DS sky pixels, and perhaps work towards 10 images in each filter inside each half hour, giving us, potentially, the required improvements in uncertainty.
We can work on this!
There is a “however”, however: All the noise is not Poissonian – there are issues having to do with:
a) Bias subtraction – and airglow – and zodiacal light: letting the ‘pedestal’ solve for these may be a bad idea – perhaps we should fix the pedestal at predicted levels in each image? This needs some work, since we are sensitive to errors there too!
b) The atmospheric turbulence causes ‘wiggle’ in the images, described on this blog, and this is a factor in driving the error in the fits.
c) The model we fit is not faultless either – the BRDF is a simple one, and we see hints of some azimuthal-angle problem in fitting along the edge of the disc. The model could be improved.
d) Other parameters in the model might be fixed too – such as the ‘core factor’. Experiments are still needed to see if this is a good idea.
After exhaustive (-ing?) tests of fitting the same image over and over again under varying conditions, we now turn to fitting all images in two colours from one night. We take B and V images from JD2456045.
We have before fitted ‘lines’ going from the DS sky onto the DS disc itself, and then moved on to another position along the DS edge. This helped us see that fits along the edge could vary systematically, and we have speculated that the BRDF model used has some azimuthal-angle dependence that differs from reality.
Instead of getting several fits along the edge and combining these, we now choose to combine lines and make one fit, thus allowing the errors to balance out somewhat.
A typical fit, for 11 lines, looks like this:
The upper panel shows 11 concatenated ‘lines’ going from 50 pixels off the DS edge to 70 pixels onto the disc. The white curve is observations and the red is the model. The lower panel shows the difference between the two at ‘best fit’.
We have 8 parameters in the model that together describe such things as the terrestrial albedo (the main goal of the exercise) and parameters that adjust the PSF, as well as the relative alignment of model and observed image.
In the above, the residuals are small but show systematic behaviour near the ends – going along the sequence of the zig-zags we are essentially sweeping along the edge of the DS – so something is not well at the extremes of the sequence, but near the middle the residuals are very small.
We smooth the concatenated lines (and the corresponding model line) before fitting – this helps reduce a problem we had before, namely that repeated fits of the same image starting at slightly different (1% different) starting guesses could finish up at slightly different ‘best fits’.
For all images in B and V for one night we can compare the various quantities determined in the fitting:
From top to bottom, left to right: Albedos in B and V images against time of day (not really Julian Day, but fraction of day since that JD started); alfa of PSF wings against time; PSF core radius (here held fixed at 3 pixels); pedestal against time; image shifts in X and Y; lunar albedo contrast factor; PSF core factor; and RMSE against time. Black crosses are B-band, while green crosses are for the V-band.
Here we see a clear difference in B and V albedo, with some scatter. The PSF behaves differently in B and V (core factor, pedestal and wing alfa). The lunar albedo map contrast factor is unaffected by B and V. RMSE is bigger for B than for V. Image shifts are near 1 pixel in each direction.
The above choice of a fixed core radius was prompted by noting that leaving it free to vary inside the range 2-10 would cause the fit to pin the core radius to the lower limit in V and the upper limit in B. We argued that some of the factors core radius/core factor/pedestal were degenerate, and chose to fix the core radius at a compromise value. We note the extreme values of the core factor parameter for B, compared to V. In essence, a large core factor implies that most of the flux is in the core of the PSF and little is in the wings. It is hard to say if this is what we expect for B vs V – the colour-dependence of light-scattering in optics (and atmosphere) is not understood by us. Yet.
The observations in B and V were not obtained at the same times (because we use filter-wheels), so it is not easy to combine the B and V albedo determinations. Some of the scatter seems to be correlated (first points in left top panel above), but the rest does not. The means, standard deviations, standard deviations of the means, and the SD in percent of the mean, for B and V are:
     mean    SD     SD_m    SD/mean*100
B:   0.3608  0.005  0.0012  1.3%
V:   0.2902  0.002  0.0006  0.8%
So we have scatter on albedo determinations of near 1%, like BBSO. Formal errors on the determination of albedo for a single image depend on being able to estimate the uncertainty of the data. As we clearly have some systematics going on (plot at top of this entry) this is not possible.
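For the record, the four columns in tables like the one above are straightforward to compute; a sketch (the input values here are made up for illustration, not our actual determinations):

```python
import math

def albedo_stats(values):
    """Return (mean, SD, SD of the mean, SD as percent of the mean)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    sd_m = sd / math.sqrt(n)  # the error on the mean falls as 1/sqrt(n)
    return mean, sd, sd_m, 100.0 * sd / mean

# Hypothetical single-night albedo determinations:
print(albedo_stats([0.3581, 0.3629, 0.3660, 0.3575, 0.3595]))
```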
For the night JD 2455945 (the subject of our paper) we have fewer good images, but get:
     mean        SD             SD_m           SD/mean*100
B:   0.37007500  0.00020615528  0.00011902381  0.055706352
V:   0.28093333  0.00023094011  0.00016329932  0.082204595
The B and V here are ALBEDOS, not magnitudes. Converted to magnitudes, the B-albedo/V-albedo ratios on the two nights correspond to Johnson B-V colours that differ from one night to the other by 0.06 mag.
Thus we are told that one night’s earthlight was 0.06 mags bluer than the other. This is several times our estimated observing precision.
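The 0.06 mag figure can be checked directly from the two tables: the difference in Johnson B-V between the nights is -2.5 log10 of the ratio of the two B/V albedo ratios (the solar colour term cancels in the difference).

```python
import math

# B/V albedo ratios from the two nights' mean values quoted above:
r_2456045 = 0.3608 / 0.2902
r_2455945 = 0.37007500 / 0.28093333

# Colour difference in magnitudes between the two nights:
delta_bv = abs(-2.5 * math.log10(r_2456045 / r_2455945))
print(round(delta_bv, 2))  # -> 0.06
```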
Is that a lot or a little? Our EGU 2014 poster tells us which …
We have been looking at the ‘pedestal’ which is a part of the model we fit to our observations. With a black sky and perfect bias and dark-current removal, the pedestal should be 0, and the fitted model should have a halo that sloped off from the Moon and perfectly explained all brightness seen on the sky.
In reality, there is background galactic light, zodiacal light, airglow and any terrestrial light (moonshine reflected from the ground!!) to allow for. To this end we have operated with this pedestal in the model – a vertical offset of the model. In looking at fits we notice that the pedestal is degenerate with the halo parameters, in such a way that they tend to lie along a line when a lot of fits are compared, each started at slightly different points – even for the same image repeatedly fit. The use of this pedestal therefore leads to a bias in the determined albedo. Which is not good.
We therefore have tried to evaluate expressions for the additional sky brightness so that the pedestal could be fixed at an assigned, physics-based, value and not interfere with the fit itself.
In doing that we had reason to revisit and elaborate on Henriette’s error budget analysis. In particular, we performed the analysis allowing for error terms in DS and BS fluxes, dark currents, and exposure times. We introduced the uncertainty on the superbias factor and see now that it is the dominating factor when area-means are considered.
For area-means of 20×20 points, the error due to the superbias factor is as large as the error due to Poisson statistics on the DS. For smaller areas than 20×20 the DS Poisson error dominates.
Since levels of sky brightness are of the same order as the contribution due to factor uncertainty, or larger, we see that we have a problem.
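The structure of that comparison can be sketched with placeholder numbers (the actual error terms come from the budget analysis, not from this snippet): the per-pixel Poisson error averages down with area size, while the superbias-factor error is common to all pixels and does not.

```python
import math

# Placeholder numbers chosen so the crossover sits at 20x20, as in the text:
sigma_pixel = 2.0    # per-pixel DS Poisson error (assumed, counts)
sigma_factor = 0.1   # superbias-factor error, common to all pixels (assumed)

for side in (5, 10, 20, 40):
    n_pixels = side * side
    poisson_on_mean = sigma_pixel / math.sqrt(n_pixels)  # averages down
    print(side, round(poisson_on_mean, 3),
          "Poisson dominates" if poisson_on_mean > sigma_factor
          else "superbias dominates or equal")
```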
We could try to estimate sky brightness better, but as we realize that fitted albedo varies, for a single image treated in different data-reduction ways (bias removal, and stacking), we need to also re-consider the use of ‘anchoring’ of the model halo in the DS sky part of the image. This is what BBSO achieve by fitting a line to the DS sky and subtracting it from the image.
Improvements in estimating sky brightness could also be pursued, but all constant contributions will disappear in ‘anchored’ procedures. The point is to not let the fit be biased by such terms.
We continue the investigation into how well albedo can be determined by looking at just a single image and applying various methods to it.
We compare the ‘old’ method to a new method. The old one consists of plotting row-averaged slices that start on the DS sky and proceed onto the DS disc. We place the slice at 9 different locations up and down the edge of the lunar disc. The ‘new’ method consists of averaging the whole image (and fitted model) into one row by averaging along columns. We do two runs of the old method – varying the distance we go onto the DS, and one run of the new.
Black crosses indicate the albedo found by fitting along 9 slices onto the DS disc going from 50 columns away from the edge to 100 columns beyond that. Red crosses indicate the same but going to 50 columns onto the disc. Green crosses indicate results found when column-averaging the whole image but starting from 9 different starting guesses of the fitting parameters. The dashed and dotted lines give means and standard deviation of the mean for all trials.
We see that the three methods give a spread of values, with few repetitions. The standard deviation of the Mean is about 0.25% of the mean, but the standard deviation of the distribution is about 1% – single determinations will therefore vary a lot, while the mean over many trials can give a mean with small error.
We notice that the green crosses – which are results of identical trials in the ‘collapse the whole image into one row’ method but starting from slightly different starting guesses – all lie below the mean and have some scatter – so the new method is biased wrt the old one, and the method itself finds ‘best fits’ that depend on starting conditions. That the convergence is not to the same number each time shows that the LMFIT method used reaches a stopping criterion. We have chosen a maximum of 500 iterations.
Neither of the above findings is reassuring – fitting method choices determine the outcome. Possibly, averaging over a large number of different methods would give us a result that is stable and near the truth.
In summary, we are looking at method-dependent biases of half and whole percentages, and errors of means near 0.2%.
It is not clear if the bias comes from lunar albedo-map and reflectance-model choices, or is stochastic in nature. The effect of stochasticity is seen from the green crosses, where all things are fixed except the starting guess for the iterated model fit.
We explore how well albedo can be determined from just a single image, varying the fit approaches.
A single 100-image stack of the Moon was selected, aligned (or not, see below), bias-reduced and averaged and then subjected to our model fit. The fit selects 9 21-row wide ‘strips’ cutting across the DS edge of the disc so that we have 50 columns of sky and then 100 columns of disc to fit on. The 21 rows are averaged in observation as well as model.
Applying this method we get 9 estimates of the terrestrial albedo. We selected three methods for centering stack-images:
1) Align the images in the stack once allowing for sub-pixel shifts, then average
2) Align iteratively, refining the alignment wrt an updated reference image, then average
3) Do not align at all, just average
In this figure we show the albedo determined from the three sets:
The determinations are shown in sequence, separated by vertical dot-dashed lines – methods 1, 3 and 2 (in that order). The overall mean and standard deviation are shown.
We see systematic deviations: all Method 1 values are below the mean, all method 3 values are above the mean and method 2 is a bit above and a bit below.
The mean of these values is 0.3047 with a standard deviation of the mean of 0.0014 (0.45% of the mean), which is not too bad! However, we could clearly do better if the method-dependency was not present. How to remove this?
Methods that shift images by non-integer-pixel amounts risk being non-conservative – i.e. the total and area fluxes are not conserved. We have elsewhere on this blog estimated how bad the problem can be [see http://iloapp.thejll.com/blog/earthshine?Home&post=369 also see http://iloapp.thejll.com/blog/earthshine?Home&post=299]. We found that it was slight: means change by much less than 0.1% during a sub-pixel shift of the image.
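A toy check of that ‘slight’ claim (pure Python, not our pipeline code): a bilinear sub-pixel shift is a convolution with weights summing to 1, so for a source well inside the frame the total flux is conserved almost exactly.

```python
import math

def shift_bilinear(img, dx, dy):
    """Shift a 2-D list image by sub-pixel amounts using bilinear weights."""
    ny, nx = len(img), len(img[0])
    def pix(j, i):
        return img[j][i] if 0 <= j < ny and 0 <= i < nx else 0.0
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            ys, xs = y - dy, x - dx        # where this output pixel samples
            y0, x0 = math.floor(ys), math.floor(xs)
            fy, fx = ys - y0, xs - x0
            out[y][x] = ((1 - fy) * (1 - fx) * pix(y0, x0)
                         + (1 - fy) * fx * pix(y0, x0 + 1)
                         + fy * (1 - fx) * pix(y0 + 1, x0)
                         + fy * fx * pix(y0 + 1, x0 + 1))
    return out

# A Gaussian blob well inside a 21x21 frame, shifted by (0.3, 0.4) pixels:
img = [[math.exp(-((x - 10) ** 2 + (y - 10) ** 2) / 8.0)
        for x in range(21)] for y in range(21)]
before = sum(map(sum, img))
after = sum(map(sum, shift_bilinear(img, 0.3, 0.4)))
print(abs(after - before) / before < 1e-4)  # -> True: far below the 0.1% level
```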
Not aligning images at all risks comparing ‘smeared’ observations with ‘knife-sharp’ model images. The smearing occurs due to image wander during exposure and can be as much as a pixel or two.
Revision of our alignment methods seems in order, given the above. Possibly shifts by whole pixels only? This is at least conservative away from edges.
Note that method 3 (the middle set) has the least internal scatter. For this set the standard deviation of the mean is just 0.2% of the mean. That is pretty strange, actually, since this is the ‘do not shift images before coadding’ method. Hmm. Then again, perhaps it is not strange, and we are simply seeing that the likely most fuzzy image has the most stable fit – as the strips select different parts of the light-and-dark lunar surface to fit on, contrast causes the fit to wander if the image is sharp. But on the third hand, our method now allows for ‘contrast scaling’. Hmmm, mark 2.
Shifting and co-adding involves two issues: the interpolation required at a given non-integer pixel shift may not be flux-conservative (our tests seem to show this is only a small problem), and the shift itself may be poorly estimated. Our shifting method is based on correlation – it does not look specifically at such things as ‘are the edges still sharp’ – i.e. Sally Langford’s Laplacian edge-detection method. The use of edge information only is not clearly the best method, since it is based on very few image pixels and thus suffers from noise. Perhaps a hybrid method can be envisaged – “correlations and maintaining-sharp-edges”?
Since the start of this project we have worked on a method that fits suitably convolved and scaled theoretical images of the Moon to real observed images of same. Below we show that the method has improved, by fitting a profile from the same night with two methods – from 2013 and now.
This is a pdf file showing the three plots:
As you can see, the present method leaves residuals that are structure-free – this is not the case with the older methods.
We are therefore approaching a situation where all data can be fit pretty much perfectly by a multi-parameter model.
The model uses these fitting parameters: a vertical offset representing sky brightness, a shift of the image along rows, a parameter for the width of the PSF core, a parameter that vertically scales the core inside that width, a parameter that sets the slope of the wings, a factor that scales the lunar albedo map contrast, and the terrestrial albedo.
Since the results still show signs of a dependency on just which part of the Moon we are fitting on, we conclude that the BRDF functions we use – Hapke 63 – are not capturing the right azimuthal-angle dependence of the reflectance. This area is now where improvements are to be found.
If we succeed we shall actually be contributing to Hapke-type work, by providing empirically validated BRDF functions.
In this post below we compared our canonical PSF (raised to various power) to the PSF shown in Figure 6 of the AvD paper.
We have since re-estimated (below) our PSF from radii near 0.1 arc minutes to 13 arc minutes using excellent data from multiple images of the star MARKAB. This PSF is empirical for one specific night and does not need the raising-to-a-power that the canonical one does. We plot it as-is on the AvD figure, trying now to re-create the two panels of that Figure:
Right panel shows our PSF as surface brightness in magnitudes per area.
I’d say our empirical PSF (green line) looks like it is realistically modelled by the canonical one raised to 1.8 and still beats the AvD PSF (the crosses).
We extended our PSF beyond 13 arcminutes by a third order power-law.
We have assumed a few things: AvD speak of normalized flux – we take this to imply that the PSF is divided by its VOLUME, not its area. We assume that calculating the surface brightness involves taking the total flux inside some distance from the peak and dividing it by the area.
We have some excellent data from observing the star Markab, in Pegasus. We have 334 images at 4 seconds exposure, and each is bordered by dark frames of the same exposure time – thus we can subtract bias and any dark current easily.
We did this, and aligned and coadded the many images. Then we inspect the PSF by considering the intensity around the star in concentric rings:
We see that the core of the PSF is about 2.5-3 orders of magnitude above the point where the wings start. We see the wings drop off and join the sky level. The red crosses are median bin values. The blue curve is a polynomial fit, the coefficients of which are given in the table. The polynomial is found by a fit of 1/radius against PSF for the wings, and a Voigt function for the core out to about 1 arc minute.
Some points: The curve is not the PSF – because there are remnants of sky brightness – about 0.88 counts [star scaled to 50000 counts at the peak]. The PSF would be well approximated by the above polynomial without offset (first coefficient).
The part of the PSF from 1-3 arc minutes is roughly linear, as if a simple 1/r^alfa power-law applies. This is not the case when we consider the above.
We might revisit the extended halo we see in lunar images with the above insight into the effect of a sky level.
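The concentric-ring profiling described above can be sketched as follows (a toy synthetic star, not the Markab reduction; the 1/r^2.9 wing and the 0.88-count sky level are just the numbers quoted in this post):

```python
import math
from statistics import median

def ring_profile(img, cy, cx, n_rings, ring_width):
    """Median intensity in concentric rings around the centre (cy, cx)."""
    rings = [[] for _ in range(n_rings)]
    for y, row in enumerate(img):
        for x, val in enumerate(row):
            k = int(math.hypot(y - cy, x - cx) // ring_width)
            if k < n_rings:
                rings[k].append(val)
    return [median(r) for r in rings]

# Synthetic star: peak scaled to 50000 counts, power-law wings, flat sky:
img = [[50000.0 / (1.0 + math.hypot(y - 20, x - 20) ** 2.9) + 0.88
        for x in range(41)] for y in range(41)]
profile = ring_profile(img, 20, 20, 10, 2.0)
print(profile[0] > 1000 * profile[-1])  # -> True: core orders of magnitude up
```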
An interesting paper has appeared on Arxiv:
This paper discusses the use of novel lens coatings in commercial grade lenses (Canon) and DSLR cameras to build a multi-lens, co-additive system. They discuss the PSF they have in their Figures 6 and 8 (8 is about ghosts).
We can compare the PSF we believe we have to theirs:
Here we have used the same axis scaling as Figure 6 of AvD. The three curves are our normalized PSF profiles for three settings of the ‘power’. These are 1.8, 1.6, and 1.4 – the largest at the bottom. Remember that the ‘canonical PSF’ we use already has a power slope of close to 1.7, so with the extra powers above we reach 3.06, 2.72 and 2.38, respectively. We know that the power of 3 is the limit for the wings of a diffraction profile, so use of values of ‘power’ near 1.8 implies we think we are diffraction limited – in the wings. Actual fits to observed images tend to give powers nearer to 1.7, or a total power-law exponent near 2.9. The points on the plot above are taken (by eye) from Fig 6 of AvD.
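The exponent bookkeeping in that paragraph is simply multiplicative: if the canonical wing falls as r^-1.7, raising the PSF to a power p gives a wing falling as r^-(1.7 p).

```python
# Wing exponents reached for the three 'power' settings plotted above:
canonical_slope = 1.7
for p in (1.8, 1.6, 1.4):
    print(p, round(canonical_slope * p, 2))  # -> 3.06, 2.72, 2.38
```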
They seem to have not only a broader PSF than us, but the ‘wings’ appear to be linear in lin-log: that is, their wide halo is not a power law as we infer, but is rather an exponential.
On the face of it, our optics are ‘better’ than theirs, but hold on:
1) we do not actually measure all of our PSF – we infer that a power-law tail is in order; we only have actual profile measurements from point sources such as stars and Jupiter (almost a point source) inside several arc minutes. The rest of the wings are inferred from how the halo around the Moon looks at distance.
2) In their Figure 8 an image of Venus and a star is seen – and a ghost of Venus. The authors state that “the ghost contains only 0.025% of Venus’ light”. I think that number is so small that our requirements of 0.1% accuracies in photometry would be met. Note that ghosts and the PSF are unrelated – the PSF wings describe scattered light, while ghosts are reflections off optical element surfaces. The paper says that it looks like not all lens elements were coated with the novel coating.
What do we get from this?
I would like to investigate the PSF we have some more – this can be done with repeated images of a point light source in the lab: this would clarify the non-atmospheric part of the PSF. Despite current problems with FWs we may be able to pull this off with what we have.
I would like to understand where the (inevitable) ghosts are in our system – did Ahmed park them on the Moon itself? If they are same size as the Moon and well-centered they may not cause any damage since they merely replicate the image information (if same size and in focus).
It seems the AvD system has very weak ghosts but larger scattering than us (if we understood our PSF wings correctly) – if the weak ghosts are due to these new lens coatings then they are of interest to us in possible future designs of the system. If the scattering really is as high as it looks, compared to our (guesstimated PSF wings) then we have a better system than them, period. Mette commented that these coatings can cause scattering – wonder if the AvD people chose the coatings to lower ghost intensities or to get rid of scattered light? They do not say so directly, but do compare to large telescope PSFs (their Fig 6), and find less scattered light in their own system.
We need to understand how they measure their PSF into the wings.
Tested a script while looking through the telescope. Script was set up to switch filters and take sequences of images.
Everything seems to be badly scrambled! Colours of filters do not match what is requested, and the sequences of images received are incorrect – for instance, when asking for 10 images at 2 seconds each I get about 3 images at 2 seconds and the rest are rushed through rapidly.
Shutter opens at strange times after a sequence.
It seems like the system is badly shaken up now. I think some of the above behaviour may explain some of the odd image-sequences we had at MLO, but the majority from MLO were good: now it all seems random.
My guess is that the LabView software has been jumbled somehow – it is easily done since the coding is not deterministic as text code is – it is all little coloured lines and boxes. It is easy to pick something up and drop it down in the wrong place. A test and a fix for this might be to restore an older version of the software – for instance something after Ingemar fixed the bad coding problems we found while at MLO and before the system was hit by lightning.
Another possibility is that hardware has been damaged in such a way as to scramble/ignore software instructions. Not sure how to test for this – we used a ‘sniffer cable’ to listen to the erroneous traffic to the mount, but I am not sure what to listen in on here?
Wonder if there is a way to ‘reset the hardware’? Perhaps the FW are ‘off by one’ – some latch is not clicking home?
Inside the ‘breakout box’ there are lights to watch as a script runs. There is a light on a ‘hager’ relay, and it evidently lights up when the two Thorlabs filter-wheel controllers are being addressed. They have displays that show numbers, and I am guessing they show the position of the FW.
While running a sequence through the 5 filters I saw the numbers change. I think I caught a rhythm corresponding to several passes through the 5 filters, but I swear there was an irregularity [hard to prove afterwards, as there is lots of uncomfortable crouching and waiting to do]. Also strange is that the FW displays show two sets of numbers first; these then change to another set, then the hager light goes out and you can hear the shutter start working [its progress is uneven, however, despite being asked to take 10 identical images]. Here is a sequence:
where “A->B” means that the FW display first showed an A which then changed to a B, after 5 seconds or so. Notice how FW1 always goes “2->1”. FW2 seems to be cycling all numbers. Notice how the start position in one pair is always 1 higher than the end position in the previous set. There is a cryptic sentence deep inside Dave’s LabView code about how Thorlabs uses one set of indices and his code another – this could be it, I guess.
I think FW1 is the ND filter and is always set the same way – in the above test I was asking for the ‘AIR’ position and I think this is what we see. I will now run a test where I ask for one of the ND filters instead and we will see if the numbers change!
OK, I did that, and it seems I am right. FW1 is the ND filter. I set it to ND2.0 and the sequence 2->1 above changed to 5->6.
Should run a sequence now of just changing the ND filters.
Ok, did that too. This time I used a script that always set the ‘B’ filter, but cycled through the available ND grades – that is AIR, ND0.9, ND1.0, ND1.3, ND2.0. The typical sequence of readouts from the Thorlabs controllers was:
We see how the colour filterwheel (FW2) always sets the same filter and always seems to start at the one numbered one higher. Visual inspection during the cycling revealed that the colour of the light switched between the colour of the lamp and green – i.e. the FW2 settings actually acquired were ‘Air’ and ‘V’ – not ‘B’ as I asked for.
FW1 meanwhile seems to have cycled irregularly – 1,1,3,4,5, … (2 is missing). It was not really easy to see whether various ND grades were inserted or not, but clearly AIR was one of them since I could see colours of the light.
Note that in a sequence “A->B” a sound – such as of a filter-wheel changing position – is heard before A, and before B.
Not sure what to make of this. It could be a signals problem, where feedback from the actual position of the wheel is missed or misunderstood by the software, so that new or wrong commands are sent.
Is the software that directs the FW giving absolute commands or relative ones, as in
“Go to the blue filter”
“go to the filter after the one you are at, which I assume is the IRCUT filter, so that you will arrive at the B filter since it comes after IRCUT”?
Why does the wheel change position twice during a ‘set the FW-wheel’ command? Makes sense only if it always went to some home position, and then proceeded from there – but clearly it is not starting from the same position each time, given the readout above. Again, could it be that the intent is to send it to home each time but this goes wrong and it doesn’t actually arrive at home, but assumes it did for the next relative command?
NEXT time, we need ABSOLUTE ENCODERS. I want a little man inside the telescope with a cell phone calling us with the ACTUAL position of each FW!
Manually testing the shutters and FWs.
We disconnected the CCD camera so we could look through the telescope along the optical axis, at a lamp. We ran the camera, so that the internal shutter opened and closed regularly, and then manually changed the color and ND filters.
Strange behaviour was observed: Setting the color of the filter was not a simple affair – the colour would be a bit random, and the ND filter would reset – apparently by itself – after a few seconds.
We will try to run scripts next, to see if the command of the devices from the scripts is more consistent, and whether the ND resets in the middle of a sequence. Our experience from masses of observations from MLO (before the lightning struck) is that stacks of images are all OK as long as the first image is all right.
An artist, Jeremy Sharma, in Singapore, has asked us what Pantone colour code the earthshine colour corresponds to. He heard of us through the Guardian story.
I can see two ways to answer him: one is via a conversion from colour temperature to Pantone code (or, equivalently, hexadecimal RGB code); another is to take a realistic image of Earth and average the R, G and B channels.
I cannot, yet, find a conversion from colour temperature to Pantone code or RGB intensities, but the temperature to use is given by B-V=0.44 which is near 6400K, I think.
For these images of Earth:
I can take the average of the R, G and B channels of both images – omitting the black sky around Earth.
the RGB averages are:
89.5127 92.4462 115.971
or, truncated to integers,
89 92 115
The hexadecimal equivalent of this triplet is #595C73
(use IDL ‘Z’ format to work that out …)
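For anyone without IDL, the same conversion in Python (note that truncating the channel means to integers reproduces #595C73; rounding to nearest integer would instead give #5A5C74):

```python
def rgb_to_hex(r, g, b):
    """Format an integer RGB triplet as a hexadecimal colour code."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

channel_means = (89.5127, 92.4462, 115.971)
print(rgb_to_hex(*(int(c) for c in channel_means)))  # -> #595C73
```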
On this page
(and links thereon) it is possible to find Pantone colour codes and the equivalent hexadecimal code. The above value does not appear on these pages – but a tool at
allows you to type in the above hexadecimal number and see the corresponding colour as shown on a computer terminal.
The above colour can be ‘brightened’ or ‘darkened’ by multiplying each of the R, G and B values by some number (the same for all three) and then converting to hex code. In doing this, avoid saturating any of the three (they must be smaller than 255).
I did this so that a lighter rendition of the same colour would appear, and the result is
which you can also type into that page and see (they allow showing of two such colours next to each other).
Real earthshine is thousands of times fainter than moonshine so I doubt you can ‘see’ the colour if you make the numbers realistic (in fact, RG and B would be 0 in such an image since the numbers are integers and cannot be smaller than 1 without being 0). The two above colours have the same tones but differ in brightness.
I expect that a dye maker or a paint shop with a smart machine can mix your colour according to the hexadecimal code, instead of the Pantone colour code.
For our ‘splendid night’ JD2455945 we can have two estimates of the albedo of Earth:
One estimate is provided by the ‘profile fitting’ where albedo is directly determined as a model parameter of the Sun-Earth-Moon system, and another estimate is given by the B-V determination we have from the cancelling halos.
From profile fitting albedoB/albedoV: 1.217 +/- 0.003
From B-V: albedoB/albedoV: 1.202 +/- 0.005
The error on profile fitting results is propagated from the standard deviation of several determinations of albedo in B and V and thus is ‘internal error’.
The error on the second estimate is taken from the paper and is ‘internal’ also – the dependency on error of the solar B-V value drops out.
The two estimates are similar but, given errors, formally different.
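“Formally different” can be quantified: with independent internal errors, the difference between the two estimates amounts to a bit over 2.5 combined standard deviations.

```python
import math

ratio_fit, err_fit = 1.217, 0.003  # albedoB/albedoV from profile fitting
ratio_bv, err_bv = 1.202, 0.005    # albedoB/albedoV from B-V

significance = (ratio_fit - ratio_bv) / math.sqrt(err_fit ** 2 + err_bv ** 2)
print(round(significance, 1))  # -> 2.6
```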
The fitting result is based on 6 fits in total – three in the B image and three in the V image.
The amount of cloud on Earth changes from day to day, and as clouds reflect sunlight (with B-V=0.64), more cloud causes the B-V of the earthlight to drift towards the solar value, while less cloud shows the Earth bluer. How much do we expect the terrestrial B-V colour to change day to day on the basis of changing cloud masses?
Inspection of GOES West full-disc satellite images of Earth shows that the local-noon (i.e. disc fully illuminated) brightness variations due to changing clouds amount to about 3.2% of the mean brightness. As the Earth below is dark (Pacific Ocean), almost all of the brightness is due to clouds, so we are not too far off by saying that the amount of cloud varies by 3.2% around its mean value on a daily basis. With Earth on average being almost 67% cloudy, we can re-use the Stam models of expected terrestrial spectra to see what the expected change in B-V is due to cloud variations.
We find that B-V will vary, with a standard deviation near 0.005 around the observed value of B-V=0.44.
This is smaller than the total observational error we have. We have shown that the theoretical Poisson-noise limited uncertainty would be 0.005 in B-V, but we cannot observe that well. Yet.
So – we should not expect to see values for B-V very different from 0.44, which helps explain why we get the same value as Franklin’s mean value using just a single (but precise) observation.
We may be able to see 2 and 3 sigma deviations in the cloud cover, however.
The purpose of our telescope was never to observe daily cloud variations – in the long term we hope to show that we can set limits on climate-change-induced changes in albedo.
Also, the above is just an investigation into whether we can use COLOUR changes to quantify albedo changes – we still have to quantify how well our direct-photometry measurements can see albedo changes. Paper II.
In an interesting paper by Antilogus et al., subtle effects of electron interactions inside the CCD material itself are discussed, which could be a tool for understanding and perhaps quantifying CCD nonlinearity.
We have tested our own Andor BU987 CCD camera for these effects and see them.
Plots of row- or column-neighbour means vs correlations. Top row, first panel: the full set of values of means vs column-correlations – we note the strong non-linearities that set in at about 50,000. Top row, second panel: for just the range of mean values below the onset of non-linearity, a robust regression is performed (red line). Bottom row: same as the top row, but now for row-correlations. We note the general correspondence to results in the ‘brighter-fatter’ paper: rows are more strongly correlated with their neighbours than are columns – the ratio of slopes is about 2.0.
Can this be used quantitatively to correct for non-linearity in our CCD? The camera was already tested for linearity while in Lund. See Figure 8 in the report.
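The neighbour-correlation measurement itself is simple to sketch. Below is a minimal version run on simulated Poisson flats rather than real Andor frames, so the correlation should come out near zero; on a detector with brighter-fatter coupling it would grow with mean level.

```python
import numpy as np

def column_neighbour_corr(flat_a, flat_b):
    """Correlation between horizontally adjacent pixels, measured on the
    difference of two flats (differencing removes the fixed pattern)."""
    diff = flat_a.astype(float) - flat_b.astype(float)
    left, right = diff[:, :-1].ravel(), diff[:, 1:].ravel()
    return np.corrcoef(left, right)[0, 1]

# pure Poisson flats carry no pixel-to-pixel coupling at any mean level
rng = np.random.default_rng(1)
a = rng.poisson(20000.0, (512, 512))
b = rng.poisson(20000.0, (512, 512))
corr = column_neighbour_corr(a, b)   # expected: consistent with zero
```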
Testing the system on a lamp-illuminated flat field. With a script, sequences of images are taken with commands to change filters. Kinetic sequences.
There is obviously some problem with either the shutter or the setting of the FW, because while some images are well exposed at more or less the expected exposure times (we use a different lamp from the hohlraum we had at MLO), some are almost completely black at those same exposure times.
To test whether it is the shutter that fails to open, or the FW that does not set, we must eliminate one thing at a time – first let us run a script with exposure times of varying lengths, and simply listen for how long the shutter is open.
If it seems the shutter is always open for as long as we ask it to be, we move on to the FW – perhaps we can open the telescope box and simply look at whether it turns (paint a white spot on the side!).
Now that the camera is up and running again, a series of exposures have been taken to test the bias field.
Below is the old and new compared, and the difference in percent:
The bias level has changed by 1.3% or so, and the slope of the new field is different from the old – there is also some, smaller, change in curvature. The change in mean level is not so important since we scale the superbias to the current bias mean value anyway, but the change in slope is important. The relative level from up-slope to down-slope is something like 0.1% which is the adopted science goal for our project.
Further testing of the bias field will be performed as we proceed and if the above pattern is stable we shall simply start using a new superbias for currently acquired science frames. If studies show that the bias goes up and down and changes slope willy-nilly, as it were, then we have more to worry about!
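The scaling step mentioned above (scaling the superbias to the current bias mean) can be sketched as follows; whether the adjustment is additive or multiplicative is a design choice, and an additive shift is shown here.

```python
import numpy as np

def scale_superbias(superbias, current_bias):
    """Shift the superbias so its mean matches the mean of the currently
    acquired bias frame; the spatial pattern (slope, curvature) is kept
    from the low-noise superbias."""
    return superbias + (current_bias.mean() - superbias.mean())

# toy example: a sloped superbias and a current bias ~1.3% higher
sb = 400.0 + np.tile(np.linspace(0.0, 2.0, 512), (512, 1))
cur = sb * 1.013 + np.random.default_rng(0).normal(0.0, 1.0, sb.shape)
scaled = scale_superbias(sb, cur)    # mean now matches the current bias
```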
As stated before, scripts can now be run and images gathered. We have solved a minor problem with disc access that suddenly popped up. We are still not sure what caused it, but we can now save images from automatic scripts. This clears the way for a series of lab tests that will start now.
It was possible to start a simple observing script today: Mount was not attached so nothing moved except shutters and FWs (and the SKE is still not working) – files were obtained but it turned out that something with the setup of permissions on the PXI has changed so images could not be written to the ‘NAS’ drive (a network drive in the instrument rack). Am seeking help from DMI IT department for this.
If that starts working we can run tests on bias fields at least, and later on flat fields (with a screen of some sort in the distance) and then focus tests and all this.
Henriette sent us this:
Jens Hoijemakers in Leiden wrote a thesis about a telescope you could put on the Moon and monitor Earth with – an earthlight telescope!
He has a YouTube video: http://www.youtube.com/watch?v=AcofvAjVT7Y
His thesis is here:
And here is a paper.
Our paper, and the story in The Guardian, seem to be generating quite a lot of activity on the net. Here is a collection of what I can find:
Original Arxiv.org paper – Jan 9 2014:
From Jan 10 2014:
The Economic Times of India – :
and so on …
Several stories captured by a news ‘bot: http://www.newsfiber.com/p/s/h?v=EBXOI5VPyVXw%3D+aivNYXh2b1I%3D
A search on Google gives a lot:
This search: http://bit.ly/1f2a3fM gave 746 hits on Jan 21 2014.
In this entry we started exploring the Stam 2008 set of model Earth spectra. That entry was based on the ‘land only’ set of Stam models. Ocean-only models are also provided, so we can mix these with the land models to get more realistic Earth spectra. In Stam ocean models the albedo is assumed to be 0 apart from a specular contribution, so all colours are dominated by the Rayleigh scattering and a bit of sunlight reflected at the glint point. In land-only models the vegetation red edge has a large contribution, so we expect some changes when a land-ocean mix is introduced. We assume that Earth is 26 % land surface in what follows.
As before we calculate the spectrum for a range of cloudiness percentages. We find that:
B-V for the Stam cloud-free forest model is: 0.19
B-V for the Stam cloudy forest model is: 0.54
B-V for the Stam cloud-free ocean model is: -0.23
B-V for the Stam cloudy ocean model is: 0.53
Note how similar the 100% cloud cases are for land-only and ocean-only, as expected. There is a small effect of the surface below the clouds (which is one of the features of the Stam set of models – others have ignored this effect).
Note also how blue a cloud-free ocean only Earth would be – this is, I expect, the B-V of our own blue skies on a cloud-free day, as we have discussed elsewhere. I think we should try again to measure blue-sky B-V with our DSLR calibrations.
As before we estimate slopes of the B-V vs cloud % relationship.
Change in % cloud cover per mmag change in observed B-V: 0.31 %pts/mmag
Change in % cloud cover for total observed error in B-V: +/- 6.3 %pts
Change in % cloud cover for internal observed error in B-V: +/- 1.6 %pts
The total B-V errors we have are +/- 0.02, mainly due to uncertainties in the solar B-V, as pointed out by Chris. If we keep all measurements relative to the fixed (but somewhat unknown) solar B-V, we have internal errors of just +/- 0.005 mags, and that implies we could determine Earth’s cloud cover to within something like +/- 1.6 %-points on the basis of observed colour alone. Not bad!
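The conversion from a colour error to a cloud-cover uncertainty is a one-liner using the slope quoted above (0.31 %-points of cloud cover per mmag of B-V):

```python
SLOPE = 0.31  # %-points of cloud cover per mmag of B-V, from the model fit

def cloud_cover_uncertainty(bv_error_mag):
    """Cloud-cover uncertainty in %-points for a B-V error in magnitudes."""
    return SLOPE * bv_error_mag * 1000.0  # magnitudes -> millimagnitudes

total = cloud_cover_uncertainty(0.02)      # total error: ~6.2 %-points
internal = cloud_cover_uncertainty(0.005)  # internal error: ~1.6 %-points
```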
With a technique for observing B-V on all nights we could thus complement direct albedo-determinations (found with edge-fitting for instance). The colour-method has drawbacks and benefits relative to the direct albedo determination in that a difference is used instead of an absolute fit.
This is a point towards why we need to understand how the halos are formed and just why they cancelled on that night!
It is also fuel for fixing the SKE because with an SKE more images could be obtained where the remnant halos in B and V almost cancelled on the DS.
With the CCD working again, we have some hopes of understanding the SKE problems, as well as performing some very careful lab observations with halo due to optics only.
We managed to get actual images from the CCD camera today. Hans will tell the story of what was wrong.
So – testing all systems together, next!
Stam (A&A 482, pp.989-1007, 2008) has provided spectra of model Earths. From these we can construct B-V colours and compare to our observations.
Using the F0 and F1 model coefficients from Stam (cloud free and 100% cloudy, respectively) we combine these linearly as a function of cloud-cover fraction, and apply Johnson B and V filter transmission curves, and calculate B-V.
We check the method’s correctness on the Wehrli Solar spectrum (we get B-V=0.64) and for Vega (we get B-V=-0.014). Both are very close to the accepted values, so we trust the method for other spectra.
We generate a set of models going from cloud free to 100% cloudy, and get:
Is it just my eyes or is this a really lousy rendition? I wish the blog software allowed for better pictures! Anyway, what we (almost) see is that the observed B-V for earthlight corresponds to about 40% clouds on Earth, with our observational uncertainty of +/- 0.02 mags corresponding to quite a wide range of cloudiness: +/- 8%.
Colours are thus not the way to determine cloudiness on Earth!
Conversely – the variations in cloudiness on Earth are considerable from day to day (at some phase angles +/-20%), so we should be able to use colour variations as an indicator of cloud variations, along with the direct albedo measurements.
The Stam model is based on assuming a Lambertian forest surface; Earth seen at phase angle 90 degrees, and polarization ignored.
Our (considerable) errors on observed colour are due to the conversion from earthshine colours to earthlight colours and the problems of taking the effect of the lunar surface into account. Our observational error on earthshine colour is 0.005 mags. This would allow us to determine Earth’s cloud cover to about 2 %-points from B-V colours alone.
A telescope in space (or on the Moon) looking directly at Earth could have colour errors that were even smaller.
The model has to account for orientation of Earth and so on, of course. This is not the case in the Stam model.
‘de Kennis van nu’. The program will be available online, from the web-link http://www.wetenschap24.nl/programmas/de-kennis-van-nu.html
Our paper also made it to an online DMI news item.
Only a few hours after posting our A&A accepted manuscript on arxiv.org:
we were called up by Ian Sample, a journalist from The Guardian newspaper, who had noticed the catchy title of the paper – he interviewed us, and then wrote this:
A&A editors have asked us if we would like to have our Figure 1 on the cover of A&A and I said ‘YES!’. They could not promise they would use it – but let’s see.
Sloooowly the system is coming back to us: We now have LabView Engineering Mode control of Front shutter, Iris, Mount, Filter Wheels, and now also Focus stage.
We still need to get the SKE stages working.
We need to plug in the camera and see if it can be brought under LV control.
With these things running we are closer to getting some technical data from the system, such as bias frames, all-colour flat fields (for the first time), and we could study the shape of the PSF in different filters.
Important next steps are:
a) get the CCD working and under LV control
b) Try running simple scripts again without telescope mounted
Future, ambitious, steps are:
c) assemble the actual telescope in a testing facility and run pointing tests – perhaps test long sequences of observing scripts indoors and actually debug the system so it would work under field conditions.
d) See to the never-attempted integration with a weather station
At MLO the aerosol load of the atmosphere is measured, and a database of hourly, daily and monthly average values of the absorption coefficient, in three optical bands, exists. Here is the link:
The data cover the period in 2011+2012 where we observed.
Our fitting procedures have produced estimates of the power a PSF has to be raised to in order for the convolution of model images with the resulting PSF to produce a ‘good fit’ at the edge of the DS disc in lunar images.
We compare our ‘alfa’ values with AERONET absorption coefficients, and get this plot:
We used daily average AERONET values, which goes some way towards explaining why the scatter for our data (the ‘Power’ alfa) is large while AERONET seems less messy. The overplotted red and green lines are two different robust regression fits. The slope is negative, implying that large absorption coefficients correspond to small ‘alfa’ values. This matches how we empirically understand the behaviour – on bad foggy nights (presumably with lots of aerosols in the air) we got broad PSFs with small values of alfa.
Repeating the above with hourly-average data, the situation does not improve.
So our first interpretation of the above is that ‘alfa’ is only a poor descriptor of the aerosol load – ‘alfa’ is really just a dummy fitting parameter for us – alfa is the value that causes a good fit at the edge of the DS disc. It describes the shape of the PSF. Apparently this is only weakly coupled to the total amount of absorption.
We must understand the AERONET data better first.
Got the FWs moving today, using very basic LabView modules. Engineering Mode does not seem to want to talk to its ‘subVIs’ any more.
In the VI ‘set DO line’ you can select port and line combinations. So far we know that:
port0/line22 operates the relays in the Breakout Box labelled RE2 and RE4.
port0/line23 – RE3 and RE5
port0/line20 controls the Thor FWs. The two relays adjacent to the Thor controllers come alive. The FW can be selected via the pushbuttons on the boxes.
If we can get the camera alive next we could at the very least do lab work on the CCD properties – make color flat fields, study the PSF in various filters, and so on.
Today Hans was able to operate the front shutter and the front iris using LabView. The licensing problem seems to have been cleared up (it seems to have been a typo in a form somewhere – perhaps it happened at MLO, or later at DMI). The CCD camera and the Magma no longer are listed with ‘yellow triangles’ in the system – we have yet to plug it in and see it operate.
Next steps are to use the same methods to get all devices running – Engineering Mode seems to be dead somehow, but all the ‘sub VIs’ are there, and so far we have gotten the front shutter and iris VIs to work – we can slew the telescope from a sub VI also.
We use synthetic models of the Moon, illuminated by a model Earth, to generate fits to our observations. The properties of the theoretical models determine not only the fit itself but also the quality of the fit. Two important choices are made in generating the theoretical models – the lunar surface albedo map, and the lunar reflectance model.
We can choose the surface albedo either directly from the Clementine map, or as this map scaled to better match the 1970s albedo map by Wildey. We can choose the reflectance model in several ways – the models currently being tested are ‘the new Hapke 63’ and the ‘Hapke-X’ reflectance model.
We test how well these choices work by considering the RMSE of the ‘edge-profile fits’ generated between the theoretical Moon models and the observed ones. Combining the two choices above in all possible ways we get these results:
There are four histograms here, but they pairwise hide behind one another. The major effect is due to the choice of albedo map (blue vs green above) – the scaled Clementine map has 25% lower RMSE than the un-scaled map.
The effect of the reflectance model is very minor and can barely be seen.
The RMSE is calculated from the profile of 20-row averaged ‘cuts’ at the dark side edge, starting 50 pixels from the edge and ending 50 pixels onto the disc. Fitting was performed with the MPFITFUN routine in IDL, using a 4-parameter fitting model where intensity offset, horizontal pixel shift, terrestrial albedo and the PSF parameter ‘alfa’ were varied to obtain the optimal fit. RMSE was then calculated from the residuals between the observed edge-profile and the model one.
The Clementine map was scaled (see http://earthshine.thejll.com/#post253 ) against the Wildey map in the sense that the lower-resolution grid from the Wildey map was interpolated by the Clementine map and the scatter-plot generated. An obvious offset and slope difference existed and from the robust linear regression of one onto the other a scaling relationship for the Clementine map pixel values was generated and applied to all pixels in the Clementine map.
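The scaling procedure can be sketched as below; plain least squares stands in for the robust regression actually used, and the arrays are synthetic stand-ins for the Wildey map and the Clementine map interpolated to its grid.

```python
import numpy as np

def scaling_relation(clem_on_wildey_grid, wildey):
    """Fit wildey ~ a + b*clementine and return (a, b); applying a + b*x to
    every Clementine pixel then gives the scaled map. (Plain least squares
    here; the blog entry used a robust regression.)"""
    b, a = np.polyfit(clem_on_wildey_grid.ravel(), wildey.ravel(), 1)
    return a, b

# synthetic check: recover a known offset and slope through noise
rng = np.random.default_rng(2)
clem = rng.uniform(0.05, 0.25, (100, 100))             # made-up albedos
wild = 0.02 + 1.15 * clem + rng.normal(0.0, 0.002, clem.shape)
a, b = scaling_relation(clem, wild)
scaled_clem = a + b * clem                             # the 'scaled Clementine'
```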
So, we may be able to help constrain the sort of work current space missions do! We should be sure to point this aspect of the work out in the ‘big article’.
Puzzling, at first, is the almost complete absence of an effect due to the choice of reflectance model. I can see the effect clearly when total flux vs phase angle is plotted, so we simply do not have a large sensitivity to reflectance at the edge of the disc. Fair enough: only a local set of pixels is involved, and the flux of the model may be quite wrong while the fit at the edge is still very nice. So we should constrain our models by fitting many different aspects of our data – both the reflectance model against observed total flux as a function of phase, and the albedo map by edge-fitting.
The subject of Clementine vs Wildey has been discussed in this blog before. Here:
Hasinoff et al. have (link) shown how images of the DS edge can be used to invert for the illuminating light field (i.e. earthshine) and recover low-resolution maps of the terrestrial albedo distribution. The pdf file is here:
Some notes on this:
1) The edge pixels that can be observed from Earth are differently illuminated because the source of light is extended and at a slight angle with the observer’s position.
2) Sally Langford described a method to find the edge of the lunar disk by using the Laplacian operator on the image and in essence do edge-detection. This method will be influenced by the fact that the edge is not uniformly lit. This applies, in principle, to our edge-fitting method too – except we have so low resolution that only 1-2 pixels at the edge show the effect. We do fit these – perhaps we should not.
3) We should be able to test the method Hasinoff et al describe from purely archival images found on the net – or we could try to get our own images (with the eyepiece projection method it should be possible to get high-magnification images of the DS edge). Or with the help of some willing amateur astronomer.
Or by applying for ‘service time observations’ at some large telescope.
In the last plot in this entry http://iloapp.thejll.com/blog/earthshine?Home&post=375 we found the rise in fitted albedo unreasonable. Let us review what we have:
1) The albedos are derived as a scaling-parameter inside the Earth model we use. This model assumes the Earth is a Lambertian sphere with uniform albedo.
2) We use Hapke 63 to describe the reflectance of the Moon, and Clementine albedos for the lunar albedo.
3) We fit the DS edge – adjusting parameters in the synthetic lunar-image model so that a good fit is found between observed DS-edge profile and model DS-edge.
4) We see a 30-40% rise in albedo as Moon approaches Full (and Earth New).
5) We see a much weaker dependence of time-of-day in albedo. From considerations of the Opposition Surge (which may be inadequately represented in Hapke 63) we expected a drop in albedo towards end of day, and a rise at the start of day. We do see weak signs of the drop at the end of the day, but at the start it is ambiguous.
5) makes me think the problem is not the Opposition Surge in Hapke 63. 4) makes me think it is the total flux of the synthetic models that depend incorrectly on lunar phase. 1) makes me think that perhaps the Earth is not very Lambertian and that the discrepancy becomes much worse, particularly towards the crescent New Earth (Full Moon).
If scattered light is incorrectly included in the fit at the edge we would find a larger fitted albedo as lunar phase approached Full Moon, as we do – but there is NO WAY the fitting method is making a 30-40% bias-like error – it would be blatantly obvious when the fits were inspected.
What’s going on? I would like to compare the observed run of total fluxes against the model-based expectation.
First we fit the edges of realistic but synthetic lunar images (made with Chris’ ‘syntheticmoon.f’ code):
Here models with albedo=0.30 have been edge-fitted – just like observations. We see that the edge-fitting method is completely able to retrieve terrestrial albedo.
So, there is a problem of some sort with the total flux of synthetic models. Let us look directly at the total flux from the synthetic images:
Here are shown (black) lunar model image magnitudes (adjusted to approximately the right V magnitude at Full Moon) against lunar phase [FM=0], and expressions for the same from Allen (red: 1955 edition, orange: 1975 edition) and from the JPL ephemeris system (green). JPL is based on Allen 1975 but adjusts for the actual distance to the Moon. Only our models allow for libration – hence the East-West asymmetry in the black symbols.
It seems our synthetic images have a smaller range in total flux against phase than do the Allen and JPL expressions. By 1.5-2 magnitudes for some phase ranges, relative to Full Moon flux – or Full Moon is less bright by 1.5-2 magnitudes than are the Allen expressions, relative to intermediary phases. Our Moon images, relatively speaking, lack flux as we approach Full Moon.
In normalized-image edge fitting this will cause the model expression that is fitted to have been scaled by too small a number, making the step-to-sky at the DS edge too large, so albedo should compensate during the fitting by adjusting down. We see the opposite …
Could it be that our near Full Moon albedos are all right? But they are so large – 0.4 and more at small phases: but what do we know? Perhaps we are making a relative error in estimating what the ‘answer should be’?
In this entry: http://iloapp.thejll.com/blog/earthshine?Home&post=374 we considered a blind deconvolution method.
The author, Michael Hirsch has kindly been in touch with us and pointed out that the isoplanatic patch is small so using the OBD on instantaneous images from a typical stack, will yield speckle-like PSFs that are then smeared by the method. A smeared PSF will cause the concurrently deconvolved image to have too sharp edges! This is what we see.
We should instead either
a) use subimages the size of the isoplanatic patch and receive the PSF for that image and a deconvolved version of that image, or
b) generate a series of stack-average images and apply the OBD to that, receiving the time-average PSF and the deconvolved image.
Of course, we thought we were doing b), but should instead perform the time averaging before OBDing.
Doing a) would not solve our halo-problem as the isoplanatic patch is small and we are looking for info on 1×1 degree PSFs.
Testing to follow …
Note that night JD 2456076 is the one with most (15) 100-image stacks in all filters. You could get 30 in a row if you combined IRCUT and VE1 …
In this entry: http://iloapp.thejll.com/blog/earthshine?Home&post=372 we looked at the evolution of earthshine phase angle as a function of time of day. This is an important subject for us because the lunar surface reflectance function has an ‘opposition peak’ which results in the infamous Opposition Surge near Full Moon.
Our observations have sampled the peak from just under 1 degree to about 1.5 degrees. From the plot in the above post it is clear that at JD start (noon in Greenwich, midnight at MLO) we sample the peak in the direction towards the peak (large to small phase angles). At the end of the JD we are sampling away from the peak (small to large angles).
The two situations are represented by Westerly and Easterly observing directions – Moon setting and rising, respectively.
There is a consequence of this ‘daily sweep’ of the range near the OS: as our reflectance model – Hapke 63 – does NOT contain the right OS we will see a rise or fall in the fitted albedo.
On the basis of the above argument I am making a PREDICTION: when we look at the fitted albedos we should see them INCREASE at the start of the day (because phase angles are dropping and to compensate the lacking reflectance the albedo has to go UP), and at the end of the day the fitted albedo will DROP.
Next, is a plot of the V-band albedos and the time of day:
We see albedos plotted against (JD mod 1). For each day one colour has been chosen, but some colours had to be reused. Each day has sequences of 3 points – this is because three fits of the model image to the observed image are performed.
To the left we see no slope, or perhaps a little rise towards the end now and then. To the right we see a general slope downwards. As predicted.
If the reflectance was correctly representing the Opposition Surge there should be no change in albedo as the day passed. We may be able to model the OS so that we can get the slopes to go away thereby determining the OS observationally. Troublesome will be the absence of a clear slope to the left, though. Hmm.
The large difference in albedo from day to day is mainly due to a phase-dependency in the determined albedos. The variations in terrestrial albedo are more subtle. We need to first understand why the fitted albedos depend relatively strongly on lunar phase. It could be due to problems related to model-fitting (although I doubt it since we essentially fit the step-size at the edge of the DS disk and this cannot be corrupted by scattered light). It could also be due to an incorrect reflectance function representation in the model used. This would not be the same dependence as discussed above related to near-opposition angles. The phase dependence I speak of is shown in the second row of panels below (the first being a repeat of the above plot):
Above (second row of panels) each ‘strip’ is one night’s observations; only V-band albedos are shown. In this representation New Moon is towards the sides and Full Moon is at 0 degrees.
By careful analysis of the halo around stars, Jupiter and the Moon, we have arrived at an understanding that the PSF is related to a power-function 1/r^alfa, where alfa is some number below 3.
It is also possible to estimate the PSF using deconvolution techniques. In particular, the ‘multiframe blind deconvolution’ method produces estimates of both the deconvolved image and the PSF, without any other input than the requirement that the output image and the PSF be positive everywhere.
There is matlab software provided here: http://pixel.kyb.tuebingen.mpg.de/obd/
and a paper here: http://www.aanda.org/articles/aa/pdf/2011/07/aa13955-09.pdf
Fredrik Boberg and I – well, Fredrik – have tested the software on images pulled from a 100-stack of (non-aligned, but almost aligned) images. The output comes in the form of estimates of the PSF and estimates of the deconvolved image. The deconvolved image has an unnaturally sharp limb – but more on that somewhere else – for now, let us see the PSF!
The estimated PSF is highly non-rotationally symmetric, but does have a central peak. The ‘skirt’ wavers up and down over many orders of magnitude. A radial plot of the PSF and a surface plot look like this:
The red line shows the 1/r³ profile. The elliptical shape of the PSF causes the broad tunnel seen above – the divergence at radii from 30 pixels and out is the ‘wavering skirt’.
The PSF does appear to generally be a 1/r³ power law, which is encouraging. It appears to be quite flat out to 2 or 3 pixels – which also generally matches what we have seen in the PSFs empirically generated from images of stars.
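A quick numerical sketch of that shape – a flat core out to a few pixels and a 1/r^alfa tail – confirms the log-log slope of the tail; the exact functional form below is an illustration, not our fitted PSF:

```python
import numpy as np

def psf_model(r, r_core=2.5, alfa=3.0):
    """Flat-topped power-law PSF: roughly constant inside r_core, falling
    as 1/r^alfa outside -- the shape seen in stars and in the OBD PSF."""
    return 1.0 / (1.0 + (r / r_core) ** alfa)

# check the tail slope in log-log space, well outside the core
r = np.logspace(1.3, 2.3, 50)                      # ~20 to ~200 pixels
tail_slope = np.polyfit(np.log10(r), np.log10(psf_model(r)), 1)[0]
# tail_slope should come out very close to -3
```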
Next steps:
a) Test on a synthetic image convolved by a known PSF.
b) Pre-align the images in the stack – perhaps a slow drift causes the non-round PSF.
c) Repeat the above for several stacks from different nights.
d) Repeat the above for stacks taken through different filters – is there any indication that the PSFs are different?
This paper develops the theory of the properties of EMCCD cameras, like the Andor 897 that we have:
Very relevant if a new method to handle bias and noise were to be developed in our projects.
We use Hapke’s 1963 formulation for the reflectance function in making the synthetic models that are at the heart of converting observed earthshine intensities to terrestrial albedos. H63 has a part, at small phase angles, that corresponds to the reflectances for the DS – on the BS, phase angles are large, as the source is the Sun and the observer sits on Earth – but for the DS the source is the Earth. Seen from the Moon, the Earth is about 2 degrees in diameter – so just what is the effective source-Moon-observer (i.e. phase) angle? For our roughly 500 observations we calculate the effective DS phase angle from geometry and a simple model of the sun-lit part of the Earth based on NCEP cloud cover. The intensity-weighted distance between MLO and each sun-lit pixel is summed so that the effective photo-centre distance, in degrees as seen from the Moon, is arrived at. This is the plot:
Photo-centre phase angle for DS observations plotted against time of day. No correction for the actual Moon-Earth distance has been performed; a standard distance of 384000 km has been used. Each linear sequence of points corresponds to an observing night, showing the evolution of phase angle as Earth rotated.
We see our observations distributed in a characteristic way: most observations early in the JD have decreasing phase angles; most observations at the end of the JD have increasing phase angles. The median phase angle is near 1 degree, with some as low as 0.5 degrees and some as large as 1.6 degrees.
Since MLO is on Hawaii, opposite Greenwich in longitude, the JD change occurs for some observing nights in the middle of a set of observations – so the above plot should really be plotted from -0.5 to 0.5 instead of from 0 to 1.
We see that we are sampling H63 at reflectances near the ‘opposition peak’, and the range of phase angles we have may sample enough of it that the errors H63 makes in representing this part of the true reflectance function could be important in our work on reducing observed intensities to terrestrial albedos. We ought to have several reflectance formalisms available so that we can see their importance during data reductions. Hans has provided such additional formalisms in the synthetic code, but it is complex to use (for me), so we must make an effort to get used to the new code for the ‘large paper’.
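The photo-centre computation described above reduces to an intensity-weighted mean over the sun-lit pixels; a sketch (with made-up pixel values) is:

```python
import numpy as np

def effective_phase_angle(intensity, angular_distance_deg):
    """Intensity-weighted mean angular distance (degrees, as seen from the
    Moon) between the observer and the sun-lit pixels of the Earth model.
    Both arguments are per-pixel arrays over the sun-lit disc."""
    w = np.asarray(intensity, dtype=float)
    d = np.asarray(angular_distance_deg, dtype=float)
    return float((w * d).sum() / w.sum())

# toy example: three sun-lit pixels with assumed intensities and distances
angle = effective_phase_angle([1.0, 2.0, 1.0], [0.5, 1.0, 1.5])  # -> 1.0
```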
Since we are calculating absolute calibrated B and V magnitudes (on the ‘lucky night’ 2455945) for the DS and the BS we can convert these to surface brightnesses, for comparison with e.g. Pallé et al published work.
The formula for surface brightness is
mu = mag + 2.5*alog10(w*w*N)
where mag is the magnitude determined from a patch containing N pixels, with each pixel covering w×w arcseconds. In our case
w=6.67 arcseconds/pixel and
N is 101 and 113 for the 6×6 selenographic degree patches we use (+/-3 deg). With the magnitudes for B and V, BS and DS from the paper – but using N=1, since we report average magnitudes per pixel, we get:
DS: mu_B = 14.29 m/asec²; SD=0.06
DS: mu_V = 13.54 m/asec²; SD=0.06
BS: mu_B = 6.21 m/asec²; SD=0.05
BS: mu_V = 5.31 m/asec²; SD=0.06 (all SDs are internal error estimates based on pixel bootstrapping with replacement).
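The surface-brightness formula is easy to check numerically: for our plate scale the per-pixel (N=1) offset is near 4.12 mag, and the quoted V values give the BS-DS difference directly.

```python
import math

def surface_brightness(mag, w_arcsec, n_pixels):
    """mu = mag + 2.5*log10(w*w*N): magnitude of a patch of N pixels, each
    w x w arcseconds, converted to magnitudes per square arcsecond."""
    return mag + 2.5 * math.log10(w_arcsec * w_arcsec * n_pixels)

# per-pixel offset at our plate scale of 6.67 arcsec/pixel
offset = surface_brightness(0.0, 6.67, 1)   # ~4.12 mag
# BS-DS difference in V from the values above (independent of the offset)
bs_ds_diff_v = 13.54 - 5.31                 # ~8.23 mag
```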
BS is about 8.23 magnitudes brighter than the DS – there are published numbers for these quantities (e.g. Franklin). His Table 1 has differences of about 10 mags between DS and BS. He may be talking about magnitudes per area – not magnitudes per pixel, like we are.
Pallé et al in this paper
give plots showing mags/asec² and for the phase we have (about -140 on their plots) they have a BS-DS difference of 8.4ish mags – so we are within 0.2 mags which seems possible, given the scatter they show in Fig 1.
Not sure I like or understand why Franklin is 1.5-2 mags different in the DIFFERENCE – could that come about from the difference between reporting mags/pixel and mags/area?
During data reduction we found many ‘dragged images’. These are images where the shutter probably did not close before readout started. They look like this one (histogram equalized):
The ‘drag’ comes from the BS but is probably present everywhere – just not visible from the DS. The BS has a maximum near the right-hand limb where the counts are near 10000. The ‘drag’ is near 240 at most. There is no gradient in the drag along the direction of the drag:
The absence of a gradient in the drag helps us understand its origins – it is the effect of constant-speed readout. The readout speed is supposedly 1e-6 s per pixel (in the image header: this may be read from the camera but could be a fixed number entered at setup ….). Reading 512×512 pixels should take 0.26 seconds. The exposure time for this image was set to 0.0274s. Being a ‘dragged image’ we have no idea if this is right.
I do not quite understand the ratio of maximum drag value to maximum BS value. If the drag/BS ratio is 240/10000 and the BS exposure time is 0.0274 s the drag must have lasted just 240/10000*0.0274 = 0.00066 s. The camera is a frame transfer system – is frame transfer that fast? Perhaps the 0.26 s readout time is a slower line-by-line operation? This page mentions ‘a few msec’ for the frame transfer: http://depts.washington.edu/keck/pg22.pdf . We have 0.66 msec, so – close.
If the shutter got into the act in mid readout the amplitude of the drag is affected by this. I think many dragged images we have seen are like the above one – dragging is seldom much worse than this.
Let us check other dragged images with higher exposures and see if the drag/BS ratio is constant or not.
The above was prompted by pondering whether we could have a ‘shutter-less’ camera. It would appear not – at least not if the drag is due to open shutter during readout. The readout would have to be 24 times faster than it is and I am not sure CCD cameras come with such fast readout. 24 MHz readout?
In our data-reduction methods, image alignment is often involved, and it is based on interpolation. An important question is whether the mean is conserved during such alignment.
To test this we iterate the shift of an observed image, measuring the mean of a patch given by selenographic coordinates. We iterate many times – i.e. reuse the previously shifted image for next round – in order to get a clearer answer. Visual inspection shows the iterated image getting ‘fuzzy’ as the shifts recur. That is, image standard deviation suffers. How bad is the situation for the mean of an area?
The answer is ‘not very much’. Iterating the shifts 40 times we find that the per-shift loss in mean value over a realistically sized patch on the DS is -0.005 %. The mean in the patch was about 10 counts, which is ‘bright’ for the DS. At other lunar phases the counts may be just 2 or 3 – still only a factor of 5 below the above.
We would never shift the same image 40 times, of course, but the iterated shifts above allows us to build up a change in the mean that can be fitted with a regression. We have learned that one shift of an image may ‘cost’ 0.005% of the mean flux. This is a lot less than the goals for accuracy we have set, which are ‘0.1%’.
Added later: an identical test, but using ROT to rotate an image by a random amount, iterated, showed that errors per rotation were almost always less than 0.01%.
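The iterated-shift test can be sketched in a few lines. The bilinear shifter below is a stand-in for the interpolation our pipeline actually uses, and the image, offsets and patch are arbitrary illustrative choices:

```python
import numpy as np

def shift_bilinear(img, dy, dx):
    """Shift an image by (dy, dx) pixels using bilinear interpolation (edges clamped)."""
    ny, nx = img.shape
    y = np.clip(np.arange(ny, dtype=float)[:, None] - dy, 0, ny - 1)
    x = np.clip(np.arange(nx, dtype=float)[None, :] - dx, 0, nx - 1)
    y0 = np.floor(y).astype(int)
    x0 = np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, ny - 1)
    x1 = np.minimum(x0 + 1, nx - 1)
    fy, fx = y - y0, x - x0
    return ((1 - fy) * (1 - fx) * img[y0, x0] + (1 - fy) * fx * img[y0, x1] +
            fy * (1 - fx) * img[y1, x0] + fy * fx * img[y1, x1])

rng = np.random.default_rng(1)
img = rng.poisson(10, size=(128, 128)).astype(float)  # DS-like count level

patch = (slice(44, 84), slice(44, 84))                # fixed test patch
means = [img[patch].mean()]
for _ in range(40):
    # shift out and back each round; only interpolation losses accumulate
    img = shift_bilinear(img, 0.3, -0.7)
    img = shift_bilinear(img, -0.3, 0.7)
    means.append(img[patch].mean())

# per-round fractional change of the patch mean, from a linear fit
slope = np.polyfit(np.arange(len(means)), means, 1)[0] / means[0]
print(f"per-round change in patch mean: {100 * slope:.5f} %")
```

Visual fuzziness (loss of variance) shows up quickly, but the patch mean drifts far more slowly, consistent with the 0.005 %-per-shift figure.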
We are interested in just how good the method we use is when it comes to measuring B-V in small areas on the DS and the BS of observed images. In this entry we could find an error as low as 0.005 (once we stopped measuring incorrectly). Is this the lower limit?
We now use synthetic images with realistic levels of Poisson noise added to get the answer. We use ‘syntheticmoon_new.f’ from Chris Flynn and input the ideal image from JD2455814 – i.e. the almost Full Moon – and coadd 100 images, each with realistic noise added, and folded with a realistic PSF (alfa=1.7). We do this twice, with different noise seeds – convert to instrumental magnitudes, and then subtract the two images, pretending they are B and V images and call the result our ‘synthetic B-V image’.
The resulting image is then measured in the areas designated by the agreed selenographic coordinates – those used in our little paper in Table 1. The errors found are:
0.0004 in the synthetic image
0.001 in the case of the real observed images
So, something is generating extra noise for us in the observed images, increasing the noise by more than a factor of two. This could be due to many things – real images are bias subtracted and flat fielded – synthetic are not; there is no readout noise in the synthetic images; image registration does not have to be performed (but could be simulated) in the synthetic images and we have seen variance do strange things during the necessary interpolations that take place during image alignment. There may be more issues to think of here.
This was for a situation with full illumination on both patches. What happens when one is in Earthshine only?
We will simulate this, but, for now, assuming the DS is 1000 times fainter than the BS, we would expect the error level to be sqrt(1000) times larger in a DS-BS image, or about 0.012 mags. This is close to what we see (0.015).
Added later: On a synthetic image of our lucky night (JD2455945) we find that the lower error limit on B-V BS-DS is
and, for the observed image (see this entry)
So, again, we have a factor of two or so more noise in reality than the best-case synthetic world predicts.
There is potential for improving our technique! For now we should report the above in the little paper. Fine tuning bias subtraction, flat fielding and image alignment – and understanding the role of image ‘wigglyness’ in alignment issues is needed.
In this entry we introduced a Monte-Carlo based model for the spectrum of earthlight. We are able to use the model to estimate B-V for outgoing light, given input for cloud albedo, surface albedo and so on – and cloud fraction.
From the NCEP reanalysis project we can take the global total cloud fraction product (‘tcdc’) and calculate the global mean value for different times (we pick year 2011 here) and use the values for the B-V model.
I have done this. In 2011 TCDC varied between 50 and 55 percentage points. delta(B-V) [i.e. the differences between the solar spectrum B-V and the earthlight B-V, in the model] varied from 0.090 to 0.075 at the same time. That is, earthlight became more solar coloured for larger cloud cover – good – and implies that Earth is bluer when there are fewer clouds.
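The global mean has to be area-weighted, since grid cells shrink towards the poles. A minimal sketch, with a synthetic latitude-dependent field standing in for the real NCEP ‘tcdc’ grids (which would be read from the reanalysis NetCDF files):

```python
import numpy as np

lats = np.arange(-88.75, 90.0, 2.5)        # NCEP-like 2.5-degree grid midpoints
lons = np.arange(0.0, 360.0, 2.5)

# Dummy field: more cloud at low latitudes (real 'tcdc' replaces this)
tcdc = 40.0 + 20.0 * np.cos(np.radians(lats))[:, None] + 0.0 * lons[None, :]

# Area weights proportional to cos(latitude)
w = np.cos(np.radians(lats))[:, None] + 0.0 * lons[None, :]
global_mean = (tcdc * w).sum() / w.sum()
print(f"global mean TCDC: {global_mean:.1f} %")   # ~55.7 for this dummy field
```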
The change in B-V is 0.015 mags – can we measure that at all?
In our little paper we have relevant results. Uncertainties on B and V are at the 0.005 to 0.009 mags level, so that differences are at the 0.012 level (worst case) for a B-V value – on the ‘lucky night’.
But on the Full Moon night of 2455814 we have errors of just 0.001 and 0.002 on B-V.
We need to understand why the error can be so different. We promised 0.1% accuracies at the start of this project – and seem to be able to get it with 0.001 mags error – but why not on both nights?
Obviously, the error on the DS will be larger since there are much fewer photons – but this does not help explain why the errors are similar on the ‘lucky night’.
One point is that on 2455814 we measure B-V from one image generated by alignment of the B and V images in question. On 2455945 we measure B and V in separate images and take the difference of those means. There are ‘cancellation’ issues at play here – surface structure will add to the variance of single-band images, while some of the structure cancels (particularly if the images are well centred) in difference images.
Added later: yes, there is an effect (obviously, duuuh) – if we calculate the DS-BS difference from areas in the B-V image, instead of in the B and V images separately, and perform bootstrap sampling on the pixels involved, we get the mean over the resamplings and its standard deviation to be
-0.155 +/- 0.005
whereas the difference between <B> and <V>, and its error calculated from error propagation, is
-0.154 +/- 0.014.
That is, we have a third the uncertainty. This will be used in the paper.
The uncertainty is still a bit high, though: 5 times what we promised – or, to be fair, about 5/sqrt(2) = 3.5 times, since the above is a difference and not a directly measured quantity, which was the thing about which we made promises!
It could be that noise on the low-flux DS is dominating here – this remains to be seen. And we still need to understand why it is still higher than for the Full Moon night – but things are making a bit more sense now.
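The bootstrap estimate of a patch mean and its uncertainty can be sketched as follows. The patch values here are synthetic stand-ins for the real B-V pixels, with a spread chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical B-V pixel values in a DS patch (mean and scatter illustrative)
patch = rng.normal(-0.155, 0.15, size=1000)

# Resample the pixels with replacement many times and collect the means
nboot = 2000
means = np.array([rng.choice(patch, size=patch.size, replace=True).mean()
                  for _ in range(nboot)])
print(f"{means.mean():.3f} +/- {means.std():.3f}")
```

The bootstrap scatter of the mean comes out near sigma/sqrt(N), as it should, without any error-propagation assumptions about how B and V noise combine.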
The Moon is occasionally present in corners of images of the Earth taken by Earth Observation satellites. Use such images and the known colours of the Moon to link the filter system used by the satellites to the Johnson UBV system. Use the resulting tool to calculate B-V colours of the Earth through time. Map out seasonal variability. Test sensitivity of the result to imposed variations in e.g. ice cover (by image manipulation) or similar for desertification.
I have looked through all suitable images close to full moon (i.e. at high Sun-Earth-Moon angle) and only came up with a few relevant nights, all within 10 degrees of full moon. We were keen to make a colour map across the face for figuring out what colour range is there, and what colours are reflected under BS sunlight, rather than DS Earthlight.
JD2455814: is the best full moon data available — it’s shown on the left. We ran it through our colour pipeline and get a B-V ~ 0.9. The B and V images turn out to be slightly different sizes, which made the colour map hard to produce. It has since turned out that this may be due to distortions introduced by the shift needed to align the images at all. Work is proceeding on this.
JD2455905: data taken when we were following an eclipse — Dec 09, 2011, total lunar eclipse. No non-eclipsed images, so nothing useful on the Brightside colour night (we should check the Moon is red though).
JD2456082 is also eclipsed — partially. No useful colour data as we have only eclipsed images. June 3, 2012
JD2455847 looked like a promising night — full moon at an airmass of 1.54. However, when I make a colour map (right panel above), it has a B-V of ~0.4 across the face. Can only assume that the exposure times are wrong in the headers (as has happened occasionally). One corner of the Moon is slightly eclipsed for some odd reason in one filter only (V) — opposite Tycho. Perhaps we hit the dome in this filter? These data should clearly not be trusted further.
In researching material for our first paper we come across the data in Allen for the colour index of Earth: “0.2”. There is no evident source of that piece of information so in order to interpret it and use it in the paper we have to do some detective work. Here it is:
References given by Allen (various editions) in the table that gives B-V of Earth as ‘0.2’. The references appear both in the caption of the table and in the B-V/U-B column of that table. The table is on page 299 of the fourth edition; reference numbering below is as in Allen 4th ed.
1. Astronomical Almanac, 1998, USNO
In the 1997 edition there is on p. E88 a table with a dash ‘-’ for Earth’s B-V. For the Moon there is 0.92, without a reference.
2. Allen 1973 – the two colour columns of the table on page 144 of this 3rd edition give references to Irvine and Harris (see below).
In Allen 1955 there is on p. 159 a table with ‘C’ for planets, and Earth has 0.2. In the notes it is explained that C is the ‘colour index’ and that it is offset from the larger published value by 0.1 to 0.2. What was ‘C’ as published, and where, and how does C compare to B-V? The references on p. 160 of the 1955 edition are not specific to each piece of information, but we find among them a reference to Danjon. There are two references, it seems – 1936 and ‘Danjon, Comptes Rendus, 227, 652, (1948)’. The 1936 one is an observatory publication on earthshine observations I am trying to find. The CR reference is about Mercury and Venus. Using information from Wildey’s paper on Mariner 2 data for Earth, where the mean Danjon C.I. of earthshine/light, the BS Moon and the Sun are given, we can estimate the linear transformation from the C.I. system to B-V. Using Wildey’s data for the Moon and Sun C.I., but not his B-V data for the same (using instead Holmberg et al for the Sun), we get

B-V = 0.112 + 0.671*(C.I.)

With 0.2 for C.I. we get B-V = 0.25.

We can test the effect of using B-V for the Moon = 0.92 (an often cited value); this gives a slightly different linear relation, and C.I. = 0.2, 0.3, 0.4 gives B-V = 0.11, 0.20, 0.29, respectively.
3. Irvine et al 1968, AJ 73, 251, 807
Paper is on B-V for the planets – not Earth.
4. Harris, in Vol III, (eds) Kuiper and Middlehurst, p. 272.
The Harris chapter in the book briefly discusses the photometry of Earth – mainly in the form of a citation to Danjon’s chapter in Book II of the series by Kuiper and Middlehurst. The Danjon phase law is given for Earth, but there is no comment on colours.
Summary: The source of the ‘0.2’ for Earth in Allen is thus not source 1, nor 3 – but 2 and 4 point at Danjon. The 1933 An. d’Obs. Strasbourg Vol 3 reference is in fact available online (now) at ADS. It is in French, and it is long – but towards the end things hot up, because a summary of colour index data for the earthshine (lumière cendrée) and the earthlight (Earth itself) is discussed, although in the Rougiere system. The effective wavelengths of this system are given in chapter 4 on p. 171 and are
Rouge 606 nm
Vert 545 nm
Bleu 467 nm
so Rouge is between Johnson V and R, Vert is between Johnson B and V (closer to V), and Bleu is between Johnson B and V (closer to B).
Of interest is a description of the areas on the Moon used – it seems Crisium and Fecunditatis are used, but only after 1928 is this fixed.
Also of interest are mean values (mean over several years) for the colour indexes,
p 174 earthlight C.I. = 0.33 from -0.01 to 0.6
p 175 earthshine C.I. = 0.62
The variability is large and is said to be mainly seasonal. Note the addendum at the very end (p. 179), where everything important seems to be updated in a breathless paragraph! An updated value of the C.I. for the Moon by Rougiere is given as 1.10 (cited by Wildey in the Mariner 2 paper), and the earthSHINE C.I. is therefore updated to 0.64. There is a note that there isn’t space to also update the earthLIGHT number, but I assume it is upped by 0.02 also. So, if Allen is referring to Danjon’s C.I. as 0.2 and states that it is 0.1 to 0.2 lower than the published value, we can see that Allen is talking, indeed, about the C.I. for Earth itself – earthLIGHT – and not earthSHINE.
So, using the linear equation developed above

B-V = 0.112 + 0.671*(C.I.)

and now inserting C.I. = 0.35 (with limits at 0.01 to 0.62), we have for the earthLIGHT B-V = 0.35 (with limits from 0.12 to 0.53).
For an earthSHINE colour of C.I. =0.64 +/- 0.3 we get a B-V range of 0.54 +/- 0.2
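For convenience, the transformation and the numbers above can be re-computed directly:

```python
def ci_to_bv(ci):
    """Danjon colour index to Johnson B-V, via the linear fit derived above."""
    return 0.112 + 0.671 * ci

print(round(ci_to_bv(0.35), 2))                          # earthlight: 0.35
print(round(ci_to_bv(0.01), 2), round(ci_to_bv(0.62), 2))  # limits: 0.12, 0.53
print(round(ci_to_bv(0.64), 2))                          # earthshine: 0.54
```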
Conclusion: This spans our observation as well as Franklin’s, but with large uncertainties. Note the numerically identical B-V and C.I. values – so Allen is right inasmuch as he replaces the ‘C’ of the 1955 edition with the ‘B-V’ of the 1973 (and later) editions.
Data to plot
1926-1935 0.54(+/- 0.2) to 0.85 = -0.31 to -0.11
1967 -0.17 +/- 0.05
2012 -0.16 +/- 0.02
Plotting this we have:
The Danjon data are an average of measurements from 1926 to 1935, and the upper and lower ends of the bars indicate the extent of seasonal variability, as surmised by Danjon. The Franklin bars represent +/- 1 standard deviation based on the 12 determinations from 2 nights in his paper, and our error bars represent +/- 1 standard deviation based on scatter due to photon statistics in our measurement from a single night in two images.
Here is a link to a site that lists transmission curves of many color filters – e.g. Wratten filters.
It looks like Schott RG665 is most like VE2, and might be the long-pass filter we need on a modified Sigma SD100 camera. We need it with a 58mm thread, though.
Wratten #25A is available from B&H with 58mm thread but has its cutoff at 620 nm.
B&H have a 58mm Polaroid that looks ok. Transmits from 720 nm and up.
We have noted interesting behaviour of our data at the lunar limb, link here. It seems that our data show a convergence of intensity ratios, between certain symmetric points on and near the limb, towards unity. We saw the same in model data.
The model we use is based on the Hapke 1963 BRDF (i.e. reflectance) model and looks like this:
BRDF ~B(phase)*S(phase)* 1/(1+cos(e)/cos(i))
where e and i are the angles of emission and incidence. For a given instant the phase is the same everywhere (almost) so the only angular dependency that remains is the last term above. On the limb we can have various values of i but only one value of e – namely pi/2 – recall that i is the angle between local normal vector and the Sun, while e is the angle between local normal vector and the direction towards the observer.
cos(pi/2) is 0, of course, so the last term above is unity everywhere on the lunar limb. The ratio of the reflectance in two points on the limb therefore reduces to unity. The ratio of two limb intensities should therefore be equal to the ratio of the albedos at those two points.
So, there is no clue here as to why we observe, and the model gives, intensity ratios near unity along the limb. Mystery remains!
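The unity of the angular factor on the limb is easy to verify numerically:

```python
import numpy as np

def angular_factor(e, i):
    """The last term of the Hapke 1963 reflectance: 1 / (1 + cos(e)/cos(i))."""
    return 1.0 / (1.0 + np.cos(e) / np.cos(i))

# On the limb e = pi/2, so the factor is 1 whatever the incidence angle i:
for i in (0.1, 0.5, 1.0):
    print(angular_factor(np.pi / 2, i))
```

Away from the limb the factor depends on both angles, e.g. it is 1/2 wherever i = e.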
Our observation of unit intensity ratio certainly is consistent with Minnaert’s reference to Schoenberg’s statement – namely that the ‘intensity along much of the bright limb is a constant value’. Intensity ratios between limb points would then give unity.
In 1941 Minnaert published a paper about lunar photometry – in particular the reflectance. There is an idea in that work which we can use to view our data, and perhaps learn something.
The intensity we observe in any pixel of the lunar disk is:
Intensity ~ albedo * reflectance(i,e,alfa)
Where ‘albedo’ is a measure of the darkness of the material in the pixel and ‘i,e,alfa’ are angle of incidence, angle of emission and lunar phase, respectively.
Consider now two points A and B on the lunar disk placed such that iA=iB and eA=eB. Furthermore we have that the phase, alfa, is almost constant all over the image (alfa is the angle between a point on the Moon and the observer and the Sun – and apart from the finite size of the Moon compared to the distances to Sun and Earth this angle is the same for all points on the image of the Moon). In these two special points we thus have
reflectance(iA,eA,alfa) = reflectance (iB,eB,alfa) so that the ratio of observed intensities from A and B are simply:
I_A/I_B = albedo_A/albedo_B
By finding many pairs like A and B we could investigate how ratios of albedos across the Moon vary – or compare the ratio from observations to our map of albedo – at the present “Clementine scaled to Wildey”.
We now do just this!
From the synthetic image software code that Hans wrote we have, for every pixel on the model image, its angle of incidence and emission. We can therefore select an arbitrary point A on the BS and find its symmetric point ‘B’ also on the BS, extract the observed intensity ratio A/B as well as the same ratio for the ideal model image and tabulate these against solar zenith angle (i=SZA) and earth zenith angle (e=EZA). We have done this for 1000 randomly distributed points and show the results here:
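The pair search can be sketched like this; the angle maps below are random stand-ins for the ones produced by the synthetic-image code:

```python
import numpy as np

rng = np.random.default_rng(2)
i_map = rng.uniform(0.0, np.pi / 2, size=(64, 64))   # dummy incidence angles
e_map = rng.uniform(0.0, np.pi / 2, size=(64, 64))   # dummy emission angles

def symmetric_point(ay, ax):
    """Pixel (other than A itself) with the most similar (i, e) pair."""
    cost = np.hypot(i_map - i_map[ay, ax], e_map - e_map[ay, ax])
    cost[ay, ax] = np.inf          # exclude A itself
    return np.unravel_index(np.argmin(cost), cost.shape)

by, bx = symmetric_point(10, 20)
print(by, bx)
```

With the real maps one would also require B to lie on the BS, and the observed-to-ideal intensity ratios at A and B are then tabulated against SZA and EZA.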
Top panel shows observed (black) and model (red) A/B ratios plotted against EZA. Small EZA (in radians) correspond to points near the middle of the terminator on the disk while large values correspond to points closer to the limb, near the sky. Second panel shows the same plot but now using solar zenith angle (also in radians) as the x-axis – small SZA correspond to points near the sub-solar point, in this image near the limb, while large SZA correspond to points near the terminator where the Sun is low in the sky as seen from the Moon. Bottom left plot is the ideal ratio against observed ratio. Green line is the robust regression line and blue line is the diagonal. Bottom right is a small image of the Moon with points A and B plotted as white dots.
We immediately notice the tendency for observed ratios to be larger than ideal ratios – at least for the median to large values of the ratio: the observed ratios lie above the diagonal line in the bottom left panel.
The observed ratio has a larger span, and not just because of ‘noise’ – careful inspection of slices across e.g. dark Mare [not shown here] reveals that observed A/B ratios reach more widely separated extremes than the ideal ratio does across the same slice.
This seems to imply that the albedo map we use is too ‘flat’ – it should be scaled so as to give more extreme darks and lights by perhaps 25%, judging from the plots.
We also have to wonder why albedo ratio is a function of EZA and SZA – nearer the terminator or middle of the disc ratios are simply higher than near the limb. As intensities are smaller there while brighter near the limb we wonder what is going on – remember that the ratio is reflectance-independent by construction! Note also that both observations and models behave in this way. It cannot be ‘nonlinearities in the camera’ since the model behaves in the same way, nor ‘scattered light in the observations’ since the scattering-free model behaves in the same way. The model is also assuming a spherical surface – no hills, boulders or crevices for shadows to hide in.
Here is a repeat of the above for a different lunar phase:
Similar features are seen here – towards small SZAs the ratios go to 1. As before, the general picture is that variations observed from dark to light are larger than in the model. This reminds us that we are using a fixed albedo map in the synthetic model – we can expect different dark/light ratios in different wavelength bands. The above scaling may serve to achieve this?
We should repeat this for many examples from each band and each lunar phase to see which features are general.
There is a comment by Minnaert in “Planets and Satellites”, ed. Kuiper and Middlehurst, Vol III, p. 222, to the effect that the “bright limb shows the same brightness over 3/4 of its length and that this was noted by Schoenberg (1925) and this was only rarely checked” – wonder if that last remark is still true? We can certainly confirm that the albedo ratios tend towards 1 near the limb, but we do not yet understand if these two observations are linked. Time to read Schoenberg!
Our observed images are a convolution of a point-spread function (PSF) and the Object (the Moon). We know from analysis of images of various sources – the Moon itself, Jupiter, and stars – that the PSF has the character of a power law along the lines of a core with wings that follow 1/r^alfa with alfa near 2.9.
It is in principle also possible to determine the PSF by de-convolution. Classically we would expect that
Image = Object * PSF
Where the ‘*’ implies spatial convolution. Taking FT of both sides we get
F(I) = F(O) x F(P)
where F is the forward Fourier transform, and the ‘x’ implies multiplication. Rearranging and taking inverse Fourier transforms (f) we get:
P = f(F(P)) = f(F(I)/F(O))
We have I (the image) and if we had O we could calculate the PSF P. Usually P is very noisy because of amplification of small noisy signals at high frequencies due to the division above. We can average over several estimates of P, however.
For us, this is easy because our data come in the form of 100-image stacks. Unfortunately we do not know O, the object. But we can make synthetic images of O that are highly realistic!
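A toy version of this Fourier-quotient estimate, with a known PSF and a synthetic broadband ‘object’ so the recovery can be checked (in the real case O is the synthetic lunar image and the noise comes from the camera):

```python
import numpy as np

rng = np.random.default_rng(3)
n, nstack = 64, 100

# A broadband toy 'Object': random texture standing in for the lunar surface
obj = rng.uniform(50.0, 150.0, size=(n, n))

# A known toy PSF, centred at (0, 0) in FFT convention, summing to 1
psf_true = np.zeros((n, n))
psf_true[0, 0] = 0.8
psf_true[0, 1] = psf_true[1, 0] = psf_true[0, -1] = psf_true[-1, 0] = 0.05

F_obj = np.fft.fft2(obj)
clean = np.real(np.fft.ifft2(F_obj * np.fft.fft2(psf_true)))

# One noisy 'observation' per stack member; average the Fourier quotients.
# Each quotient is noisy where F(O) is small -- averaging over the stack helps.
psf_est = np.zeros((n, n))
for _ in range(nstack):
    img = clean + rng.normal(0.0, 0.5, size=(n, n))   # readout-like noise
    psf_est += np.real(np.fft.ifft2(np.fft.fft2(img) / F_obj))
psf_est /= nstack
print(psf_est[0, 0])   # close to the true 0.8
```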
We have applied the above procedure on an image-stack of the Moon, for which we also have a synthetic image. We average the 100 PSFs that are generated. The radial profile from the peak looks like this (black curve):
The blue line is a power law with alfa=2.1 and the red line is a power law with slope 0.3. For this particular night we know that a single power law (plus a narrow table-lookup core) of alfa=2.6 has been estimated for this image using forward modelling techniques where we fit parts of the scattered halo at about 100-150 pixels distance from the photo-centre of the bright side of the Moon.
The two estimates of the PSF are thus quite different, and I wonder why.
Possibly it is a sign that the edge-fitting can be performed well with a large family of PSFs. We know that the fit is excellent at the edge of the disc.
I suggest that a next step could be to test both PSFs or types of PSFs and see how they perform in various situations. How close to the observed image is the ideal image once it is convolved with the above profile?
We have indications that the halo seen in B and V images are somewhat similar, but that VE2 images of the Moon have decidedly different halo profiles. Because the filters used in B, V and VE2 are fabricated (and function) differently we may have the situation that this causes the differences in halo profiles. B and V are ‘coloured glass’ filters while VE2 is a thin-film filter. In a DSLR detector the R,G and B filters are essentially bits of coloured semiconductor material. As a test we inspect the halo seen around the Moon in images taken with a DSLR camera. In such images the R,G and B channels are obtained at the same time at precisely the same observing conditions.
On the internet (http://bit.ly/13YZIOU) we have found a very detailed large-scale JPG (i.e. 8 bit only) 10-min exposure image of the sky near Orion containing the Moon. We submitted the image to nova.astrometry.net and received a solution back, including image scale. We took the resulting WCS-equipped image and extracted the profile of the lunar halo in R, G and B and plot these against radius from the estimated disk centre. We have subtracted a sky level for each colour, estimated by eye, and show the profile in the upper panel and repeat it in the lower panel with straight lines fitted:
The profiles are saturated out to about half a degree but after that they follow a remarkably similar shape in this log-log plot. Fitting power law functions (1/r^alfa), we may even see some sort of ‘straight line behaviour’ between radius 0.6 and just short of 2 degrees, with another linear trend taking over out until the sky-noise is reached at 4 degrees or so. The slopes are fitted-by-eye only but are -2.9 (near the canonical ‘can-never-be-steeper-than-3’) value, and -1.3.
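Fitting a power law 1/r^alfa is a straight-line fit in log-log space; here is a sketch using a noise-free synthetic profile at the by-eye slope:

```python
import numpy as np

r = np.linspace(0.6, 2.0, 50)       # degrees from disc centre
intensity = 3.0 * r**-2.9           # synthetic halo at the by-eye slope

# In log-log space a power law is a straight line of slope -alfa
slope, intercept = np.polyfit(np.log10(r), np.log10(intensity), 1)
print(f"alfa = {-slope:.2f}")       # 2.90
```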
Before assuming that ‘DSLR RGB imaging’ will solve all our filter problems, let us recall that the VE2 vs other profile differences we have found are subtle; that the above is based on an 8-bit image; that nothing is known about the image treatment performed by the author of that image, and that we do not have data similar to VE2 here – the R channel is not VE2 [ … but read more here link !]. Let us instead try to do something similar with 14-bit RAW images.
Note that the above image was obtained by tracking the sky. Apparently, tracking allowed a long exposure that gave us a wide halo to study. In our own DSLR-On-the-sky wide-field images we have failed to get results similar to the above – exposures were limited to several seconds to avoid overexposure and trailing. Our own MLO telescope images are restricted to the 1/2 degree reach from disc centre.
Note that the above essentially speaks very well for the sort of DSLR optics used – the amount of scattered light near sources must be low or we would not see so steep a PSF! Again, this may be easier to achieve with a wide-field lens such as the one used for the above image – things may be different with a tele-lens that allowed a close-up of the Moon. The ‘core’ of the PSF should be better investigated, if we can get some of our own images of e.g. Jupiter or bright stars – the above image is VERY wide-field and contains zillions of crowded stars, not likely to give us good point-source PSFs.
In this post we commented on the problems we have with the requested and actual exposure times. It turned out NOT to be a good idea to correct the requested exposure times – certain colour problems got worse!
What happened was that B-V for the BS plotted against time of day showed a decided reddening towards the end of the Julian day – this corresponded to the setting Moon. Airmasses were growing towards the end of the day.
When time of day and B-V were plotted we could see slopes using UNcorrected exposure times of 0.4 magnitudes/day. If we DID correct the exposure times as explained in the above link we got almost a whole magnitude per day in slope!
This got me thinking about whether the extinction corrections we perform are correct – they depend on knowing what the airmass is and knowing what the extinction coefficient is. If either, or both, are wrong we can expect colour-terms to pop up.
The airmass we have been using assumed surface pressure, 0 degrees C, 0 % relative humidity and a wavelength of 0.56 microns. More appropriate is the pressure that occurs at the MLO (we guessed it is 0.6*760 mm Hg), a temperature of at least a few degrees C (we never observed in frost), and a relative humidity of 20%. Some of these quantities could actually be replaced by the available MLO meteorological data – more on that later. We should also use appropriate wavelengths of 0.445 and 0.551 microns, for B and V respectively. However, using these better values will at most give an offset in B-V and cannot explain an airmass-dependent colour term. [or can it? gotta think about that one …]
Next we must consider the formula used for the airmass. Looking in the IDL code ‘airmass.pro’ provided by Marc W. Buie, we see it is the “cosine based formula derived by David Tholen” with an update 2003. The formula appears to be:
cz = cos(zenith[z])
am[z] = sqrt(235225.0*cz*cz + 970.0 + 1.0) - 485*cz
When we have a reference for this I shall return to the question of whether the approximate formula is good enough.
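For reference, the formula transcribed into Python (a direct transcription of the two lines above, not of the full airmass.pro):

```python
import math

def airmass(zenith_rad):
    """Tholen cosine-based airmass formula, as in Buie's airmass.pro."""
    cz = math.cos(zenith_rad)
    return math.sqrt(235225.0 * cz * cz + 970.0 + 1.0) - 485.0 * cz

print(airmass(0.0))                           # exactly 1 at the zenith
print(round(airmass(math.radians(60.0)), 3))  # just under sec(z) = 2 at 60 degrees
```

Note that 485² = 235225 and 486² = 236196, which is why the expression comes out at exactly 1 at the zenith.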
Finally there are the extinction coefficients. Since we see an increase in B-V with airmass, it is either the V magnitude that is being over-corrected or the B magnitude that is being under-corrected. Experimenting, I find that kV=0.08 and kB=0.17, instead of the 0.1 and 0.15 that Chris found from standard-star observations, give a colour-free B-V progression with time of day (at Moon-set – Moon-rise is too under-represented to be useful here). Is it within the error margins that kV and kB should have these values? Old man Chris says he estimated the values 0.1 and 0.15 ‘by eye’ here, but I think he also listed some regression results elsewhere, with errors – but I can’t find ’em!
Anyway, using the values 0.08 and 0.17 gives a mean BS B-V of 0.898 +/- 0.054 (error of mean +/- 0.004). This value is relevant when keeping the discussion about the “van den Berg value for B-V=0.92” in mind, link here. Our own average of vdB62 data gives 0.88 +/- 0.02 – within the errors we have, easily.
We had a method to measure exposure times – but the method depended on an LED inside the telescope system, and it appears that the system scattered light which interfered with exposures. So we had it turned off. From data collected before the device was turned off it is possible to look at the relationship between requested and supposedly actual exposure times.
For requested exposure times below 10 ms it appears we did not get anything useful. For requested exposure times above that there is a bias which decreases with increasing requested duration. It is simple enough to fit a regression line to the above; it is:
and use this relationship to correct all exposure times. We have tried this.
On the left, results without correction of exposure time. To the right with.
We have extracted B and V magnitudes from the set of ‘good images’. We have converted to magnitudes using Chris’ NGC6633 and M22 calibrations. For B and V images taken within half an hour of each other we have calculated the mean B and V values for the BS and then B-V. We plot these values against lunar phase as well as the time of day, and we see that there is a phase- and time-of-day-dependence in B-V. When we compare B-V from images where the exposure times have NOT been bias-corrected with B-V from images where the bias HAS been corrected, we realize that the problems are much worse for the bias-corrected images, suggesting that the bias correction is not valid.
That in turn suggests that the method for measuring exposure time is faulty. As far as we know, the method was based on passing IR light through the shutter aperture to a detector and then calculating the opening time of the exposure from the data collected. We are not sure if this was based on a timing mechanism or a flux-collection method.
From our own images of a constant light source we ought to test the stability of the shutter. From the lab we have Ahmad’s laser images, and from MLO we have many stellar images, Jupiter images and also hohlraum images.
Minnaert 1941 paper on the ‘reciprocity principle’ and other symmetry-considerations:
Photometry of the bright side of the Moon has been carried out for at least 100 years. An important paper is the one by van den Bergh:
He observed the Moon on 5 different occasions – two of them at Full Moon and one of them at half Moon – and derived B-V colours for 12 regions, measured relative to Mare Serenitatis. He also observed standard stars and tied the photometry of Mare S. to these – thus tying the B-V photometry of the 12 regions to the standards.
Here are some results and insights:
For Mare S. he found B-V = 0.876 +/- 0.022. Over the 12 regions on the Moon (20” to 40” large) he reports B-V relative to M. Ser. This offset has mean value -0.017 +/- 0.005, where the mean was weighted (by me) with the inverse square of the mean errors reported by vdB. Knowing B-V for M. S. we can say that an average over 11 regions on the Moon (I dropped one measurement as it was flagged problematic) is 0.859 +/- 0.023.
I would say that this is what vdB62 reports. From somewhere we have the notion that he reports 0.92 for B-V – I do not see that in his paper. But he does quote Harris for saying that the mean over many regions on the Moon is 0.92, but does not comment further on that.
I notice that vdB does not allow for the different phases of his observations, and I think I know why – the photometry may depend on the lunar phase, but the colour does not (at least for these phases). We should check that, as we have BS B and V images galore!
Another paper, that by Wildey and Pohn:
contains masses of UBV photometry, and seems to discuss lunar phase and corrections thereof. Should be looked at. Many tables to digitize, though. Bachelors project?
We have some observationally based estimates of B-V for earthshine. We are working on various models of the same that would allow us to check our assumptions and set error limits.
A somewhat sophisticated model of the color (B-V) of earthshine has been made. It consists of a Monte Carlo simulation where scattering events are decided by random number generators and probabilities based on albedos.
I draw photons from the wavelength distribution given by the Wehrli 1985 Solar spectrum. For each photon we allow a Rayleigh scattering event to occur with probabilities depending on the wavelength. If the photon is not Rayleigh scattered its further fate is given by factors describing the fraction of the surface covered by land (0.3), and the albedos of clouds, land (either Soil or Vegetation), and ocean. Some photons are absorbed while others are scattered into space. The scattered photons
are collected and the emergent spectrum built up. If this is repeated for a large number of Monte Carlo trials a smooth spectrum results. The B-V colour of this is then calculated, using Bessell transmission curves for the Johnson B and V filters. The trials are done assuming that all land is either covered by vegetation or is bare soil. Cloud albedos are wavelength-independent, while ocean albedos are zero above 750 nm. Tables of albedos for land types and ocean are given by “Spectral signatures of soil, vegetation and water, and spectral bands of LANDSAT 7. Source: Siegmund, Menz 2005 with modifications.” found at this link: http://bit.ly/14jD3qm.
The B-V of the incident solar spectrum is calculated as a check. The expected solar value of 0.623 (+/- 0.001) is recovered, where the uncertainty is the S.D. of the mean over the trials. The B-V values of the emergent spectra are subject to stochastic noise, so a number (20, each with 2 million photons) of trials is performed and the mean of derived photometric values, with standard deviation of the mean, calculated.
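For illustration, here is a heavily stripped-down Python version of this kind of Monte Carlo. It substitutes a 5778 K blackbody for the Wehrli spectrum, top-hat bands for the Bessell curves, and guessed values for the cloud fraction and albedos, so only the mechanics – not the numbers – match the model described above:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the Wehrli spectrum: photon flux of a 5778 K blackbody
wl = np.linspace(380.0, 750.0, 200)                          # nm
photon_flux = wl**-4 / (np.exp(1.4388e7 / (wl * 5778.0)) - 1.0)
p_solar = photon_flux / photon_flux.sum()

# Guessed parameters (not the values used in the real model)
f_cloud, a_cloud = 0.6, 0.6
f_land, a_land, a_ocean = 0.3, 0.2, 0.06
tau_ray_550 = 0.1   # Rayleigh optical depth at 550 nm

n = 1_000_000
lam = rng.choice(wl, size=n, p=p_solar)

# A photon is Rayleigh-scattered to space with probability ~ lambda^-4
# (the factor 0.5 sends half of the scattered light back to space)
p_ray = 0.5 * tau_ray_550 * (550.0 / lam) ** 4
is_ray = rng.random(n) < p_ray

# The rest reflect off cloud, land or ocean with the assumed albedos
u = rng.random(n)
albedo = np.where(u < f_cloud, a_cloud,
                  np.where(u < f_cloud + (1 - f_cloud) * f_land,
                           a_land, a_ocean))
is_refl = (~is_ray) & (rng.random(n) < albedo)
out = lam[is_ray | is_refl]

# Colour change relative to the Sun, with crude top-hat B and V bands
def counts(l, lo, hi):
    return np.sum((l >= lo) & (l < hi))

dbv = -2.5 * np.log10((counts(out, 395, 495) / counts(out, 505, 595))
                      / (counts(lam, 395, 495) / counts(lam, 505, 595)))
print("delta(B-V) relative to the Sun:", round(dbv, 3))
```

The printed value is the colour change relative to the incident spectrum; with wavelength-independent albedos, as coded here, any blueing comes entirely from the Rayleigh-scattered photons.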
The results for earthlight are:
B-V of earthlight, assuming all land is vegetation-covered:
0.5692 +/- 0.00057
where the uncertainty is the S.D. of the mean.
B-V of earthlight, assuming all land is bare soil:
0.5759 +/- 0.00068
where the uncertainty is the S.D. of the mean.
The difference is 0.0067 +/- 0.00088 with earthshine due to the soil-only model being redder than the vegetation-only model. The difference in B-V is small, but significant.
A validating test is to look at how much flux is taken out at 550 nm – we know that extinction at that wavelength is near k_V = 0.1 so about 10% of the flux is removed. Since Rayleigh scattering is symmetric in the forward and backward directions (for single scattering events) we estimate that half of the light removed from the beam is scattered into space. Thus we expect the flux of Rayleigh-scattered light to be about 5% of the incoming flux. We measure the Rayleigh/incoming flux ratio to be 0.0521 +/- 0.0003. That is, as expected, about 5% of the sunlight is scattered into space by Rayleigh scattering, a similar amount being scattered downwards to give us the blue sky, totalling an extinction at 550 nm of near k=0.1. This assumes that all extinction is due to Rayleigh scattering – at 550 nm that is not a bad approximation.
For the B band at 445 nm we find that on average 2.3 times more flux is taken out by Rayleigh, corresponding to an extinction coefficient 2.3 times larger than in V. That is a bit steep, surely? Chris has measured k_B=0.15 from extinction data.
So, the model suggests that when you switch all land areas from vegetation only to soil only, B-V will redden by less than 0.01. The color of both modes is near B-V=0.57, with realistic land-ocean and cloud fractions, as well as albedos. The model indicates a colour very close to the cruder model from the paper (B-V=0.56 at k=0.1). We use the exercise here to estimate that errors due to the treatment of land and ocean albedos are of order 0.01 mag, and that B-V of earthlight is near 0.56.
Now, what did we find observationally?
We found DS B-V is 0.83. We also know that the sunlight has B-V=0.64 but appears as moonlight at B-V=0.92, so we can infer that one reflection off the Moon reddens light by 0.28 in B-V. The earthlight is also reddened by striking the Moon, and we can infer that our observation of the DS implies earthlight that is 0.28 bluer, or B-V=0.83-0.28=0.55. That is very close to what our model suggests!
Note that all uncertainties here are governed by photon statistics –
not underlying physics or observational skills. Note also that we use
the phrase ‘earthlight’ when we mean the light scattered by Earth and
observed in space. By ‘earthshine’ we shall mean the color of the DS of
the Moon, which is illuminated by the earthlight. We have not even tried
to model the effects of hemispheric geometry, or any effects of
reflectance-behaviour on angles.
——— added later ————-
This was done under an assumption that corresponds to ‘airmass is 1 everywhere’ – this is wrong, of course, since the airmass for surface elements near the edge of the terrestrial disc is larger than 1. It turns out that if you assume that airmass goes as sec(z) then the area-weighted effective airmass for a hemisphere viewed from infinity is 2. We repeat the above trials with twice as much Rayleigh scattering.
The colours we get now are:
Assuming all land is plants : B-V = 0.5063 +/- 0.0009
Assuming all land is bare soil: B-V = 0.5152 +/- 0.00075
The difference is: 0.0089 +/- 0.0012
The Earthlight is now bluer than under the airmass=1 assumption (good), and the difference in turning land pixels from bare soil to vegetation is still small.
If we assume complete ignorance of the soil/vegetation mix we get:
B-V earthlight, for airmass=1 : 0.57 – 0.58
B-V earthlight, for airmass=2 : 0.51 – 0.52.
The difference between airmass=1 (not realistic) and airmass=2 (realistic) is 0.06 mags.
I conclude that the model indicates a B-V of earthlight, at the most realistic assumption of the effective or average airmass involved in Rayleigh scattering, near 0.515 – with a small dependence, of order 0.01 at most, on whether the land is vegetation or bare soil.
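The effective airmass of 2 used above can be verified with a one-line quadrature: each surface element at zenith angle z contributes projected area proportional to cos(z)sin(z), and the projected-area-weighted mean of sec(z) over the hemisphere is exactly ∫sin(z)dz / ∫cos(z)sin(z)dz = 1/(1/2) = 2. In Python:

```python
import numpy as np

z = np.linspace(0.0, np.pi / 2, 200_001)[:-1]  # zenith angle, limb excluded
w = np.cos(z) * np.sin(z)                      # projected-area weight
am_eff = np.sum(w / np.cos(z)) / np.sum(w)     # area-weighted mean of sec(z)
print(round(am_eff, 3))  # -> 2.0
```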
I should repeat the above with VE1 and VE2 filters to see what change in NDVI vegetation index we might expect.
We have mounted the Aurora weather sensor on the roof of DMI and found out how to run it from Linux. A plot is generated automatically and updated each minute. The link is here:
Top frame shows sensor temperature (i.e. how hot the sensor itself is) and the radiance temperature of the sky measured inside a wide cone looking up.
Next frame shows the difference between the two – when it is large the sky is probably clear – i.e. no clouds and no humidity.
Third frame shows the amount of light reaching the sensor – so you see day and night and clouds passing in front of the Sun.
Bottom panel shows rain falling on the sensor.
Future work will now be directed towards producing and statistically validating a warning signal (‘Close The Dome!’) on the basis of these readings.
We might mount a webcam also.
We have the unique B and V images from JD2455945.17xxx in which the ‘halo’ cancels almost perfectly, allowing us to see the colour of the DS. At the time of observation we can ask what the Earth was like. Here is the image generated from the Earth Viewer at Forumilab ( http://www.fourmilab.ch/cgi-bin/Earth ):
Above is the model image of Earth seen from the Moon at the time (Jan 18 2012, about 16:00 UTC) of our observations. Below is imagery from the GOES West satellite. There are also GOES images from 12 UTC and 18 UTC; the above one is from 15 UTC and is the closest in time.
Evidently the Moon and the GOES satellite were not in line, as the aspect is slightly different, but we get the idea: the Southern Ocean and Antarctica were white with cloud and ice, North and South America were partially cloudy, and the sunglint was on the ocean off the coast of Peru, approximately.
Here is a simple model image, as if seen from the Moon at the relevant time. The image is generated from geometry and the ‘NCEP reanalysis cloud product’. You notice some similarities in cloud cover between the satellite image above and this model image. The model only does the perspective projection of pixels onto a sphere – there is no modelling of limb brightening or anything like that.
The images do allow a simple analysis of the effects of various extreme values for land albedo and, applied in two assumed wavelength bands, could be used to set limits on what we expect the colour of the earthshine to be at this particular time. More to follow!
Ingemar was here and looked at the LV code. We seemed to learn the following:
1) The NI Report Generation Toolkit license has run out and needs replacement.
2) The newest set of LV codes are on the NAS under ‘Earthshine Project/LabView Sou…/’
3) The Engineering mode refuses to run because of some error messages related to the AXIS – these are for the FWs, the SKE and the Dome. No matter what you want to do with the Engineering Mode it will always start by initializing all its sensors, and if one of them fails, nothing works! Since we have no dome there will indeed be a problem here!
Ingemar was able to turn off some of this (NB: set it back to as-was, later!) and indeed the code progressed to a further point (when it then ran into other problems). Front shutter was tested while the error-checking was turned off in Eng. Mod. but did not work.
Of the above 1) may not be important as long as scripts are not being read. But 3) seems pretty serious.
Are the new licenses for the MAX correctly installed?
Can the code that refers to the dome be turned off?
Will things work again once the dome is turned off and the other AXIS cables are plugged in?
Why did front shutter also not work – it has no motor?
Added later: Some of the above is now clearer. See here. A number of the errors were related to a wrong LV license number, and the new IP numbers for DMI must be set inside the LV code. For reasons we do not understand the COM-port numbers had changed.
In using the fitting of model to observations at the edge of the lunar DS we have the following situation:
It is seen that the ‘edge’ of the observed image is not ‘sharp’ – it slopes over perhaps 3-4 columns on either side of the midpoint of the jump from sky to disc. The models often have problems matching that low slope and tend to be ‘sharper’ at the edge, at ‘best fit’. We fit +/- 50 columns on either side of the edge, so more pixels on the sky participate in driving the fit than do the ‘edge pixels’.
The shape of the edge is driven by convolution with the PSF – and the PSF core dominates in setting the sharp details. The slope of the DS sky halo is given by the wings of the PSF.
There is a tension between fitting the edge and fitting the halo slope – a lower alfa would broaden the PSF and ‘lift’ the halo wings, increasing the slope while at the same time ‘dulling’ the edge of the model. With more pixels driving the fit at the sky halo the edge is bound to be ignored – thus too-large alfas are probably the result. The pedestal added to the model image to match sky brightness competes with alfa’s effect on the sky halo – part of the level of the sky halo is provided by the pedestal value while part is given by the effect of alfa on the PSF itself.
1) Exponentiation of the canonical PSF to power alfa may not preserve the shape of the PSF near the core in the required way.
2) More weight could be given to the ‘edge pixels’.
3) Slope of the edge is not just due to rounding by the PSF – there is also some ‘jitter’ from co-addition of 100 frames so alfa is used to account for more than the effects of the PSF.
4) The fitted pedestal is sometimes negative – it should probably always be positive due to sky brightness after bias subtraction, but the bias varies by 1 count due to the temperature effect and is scaled, as described elsewhere. The pedestal is often near -0.1 counts – are we sure that the bias has been correctly scaled to the level of 0.1 counts?
Following up on this entry we test the Force method on ideal images. This should enable an understanding of the method’s actual strengths and weaknesses.
We take an ideal image and fold it with two PSF’s – one with alfa=1.74 and one with 1.76. The 1.76 image is the ‘sharper’ of the two, so we now subject this image to a test where it is folded with various PSFs so that it becomes ‘less sharp’, subtract it from the 1.74 image and look at the difference, expressing it as a percentage of the known intensities (known from the ideal image it is all based on).
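The forcing step can be demonstrated on a toy 1-D profile. To keep the test exactly solvable I use a Gaussian stand-in for the canonical PSF: for a Gaussian, raising to the power alfa just rescales the width, and widths add in quadrature under convolution, so the matching kernel is known in closed form. The real PSF has power-law wings, so this is only a controlled toy of the mechanics:

```python
import numpy as np

x = np.arange(-60, 61, dtype=float)
SIGMA0 = 6.0  # width (pixels) of the toy canonical PSF

def psf(alfa):
    """Toy canonical PSF raised to the power alfa. For a Gaussian this
    exactly narrows the width by sqrt(alfa)."""
    p = np.exp(-0.5 * (x * np.sqrt(alfa) / SIGMA0) ** 2)
    return p / p.sum()

# Ideal 1-D lunar profile: sky = 0, DS = 1, BS = 100
ideal = np.zeros(400)
ideal[100:300] = 1.0
ideal[220:300] = 100.0

img_a = np.convolve(ideal, psf(1.74), mode='same')  # the blurrier image
img_b = np.convolve(ideal, psf(1.76), mode='same')  # the sharper image

# 'Force' the sharper image to match the blurrier one: Gaussian variances
# add under convolution, so sigma_t^2 = SIGMA0^2 (1/1.74 - 1/1.76)
sigma_t = SIGMA0 * np.sqrt(1.0 / 1.74 - 1.0 / 1.76)
k = np.exp(-0.5 * (x / sigma_t) ** 2)
forced = np.convolve(img_b, k / k.sum(), mode='same')

ds = slice(120, 200)  # columns well inside the DS
pct = 100.0 * np.abs(img_a - forced)[ds] / ideal[ds]
print(pct.max())  # residuals stay well below 1% of the DS level
```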
Sharper pdf here:
In the upper panel we see the cross-section of the ideal image (black line), the 1.74 image (red) and the 1.76 image (blue). In the lower panel we see the percentage differences. The black curve is the difference between the red and blue curves (errors on the DS larger than 1% everywhere, growing fast as we approach the BS), while the green and red curves show the percentage difference between the 1.74 image and a sequence of images formed by folding the sharper, 1.76, image with PSFs using alfa=1.98, 1.96, …, 1.88 (top green curve has alfa=1.98, next down is red and has alfa=1.96, etc). The difference can, evidently, be less than 1% (for alfa=1.90) but not all over the DS at the same time – there is a tendency for the error to dive faster near the BS and in fact become negative. Near the DS edge to the sky it is possible to have errors in the 0.1-1% interval.
This test is realistic in that the PSF used is developed from real images; in that the procedure of modifying one PSF to another through the use of raising a canonical PSF to a power (alfa) is theoretically underpinned; and in that the brightness levels used in the ideal image are realistic.
The test is unrealistic in assuming that an image with one PSF can be transformed into an image with another PSF by convolution; the procedure of generating PSFs from a canonical PSF through exponentiation is probably only good in the wings of the PSFs.
The test shows that the method is not foolproof. Possibly it is better to first deconvolve an image and then forward convolve it in order to match PSFs – but this procedure needs to be tested on its own.
The test seems to show that for areas near the DS/sky rim a halo can be removed so that errors are in the few tenths of a percent range.
After some deliberations we now return to the ‘edge fitting method’ and its results. There are 19 nights on which more than 17 B and V pairs of closely spaced images exist. We consider those B and V image pairs with negative lunar phases, and look at how the B minus V (albedos, not colours!) values are distributed against lunar phase. We see this:
The error bars are estimated on the basis of error propagation in the B and V albedos from the edge-fitting uncertainty procedure. We have added small offsets in phase to get separation of points. The above actually contains data from 19 nights. To better see things we calculate the nightly median values of B and V albedo, and now plot these:
Here, error bars are given by the standard deviation of the nightly values. Each point is labelled with the JD number. We see three night-after-night sequences: 2456073-76, 2456016-17, and 2456045-47. Given the error bars, these sequences follow the same pattern – a rise in B minus V (albedo) as you go from new moon (left) to half moon (-90 degrees). Since the three sequences paint the same pattern we are confident in saying that ‘Something Is Going On Here!’.
At the moment we do not know if this is geophysics in the shape of Earth albedo changing with phase – or some aspect of the halo (which increases from left to right above) being harder to model as phase grows.
Next it may be appropriate to study what the Earth actually looked like in the above three sequences. The negative phases selected for the above plots all correspond to the Sun illuminating Earth from the Western Pacific/Australia/East Asia and westwards. As time passes on a given day the Moon sinks further in the West (as seen from Hawaii) and more of Asia contributes to the earthshine. On days in a sequence, at the same local hour, less of Earth is illuminated as seen from the Moon, so the contribution to earthshine drops but is more and more ‘sickle-like’. Summarizing:
For the negative phases selected we expect:
a) during a single day – the contribution to earthshine by continental areas increases, ocean contributions decrease,
b) same time of day but consecutive days – the earthshine contribution comes from areas closer and closer to the edge of Earth’s disc.
I think this means that – if clouds are evenly distributed – we should see reddening of the albedos during a sequence taken on one day, and a blueing in a series of days.
We see b)! Have we seen a)? Below is a plot of the B albedo minus V albedo values for each of the days in question plotted against JD day fraction.
sharper pdf here:
We see little evidence of any up- or down-turns in this. Perhaps 2456104 shows an upturn – i.e. the opposite of what we thought we’d see. The remaining narrow sequences above seem to show a slight downward trend as the day passes – i.e. reddening. Perhaps clouds dominated these days, so that little of the surface was visible?
To better understand this post, we now compare the results to those from the edge-fitting method. We remind the reader that the Force method gives us the difference in B-V [magnitudes] between the BS and the DS, while the edge-fitting method gives us albedos for Earth in e.g. the B and V bands. We look at the same range of phases in the two methods and repeat the relevant plots here. First the edge-fitting method results:
The upper panel shows B-band albedo minus V-band albedo against lunar phase; the lower panel shows the same against the average alfa (B-alfa and V-alfa averaged).
We repeat the plot from the Force method here:
Here are the pdfs:
In the upper panels we see a slight dependence on lunar phase – B albedo minus V albedo as well as B-V [mags] do tend to rise towards Full Moon. Note that this implies opposite colours! The rise in B albedo relative to V albedo means that the Earth appears bluer as we approach Full Moon in the edge-fitting method, while larger B-V [mags] in the Force method implies a redder Earth as we approach Full Moon.
In the lower panels we see that the alfas used in the edge fitting method and the derived albedo differences have little relationship with each other, while the ‘incremental alfa’ needed in the Forcing method bears a strong relationship with B-V, as discussed before.
This seems to rule out the worst of the possibilities given in the Force posting – namely that if B and V albedos depended as strongly on alfa as B-V does then we would have a serious problem. In the present situation we are not in that position – but where are we then?
For both methods we seem to have outliers for phases less than -120 degrees. Let us take a closer look at these points. They are from night JD 2456073. The upper plot shows that the B and V albedos we could derive from edge-fitting differ by 0.1 between the upper group and the lower group. From the Force method we see that B-V [mags] differs by 0.5 between these two groups.
The images involved have these names:
There are more data points for the edge-fitting method, in the plots above, than there are for the Forcing method. This is because more B and V combinations were picked and tested for the former. The images picked were observed within half an hour of each other. These images should be visually inspected.
[later:] Tried that – it gets messy: differences in the level-differences of the DSs in B and V can be spotted but do seem to depend on knowing the exposure times, and this is one of the things we do not think worked – better to trust the edge-fitting method, since it is a ‘common mode rejecting method’.
In this blog-entry we discussed a method to force halos to cancel by convolving the sharper of two images in a B/V pair, until their halos matched and could be cancelled by subtraction.
From the many possible B and V image pairs that we have, a number have been selected and the method applied. The criterion used was that the linear slope on the DS disc should be within 1 sigma of 0. The result, for negative lunar phases, is this:
Here is a sharper pdf:
In the upper frame above, we plot the B-V difference between the BS and the DS. On average, B-V seems to be 0.4 bluer on the DS than on the BS. Given that the BS has B-V near 0.92 [Allen, 3rd ed.] we can convert the above to B-V for the DS by subtracting it from 0.92. That would give us B-V values for the DS near 0.5. Fred Franklin found less blue values since his B-V for DS was 0.17 below the BS value. That is at the lower limit of what we find.
We plot +/-1 error bars – they seem to get bigger as phase approaches Full Moon. We note that the scatter is larger than the errors – so something other than the errors we can estimate are causing the scatter.
Is there a trend in the above? By eye, it seems like there is – as phase approaches FM we have more positive ordinates – implying that the DS gets bluer with lunar phase.
The points fall in ‘columns’ corresponding to single nights – so something is causing a colour-change as the night progresses. All data are of course extinction-corrected so we doubt that the nightly colour-change is due to extinction. Is it a true color-change in earthshine?
We can compare the photometric B-V derived in the above way with B-V differences in albedo found by fitting the DS disc edge.
Now for the lower panel:
In the lower frame we plot B-V against the PSF-parameter alfa needed to bring the sharper image into the same state of blurriness as the less sharp image – so that halo cancellation could occur. There is a very clear relationship between alfa and the B-V achieved. I guess this is bad news? Nature (i.e. the B-V colour of Earth) should not depend on a PSF parameter used in the above convoluted way. Oh dear.
I still think it is worth comparing the present results with the ones based on fitting the DS edge. If we get a contradiction it may mean anything – but a strong agreement would imply that the ‘edge-fitting method’ has some gremlins hiding in it – i.e. that results are very dependent on those alfas.
It is also possibly revealing that the nightly points fall in columns [probably sequences once telescoped out to single nights – will check]. It is telling us something about things that change on an hourly basis – this includes airmass (i.e. extinction) and also the Earth-Moon-Observer aspect – so, still interesting!
Measuring B-V on the DS in B and V images with different PSFs: The Case For Forcing them!
We have detected some image-pairs where the B and V halo appear to cancel. We have plenty of other image-pairs where they evidently do not cancel. This is seen by structure on the sky – DS sky in particular as well as ‘slopes’ across the DS in the direction towards the terminator near the BS. We here investigate whether it is possible to ‘force’ the image with a narrower PSF to acquire a halo that resembles the broader image’s halo so that cancellation can occur.
If the differences between the PSF width-parameters (alfa) is small we expect to convolve the sharper image with a quite narrow PSF that only broadens the halo a little. We will use PSFs that are based on our ‘canonical PSF’ and simply raise it to a large number – this will generate a more and more delta-function like peak.
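To illustrate what exponentiation of the canonical PSF does, here is a small sketch. The PSF shape is hypothetical – a Moffat-like profile standing in for our canonical PSF – but it shows the two properties we rely on: larger alfa suppresses the wings relative to the core, and renormalisation keeps the total flux at unity:

```python
import numpy as np

r = np.linspace(0.0, 100.0, 2001)  # radius in pixels
dr = r[1] - r[0]

def psf_from_canonical(alfa):
    """Raise a hypothetical canonical PSF (Moffat-like core with
    power-law wings) to the power alfa, then renormalise so the total
    (azimuthally integrated 2-D) flux stays at unity."""
    canonical = (1.0 + (r / 2.0) ** 2) ** -1.25  # assumed shape
    p = canonical ** alfa
    return p / np.sum(2.0 * np.pi * r * p * dr)

p_lo, p_hi = psf_from_canonical(1.0), psf_from_canonical(1.8)
# Larger alfa steepens the wings relative to the core ...
print(p_hi[-1] / p_hi[0] < p_lo[-1] / p_lo[0])  # -> True
# ... while the flux under each profile is conserved at 1
for p in (p_lo, p_hi):
    print(round(np.sum(2.0 * np.pi * r * p * dr), 3))  # -> 1.0
```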
We identify centered pairs of B and V images that are close in time; identify the image with the narrower PSF – by using previously fitted alfa values; we then broaden the sharper image, while conserving flux, and inspect the difference between the B and V images. The results of such a trial are shown in the Figure:
Here is a less fuzzy pdf file of the above plot:
We see 15 panels showing a ‘slice’ through the disc centre, of B-V (magnitudes). Each panel has an increasing power (alfa) used to raise the canonical PSF to – at first we raise it to 1.5, then to 1.6, and so on. A smaller power will broaden the PSF a lot and thus give us ‘fuzzier’ images. We expect that when the image is too fuzzy we will see slopes in the B-V slice – just as when two observed images in a pair are subtracted and one has a broader PSF than the other. As alfa increases in the figure we reach a point where it is so large that the sharper image is no longer made appreciably more fuzzy – in the limit of large alfa it becomes an almost identical copy of the original image – hence, towards the end of the sequence of plots we essentially see what the slice across the original B-V image looked like.
In each image we notice the DS to the left and the BS to the right – both situated between the two vertical dashed lines, denoting edge of disc. We see a ‘level difference’ between the DS and the BS. We notice the slope of the DS – upwards for large alfa and downwards for smaller alfa – e.g. at alfa=1.7. Somewhere near alfa=1.8 and before alfa=1.9 the slope is horizontal.
We expect the real DS to be ‘flat’ [we need to check this against realistic models] so when alfa is near 1.85 it seems we have induced enough additional fuzziness in the sharper of the two images that the halos cancel and the level difference between DS and BS is what it is in reality, [We need to test this on images with known properties: Student Project!].
We notice also that the level difference depends on alfa, even when alfa has gone past the ‘now DS is flat’ point. This is a warning that incorrect estimation of
the limiting alfa value may give us spurious results for delta(B-V)! We estimate the slope on the DS (between vertical dotted lines) and look for the values of alfa where the DS slope is not statistically different from zero at the 3 sigma limit. We also do this for the BS. We find the midpoints of the fitted DS line and the fitted BS line (red lines in uppermost plot) and take the difference between these, forming a sort of ‘+/- 3 sigma interval of confidence’ for the B-V difference between BS and DS. This is plotted here:
Here is the pdf:
We see that the generous +/- 3 sigma interval allows quite a range in B-V – and also that if we do not estimate the right alfa in the above procedure, but make it too large, then we will underestimate B-V.
For the 3-sigma limits we squint at the above graph and read off upper and lower B-V limits for the BS/DS difference: 0.5 to 0.7.
All of the above has been done for an average of an 11-row slice across the middle of the disc. The method could in general be extended to treat broader strips, or the whole DS – planes could be fitted instead of lines.
First, though, it is necessary to understand if the above is even correct! We need to look at artificial images of the Moon. These should have their own “B-V colours”; then we fold the B and V images with separate PSFs and try to force one to be as fuzzy as the other, and so on. The actual slope of the DS, when fitted with lines or planes, could then be estimated and in fact used as the goal slope for the above procedure. Work work work!
We have noticed that some images have a JD time that differs from the JD implied by the time information in the FITS file header. By checking (almost) all files that have a JD in the filename against the time in the header the following list of suspect images are revealed:
This list gives the JD from the filename – the value is 1 hour larger than the JD implied by the DATE field in the FITS header. None of these appear on the list of good images as determined by Chris and his ‘tunnelling’ code, which inspects magnitudes vs lunar phase.
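The check itself is simple arithmetic; here is a Python sketch (the filename pattern and the example values are hypothetical – only the JD comparison matters):

```python
from datetime import datetime, timezone
import re

def jd_from_date(date_str):
    """JD (UTC) from a FITS DATE string like '2012-01-18T16:05:00'."""
    t = datetime.strptime(date_str, "%Y-%m-%dT%H:%M:%S")
    t = t.replace(tzinfo=timezone.utc)
    return t.timestamp() / 86400.0 + 2440587.5

def jd_from_name(fname):
    """JD parsed from a filename like '2455945.21181.B.fits'
    (hypothetical pattern, with the JD up front)."""
    return float(re.match(r"(\d{7}\.\d+)", fname).group(1))

def is_suspect(fname, header_date, tol_days=5.0 / 86400.0):
    """Flag files whose filename JD is ~1 hour ahead of the header time."""
    dt = jd_from_name(fname) - jd_from_date(header_date)
    return abs(dt - 1.0 / 24.0) < tol_days

print(is_suspect("2455945.21181.B.fits", "2012-01-18T16:05:00"))  # -> True
print(is_suspect("2455945.17014.B.fits", "2012-01-18T16:05:00"))  # -> False
```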
In V band, extinction is about 0.1 mag/airmass.
The Sun’s apparent brightness is -26.74.
0.1 mag of extinction means that about 10% of the light is taken from the sun and spread over the sky, about 21,000 square degrees. Half of this is scattered out of the atmosphere, half down to make the blue sky.
The integrated magnitude of the sky would then be of order -26.74 + 2.5*log10(1/0.05) = -23.5, i.e. 3.2 magnitudes dimmer than the Sun.
Spreading this over half the sky (21000 square degrees or 2.7E11 square arcseconds) gives a reduction in surface brightness by 2.5*log10(2.7E11) = 28.6 magnitudes.
This yields a daytime sky brightness in V band of -23.5 + 28.6 = 5.1 mag per square arcsecond.
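The chain of assumptions can be retraced in a few lines of code (using the exact factor 10^-0.04 rather than the rounded 10% gives essentially the same answer):

```python
import math

m_sun = -26.74        # apparent V magnitude of the Sun
k_v = 0.1             # V-band extinction, mag per airmass
sky_sq_deg = 21000.0  # visible hemisphere, square degrees

f_removed = 1.0 - 10.0 ** (-0.4 * k_v)  # ~9.6% of the flux is scattered
f_down = f_removed / 2.0                # half of that makes the blue sky

m_sky = m_sun - 2.5 * math.log10(f_down)      # integrated sky magnitude
sq_arcsec = sky_sq_deg * 3600.0 ** 2          # ~2.7e11 square arcseconds
mu_sky = m_sky + 2.5 * math.log10(sq_arcsec)  # surface brightness
print(round(m_sky, 1), round(mu_sky, 1))  # -> -23.3 5.2
```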
If we have any daytime exposures with the ES telescope — we could check this and get numbers in other bands as well…
On JD2455945 conditions were such that the BS halo in the B and V images cancelled almost perfectly, giving us the possibility of seeing the DS colour itself. We have a link to material discussing scintillations here.
Is it possible to understand which meteorological conditions led to this unusual situation? At the MLO there is a meteorology tower and data are taken and stored for every minute. A link to the data is here.
A plot of selected parameters for the night in question is here:
The plot shows 3 days on either side of the observation moment. A more legible pdf version is here
a) The rising pressure. The daily cycle is due to heating of the atmosphere,
b) that the observation occurred towards the end of the night – temperature at ground level was dropping,
c) the vertical temperature gradient was positive (it was indeed almost the maximum seen that night) – the air was warmer higher up,
d) wind speed was relatively low,
e) relative humidity was relatively low.
We next plot ‘alfa’ (the parameter that sets the width of the PSF in our model images, found by fitting), against four of the above parameters:
There seems to be no strong pattern. There may be a slight indication that low vertical T gradients allow for larger alfas, and that low relative humidity allows for larger alfas. Ignoring the outliers at low alfa (a sign of a very poor fit or night) we look closely at the dependence on relative humidity:
It would seem that for RH between 10 and 20% a large value of alfa is obtainable. Large alfa implies a narrow PSF. However, for the night JD2455945 (where the halos cancelled) we have unremarkable conditions: alfa is small (1.54), pressure is medium (680 hPa), relative humidity is medium (42%) – only the wind speed is lower (3 m/s) than most other data points. The three values of alfa determined from the same image are very stable, however, differing by only 0.001.
Speculating wildly: Is it because alfa is small that we get halo cancellation? With a small alfa the halos widen and perhaps their ‘tails’ cancel better?
Here are some papers that discuss meteorological conditions and ‘good seeing’ conditions:
Is it possible to ab initio calculate the expected colour of earthshine? Earthshine, in the visual band, is due to diffuse scattering from clouds and the surface of the Earth, and Rayleigh scattering from molecules in the atmosphere. Can we simply calculate the expected B-V colour of Earth, seen from space?
If we assume that the diffuse scattering on clouds preserves the colour of the light (I think this is true – clouds look ‘white’) and if we assume that clouds cover most of Earth (a fair assumption given that the oceans are dark and cover 70% of the Earth) then we can make a model of Earth’s spectrum consisting of a fraction of the solar spectrum plus the complementary fraction of light ‘blued’ by Rayleigh scattering. This model assumes no ‘true absorption’ in the visible bands (also a fair assumption; ozone may actually absorb at UV wavelengths but the solar flux at those wavelengths is small).
By considering the Rayleigh effect and imposing flux conservation we get the following B-V colours for the spectrum of Earth:
where ‘f’ is the fraction of the beam that undergoes Rayleigh scattering. We have elsewhere estimated the B-V colour of Earth (here we do not mean ‘earthshine as measured on the DS’) to be 0.49 +/- 0.02. That would seem to imply that we have a correspondence with (crude) theory if 10% of the beam undergoes Rayleigh scattering.
Since most photometrically measured extinction coefficients are in the 0.1 – 0.2 mags/airmass range it does not seem unlikely.
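That back-of-the-envelope model can be sketched numerically. This is a minimal illustration, not our actual calculation: it assumes a 5800 K blackbody for the Sun, crude Gaussian B and V passbands, and fixes the zero point so the unscattered spectrum reproduces the solar B-V of 0.642.

```python
import numpy as np

def _integ(y, x):
    # simple trapezoidal integral
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

def bv_of_earth(f, T=5800.0):
    """B-V of a model Earth spectrum: a fraction (1-f) of sunlight reflected
    unchanged (white clouds) plus a fraction f Rayleigh-scattered (lambda^-4),
    renormalised so total flux is conserved. Passbands are crude Gaussians."""
    lam = np.linspace(300e-9, 800e-9, 2000)            # wavelength grid [m]
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    planck = (2*h*c**2/lam**5) / (np.exp(h*c/(lam*k*T)) - 1.0)
    ray = planck * (450e-9/lam)**4                     # 'blued' component
    ray *= _integ(planck, lam) / _integ(ray, lam)      # flux conservation
    spec = (1.0 - f)*planck + f*ray
    B = np.exp(-0.5*((lam - 440e-9)/50e-9)**2)         # crude B passband
    V = np.exp(-0.5*((lam - 550e-9)/45e-9)**2)         # crude V passband
    bv = -2.5*np.log10(_integ(spec*B, lam)/_integ(spec*V, lam))
    bv0 = -2.5*np.log10(_integ(planck*B, lam)/_integ(planck*V, lam))
    return 0.642 + (bv - bv0)                          # zero point: solar B-V
```

With f = 0 this returns the solar colour by construction, and B-V falls as f grows; the f that reproduces B-V = 0.49 depends on the assumed passbands, so treat the numbers as qualitative.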
In meteorological observations pyranometers are sometimes used to monitor the clarity of the air during the day – they typically measure the global radiation (all wavelengths, all directions – i.e. from the Sun and from the sky) and the ‘diffuse’ radiation – i.e. only the part of the light not coming directly from the Sun. Perhaps it is possible to estimate the fraction of light removed from the incoming beam by scattering, using such essentially bolometric measurements?
Using a SURFRAD pyranometer data set from Boulder CO, USA, I determine the noontime ratio of Diffuse to Global radiation, and plot it for each day of a whole year:
There is a great deal of variability! This is due to clouds. On perfectly clear days I guess that the ratio of diffuse to global radiation is at a minimum since the contribution to the diffuse is only from the blue sky while the decrease of the direct is also only the Rayleigh. Clouds tend to increase the former and decrease the latter giving a larger ratio.
The incoming beam contributes to the diffuse radiation and to the global radiation. The same amount of light scatters down to Earth as scatters into space so:
Incoming = Global + Diffuse
Then D/I = D/(D+G)
Looking at the plot it is clear that near the middle of the year you can see D/(D+G) near 0.1. This is not exactly the same as saying ‘the bolometric extinction coefficient is 0.1’ but we are close.
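As a toy version of that arithmetic (the numbers below are hypothetical, not SURFRAD measurements):

```python
import numpy as np

def noon_diffuse_fraction(hours, global_wm2, diffuse_wm2):
    """D/(D+G) at the sample nearest local noon, following the reasoning
    above (Incoming = Global + Diffuse, so D/I = D/(D+G))."""
    i = int(np.argmin(np.abs(np.asarray(hours) - 12.0)))
    d, g = diffuse_wm2[i], global_wm2[i]
    return d / (d + g)

# Hypothetical clear-day pyranometer samples: large direct beam, small diffuse.
hours = [10.0, 11.0, 12.0, 13.0]
g = [850.0, 950.0, 1000.0, 960.0]
d = [90.0, 100.0, 105.0, 100.0]
ratio = noon_diffuse_fraction(hours, g, d)   # close to the ~0.1 seen in the plot
```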
Problematic is the large amount of clouds – I shall look for a station that has less clouds!
It seems that Desert Rock, Nevada is less cloudy:
Notice that we are getting D/(D+G) as low as 0.07. The altitude of Desert Rock NV is 1007 m, while the station near Boulder, above, is at 1/4 the altitude (213 m). Does that explain the different minimum ratio?
At MLO they publish a transmission coefficient which is approximately the same we are trying to look at – it would seem that at MLO the transmission can be 92-94% when there is no volcanic dust: http://www.esrl.noaa.gov/gmd/grad/mloapt.html
There is a page that discusses the B-V colour of the sky at night: http://www2b.abc.net.au/science/k2/stn/archives/archive8/newposts/60/topic60381.shtm
We rely on subtracting ‘smooth scaled superbias’ fields from science frames, because of the 20-minute thermostat-oscillation in the bias mean, and a wish to avoid adding noise by subtraction of noisy bias fields. We generate the scaling factor for the superbias by averaging the bias frames taken just before and after the science frame. We seem to be assuming that the bias mean does not change – or does not change irregularly – during this procedure. Let us examine that assumption.
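A minimal sketch of that scaling, with hypothetical arrays (the actual pipeline also handles the bias structure, not just the level):

```python
import numpy as np

def subtract_scaled_superbias(science, superbias, bias_before, bias_after):
    """Subtract a smooth superbias scaled so its mean matches the average
    of the bias frames taken just before and after the science frame.
    This assumes the bias level does not drift irregularly between the
    two bracketing frames -- the assumption examined below."""
    target = 0.5 * (bias_before.mean() + bias_after.mean())
    scale = target / superbias.mean()
    return science - scale * superbias

# Toy example: flat 100-count superbias; bias level has drifted to 100.15 counts.
sb = np.full((32, 32), 100.0)
b0 = np.full((32, 32), 100.1)
b1 = np.full((32, 32), 100.2)
sci = np.full((32, 32), 110.15)      # 10 counts of signal on the drifted bias
res = subtract_scaled_superbias(sci, sb, b0, b1)   # ~10 counts everywhere
```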
We take most bias frames as single frames, but happen to have 50-image stacks here:
We calculate the average of each subimage in these stacks and plot the results:
The camera can take many images per second in ‘kinetic mode’ so the above sequences last something like 10 seconds each. There is evidently some slight trend during that time – at the 0.05% (or 0.15 counts) level compared to the bias mean. This may not seem much, but if the scaled superbias mean is wrong by 0.15 cts it can mean an error on the DS flux of many percent – because the DS fluxes we are using are at anything from near 1 count above sky to maybe 10 counts above sky. For a DS at 1 count the 0.15 count error is 15% while it is 1.5% for the 10-count DS. This is quite serious.
It may be the reason that the halo-removing methods based on ‘subtracting a model halo’ did not work well – they relied on the bias being correctly subtracted previously. The ‘profile fitting method’ apparently worked better, and this is probably because it inherently contains a fit to the sky (and thus the bias, including any insufficiently subtracted bias).
Bias subtraction does more than subtract a mean level, of course – there is also some structure – notably a 0.2 count raised edge along the vertical sides, stretching some 10% into the field. Some row asymmetry is also evident, along with a very slight ‘ripple’ in the column direction (Henriette’s thesis has all this).
Subtracting the scaled superbias is therefore mostly a good thing, but it should be realised that important level-errors may be present causing blind reliance on the ‘complete bias removal’ to be dangerous – better to allow for an additional small pedestal in your modelling.
In considering B-V images, as here, we have to know how far the BS halo reaches onto the DS. We have looked at that before, here. We now revisit this issue. Here is a contour plot of the DS and the halo, with colours so that we can see how far the halo is likely to reach – we see it reach into the sky on the BS – how far do we think it reaches onto the DS?
Consider the yellow end of the greens, for instance – on the sky that contour lies 40-50 pixels from the BS rim – on the DS it lies adjacent to the red area which is the BS. The yellow-green contour therefore does not interfere much with most of the DS. The blue contours on the DS, however, are represented on the sky far away from the BS – so the blue areas on the DS may be interfered with by that part of the halo.
The above is a V-band image. In B-V images we rely on much of the halo being similar in B and V and thus cancelling. The above noticeable gradient in DS brightness is absent in the B-V image.
With the whole telescope system back at DMI we are working to bring it back to life. After that we will try to fix the problems that beset us.
So far (latest activity on top) we have this:
July 8 2013: All PCs on internet reachable from outside, and so is the iboot, and the NAS can be seen from various machines.
July 4 2013: ibootbar is now reachable from woof by using, in browser on woof, ‘ibootbar’. On egregious you need ‘ibootbar.usr.local’. Eigil says woof is reachable from home using woof instead of via nordlyset in ssh menu. egregious should also be thusly reachable from ssh.
July 2 2013: Eigil Pedersen has made it possible to reach the woof linux machine by giving it an IP number. It can in turn see the iBoot via its 192 IP number. Working on the NAS.
June 24 2013: Located the Aurora cloud sensor, and downloaded software. Will try to set up on Wine/Linux machine.
June 20 2013: Working to install static IP-numbers. Connected DEC-axis motor and its limit-switch so that we do not have to rely on dubious short. Devices still do not react when accessed from LabView.
18 June 2013: We figured out that the limit switches must be connected – then the mount comes alive. LabView still freezes when you click on whatever.
June 14 2013: Wired up part of the mount head with the Dome Breakout Box. Still need to understand why LabView is talking about “no licenses” for “Report Generation Toolkit” and the “Internet Toolkit”. Where were the licenses before?
June 12 2013: Booted up the Watchdog, woof and PXI computers. Labview looks strange – some VIs are not working and then none will run?
June 11 2013: Reset all the 115V selectors to 230 V and applied power – nothing blew up!
May/June 2013: Uncrated the control rack and the telescope tube. Inventoried the other boxes.
We help illustrate the theme of ‘simplicity’ for Aberdeen:
These are calibrated B-V images from JD2455945.17, as explained at this link.
The one on the left is generated WITH flatfielding of the B and V images used, while the one on the right is generated WITHOUT flatfielding. They are almost identical, but not quite. Here is the difference between the two:
The pattern is the expected pattern from the CCD chip. The histogram of the values is here:
So there is a difference which is distributed around 0 with S.D. 5 millimagnitudes. The effect of flattening would appear to be small – but recall that these are difference images.
Apart from various problems related to aligning images it is rather nice to work with difference images – problems tend to cancel out! However, note in the above how some of the structure on the DS in the B-V image looks like the features of the flat field – as if perhaps the flatfield we used did not have enough ‘amplitude’ in the stripes.
[Update: See note at end!]
In this post Chris showed that the night of 2455945.1xxx has halos, in B and V, that seem to be at a constant distance across the DS. This implies that the ‘gradient problem’ is absent on this night – perhaps because the night was very clear or the haloes in B and V almost identical. [Inspection of the ‘alfa’ for the profiles shows that V images had alfa in the range 1.547–1.549 while B images had alfa in the range 1.544–1.547. These values seem close but halo shape is very sensitive to alfa, so analyze first!]. It is possible to generate a B-V image for this night using the only good images for B and V (in the sense of ‘being on Chris list of good images’, as explained elsewhere). We used one B and one V image:
both bias subtracted but not flat-fielded. Exposure times were taken from the image headers.
This is the result:
The colours are of course chosen arbitrarily, to aid excitement and imagination! For this image we corrected for extinction using
Vinst(idx) = -2.5*alog10(Vim(idx)) - Vam*0.1 ; kV=0.1
Binst(idx) = -2.5*alog10(Bim(idx)) - Bam*0.15 ; kB=0.15
and calculated B and V images from the B and V instrumental images, in an iterative manner:
BminusV = Vinst*0.0 + 0.92 ; BminusV is an IMAGE
for iter = 0, 10, 1 do begin
  V = Vinst + 15.07 - 0.05*(BminusV)
  B = Binst + 14.75 + 0.21*(BminusV)
  BminusV = B - V ; update the colour image for the next iteration
endfor
Here, ‘idx’ is a pointer that selects for all pixels on the lunar disc – both DS and BS.
Convergence was swift and independent of the initial value. Convergence was to B-V=0.925.
A plot of a ‘slice’ across the disc above is here:
We see a BS B-V value near the dashed line at 0.92 and a DS value between 0.75 and 0.8.
The colour image reveals some structure on the DS. It seems we see a faint flat-field pattern? This should be looked at. On the BS, colour differences seem to be lunar in origin and could be compared to the literature on this subject.
The deep ‘cut’ in the middle is due to some small-distance difference in the B and V halos, we think.
Until we are told otherwise we think this is the first map or image showing the colour of the Dark Side of the Moon, which is also an album by Pink Floyd.
Images in other colours may be forthcoming.
Another version of the image above, here with a legend is here:
On the night of JD2455945.1ish the Earth was illuminated over South America and shone on the Moon. Seen from the Moon the Earth looked like this, from the Fourmilab Earth Viewer:
So, mainly Pacific Ocean but also all of South America. Need some satellite images of this to see how the clouds were distributed!
Note added later: The (far) above is unfortunately quite sensitive to how well the images are aligned. This needs to be looked at before anything else.
Here is a nice link to images of the colour of the Moon obtained in sunshine and with simple DSLR cameras – i.e. jpeg files: http://www.datarescue.com/life/kepler/moontests/raw_vs_jpeg.html
Here is a list of non-lunar images taken at MLO. There are stars, clusters, galaxies, and planets and asteroids. Many are suitable for studies of extinction.
Here are some images of the telescope assembled – perhaps it is possible to use the images to understand which plug goes where? Sharper jpegs are in /home/pth/SCIENCEPROJECTS/EARTHSHINE/JPEGS/ – they are the ‘imgXXX.jpg’ files.
I’ve also added images from the construction in Lund – those images are in same place as above, but are called imagexxx.jpg (xxx from 332 to 347).
This link shows more images from the telescope on MLO.
Here is an image of the cabling at the back of the CCD, during MLO.
Telescope tube plus its attached cables.
Rack of electronics.
Very long and thick bundle of cables.
Rod for Hohlraum source + foot
Smaller Wood box:
LCD monitor + VGA cable
Laser printer Samsung ML-2525
Small white box: Spare lamps + relay
Hohlraum Sphere in its own box
Box of fan filters for rack
2 x Shutter boards + Edmund pack of flat soft things + a small mw laser + a filter labelled ‘IR cut’
European power supply for Axis camera
2 Axis cameras with robofocus lenses, and additional lenses and mounting brackets.
Rain and light sensor – DMI?
Väisala device in ‘Chinese tower’ affair.
DMI IR rain sensor
Väisala humidity sensor HMT100
Cables + spare PXI parts + bracket for adjusting SKE
Uniblitz shutter # VS25ST1-100
Vincent Associates 710P Shutter Interconnect Cable
3 Thor labs FW drivers / electronics, Model FW102B. Implies that we have three Thor labs FWs – the colour FW, the ND FW, and one more – somewhere!
Box of mixed cables – VGA + DVI + 2xUSB repeaters
2 sealed metal envelopes with “D.C.D. Panel Mount SA 4 COMP”
Tools for shutter tuning
Spare Ferrorperm Knife edges on glass.
Mixed VGA, DVI, USB cables along with 2 USB extender cables
US 115V power cables – to PC-type plugs.
Polar alignment scope
Point source lamp + scope
Magma + mounting HW
Dome breakout box
base w. electronics
Power for Axis camera (perhaps it moved to the wooden box?)
A part of the PXI – probably the spare or original PC
Spare HD for PXI
Mount handbox controller?
Aurora cloud sensor?
Various interface cards for camera to PXI?
Additional external cameras?
Here is a collection of links useful in our attempts to power up the rack of control system, at DMI:
iBootBar : http://dataprobe.com/support/index.html
Here are examples of a ‘ghost’ and ‘dragging’:
The dragging is due to no shutter being used – i.e. readout was the only way to terminate the exposure, and hence the frame was illuminated as the image was shifted to the ‘hidden register’. We have a frame-transfer camera. The image is from the test phase in Lund and no shutter was installed then.
The BS ghost is faintly visible: It is a copy of the BS and the lower cusp is seen poking out slightly down and to the left of the real BS. This was due to the CCD camera being aligned almost perfectly along the optical axis of the system – the ghost arises as the shiny CCD surface reflects light back into the optical system and reflections are produced at all optical surfaces – the back of the second secondary lens, the front of the second secondary lens, the back of the first secondary lens, the front of the first secondary lens and the back of the primary objective and its various surfaces. At MLO the camera was tilted at an angle so that the reflection from the CCD did not go back up the system. This tilting did not appear to have any adverse effects on focus etc.
Images from lab hard disc recovered from MLO.
We are starting a new blog over at
where issues related to how to design (and one day, hopefully, build) a better eshine telescope are considered.
The present blog will from now on only discuss how to use the data obtained from the Hawaii telescope.
OK, we updated that site with an entry about a NASA proposal.
Some of the notes discussed below, in the next several postings, are collected in this doc:
There are some considerations of how much albedo change we expect during global warming – and the detectability of such changes is discussed.
But we – and BBSO – calculate albedo by comparing the earthshine measured on the Moon to the intensity of earthshine predicted by a terrestrial model based on a uniform Lambert sphere.
We test how well this works by taking a series of GERB satellite data for several weeks across a year and extract the total flux from the whole-disk images. The MSG satellite bearing the GERB instrument floats over lon,lat=0,0 so always sees the same part of Earth. Johanne has extracted images for every fifteen minutes for several weeks in a year. We plot that (top panel, below).
We use the eshine synthetic code Hans wrote to generate Lambertian images for a full cycle of sunrise over earth. As the code is based on what the Earth looks like from the Moon we pretend that one month is like one day and thereby can extract the phase law for the Lambertian uniform-albedo Earth in order to compare it to the GERB data. We also plot that (second panel, below). [Note that the difference between view from satellite and view from Moon may be important: Sun-Earth-Viewpoint angles should be the same, and if the Sun and Moon are not at similar latitudes as Sun and Satellite we could be generating artefacts in what follows: we should see if we can use the synthetic code for satellite viewpoints. For now we ought to find dates when the Moon was at latitude 0 (like the satellite) and the Sun also same latitudes – tricky to do.]
Lastly we divide Gerb fluxes by Lambertian fluxes, correct for the fact that Geostationary orbit and the Moon are at different distances and multiply that corrected ratio by the uniform albedo used in the Lambertian models. We plot that (bottom panel, below).
We see that the albedo does not come out constant. This is not surprising since the Earth has real clouds that drift around – but that is only what gives the thickness of the thick line of points in the last plot above. The ‘wiggles’ are due to the inadequacy of the Lambertian model. Near New Earth (Full Moon: never observed) the derived albedo rises. Near Full Earth (New Moon: attractive to observe due to strong eshine, but difficult due to Moon close to Sun) the albedo is flat. At intermediary values (20% and 80% of the cycle) the Lambertian albedo is relatively high so that the derived albedo is lowered.
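The last step can be sketched as follows, assuming the standard phase law for a uniform Lambert sphere (the function names are ours for illustration, not Hans’ synthetic code):

```python
import math

def lambert_phase(alpha):
    """Flux phase law of a uniform Lambert sphere (alpha = phase angle in
    radians): 1 at full phase, 0 at new phase."""
    return (math.sin(alpha) + (math.pi - alpha)*math.cos(alpha)) / math.pi

def apparent_albedo(flux_obs, alpha, model_albedo, flux_model_full):
    """Derived albedo: the model albedo times the ratio of observed flux to
    the Lambert-model flux at the same phase (the distance correction is
    assumed already applied to flux_obs)."""
    return model_albedo * flux_obs / (flux_model_full * lambert_phase(alpha))
```

If Earth really were a uniform Lambert sphere the derived albedo would come out flat across the cycle; the ‘wiggles’ we see measure the inadequacy of the model.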
How can we use these insights to understand what Johanne shows in plots of how derived albedo evolves during the nights?
Our aspiration is that the above can give insight into
1) the ‘phase dependency’ we see in derived albedos when we plot all data corresponding to all phases during the morning branch – i.e. Moon setting over Western Pacific/Australia, and
2) the nightly tendency to have falling albedo through the night, for that same branch.
As for 1 the reader should look in the May 3 presentation at slide 22; as for 2 the reader should look at slide 23.
We have to figure out whether the above plots explain any of these sightings. Could the almost quadratic phase dependency seen when all data are plotted be due to the ‘dip’ near 20 and 80% of the cycle? Could the ‘nightly slopes’ be due to the same?
Our efforts to measure albedo precisely and accurately are limited by the natural variability. Albedo is inherently a quite variable property of the Earth – viewed day-to-day [see presentation linked to below for details on the GERB data variability!]. Just how much does global-mean albedo vary – in the short run and over many years?
We have CERES albedo data for the period 2000 to 2012. It is given as monthly mean data for a 360×180 degree grid. We take this and calculate global-mean monthly values. Some results are here:
Data downloaded from: http://ceres.larc.nasa.gov/cmip5_data.php
these are the monthly-mean data prepared by NASA for the CMIP5 effort.
I calculate the global weighted means for the given monthly-mean values of upward and incoming shortwave light at TOA.
The linfit slope is 5.4920926e-08 +/- 5.7758642e-07 in albedo units/day. This is an insignificant slope – per decade it would amount to 0.09% of the mean albedo. Using, instead of the slope, the +/- errors on the slope, we get +/- 0.97% per decade.
The mean albedo we get is 0.218214 – this is small. Publications using these data say 0.29 … there is some problem with accounting for regions’ weights. I omit all areas where the incoming flux at TOA is less than 12 W/m² – this helps avoid Inf’s and NaN’s. I weight each area with the cosine of the latitude of the cell middle.
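One plausible reading of that recipe in code (the grid shapes and the exact weighting are our assumptions, not the actual reduction script):

```python
import numpy as np

def global_mean_albedo(up_sw, down_sw, lat_centres, min_flux=12.0):
    """Cosine-latitude-weighted global mean of the per-cell albedo up/down,
    skipping cells with incoming flux below min_flux W/m^2 (avoids Inf/NaN),
    as described above. Grids are (nlat, nlon); lat_centres in degrees."""
    w = np.cos(np.radians(lat_centres))[:, None] * np.ones_like(up_sw)
    ok = down_sw >= min_flux
    alb = np.where(ok, up_sw / np.where(ok, down_sw, 1.0), 0.0)
    return np.sum(w * alb * ok) / np.sum(w * ok)

# Toy 3x2 grid: uniform albedo 0.22 everywhere, one polar cell in darkness.
lat = np.array([-60.0, 0.0, 60.0])
down = np.array([[300.0, 300.0], [400.0, 400.0], [5.0, 300.0]])  # one cell < 12
up = 0.22 * down
mean_alb = global_mean_albedo(up, down, lat)   # ~0.22
```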
Month Albedo S.D. S.D. as % of mean
1 0.228720 +/- 0.000820013 or, +/- 0.358522 %.
2 0.222061 +/- 0.000866032 or, +/- 0.389997 %.
3 0.206976 +/- 0.000856246 or, +/- 0.413694 %.
4 0.215791 +/- 0.000886397 or, +/- 0.410766 %.
5 0.219209 +/- 0.00129653 or, +/- 0.591460 %.
6 0.219878 +/- 0.00122390 or, +/- 0.556628 %.
7 0.215800 +/- 0.000900811 or, +/- 0.417429 %.
8 0.212036 +/- 0.000893605 or, +/- 0.421440 %.
9 0.203454 +/- 0.000899703 or, +/- 0.442214 %.
10 0.215973 +/- 0.00123064 or, +/- 0.569814 %.
11 0.227462 +/- 0.000582274 or, +/- 0.255988 %.
12 0.231050 +/- 0.00111927 or, +/- 0.484429 %.
What do we learn?
We learn that albedo is remarkably constant when observed by satellite. There is no discernible slope to the data but if we use the 1 sigma uncertainties on the slope as upper limits we find that per decade the albedo has changed less than 1%.
The monthly means follow an understandable annual cycle (maxima in NH and SH winters with minima in March and September). The spread around monthly means amount to 0.25 to 0.6% of the monthly mean value.
Climatologically it is an open question whether albedo ought to change with climate drift. During the observing period global mean surface temperatures have changed by about +0.1 degree C [see http://www.ncdc.noaa.gov/sotc/service/global/global-land-ocean-mntp-anom/201101-201112.png ]. This is during the ‘hesitation period’ that is much discussed presently. During other decades mean T has risen much more – but we have no albedo data from these periods.
Using the above 1 sigma upper limits on slope of 1% per decade we see that if the slope is due to changes in T then the relationship is 1% per 0.1 degree or 10% per degree. This is based on an upper limit and the true value is closer to 0% per degree.
Note that the previous argument is unrelated to the EBM based relationship that works for equilibrium climate only – there, and only there, the expected relationship is -1% per degree.
So, what does that give us? If we were to observe a larger slope we could use the data in the “satellites are getting it wrong” mode – as Pallé et al did for a while. If we measure no slope we can hope to set more stringent limits to the slope than the above satellite values do – can we determine the slope of global mean albedo to better than the 1% per decade above? In this presentation I found upper limits of 0.2% per decade based on a null hypothesis of ‘no albedo change’ and realistic observing limitations. The numbers used were based on Frida Benders CERES data, but they do not differ enormously from the present, longer, ones.
So can we reach 0.2% error per decade, observationally?
This requires a discussion of the single-frame errors we get as well as the period-mean data we can expect. More later!
We have previously considered B-V images of the Moon. This was done with ‘raw’ images – that is, images where the halo had not been removed. Since we have the BBSO linear method implemented and since it does clean up the DS we can also calculate B-V images for the Moon based on these.
We have 55 pairs of B and V images that are close in time (about 1 hour or less apart). Using the standard star calibration relationships that Chris worked out from NGC6633 standard stars, we can convert images to instrumental magnitude images and from there to calibrated B and V magnitude images. We also corrected for extinction since the images were not obtained at the same time.
Since the calibration relationships depend on B-V we have to assume some B-V values and iterate (Chris solves algebraically). The iterations converge quickly. We use only the brightest pixels in each image – i.e. the pixels delineating the BS – for calculating the mean B and mean V values needed to update B-V in each iteration.
The values for BS B-V that we converge to have this distribution:
The mean B-V=0.989, and the S.D.=0.019. The accepted value – e.g. Allen (4th ed), Table 12.14, gives B-V=0.92 (van den Bergh observations?). We therefore have a significant discrepancy. It should probably be noted that our values come from phases near 90 degrees, while the Allen values may be from ‘Full Moon’ conditions.
If we accept the above B-V (BS) values at face value we can continue:
The Sun has B-V=0.642 (Holmberg et al, MNRAS 2006). One reflection off the Moon reddens this value by 0.989 – 0.642 = 0.347. This value will also apply to earthshine that is observed after one reflection off the Moon, even if it is the DS. [We ignore here any colour-dependencies in the lunar surface, mare vs highlands!]
If we can estimate the B-V of earthshine as seen on the lunar surface, we can work backwards to what the B-V of that light was before it struck the Moon – it will be the observed value minus 0.347.
Before trying this we need to understand to which degree the use of BBSO linear images, as opposed to ‘raw’ images, has helped us observe the true colour of the DS – has an important amount of the BS halo been removed from the DS?
We generate centered B-V images and plot the average of 20 rows across the middle of the images:
We see two panels – each panel is the result of using a fixed B image and two different V images – all three taken a short time apart. The black curve is the run of B-V values in the ‘raw’ image – that is, the image where no effort has been made to remove the BS halo. The red curve is from images cleaned with the BBSO linear method. The deep jag in the middle is the BS/DS terminator. The DS is to the left of this and the BS to the right. Since the BS is not altered by the BBSO linear method the red curve covers the black curve on the BS.
On the DS we see that cleaning the image has resulted in a slight reddening of the DS – it was ‘too blue’ in the raw images.
We also see that the ‘linear gradient’ in B-V across the DS is unaltered qualitatively by the cleaning of the image. Why?
If we push on, ignoring the not-yet-understood gradient, and assume that the part of the DS closest to the sky has an un-polluted B-V value, then we can calculate the colour of earthshine before it strikes the Moon, as explained above. First we extracted DS B-V values for that part of the DS disk that is to the left of 90% of the vertical columns on the disk. These values were on average 0.29 +/- 0.05 below the BS value.
If the BS value is given the canonical B-V=0.92, then we have a B-V for the DS of 0.63.
Franklin (JGR 72, no 11, p.2963-, 1967) measured B and V repeatedly on the DS. The difference between his mean B and his mean V is 0.64. We are close, but we are worried about scattered light!
Subtracting the effect of reflection once on the Moon brings us to the value for B-V of earthshine, before it strikes the Moon, that is, as it would be seen in space:
B-V_ES = 0.28.
There is one published B-V value for earthshine, based on Mariner II data in the 1960s. The paper is http://adsabs.harvard.edu/abs/1964JGR….69.4661W by Wildey. Unfortunately I cannot make head or tail of that paper!
Playing a bit more with the above, we can consider the effect of Earth on light – the Sun has B-V=0.642 when it strikes Earth. If the earthshine has B-V=0.28, then the bluing effect of Earth is 0.28-0.642 = -0.36 in B-V.
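The colour bookkeeping above, gathered in one place (all numbers are the ones quoted in the text):

```python
# B-V colour bookkeeping (magnitudes), using the values quoted above.
BV_SUN = 0.642                              # Holmberg et al. (2006)
BV_BS_MEASURED = 0.989                      # our bright-side mean
MOON_REDDENING = BV_BS_MEASURED - BV_SUN    # 0.347 added per lunar reflection

BV_DS = 0.63                                # DS colour, canonical BS B-V assumed
BV_EARTHSHINE = round(BV_DS - MOON_REDDENING, 2)  # 0.28, before striking the Moon
EARTH_BLUING = round(BV_EARTHSHINE - BV_SUN, 2)   # -0.36, the Rayleigh bluing
```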
We update yet again the list of ‘best images’ that Chris has generated by inspection of compliance of absolute magnitudes against lunar phase.
We can remove a few more images by hand inspection. We found about 10 that have ‘cable in view’ as well as various near-horizon problems. The list is here and now contains 525 images:
We note that Johanne is working her way through many images and finding ‘bad focus’ cases. Since we believe these are coincident with ‘not the right filter acquired’ cases, we shall eventually be further updating the list of best images.
We are using co-add mode – that is, we take stacks of 100 images and co-add them to increase the SNR. Since there is RON (Read-Out Noise) in each frame – about 2 ADU/pixel, tests have shown – it is important to know whether a sequence of added frames has a mean that converges.
We now test this directly on the stack of 100 images: “2456015.7742682MOON_V_AIR”. We extract a small square of size (2*w+1) around regularly spaced points covering the image plane and calculate a mean subframe (averaging along the stack direction), the average value of that mean subframe (a scalar for each of n coadded subframes), and plot these. These are shown in the PDF file:
The top panel in each page shows where the point was chosen. w was 4 so subframes are 9 pixels by 9 pixels, centred on the point. Second panel in each page shows how the mean value of the average subframes evolves, along with estimates of the +/-1 sigma error bounds (calculated based on the value the series converges to, or at least evolves to, in the last 10 steps of the series; the known RON (about 2.4 ADU/pixel in all observations, estimated from bias frames differences), the number of pixels in each subframe and the number of subframes co-added). Last panel shows the mean value of each sub-frame used (along the stack direction).
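The convergence test can be sketched like this; the RON value and subframe size follow the text, while the simulated data are ours:

```python
import numpy as np

def running_mean_with_bounds(subframe_means, ron=2.4, npix=81):
    """Cumulative mean of per-frame subframe means, plus the +/-1 sigma
    bounds expected from read-out noise alone: RON / sqrt(n * npix).
    A sketch of the test described above (9x9 subframes, RON ~2.4 ADU/pixel
    estimated from bias-frame differences)."""
    m = np.asarray(subframe_means, dtype=float)
    n = np.arange(1, m.size + 1)
    running = np.cumsum(m) / n
    sigma = ron / np.sqrt(n * npix)
    return running, sigma

# Simulated stack of 100 frames: true level 10 ADU plus RON on 81 pixels.
rng = np.random.default_rng(0)
frames = 10.0 + rng.normal(0.0, 2.4/np.sqrt(81), size=100)
run, sig = running_mean_with_bounds(frames)
# On a well-behaved region (sky, DS) the running mean stays within a few
# sigma of the converged value; on the BS, it evidently does not.
```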
What do we see? In page 1 a point on the sky has been selected and we see a mean value that evolves inside the expected error bounds, and we see that the subframes have mean values with some spread, but no trend.
Page 11 shows a point on the DS. Nice evolution.
Page 24 is on the BS and is very hard to explain. This holds also for points ‘near the BS’.
It seems we have little to worry about on the sky and on the DS, but on the BS we see strange evolution of the running mean! We have before touched on such subjects in this blog, when we tested effects of alignment of images – perhaps this problem was hiding inside the other problem?
[added later:] Chris asked a good question. Here is a plot of the running mean of the whole frame, as frames are taken from the stack. The value is expressed as a percentage deviation from the middle value on the curve – about 1378 counts. A small drift is seen at the 0.05% level from one frame at left to all frames at right. I think this could be due to sky variations – or slight drift of the Moon inside the frame causing ‘light to fall off the edge’. If drift is the explanation we may have an answer for why the mean in a much smaller sub-frame, when near the BS, drifts so much more – image brightness gradients are being sampled in a small subframe. The experiment at top should be redone but with drifts taken out. No large drift was evident so I guess we are learning that a small drift can be very important!
This has an implication for how we measure the albedo – if we use ‘DS patch divided by BS patch’ we run a risk. If we use ‘DS patch divided by total flux’ we are much better off.
The PSF of the VE2-band Arcturus data deviated markedly from the other bands (next to previous post). We check on this further by looking at the PSF profiles of the stars in NGC6633, which were used to calibrate the filters (in this post).
We plot the central flux normalised PSFs in B, VE1, VE2 and IRCUT versus V (shown in green in each panel). The PSF is formed from 95 different stars spread over the frame in each case: e.g. this shows the stars in the IRCUT image.
The PSFs look like this:
Same plot as above but in log-linear form:
None of the PSFs are as sharp as the V band one — they all have more flux at greater distance from the center. Interestingly, the VE2 band profile is not discrepant, so the conclusion is that the VE2 data for Arcturus may simply have been out of focus.
It would seem the lesson from this is that the PSF can vary quite a bit — but note well that this is in the core only — this is not a test of the extended power-law wings at all, since we are only probing to a radius of about 4 pixels (~30 arcsec) with these data.
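The core-profile measurement can be sketched like this (hypothetical names; a synthetic Gaussian star stands in for the 95 real stars):

```python
import numpy as np

def radial_profile(img, xc, yc, rmax=4.0):
    """Flux vs radius around (xc, yc), normalised to the central pixel.
    Only probes the PSF core (~4 pixel radius), as in the text."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - xc, y - yc)
    mask = r <= rmax
    order = np.argsort(r[mask])
    norm = img[int(round(yc)), int(round(xc))]   # central-pixel flux
    return r[mask][order], img[mask][order] / norm

# a synthetic Gaussian 'star' stands in for one of the real stars
yy, xx = np.mgrid[0:21, 0:21]
star = np.exp(-((xx - 10.0) ** 2 + (yy - 10.0) ** 2) / (2 * 1.5 ** 2))
radii, flux = radial_profile(star, 10, 10)
```

Profiles from many stars would then be overplotted per filter, as in the figures.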
In the previous post, we noticed that the VE1 and IRCUT filters have very similar profiles to the V band — at least for the data we obtained pointing at Arcturus.
Here we show a colour map (V-VE1, on an arbitrary magnitude scale: properly calibrated colours will come later).
It’s clear that the deep artifacts around the edge of the moon are much smaller now, compared to the B-V colour maps we have been producing to date.
This shows that the core of the PSF of the V and VE1 filters are quite similar — and the power law tails as well — not just for Arcturus but for lunar images too.
This is a followup to the previous post on the V and B band PSFs.
Here is the stellar profile of Arcturus, shown in B (blue) and V (green). Clearly the B band light has a different core to the PSF than V.
Arcturus data were taken on the night of 2011-03-22. The files used are
Oddly, the V band data are affected by what we called “shutter bounce” — a non-axisymmetric feature to the right (along rows in the CCD) of the PSF, but the B band data show no sign of such a bounce (the PSF profile can be exceedingly cleanly centered, unlike V band). No explanation for this for the moment!
Here are the other bands, all compared to V (green symbols):
PDF version here:
Here are the same profiles, this time with the data averaged into radial bins. The solid green line is the V band profile in each case.
VE1 and V are very similar — but this is what we would expect. IRCUT is not too far off V either. The B and, in particular, the VE2 filters differ significantly from V. The difference between VE2 and all the other filters is huge, so we need to look into that next!
It is clear from the colour maps we have been making of the moon, that the B band PSF must be quite a bit broader than the one in V-band. We have only measured the V-band profile with any precision to date: this is a first go at the B-band profile.
The images used are these:
both taken at an airmass of ~1.3 – so quite high in the sky (which is good).
A wedge shaped region is defined for each, as shown in this figure by the green vectors:
The (V band) image is on a logarithmic scale, with counts/pixel shown along bottom. The red circle marks the fitted (by eye) position of the moon.
Centers and radii are:
V band center (195, 287), radius 147 pix
B band center (197, 291), radius 147 pix
Plot shows log radius (from center of moon, in pixels) versus flux (counts/pixel) in V band (green curve) and B band (blue curve).
It’s clear that the dark side of the moon (DS) is bluer than the bright side (blue curve offset by ~0.05 in the log on the DS).
Plotting distance from the edge of the bright side, we get this:
Focusing on the falloff of light off the brightside edge (coincidentally the B and V band fluxes are almost identical in the two images, so they are effectively normalised at the peak flux) — we see in the figure above that the blue light has considerably broader scattered light than the V band. (Note that the radial scale is now linear, not log).
Next plot shows the falloff of light on a log-log scale, as a function of distance from the lunar edge (less 3 pixels, in order to catch maximum flux).
Clearly blue light is more scattered than V!
Up to now, we have been using a single V-band scattering profile for the light — which is based on this technique of measuring the scattered light off the lunar edge, fitting a power law to the falloff at large distance (the power law tail of the profile, which has a slope of order -2.7 to -2.9 on very clear nights, at least in the V band).
The power law tail in B appears to have the same slope as in V — the B band light beyond ~ 10^1.5 = 30 pixels from the lunar edge out to the limit of the wedge (250 pixels) follows V band closely.
However, there is a large excess of light from the lunar edge out to about 25 pixels. This is messing up our colour maps, producing the dark (very blue) artifacts around the moon.
More work is certainly needed! A deconvolution method to extract the true PSF from full moon images, in all our bands, might be the thing to try next, rather than the “poor man’s” wedge method! (Note that in the wedge method, we correct for the fact that the moon is not a point source, we do not use the profile of the light in the wedge directly!)
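The power-law tail fit described above can be sketched as follows (a synthetic falloff with known slope stands in for the wedge data; names are hypothetical):

```python
import numpy as np

def powerlaw_slope(dist, flux, rmin=30.0, rmax=250.0):
    """Slope of log10(flux) vs log10(distance from the lunar edge),
    fitted over the power-law tail only (beyond ~30 pixels)."""
    sel = (dist >= rmin) & (dist <= rmax) & (flux > 0)
    slope, _ = np.polyfit(np.log10(dist[sel]), np.log10(flux[sel]), 1)
    return slope

# synthetic tail with a known slope of -2.8, standing in for wedge data
dist = np.linspace(5.0, 250.0, 200)
flux = 1e6 * dist ** -2.8
slope = powerlaw_slope(dist, flux)
```

On real wedge data the recovered slope should land in the -2.7 to -2.9 range quoted for very clear nights.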
[edited version:] We have discussed many ways of extracting albedo from our data. We first considered the ‘ratio over ratio’ methods – these consisted of extracting counts from patches on the DS and the BS of the scattered-light-cleaned images; the ratio of the observed DS/BS to the model DS/BS is proportional to the albedo of the Earth. The other method, used more recently, is based on ‘profile fitting’ near the DS sky edge.
We have now arrived at comparing the 5-color albedos derived from these two methods:
Right: Albedos for positive and negative lunar phases (Full Moon is at 0 degrees) from the DS/BS method, where the “BBSO log” method has been used to remove scattered light. Left: Same, but for the profile-fitting method. Note the different vertical scales on the axes. The same nights were considered for both plots, but not all are present in both, due to outlier removal, etc.
We see, for both methods, a rise in the derived albedo as phases nearer Full Moon are considered. This is possibly due to effects of scattered light from the BS which has been incompletely handled by the respective methods. The values found with the two methods are quite similar – apart from increased scatter and less colour-separation in the DS/BS method. The data for the positive and negative branch of the phase diagram are not similar, in either case.
From tests shown elsewhere in this blog we do not expect the halo to interfere with the DS at large absolute lunar phases – i.e. near New Moon. The above diagrams show the lowest values for phases near 110 degrees. There is a slight increase in values beyond this – what can that be due to, if the halo is interfering less and less? Well, we must again remember that the above results are model-dependent and the model may be adding its own fingerprint. For instance, it may be that the synthetic images we use have a phase-dependent error in their representation of lunar and/or terrestrial albedos. Note that the same synthetic models are used in the DS/BS method as in the profile-fitting method.
This question can be addressed by studying the effect of different BRDF models on the ‘bend’ seen above.
We show here the B-V mean value of all pixels lit by sunshine. The data have been selected for being ‘good’, so the scatter is somewhat disappointing. The best one can say is that we do not contradict the published B-V = 0.92 value.
Blue and red points correspond to different parts of the lunar cycle – before and after Full Moon – waxing and waning, whatever you want to call it.
We have often come across good ideas for student projects. Here is a start of a collection of projects – just links, but text can be added to explain more.
How do meteorological conditions determine seeing at the telescope?
Was the bias pattern constant?
Understanding the PSF:
Albedo maps and their use in modelling observations:
Atmospheric turbulence studied via Moon images:
Colour of earthshine – Danjons work:
Image analysis methods – Laplacian method:
In this post we saw that the difference between B and V (magnitude) images could have the shape of a linear slope on the DS and plateau on the BS. We are trying to recreate that using synthetic models. It is surprisingly difficult!
Using V and V images we saw that differences typically had the shape of level offsets – not slopes. In the B-V images we saw linear slopes on the DS. I thought the linear slopes originated in different PSFs in the two filters – different alfa-parameters, for instance.
Well, taking a synthetic image and convolving it twice with two slightly different PSFs and converting to magnitudes and subtracting gives this:
Upper panel shows the ideal image we are using – BS to the right and the rest is DS. Bottom panel shows the difference between the image convolved with alfa=1.72 and alfa=1.72*1.02. DS is columns left of 360 – there is no linear slope. There are plenty of features on the DS above, but none ‘slope away linearly from the BS’.
A straight line in a lin-log plot corresponds to an exponential term. The difference between two Gaussians of different width is probably another Gaussian. Are we learning that the real PSF has a Gaussian term in it that varies between filters? Since V-V images did not show this behaviour, the Gaussian is not caused by the inevitable slight image-alignment problems. Our model PSF is an empirical core with power-law extensions – and the above experiments show that such PSFs do not yield linear-slope differences.
Perhaps we could study the real PSF by studying difference images in a thorough way? Student project!
In posts below we have discussed how to best investigate colour differences. Here we saw that sky brightness and exposure time problems can be detected.
Using selected good images in B and V we found the pairs that also were close in time, and generated B-V images. We noted (discussed here) that B-V on the BS is not always near the published value of 0.92, even in images selected for not having (obvious) exposure time problems. We wonder if the value 0.92 is more of a classical photometer value? That the colour of the whole BS on average is 0.92? Perhaps – but we also wonder if the reflectivity of the Moon has a phase dependence so that the B-V colour, even if a BS average, is lunar phase dependent?
Here, we choose to bring the B-V value of the BS in our selected images to 0.92 in order to study which values we get for B-V on the DS. This is an attempt to see if we can determine what the B-V colour of the DS, and therefore of the Earth, is.
For a range in lunar phases equivalent to illuminated fraction from 35% to 50% we plot the B-V values of a slice across the lunar disc, through the centre and 40 rows wide. We average over the rows:
We have aligned the images by the deep cut, which corresponds to the BS/DS border – the terminator. On the right of this we have the BS and on the left the DS.
We see that the BS is level. We have offset the image values so that the BSs are near 0.92 (by eye). We see that the DS has a slope. We see some level differences in these slopes but the slopes themselves are fairly similar. For one of the profiles the B-V reaches as low as 0.5ish, but there is still a slope.
On the basis of that I think we ought to say that “B-V for earthshine is at or below 0.5 in absolute terms”. Better may be to say that “the DS B-V is 0.42 below the BS B-V level.”
The B-V of sunlight is 0.656 ± 0.005 (Chris has measured a similar value of 0.64). If we also know that the sunlight on the Moon appears to us to have B-V of 0.92 then we can infer that one reflection from the Moon reddens the sunlight by (0.92-0.656)=0.264. The Sun also shines on Earth (lucky us!) and that light has its colour altered by the Earth – when it strikes the Moon and comes back to us it has B-V of 0.5 or less. Knowing what one reflection off the Moon does to the B-V colour we infer that B-V of Earthshine is 0.25 or less. Franklin performed UBV photometry [with a ‘diaphragm’ of diameter near 1 arc minute – meaning he observed areas on the DS, preventing BS light from entering the photometer, but not BS light scattered by optical elements before the photometer] on the Earthshine and found that B-V ‘for Earth’ was 0.17 below B-V for the Sun – this implies that B-V for Earth is 0.47. We have a value (from our lowest value) slightly above that.
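The colour bookkeeping in the paragraph above can be written out as a short check (all values taken from the text; using Chris' solar value of 0.64 in the Franklin step is my assumption):

```python
bv_sun = 0.656       # B-V of sunlight (Chris measured a similar 0.64)
bv_moon_bs = 0.92    # B-V of sunlight after one reflection off the Moon
bv_ds = 0.5          # lowest B-V we measure on the dark side

# one lunar reflection reddens the light by this much
reddening = bv_moon_bs - bv_sun       # 0.264

# undoing one lunar reflection gives the colour of earthshine itself
bv_earth = bv_ds - reddening          # ~0.24, i.e. '0.25 or less'

# Franklin: earthshine B-V is 0.17 below the Sun's (using the 0.64 value)
bv_earth_franklin = 0.64 - 0.17       # 0.47
```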
1) The above slope is rather straight. The slope is there because the halos from B and V have not cancelled. This must be telling us something about the scattered-light halos? Probably that they are linear in a lin-log plot – which is what we assume in the ‘BBSOlog’ method. What would VE1-VE2 images look like? We cannot tell, since the halo is not (primarily) due to atmospheric scattering and Mette knows no rule for how scattering in lenses depends on wavelength. We have to try, before we know if vegetation-edge data can be extracted in this way.
2) We also see that we have no images without a B-V gradient across the DS. If we had had an SKE we might hope to achieve ‘clean’ B-V images. But we don’t. So we can’t.
3) All is not lost – when we fit images profiles directly we compensate for the halo and can therefore extract actual albedo data that does not depend on the presence of the halo.
To check consistency we now look at V minus V images, where the two V images are chosen to be closer in time than 30 minutes but not taken at the same time. We correct for extinction; we convert the raw images (bias-subtracted, of course) into instrumental-magnitude images by calculating the flux from the nominal exposure times and taking -2.5*log10, and then align them and subtract. We should get images of zeros since the fluxes, once corrected for extinction, should be the same – at least on the BS where the Sun is shining. We do this and get a royal mess:
In each frame the insert shows a color-contour plot of the V-V image. The graph shows the usual slice across the middle of the image, averaging over 40 rows.
We see in the upper left panel a clear offset at the terminator – i.e. the DS has a different level but the BSs are similar. Upper right shows a fairly decent pair of images – the terminator is giving some problems and the DS as well as the BS differences are offset from 0 by a small-ish amount which could be caused by an error in exposure time of some 10% or so (not unlikely). Lower left shows that while the two BSs are at the same level the DSs differ violently. The lower right shows a really nice example of two images agreeing.
What is going on? We hand-inspect the above images and see that in the case of the lower left image the sky level is much higher in one of the two images used, although they are observed less than 30 minutes apart. The counts inside Mare Crisium are 3 and 11 in the two cases – i.e. a factor of about 4, or a magnitude difference of 1.5 – well outside the plot frame. So, from this we learn that we have a method to detect stable sky conditions! It also tells us that using images from different filters should be done with great care – even small differences in sky conditions will cause the DS to shoot off!
As it is, these images are merely bias subtracted – there is no individual correction for a ‘pedestal’ due to sky conditions. Luckily we can still use these images for albedo work in that we expressly fit a pedestal term!
What else can we learn? Well, any error in actual exposure time will influence the DS and the BS with the same factor – hence the magnitude differences plotted above will become offsets for both DS and BS – hence, upper right is consistent with ‘wrong exposure time’ in one of the images. Having assigned one filter and received another will have the same effect as an error in exposure time – if the DS and BS are altered by the same factor. I wonder if ‘wrong filter’ could camouflage as ‘wrong exposure time’? Since DS and BS have different colors I doubt it – but we should investigate this.
So, three things learned:
1) Wrong exposure time will lift or depress DS and BS by same amount.
2) More sky brightness in one image than in the other – affects DS only.
3) Image pairs like the ones used for lower right panel, above, are probably both OK.
I have located all ‘good images’ in B and V. That is, all B and V images, made from stacks of 100 images, that are on Chris’ list of ‘good images’. I have furthermore identified all pairs of B and V images that are taken less than 1 hour apart. For all these pairs I calculate a B-V image, by using Chris’ calibration of flux against known standard stars. I allow for an extinction correction based on kB=0.15 and kV=0.10. I get airmasses from the Julian date of the image, and IDL software. I plot a colour contour plot as well as a ‘slice’ across each image. The slice is made up of an average of 40 rows centred around the row that goes through the centre of the B-V image. A total of about 33 image pairs have been plotted in this way. Here is a (large) pdf file containing the plots.
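The magnitude-image step can be sketched as follows (our pipeline is in IDL; this is a hypothetical Python equivalent using the stated extinction coefficients):

```python
import numpy as np

KB, KV = 0.15, 0.10   # extinction coefficients used in the post

def mag_image(counts, exptime, k, airmass):
    """Bias-subtracted counts -> extinction-corrected instrumental magnitudes."""
    return -2.5 * np.log10(counts / exptime) - k * airmass

def bv_image(counts_b, exp_b, am_b, counts_v, exp_v, am_v):
    """Per-pixel B-V from aligned B and V images."""
    return (mag_image(counts_b, exp_b, KB, am_b)
            - mag_image(counts_v, exp_v, KV, am_v))

# toy check: equal fluxes at airmass 1.67 give B-V = -(KB - KV) * 1.67
cb = np.full((2, 2), 1000.0)
cv = np.full((2, 2), 1000.0)
bv = bv_image(cb, 1.0, 1.67, cv, 1.0, 1.67)
```

The calibrated zero points from the standard-star work would be added on top of these instrumental magnitudes.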
There are several strange things to see. Generally we get the profiles shown here. That is – the BS is somewhat flat, while the DS slopes towards the DS sky. Oddities include images where the DS is as flat as the BS – just at another level. Is that ‘extremely good nights’? I think not. In some, the DS is much higher than the BS – these could be images in which the exposure time is incorrect due to shutter problems. Or filter-wheel problems! In many the BS is flat, but not near the 0.92 value we expect based on publications. For small differences from this ‘canonical value’ I think we could be talking about exposure-time uncertainties. If we are at B-V=1.0 instead of at 0.92 we could have an 8-10% error in the exposure time. The times are short and it does not seem unlikely we have that big a problem. Sigh.
I will suggest that we ‘correct the BS value’ to 0.92 by simple shifting, and then consider how our DS B-V values look. With some ‘deselection’ of some obviously bad images we can perhaps arrive at a base set of good B-V images.
I think the result will be that few show any levelling off in the slope of B-V with distance from the BS.
We have few images at small lunar phase (i.e. small illuminated fraction). Chris has analyzed one good pair of B and V images at small phase, but it is not in my pipeline of good images – the automatically detected centre coordinates and radius are way off (because the sickle is so small that automatic methods do not work). Our new student Johanne is inspecting these images by hand and will adjust radius and centre coordinates, and then we should have a few more images to play with at small phase. These may show us if we can get far enough away from the BS that the DS slope in B-V levels off and shows the true earthshine B-V colour.
[Note added later: we have updated this posting using extinction corrections for B and V. kB=0.15 and kV=0.1 were used, and airmass=1.67. JD was near 2456016.82]
In preparing for the EGU 2013 conference, where we will have a poster on color results from the earthshine project, I want to show a ‘B-V’ image of the Moon – since we can, and may be the first to publish such a thing.
Using Chris’ calibration of our filters against standard stars in M41 and NGC6633 I can reduce observed images in any filter to ‘magnitude images’ in the same filters. By taking the difference between the B and V image I thus arrive at a ‘B-V image’. Here is a slice across the middle of that image:
Second image added later: We now correct for extinction:
Here, the BS is to the right of the ‘cut’ in the disk, and the DS is to the left. It appears that the BS is higher than the published value for ‘moonshine’ of 0.92 [see e.g. Allen, “Astrophysical Quantities”, 4th ed, Table 12.14]. Added later: Even after correcting for extinction the B-V is still higher than Allen’s value – we now get B-V near 0.95-1.0. Perhaps now, not so much larger than that value?
I am not quite sure about the DS. There is probably a slope because the halo in B and V are not identical and thus do not cancel. But as the BS is approached the color of the halo does approach the BS value, since that is where the light comes from. So I understand that part.
At the extreme left of the DS there is least influence from the halo. The value is a bit up and down, but appears to be lower than the BS by up to 0.3. If B-V is a smaller number in one place than another, it means that B is relatively brighter there – i.e. on the DS, B is relatively brighter than on the BS. Since these are magnitudes, it means we have shown that earthshine is blue – the Earth is indeed ‘a blue marble’!
Here is the B-V image itself, with cleverly chosen colours to reinforce our message: Sky is masked out, BS is to the right, DS to the left.
I’ll try to redo this for a lunar phase with less halo.
We have been following the mystery of the VE2 filter. Ana Ulla asked a good question: “Do you see the same problems in images not of the Moon?”, so I checked that, using our copious collection of Altair images. I fit a Moffat profile to each well-exposed image of Altair and extracted the sky level. I plot it here as a function of which filter (0=B, 1=V, 2=VE1, 3=VE2 and 4=IRCUT) and as a function of time, for one night (JD 2455845):
In the top panel we see that VE2 (the brownish points) have two levels, and in the lower plot we see that the VE2 level switches from high to low after some time. (Bottom panel has fractional day as x-axis). We also see that the black points (B) increase their level (slightly) at the same point (0.87) where VE2 drops to the low level. Blue points (VE1) have some history like that too. V (red) is quite stable. Green is IRCUT.
If the excess in VE2 (and others) was due to a source of light, then we would hardly see a down-shift in the level. Ana’s idea was that thin cirrus could enhance the nIR band of VE2 (quite like the Johnson I band) – but that does not explain why B rises, unless that is a particular behaviour of cirrus clouds, which I doubt. I do not think anything above is consistent with a light source. I think this is what we have had to conclude from the other results presented in this blog – e.g. the inverse exposure time dependence and the inverse airmass dependence.
I am guessing when I say that I think it is something to do with electronics. The camera knows which filter is being used! (Not).
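The Moffat-plus-sky fit used to extract the Altair sky levels can be sketched like this (parameter names and starting values are hypothetical; noiseless synthetic data stand in for a real stellar profile):

```python
import numpy as np
from scipy.optimize import curve_fit

def moffat_sky(r, amp, alpha, beta, sky):
    """Radial Moffat profile plus a constant sky pedestal."""
    return amp * (1.0 + (r / alpha) ** 2) ** (-beta) + sky

# noiseless synthetic star profile with a known sky level of 5 counts
r = np.linspace(0.0, 20.0, 200)
profile = moffat_sky(r, 5000.0, 2.0, 2.5, 5.0)
popt, _ = curve_fit(moffat_sky, r, profile, p0=[4000.0, 1.5, 2.0, 0.0])
sky_level = popt[3]
```

The fitted `sky` term is what is plotted per filter and per time above.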
We have already shown a strange behaviour of the VE2 filter – or what appears to be a problem related to the use of the VE2 filter. The post is here.
Now we are able to report something even stranger (OK, just as strange then). We appear to have a relationship between the ‘VE2 pedestal’ and the airmass of the observation:
In the upper frame we see the old VE2 pedestal vs exposure time plot. In the lower frame we see that the pedestal is also a function of the airmass! If the pedestal was proportional to airmass we could, perhaps, understand the phenomenon as something showing a flux proportional to the amount of atmosphere (I am thinking of some sort of airglow here) – but NO, the pedestal height is smaller the larger the airmass is!
Is this some electronic problem related to how the telescope is oriented? Then why the inverse proportionality on exposure time too? Why not the other filters?
What is going on here??
During reduction of our data it has become evident that the VE2 filter gives us certain problems. A new problem is that VE2 exposures tend to have a ‘sky component’ that is much larger than in the B,V,VE1 or IRCUT filters. Here is a plot of the relationship between exposure time and the ‘pedestal’ – i.e. the offset that has to be subtracted from the VE2 image to bring the sky level down near to 0 (in bias-subtracted images, of course).
It does seem like the pedestal height is a decreasing function of the exposure time. Why is it any function of exposure time, when the other filters do not show this behaviour?
If there was a source of light in the image frame, then its level should increase with exposure time, surely? Here is the plot for the VE1 pedestal:
The level is much smaller than in the VE2 case – and it does not appreciably depend on exposure time.
What is going on?
In this entry: http://iloapp.thejll.com/blog/earthshine?Home&post=304 we investigated the effects of using a lunar albedo map based on scaling the Clementine map to the older Wildey map vs. scaling the Clementine map so that lunar mare and highlands matched what is published in the literature. [Note: the point being that the ‘Clementine map’ we have available is just a picture file – jpeg! – so pixel values have to be scaled to albedo values somehow.]
We were doing this fitting in the ‘new way’ which is to fit ‘profiles’ starting on the sky on the DS side and extending onto the DS itself, modelling the contribution from the scattered BS light. In doing this we saw that improved fits could be obtained if the lunar map albedo was ‘stretched’ so that dark and bright areas better matched our observations. We did this stretching by eye and were able to improve the formal fits.
We have now compared the fits on 535 images done using the ‘Clementine scaled to Wildey’ map and the ‘Clementine stretched by eye’ maps.
We have found that the RMSE (that is, the root of the mean of the squared residuals [i.e. observed profile minus best-fitted model profile]) is improved using the ‘scaled by eye’ map – in two ways: a subset of images that were poorly fit using the other map are now much better fit, and the mean RMSE of the finally selected images is lower.
We selected ‘good fits’ on the basis of alfa (the PSF-width parameter) having to be in a narrow range and that the relative uncertainty on the fitted albedo should be below a certain limit.
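The selection criterion can be sketched as follows (all numbers and limits below are hypothetical, for illustration only):

```python
import numpy as np

# hypothetical fit results, one row per image: (alfa, albedo, albedo uncertainty)
fits = np.array([
    [1.70, 0.30, 0.004],
    [1.55, 0.41, 0.003],   # alfa outside the allowed range -> rejected
    [1.72, 0.28, 0.020],   # relative uncertainty too large -> rejected
    [1.74, 0.31, 0.005],
])

alfa_lo, alfa_hi, rel_limit = 1.65, 1.80, 0.03   # hypothetical limits
good = ((fits[:, 0] > alfa_lo) & (fits[:, 0] < alfa_hi)
        & (fits[:, 2] / fits[:, 1] < rel_limit))
selected = fits[good]
```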
Mean log10(RMSE) is now near -1.1 in units of counts/pixel. We fit a profile that is 150 pixel columns long – 50 pixels on the sky and 100 on the DS.
Mean relative fit-uncertainty on the albedo is near 1.5% when using ‘counting statistics errors’ on the observation.
We note that the VE2 images more frequently have a larger ‘pedestal’ or sky offset after bias removal than the other filters. While most filters have an offset of near 0.2 counts (+/-0.5 or thereabouts) the VE2 offset is more often near 4 or 5 (+/- 1-2 counts). What is the cause of this? The bias frames surrounding the VE2 exposures have been spot-checked and seem OK – bias is near 400. A few, otherwise perfectly all right, VE2 images have a sky offset of 10-40 counts! Observations from the same night in other filters show nothing like this – some sort of nIR ‘fog’? Does the sky emit nIR light?
Does this have a consequence for the ‘tunnel selection’ of VE2 images, done by Chris? [A data-selection method designed to take into account shutter and filter-wheel problems by requiring that image total fluxes follow a known phase-curve.]
While the ‘new fitting method using stretched lunar albedo maps’ formally works best of all methods we have seen so far, it is very empirical. Later we may be able to use the method to produce a ‘best fitting lunar albedo map’ that improves on the original Clementine map. We may then also be able to compare to what the LRO people (and LLAMAS) are finding.
After the scattered light has been removed we still have the task of converting the observations to a terrestrial albedo. This is done, by us and the BBSO, with the use of a model for how the Moon reflects light. This involves assuming a lunar albedo.
The BBSO has a set of lunar-eclipse observations that they use to find the DS/BS patch albedo ratio – we use the Clementine map. That map is available to us only as an image file. We scaled that image so that the lunar mare and highland albedos correspond to what the literature states.
An alternative would be to scale it to match what lunar albedo maps show. The only digital lunar albedo map we have is the 1970s Wildey map, which Tom Stone gave us. In this blog: http://iloapp.thejll.com/blog/earthshine?Home&post=253
we performed a regression-based scaling of all the pixel values in the Clementine map to corresponding (or interpolated) pixel values in the Wildey map, using the two maps’ coordinate systems.
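The regression-based scaling can be sketched like this (toy arrays stand in for the co-registered map pixels; the linear relation is hypothetical):

```python
import numpy as np

def scale_map(clem, wildey):
    """Least-squares linear scaling of Clementine picture values onto
    Wildey albedos, using co-registered pixel pairs."""
    a, b = np.polyfit(clem.ravel(), wildey.ravel(), 1)
    return a * clem + b, (a, b)

# toy stand-ins: 'Wildey' is a known linear function of 'Clementine' here
clem = np.linspace(0.0, 255.0, 100).reshape(10, 10)   # jpeg-like values
wildey = 0.0008 * clem + 0.05                         # albedos ~0.05-0.25
scaled, (a, b) = scale_map(clem, wildey)
```

On the real maps the pixel pairs come from matching the two coordinate systems, with interpolation where grids disagree.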
We therefore have the possibility of fitting our observations to the lunar map of Clementine scaled to Wildey (let us call that the “Wildey map” from now on) or the Clementine map (“Clementine map” – but note that both are Clementine-based!) scaled to the highlands and mares. This choice has been investigated.
We find that the contrast in the Wildey map is lower than in the Clementine map, which affects the quality of fit.
Same lunar-edge profile (white line) fitted with two different models (red line) – one uses the Clementine lunar albedo map, scaled to match lunar highlands and mare; the other is the Clementine map scaled, using every pixel, to fit the Wildey lunar albedo map. Notice the smaller contrast between highs and lows in the upper image (Clementine scaled to Wildey) compared to the lower (Clementine scaled to highlands and mare).
There is evidently a difference in the quality of the fits – neither being particularly good, failing to match the observed highs and lows. The consequence for the fitted albedo is at the 3% level.
The mean difference in the terrestrial albedo determined using the two maps:
16 albedo values determined using both Clementine-scaled-to-Wildey (vertical axis, in top plot) and Clementine alone (horizontal axis). We see that the two albedos lie near the diagonal – with only a few outliers. The histogram shows the absolute difference between the two albedo determinations – it is about 1% of the albedo value.
As long as the large spread in albedo values (from 0.23 to 0.42) is real and not due to some data-treatment bias, we have an effect from the choice of lunar albedo map that is at the level of the errors from pixel statistics (shown elsewhere in this blog).
We would like the known biases to be less than the effects due to scatter, so perhaps some work on the lunar albedo is in order? We can look at the mean values of the two maps – there was no requirement that mean albedo be conserved, and we can look at various ways of stretching the contrast, while maintaining mean value – this is a nice piece of work for a student project.
Performing a hand-adjustment of the contrast scaling, while maintaining the original Clementine map mean, we can refit the relevant profile. We get this:
Smaller RMSE, and a small but important change in the fitted albedo.
The best data will be those that have almost no interference from the halo at all. Those exposures, if they can be identified, will have the least scattered light on the DS and will therefore be the easiest to clean up. Here we try to identify those exposures. We simulate the effect of the halo on ideal images and measure the amount of flux added by the halo – before any attempts to remove it. We loop over relevant values of lunar phase and ‘alfa’, the PSF-width parameter. We calculate the change in flux for a region near the DS rim and express the change in percent. The contour plot shows contours at 1,2,4 etc percent change. Overplotted are red data points showing locations of real exposures.
We see that the least polluted images are found at large phase (near New Moon) and for narrow PSFs (large alfa values). This is not surprising! We see that we have several exposures taken under conditions that allow just 1,2,4 etc percent change in DS intensity.
If these images are the least polluted by BS-scattered light, we should have the most success when applying our scattered-light removal techniques to these. If they have a 10% effect due to scattered light, and we can remove 99% of that we have reduced the error induced by the BS halo to the 0.1% level.
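A toy version of this contamination estimate (not our actual code; the disk geometry, PSF form and patch placement are simplified assumptions):

```python
import numpy as np
from scipy.signal import fftconvolve

def powerlaw_psf(size, alfa):
    """Toy PSF: core plus power-law wings; larger alfa = narrower PSF."""
    grid = np.mgrid[0:size, 0:size] - size // 2
    r = np.hypot(grid[0], grid[1])
    psf = (1.0 + r ** 2) ** (-alfa)
    return psf / psf.sum()

def ds_pollution_percent(alfa, bs_over_ds):
    """Percent flux added to a DS patch by the BS halo (toy geometry)."""
    yy, xx = np.mgrid[0:101, 0:101] - 50
    disk = np.hypot(xx, yy) <= 40
    img = np.zeros((101, 101))
    img[disk & (xx >= 0)] = bs_over_ds   # bright side
    img[disk & (xx < 0)] = 1.0           # dark side at unit flux
    blurred = fftconvolve(img, powerlaw_psf(31, alfa), mode="same")
    patch = (xx > -30) & (xx < -10) & (np.abs(yy) < 10)   # DS patch
    return 100.0 * (blurred[patch].mean() / img[patch].mean() - 1.0)

# narrow PSF far from Full Moon vs wide PSF near Full Moon
low = ds_pollution_percent(alfa=3.0, bs_over_ds=100.0)
high = ds_pollution_percent(alfa=1.2, bs_over_ds=1000.0)
```

Looping such a calculation over phase and alfa gives contours of the kind shown in the plot.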
During processing a strange signal was found in a frame combined from 100 images. It turns out that in one image of this stack we have what appears to be a meteor or a satellite flash – or something:
The image is from Jan 17 2012: (UTC 2012-01-17T13:28:48). The exposure time was 0.009 seconds! The trail is about 3/4 of the lunar diameter in length – i.e. about 22 arc minutes. The orientation is such that it is travelling almost Due North (or South!).
What is it? Well – it is clearly between us and the Moon! If it is a meteor its height would be something like 50-100 km. The speed would then be 35 – 70 km/s. A satellite in low earth orbit has speed 8 km/s. Since the image is taken from Hawaii at UTC 13:28 it is near midnight on Hawaii – i.e. the Sun is behind the Earth and unlikely to be illuminating a LEO satellite.
An airplane flies at 800 km/hr at altitude 10 km, so the distance covered in 0.009 s is 2 m which would subtend an angle of 1 arc minute. This is no airplane – or it is much closer, in which case we should see details of the plane.
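The back-of-envelope numbers above can be collected in a small script; the speeds and distances are the assumed values from the text:

```python
import math

def trail_arcmin(speed_km_s, distance_km, t_exp_s=0.009):
    """Apparent trail length (arc minutes) in one exposure, for an object
    moving perpendicular to the line of sight."""
    length_km = speed_km_s * t_exp_s
    return math.degrees(length_km / distance_km) * 60.0

print(trail_arcmin(0.22, 10.0))    # airplane: 800 km/h at 10 km -> ~0.7'
print(trail_arcmin(8.0, 1000.0))   # LEO satellite at ~1000 km range
print(trail_arcmin(50.0, 75.0))    # meteor: 50 km/s at 75 km
```

The meteor case comes out near the observed ~22 arc minutes, which supports the interpretation in the text.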
As far as I can tell there is no pronounced peak in meteor activity in January.
We select 6 nights and extract albedos in 4 bands:
Get better pdf file here:
Here are albedos determined for 6 nights. We see 4 filter bands (VE2 is still being processed). Each color represents one night. There are 3 determinations of albedo from each image – hence points come in columns of three.
1) Albedos can differ in level from night to night – e.g. blue and red in panel 1.
2) Albedos can also be almost constant during a night – e.g. purple and green points in panel 1 vs red or blue.
3) Some nights have large scatter – e.g. green vs blue
4) B albedos are higher than V and others
5) VE1 and IRCUT albedos are very similar – the filters are also very similar, so this is good for reproducibility.
6) While some albedos evolve smoothly on a given night IRCUT shows a ‘dip’ – e.g. blue crosses in B vs blue points in IRCUT
Questions of interest:
a) Why are some nights noisy? E.g. the green points. Information on airmass, alfa, bias-frame statistics and RON is available.
b) Can the nightly evolution (i.e. slope) of the points be related to anything happening on Earth?
c) Can the change in slope in IRCUT be related to anything special?
In several posts we have considered the behaviour of variance-over-mean images (VOM). In the ideal world these should be 1 because the noise (once RON is subtracted) is Poisson-distributed. We have seen how this is hardly the case in observed images.
We now consider a test using ideal models with and without imposed image drifts. We generate 100 synthetic images with Poisson noise, and RON as well as a bias.
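A minimal sketch of such a synthetic stack (no drift, flat scene; the signal and bias levels are illustrative, the RON is our camera's nominal value) could look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# 100 synthetic frames: Poisson signal + bias + Gaussian read-out noise.
n_img, ny, nx = 100, 64, 64
signal = 1000.0                      # electrons, flat 'scene' for the test
bias = 400.0                         # electrons
ron = 8.3                            # electrons (sigma of the read-out noise)

stack = (rng.poisson(signal, (n_img, ny, nx)).astype(float)
         + bias + rng.normal(0.0, ron, (n_img, ny, nx)))

# Per-pixel variance-over-mean along the stack direction
mean = stack.mean(axis=0) - bias
var = stack.var(axis=0, ddof=1) - ron**2   # remove the read-noise variance
vom = var / mean                           # ~1 everywhere for pure Poisson noise
print(vom.mean())
```

For an un-drifted stack like this, VOM comes out near 1, which is the ideal baseline against which the drifted cases are judged.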
Below, each set of three panels shows a cut across one image in the synthetic stack, then a slice of the VOM image before image alignment, and then the same slice after alignment. On the left are images that were not drifted, while on the right images were allowed to ‘jiggle around’ by a few pixels.
We see that VOM is 1 on the BS in un-jiggled images, but that DS and sky values fall below 1.
We see a HUGE effect on images that were allowed to drift.
Some of it we understand. Before alignment, VOM rises on structured areas of the drifted images because surface-albedo variations are being introduced for a given pixel along the stack direction. The effect on the DS and sky is much less – perhaps because the Poisson noise is so large compared to the signal variations. After alignment, VOM falls to slightly below 1 on the BS, except near the edge. On the DS and the sky, though, a large lowering is seen. So far it is not understood how this comes about.
Any strange effects seen in observed images will be all the larger since the images do not just drift but also ‘shimmy’ because of atmospheric turbulence.
The effects of aligning images to sub-pixel shifts is part of the above.
Let us learn from this that a noise model is probably hopeless to build in the presence of image shifts – despite realignment – and that sub-pixel interpolation is an unwelcome added complication. We could simply omit the single images with the most drift and use a straight average of the remaining stack. In real images we have the option of not using stacks that have a lot of drift – but we do not know the extent of the ‘shimmy’ for the remainder.
These realizations above have the most impact on our ability to interpret results that discuss the effect of alignment – alignment reduces some problems but probably adds some others.
In this post: http://iloapp.thejll.com/blog/earthshine?Home&post=297 we investigated the effects of alignment on the noise. We measured noise in squares on the final image and therefore had some ‘cross-talk’ from the variations due to surface structure.
Now, we consider the variance-over-mean along the stack direction – that is, for each image pixel we calculate variance and the mean and look at the ratio of these, which we call ‘VOM’. In the perfect world, this image should be 1.
We look at unaligned images, and then at aligned images. The ‘superbias’ was subtracted (but not scaled to the adjacent dark-frame average). A RON of 8.3 electrons was subtracted before VOM was calculated as (SD² − RON²)/mean. The gain used was 3.8 e/ADU. We plot VOM in a slice across the final images:
Top is VOM in the unaligned image, and below is VOM in the aligned image. A lot can be mentioned: First, the surface is not constant, obviously. Second, the effect of alignment is not just a uniform lowering of the VOM that we expect (same mean, less cross-talk between pixels).
In the top image we have VOM near 0.1 (dotted line) on the DS and most of the sky. On the BS the VOM is near 10, apart from the peaks that occur at the intersection with the lunar disc edge. There the variance rises because of jitter between images and the mixing of disc and sky pixels.
In the aligned image VOM is near 2 or 3 on the BS disc, higher at the peaks (so there is still an edge-effect, but less). On the DS and the sky a spatial structure has been formed, slanting inwards towards the BS.
What is going on? Effects of pixel-subshifting? What does the interpolation do? Why is it spatially dependent? Strange flux-nonconservation?
The sub-pixel shifting used is based on the IDL ‘interpolate’ function. In the manual for this function it is suggested to allow for cubic interpolation by setting a keyword “cubic=-0.5”. I did this and recalculated the above. The effect was slight.
A test was performed to see what variance is induced by the interpolation. By shifting an image by a non-integer amount of pixels in x and y and then shifting that image back, and calculating the difference, we see that using INTERPOLATE without ‘cubic’ induces 2-3 times more residual S.D. than does use of INTERPOLATE with cubic=-0.5. The interpolations lost variance compared to the original image. With cubic interpolation the loss of standard deviation relative to the mean is at the 1% level – quite a lot actually. [Note: conservation of flux not the same as conservation of variance]
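The shift-and-shift-back experiment is easy to reproduce – here with scipy's `ndimage.shift` as a stand-in for IDL's INTERPOLATE (order=1 is bilinear, order=3 is cubic spline; not identical to `cubic=-0.5`, but the qualitative comparison holds):

```python
import numpy as np
from scipy.ndimage import shift

rng = np.random.default_rng(2)
img = rng.poisson(1000.0, (128, 128)).astype(float)

def sd_loss_after_roundtrip(order):
    """Shift by a non-integer offset and back; return the loss of S.D."""
    fwd = shift(img, (0.5, 0.3), order=order, mode='nearest')
    back = shift(fwd, (-0.5, -0.3), order=order, mode='nearest')
    inner = (slice(5, -5), slice(5, -5))   # ignore edge effects
    return img[inner].std() - back[inner].std()

loss_linear = sd_loss_after_roundtrip(order=1)   # bilinear interpolation
loss_cubic = sd_loss_after_roundtrip(order=3)    # cubic-spline interpolation
print(loss_linear, loss_cubic)
```

Both interpolations lose variance on a noisy image, but the linear one loses considerably more – the same ranking as found with INTERPOLATE.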
Could it be some effect of bias-subtraction? VOM has dropped most in the halo near the BS. Why?
In a potential upgrade of what we currently have, we might consider going to linux and freeware, for our command system.
Some devices can be operated via the indiserver concept. Devices for which drivers have been written can be commanded from essentially the linux command line. This means that scripting systems are easier to set up – for us at least.
I have been able to control our SXV-H9 CCD camera using this system. Here is a link to how it is done:
There exist indiserver drivers for gphoto2 – that means that Canon DSLR cameras, for instance, can be operated in this way.
A TrueTech filterwheel is being tested. LX200 telescopes can be controlled – i.e. mounts such as ours.
It would be a Good Thing if we could explain observed images in terms of a ‘noise model’ – that is, a model that explains why the signal and the noise bears a certain relationship.
We consider here a stack of 100 raw images taken in rapid succession. The difference between adjacent images should contain only noise, since any objects present in both images subtract out. The difference between two independent series drawn from the same Poisson distribution has a variance twice that of either series, so we apply a factor of 1/2 to the difference image to get the variance image.
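The factor-of-1/2 argument can be verified numerically on two synthetic Poisson frames of a static scene (levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two frames of the same (static) scene, each with independent Poisson noise.
scene = np.full((256, 256), 900.0)           # electrons
a = rng.poisson(scene).astype(float)
b = rng.poisson(scene).astype(float)

# The scene subtracts out; the difference has twice the per-frame variance,
# so half the variance of the difference estimates the per-frame variance,
# which for Poisson noise equals the mean.
var_est = 0.5 * (a - b).var()
mean_est = 0.5 * (a + b).mean()
print(var_est / mean_est)
```

The ratio comes out very close to 1, as the Poisson argument requires.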
We subdivide such a variance image into smaller and smaller squares, and in each square we calculate the variance. We generate an image of the ratio between the variance image and the mean original image, also subdivided. In the case where everything is Poissonian this image should be a surface with value 1. In reality there will be noise in this surface – and there will be structures wherever imperfect object subtraction took place. Here is the result:
In the three rows we show results from different subdivisions – into 8×8, 4×4 and 2×2 squares. In the leftmost column is a histogram of the values of the image, and to the right is a plot of the profile across the middle of the image.
We see, to the right, that variance over mean (VOM) is not 1 everywhere. We are evidently picking up a good deal of variance near lunar disc edges [the image corresponding to the above is a quarter Moon with the BS to the right and DS to the left, situated in the image field center]. We see that the sky manages to have VOM near 1 and that parts of the DS do this, but that most of the BS appears to have VOM>1. Even ignoring the peaks that are due to edges we see a value for VOM near 2 or more [also seen in the rest of the 100-image sequence].
Since we used raw images we have to subtract a bias level. We subtracted 390 from all raw image pixels. We applied the ADU factor of 3.8 e/ADU to all bias-subtracted image values. We then calculated the variance and subtracted the estimated RON (estimated elsewhere at near sigma_RON=2.18 counts, or variance_RON=4.75 counts²; consistent with ANDOR camera manual “8.3 electrons”).
The subtracted bias is a little small – the observed mean bias value is nearer 397, but if we use that value we get strange effects in the images – only a relatively low value of the bias mean gives a ‘flat profile’ for the sky in a slice across the image. This is one poorly understood observation.
We also do not understand why the VOM is nicely near 1 on the DS while it fails to clearly be near 1 on the BS – both areas of the Moon have spatial structure which is bound to contribute to the variance in the difference image during slight misalignments.
That is the second poorly understood thing.
Progress towards a ‘noise model’ is therefore underway, but there is some distance to go still.
How would non-linearity in the CCD influence the expectations we have from Poisson statistics?
We are using stacks of 100 images. These are acquired by the CCD camera in rapid succession – 100 images can be obtained in less than a minute. During that time, small motions of the telescope, turbulence in the air, and the slight change in refraction due to changing airmass may cause image drifts in the stack.
In one approach we ignored the possible drift and just averaged the 100 images. In the other approach we use image alignment techniques to iteratively improve the alignment of the stack images: First we calculate the regular average, then we align all stack images against that average image, then we calculate a new average image on the basis of the alignments and re-align all images against this average and so on. This procedure turns out to converge, and after 3 iterations we can stop and save the last average image.
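The iterative align-against-the-average scheme can be sketched as follows – here with simple integer-pixel FFT cross-correlation standing in for our actual alignment code, so this illustrates the idea rather than reproducing the pipeline:

```python
import numpy as np

def xcorr_shift(ref, img):
    """Integer-pixel (dy, dx) such that np.roll(img, (dy, dx)) aligns img
    to ref; found from the peak of the circular cross-correlation."""
    c = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    ny, nx = c.shape
    if dy > ny // 2: dy -= ny      # map shifts to the range [-n/2, n/2)
    if dx > nx // 2: dx -= nx
    return dy, dx

def align_stack(stack, n_iter=3):
    """Iteratively align a stack of images against its running average."""
    avg = stack.mean(axis=0)
    for _ in range(n_iter):
        aligned = [np.roll(img, xcorr_shift(avg, img), axis=(0, 1))
                   for img in stack]
        avg = np.mean(aligned, axis=0)   # re-average, then re-align
    return avg
```

In practice the stack images need sub-pixel shifts (interpolation), but the structure of the iteration – average, align all frames to the average, repeat – is the same.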
We would like to know the effect of doing this on result quality. We therefore generated two sets of averaged images – the first using the simple first kind above, and the second the iterative method.
We estimate ‘errors’ in the bootstrapping way. That is, we extract MHM averages (mean-half-median, as explained elsewhere) of the DS intensity on raw and cleaned-up image patches, and estimate the statistical error on these values by bootstrapping the pixels inside the patch, with replacement. This bootstrap procedure gives us a histogram of MHM values, and the width of this distribution is a measure of the ‘error on the mean’. We express this error as a percentage of the mean itself, for RAW images and images cleaned with the BBSO-lin, BBSO-log and EFM methods.
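The bootstrap step can be sketched like this. Note that the MHM definition used here – the mean of the central half of the sorted pixel values – is an assumption for illustration (see the post where MHM is actually defined), and the patch values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

def mhm(x):
    """Mean-half-median: assumed here to be the mean of the central half
    of the sorted values (a robust location estimate)."""
    s = np.sort(x)
    n = len(s)
    return s[n // 4: 3 * n // 4].mean()

def bootstrap_error_percent(patch, n_boot=1000):
    """Resample the patch pixels with replacement, recompute MHM each time,
    and return the spread of the MHM values as a percent of MHM itself."""
    vals = np.array([mhm(rng.choice(patch, size=len(patch), replace=True))
                     for _ in range(n_boot)])
    return 100.0 * vals.std() / mhm(patch)

# Illustrative DS patch: ~40 counts with some scatter
patch = rng.normal(40.0, 6.0, 400)
print(bootstrap_error_percent(patch))
```

The width of the bootstrap distribution plays the role of the ‘error on the mean’ quoted in the table below.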
We now compare the results to see the ‘effect’ of performing alignment of stack images:
Table showing errors (in percent of the mean). Lines labelled 1: and 2: show results for ‘one-alfa PSFs’ with and without alignment. The third line shows the effect of using a ‘two-alfa PSF’ and alignment.
We see that there has been a large reduction in errors by using alignment. Raw images improved by 20%, EFM images by 17%, BBSO-lin by 16% and BBSO-log by 14%.
The effect of using a two-alfa PSF on aligned images is small – indeed, all images except EFM experience a small increase in the error (probably not significant).
The effect of alignment on single-alfa PSFs is not investigated.
We conclude that alignment is a beneficial operation. We note that EFM has the largest errors but other arguments imply EFM is the better method to use – this is related to the stronger phase-dependence seen in non-EFM images, and is discussed elsewhere.
Our standard practice to date has been to take 100 exposures of the moon, each with an exposure time of 10 to 40 milliseconds, depending on the brightness of the moon. The exposure time is set so that the bright side attains about 50,000 counts maximum, in order to get good signal but not risk overexposure (saturation of the CCD wells).
We have looked at the differences between successive images in these 100 image stacks. A good night with a fullish moon was chosen (JD2456106)
and the relative difference between the first 50 consecutive pairs of images in the stack computed, and made into a movie — here
(this can be viewed with mplayer in linux)
An example of nine consecutive frames is shown below :
During the sequence of 50 exposures in the movie, the moon only moves about 1 pixel down and 1 pixel to the left, so drift in the position of the moon from one frame to the next is practically negligible.
White and black on the grey scale run from a -10% to a 10% difference with the previous image. Note that the two compared frames are normalised to the same total flux (this is typically a correction of ~1%).
We see plenty of structure in the difference image, which we interpret as being due to air cells — turbulence — above the telescope. The telescope size (3cm) and short exposure times (20 ms) mean that scintillation is very significant (because of the turbulence). In tests on bright stars with our telescope and 10 ms exposures, we were getting scintillation fluctuations in the fluxes of order 50% — very large — as one would expect (Dravins et al 1997a) (paper attached below).
Some of the structures look greatly elongated, and remind us of the “flying shadows” discussed by Dravins et al 1998 (paper attached below). To quote them : “Atmospheric turbulence causes “flying shadows” on the ground, and intensity fluctuations occur both because this pattern is carried by winds and is intrinsically changing”.
Analyse sequences of images in 100 image stacks, and make an estimate of the amount of turbulence present following Dravins et al’s work — for correlation with our fitted parameter alpha of the PSF — to see if the amount of turbulence in the frames has an effect on the PSF’s long power-law tail.
Dravins et al 1998:
Dravins et al 1997a (the scintillation figure is on p 186):
Dravins et al 1997b:
In this post: http://iloapp.thejll.com/blog/earthshine?Home&post=293 I proposed to fit more elaborate PSFs than we have been using.
I present here the results of fitting a 2-parameter PSF. The previous one was characterized by an exponent alfa, and had the form
The new one allows two exponents – one regulating the slope of the PSF up to a cutoff point in terms of radius, and the other taking over outside that point. Thus we have 2 new parameters to fit – exponent and cutoff point.
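A possible form for such a two-parameter (broken power-law) PSF – this is a guess at the functional shape, with illustrative parameter values, not the fitted PSF itself:

```python
import numpy as np

def psf_two_alfa(r, alfa_in, alfa_out, r_cut):
    """Broken power-law PSF: slope alfa_in inside r_cut, alfa_out outside,
    with the outer branch scaled so the profile is continuous at r_cut."""
    r = np.asarray(r, dtype=float)
    inner = r**(-alfa_in)
    k = r_cut**(-alfa_in) / r_cut**(-alfa_out)   # join the pieces at r_cut
    outer = k * r**(-alfa_out)
    return np.where(r <= r_cut, inner, outer)

# Illustrative values close to those discussed in the post
r = np.linspace(1, 200, 400)
p = psf_two_alfa(r, alfa_in=1.7, alfa_out=2.5, r_cut=30.0)
```

The continuity constraint at the cutoff is what keeps this a 2-extra-parameter family rather than 3.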
Using such a PSF, and evaluating success on the parts of the image designated by the mask – a mask consisting of sky on both sides of the Moon, but no polar areas, i.e. a ‘belt across the image’ – we arrive at images with slight changes in the estimated halo on the DS of the Moon and larger changes in the area of BS sky.
This plot summarizes what was found:
The top panel shows the absolute value of the residuals (obs − EFM-model) in a slice across an image after the halo has been removed (along with most of the BS, as is always the case in the EFM method). The PSF used was the new two-parameter PSF.
The middle panel shows the result when the usual one-parameter PSF has been used.
The last panel shows the difference between the two panels, expressed as a percentage of the first one. The illegible y-axis label for the bottom panel shows 10^-2 at the bottom, next label is at 10^0.
Note that absolute values of residuals are plotted above.
We see the DS to the left, the almost-erased BS to the right and the sky on either side.
Clearly, the BS sky has been removed much better in the first image – i.e. with the two-parameter PSF. On the DS we see changes in the range from 1 to a few percent on the sky-ward part, rising to tens of percent near the BS.
This implies that a significantly different amount of scattered light has been removed using the two PSFs – but which is the better result? Judging by the RMSE per pixel on the mask in the two cases, there has been a significant improvement in going from one- to two-parameter PSFs. The RMSE per pixel in the one-parameter case is about 5 [counts/pixel], while it is 0.23 [counts/pixel] in the two-parameter case. Most of this change has evidently taken place in the BS part of the sky.
We need to know if there has been an improvement on the part of the image that matters – the DS!
The best fitted parameters were quite alike – near alfa=1.7. The best-fit cutoff point was near 30 pixels.
It seems that success for us will be linked to our ability to correctly remove scattered light from the BS. This hinges on our understanding of the scattering model – in essence, the PSF.
Currently we use a PSF that is empirical. The core is made up of values from a table generated from observations of bright stars and Jupiter. To that we link an extension, also empirical, that is based on what the far wings of the lunar halo looks like. This PSF is then ‘exponentiated’ during fitting to actual images in the EFM method.
We noted during several posts below that the values for the exponent varied little and only now and then seemed linked to the extinction. This could be an indication that most of the time (during good nights) we are limited by something fixed – such as the optics – rather than atmospheric conditions. On the other hand it could mean that the PSF is not very accurate in its basic form and that the fitting method gives up at some stage, leaving us with an exponent that is somewhat random, and therefore not linked to the atmospheric conditions.
We should also recall what happens during application of the EFM method: We have tested various forms of sky masks for this method – some that allowed fitting emphasis on both the DS and BS sky, and others that emphasized only one side. Common for the ones that focused on either just the DS sky or the BS sky was that the halo shape on the other side was not very good.
Might these things be indicating that a better PSF should be generated or a better way found to apply the fits?
I’d like to suggest the following: perhaps the PSF has a form like

PSF ~ 1/R^alfa(R)

– i.e. with the exponent depending on the radial distance R – instead of the present

PSF ~ 1/R^const_alfa ?
I would like to try to use a piece-wise constant alfa so that the PSF is separated into radial zones, each of which has its own alfa, found by fitting.
More to be added …
In this post: http://iloapp.thejll.com/blog/earthshine?Home&post=289
the importance of the SKE for scattered light was discussed. The images shown, though, were not shown fairly – with intensities scaled to comparable levels. I therefore extracted a line across the BBSO image and a line across our image, at right angles to the SKEs, rescaled the intensities, aligned the plots and get this:
The black curve is from our image (which is a sum of 10 well-exposed images). The red curve shows the cut across a single BBSO image. The BBSO image has only the DS peeking out from behind the SKE, while our image has the BS in full view.
We see entirely comparable ‘halos’! The BBSO image has a more pronounced sloping ‘tail’ onto the black side of the SKE than we do, and more noise. If that sloping tail is ‘halo’ we had a better system than the BBSO!
What does the above mean? It does NOT mean that we have less ‘halo problems’ than BBSO does – because the BBSO expose their DS so that the halo from the BS is not allowed to be formed. Yes there is a similar halo from the DS on their images as there is from the BS on our images – but the halo from our BS is very much stronger than their DS halo.
When the BS halo is small – i.e. near New Moon – we have minimal effect of the BS halo.
We have now reduced all the good data, and have arrived at the intrinsic properties of each image. We now extract the ratio of the DS-patch flux to the total flux and divide this by the similar ratio extracted from the synthetic model. This ratio of ratios is the same as the ‘Lambertian sphere terrestrial albedo’ and is the quantity published by e.g. Goode et al.
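The ‘ratio of ratios’ is simple enough to write down explicitly. The function and argument names here are ours, for illustration; the scaling by the model albedo follows the convention used later in this post (the model albedo was 0.31):

```python
def lambertian_albedo(obs_ds, obs_total, mod_ds, mod_total,
                      model_albedo=0.31):
    """'Ratio of ratios' albedo: the DS-patch flux over total flux in the
    observation, divided by the same ratio in the synthetic (Lambertian)
    model, scaled by the albedo assumed in that model."""
    return model_albedo * (obs_ds / obs_total) / (mod_ds / mod_total)

# If the observed DS/total ratio equals the modelled one, the derived
# albedo is just the model albedo.
print(lambertian_albedo(2.0, 1000.0, 2.0, 1000.0))
```

Anything that makes the observed DS relatively brighter than the model predicts (e.g. a brighter Earth) raises the derived albedo proportionally.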
We select data for airmasses less than 2 and for statistical error less than 1.5%. This error is mainly due to pixel-selection inside the ‘patches’ on the lunar surface (one near Grimaldi, one inside Crisium). The pixels inside a lon-lat box were resampled with replacement (Monte Carlo bootstrap) and the standard deviation of the resulting means was calculated. Errors due to image alignment, improper scattered-light removal, synthetic-model problems and so on are not included.
Here is the plot of extracted albedo against lunar phase for the VE1 filter (the filter with least scatter: IRCUT is almost identical to this plot): [scroll down for discussion]
The colours designate the method used for scattered light removal: red is the EFM method, orange or yellow is the ‘linear BBSO method’, and green are the raw data. Crosses and diamonds indicate the sign of the lunar phase – crosses are for positive phase. Error bars (error due to counting noise, not image alignment etc) have been plotted over the symbols, but are all smaller than the symbols themselves.
What do we see? There is scatter and there is dependence on lunar phase.
First, let us discuss the lunar phase dependency: for small phases we are essentially getting closer to Full Moon and the scattered light is more and more of a problem. The raw data are doubled by phase 80 degrees, showing the importance of removing the scattered light. BBSO-linear is better than no removal at all, while EFM is better than BBSO-linear. There is also another lunar phase dependency – notice how the albedo rises for large lunar phases. Since the halo is smallest here, this is not an effect of scattered light! The albedo is a composite quantity, consisting of quantities measured in both observed and modelled images. The scattered light influences mainly the observations, and mainly at small lunar phases. The reflectance model solely influences the synthetic lunar image, of course – I think we are seeing the influence of the (inadequate) reflectance model at large lunar phases – the albedo rises artificially because of this.
Then there is the scatter. We see ‘daily progressions’ for some connected lines of data points, for instance near 90 and 95 degrees phase. These points represent our candidates for geophysics! I think the ‘progression’ along these lines is due to different parts of Earth (oceans, clouds, continents) rotating into view during these observing sequences.
Below is the same plot, for the B filter. Apart from more scatter we see that the EFM seems able to remove most effects of scattered light, compared to the BBSO-linear method. A stronger ‘upturn’ is possibly seen at large phase.
Chris thoroughly investigated the quality of all data and arrived at a list of the 534 best images: http://iloapp.thejll.com/blog/earthshine?Home&post=277
I have collected all those images in ‘cubes’ consisting of the raw observation, EFM- and BBSO-cleaned images, images of the incidence and emergence angles, the ideal synthetic image, and lunar longitude and latitude images – each ‘cube’ contains centred images for a given ‘stack’ of 100 observations. We thus have 534 ‘good coadded’ images with all the secondary data that belong to that moment. Each cube is about 16 Mbyte uncompressed, and slims down to 5-7 Mbyte upon being bzip2-ed.
I inspected the correlation of these observed stacked images with their synthetic models and found that 8 of them are poorly correlated with their synthetic models. I inspected each case and found that the problem was either focus (the observed image is blurred) or a slight alignment problem.
Omitting those images we arrive at this list of 525 best images:
These are all available as ‘cubes’, upon request.
In the future we might wish to only work with these images, since alignment and image-reversal issues have been dealt with (the observed images swap East and West upon the meridian flip).
We can set up a Mark I dataset, to be updated later as we find issues or solve the remaining (small) alignment issues.
We have some images of the SKE inserted in the ray path of our telescope, during Moon observations. We also have some images of the same from the BBSO telescope, kindly lent to us by Phil Goode.
BBSO images from two separate periods about a week apart. Top row shows histogram-equalized images of the BS (left) and the DS (right) with the SKE superimposed over the BS. Lower row shows the same – BS short exposure on the left and DS long exposure with SKE on the right.
Histogram-equalized image of the Moon with SKE inserted from our own telescope. SKE runs from upper left to lower right and is opaque above that line.
Our image shows a halo around the Moon that stretches behind the SKE – hence we know that the halo is formed to a large degree in the secondary optics! On the BBSO images we do not see this as clearly – either because the BBSO telescope is of a different design or because the exposures with SKE (right column of images) cover much more of the BS. Assuming the latter, we understand why there is no halo behind the BBSO SKE – the BS was effectively blocked, so no bright light entered the secondary optics to cause a halo. On our image the SKE was inserted only experimentally, leaving a large part of the BS to shine into the secondary optics.
So, we learn that the halo is not from atmosphere or primary optics alone – apparently a large fraction of it comes from the secondary optics!
We also see the importance of having an SKE. While the BBSO group uses the ‘linear BBSO method’ to remove scattered light over the DS they have a much smaller problem than us because the halo from the BS is nowhere near as strong as ours is!
We now see how terribly important the SKE is.
Following on from the post above, we now inspect satellite images one week apart in order to understand the natural variability found in satellite images of the same area.
Upper panel: average image pixel value for sequences of MTSAT images, 1 hour apart, for almost one day in March 2013 and a week later (red curve). Image pixel values are in arbitrary units but proportional to pixel brightness. Lower panel: difference between the black and red curves in the upper panel, expressed as a percentage of the mean of the red and black curves.
The difference between the two curves is on the order of 10% and varies from 8% to 18% during one day.
This tells us two things: Albedo (or something proportional to albedo) can vary by roughly 10% over a week. Albedo can vary during one day by almost as much.
This is useful information to have when we interpret the earthshine data.
We keep in mind that the smooth variations in the black and red curves in panel one are due to the day/night cycle – not intrinsic albedo variations: but the difference between the curves and the variability in the difference tells us about albedo variations.
We can reduce our observations to a number that is equal to the ‘Lambertian albedo of Earth’ at the moment of observation. That is, for a sphere behaving like a Lambertian sphere we can find the single-scattering albedo that gives the same earthshine intensity as we observe. This takes phases and all relevant time-dependent distances into account.
In order to understand if the data we get are realistic we wish to compare to satellite images of the Earth. From work published by others [the Bender paper] we are told that the terrestrial albedo varies by many percent from pentad [avg, over 5 days] to pentad.
Many of the best observations we have have ‘sunglint coordinates’ over South-East Asia. One geostationary satellite hanging over that spot is the Japanese ‘MTSAT’. We may be able to get data from it, but for now have ready access only to a Meteosat that hangs over the Indian Ocean.
From the Indian Ocean Meteosat we have extracted a series of images for a given day, in order to start to understand what sort of variability we shall expect on time-scales that are shorter than a pentad.
We have access to half-hourly images from the satellite and have 12 hours of data – 25 images. We take the average of each image and plot it:
The black line is the observed mean intensity of the whole-disc image, and the red curve is a 6th order fitted polynomial. The difference between the black and the red curve has a standard deviation of 0.6% of the mean of the black curve.
The large-scale behavior of the black curve is due to the phase – we see half a day pass as seen from the satellite so Earth changes phase from new to full to new again. Variations over and above that would be due to changes in earth’s reflectivity. There is a pronounced sunglint in the Arabian Sea near Noon.
Some pixels in the image are saturated.
What do we learn from this?
By removing the fitted curve we learn how much variability there is as a day passes. Some of this would be due to the curve not actually being a 6th order polynomial – so the variability we get from the residuals are an upper limit. Fitting a 7th order polynomial lowers the S.D. to 0.4% of the mean.
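The polynomial-detrending step can be reproduced along these lines. The ‘day’ of data below is synthetic and purely illustrative – the point is only that the residual S.D. drops when the fit order is raised from 6 to 7:

```python
import numpy as np

def residual_sd_percent(t, flux, order):
    """S.D. of the residuals around a fitted polynomial, as % of the mean."""
    coeffs = np.polyfit(t, flux, order)
    resid = flux - np.polyval(coeffs, t)
    return 100.0 * resid.std() / flux.mean()

# Illustrative 'day' of 25 half-hourly whole-disc means: a smooth phase
# curve plus a small amount of albedo-like variability.
rng = np.random.default_rng(5)
t = np.linspace(-1.0, 1.0, 25)
flux = 100.0 * (1.0 - t**2) + 20.0 + rng.normal(0.0, 0.3, 25)

print(residual_sd_percent(t, flux, 6), residual_sd_percent(t, flux, 7))
```

Because a higher-order polynomial nests the lower-order one, the residual S.D. can only decrease – which is why the residuals give an upper limit on the true variability.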
It seems that the presence of a sunglint for some of the frames (the sunglint moves on to land – Horn of Africa – in the local afternoon and does not ‘glint’ in the sea anymore) does not generate a ‘spike’ of any kind in the average brightness.
We should remember:
That average brightness is not the same as albedo since the brightness depends on albedo times a reflectance function. But most of the reflectance behavior is removed via the polynomial fit, we think.
That cloud-patterns on this scale hardly vary much.
We do not yet fully understand the processing of the satellite images – are they normalized somehow? Were they all taken with the same exposure time?
On this day at least variability in the albedo was much less than 1%.
We should inspect several days of images to see if the average level differs much from day to day.
We should try to get information from MTSAT, and at higher cadence – we can observe the earthshine with minute spacing.
Try to get more technical information about the satellite images.
The Laplacian method has been discussed here
We now extend the method to include an identical analysis step applied to synthetic images generated for the moment of observations. We then take the ratio of the results from observations and the results from the models. This will eliminate the effects due to geometry and reflectance (as long as the reflectance model used for the synthetic images is correct), leaving only the effects of changes in earthshine intensity. So, this is another ‘ratio of ratios’ result. The ratios involved are, in summary:
(Laplacian signature at DS/ Laplacian signature at BS; in observations) divided by
(Laplacian signature at DS/ Laplacian signature at BS; in models).
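The idea can be illustrated on a toy 1-D limb profile. The second-difference (‘Laplacian’) spike at each sky-ward edge scales with the intensity jump there, so the DS/BS spike ratio tracks the DS/BS intensity ratio; all levels below are made-up illustrative numbers:

```python
import numpy as np

def laplacian_signature(profile, edge):
    """Peak of the absolute second difference ('Laplacian') near an edge."""
    lap = np.abs(np.diff(profile, 2))
    return lap[edge - 2: edge + 2].max()

# Toy limb profile: sky | dark side (10 counts) | bright side (1000) | sky
profile = np.zeros(300)
profile[100:150] = 10.0      # DS; sky-ward edge at index 100
profile[150:250] = 1000.0    # BS; sky-ward edge at index 250

ds_sig = laplacian_signature(profile, 100)
bs_sig = laplacian_signature(profile, 250)
print(ds_sig / bs_sig)   # edge-jump ratio tracks the DS/BS intensity ratio
```

Taking the same DS/BS ratio in the synthetic model and dividing then removes the geometry and reflectance, leaving the earthshine signal – the ‘ratio of ratios’.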
We use only the ‘good images’ identified by Chris in this entry: http://iloapp.thejll.com/blog/earthshine?Home&post=277
We start by inspecting the dependency of (Laplacian signature at DS/ Laplacian signature at BS; in observations) on phase [attention: this is not yet the promised ratio of ratios – just a ratio! :)], in each of the available filters:
We see the morning and evening branches folded to the same side, for comparison. We see an offset between the branches in each filter, and we see the dependency on lunar phase. We next look at the same ratio but from models. Since we do not have colour-information in our models we redundantly now show plots for each filter as if the models were color-dependent – they are not: but the points available in each filter are different, of course. Recall that a model is generated for each observation:
We see a very similar pattern – dependency on phase (but steeper this time). We also notice that the branches are closer together than in the observations.
What does this mean? The branches are separated in the observations because light and dark areas are distributed differently on the eastern and western halves of the lunar disc facing Earth. Our Laplacian method samples pixels right at the edge of the lunar disc and has evidently met areas of different albedo. The separation seen in the observations is not reproduced in the models. This could imply that the model albedos are incorrectly distributed (i.e. maria, craters etc. in the wrong places), but that is somewhat unlikely, as we use one of the most detailed lunar albedo maps available, from the Clementine mission. However, our model does not sample colour – the map used, and thus the ratio between ‘light’ and ‘dark’, is taken from the 750 nm Clementine image [I think; must check!]. This map was stretched to match the older Wildey albedo map [see here: http://iloapp.thejll.com/blog/earthshine?Home&post=253 ], which was made in such a way that the ‘filter’ the Wildey map corresponds to is a combination [see: http://iloapp.thejll.com/blog/earthshine?Home&post=286 ] of the Johnson B and V filters. The branch spacing is best reproduced, relative to the models, in the V-band observations. This implies we have some colour information about the two halves of the Moon – or a tool for scaling the lunar albedo map when different colours are to be considered.
We next inspect the ratio of the observations and the models – the ratio of ratios:
For each filter the derived albedo is shown – it is the ‘ratio of ratios’ spoken of above and is identical to the quantity used in the BBSO literature – the albedo relative to a Lambertian Earth-sized sphere. The model albedo used was 0.31, so the ‘actual’ albedo derived is the above times 0.31.
We see that there is a phase-dependence in this – particularly in one branch, with the observations being relatively brighter than the models at phases nearer Full Moon, compared to phases nearer new Moon.
Since we have seen the EFM method produce less phase-dependent albedos [see here: http://iloapp.thejll.com/blog/earthshine?Home&post=252 ] we think the Laplacian method needs further development and investigation before it can be used.
It is worth listing why the Laplacian method might be useful:
1) It does not require careful alignment of model and observation. The signal is extracted from a robustly defined location in each image.
2) It is a ‘common-mode-rejecting’ method and is not dependent on image resolution.
3) It is relatively fast.
I have been looking at the use of GPUs versus CPUs for our scattered light analysis. We need to be able to convolve artificial Lunar images (outside the atmosphere) with the instrument PSF. GPUs offer a considerable speed advantage.
First look (CPU versus GPU):
Upper left panel: artificial lunar image outside the atmosphere. This artificial image is then convolved with a 2-D Gaussian-like PSF which has fat (power-law) tails, and which closely reproduces what we see in real data.
Upper right panel: convolution using 2-D FFT code running on a CPU
Lower left panel: convolution using 2-D FFT code running on a GPU
Lower right panel: the ratio of the two methods, i.e. the ratio of the two previous panels
There is a lot of structure in there, mainly images of the lunar crescent turning up in different places – at a level of about 0.1% of the intensity.
IMPORTANT: the CPU code was written in double precision, whereas the GPU was in single precision.
(The above reproduces with more explanation an earlier post)
Notes: The CPU code calls the FFTW3 libraries from Fortran (Dec’s ifort compiler is used), just using the standard Fortran to C wrappers provided with FFTW3. The GPU code is in written in CUDA.
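The comparison can be sketched in a few lines of Python/NumPy – a toy stand-in for the Fortran/FFTW3 and CUDA codes, with an illustrative crescent image and PSF. Note that NumPy's FFT always computes in double precision, so reduced precision is emulated by rounding the intermediate transforms; the toy illustrates the comparison only, not the exact few x 0.1% artefact level seen in the real codes:

```python
import numpy as np

def fft_convolve(image, psf, single=False):
    """Cyclic convolution via FFT. NumPy's FFT itself always computes in
    double precision, so 'single precision' is emulated here by rounding
    the intermediate transforms to 32 bits (complex64)."""
    fa = np.fft.fft2(np.asarray(image, float))
    fb = np.fft.fft2(np.asarray(psf, float))
    if single:
        fa, fb = fa.astype(np.complex64), fb.astype(np.complex64)
    return np.fft.ifft2(fa * fb).real

# toy stand-in for the ideal lunar image: a bright crescent on a dark frame
n = 128
y, x = np.mgrid[:n, :n]
r = np.hypot(x - n / 2, y - n / 2)
image = np.where((r > 30) & (r < 40) & (x > n / 2), 1.0e4, 1.0)

# Gaussian-like core with fat power-law tails, crudely mimicking our PSF
psf = 1.0 / (1.0 + (r / 2.0) ** 3)
psf /= psf.sum()

c64 = fft_convolve(image, psf)               # 'double precision'
c32 = fft_convolve(image, psf, single=True)  # emulated 'single precision'

# fractional discrepancy, relative to the peak of the double-precision result
frac = np.abs(c32 - c64) / np.abs(c64).max()
print("max fractional difference: %.1e" % frac.max())
```

The convolution here is cyclic, which incidentally is the same ‘cyclical overlap’ effect noted elsewhere on this blog for shifted images.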
Second look (CPU only, single versus double precision):
The plot above shows the ratio of the single precision CPU versus double precision CPU (i.e. no GPU results shown on this plot). There is similar structure in the ratio – and at about the same level as the GPU tests gave, i.e. discrepancies at the level of a few x 0.1% of the intensity.
Third look (CPU in double precision, renormalisation)
In this plot we compare CPU double precision convolutions of the ideal lunar image, with and without “min/max renormalisation”. (Min/max renormalisation means scaling the input image so that the smallest value in the frame is 0.0 and the largest value is 1.0.)
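In code, the renormalisation step is simply (a sketch, not the actual Fortran):

```python
import numpy as np

def minmax_renormalise(img):
    """Scale an image so its smallest value becomes 0.0 and its largest 1.0."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)
```

For example, applied to the values 2.0, 4.0, 6.0 it returns 0.0, 0.5, 1.0.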
The ratio panel of the two convolutions (bottom right) shows noise only, and at a very low level – 1 part in 1E7. Highly acceptable!
Fourth look (CPU, single precision, renormalisation)
This plot shows the same as the previous one — but with single precision rather than double. The artefacts are back, at the same old level of a few x 0.1%!
We might already be able to conclude from the above that double precision FFT on a CPU is robust (negligible artefacts), but that single precision, on CPU or GPU, produces similar-sized (few x 0.1%), and thus slightly worrying, artefacts.
But I need access to a double precision GPU to test this. Hope to do so next week!
The acid test will be the results of comparing double precision on a CPU to a GPU.
On this blog we have previously investigated the Laplacian method of estimating earthshine intensity. [See here: http://iloapp.thejll.com/blog/earthshine?Home&post=273 and here: http://iloapp.thejll.com/blog/earthshine?Home&post=272 .]
One of the limitations we realized then was that image resolution influenced the results from the Laplacian edge signature – the more the edges of the image are smeared the lower the Laplacian signature estimate will be.
We now revisit the method after realizing that the whole image is affected in the same way by resolution issues – the intensity estimate derived from the DS and BS edges will both depend on image resolution in the same way. The DS estimate is proportional to earthshine intensity, while the BS estimate is proportional to ‘moonshine’ intensity (plus earthshine – a vanishing contribution). The ratio of the two estimates will be independent of resolution – i.e. the ratio has the ‘common mode rejecting’ property.
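This common-mode rejection can be illustrated with a toy 1-D profile (the levels and blur kernels below are illustrative only, not our data): blurring shrinks both Laplacian edge signatures, but their ratio is unchanged.

```python
import numpy as np

def laplacian_signature(profile, edge, halfwidth=10):
    """Size of the Laplacian edge signature: max minus min of the
    second difference in a window around the edge position."""
    lap = np.diff(profile, 2)
    window = lap[edge - halfwidth:edge + halfwidth]
    return window.max() - window.min()

# sky = 0, 'DS' level 10, 'BS' level 1000; edges at pixels 50 and 150
profile = np.concatenate([np.zeros(50), np.full(100, 10.0), np.full(50, 1000.0)])

ratios = []
for w in (1, 3, 5, 9):  # increasing blur, i.e. worsening resolution
    blurred = np.convolve(profile, np.ones(w) / w, mode="same")
    ds = laplacian_signature(blurred, 50)
    bs = laplacian_signature(blurred, 150)
    ratios.append(ds / bs)

# each signature shrinks with blur, but the DS/BS ratio stays put
print([round(rr, 4) for rr in ratios])  # -> [0.0101, 0.0101, 0.0101, 0.0101]
```

Both signatures scale with the step height through the same (linear) blur, so the ratio is 10/990 for every blur width.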
We convolved ideal images of the Moon with the alfa=1.8 PSF and estimated both the ‘step size’ at the edges (on DS as well as BS) and the Laplacian signature. We plot them below:
The top panel shows the step size estimate against the Laplacian signature estimate on the DS only. We see an offset, with Laplacian estimate being smaller than the step size estimate. This is caused by the resolution issue – the edge is fuzzy and the Laplacian is degraded. The step size is estimated robustly from the original ideal image (i.e. before the convolution is performed) and is thus not dependent on image resolution.
The bottom panel shows the ratio of DS and BS estimates – x-axis is the step size estimator and the y-axis is the Laplacian estimator. We now see that the dependence on image resolution is gone from the Laplacian estimate. The correlation between the two estimates is good, at R=0.92, but not perfect. Noise was not added to the images. Estimates of Laplacian were performed on the line through the disc centre. The step size estimator is based on the average of 9 lines through the disc center.
[Added later:] Since we are working with synthetic models we know the actual earthshine intensity in each image, so we can compare what the actual intensity is to what is found with the ‘step’ and the ‘Laplacian signature’ methods:
In the first panel we see what we saw above – that the step and Laplacian methods are rather consistent. In the panel below we compare to the actual earthshine intensity.
Since the Laplacian signature method (and the step size method) literally expresses the DS/BS intensity ratio (which is only proportional to the earthshine intensity), we have different values along the x- and y-axes. The geometric factors having to do with distances and Earth’s radius are not compensated for.
The main result is that the relationship between actual earthshine intensity and the Laplacian signature method (and by extension, the step size method) is not quite proportional – there is a slight curve. In interpreting the above we should keep in mind that the Laplacian estimate of DS illumination from the edge derivatives plays a different role than the estimate of BS illumination: the former is not very geometry-dependent, while the latter is. That is, earthshine is due to light from a source we are sitting on while observing, whereas the BS illumination is more angle-dependent, in that the reflectance properties of the Moon come into play to a larger degree. At the moment I am not sure whether there is an angle-dependence ‘along the BS edge of the disc’ that can cause a problem in interpretation. Will need to look at this.
Another thought: is it possible to make a calibration relation between actual earthshine intensity and the Laplacian DS/BS signature estimate?
We do have many problems to contend with – but now and then we are confirmed in what we set out to do: Here is an example of the scattered-light reduction of an observed image (2456034.114etc) where the EFM Method seems to be doing well compared to BBSO linear. We compare to a synthetic model image generated for the moment of observation (i.e. the libration and geometric factors are representative of the observing situation):
Top panel: slice across the Moon at row 296 so that Grimaldi near the edge is transected. The black curve is the synthetic model unconvolved – it is in units of W/m². The red curve is the scaled and offset profile along the same row of the EFM-cleaned image; and the blue curve is the BBSO-linear cleaned image identically scaled and offset.
Middle panel: detail of the above.
Bottom panel: difference between EFM and ideal and BBSO-linear and ideal, along row 296, expressed in percent of the ideal value.
The observed data were scaled since the units are different; they were offset because the EFM model did not have 0 value on the DS sky. The BBSO-linear, being ‘anchored in the DS sky’ did have a 0 value in the sky. The same scaling factor and offset was used on EFM and BBSO, for comparison, however – hence the little offset on the DS sky at right.
What do we see? Well, the EFM-cleaned (red) line follows the synthetic model quite nicely between column numbers 325 and 370 while the blue line (BBSO-cleaned) diverges all the time. Near the DS edge (columns 370-385) the synthetic model is higher than EFM.
What does it mean? The BBSO-linear method has better removed the flux on the sky – it is designed to do that – while the EFM is designed to minimize the squared residuals in a mask on the sky around the Moon. This implies that the BBSO-linear method, in the present case, would be better than EFM on the lunar disc near the DS sky. As we move further onto the DS disc the BBSO-linear method will fail more and more, since the halo is not linear with distance from the edge – the method underestimates the amount of scattered light on the disc between the edge and the BS. We do see the blue line diverge more and more, and the red line cling closer to the ‘true value’ (assuming the synthetic model is ‘true’) across the disc, before it too fails nearer the halo and the BS. The behavior nearer the edge may be a consequence of how we model the synthetic images – we have to use a reflectance model to make them, and the angles of emergence and incidence near the edge are not ones for which the reflectance model is inherently very good. The models we use are based on the simple ‘Hapke 63’ model. We speculate that the disparity between observations and synthetic models in columns 370-385 is due to model inadequacy.
Anyone using a reflectance model as simple as the Hapke 63 model will encounter the above problem if they try to use pixels near the lunar disc edge – the natural thing to do, if the halo is removed using the linear method is to use edge or near-edge pixels.
Hence the problem of reflectance modeling and the inadequate linear method become coupled! Our EFM method seems to be a way around this obstacle – allowing use of disc areas further from the edge where simple reflectance modelling is adequate – hence we should be getting more reliable terrestrial albedo data. One day.
In the EFM-method we determine the alfa values for every image. Is there a link between the alfa value for one filter and the rest in an interval of time? It is our understanding that alfa is determined by the amount of scattering in the optics plus the atmosphere. We therefore expect that on ‘bad’ nights the alfa values will tend to move in the same direction. We investigate this here.
We find all alfa values in all EFM-treated images. We sort them into half-hour bins, collect the alfa values for each filter within each bin, and plot filter against filter. Below is a pdf showing the plots between some filters, at different ‘zoom levels’. The image shows the last zoom level, highlighting the dense ‘clump’ of points:
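The binning step can be sketched like this (schematic Python; the record format and function names are hypothetical, not the actual pipeline):

```python
from collections import defaultdict

def bin_alfas(records, bin_days=0.5 / 24.0):
    """Group alfa values into time bins, per filter.
    records: iterable of (jd, filter_name, alfa) tuples."""
    bins = defaultdict(lambda: defaultdict(list))
    for jd, filt, alfa in records:
        bins[int(jd / bin_days)][filt].append(alfa)
    return bins

def paired_means(bins, f1, f2):
    """Mean alfa pairs for bins in which both filters were observed."""
    pairs = []
    for perfilter in bins.values():
        if f1 in perfilter and f2 in perfilter:
            pairs.append((sum(perfilter[f1]) / len(perfilter[f1]),
                          sum(perfilter[f2]) / len(perfilter[f2])))
    return pairs

# tiny example: two observations 10 minutes apart share a half-hour bin
records = [(2456034.110, "VE1", 1.72), (2456034.117, "IRCUT", 1.70),
           (2456034.500, "VE1", 1.80)]
print(paired_means(bin_alfas(records), "VE1", "IRCUT"))  # -> [(1.72, 1.7)]
```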
There seems to be a general agreement that the alfa values are correlated – bad nights (i.e. ‘broad PSFs’) occur in all filters at the same time. Since VE1 is just about identical to IRCUT the scatter seen above means that the fitting routine is unable to make a perfect match – or that observing conditions, during the half-hour bins used, changed.
Using 15 minute bins does not improve matters:
I therefore suspect that the fitting method does not find the best fit each time.
What B-V colour should we expect for the earthshine?
We will here estimate it by using the change in colour of Sunlight that has struck the Moon once, and the colour of Earth as estimated from spacecraft.
The Sun’s B-V is +0.650 [Allen, 1973] [Holmberg et al, MNRAS, 367, 449, 2006]
The Full Moon’s B-V is +0.85 [Lane & Irvine, AJ, 78, p. 267, 1973]
[vdBergh has +0.876 for Mare Serenitatis;
Allen 4.ed. table 12.16 has ‘Moon’ B-V 0.92;
Gallouet (1963) has +0.94;
Wildey & Pohn (1964), AJ vol 69, p.619 have a range of values near +0.86 to +0.87 (their work seems good and a milestone).]
The Earth’s B-V is 0.2 [Allen, 3rd ed., but apparently based on a 1961 work – so pre-space-age?]
The Moon’s DS B-V is 0.64 on average given data in [Franklin (1967), JGR 72, p 2963]
If Sunlight is reddened by one reflection off the Moon by 0.85-0.65 = +0.2 mag, then we expect earthshine, bounced once off the Moon, to be reddened by the same amount.
If the Earth has B-V = 0.2 as seen from space, then after one reflection off the Moon it ought to appear redder by +0.2, i.e. we should observe it at B-V = 0.4. This is not what Franklin measured.
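For the record, the arithmetic of the argument (values exactly as quoted above):

```python
bv_sun = 0.65        # Allen (1973)
bv_full_moon = 0.85  # Lane & Irvine
bv_earth = 0.20      # Allen, 3rd ed.
bv_franklin = 0.64   # Franklin (1967), measured earthshine B-V

# reddening of sunlight by one bounce off the Moon
reddening = bv_full_moon - bv_sun            # +0.20 mag

# expected earthshine colour: Earth's colour plus one lunar bounce
bv_earthshine_expected = bv_earth + reddening

print(round(bv_earthshine_expected, 2))                # -> 0.4
print(round(bv_franklin - bv_earthshine_expected, 2))  # discrepancy: 0.24 mag
```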
Basically, we do not yet know Earth’s B-V colour! I am making inquiries, and we shall see.
Note that Danjon did lots of colour observations of earthshine – but in the Rougier system. Wildey [JGR vol 69, p.4661+] refers to a transformation from Colour Index (“C.I.”) in the Rougier system to B-V in the Johnson system – but without giving numerical details. The method is based on transformations using the Full Moon and the Sun colours.
The transformation should be made specific and the data from Danjon placed online. Another student project!
We compare the B-V values on the DS of images from JD2456034: we look at images only exposed to bias-removal (‘DCR’ images) and images cleaned with EFM and images cleaned with the two variants of the BBSO method: linear and log. We show a ‘slice’ across the disc at row 256:
Top left: black is the B-V slice from the DCR images; red is the B-V values from the BBSO-linear cleaned images. The second graph in the upper left panel is the difference between the B-V values (BBSO-linear images minus cleaned image). The vertical dashed lines show the edge of the lunar disc and the start of the BS in column units.
Top right: same, but for BBSO-log method.
Bottom left: same but for EFM images, as shown elsewhere on this blog.
The results are quite different – the BBSO-linear method has given us fairly constant B-V values across the disc; they are about 0.05 mag below the DCR values (i.e. the BBSO-linear values are bluer than the DCR values).
The BBSO-log values seem completely unrealistic.
The EFM-cleaned values also look a bit unrealistic in that there is a spatial dependence on the magnitude of the B-V relative to DCR – in a way that looks like a remnant effect of the halos. The Delta(B-V) value changes sign across the disc.
The night JD2456034 is very close to New Moon and we know from other results that this is when the BBSO-linear method is likely to work best (the halo being small). Since there is a phase-dependency in the overall results for BBSO methods over and above what EFM shows we know we cannot universally use BBSO results. On the DS, towards the edge, the BBSO-linear method should be very good – it is ‘anchored in the sky’, unlike the present EFM method, and therefore should be unbiased near the edge. Our EFM method, at the moment, only minimizes the square of the residuals on the sky, inside a mask.
Near the sky, the DS B-V values in the BBSO-linear and EFM-cleaned images differ by about 0.04 mags. We should look at EFM methods that also ‘anchor in the sky’ and see what we get then.
The present largest worry is not the B-V offset but rather the dependence on position on the disc that the EFM method shows.
What does the literature tell us we should expect B-V to be under earthshine?
At the moment I only know of Franklin’s 1967 paper. It gives B and V values of the earthshine.
The average B-V seems to be +0.64. Can we find any other information on B and V?
I have made four colour maps now in B-V on four different nights.
The nights are arranged like this:
A uniform scale is used for all four images, from B-V = 0.0 to 1.4. The BS light comes out at around B-V = 1.0 in all four images, and the ES at around B-V = 0.6-0.8, depending on whether one is looking at highlands or lowlands (lowlands are redder).
The cause of the black dips along the BS rim is that the B images are slightly larger than the V images – so even with good registration of the centre of the Moon (to the closest pixel), there are issues in producing these colour maps. The falloff in the halo light cannot be the same power law in V and B, because the halo changes colour – but this is still to be checked.
I wrote a program called moonraker in Fortran, which processes all our data, seeking out the best frames for further reduction. Peter has written something similar in IDL.
It goes through all the images and throws out ones with
1) saturated BS (> 50,000 counts in the peak of the BS)
2) underexposed BS (<10,000 counts)
3) smeared images (heavy bleeding of photons in the Y-direction on the chip, reaching the bottom 20 pixel wide strips of the frame). A mean count level of more than 50 counts in this strip triggers a flag and the frame is rejected.
4) rough check if the moon is too close to the edges of the frame. This is not yet optimal, but throws out the worst cases.
5) checks that the temperature of the CCD is OK (an entry in the fits file header tells us if the temperature of the CCD has stabilised or not)
If all 100 slices in multiple exposure frames are OK, according to the above, they are bias subtracted (using the biases on either side of the exposure, averaged appropriately for each slice, and scaled to the superbias created by Henriette) and averaged, and written to disk.
6) The magnitude of the total light in the frame (i.e. apparent magnitude) is then compared to the expected V magnitude (using the JPL model, with corrections for the eastern and western sides of the Moon, depending on which side is illuminated), taking into account the airmass of the observation and the extinction for the particular filter. This isolates, for removal, exposures taken with an incorrectly reported filter. Finally, after extinction correction, only those frames which fall within ~0.2 magnitudes of the magnitude predicted by the JPL model are retained.
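In schematic form, the rejection rules amount to the following (a Python sketch with illustrative names – the real moonraker is Fortran, and rule 4 needs the image geometry, so it is reduced to a flag here):

```python
def frame_ok(bs_peak, strip_mean, near_edge, temp_stable, mag_offset):
    """Apply moonraker's rejection rules to one frame's summary statistics."""
    if bs_peak > 50_000:        # 1) saturated BS
        return False
    if bs_peak < 10_000:        # 2) underexposed BS
        return False
    if strip_mean > 50:         # 3) smear bleeding into the bottom strip
        return False
    if near_edge:               # 4) Moon too close to the frame edge
        return False
    if not temp_stable:         # 5) CCD temperature not stabilised
        return False
    if abs(mag_offset) > 0.2:   # 6) disagrees with the JPL model magnitude
        return False
    return True

# a well-exposed, well-behaved frame passes; a saturated one does not
print(frame_ok(30_000, 10, False, True, 0.05))   # -> True
print(frame_ok(60_000, 10, False, True, 0.05))   # -> False
```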
The list of the final set of good exposures using this method is attached (all100.doc — actually an ascii text file):
There are 534 good frames, spread from nights JD2455938 to JD2456104.
A summary is as follows:
Night, number of good frames in V, B, VE1, VE2, IRCUT, comments
JD2455938 2 2 0 5 0 fullish moon, some foggy frames
JD2455940 3 1 0 2 0 fullish moon
JD2455943 3 3 0 3 1 fullish moon
JD2455944 3 1 2 1 3
JD2455945 1 1 1 2 0
JD2456000 3 0 0 1 1
JD2456002 7 0 0 0 0 V band only
JD2456003 9 0 0 0 0 V band only
JD2456004 1 0 0 1 0
JD2456005 6 0 0 0 0 V band only
JD2456006 4 0 0 0 0 V band only
JD2456014 3 0 0 0 0 V band only
JD2456015 7 10 4 6 4 1/3 moon, lots of frames
JD2456016 11 10 9 11 11 “
JD2456017 7 11 0 11 2 “
JD2456028 5 5 0 9 0
JD2456029 0 0 0 6 2 2/3 moon, lots of scattered light
JD2456030 1 0 0 3 2
JD2456032 1 0 0 4 1
JD2456033 4 1 4 2 4
JD2456035 0 0 0 1 0
JD2456045 5 5 4 4 4
JD2456046 4 8 10 11 10
JD2456047 6 5 8 6 7 stars visible in some frames, useful for PSF
JD2456061 2 4 3 6 4
JD2456062 0 0 0 2 0
JD2456063 0 0 0 5 0
JD2456064 0 0 0 3 0
JD2456073 4 7 7 10 8 some eclipsing of moon – dome issues?
JD2456074 4 2 3 7 5 scattered light haze or fog?
JD2456075 8 7 8 7 8
JD2456076 9 1 0 2 0
JD2456089 3 2 1 1 0
JD2456091 2 1 5 0 4
JD2456092 0 0 3 0 5
JD2456093 0 0 1 0 5
JD2456104 5 7 5 1 5 some foggy frames, moon a bit close to bottom corner
I’ve inspected all of these by eye and they look mostly pretty good! There are still a few bad frames here because the scattering of the halo is not selected for yet — but the number of frames with low alpha (i.e. haze or fog) is just a handful.
Some of these nights will be pretty useful for looking at colour changes in the ES over time. I am doing that next.
In this – updated – posting we combine more images from JD2456034 – we construct B-V images in various ways.
The discussion refers to Chris’ original posting:
The first images we look at are not the bias-reduced images Chris uses, but ‘EFM-cleaned’ images (i.e. scattered light has been removed). Both the B and V images used were co-additions of 100 images each. The two images were centered in the image frame. We look at two sets of B and V images – the one at the top is cleaned with one setting of the EFM method, and the one below with another setting. Some of the features to the left on the sky are the effect of ‘cyclical overlap’ from the right side when the image is shifted.
We see the DS to the left and the BS to the right. The BS halo and the BS have become undefined (i.e. ‘NaN’) because either B or V is negative here (remember, these are not bias-reduced images with the full halo in place – they are EFM-reduced images, so the BS and parts of the halo are now zero or negative). In the overlap almost all of the BS and the BS halo have become NaNs!
Here is a ‘slice’ across the middle of the upper image:
And here is the same slice across the lower image:
And finally, here is the slice across the un-reduced B-V image – that is, the image formed from B and V images that were only bias-reduced (as in Chris’ plot).
The uppermost image has a large ‘dip’ as we get near the BS and its halo remnant. In the middle image the halo has apparently been better removed. Strangely, there is least sign of a halo in the ‘raw’ images, where nothing has been done to remove the halo. This needs to be discussed!
We used other airmasses than Chris. In my calculations the airmasses for the two images involved are:
B_am=2.545 image is 2456034.1142920MOON_B_AIR_DCR.fits
V_am=2.477 image is 2456034.1164417MOON_V_AIR_DCR.fits
I used kB=0.15 and kV=0.10, like Chris.
I did not solve for B and V by solving two equations with two unknowns – I iterated. Convergence was fast. I iterated on the whole images.
The first of the above EFM cleanups was done with a weighting of the mask used to define the area of the sky on which to reduce the sum of squares that favored the DS part of the sky. In the second attempt above equal weight was given to the RH and LH sides.
Following on from the previous post, we have made a colour map based on two images (V and B) taken on JD2456034
Both are averaged results from stacks of 100 exposures, so the S/N ratio is 10 times better compared to the images in the last post.
V band exposure time was 72.5 ms, and the B band exposure was 222 ms, for a total of 7.3 seconds in V and 22 seconds in B. The observations were at an airmass of ~ 2.3.
The colour map looks like this:
with the colour scale (B-V) running from 0.2 to 1.3.
The BS (brightside) has a colour of around 1.0, which is still a little redder than the expected value of 0.9 (for BS on a full moon) by van den Bergh (1962).
There is more scattering of the light in B than in V, which accounts for the deep valley to the right of the BS, and the excess halo light beyond it. Colours here are not reliable. Colours just beyond the concave edge of the crescent on the DS are probably also affected by differences in the scattered light profile, so should be regarded carefully (this part of the moon can be examined when the crescent is on the other side). The left half should be pretty good!
The DS (darkside) has colours ranging from about 0.6 to 0.9 — with the lowlands redder than the highlands.
We have used B and V images of the moon from JD2455859 to make a map of the B-V colour of the reflected light from a thin crescent moon.
We use the photometric transformations derived from our open cluster observations: http://earthshine.thejll.com/#post229
The B band image of the moon was shifted to the position of the V band image, and the colour computed from the fluxes in B and V directly from the pixel fluxes.
The map below shows the result:
The scale at bottom runs from B-V=0.1 to B-V=1.5.
The darkside (DS) has a B-V colour in the range 0.6 to 0.9 – whereas the crescent has B-V ~ 1.1. The measured colour of the BS is B-V ~ 0.9 (van den Bergh, 1962, AJ, 67, 147), so we may be a little too red. More work needed on this.
We are seeing roughly the colour difference expected between ES and the BS — ES should be a few tenths bluer in B-V.
There is a clear residual halo in the colour map, indicating that the scattered light around the crescent falls off differently in B and V filters (as we expect).
Very interestingly, the crater pattern becomes much harder to see in the colour map. The V and B images look like this (after registration) V on the left, B on the right:
The lowlands appear to be slightly redder than the uplands in the colour map, by a few tenths of a magnitude in B-V.
More work needed on this — this is just a progress report!
a) Images used were:
2455859.1313477MOON_V_AIR_DCR.fits and 2455859.1365256MOON_B_AIR_DCR.fits
b) The transformations used were (airmass is 2.6 for V and 2.7 for B, extinction coefficients for V and B are 0.10 and 0.15 respectively).
Vinst = -2.5*log10(sumv/exptimev) - 2.6*0.10 ! airmass = 2.6
Binst = -2.5*log10(sumb/exptimeb) - 2.7*0.15 ! airmass = 2.7
V = Vinst + 15.07 - 0.05*bmv
B = Binst + 14.75 + 0.21*bmv
which implies that B-V = 1.35 * (Binst - Vinst) - 0.43
(the airmasses above were incorrect in the first version of this entry, they have been corrected now)
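The iteration used to solve these coupled equations converges quickly because the colour-term coefficients are small; a sketch on scalars (in practice it was run on whole images, and the variable names here are illustrative):

```python
def solve_bmv(vinst, binst, tol=1e-10, max_iter=50):
    """Iterate the transformation equations to a self-consistent B-V."""
    bmv = 0.0
    for _ in range(max_iter):
        V = vinst + 15.07 - 0.05 * bmv
        B = binst + 14.75 + 0.21 * bmv
        new = B - V
        if abs(new - bmv) < tol:
            return new
        bmv = new
    return bmv

# agrees with the closed form from eliminating bmv:
# B-V = ((Binst - Vinst) - 0.32) / 0.74
vinst, binst = -8.0, -7.5
print(round(solve_bmv(vinst, binst), 4))  # -> 0.2432
```

With the quoted coefficients the contraction factor is 0.26 per iteration, so machine-level convergence takes fewer than twenty iterations.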
In this entry on the blog: http://iloapp.thejll.com/blog/earthshine?Home&post=272 we discussed the Laplacian method of estimating earthshine intensity by removing the influence of the scattered light halo. We have tested it on a good image from near New Moon (i.e. lots of bright DS, very little interfering BS). Consider this plot:
In the top two rows we see the observed image, the EFM-image and the Laplacians of these. In the second row second plot there are three additional curves, in black. The centre one shows the estimate of where the disc edge is, based on the estimates we have of disc centre and disc radius. The other two lines are parallel outliers at + and – 40 pixels from the edge. In the third row we show first the coaddition of all rows along the edge, using the estimates of the edge and the outliers as start and stop – this allows a coaddition that centres on the disc edge. Second in third row is the same thing based on the Laplacian of the observed image. We see the signature of the double derivative of an edge. In the last row we show the same as in the third row but for the EFM-cleaned image and its Laplacian.
The application of the EFM method removes most of the halo and this is consistent with the slight lowering of the slope after the ‘step’ in the first panel of fourth row, compared to first panel of third row. We extract the size of the ‘jump’ as well as the distance between the minimum and the maximum of the Laplacian edge signatures. The jump size is estimated as the difference between the mean of the first third of the curve and the last third. We collect the four estimates of the earthshine intensity – two jump heights and two estimates based on the Laplacian signature, and plot them below for 4 differently rebinned versions of the original images – unbinned; binned 2×2; binned 4×4 and binned 8×8. This is done to ‘sharpen’ the edge in case the edge is ‘fuzzy’ so that the 3-point numerical operator used in constructing the Laplacian will interact correctly with the edge.
We see color-coded graphs. The estimates of earthshine intensity based on the observed image and the EFM-cleaned image are black and blue, respectively. The estimates based on the ‘signature size’ in the Laplacian of the observed and EFM-cleaned images are red and orange, respectively.
The differences are quite large – several percent – and there is a dependence on the rebinning of the image, even for the step method. The result that ‘ought’ to be the best is the jump-size one based on the EFM-cleaned image, since the halo is absent, and because the jump-size method should have a smaller error of the mean than the signature-size method, as more pixels are involved in estimating the former. The error of the mean for the jump estimates in the unbinned image is of the order of tenths of a percent – clearly dominated by systematic effects.
The above suggests that there are large method dependencies.
The ‘jump estimator’ may be a victim of the slopes seen near the jump – especially in the EFM case. With such slopes, the mean before and after the jump depends on just how the jump is estimated. A possible improvement is to fit a linear regression to the first half or third of the points before the jump, extrapolate it to the midpoint, do the same from the other side, and calculate the ‘jump size’ as the difference between the two extrapolations. The slope is still a concern, however. It is due to lunar surface albedo variations. The jump size itself is a function of earthshine intensity as well as lunar surface albedo, and therefore of libration, as well as of the geometric distance factors. Being estimated right at the edge, the reflectance is a constant, leaving lunar albedo as a variable factor (through libration). This could be taken out with knowledge of the parts of the lunar surface right at the edge – i.e. via modelling.
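The thirds-means estimator used above and the suggested extrapolate-to-midpoint estimator can be compared on a sloped toy profile (illustrative numbers only, not our data):

```python
import numpy as np

def jump_thirds(curve):
    """Jump as mean of last third minus mean of first third (method used above)."""
    n = len(curve)
    return np.mean(curve[-(n // 3):]) - np.mean(curve[:n // 3])

def jump_extrapolated(curve):
    """Fit lines to the first and last thirds; difference them at the midpoint."""
    n = len(curve)
    x = np.arange(n)
    third = n // 3
    left = np.polyfit(x[:third], curve[:third], 1)
    right = np.polyfit(x[-third:], curve[-third:], 1)
    mid = (n - 1) / 2.0
    return np.polyval(right, mid) - np.polyval(left, mid)

# a sloped background (albedo trend) with a true jump of 5.0 at the centre
xs = np.arange(100)
curve = 0.01 * xs + np.where(xs >= 50, 5.0, 0.0)

print(round(jump_thirds(curve), 3))        # biased high by the slope
print(round(jump_extrapolated(curve), 3))  # -> 5.0
```

On noiseless sloped data the extrapolation recovers the true jump exactly, while the thirds-means estimator is biased high by the slope.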
The ‘signature size’ method has different challenges – it is based on fewer pixels for one thing. It also depends on the lunar albedo – and not on reflectance, provided the edge is used (and not features on the DS, as Langford et al do). However, the method depends on knowing that the image is in good focus. Bad focus will blur the edge causing a lowering of the signature size – for area methods such as the EFM we do not depend on a sharp edge but can estimate levels from regions chosen away from the edge. Of course, we still get blurring from lunar features overlapping.
The above study did reveal a dependence on binning – but if blurring was the issue being addressed, the results are not encouraging, as there was no convergence.
So many things to compare! Super student project! A study of artificially blurred synthetic images could be done, and ‘step’ and ‘signature’ methods compared.
In an interesting article Sally Langford, et al. [ http://adsabs.harvard.edu/abs/2009AsBio…9..305L ] describe an analysis method for earthshine images. Essentially, the Laplacian’s effect on images is to detect edges and the size of the ‘jump’ in going from e.g. sky to lunar disc in the observed image translates into an amplitude of a characteristic signature of the derivatives of an edge. This enables the large-spatial-scale features of the scattered light halo to be removed from the short-spatial-scale features of edges and craters.
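To see why the Laplacian separates the edge from the halo, here is a minimal numpy sketch of a 5-point discrete Laplacian applied to a synthetic step edge. This is not Langford et al.’s actual code – just an illustration of the principle that a sharp edge yields a localized +/- signature while smooth, large-spatial-scale structure (like the halo) gives values near zero:

```python
import numpy as np

def laplacian(img):
    """Discrete 5-point Laplacian; the frame border is left at zero."""
    out = np.zeros_like(img, dtype=float)
    out[1:-1, 1:-1] = (img[1:-1, 2:] + img[1:-1, :-2] +
                       img[2:, 1:-1] + img[:-2, 1:-1] -
                       4.0 * img[1:-1, 1:-1])
    return out

# a step edge (sky -> disc): the Laplacian response straddles the edge,
# and its amplitude scales with the step height
img = np.zeros((5, 8))
img[:, 4:] = 100.0          # 'disc' from column 4 rightward
L = laplacian(img)
print(L[2, 3], L[2, 4])     # the +/- signature at the edge
```

Inside the flat ‘disc’ and flat ‘sky’ the Laplacian is zero, which is exactly the property that lets the method ignore a slowly varying halo.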
Our method for removing effects of the halo is to construct a forward model of the halo, based on the BS of the Moon (caught in the same image as the DS – Sally’s FOV is smaller, about 20 arc minutes; ours is 60 arc minutes), and subtract it, leaving the DS almost halo-free.
How well does the Laplacian method work on our type of images? Sally’s images were well-exposed images of the DS alone – allowing large counts, in the tens of thousands; ours are co-additions of 100 images taken so that the BS is not overexposed, leaving the signal on the DS typically near 10 counts. One image with 10,000 counts should thus have the SNR of 100 of ours.
We took one of our good images – from JD 2456095 – near New Moon – and plot the profile across the image near disc centre in the image that we obtain using the forward modelling method (named ‘EFM’ in the plot and this blog) and the same profile from the Laplacian of the image. We average 20 rows.
Inspecting the columns (approx. # 100-120) where the lunar disc edge is, we see a clear ‘step’ up from the sky level in the EFM-cleaned image, but there is no sign, above the noise, in the Laplacian of the same image. Inspecting the whole Laplacian of the EFM image we do see a faint signature:
The above image is ‘histogram equalized’ to show the feature. Our own method is also shown, histogram-equalized, here:
We see that the halo has been only partially removed on the right.
I think we would be hard pressed to extract a signal from the Laplacian of the image. It also bothers me that the whole earthshine signal is reduced to the value of the derivatives in just edge pixels. In our own image we have hundreds of pixels to measure on – in the Laplacian we get only a signal along the edge.
We should note that the remnants of halo on the right are much less evident in the Laplacian image, suggesting that there is a kernel of a good idea in the method. For reference, the Laplacian of the raw observed image is here (histogram equalized):
It does seem as if the Laplacian helps remove a lot of the halo, but also reduces the analyzable part of the image to the DS edge.
The Langford paper mentions both smoothing images and co-adding them – the former is done at the resolution of the worst image for a sequence. The Langford paper analyses features on the disk – not the edge.
Working out how the Laplacian is best used, on realistic images of the type we have, would be a good student project!
We have, below, studied the ‘structures’ or ‘bands’ that, in some images, stretch from side to side. One concern was that the structure was induced by some step in image reductions. We therefore compare raw images to EFM-cleaned images to see if the structure is present in both.
Visual inspection of histogram-equalized versions of raw and EFM versions of the images reveals the structure, although it is easier to see in the EFM image since the halo and most of the BS is gone:
Plotting the average of the first 20 columns of each image, and scaling them to each other, shows the structure better:
The black line is from the EFM-cleaned image, while the red curve is from the raw image.
This seems to rule out that the ‘structures’ are induced by image-reduction steps.
We remain suspicious of the possibility that the bright light from the BS itself is somehow causing the problem. It is not a reflection, since the structure is a deficiency of counts. The only way to explain it is by some process subtracting counts from these bands – or lowering the sensitivity of the whole set of bands – perhaps some sort of non-linearity caused by strong light in one part of the CCD affecting the whole row?
We still need to map when the structures are present – we want to see whether they appear in a timeline or are a function of the placement of the image in the frame. Inspecting the first 700 of 2000 EFM frames shows no sudden time dependence of this problem.
Idea: also extract the maximum counts of the image and use this as a parameter – we do have suspicions that non-linearity sets in far below 55,000 counts.
Question: Why is the structure along rows, while readout direction is along columns? What is special, in a CCD chip, about the rows?
More to follow.
In the entry below, at:
we considered the need to remove some residuals left over when a poorly-fitted halo had been removed from the observed image. Sensing that the problem has to do with asymmetry in the solution forced by the lunar disc not being well centred, we consider now the effect of lunar position in the image on the quality of the halo fit. We estimate the quality of the halo fit from the mean value in a sky-patch near the DS.
We see here on the x-axis the value of the disc-centre coordinate (i.e. column number in the image) and on the y-axis the mean value of the DS sky patch. We see some scatter, as well as a structure that looks like an inverted parabola for the points near y-value 0. That is, the sky-patch mean value of the DS in images where the halo has been removed with the present EFM method depends on where the lunar disc is – the further the Moon is from x0=256 (the middle of the image), the larger the residual left on the sky after the halo is removed with the EFM method.
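The ‘inverted parabola’ impression can be quantified with a quadratic fit of residual against disc-centre column. A sketch with synthetic data – the residual amplitude and scatter are invented for illustration, not taken from our reductions:

```python
import numpy as np

# Hypothetical data: x0 = fitted disc-centre column of each image,
# resid = mean counts in the DS sky patch after EFM halo removal
rng = np.random.default_rng(1)
x0 = rng.uniform(150, 360, 200)
resid = -2e-4 * (x0 - 256.0) ** 2 + rng.normal(0.0, 0.5, 200)

# quadratic fit: 'a' < 0 means the parabola opens downward, and the
# vertex gives the column where the halo fit is least biased
a, b, c = np.polyfit(x0, resid, 2)
vertex = -b / (2.0 * a)
print(a < 0, round(vertex))
```

If the real data behave like this, the vertex column would tell us where to keep the Moon in the frame to minimize the residual.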
We need to invent a better EFM method!
In this post:
I pointed at the unwanted structure in the sky near the lunar disc in an image that was Bias-reduced as well as had had its halo removed. Bias problems were ruled out – and left was a speculation on internal reflections in the camera system.
If we are truly left with that horrible structure on the sky we cannot just remove the halo in the present way, as the method depends on ‘fitting the sky’. If there are basins and hills in the ‘sky’, the fit will be bad. If the subtracted halo has the right shape but not the right ‘level’, then perhaps we can think of a fix: after removing the halo, but before extracting photometry from the DS disc in the image, we can reference the sky adjacent to the DS and offset our disc value from there.
I tried this, by estimating the sky level on a part of the sky near the DS on an image where the halo has been subtracted, following the EFM method:
The two patches on the lunar disc are the Grimaldi and Crisium reference patches (DS and BS, respectively [although the BS value is estimated from the same pixels in the raw image]) and the large semi-rectangle on the sky next to the DS is the ‘skypatch reference area’. We calculate the average value of that patch and subtract it from the value extracted for the DS. If the error is of an offset type, rather than a slope type we have then corrected for the halo misfit.
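The sky-patch referencing itself is a simple offset correction. A toy numpy sketch – the patch geometries and count levels here are invented, and the masks stand in for the Grimaldi patch and the sky ‘semi-rectangle’:

```python
import numpy as np

def ds_referenced(img, ds_patch, sky_patch):
    """Offset-correct the DS patch value by the mean of the adjacent
    sky patch in the halo-subtracted image (patches: boolean masks)."""
    sky_level = img[sky_patch].mean()
    return img[ds_patch].mean() - sky_level

# toy cleaned image with a residual offset of 0.3 counts left on the sky
img = np.full((8, 8), 0.3)
img[2:4, 2:4] += 5.0                      # 'DS patch' signal
ds = np.zeros((8, 8), bool); ds[2:4, 2:4] = True
sky = np.zeros((8, 8), bool); sky[:, 6:] = True
print(ds_referenced(img, ds, sky))        # the offset is removed
```

As noted above, this only fixes an offset-type error; a slope-type misfit across the DS would survive the correction.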
To see the effect of this problem we estimated the DS/BS ratio in all EFM-cleaned images, for each filter, with and without referencing to the sky level:
As usual, the jpegs above are poor in quality so we also post the pdfs:
Each panel contains three columns – the rows are for filters. The first column is the DS/BS ratio as function of lunar phase (Full Moon is at phase 0). The second column is the ratio of the DS and the total RAW image flux [for reasons explained elsewhere!]. The third column shows the ‘alfa’ value derived by the EFM method.
Almost the same list of images were used for the two plots above – note the differences by inspecting the alfa plots – crosses are present in one plot but not the other one.
We see that the sequences of points for the ‘skypatch reference level removed’ case are slightly fuzzier than the sequence where the reference is not subtracted! Before we can interpret this I think we need to restrict the two plots to the same set of images – I shall do this and return …
[later]: This has now been done. For the IRCUT filter (341 data points) we show the effect of skypatch-referencing on the DS/BS ratio expressed as obs/model:
First we notice the wide span in DS/BS along both axes – this is not new: it is mainly the phase dependence we see, and it is MUCH LESS than would be seen for BBSO-method-treated images. That is not the point, though. The point is that the spread along y for a given value of x – say near x=1 – is something like 50%. That is, the effect of removing the sky level from the DS value, in EFM-cleaned images, is to alter the DS/BS ratio by 50%. This means that the EFM method did not do a very good job of removing the halo. It did remove a lot of the halo – but clearly not enough, since a considerable effect is felt when the small remaining offset is removed.
This is important for our Science Goals: on the one hand we see the EFM method work much better than the ‘BBSO linear’ and ‘BBSO log’ methods in removing the effect of phase on the DS/BS ratio [shown elsewhere in this blog]; on the other hand we see it does not do a sufficient job.
We should note that the BBSO linear method removes the sky level in one step (since it is a fit to the sky near the DS) but that, being linear, it does a poor job of removing the halo where it matters most – on the DS disc. The EFM method may be much better at removing a halo of approximately the right shape – but there remains a bias that is important.
The current EFM method is not ‘anchored in the sky’ – it merely seeks to minimize the residuals formed when an empirically generated halo is subtracted from the observed image – as evaluated on a masked section of the sky part of the image. There are choices made when that mask is set up, and they are:
a) The mask used above was such that sky was included on both sides of the lunar disc, set off from the disc by some 20 pixels and curtailed vertically at 0.7 radii (so that a ‘band across the lunar disc, but not including the disc’ is set up). This gave weight to both the part of the halo on the BS and the DS.
b) The mask is quite large to give access to a large number of pixels so that the effect of noise on the fitting-procedure was reduced.
The requirement in b) forces the mask in a) to be so large that any unevenness in the sky (such as we suspect occurs due to internal reflections when the Moon is near the side of the image) influences the quality of the fit, forcing the above consideration of a ‘special removal of the sky level near the DS’. To this comes the problem of allowing the BS halo to influence the fit.
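The mask described in a) and b) could be built along these lines – a numpy sketch, where the function name and exact geometry parameters are our own choices for illustration:

```python
import numpy as np

def efm_sky_mask(shape, xc, yc, radius, offset=20, vcut=0.7):
    """Sky mask for the halo fit: pixels on both sides of the disc,
    at least `offset` px outside the limb, and within +/- vcut*radius
    of the disc-centre row (the 'band across the lunar disc')."""
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - xc, y - yc)
    outside = r > radius + offset        # off the disc, with set-off
    band = np.abs(y - yc) < vcut * radius  # vertical curtailment
    return outside & band

# disc centred in a 512x512 frame, radius 150 px
mask = efm_sky_mask((512, 512), 256, 256, 150)
print(mask.sum())   # number of sky pixels entering the fit
```

The pixel count printed at the end is the quantity that requirement b) tries to keep large, which is exactly what forces the mask out into the uneven parts of the sky.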
We should investigate how we can improve the EFM method. Possibilities include
1) spatially weighting pixels so that ‘difficult areas’ are avoided or are given less influence on the final fit – such as the bothersome BS part of the halo.
2) use much smaller areas to fit the halo to – such as a sky patch near the DS patch on the lunar disc. This has to be balanced: on the one hand we get more influence from noise (which affects the BBSO method too, as it uses ‘narrow radial conical segments’ on the sky), but on the other hand we eliminate the effect of the BS halo as well as the unevenness in the sky field.
3) One hybrid form of the EFM, tested earlier, consisted of enforcing that the model halo should be flux conserving (as now) and at the same time pass through an average point on the sky near the DS in the observed image.
We should implement a method that does not show the above sensitivity to removal of a sky reference level calculated on the already cleaned-up image. Access to a system without these internal reflections would also be nice. One significant step to take before that could be to find all images without the ‘streak’ effect that led to the above considerations.
A strange structure has been found in some reduced images. To the right we show a dark frame taken seconds before the image on the left. The image on the left is an EFM image – i.e. the halo from the BS has been subtracted after fitting to the sky areas of the image. We clearly see the structure in the left image – a ‘band’ stretching to the left of the DS. This structure is not present in the dark image – so it is not an artefact of bias subtraction [Of course – the bias subtraction is not performed with adjacent dark frames – noise would be added that way – rather, a scaled almost noise-free superbias is subtracted. It is like a lower-noise replica of the image on the right – i.e. no structure, just mild level of noise.]. Below the two we show a plot along a vertical axis in the two images: columns 50 to 150 were averaged and shown in the plot as the black and the red lines.
There is a very clear structure in the sky of the Moon image. Given that it is not induced by bias-subtraction it must be due to the presence of the Moon itself. We speculate that:
a) it is some optical effect – reflection – from the inside of the camera or telescope. Halo-subtraction has reduced the sky level to almost zero but a little too much has been subtracted in the ‘dark band’ and a little too little has been removed outside the band. The fitting of a rather smooth ‘halo’ from the observed image could give this effect, if the structure itself is present in the image.
b) some electronic effect is causing the rows containing the very bright BS to ‘jump low’ in some way we do not understand. NB: the readout direction is ‘down’ – not to the left!
Note the presence of what looks like a truncated halo to the left in the image, at the frame edge: this could be an internal reflection showing the right hand side of the halo being reflected inside the camera. We saw this same effect in the animated sequence of the tau Tauri occultation – as the Moon migrated to the RHS of the image frame an ‘echo’ appeared on the left.
This therefore seems to support idea a) above. If this is the case we may have an effect on solution-quality from position of Moon in the image. We should investigate if there is a position nearer the centre that eliminates the effect, and then omit images that are too close to the edge.
One of the reduction steps performed has to do with the scaling of the bias images due to the thermostat-induced temperature variations in the CCD chip. This temperature variation causes a 20 minute period in the mean level of the bias with an amplitude of almost 1 count – thus of importance to our attempts to analyse extremely small signal levels.
We take Bias frames on both sides of all science exposures – one just before and one just after. If we were to just subtract the average of these frames from our science image we would be adding noise to the result – we therefore need to subtract a smoother bias frame. We have constructed a ‘super bias’ frame as the average of hundreds of bias frames – it is very smooth, but probably has a level that is unrelated to the actual level in each frame.
By scaling the super bias to the mean level of the average bias frames taken at the time of the science frame we get a scaled superbias that we can subtract – it has the right level and very little noise.
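The scaling step amounts to the following – a toy sketch, with invented count levels and frame sizes:

```python
import numpy as np

def scaled_superbias(superbias, bias_before, bias_after):
    """Scale the near-noiseless superbias to the mean level of the two
    bias frames bracketing the science exposure. Returns the scaled
    frame and the scaling factor (the quantity plotted below)."""
    target = 0.5 * (bias_before.mean() + bias_after.mean())
    factor = target / superbias.mean()
    return factor * superbias, factor

# toy frames: superbias at level 400, bracketing biases near 400.8
rng = np.random.default_rng(0)
sb = np.full((32, 32), 400.0)
b1 = 400.8 + rng.normal(0, 1, (32, 32))
b2 = 400.8 + rng.normal(0, 1, (32, 32))
scaled, f = scaled_superbias(sb, b1, b2)
print(round(f, 3))   # close to 400.8/400
```

The factor absorbs the thermostat-induced level drift while the subtracted frame keeps the superbias’ low noise, which is the whole point of the procedure.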
We need to understand how the scaling procedure performs, so we have extracted the scaling factor from the 5000+ exposures we have.
Top frame shows the factor on the superbias as a function of sequence number, and the bottom panel as a function of the Julian Day of the observation.
Most factors are near 1 but some stand out. 9 files have factors above 1.09 – their Julian days (integer part) are:
2455938 (7 images), 2455940 and 2456032 (one each).
A list of the 209 images with factors over 1.08 is here:
The 4 unique JDs are: 2455814 2455938 2455940 2456032, with the majority of cases on 2455814.
These images should perhaps be inspected very carefully for problems with bias.
A close-up of the factors nearest 1 looks like this:
Using the methods described by Chris in the entry below – i.e. at
I have also selected the likely good data by taking extinction into account and looking for linear sequences. This resulted in a set of images that I deem ‘good’. That list can be compared to Chris’ list.
Chris has 3162 ‘good’ images, while I have 2990. The cross between the lists finds 2273 instances on both lists. This list of ‘jointly agreed good images’ is here:
Methods for selecting good images differ slightly: Chris does not yet consider the alpha value found in the EFM method – I select for alphas in a narrow range near the mode of the distribution for each filter.
With converging selection criteria the list above would expand somewhat – perhaps best to keep the criteria a little different to avoid duplication of potential errors.
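The cross-match of the two lists is just a set intersection. A toy sketch with hypothetical filenames – the real lists are the ones linked above:

```python
# Hypothetical image identifiers standing in for the two 'good' lists
chris_good = {"2455938.1", "2455940.2", "2456032.1", "2456061.3"}
peter_good = {"2455938.1", "2456032.1", "2456075.4"}

# 'jointly agreed good images' = intersection of the two selections
jointly_good = sorted(chris_good & peter_good)
print(len(jointly_good), jointly_good)
```

Keeping the two selection methods slightly different, as suggested above, makes this intersection a useful cross-check rather than a duplication of the same criteria.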
The joint list contains 49 unique nights. The distribution of images over nights is:
where the first column gives the number of images for the night. Note that towards the end of the sequence there are few images per night – that is because we had realized the necessity of observing ‘stacks’ rather than sequences of single images, which results in fewer co-added images.
A histogram of these data is shown above.
We have been using the apparent magnitude of the moon compared to the JPL model of its expected magnitude (V band), to check which images suffer from gross problems, such as an incorrect filter in the beam, or light losses from thin cloud, or other reasons. This follows up on previous posts (http://earthshine.thejll.com/#post256, http://earthshine.thejll.com/#post254).
This plot shows the difference between the measured apparent V magnitude of the moon and the JPL magnitude (i.e the difference between instrumental and true magnitude) as a function of airmass. There are about 5000 frames, mainly of averaged data (i.e. up to 100 images averaged together) in the total data set (as of November 2012), of which about 1000 are in the V band, and plotted here.
(a pdf version is also available).
Blue represents one side of the moon (the side with Mare Crisium), green the other side. Opposite sides of the moon have differing distributions of uplands and mare, leading to about a 0.15 magnitude offset between the reflected light and the JPL model (which implicitly assumes a uniform disc). The dotted lines show the expected trend with airmass for an adopted 0.1 magnitudes/airmass extinction — quite close to what we are seeing in the data. There is a lot of structure in this diagram!
To clarify which side of the moon is which: the next plot shows the Crisium side illuminated by sunlight:
This plot shows the Grimaldi side being illuminated.
Now we concentrate on the crescent light on the Crisium Mare side (blue points in the plot):
Concentrating on just this side of the moon, we see a very tight (extinction) relationship with airmass, with notable outliers. They are all on the dim side of the relationship, indicating loss of light relative to the JPL magnitude. The reason for most of these has been identified – most often thin cirrus or haze (this affects the scattered light around the bright side of the moon, which we model with a power law with slope “alpha”); these cases are indicated on the plot with that symbol. The moon went behind the tower and cables on the western horizon in one case (“cables”), and one image has low S/N. Two have “?” – no reason was found for these outliers, so they are simply dropped. This leaves us with a nice-looking sequence of data for this side of the moon. I’ll look into these further as there may still be structure in there as a function of lunar phase.
The other side of the moon, the side with Grimaldi and extensive mare, shows much less clear trends with airmass.
The dotted lines are the same as in the previous figure — they bracket the good data on the Crisium side. Strange things are happening at low airmass (z<1.2) – it turned out that most of these peculiar data are from a single night (JD2455912) as the next figure shows (background of these points shown in blue). Note the airmass scale has changed to make these points easier to see.
The entire night of JD2455912 seems to be a dud — as similar strange happenings show up in the other filters — VE1, VE2, IRCUT (but not clearly in B, as only one good B image was taken on that particular night).
Analysis of the remaining (grossly) discrepant points shows that they are often all on a single night. The next plot shows some of these problem nights:
Blue: Totally discrepant data from JD246691 : drop these.
Red: Near full moon!
Purple: data from JD2456061,2,3 (three nights). For almost all, the moon is too bright by 0.3 mag — possibly bias issues? Will look at this.
The bulk of the green points lie along the expected extinction line, with an offset relative to the other side of the moon (Crisium) of about 0.15 mag. We can bracket these data and use them for the ongoing analysis of earthshine. We look at this in the remainder of this post.
The next plot shows both sides of the moon in green and blue crosses, but for all filters (instrumental magnitude – JPL V-magnitude) as a function of airmass. No data have been dropped yet — it’s all the nights we have.
The colour of the moon in the various bands shows up nicely as an offset relative to JPL V-magnitude (e.g. B-V is of order 0.9, second panel from bottom). The Crisium side of the moon shows much cleaner results than the Grimaldi side, in all colours. The dotted lines bracket the good Grimaldi data, and are the same in all panels. Data which appear between these two lines in bands other than V mean that the V filter was actually in the beam, not the nominated filter. There may be similar effects for all filter combinations, but for the moment we can only pick out misfirings which resulted in the V filter being in the beam. Only a small fraction of the data seem to be so affected, which is very good.
Extinction depends on the wavelength of the filter : this is clearly seen as differing slopes in the zeropoint (ie. the difference between instrumental and true (JPL) magnitude) with airmass.
Once the grossly erroneous observations/nights are cleared out of the data, the sequence of good data as a function of airmass can be picked off the plot (we call this tunneling).
Firstly, opposite sides of the moon are reduced to the same scale, by adding 0.05 mag to the magnitudes (in all bands, B, V, VE1, VE2, IRCUT) on the Crisium side, and subtracting 0.15 mag from the Grimaldi side photometry. This brings both sides onto more or less the same sequence with airmass. We then estimate (by eye) the extinction as a function of airmass in each filter. The results are:
B 0.15 mag/airmass
V 0.10 mag/airmass
VE1 0.08 mag/airmass
VE2 0.06 mag/airmass
IRCUT 0.12 mag/airmass
Interestingly, the extinctions in IRCUT and VE1 are somewhat different, which we wouldn’t expect as the filters are ostensibly rather similar. We should look into this further. In any case, these extinctions allow us to define lines about 0.3 mag wide around each sequence for each filter and pick off the best data. The result is this diagram:
A list of these frames is here:
There are 3162 frames on this list — cut down from about 5000, so about 60% of the frames have survived the “tunneling” process. This is still a quite conservative list because I haven’t
1) checked for the Lunar centering,
2) checked for “alpha”, the scattered light parameter, except for those very obvious cases which showed up by eye.
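The ‘tunneling’ selection built on the adopted extinction values can be sketched as follows – a toy example, where the zeropoint argument and the 0.3 mag band width are the by-eye values quoted in the text:

```python
import numpy as np

# adopted extinction coefficients (mag/airmass) from the by-eye fits
K = {"B": 0.15, "V": 0.10, "VE1": 0.08, "VE2": 0.06, "IRCUT": 0.12}

def tunnel(dmag, airmass, band, zp, width=0.3):
    """Keep frames whose (instrumental - JPL) magnitude lies within a
    band of `width` mag centred on the extinction line zp + k*airmass.
    `zp` is the airmass-zero offset for the filter, read off the plot."""
    line = zp + K[band] * airmass
    return np.abs(dmag - line) < 0.5 * width

# toy V-band case: two frames on the sequence, one cloud-affected
am = np.array([1.5, 3.0, 2.0])
dm = np.array([0.15, 0.30, 0.75])   # the last frame is far too dim
print(tunnel(dm, am, "V", 0.0))
```

Frames outside the band are the cirrus/haze/cable casualties discussed above; the survivors are the ~60% that make the list.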
Final plot: using just the good data, the extinction corrected apparent magnitudes at MLO are compared to the JPL model as a function of Lunar phase. The scatter is small in some of the filters — we are down to absolute photometry errors of just a ~ 0.1 mag relative to JPL, which is pretty good considering we cover a range of airmass mainly from 1 to 5, but with some of the data out to airmasses of 15!
There is a hint we can do better still, because there is still structure as a function of phase — for example, the B band magnitudes at negative phase are a bit brighter still than the positive phases — indicating we didn’t get the correction for each side of the moon quite right. There are also trends in the reflected light with phase, but this probably reflects the fact that phase and airmass are somewhat correlated (small absolute phases tend to get observed at higher airmass, because the moon starts the night closer to the horizon).
The earthshine comes from the various parts of the Earth that are turned towards the Moon, and the Sun – all the clouds and oceans and deserts and ice-caps that are illuminated and visible from the Moon contribute to the Earthshine. Which parts contribute most?
We take a representative image of Earth, as seen from space, and investigate where the flux mainly originates.
Splitting the above JPG image into the R, G and B channels we can analyse where e.g. 10, 50 and 90% of the light comes from. That is, we seek the pixels that contribute these fractions of the total flux, and identify them in images. Note that R, G and B refer to other wavelength intervals than the B, V, VE1, VE2 and IRCUT bands we have – our B band is bluer than the ‘B’ used in JPG images.
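Finding the pixels that carry a given fraction of the flux is a sort-and-accumulate exercise. A numpy sketch – the function name is ours, and the toy ‘channel’ is invented:

```python
import numpy as np

def top_flux_mask(channel, fraction):
    """Boolean mask of the brightest pixels that together carry
    `fraction` of the channel's total flux."""
    flat = channel.ravel().astype(float)
    order = np.argsort(flat)[::-1]          # brightest pixels first
    cum = np.cumsum(flat[order])
    n = np.searchsorted(cum, fraction * flat.sum()) + 1
    mask = np.zeros(flat.size, bool)
    mask[order[:n]] = True
    return mask.reshape(channel.shape)

# toy 'channel': one bright cloud pixel over a dim background
ch = np.ones((4, 4)); ch[0, 0] = 84.0
m50 = top_flux_mask(ch, 0.5)
print(m50.sum())   # the single bright pixel already carries >50%
```

Applied per channel, this reproduces the 10, 50 and 90 percent maps discussed below: a compact mask for the red channels (clouds) and a more diffuse one for blue (Rayleigh scattering).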
In the three frames we see three rows (B, G and R, from the top) – on the left in each panel is the original R, G or B band image, while to the right are the pixels contributing 10, 50 and 90 percent of the total flux in the image. The order of the panels is: top left 90%, bottom left 50%, and top right 10%.
In the 90% images at top left we note that the B image (top row) looks different from the R and G images – the light in the B band comes from atmospheric scattering – Rayleigh scattering, and aerosol scattering – as well as the ocean and the clouds; other bands have more of their flux coming from clouds.
Variations in the blue may therefore tell us more about the atmospheric state than do the other, redder, bands. The Rayleigh scattering is due to molecular scattering – as long as the composition of the atmosphere is the same this ought to be constant in time; but some of the blue scattering is also due to aerosols and thus we may have a tool to investigate variations in the aerosol load. The longer-wavelength bands will tell us more about the continents. All bands are quite dominated by clouds – a small cloud can reflect as much light as a larger un-varying continental area.
The above is repeated here on another image of the Earth – more realistic as it is half-Earth. Image from Apollo 8.
And here are the B, G and R images and the 90% percentiles:
Top to bottom: B, G, R. Left: the B, G or R-band image – right: the 90th-percentile image.
We again see that the light contributing to the blue image (top) is more diffusely distributed than in e.g. the red (bottom) case where most of the light comes from variable features like clouds. This implies that we should expect larger variability in our albedo data for the red images than the blue image.
While waiting for the CCD to be fixed, I am summarizing the discussions we have had on this blog. The material should be used largely in section 6 of the manuscript we submitted, leaving the other parts of that paper – or splitting it into two papers. Here is the submitted manuscript as well as the start of a summary; the summary is really just a collection of links to this blog.
What is the effect on observational coverage if we have different numbers of observatories and observe in different ways?
Since this depends on what we mean by ‘observational coverage’ we define OC as ‘largest fraction of the time with continuous observations’. Note that this is different from, say, ‘largest fraction of days where at least one good observation was made’, OK?
For 1,2,3 … observatories chosen from the list of known observatories (in the IDL code observatory.pro) and evaluating at 15 minute intervals we get the following (non-optimal, but pretty good) results when January and July are combined:
Upper panel: the red curve shows OC for the Moon above 2 airmasses and the Sun lower than 5 degrees under the horizon. Blue is the same but for 3 airmasses. The dip at 5 in the red curve is an artifact of the search method we use – an exhaustive search among the 44 available observatories would be too expensive, so we search for the best of 100 random picks of 1, 2, 3, 4 … from the list of 44. Lower panel: the same, but evaluated for more stations, best of 200, and with a 10% random – uncorrelated – occurrence rate of ‘bad nights’ (clouds, for instance).
We see that, compared to this we have less OC – that’s because that search was for ‘Moon above horizon, Sun below’ instead of the more realistic constraints used here.
We see that extending observations from AM 2 to AM 3 is equivalent to adding two observatories for midrange values.
We see that adding many more observatories is in the end a losing proposition – the 7th observatory on the blue line adds nothing compared to the sixth.
a) More exhaustive searches can be made, but they take time. This would probably smooth the curves above and also uncover slightly better solutions.
b) We have restricted the site choices to the positions of known observatories. Since most observatories are in the Northern Hemisphere (where the Moon is not as high in the sky during the summer months) there is a handicap.
c) The method is slow – because the altitude of Sun and Moon are evaluated from very precise routines. Simpler and faster expressions for altitude could be used – but one for each observatory would be needed.
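The best-of-N random-subset search described above can be sketched like this – a toy example with an invented observability matrix; the real calculation evaluates precise solar and lunar altitudes for the 44 stations:

```python
import numpy as np

def oc(cov):
    """Largest fraction of the timeline covered by one continuous run
    of observability (cov: boolean array over 15-minute slots)."""
    best = run = 0
    for c in cov:
        run = run + 1 if c else 0
        best = max(best, run)
    return best / len(cov)

def best_subset(obs_matrix, k, trials=100, rng=None):
    """Best of `trials` random picks of k stations, as in the text;
    obs_matrix[i, t] is True when station i can observe slot t."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = obs_matrix.shape[0]
    best_val, best_pick = -1.0, None
    for _ in range(trials):
        pick = rng.choice(n, size=k, replace=False)
        val = oc(obs_matrix[pick].any(axis=0))
        if val > best_val:
            best_val, best_pick = val, pick
    return best_val, sorted(best_pick)

# toy observability: 6 stations, each with a shifted observing window
slots = 96                                   # 24 h of 15-min slots
obs = np.zeros((6, slots), bool)
for i in range(6):
    obs[i, i * 16:i * 16 + 40] = True        # overlapping windows
print(best_subset(obs, 3)[0])
```

The non-exhaustive search is why the curves are slightly ragged: a lucky draw at one k can beat an unlucky draw at k+1, producing artifacts like the dip at 5.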
The Moon is not observable all the time from a single observatory, all year round. To get complete – or almost-complete coverage we ask: where should the earthshine telescopes in the network be placed? The plot below shows, with coloured symbols, when the Moon is observable (defined as Moon up and Sun down), for 5 observatories around the world. MLO is Mauna Loa, MSO is Mount Stromlo in Australia, LCO is Las Campanas in Chile, SAAO is South African Astronomical Observatory. The underlying curve shows the earthshine intensity for one month. The Earth was modelled as a cloud-free sphere with bright continents and dark oceans – hence the zig-zag nature of the curve. Addition of randomly placed clouds would tend to dampen the amplitude of the zig-zags by about a factor of two.
When the underlying curve is not covered by coloured symbols it means that the Moon is not observable from any observatory. We note that this happens particularly when the earthshine is intense – that is near Full Earth, which is New Moon. This is clearly because the Moon is almost never in the sky alone when it is New (it has either risen shortly before the Sun, or will set soon after the Sun). Observability is best near Full Moon (New Earth) near day 15 – but then observing earthshine accurately is very difficult.
On average a single observatory experiences the Moon above the horizon and the Sun below, 25% of the time. Given the choice of these 5 observatories the total observability depends on time of year – given in the panels.
Clouds and the need for the Moon higher in the sky cuts the observability.
The above simulation is for two months of the year – the declination of the Moon changes with the seasons so the relative contribution by each observatory changes with time.
In this post, we noted that there is a daily progression in the obs/mod ratio of the DS/BS ratio itself (let’s call this thing ‘the relative albedo’ from now on!). We speculated as to its origin.
We have now calculated the “relative albedo” with two sets of terrestrial models – one a uniform Lambert-reflectance sphere and one a non-uniform sphere with ocean-land contrast but with a Lambert reflectance, while keeping the set of observations constant. We compare these two sets of results:
The upper panel shows the “relative albedo” for a non-uniform Earth and the lower one for a uniform Earth. The linear regression slope is printed on each line.
We note that the ‘progression’ for V has changed sign in going from a non-uniform Earth to a uniform Earth. The magnitude of the slope in B has changed by 50%.
We conclude from this that the ‘daily progression’ is due to the terrestrial modelling of surface albedo on Earth.
Or: we can state that we are seeing actual surface albedo variations during a nightly sequence of observations!
During this set of observations (night of JD 2456046) the B-band albedo of Earth decreased slightly, while the V-band albedo rose. This is the albedo of the half-Earth shown in the figures in the post here.
For fun, we plot here the B and V relative albedos for two nights with overlapping phase intervals:
The two nights are 2456075 and 2456016. For the B progressions it looks like there is a difference at phase -96 degrees of about 0.15 in the relative albedo, and given the scatter of the points we might be able to confidently see a difference of 1/10th of that, I guesstimate. That is 1.5% at a given point in phase (that is, time).
BBSO discuss ‘1-2% per night’.
If we take the above results at face value we seem to be able to track albedo changes through a night. Averaging such a progression, in order to get the ‘nightly mean value’, would of course yield an average value, and an average can be assigned a ‘standard deviation of the mean’ which can become very small if you have enough points – if averaging the points is meaningful at all. It is not very meaningful to average the above progressions, since they change on scales longer than our observing sequences.
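The point about the standard deviation of the mean can be made concrete. In this sketch (illustrative numbers only, not our data) the formal error shrinks as 1/sqrt(N) even though the sequence has a real trend, so a "nightly mean ± SDOM" hides the within-night change:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
phase = np.linspace(0.0, 1.0, N)
# A fake nightly sequence with a genuine trend (0.10 over the night) plus noise
albedo = 1.00 + 0.10 * phase + rng.normal(0.0, 0.02, N)

mean = albedo.mean()
sdom = albedo.std(ddof=1) / np.sqrt(N)   # standard deviation of the mean

print(f"nightly mean = {mean:.4f} +/- {sdom:.4f}")
# The SDOM is tiny, but the sequence actually drifted by 0.10 --
# averaging is only meaningful if the quantity is constant over the night.
```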
But anyway, we need to start understanding what BBSO does in their data-reduction. Notably – do they assume Earth is a Lambert sphere? If so, we have perhaps arrived at their “A*” quantity? In that case we now just need to observe for a few decades!
Given only 1.5 years of data we should for now focus on how we present the results, and how we get something ‘meteorological’ out of the data – as opposed to ‘climatological’. For instance, does the level-difference between nights 2456016 and 2456075 correspond to a change in albedo in the relevant areas of Earth that we can determine from Earth observation data from satellites? More clouds, perhaps? 2456016 is the upper sequence.
Let’s see! (more to follow…)
For all our observations, we plot the lunar phase (in the convention we use) against the azimuth (degrees East of North) of those observations.
We see that most positive phases are observations East of the meridian; most negative phases are West. Only rarely has the Moon been followed past the meridian.
We consider now the DS/BS ‘ratio of ratios’ – that is, the DS/BS ratio in observations divided by the DS/BS ratio in the model images. We have used the Clementine albedo map scaled to the Wildey levels, and Hapke 1963 reflectances in the model images.
We look at the data from several nights in the phase-interval from -130 to -50 degrees (Full Moon is at 0 degrees; negative phases imply we are observing the Moon in the West, i.e. setting):
What we see are short sequences of points – each sequence covering one night. We note how many of these nightly sequences ‘dip’ during the progression of one night. We note the smoothness of each nightly sequence – this bodes well for our original idea that albedo could be extracted with high precision. Jumps from night to night are more troublesome! But we need to first understand the ‘progression’.
Inspection of the time-stamps on the data in a ‘progression’ shows that time moves to the right – that is the ‘ratio’ plotted starts out high at a larger negative phase and ends as a lower ratio at a less negative phase (moves left to right). Since the observations are in the West we are looking at a setting Moon, or increasing airmass along a progression.
On a single night several things happen:
1) the Moon is seen through different airmasses, and
2) the Earth turns.
[recall that the synthetic models contain all the correct distances as functions of time, so effects due to that should divide out].
1) If the effect is due to the increasing (or decreasing) airmass we need to understand how a ratio of intensities taken from two areas on the Moon, obtained from the same image, can show a dependence on airmass. Differential extinction across the lunar disc? Really? We are looking at 10-30% changes in a single progression!
2) Since these observations are all at negative phase they may all represent the same configuration of Earth and Moon – i.e. we could be seeing the effect of the same parts of the Earth rotating into view through the night. From a fixed observatory there is a tendency for the same areas on Earth to reflect light to the Moon on subsequent nights – in our case either the American sector and oceans on either side, or the Austral-Asian sector.
We should not forget 1 – but let us turn to 2 for now. We can extract ‘scenarios’ for the Earth at the moments of observation corresponding to the data-points above. Inspecting the observing scenarios could perhaps teach us what is going on.
We select the progression near -91 degrees. That is a set of observations from JD 2456046.75 to 2456046.87, or about 3 hours (the date is April 30 2012). At the start the Earth was in this configuration (crosses on the globe show the part illuminated by the Sun):
The configuration at the end looks like this:
That is, the Earth has rotated slightly during the observations, and more of the Asian landmass has swung into view, meaning that more of the earthshine is coming from the Asian landmass than at the start.
Can we estimate the expected change in Earth’s reflectivity during this sequence? Yes. We have quite elaborate code for that, known as “Emma’s code” after a student who worked on it in Lund. It shows the Earth from any angle at any time, with superimposed albedo maps of clouds as well as continents. A reflectance law is imposed. I think there is even some Rayleigh scattering added to give the ‘bright edge’ of the Earth (see any Google Earth image at startup, as you zoom in – the edge is bright, and that may even be realistic).
That code is quite complex and is not run in a jiffy. [We need more students!] But we can make a perspective model of the Earth using simple IDL routines. We can wrap surface albedo and cloud maps onto a sphere and view it from the angle the Moon is at, for any given time. There is no reflectance imposed – just the albedo maps and the perspective view. The two situations above look like this with cloud maps for the moment of observations (taken from the NCEP reanalysis product TCDC):
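A minimal version of that perspective trick (our illustration here, not “Emma’s code” or the IDL routines) can be done with plain NumPy: invert an orthographic projection to sample a lat/lon albedo map as seen from the sub-lunar point, with no reflectance law applied:

```python
import numpy as np

def ortho_view(albmap, sublon_deg, sublat_deg, npix=200):
    """Orthographic view of a (nlat, nlon) lat/lon albedo map, seen from
    the direction (sublon, sublat). No reflectance law is applied."""
    nlat, nlon = albmap.shape
    y, x = np.mgrid[-1:1:npix*1j, -1:1:npix*1j]
    r2 = x**2 + y**2
    on_disc = r2 <= 1.0
    z = np.sqrt(np.where(on_disc, 1.0 - r2, 0.0))

    lat0 = np.radians(sublat_deg)
    lon0 = np.radians(sublon_deg)
    # Invert the orthographic projection to (lat, lon) on the sphere
    lat = np.arcsin(np.clip(y * np.cos(lat0) + z * np.sin(lat0), -1.0, 1.0))
    lon = lon0 + np.arctan2(x, z * np.cos(lat0) - y * np.sin(lat0))

    i = np.clip(((lat + np.pi / 2) / np.pi * (nlat - 1)).astype(int), 0, nlat - 1)
    j = (np.degrees(lon) % 360.0 / 360.0 * (nlon - 1)).astype(int)

    return np.where(on_disc, albmap[i, j], np.nan)

# Demo with a fake two-band map: dark 'ocean' in the south, bright 'land' north
fake = np.vstack([np.full((90, 360), 0.1), np.full((90, 360), 0.4)])
img = ortho_view(fake, sublon_deg=120.0, sublat_deg=0.0)
print(np.nanmean(img))   # mean disc brightness, as used for the 42.6 / 40.6 numbers
```

Replacing the fake map with a surface-albedo map multiplied by a cloud map (TCDC, say) gives the kind of images compared in the next paragraph.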
The simple average of the brightness of the two images above are 42.6 for the former and 40.6 for the latter – so it is getting dimmer by about 4% as time passes – perhaps because bright Australia (cloud-free at this time) is entering dusk. The cloud map images are available at 6 hourly intervals only, so there has not been time for the cloud image to change – it is simply rotated and other bits have become hidden by the day-night sector advancing.
We have extracted the mean value of alfa (the parameter that describes how broad the PSF for a given image is) for a given night, and the corresponding value of the extinction coefficient for that night. We have only a few nights where the extinction could be determined.
The plot of one vs the other looks like this:
(download and look, etc)
We see that alfa is narrowly constrained while the extinction has a broader distribution. In general there is no strong relationship between the values, but if we ignore outliers and their effect on the regression (plotted as a red line) we see a general tendency for high extinctions and small alfa values to go together. For B it is quite clear; V would be clear but for the outlier; IRCUT also seems clear; VE1 and VE2 are all over the place. A broad PSF corresponds to a small value of alfa, and we expect broad halos (i.e. broad PSFs) on hazy or turbid nights – nights on which the extinction should also be large.
Factors that determine scatter in alfa are things like image focus and how the nonlinear fitting routine determined it should stop. Physical factors include haze and thin cloudiness on the night in question.
Factors influencing the scatter in extinction include the actual regression: we used all nights with more than 3 observations for determining the extinction line, and for which the determinations from 3 regression methods agreed to an S.D. of less than 0.02. The three methods were ordinary least squares and two ‘robust’ methods (“ladfit” and “robust_poly_fit” in IDL). During the actual regression it was necessary to eliminate some outliers by hand. Physical factors include haze and cloudiness and whether the halo around the Moon was well captured inside the image frame.
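Why the robust fits matter can be shown in a few lines. We don’t have IDL’s ladfit/robust_poly_fit here, so as a stand-in we compare ordinary least squares with a Theil-Sen estimator (median of pairwise slopes) on a fake extinction regression with one bad point:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Fake Langley-style data: magnitude vs airmass, true extinction k = 0.10
airmass = np.linspace(1.0, 3.0, 12)
mag = 10.0 + 0.10 * airmass + rng.normal(0.0, 0.005, airmass.size)
mag[10] += 0.3   # one high outlier at large airmass (e.g. a passing cloud)

# Ordinary least squares slope -- dragged up by the outlier
k_ols = np.polyfit(airmass, mag, 1)[0]

# Theil-Sen: median of all pairwise slopes -- robust to a single outlier
slopes = [(mag[j] - mag[i]) / (airmass[j] - airmass[i])
          for i, j in combinations(range(airmass.size), 2)]
k_ts = np.median(slopes)

print(f"OLS k = {k_ols:.3f}, Theil-Sen k = {k_ts:.3f}")  # TS stays near 0.10
```

When the OLS and robust slopes disagree by more than the 0.02 tolerance, the night is rejected – exactly the filtering described above.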
The relatively low value of alfa for the VE2 filter is still not understood.
We weren’t sure exactly how JPL models the Moon’s brightness in HORIZONS.
The following plot shows that JPL calculates the actual observatory-object distance in giving the Moon absolute magnitude, but does not use any albedo map of the Lunar surface — so it is symmetrical with phase on either side of new moon (say).
Lower panel : apparent magnitude of the Moon over a 12 month period (September 2010-2012) computed with Horizons, and shown in a narrow range of the illuminated fraction (40 to 50 percent). The scatter in the apparent magnitude around the trend is ~ 0.1 mag.
Most of this scatter is due to the changing distance of the moon around its orbit (between ~0.019 and 0.023 light minutes). The middle panel shows the magnitude if we correct the photometry to a standard distance (0.0215 light minutes) — this reduces the scatter to 0.03 mag. (The distance provided by Horizons includes the position of the observatory on the Earth — this can make a difference of up to ~12,000 km in the Moon-Observatory distance).
The upper panel shows the residuals in a least squares fit to the middle panel, as a function of Julian day over the 1 year sample period. This clearly shows that most of the 0.03 mag scatter in the middle panel is due to the changing Sun-Earth distance during the course of the year. Accounting for this reduces the scatter to <0.01 mag. We interpret this to mean that no surface features of the Moon are being included in the Horizons’ apparent magnitude estimate, since we expect considerably more scatter than that (but we are working on this!).
At the moment we are displaying our results by providing plots of ratios of the DS/BS ratio in observations, to that in models. We do this chiefly to get rid of common factors – such as solar irradiance and distance-related geometry.
What remains in such a ‘ratio of ratios’ are the effects of:
1) The Earth’s actual albedo,
2) the model’s Earth albedo,
3) Earth’s reflectance (real vs modelled),
4) lunar surface albedos in the reference patches (real ratio vs model ratio)
5) effects due to the choice of lunar reflectance model.
We are really only interested in 1.
2 is an assumed value so that the results we get for terrestrial albedo are relative to that choice. We use a value of 0.31.
3 is a choice – we expect that any errors made in this choice will be seen as a phase-dependency in the results, and we can therefore control or at least understand it. Earth is more Lambertian than the Moon. The Earth has edge-darkening, the Moon has very little. We use a Lambertian model for an otherwise uniform Earth.
4 is observable, but only with difficulty – you need a good total lunar eclipse, then the albedos in the two patches on the Moon – or their ratio – can be measured. BBSO has done this. We have not (yet). In the model we make a choice, based on which lunar albedo map we use. As long as it is fixed the results will be relative to that choice. Perhaps we can use published images of total lunar eclipses to extract the ratio?
5 is a choice – we have models for Lambertian reflectance, as well as the Hapke 63 model and other, as yet untested, more advanced reflectance models. We expect that incompletenesses in these models will be seen as a phase-dependency in the results.
Of the above only some could possibly induce a phase dependency: 3 and 5.
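The difference the reflectance choice (points 3 and 5) makes is easiest to see at the limb. As a simple stand-in for the full Hapke 63 model (which we do not reproduce here), compare a Lambert surface with a Lommel-Seeliger one, the classic nearly limb-flat lunar-like law:

```python
import numpy as np

# Radiance factors at zero phase angle (Sun behind the observer), so the
# incidence and emission angles are equal and vary from disc centre to limb.
mu = np.cos(np.radians([0, 30, 60, 80]))   # disc centre ... near the limb

lambert = mu                       # Lambert: darkens toward the limb
lommel_seeliger = mu / (mu + mu)   # mu0/(mu0+mu) with mu0=mu: 0.5 everywhere

for m, la, ls in zip(mu, lambert, lommel_seeliger):
    print(f"mu={m:.2f}  Lambert={la:.2f}  Lommel-Seeliger={ls:.2f}")
# The Moon's nearly limb-flat appearance is why a Lambert lunar model fails,
# while Earth (clouds, Rayleigh scattering) is closer to Lambertian.
```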
We have reduced all data using lunar models based on both the Lambertian reflectance and the Hapke 63 model. We show them next – look here for a discussion of what we are actually showing: ‘ratio of ratios’ and all that:
(as before, plot needs to be downloaded as it does not show up well on this blog).
The plot is for 5 filters with the EFM method applied. The left column is the ratio of DS/BS in obs to the same in model, while the right column is DS/total in obs relative to models, where ‘total’ means the disk-integrated brightness of all source counts (plus a few stars that we can ignore!).
The first page is for the Lambertian lunar reflectance. The last page is for the Hapke 63 reflectance.
We notice that left and right columns are quite similar. We notice that the ‘jump up’ at angles corresponding to about 40 degrees from New Moon is much smaller in the Hapke 63 model results. Inspection of the files that correspond to the individual points in the ‘jump up’ and those next to the jump, reveals that the ‘jumped up’ points have a different processing history: they are the result of coadding and averaging single image sequences, while the rest are stacks of observations that were averaged. Why these should be different is unclear, as yet.
We conclude that there was an effect of lunar reflectance model on our results – and that Hapke 63 is better than Lambert. This is not surprising as the Moon is well-known not to be Lambertian in its reflectance.
So – we are beginning to see observations constrain theory!
There is still some scatter to account for, and we shall return to this matter, using the estimates of nightly seeing available from the measured alfa.
There is also some ‘slope’ in the result wrt phase – so the EFM method has a success rate that is phase dependent. While the second-best method (BBSO logarithmic; not shown here) has some ‘upturn’ towards Full Moon (centre of plot), the EFM has an even slope down towards FM. Perhaps some empirical fine-tuning of the method will remove this problem too.
We have isolated the very best images in V band, in which the Moon is well centered and well exposed.
We measure the apparent magnitude of the moon in these images by simply measuring the total flux, and applying the standard photometry relations (i.e. http://earthshine.thejll.com/#post229).
We correct for extinction of 0.10 mag/airmass (from http://earthshine.thejll.com/#post249).
We then compare the apparent magnitude to the expected apparent V magnitude from the JPL ephemeris for the Moon (http://ssd.jpl.nasa.gov/horizons.cgi). This uses the relation quoted in Allen “Astrophysical Quantities”, which actually comes from Eqn 8 of this paper: http://adsabs.harvard.edu/abs/1991PASP..103.1033K.
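The calibration steps just described amount to a few lines. In this sketch the zero point, flux and JPL magnitude are placeholders, not our measured values (those are in the linked posts):

```python
import math

def apparent_mag(total_counts, exptime, zero_point, k, airmass):
    """Instrumental -> apparent magnitude with an extinction correction.
    zero_point and k (mag/airmass) come from the photometric calibration;
    the values used below are placeholders, not our measured ones."""
    m_inst = -2.5 * math.log10(total_counts / exptime)
    return m_inst + zero_point - k * airmass

m_obs = apparent_mag(total_counts=2.0e9, exptime=0.01,
                     zero_point=15.0, k=0.10, airmass=1.4)
m_jpl = -10.5   # expected V magnitude from HORIZONS (placeholder value)
print(f"delta mag = {m_obs - m_jpl:+.2f}")
```

The plotted quantity below is this delta magnitude, observation minus ephemeris.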
The plot shows the difference in the apparent magnitude as a function of phase (new moon = 0). Blue and green show opposite sides of the moon.
There is a bit of scatter in these data, but there are two clear sequences around phases 50 to 100 showing that opposite sides of the moon differ in luminosity by about 0.1 mag. This is quite a lot less than we expected to see, viz. this post:
In a previous post (here) we have compared the Wildey and Clementine albedo maps. These maps are important for our synthetic modelling code since the albedo (along with reflectance assumptions and correct geometries) are the basis of constructing realistic model images, used in analysis. We can compare these two maps very directly by accessing both and plotting the mean albedo in boxes at common lon,lat positions:
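The box-averaging used for this comparison can be sketched as follows (fake maps stand in for the Clementine and Wildey data, with a factor-of-two scaling imposed so the recovery is visible):

```python
import numpy as np

def box_means(albmap, nbox_lat=18, nbox_lon=36):
    """Mean albedo in nbox_lat x nbox_lon (lat, lon) boxes."""
    nlat, nlon = albmap.shape
    return albmap.reshape(nbox_lat, nlat // nbox_lat,
                          nbox_lon, nlon // nbox_lon).mean(axis=(1, 3))

rng = np.random.default_rng(2)
wildey = rng.uniform(0.05, 0.25, (180, 360))                 # stand-in map
clem = 0.5 * wildey + rng.normal(0.0, 0.002, wildey.shape)   # 'darker' version

w = box_means(wildey).ravel()
c = box_means(clem).ravel()
slope = np.polyfit(w, c, 1)[0]
print(f"Clementine-vs-Wildey slope ~ {slope:.2f}")  # close to the imposed 0.5
```

Plotting `c` against `w` box by box is exactly the comparison shown in the figure.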
Evidently, the Clementine albedos are lower than the Wildey ones by a factor of two for dark areas and by some tens of percent at brighter areas.
This may be an explanation for the discrepancy we have between observed morning/evening brightness ratios and modelled ones, which Chris Flynn put his finger on.
We seem to have [not shown, but material could be inserted] an observed difference (expressed in magnitudes) of 0.12-0.14 magnitudes at absolute phase 90 degrees between the morning and evening integrated brightness. In models, based on Clementine albedo and either Hapke 63 or Lambert reflectances the difference is more like 0.3 magnitudes. If the Wildey map is more correct than Clementine, in terms of the highlands/mare albedo, then using the ‘flatter’ Wildey albedos in the synthetic code would help on the morning/evening brightness issue by lowering that ratio in the models.
The issue with using Wildey instead of Clementine is that Clementine is a global map – Wildey only covers something like -89 to 89 degrees in longitude and something similar in latitude so there is no remedy for modelling under lunar libration. We could ‘scale’ the Clementine map, using the above relation and see what that gets us, though.
Before we do that, we should fully understand how our own synthetic code uses the Clementine map – I believe there is more to its use than merely being used as a lookup-table. Hans will be able to tell us about this.
A second-order robust polynomial fit to the above data (points where Clem > Wild were omitted) is:
Earlier we showed the performance of the linear and logarithmic BBSO method. Now we have added the EFM method.
As before, plots are not shown well on this blog, so please download this pdf file and look at it:
You see the same 5 panels on each page – one panel per filter. On the panels you see the behaviour of the ‘ratio of ratios’ against phase – that is, the DS/BS ratio in observations relative to a set of synthetic lunar model images.
First page: RAW data – i.e. no scattered light has been removed. Full Moon is at the centre of the plot so we see the effect of scattered light – the obs/model ratio is increasingly not 1 as we near FM.
Second page: the linear BBSO method has been applied to images. We see a reduction in scattered light – points ‘move down’ towards the 1-line.
Third page: logarithmic version of BBSO method – a slight improvement is seen.
Fourth page – the new one: the EFM method – a quite large improvement is seen over the best of the others! Many of the selected points now lie on a flat sequence, except points towards New Moon, and some outliers.
The EFM data have been ‘selected’ in the sense that the parameter alfa, determined from images, has been used to select ‘good cleanups’. A histogram of the detected alfa values was made, a notable peak found, and only images with alfa in a narrow range were used for the above plot. Alfa values between 1.67 and 1.73 were picked. The absence of VE2 points is due to this – the peak of the VE2 alfa distribution is between 1.60 and 1.66. We must investigate whether these are ‘good’ solutions. [added later: a brief visual inspection seems to imply that the 1.7 solutions are the good ones – not the majority, which for VE2 lies in the peak at 1.63. Hmm.]
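The alfa selection described above is just a histogram-peak cut; schematically (the alfa values below are simulated, with a main peak near 1.70 and a low-side population like VE2’s):

```python
import numpy as np

rng = np.random.default_rng(3)
# Fake per-image alfa values: a main peak near 1.70 plus a lower population
alfa = np.concatenate([rng.normal(1.70, 0.01, 300),
                       rng.normal(1.63, 0.02, 60)])

counts, edges = np.histogram(alfa, bins=np.arange(1.55, 1.85, 0.01))
peak = np.argmax(counts)
lo, hi = edges[peak] - 0.03, edges[peak + 1] + 0.03   # narrow window around the peak

good = (alfa >= lo) & (alfa <= hi)
print(f"peak bin {edges[peak]:.2f}-{edges[peak + 1]:.2f}, "
      f"kept {good.sum()} of {alfa.size} images")
```

For VE2 the peak itself sits lower, so a cut fixed at 1.67-1.73 removes nearly everything – which is why a per-filter look at the distributions is needed.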
Speculatively, we note the ‘turnup’ of points towards New Moon. We have earlier discussed that this may be due to some feature or failure of the synthetic lunar image model to correctly portray the Moon at large phase angles. On the other hand the ‘turnups’ now look less like a gradual sequence, and more like a ‘jump’ up. The ‘jumps’ occur near 40 degrees from New Moon – is this the ‘rainbow angle’? The jump seems large – the rainbow angle paper spoke not of factors of 2 and 3 and 4 but of percentages. On the other hand the phenomenon has never been observed, as far as we know. It would be nice to have points ‘outside’ the rainbow angle to see the jump go down again (if this is the rainbow, that is). We note here that as we get closer to New Moon the lunar crescent narrows and it becomes increasingly important where we place the ‘patch’ in which the BS brightness is measured – here is an opportunity to experiment with DS/total ratios, instead.
Work continues …