I am testing a new method to calculate bias and flat fields. The principle is this: if several images are taken of the same scene at different exposure times, then for each pixel one can plot counts against exposure time. The points should lie on a straight line (provided the shutter works, so we can trust the exposure time) with the intercept at the bias value for that pixel and the slope equal to the gain. The gain and the flat field are the same thing, just scaled differently.
I did this for the V filter using AIR, ND1.0, ND1.3, and ND2.0, and using the hohlraum lamp as the source. There is an ND0.9 filter setting that appears to be just AIR, so we ignore that one. For each pixel a regression of counts against exposure time (taken as the requested exposure time – as noted earlier in this blog, the measured exposure time no longer seems to do anything) was made, and the intercept and slope were stored along with their uncertainties and the correlation between regressor and regressand.
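The per-pixel regression can be done in closed form over the whole image stack at once. A minimal sketch, where the frame stack, exposure times, and true bias/gain values are all made up for illustration:

```python
import numpy as np

# Simulated stand-in for the real data: a stack of frames with shape
# (n_exposures, ny, nx) and the requested exposure times in seconds.
rng = np.random.default_rng(0)
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # requested exposure times (s)
ny, nx = 4, 4                                     # tiny demo frame
true_bias, true_gain = 395.0, 120.0               # illustrative values
frames = true_bias + true_gain * t[:, None, None] \
         + rng.normal(0.0, 2.0, (t.size, ny, nx))

# Ordinary least squares, vectorized over all pixels.
n = t.size
St, Stt = t.sum(), (t * t).sum()
Sy = frames.sum(axis=0)                           # per-pixel sum of counts
Sty = (t[:, None, None] * frames).sum(axis=0)
denom = n * Stt - St**2

gain = (n * Sty - St * Sy) / denom                # slope -> gain image
bias = (Sy - gain * St) / n                       # intercept -> bias image

# Residual variance gives standard errors on slope and intercept.
resid = frames - (bias + gain * t[:, None, None])
s2 = (resid**2).sum(axis=0) / (n - 2)
gain_err = np.sqrt(n * s2 / denom)
bias_err = np.sqrt(s2 * Stt / denom)

# Correlation between exposure time and counts, per pixel.
r = (n * Sty - St * Sy) / np.sqrt(
    denom * (n * (frames**2).sum(axis=0) - Sy**2))

flat = gain / gain.mean()                         # flat field, normalized to mean 1
```

The slope image normalized to unit mean is the flat field; the slope and intercept errors are what the uncertainty maps below show.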
More results are on the way but so far this is what we get for the flat field and its uncertainty:
On the left is the flat field normalized to mean=1. On the right is the uncertainty in each pixel expressed as a percentage of the FF. The color bar at the bottom gives the colors for the uncertainty – i.e. the uncertainties are in the range 0.19-0.28%.
The FF looks familiar – bars, spots and the edge effects. I suggest it could be compared to the ones Henriette has from analysis of a few sky sessions as well as the dome flats and some lamp flats, for the thesis.
The plot for the ‘bias’ is more puzzling:
The bias is on the left and the values are near 650-690. The uncertainty is on the right and is around 10%.
The value for the bias is odd – we are used to 394-396. The uncertainty is huge. We see the signature of ‘dust spots’ on the bias and a little of the ‘barred structure’ that we know from the FF. There seems to be some ‘cross-talk’ during the regression.
I think the large calculated bias is due to an offset in the exposure times – if the requested exposure time is larger than the actual one, the intercept – i.e. the bias – will be overestimated. We can therefore calculate the offset as the difference between the calculated and observed bias, divided by the gain. We have the observed bias from a large number of dark frames taken at the same time as the V images. Calculated and observed biases and the gain are available as images, so the offset comes out as an image of the same size. Taking the average of this we find 0.010 +/- 0.0008 s. That is – the exposures (at least this time) were shorter by 10 ms than what we requested. The spread in the offset is due to the roughly 10% noise found in the calculated bias image, not the much more accurately determined gain image.
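The offset estimate follows from the model: if the requested time exceeds the true time by Δt, the fitted intercept absorbs gain·Δt, so Δt = (bias_calc − bias_obs) / gain per pixel. A sketch with simulated images standing in for the three real maps (the 10 ms offset is built in here, not measured):

```python
import numpy as np

# Simulated stand-ins for the regression intercept (bias_calc), the
# dark-frame bias (bias_obs), and the gain image. Values illustrative.
rng = np.random.default_rng(1)
shape = (64, 64)
gain = rng.normal(120.0, 1.0, shape)              # ADU/s, well determined
bias_obs = rng.normal(395.0, 0.5, shape)          # from dark frames
dt_true = 0.010                                    # built-in 10 ms offset
bias_calc = bias_obs + gain * dt_true + rng.normal(0.0, 1.0, shape)

# Per-pixel offset image and its average over all pixels.
dt_img = (bias_calc - bias_obs) / gain            # seconds, per pixel
dt_mean = dt_img.mean()
dt_err = dt_img.std(ddof=1) / np.sqrt(dt_img.size)
```

Averaging over all pixels beats down the large per-pixel noise from the calculated bias, which is why the mean offset comes out so much tighter than the 10% per-pixel uncertainty would suggest.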
For this trial the requested exposure time and counts are extremely closely correlated, so the shutter was well-behaved.
The plot shows correlation between exposure times and counts for all pixels.
Does the above suggest a hybrid procedure – subtract the known bias taken from dark frames and fit the rest, keeping the slope as the gain?
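The hybrid fit would fix the intercept at the dark-frame bias and fit only the slope – a zero-intercept least squares, where the slope is simply Σ(t·y)/Σ(t²). A sketch with made-up data:

```python
import numpy as np

# Hybrid procedure sketch: subtract a known bias, then fit a line
# through the origin, keeping only the slope (the gain).
# Frame stack and values are illustrative, not real data.
rng = np.random.default_rng(2)
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])           # exposure times (s)
shape = (4, 4)
bias_known = 395.0                                 # from dark frames
true_gain = rng.normal(120.0, 3.0, shape)
frames = bias_known + true_gain * t[:, None, None] \
         + rng.normal(0.0, 2.0, (t.size,) + shape)

y = frames - bias_known                            # bias-subtracted counts
# Zero-intercept least squares, per pixel: gain = sum(t*y) / sum(t^2)
gain = (t[:, None, None] * y).sum(axis=0) / (t * t).sum()
flat = gain / gain.mean()                          # normalized flat field
```

Fixing the intercept removes one free parameter per pixel, so the slope – and hence the flat field – should come out with a smaller uncertainty than in the two-parameter fit.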
Note added later: The method depends on the source being time-invariant and spatially flat. The main interest right now is that the technique allows an estimation of the pixel-to-pixel uncertainty of the derived flat field.
This can be compared to other methods that depend on averaging of FF exposures.
Not sure about “spatially flat” comment.
Have collected all traditional FFs – now compare to these calculated ones.