Sunday, 20 January 2013

Nyquist Sampling Theory, Undersampling, Oversampling and Solar Astronomy

Much has been written about Nyquist sampling theory and the undersampling and oversampling of images from a general astro imaging perspective, normally from the point of view of planetary imaging; however, there is little specific information on this from a solar imaging perspective.  The purpose of this article is to demonstrate its relevance to solar imaging, clarifying how to tailor the focal length of the imaging system not only to the camera but also to the wavelength being observed.  In particular, it is hoped that it will help explain why, when imaging at CaK wavelengths, the results are often not as good as imagers may hope for.


In simple terms, the question we are trying to answer here is: what is the ideal focal length we need to image with, using a camera of a particular pixel size at a particular wavelength, to record the information such that none of it is 'lost' as a result of undersampling?  Undersampling occurs when the theoretical angular resolution of the sensor being used is worse than that of the telescope.  In most situations this is undesirable, as we are not recording all of the image data our telescopes are capable of delivering - we are wasting aperture, and in terms of solar astronomy, aperture is the real estate that costs the most to buy in the first place.

Nyquist Sampling Theorem

This is the key in terms of dictating the optimum minimum focal length needed with a particular scope and camera to ensure that images are not undersampled.  The Nyquist sampling theorem states that the sampling frequency of an image (in our case) must be twice that of the smallest feature we can record; in other words, the smallest feature must be imaged over 2 pixels on the CCD chip.  Now, theoretically, the smallest feature we can record is set by the resolution of the scope; however, as is usually the case with daytime imaging, the seeing conditions are not as we would like due to thermal turbulence, and as such it is likely to be the local seeing conditions that dictate the smallest feature we can record.


The theoretical resolution of a telescope of a particular aperture at a particular wavelength is given by the Rayleigh criterion:

alpha = 1.22 x (Lambda / D) x 206,265

where alpha is the theoretical resolution of the telescope in arc seconds, Lambda is the wavelength of light in mm (1 nm = 10^-6 mm) and D is the diameter of the telescope in mm.  However, from our perspective of solar astronomy this is where things start to differ from planetary imaging, as we image in very narrow bandwidths at opposite ends of the visible spectrum, and this has some important consequences.  If we consider a 100mm telescope imaging at hydrogen alpha wavelengths at 656.28nm, then the resolution of this optical system (seeing permitting) is 1.65" (arc seconds).  Now, if we use the same 100mm scope to observe at CaK wavelengths at 393.4nm, we find the resolving power is much improved at 0.99" (arc seconds).  Resolution is a function of the wavelength observed, and at shorter wavelengths we have improved resolution compared to longer wavelengths.  Remember what we said earlier: it is often the local seeing conditions that dictate the maximum resolution we can achieve rather than the telescope aperture, and we can see why CaK imaging is more difficult and more often spoiled by poor seeing than like-for-like imaging at Ha wavelengths.  However, this is not all that affects CaK imaging...
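As a quick sanity check of those figures, here is a short Python sketch of the Rayleigh formula above (the function name is my own):

```python
# Rayleigh criterion: resolution in arc seconds for a given
# wavelength (nm) and aperture (mm).  1 nm = 10^-6 mm, and
# there are 206,265 arc seconds in a radian.
def resolution_arcsec(wavelength_nm, aperture_mm):
    wavelength_mm = wavelength_nm * 1e-6
    return 1.22 * (wavelength_mm / aperture_mm) * 206265

# A 100mm scope at Ha (656.28nm) and at CaK (393.4nm):
print(round(resolution_arcsec(656.28, 100), 2))  # -> 1.65
print(round(resolution_arcsec(393.4, 100), 2))   # -> 0.99
```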

The Ideal Focal Length

So, according to Nyquist's sampling theorem, for a particular aperture, particular sized pixels on our camera and a particular wavelength, there is a minimum focal length we ideally should be working at to avoid undersampling.  The formula for this is as below:

F = (2000 x dpixel x D) / Lambda

where F is the desired focal length in mm, dpixel is the width of the pixel in microns, D is the diameter of the telescope's objective in mm, and Lambda is the wavelength in nm.
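Expressed as a couple of lines of Python (my own naming, same units as above), this works out as:

```python
# Minimum focal length (mm) to avoid undersampling, per Nyquist:
# pixel pitch in microns, aperture in mm, wavelength in nm.
def ideal_focal_length_mm(pixel_um, aperture_mm, wavelength_nm):
    return 2000 * pixel_um * aperture_mm / wavelength_nm

# Coronado Ha PST (40mm aperture) with a DMK41 (4.4 micron pixels):
print(round(ideal_focal_length_mm(4.4, 40, 656.28)))  # -> 536
```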

So, let's put some numbers in this and see what happens:

Take for example a Coronado Ha PST (40mm D, 400mm fl) with a DMK41 camera (4.4 micron pixel pitch): using the equation gives us an ideal focal length of 536mm - not too far off the native focal length of this scope, but a little image amplification would reveal a little more detail in the resultant image.

Now let us consider a Coronado CaK PST - same spec, same CCD.  This yields an optimum focal length of 894mm to avoid undersampling.  Using a 2x barlow in this instance would result in more detail in the final image, if the seeing allowed.  Don't get me wrong, I'm not saying imaging with this scope at its native focal length is not going to be effective - full disk images are great with this setup.  What I am saying is that at the native focal length some of the finer details are being lost, and these could be recovered if imaged using a 2x / 2.5x barlow.

For a 60mm Coronado SolarMax II scope with a 400mm fl and a DMK41, the ideal focal length to avoid undersampling is 804mm - so, again, a 2x barlow lens is needed to get the best out of this scope!

Let's look at an 80mm f6 refractor with a DMK51 and a Lunt CaK wedge: to avoid undersampling we need to be running at a 1789mm focal length (!!!) - compared to the native focal length of 480mm.  To get to the limit prescribed by Nyquist you would need to use a 3x / 4x barlow lens; at anything less than this you are wasting the resolution that the 80mm offers, and with CaK there are a lot of fine details.

Looking at some scopes in my own setup: the Tal 100mm refractor (1000mm fl), which I use for my PST mod and also for CaK imaging, together with my DMK31 (4.65 micron pixel pitch), yields an ideal focal length of 1417mm for Ha - interesting that I found using my 1.6x barlow gave excellent results at 1600mm...  However, at CaK wavelengths the ideal focal length works out at 2364mm; great in theory, but I know from experience there are only a couple of times a year I can use this setup in CaK at such a long focal length with the Televue 2.5x Powermate - however, when I can, the results are worth it!

If I consider my 118mm PST mod (1180mm fl) with the DMK31, then the theory tells us the optimum focal length to image at is 1672mm - very close to the ~1900mm fl that I find works best most often when using the 1.6x barlow.  At CaK, this scope would perform without undersampling at a whopping 2800mm fl!  I've only been able to image in CaK at this fl a couple of times in the last several years!

So, using the above formula and the specifics of your own system you should be able to work out the ideal focal length for you to avoid undersampling.

To Undersample, To Oversample?

OK, this theory is all good and well, but what about real world practice?  It's far from the end of the world if you do undersample - in fact this can often produce some of the most contrasty images.  The point is, if you image at or above the ideal focal length as prescribed by Nyquist's sampling theorem, then you are recording all the detail your scope and camera are theoretically capable of - assuming the seeing conditions allow.  The shorter wavelengths of CaK mean that not only are you more susceptible to the effects of poorer seeing, but, as per Nyquist, the amplification factor you must apply is also higher than at longer wavelengths - a double whammy!  Looked at from another perspective, though, undersampling increases the field of view on the chip and also decreases the contribution of noise from the chip to the resultant image.

Well, what about oversampling?  We haven't spoken of this yet.  In simple terms this is just cranking up the focal length and going hi-res - but what are the limits?  What is the optimum?  Sadly Nyquist doesn't tell us this; all it tells us is the minimum ideal focal length.  How much you oversample is going to depend upon your own observing circumstances.  In my situation, even at Ha wavelengths I can rarely use more than 2x / 3x the native focal length of my scopes due to poor urban seeing; however, I know from the images of others that in the right observing conditions, with excellent seeing and transparency, it is possible to image at 4x / 5x the native focal length quite successfully.  It is not wise to oversample more than necessary: in this case the angular resolution of the telescope is better than that of the CCD chip, which shows up as overly magnified images.  The sensitivity of the CCD chip is not being effectively utilised - the exposure time needed becomes progressively longer and noise levels become higher, decreasing the signal to noise ratio of the final image.


As with everything, solar imaging is all about compromises, and the point of this article is not to tell people not to undersample their images, rather to make them aware of the constraints that certain setups impose.  People like the convenience of imaging a solar full disk in one frame without having to mosaic, and that is understandable, but they must also be aware that the resultant image lacks the spatial resolution it could have had if it had been sampled at a longer focal length.  Then there is the poor seeing factor - if the seeing is bad, drop the focal length to hide the poor seeing in your images!  However, if you are looking to get the optimum detail from your setup then you need to be running at least at the focal length determined by Nyquist, if not exceeding it.  How much you exceed it, assuming perfect seeing, is in my opinion governed by the transparency of your observing site - the better the transparency, the more you can oversample.

However, cast your mind back to Nyquist's sampling theorem: it is a mathematical equation, and as such we can play around with the variables to increase our chances of successful imaging.
First of all, the easiest thing to alter is D - an imager can stop down the diameter of their telescope, and reducing this reduces the optimum focal length.  Secondly, an imager could choose a CCD camera with a smaller pixel pitch - for instance, a PGR Flea2 with the ICX445 chip has a pixel pitch of 3.75 microns (compared to, say, a DMK21 with a 5.6 micron pixel pitch) - this will also reduce the optimum focal length.  It would be interesting to see whether imagers with more than one camera of differing pixel pitches find the smaller pixel pitch camera gives better results, particularly at CaK wavelengths.  A combination of both of these variables is likely to achieve the best results in CaK.  The final variable we can change is the wavelength of the light we are observing - while this is quite clearly impossible at narrowband Ha and CaK wavelengths, white light solar is a different matter, and imaging at longer wavelengths can help to better match the pixel pitch of the camera with the focal length.  This approach definitely works, as demonstrated by several imagers with the 705nm TiO filter, or other long wavelength narrowband filters.
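As a rough illustration of how the aperture and pixel pitch variables trade off, here is a short Python sketch (my own function name; the 80mm CaK figure matches the earlier example to within rounding):

```python
# Minimum focal length (mm) to avoid undersampling, per Nyquist:
# pixel pitch in microns, aperture in mm, wavelength in nm.
def ideal_focal_length_mm(pixel_um, aperture_mm, wavelength_nm):
    return 2000 * pixel_um * aperture_mm / wavelength_nm

# 80mm scope at CaK (393.4nm) with 4.4 micron pixels:
print(round(ideal_focal_length_mm(4.4, 80, 393.4)))   # -> 1790
# Smaller 3.75 micron pixels (e.g. ICX445) bring this down:
print(round(ideal_focal_length_mm(3.75, 80, 393.4)))  # -> 1525
# ...and stopping the aperture down to 60mm helps further:
print(round(ideal_focal_length_mm(3.75, 60, 393.4)))  # -> 1144
```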
I hope you find this discussion helpful and that it gives an insight into the way you take your solar images with respect to your individual setups.  I would welcome comments or discussion on the issues it raises.