Planetary photography can be a lot of fun. Unlike deep-sky astrophotography, you don't need a dark observing site. You can shoot from downtown in the most light-polluted city if you want. You can shoot when the Moon is up. You can even shoot the Moon itself. Indeed, we can even shoot that blazing ball of fire and destroyer of the night, the Sun, as long as we use safe, proper solar filters. We can also shoot the brightest planets, such as Venus, in the daytime if we are especially careful. Planetary photography can be done with all kinds of camera lenses and telescopes of different focal lengths.

Types of Planetary Photography

There are several different types of planetary photography. Listed in increasing order of complexity, they are:

- Wide-angle scenics. These are basically low resolution, and all we need to shoot them is a simple camera on a fixed tripod.
- Single full-frame prime-focus shots. These are basically moderate resolution, taken through a telescope or a long-focal-length telephoto lens. They can be done on a fixed tripod or on a tracking telescope mount.
- High-resolution planetary images. For these we work at high magnification and record movies or Live View video for lucky imaging. This type of imaging typically requires a tracking mount.

Video and Movies

Note that in this book the terms video and movies are used interchangeably. In the old days movies were shot on film and video was shot on videotape; today both may be shot digitally. DSLR cameras record high-definition movies/video digitally, just as they do individual still images. The main concept for both is that they record many individual images, or frames, per second. When these frames, recorded at 24, 30, or even 60 frames per second, are played back at the same rate, they pass so quickly that our visual perception cannot distinguish the individual frames, and the result looks like smooth, continuous motion.

Live View

Most DSLR cameras manufactured since 2008 have a feature called "Live View", which presents a live image of whatever the sensor is seeing through the lens in near real time. The mirror in the DSLR must be flipped up, and the shutter opened, for Live View to work. The live image is sent to the LCD on the back of the camera as a video feed. That feed can also be output to a separate monitor, or sent to a computer where it can be viewed and recorded with special software; this is how we capture Live View video for high-resolution planetary photography. Live View can usually be enabled or disabled in a menu setting in the camera.

Lucky Imaging

We will use "lucky imaging" for high-resolution planetary photography. Lucky imaging means we shoot a lot of frames of video in the hope of getting lucky and capturing brief moments of good seeing in some of them. Seeing describes how the image is affected by turbulence in the Earth's atmosphere.
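To make that idea concrete, here is a minimal Python sketch of the "lucky" selection step. It is only an illustration, not the software used in this book: it reads a recorded Live View video, scores every frame for sharpness, and keeps the sharpest fraction. The file name jupiter_liveview.avi, the 10 percent keep fraction, and the variance-of-Laplacian sharpness metric are all illustrative assumptions; the sketch relies on the OpenCV (cv2) and NumPy libraries.

# Minimal sketch of lucky-frame selection (illustrative, not the book's workflow).
# Assumes a Live View recording saved as a video file readable by OpenCV.
import cv2
import numpy as np

def frame_sharpness(gray):
    # Variance of the Laplacian: a common sharpness/focus measure.
    # Moments of good seeing give noticeably higher values.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def best_frames(video_path, keep_fraction=0.10):
    cap = cv2.VideoCapture(video_path)
    frames, scores = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(gray)
        scores.append(frame_sharpness(gray))
    cap.release()
    # Keep only the sharpest fraction of frames for alignment and stacking.
    n_keep = max(1, int(len(frames) * keep_fraction))
    order = np.argsort(scores)[::-1][:n_keep]
    return [frames[i] for i in order]

# Example: from a 3000-frame capture, keep roughly the best 300 frames.
# lucky = best_frames("jupiter_liveview.avi")

Dedicated planetary stacking programs do this far more thoroughly, but the principle is the same: throw away the frames ruined by bad seeing and keep the lucky ones.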
Even with lucky imaging, good seeing is of critical importance for the absolute best high-resolution results. Other very important factors are observing location, optical quality, aperture, telescope acclimation, collimation, focus, resolution, sampling, and exposure. All will be discussed in depth in the following chapters.

Lucky imaging is usually done by shooting video because video records a high number of frames, or images, per second. This is called the frame rate and is measured in frames per second (fps). We may record hundreds or thousands of frames in a planetary video at a high frame rate, which means we can capture a lot of frames before the planet rotates enough to smear fine detail.

Lucky Image Processing

After many frames are captured in a video, the lucky imaging process uses special software to pick out the best frames, align them, and then stack them to improve the signal-to-noise ratio. Stacking basically means averaging a lot of images together. Some planetary image-processing programs even use multi-point alignment, where different features in the image are aligned separately and only the frames in which a given feature is sharp are stacked for that feature. Different features may therefore be built from different subsets of frames in the video. After the best frames are stacked, the resulting composite image is sharpened with sophisticated techniques such as wavelets or deconvolution to reveal fine detail. Further processing, such as contrast and color adjustments, is also usually applied to the image.
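The alignment, stacking, and sharpening steps can be sketched the same way. The fragment below is again only an illustration, using single-point alignment rather than the multi-point alignment described above: it takes the grayscale frames kept by the earlier sketch, shifts each one so the planet's brightness centroid sits at the image center, averages them, and then sharpens the stack with Richardson-Lucy deconvolution. The Gaussian point-spread function, its size, and the iteration count are assumptions that would need tuning for real data; the sketch relies on NumPy, SciPy, and scikit-image.

# Minimal sketch of single-point alignment, stacking, and deconvolution sharpening.
# Works on the grayscale frames returned by the selection sketch above.
import numpy as np
from scipy import ndimage
from skimage import restoration

def align_and_stack(frames):
    # Shift every frame so the planet's brightness centroid lands at the image
    # center, then average. Averaging N frames improves the signal-to-noise
    # ratio by roughly the square root of N.
    h, w = frames[0].shape
    aligned = []
    for f in frames:
        f = f.astype(np.float64)
        cy, cx = ndimage.center_of_mass(f)
        aligned.append(ndimage.shift(f, (h / 2.0 - cy, w / 2.0 - cx)))
    return np.mean(aligned, axis=0)

def sharpen(stack, psf_sigma=1.5, iterations=20):
    # Richardson-Lucy deconvolution with an assumed Gaussian blur kernel
    # standing in for the seeing-blurred point-spread function.
    psf = np.zeros((15, 15))
    psf[7, 7] = 1.0
    psf = ndimage.gaussian_filter(psf, psf_sigma)
    psf /= psf.sum()
    normalized = stack / stack.max()  # richardson_lucy expects values near 0..1
    return restoration.richardson_lucy(normalized, psf, iterations)

# Example pipeline: stacked = align_and_stack(lucky); sharp = sharpen(stacked)

Deconvolution is shown here simply because a standard implementation is readily available; wavelet sharpening, as used by many dedicated planetary programs, is applied to the same stacked image in the same place in the workflow.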