You’re in a field of flowers on a warm sunny day chasing and capturing beautiful butterflies for your brilliant collection. Each butterfly behaves differently and requires special techniques to summon it into your net. But wait! It’s not warm and sunny. In fact, it’s a little chilly at your workstation and you’re daydreaming again. You have a deadline and you need to somehow collect images for your brilliant project where only the best will do. And so, instead of capturing butterflies, you’re capturing images. And like collecting butterflies, capturing pictures is the critical first step in the process of creating beautiful images.
The Eyes Have It
Two types of devices are used to convert visible light into pixel information: the scanner and the digital camera. These devices are the eyes of the computer. They “see” color and interpret it into numerical information that the computer’s software uses to display and edit images. Learning to use a digital camera or a scanner properly is an important skill, and many variables enter into producing a quality image capture. In this article I’ll cover the basics of image capture methods to assure the best possible results.
There are actually three ways to capture images to your computer: scanning, digital photography, and screen capturing. Screen capturing simply copies the image seen on your monitor and saves it as a JPG or PNG with a few simple key commands. On a Mac, press Shift + Cmd + 3 to capture the entire screen, or press Shift + Cmd + 4 and then drag to define a specific area to capture.
In Windows, press the PrtSc key to copy the entire desktop, or press Alt + PrtSc to copy just the active window. Either way the capture lands on the clipboard, so paste it into any image editor to save it.
If you are on Windows 7 or later, open the Start menu and search for the built-in Snipping Tool. It works much like the Mac shortcuts, letting you either define an area and take a snapshot of it or capture a single window.
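If you need repeatable screen captures, the Mac shortcuts above have a scriptable counterpart: the macOS `screencapture` command-line utility, whose `-i` flag triggers the same interactive drag-to-select behavior. Here's a minimal Python sketch; the helper function name is my own invention, not part of any API:

```python
def capture_screen_macos(path, interactive=False):
    """Build the argument list for the macOS `screencapture` utility.

    With interactive=True, `-i` lets you drag to select a region,
    just like pressing Shift + Cmd + 4.
    """
    cmd = ["screencapture"]
    if interactive:
        cmd.append("-i")
    cmd.append(path)
    return cmd

capture_screen_macos("shot.png", interactive=True)
# -> ["screencapture", "-i", "shot.png"]
```

On a Mac, passing the result to `subprocess.run()` would actually take the shot and save it to the given path.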
A lot of hardware is available that captures image data in a form your graphics software can interpret. These technologies transform visible light from analog wavelengths into numerical levels that can be stored and manipulated on your computer. Most of the variables of image capturing involve image quality. The following factors determine the quality of a scanner or digital camera:
Optical Resolution: The maximum number of pixels per linear inch the device can create from information it gathers from a reflective image or film transparency.
Scanning Elements: The array of photosensitive detectors on either a Charge-Coupled Device (CCD) or Complementary Metal-Oxide Semiconductor (CMOS).
Interface: The software that controls the device’s behavior.
Color Modes: RGB, CMYK, Grayscale, and Lab determine how colors are represented.
Document Dimensions: The physical size of the image to be captured determined at the scanner’s interface.
Dynamic Range: How extensively the device can retain detail, especially in the highlight and shadow areas of an image. The higher the dynamic range, the more information the scanner can "see."
Speed: The amount of time it takes the device to capture an image.
Bit Depth: How much information the device assigns to each pixel.
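To make a few of these factors concrete, here is a short Python sketch (the function names are illustrative, not from any library). It computes the pixel dimensions an optical resolution yields for a document of a given physical size, the number of colors a bit depth allows, and the theoretical ceiling that bit depth places on dynamic range, using the standard relationship D = log10(2^bits):

```python
import math

def pixel_dimensions(width_in, height_in, dpi):
    """Pixels produced when scanning a document of the given size in inches."""
    return round(width_in * dpi), round(height_in * dpi)

def total_colors(bits_per_channel, channels=3):
    """Number of distinct colors a device can record."""
    return (2 ** bits_per_channel) ** channels

def theoretical_dynamic_range(bits_per_channel):
    """Upper bound on dynamic range (in density units) for a given bit depth."""
    return math.log10(2 ** bits_per_channel)

pixel_dimensions(4, 6, 300)                   # a 4" x 6" print at 300 dpi -> (1200, 1800)
total_colors(8)                               # 24-bit RGB color -> 16,777,216
round(theoretical_dynamic_range(12), 1)       # 12 bits per channel -> 3.6
```

The last line shows why high-bit scanners matter: 8 bits per channel caps dynamic range at about 2.4, while 12 bits raises the ceiling to roughly 3.6, in the range quoted for good transparency scanners.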
A scanner measures the color content and tonal variations of opaque images or transparencies and converts the data it collects into pixels. Each pixel is assigned a value for its red, green and blue components or, if the image is black-and-white, a grayscale value.
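The grayscale case can be sketched in a few lines of Python. One common weighting (the Rec. 601 luma coefficients; other weightings exist) combines the red, green, and blue components according to how bright each appears to the eye, with green contributing the most:

```python
def rgb_to_gray(r, g, b):
    """Collapse an RGB pixel to a single grayscale value (0-255)
    using the Rec. 601 luma weights."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

rgb_to_gray(255, 255, 255)   # pure white -> 255
rgb_to_gray(0, 128, 0)       # medium green -> 75
```

Note that a pure green pixel comes out brighter than an equally intense blue one, which matches how we actually perceive those colors.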
A flatbed scanner (see Figure 1) bounces light off opaque art and directs the bounced light onto a series of photo sensors that determine the strength of the red, green, and blue components of light. The scanner’s software converts color and tonal variations into numerical values. The quality of a flatbed scanner is determined in part by its maximum optical resolution. The more pixels the scanner can produce, the more detail the image will have. When choosing a scanner, be aware of the difference between its optical and interpolated resolution. Optical resolution is the amount of data that the scanner can collect by directly “seeing.” Interpolated resolution uses software to increase the resolution or size by manufacturing pixels. Interpolation may produce undesirable softening and poor contrast. Dynamic range is also a critical factor. How well does the scanner interpret highlights and shadows?
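Interpolation's softening effect is easy to demonstrate. In this illustrative Python sketch, a hard black-to-white edge is upsampled by linear interpolation; every manufactured pixel is just a weighted average of its neighbors, so the crisp edge turns into a gradual ramp with no new detail:

```python
def upsample_linear(row, factor):
    """Resample a 1-D row of pixel values by linear interpolation.
    Manufactured pixels are weighted averages of existing neighbors."""
    n = len(row)
    out = []
    for i in range((n - 1) * factor + 1):
        pos = i / factor
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(round(row[lo] * (1 - frac) + row[hi] * frac))
    return out

# A hard black-to-white edge becomes a soft ramp after interpolation:
upsample_linear([0, 255], 4)   # -> [0, 64, 128, 191, 255]
```

This is why resampling "up" cannot recover detail the optics never captured: the in-between values are invented, not seen.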
The scanner’s interface (see Figure 2) usually provides several image correction features, but for superior control, I find that it’s better to perform these functions in dedicated image editing software such as Photoshop or Lightroom.
Instead of bouncing light off a piece of opaque art, transparency scanners (see Figure 3) pass light through the emulsions on a piece of negative film or a color slide. In general, the quality of this transmitted light is better and less distorted because it is stronger than reflected light. A transparency scanner’s dynamic range determines its ability to distinguish color variations and highlight and shadow detail. If you’re going to purchase a transparency scanner, look for one with a high dynamic range (3.0–4.0) and a high optical resolution (2700–8000 dpi). Transparency scanners are available for 35mm, 2.25″ × 2.25″, and 4″ × 5″ film.
Transparency adapters are available for flatbed scanners (see Figure 4), but these devices don’t generally produce the quality of a dedicated transparency scanner. They are useful, however, and a lot less expensive.
Every digital camera (still or video) contains a two-dimensional array of detectors that convert the light from an entire image into pixels. Light enters the camera through a lens and is focused onto the photo-sensitive detectors. Digital cameras use either of two types of detectors: charge-coupled devices (CCDs) and complementary metal-oxide semiconductor (CMOS) chips. The size and quality of the detector determines the amount of data a digital camera can collect.
Digital cameras have become ubiquitous because almost anyone who has a cell phone has a fair quality camera in their shirt pocket, and the manufacturers keep upgrading the camera with every new release. Case in point: the recently released Apple iPhone 7 (see Figure 5), available in two sizes, has dual lenses, a 28 mm wide-angle lens and a 56 mm portrait lens. Professional DSLR (digital single lens reflex) cameras, like the Canon EOS 5D Mark IV (see Figure 6), have an interchangeable lens system that makes superb images of spectacular size in low light, and they can shoot eye-popping HDR videos with sound. The question remains, how does the cell phone stack up against the professional DSLR? As good as the iPhone 7 is, the cell phone still has a long way to go before it can match the quality of a high-end DSLR. Its main advantage is that it is portable and you usually have it with you wherever you go.
No matter which technology you use to capture your images, you should be aware of the kinds of problems you may encounter. Some of these problems are unavoidable whereas others can be reduced with certain preemptive measures or image editing.
- Noise—Randomly colored pixels, or noise, may inadvertently be added to and distributed across a digital picture. Noise is primarily a result of capturing images in insufficient light, and it appears most pronounced in the blue channel (see Figure 7). Noise can be eliminated or subdued with the noise-reduction controls in the Camera Raw interface or with the noise-reduction tools in Photoshop and Lightroom.
- Artifacts—Unintentional image elements are sometimes produced by an imaging device or by an aggressive compression scheme such as low-quality JPEG (see Figure 8). Artifacts can also result from dirty optics. Clean your lenses and dust both your scanner bed and lid to help avoid artifacts in your pictures. Be sure originals are clean of dust, fingerprints, and other surface marks.
- Resolution—The resolution of a digitizing device is measured by its pixel count. The more pixels, the better the potential image quality. In general, when capturing an image, use a higher resolution than you will ultimately need. You can always reduce the resolution to a more manageable picture size in your image editing software. Be cautious about increasing resolution or resampling “up.” When pixels are added by interpolation, the image may lose its contrast and sharpness.
- Bit Depth—Bit depth refers to the capability of your device to capture color information. The more bits a capture device allocates to each pixel, the more colors can be produced. The most common color depth, 8 bits per channel (or 24-bit RGB color), can produce 256 shades of red, green, or blue for a total of 16,777,216 colors. “High bit” images consisting of 36-bit color (three channels with 12 bits per channel) and 48-bit color (three 16-bit channels) can produce billions or even trillions of color combinations at the expense of much greater file sizes.
- Moiré Patterns—These are optical anomalies produced when one pattern is imposed over another. They usually appear when scanning preprinted halftones such as pictures from a book or magazine, or optical repeating patterns that produce interference with the screen’s matrix (see Figure 9). Editing moiré patterns can be problematic because they can vary significantly from image to image depending on the scan and halftone resolutions and the screen angles. You can avoid them by scanning only continuous-tone images or by using the scanner software’s descreening function (see Figure 10). You can sometimes reduce moiré patterns by changing the angle of the art on the scanner bed or eliminating moiré noise from the most pronounced channel (usually the blue).
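As a rough illustration of how noise reduction works (the tools in Photoshop and Lightroom are far more sophisticated), a median filter replaces each pixel with the median of its neighborhood. Isolated "hot" pixels get discarded, while genuine edges survive better than they would under simple averaging. A one-dimensional Python sketch:

```python
def median_filter_row(pixels, radius=1):
    """Replace each pixel in a 1-D row with the median of its neighborhood.
    Isolated outliers (noise) vanish; smooth regions are left mostly intact."""
    out = []
    n = len(pixels)
    for i in range(n):
        window = pixels[max(0, i - radius): i + radius + 1]
        out.append(sorted(window)[len(window) // 2])
    return out

noisy = [10, 12, 250, 11, 13]   # one hot pixel from sensor noise
median_filter_row(noisy)        # -> [12, 12, 12, 13, 13]
```

The 250 outlier disappears entirely because it is never the middle value of any three-pixel window, which is exactly the behavior you want when subduing random sensor noise.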
Capture from Anywhere
Now that you’ve read this article you can capture an image from anywhere—from home, the field, or the monitor. Image capture, as you have seen, is the critical first step of the image editing workflow. A properly captured image will save you a lot of editing time and make all the difference to your final output.