Geog 296: Contemporary Geographic Techniques

Exercise 7: How Satellite Imagery Works

Now let's look at a few more details about how satellite imagery works.  In addition to understanding energy bands, you also need to know how objects reflect energy, how satellites record energy data, and how the data are assembled back on Earth into picture-like images.

Objects on Earth Vary in How Much Energy They Reflect

[Figure: The remote sensing process]

The Sun sheds plenty of energy on Earth in the visible and infrared bands.  Satellites record this energy after it reflects off the surface of Earth and bounces back toward space.  Objects on Earth, such as forests, water, pavement, and snow, all reflect different amounts of energy.  It's these differences in reflectivity that enable us to identify objects through remote sensing.  You can understand this idea if you think about how you recognize objects from the air: partly it's the differences in color or brightness, as we saw when we discussed interpretation of air photos.  Water is darker than snow or concrete, for instance.  Interpretation of remotely sensed images relies largely on these differences in reflection in the bands of energy that the satellite records.

Groups of objects on Earth have typical reflections of energy that help identify the objects.  For example, water reflects very little visible or infrared energy.  Snow reflects energy strongly, which is why it appears white (the combination of all visible light wavelengths).  Vegetation is an interesting case: healthy vegetation absorbs most visible light but strongly reflects infrared.  In fact, any object with a strong infrared signal and a weak red visible signal is almost certainly vegetation.  Hence objects can be identified to some degree based on their "spectral signature," or combination of reflectances in various bands.  However, we usually cannot get too specific in our identification based on reflectance.  For example, it's nearly impossible to distinguish a redwood from a Douglas-fir based on satellite data -- their signals are too similar.  For trees, at best we can usually distinguish deciduous from evergreen forest.
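To make the idea of a spectral signature concrete, here is a minimal sketch in Python.  The reflectance numbers are invented for illustration, not measured values; the sketch simply labels a pixel with whichever reference signature its band readings are closest to.

    # Minimal spectral-signature matcher.  Reflectances are fractions of
    # incoming energy in each band; all values here are illustrative.
    SIGNATURES = {
        #             green  red   near-infrared
        "water":      (0.04, 0.03, 0.01),
        "snow":       (0.90, 0.88, 0.75),
        "vegetation": (0.10, 0.06, 0.50),  # dark in red, bright in infrared
    }

    def identify(pixel):
        """Return the cover type whose signature is nearest to the pixel."""
        def distance(sig):
            return sum((p - s) ** 2 for p, s in zip(pixel, sig))
        return min(SIGNATURES, key=lambda name: distance(SIGNATURES[name]))

    print(identify((0.12, 0.05, 0.55)))  # -> vegetation

Note that the strong infrared reading combined with the weak red reading is what pulls the example pixel toward vegetation, just as described above.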

The graph below illustrates how some objects might reflect energy that strikes them.  For example, the vegetation (green) line shows that vegetation reflects little of the visible light it receives but reflects a lot of the near infrared.  Water, on the other hand, reflects little visible light and almost none of the infrared it receives.

[Figure: Spectral responses of some surfaces]

Resolution: How Much Detail the Satellite Sees

Satellites may carry more than one instrument, for sensing Earth and for other tasks.  For instance, one weather satellite also carries a special instrument for recording multispectral data.  We therefore often distinguish between a satellite and the instrument, or sensor, it carries.

[Figure: Comparative pixel sizes]

The satellite's sensor observes a small portion of Earth at a time.  This small area is usually called a pixel.  The pixel size represents a squarish area that is, for example, 30 meters (about 100 feet) on a side.  The pixel size varies depending on the satellite sensor; pixel sizes on sensors so far have ranged from 5 meters to 1 kilometer.  The smaller the pixel, the more detail the satellite "sees."  Satellite data available to civilians cannot resolve extremely small objects, because everything within the pixel is sensed together as one energy signal.  You cannot read license plates, for example.  Even military satellites probably cannot do this, though the military doesn't say much about the capabilities of its satellites.
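To see why small objects disappear, consider how everything inside a pixel is averaged into one reading.  The following sketch (with made-up reflectance values) shows a small bright object vanishing inside a single 30-meter pixel:

    # Everything inside a pixel is sensed as one value, so a small bright
    # object is averaged away.  All numbers here are invented.
    import numpy as np

    ground = np.full((30, 30), 0.10)   # 30 m x 30 m of dark pavement, 1 m cells
    ground[14:16, 14:16] = 0.90        # a 2 m x 2 m bright object (a car, say)

    pixel_reading = ground.mean()      # the single value the sensor records
    print(round(pixel_reading, 3))     # 0.104 -- the object barely registers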

For each pixel, the satellite records the amount of energy in one or more bands, depending on the design of the sensor.  So if the pixel size was 20 meters, the satellite might record one reading of the amount of blue light, one of the amount of green, one of red, and one each of two different infrared bands, for a total of five readings or brightnesses for that one pixel.
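One way to picture these data is as a three-dimensional array: rows by columns by bands, with one brightness value per band for every pixel.  A tiny sketch in Python, with the values invented:

    # A multispectral scene as rows x columns x bands (values invented).
    import numpy as np

    rows, cols, bands = 4, 4, 5        # tiny scene; 5 bands as in the example
    scene = np.random.randint(0, 256, size=(rows, cols, bands), dtype=np.uint8)

    # The five readings (blue, green, red, and two infrared bands) for the
    # pixel in row 2, column 3:
    print(scene[2, 3, :])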

Satellites then must sweep across an entire area, taking readings in multiple bands for each pixel area.  Some satellites have a mirror that goes back and forth, east to west, as they orbit north-to-south.  Others have a long bar of sensors that reads a whole row of east-west pixels at once.  Whatever the approach, the sensor must look at a huge number of pixels in a short amount of time.

Scenes: The Digital Equivalent of the Photo

Satellite images are not photographs, since they don't use film.  Even remote-sensing scientists will occasionally call their products "satellite pictures," but technically they should be called satellite images.  The images are composed of millions of pixels that the satellite scanned in rows and columns.

[Figure: Landsat scene size over the S.F. Bay Area]

The satellite gathers a group of rows into a computer file.  This file covers an area of Earth known as a scene.  The scene size varies depending on the sensor.  Sensors on Landsat, the US satellite series, have scenes about 185 km (115 miles) on each side.  The illustration shows the coverage of a Landsat scene for the San Francisco Bay Area.  Scenes for other commonly used sensors range in size from 60 km to 2200 km.  A Landsat scene can have over 6000 rows and columns of pixels.
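The row and column counts follow directly from the pixel size: 185 km at 30 m per pixel is about 6,170 pixels on a side.  A back-of-the-envelope calculation in Python, assuming 30 m pixels, the seven bands of the Landsat Thematic Mapper, and one byte per band per pixel:

    # Rough size of one Landsat scene (assumptions noted above).
    scene_km = 185
    pixel_m = 30
    bands = 7                               # Landsat Thematic Mapper bands

    pixels_per_side = scene_km * 1000 // pixel_m
    total_bytes = pixels_per_side ** 2 * bands
    print(pixels_per_side)                  # 6166 rows and columns
    print(total_bytes // 10**6, "MB")       # about 266 MB, uncompressed

Files in the hundreds of megabytes help explain why imagery is usually distributed on tape or CD-ROM, as noted below.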

You usually purchase satellite imagery by the scene.  Scenes can cost anywhere from $50 for lower-resolution, public-domain data to over $5000 for higher-quality scenes.  Some companies also sell smaller areas, often based on 7.5-minute USGS quad areas.  A small amount of satellite imagery is available on the Internet, but the images are such huge files that it is difficult to offer them on-line.  Tapes and CD-ROMs remain the usual way to obtain images.

Image Processing: Getting Information Out of the Data

Using satellite imagery often requires not only the expensive imagery itself, but also sophisticated equipment and software, as well as trained personnel.  Let's look briefly at how images are handled by the computer and the user.

Image processing is the activity of working with images on a computer.  It applies both to satellite images and to other kinds of images.  Although satellite imagery is usually handled by special software, many of the same techniques appear in general imaging software, such as Adobe Photoshop or Corel Photo-Paint.  Software packages that specialize in satellite images include Erdas Imagine, PCI Easi/Pace, ER Mapper, and Idrisi.  The first three are more expensive and are used in commercial applications.  Idrisi, which is available in the GIS Lab, is less expensive but can do many of the same operations, though it is slightly less sophisticated.

[Figure: False-color composite image of San Francisco]

A common output from image processing is a photo-like image for viewing or printing.  A band of imagery can be viewed by itself, usually by assigning white to the pixels with the highest reflectance, black to the pixels with the lowest reflectance, and shades of gray in between.  These images can be difficult for an untrained eye to interpret.  If a red band is portrayed this way, the whitest part of the image actually shows where there was strong reflectance of red wavelengths.
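The usual technique for this is a linear contrast stretch: the band's lowest value is mapped to black (0), its highest to white (255), and everything else is scaled in between.  A minimal sketch, with the band data invented:

    # Linear stretch of a single band to 0-255 gray levels (data invented).
    import numpy as np

    band = np.random.random((100, 100))          # raw reflectance values, 0-1
    lo, hi = band.min(), band.max()
    gray = ((band - lo) / (hi - lo) * 255).astype(np.uint8)
    # gray is 0 (black) where reflectance was lowest, 255 (white) where highest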

Another common output is a false-color composite (sometimes abbreviated FCC).  Although satellite images are not photos, a color image can be created that resembles a color photo.  Natural-color composites are rare, partly because atmospheric scattering degrades the blue band before it reaches the satellite.  Instead, an image is often created that resembles a color-infrared (CIR) photograph: the infrared band is displayed as red, the red band as green, and the green band as blue.  These images should be interpreted much like the CIR photos we discussed earlier.
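In code, producing a false-color composite is just a matter of stacking bands into display channels in the right order.  A sketch, with the three bands invented rather than read from a real scene:

    # False-color composite: infrared -> red, red -> green, green -> blue.
    import numpy as np

    shape = (100, 100)
    nir   = np.random.randint(0, 256, shape, dtype=np.uint8)  # infrared band
    red   = np.random.randint(0, 256, shape, dtype=np.uint8)  # red band
    green = np.random.randint(0, 256, shape, dtype=np.uint8)  # green band

    fcc = np.dstack([nir, red, green])  # display channels in R, G, B order
    # Vegetation, bright in infrared, now appears red -- as in a CIR photo.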

[Figure: Classified satellite image]

Image processing goes far beyond simple image portrayal.  The computer can be used to extract information about the area recorded in the images that cannot be seen by eye.  The most common procedure here is image classification, which determines the land cover of each pixel in a scene.  The classification usually identifies land cover types such as water, forest, grassland, urbanized area, and snow.  The example at right classifies an image of an area in Massachusetts into land cover types.
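A real classifier compares each pixel's combination of band values against known spectral signatures, often derived from training samples, but the flavor of the procedure can be shown with simple per-pixel rules.  A minimal sketch, with thresholds invented for illustration:

    # Toy land-cover classification with per-pixel rules (thresholds invented).
    import numpy as np

    # Assume red and nir hold reflectances scaled 0-1 for every pixel;
    # here they are random stand-ins for real band data.
    red = np.random.random((50, 50))
    nir = np.random.random((50, 50))

    cover = np.full(red.shape, "other", dtype=object)
    cover[(red < 0.05) & (nir < 0.05)] = "water"       # dark in both bands
    cover[(red < 0.15) & (nir > 0.40)] = "vegetation"  # dark red, bright infrared
    cover[(red > 0.70) & (nir > 0.70)] = "snow"        # bright in both bands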

As we saw earlier, there are limits to the detail we can achieve in identifying objects, both because of the pixel size and because objects vary in how they reflect energy.  As another example, consider how the reflection signal of an oak forest in winter, when the trees are bare, differs from its signal in summer, when the leaves are present.

Other common tasks in image processing include:

Of course, we probably wouldn't do this processing unless it told us some useful information.  Interpretation of satellite imagery has been applied to many fields.  Some areas of application include:

Questions on this Page

8. Based on the graph of reflectances above, how much energy does bare soil reflect in the blue, green, red, near infrared, and mid infrared bands?  What color would you expect bare soil to have on a false-color composite? (Remember, infrared is given red color, red energy gets green color, and green energy gets blue color.)

9. Refer to the false-color composite image of San Francisco above.  What is the general type of land cover for areas with these colors:  (a) red; (b) white; and (c) blue?


Bryan Baker, Sonoma State University, bryan.baker@sonoma.edu
Updated 17 February 1999