Jason Wolfe

Jason has worked with Exelis for 10 years as a Senior Technical Writer. He works with the Engineering group to document new features in the ENVI Help and to improve the user experience through software validation. His favorite role is producing tutorials and demonstrations that showcase ENVI’s image analysis workflows. He has M.S. and B.S. degrees in Geography and 20 years of experience in remote sensing. In the past, he has worked with the NASA EOSDIS community and presented posters at scientific conferences.




Using Attributes to Improve Image Classification Accuracy

Author: Jason Wolfe

In this article I will show an example of how you can improve the accuracy of a supervised classification by considering different attributes.

Supervised classification typically involves a multi-band image and ground-truth data for training. The classification is thus restricted to spectral information only; for example, radiance or reflectance values in various wavelengths. However, adding different types of data to the source image gives a classifier more information to consider when assigning pixels to classes.

Attributes are unique characteristics that can help distinguish between different objects in an image. Examples include elevation, texture, and saturation. In ENVI you can create a georeferenced layer-stack image where each band is a separate attribute. Then you can classify the attribute image using training data.

Creating an attribute image is an example of data fusion in remote sensing. This is the process of combining data from multiple sources to produce a dataset that contains more detailed information than each of the individual sources.

Think about what attributes will best distinguish between the different classes. Spectral indices are easy to create and can help identify different features. Vegetation indices are a good addition to an attribute image if different vegetation types will be classified.
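In ENVI, spectral indices such as NDVI come from the Spectral Indices tool. As a conceptual sketch outside ENVI, the same band math can be expressed in Python with NumPy (the array values below are hypothetical reflectances):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    `eps` guards against division by zero over dark pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance bands: the top row mimics vegetation
# (high NIR, low red); the bottom row mimics bare soil.
nir = np.array([[0.5, 0.5], [0.1, 0.1]])
red = np.array([[0.1, 0.1], [0.1, 0.1]])
print(ndvi(nir, red))  # vegetation pixels approach 0.67; soil pixels near 0
```

An NDVI layer like this can be added as one band of the attribute image.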

Here are some other attributes to experiment with:

  • Texture measures (occurrence and co-occurrence)
  • Synthetic aperture radar (SAR) data
  • Color transforms (hue, saturation, intensity, lightness)

Including elevation data with your source imagery helps to identify features with noticeable height such as buildings and trees. If point clouds are available for the study area, you can use ENVI LiDAR to create digital surface model (DSM) and digital elevation model (DEM) images. Use Band Math to subtract the DEM from the DSM to create a height image, where pixels represent absolute heights in meters. In the following example, the brightest pixels represent trees. You can also see building outlines.
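ENVI's Band Math performs this DSM minus DEM subtraction directly (for example, `b1 - b2`). As a conceptual sketch in Python with NumPy, using hypothetical elevation values in meters:

```python
import numpy as np

# Hypothetical co-registered DSM and DEM arrays (meters).
dsm = np.array([[105.0, 112.0], [101.0, 100.5]])  # surface: treetops, rooftops
dem = np.array([[100.0, 100.0], [100.0, 100.0]])  # bare-earth terrain

height = dsm - dem       # pixels now hold heights above ground, in meters
height[height < 0] = 0   # clamp any small negative artifacts from interpolation
print(height)            # the brightest (largest) pixels are the tallest objects
```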

Example height image at 1-meter spatial resolution

Locating coincident point cloud data for your study area (preferably around the same date) can be a challenge. Luckily I found the perfect set of coincident data for a classification experiment.


The National Ecological Observatory Network (NEON) provides field measurements and airborne remote sensing datasets for terrestrial and aquatic study sites across the United States. Their Airborne Observation Platform (AOP) carries an imaging spectrometer with 428 narrow spectral bands extending from 380 to 2510 nanometers with a spectral sampling of 5 nanometers. Also onboard are a full-waveform LiDAR sensor and a high-resolution red/green/blue (RGB) camera. For a given observation site, the hyperspectral data, LiDAR data, and high-resolution orthophotography are available for the same sampling period.

I acquired a sample of NEON data near Grand Junction, Colorado from July 2013. My goal was to create an urban land-use classification map using an attribute image and training data. For experimentation and to reduce processing time, I only extracted the RGB and near-infrared bands from the hyperspectral dataset and created a multispectral image with these bands. I used ENVI LiDAR to extract a DEM and DSM from the point clouds. Then I created a height image whose pixels lined up exactly with the multispectral image.

I created an Enhanced Vegetation Index (EVI) image using the Spectral Indices tool. Finally, I combined the RGB/NIR images, the relative height image, and the EVI image into a single attribute image using the Layer Stacking tool.
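In ENVI these steps use the Spectral Indices and Layer Stacking tools. As a conceptual sketch in Python with NumPy, using the standard EVI formula (G=2.5, C1=6, C2=7.5, L=1) and hypothetical, co-registered bands:

```python
import numpy as np

def evi(nir, red, blue):
    """Enhanced Vegetation Index with the standard coefficients:
    EVI = 2.5 * (NIR - Red) / (NIR + 6*Red - 7.5*Blue + 1)."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Hypothetical co-registered bands, all the same shape (reflectance, 0-1).
shape = (4, 4)
blue   = np.full(shape, 0.05)
red    = np.full(shape, 0.10)
nir    = np.full(shape, 0.40)
height = np.full(shape, 2.0)   # relative height layer from the DSM - DEM step

# Layer stacking: each attribute becomes one band of a (bands, rows, cols) cube.
attribute_image = np.stack([blue, red, nir, evi(nir, red, blue), height])
print(attribute_image.shape)   # (5, 4, 4)
```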

Next, I used the Region of Interest (ROI) tool to collect samples of pixels from the multispectral image that I knew represented five dominant classes in the scene: Water, Asphalt, Concrete, Grass, and Trees. I used a NEON orthophoto to help verify the different land-cover types.

I ran seven different supervised classifiers with the multispectral image, then again with the attribute image. Here are some examples of Maximum Likelihood classifier results:

Maximum Likelihood classification result from the NEON multispectral image

Notice how the classifier assigned some Building pixels to Asphalt throughout the image. 

Here is an improved result using the attribute image. Those pixels are correctly classified as Building now.

Maximum Likelihood classification result using the attribute image

A confusion matrix and accuracy metrics can help verify the accuracy of the classification.

Confusion matrix calculated from the attribute image classification

The following table shows how the Overall Accuracy value is higher with the attribute image when using different supervised classifiers:

  Classifier                        Multispectral image (%)   Attribute image (%)
  Mahalanobis Distance              72.50                     83.80
  Minimum Distance                  57.69                     95.22
  Maximum Likelihood                91.30                     98.91
  Parallelepiped                    57.18                     95.80
  Spectral Angle Mapper             54.97                     61.79
  Spectral Information Divergence   62.10                     66.93
  Support Vector Machine            85.03                     99.16
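Overall Accuracy is the ratio of correctly classified pixels (the diagonal of the confusion matrix) to the total number of reference pixels. A minimal sketch in Python with NumPy, using a hypothetical three-class confusion matrix:

```python
import numpy as np

# Hypothetical confusion matrix: rows = reference (ground truth) classes,
# columns = predicted classes. Diagonal entries are correctly classified pixels.
cm = np.array([
    [50,  2,  0],
    [ 3, 40,  5],
    [ 0,  4, 46],
])

overall_accuracy = np.trace(cm) / cm.sum() * 100.0
print(f"Overall Accuracy: {overall_accuracy:.2f}%")  # 136 of 150 pixels correct
```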

The accuracy of a supervised classification depends on the quality of your training data as well as a good selection of attributes. In some cases, too many attributes added to a multispectral image can make the classification result worse, so you should experiment with what works best for your study area. Also, some classifiers work better than others when considering different spatial and spectral attributes. Finally, you may need to normalize the different data layers in the attribute image if their pixel values have a wide variation.
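Normalization matters because reflectance (roughly 0 to 1) and height (tens of meters) live on very different scales, and distance-based classifiers would otherwise be dominated by the layer with the widest range. A minimal min-max scaling sketch in Python with NumPy, using hypothetical layer values:

```python
import numpy as np

def minmax_scale(band):
    """Rescale one attribute layer to the 0-1 range."""
    band = band.astype(np.float64)
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)

# Hypothetical layers on very different scales.
reflectance = np.array([[0.1, 0.4], [0.2, 0.3]])   # ~0-1
height_m    = np.array([[0.0, 30.0], [15.0, 5.0]])  # ~0-30 meters

stacked = np.stack([minmax_scale(reflectance), minmax_scale(height_m)])
print(stacked.min(), stacked.max())  # both layers now span 0 to 1
```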

New tutorials will be available in ENVI 5.4 for creating attribute images and for defining training data for supervised classification.


National Ecological Observatory Network. 2016. Data accessed on 28 July 2016. Available on-line at http://www.neonscience.org from National Ecological Observatory Network, Boulder, CO, USA.






Creating Image Tiles in ENVI

Author: Jason Wolfe

I want to preview a new feature that will be available in the upcoming ENVI 6.0 release: a series of “Dice Raster” tools that can separate images into multiple tiles.

You will have four different options for specifying how to separate (or dice) images. The first is to indicate how many tiles to create in the X and Y directions. For example, I diced the above image into four tiles in the X direction and three tiles in the Y direction, for a total of 12 tiles.

Another option is to create tiles based on linear distance. In this case, the image must be georeferenced to a standard or RPC map projection and you must know the units associated with the projection. The following example shows an image georeferenced to a UTM Zone 13N WGS-84 projection, split into tiles that are 1000 x 1000 meters. 

In most cases the tiles in the last row and column will be smaller than the specified distance, as this example shows.

Another option is to dice an image by a given number of pixels in the X and Y directions. This is similar to the tile-distance option.
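The dicing logic itself is straightforward. As a conceptual sketch outside ENVI, here is pixel-based tiling in Python with NumPy; note how the last row and column of tiles come out smaller when the image dimensions are not an even multiple of the tile size:

```python
import numpy as np

def dice_by_pixels(image, tile_rows, tile_cols):
    """Split a 2-D array into tiles of at most tile_rows x tile_cols pixels.

    Tiles in the last row/column may be smaller when the image size is not
    an even multiple of the tile size.
    """
    rows, cols = image.shape
    tiles = []
    for r in range(0, rows, tile_rows):
        for c in range(0, cols, tile_cols):
            tiles.append(image[r:r + tile_rows, c:c + tile_cols])
    return tiles

image = np.arange(25).reshape(5, 5)   # toy 5x5 "raster"
tiles = dice_by_pixels(image, 2, 2)
print(len(tiles))                     # 9 tiles (a 3 x 3 grid)
print(tiles[-1].shape)                # (1, 1): the last tile is smaller
```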

Finally, you can create image tiles based on the spatial extent of vector records in a shapefile. Here is an example of how you might use this feature. Let’s say you have a Landsat scene and you want to separate it into tiles that correspond to USGS 7.5-minute topographic quadrangles. This screen capture shows a Landsat TM scene of the Grand Canyon along with a shapefile of Arizona quadrangle map boundaries (available from Data.gov).

Since there are so many polygons in this shapefile, I created a vector subset of only the records that overlap the Landsat image. To do this, open the Attribute Viewer, then click in the display to select the records that you want to keep. The Attribute Viewer highlights the selected records.

Then choose the File > Save Selected Records to a new Shapefile menu option. Now we have a shapefile that only contains quadrangle map boundaries around the Landsat scene.

Now we’re ready to create the tiles. The Dice Raster tools are available from the Toolbox under Raster Management > Raster Dicer. Use the Dice Raster by Vector tool to separate the Landsat image into separate tiles based on the quadrangle boundaries.

Since the shapefile had 270 polygon records, the Dice Raster tool will create 270 separate images. Here is one of the resulting image tiles that corresponds to the House Rock Springs Quadrangle boundary, displayed with map grid line annotations (also available in ENVI 6.0):

You can also use the Dice Raster tools, for example, to split an image of a large study area into smaller areas of analysis. They are just one of many versatile and easy-to-use features that will be available in the next release of ENVI.





Remote Sensing of Light Pollution Near U.S. National Parks

Author: Jason Wolfe

With the 100th anniversary of the U.S. National Park Service in 2016, I have been wanting to get out and visit more national parks this year. A few weeks ago I spent some time in Great Basin National Park. Its remote location provided the bluest sky I had ever seen and a rare chance to see the Milky Way and thousands of bright stars.

Sadly, the ability to view clear night skies in the way that our ancestors did is becoming a rare commodity as air and light pollution increase. In this article I will talk briefly about light pollution specifically, and how we can use remote sensing to see its effects on a large geographic scale. I will show how I used ENVI to map national park boundaries relative to nearby light sources detected from satellite imagery.

National Park Service photo / Jacob W. Frank

Light pollution is an excessive brightening of the night sky caused by artificial light that emits upwards or sideways. Air pollution particles also increase the scattering of light at night. If you live in a big city like I do, it is difficult to pick out even a few stars at night. Throughout the last few decades, light pollution has encroached upon remote lands, including national parks. As this National Park Service website explains, starry skies are becoming an “endangered resource.”

Researchers have traditionally used sky quality meters and photodiode devices to measure night-sky illuminance in specific locations. However, in recent years, satellite cameras have been used to look downward at Earth to analyze the global effects of light pollution over time. From 1992 to 2012, the Defense Meteorological Satellite Program (DMSP) provided nighttime lights imagery. From 2012 to present, the Suomi-NPP Visible Infrared Imager Radiometer Suite (VIIRS) sensor has provided even more detailed imagery of night lights.

NPP VIIRS has a Day/Night Band (DNB) product that can detect lights, gas flares, auroras, and even wildfires from space. I used the NOAA CLASS website to download NPP VIIRS Near Constant Contrast (NCC) data, which is derived from the DNB product. DNB radiance values are converted to a reflectance-like value that offers a better visual interpretation of light sources at night.

Here is a screen capture taken with ENVI 5.3 that shows NCC imagery along with a shapefile of national park boundaries (in red). I restricted the analysis to western continental U.S. parks:

Click on the thumbnail images below to see each park in closer detail. I labeled major cities and petroleum sites in white. Major roads are colored green. Park boundaries are colored red. For comparison, the scale is the same among all the images (1:625,000), as well as the image stretch.

  • Badlands/Wind Cave
  • Big Bend
  • Black Canyon of the
  • Capitol Reef/Arches/
  • Channel Islands
  • Crater Lake
  • Death Valley
  • Grand Canyon
  • Great Basin
  • Great Sand Dunes
  • Carlsbad Caverns/Guadalupe Mountains
  • Joshua Tree
  • Kings Canyon/Sequoia
  • Lassen Volcanic
  • Mesa Verde
  • Mount Rainier
  • North Cascades
  • Petrified Forest
  • Rocky Mountain
  • Theodore Roosevelt
  • Zion/Bryce Canyon

One thing you may notice is an absence of lights within park boundaries. The National Park Service has programs in place to preserve the natural lightscapes of the national parks. These efforts attempt to minimize the intrusion of artificial light into the ecosystems of parks.

From looking at these maps, you can see why some parks are better suited for viewing the night sky. They are far from populated areas:

  • Arches
  • Badlands
  • Big Bend
  • Bryce Canyon
  • Canyonlands
  • Capitol Reef
  • Crater Lake
  • Death Valley
  • Glacier
  • Grand Canyon
  • Great Basin
  • Yellowstone

Others are potentially affected by major sources of nearby light pollution:

  • Carlsbad Caverns
  • Channel Islands
  • Guadalupe Mountains
  • Pinnacles
  • Saguaro
  • Theodore Roosevelt

Satellite imagery gives us a unique perspective on viewing the effects of light pollution over time. To learn more about this subject, please see the resources below.


Davis, L. “10 Spectacular Parks for Stargazing.” National Parks Conservation Association (2014). https://www.npca.org/articles/378-10-spectacular-parks-for-stargazing. Accessed April 2016.

“Night Skies.” National Park Service (2015). http://www.nature.nps.gov/night/index.cfm. Accessed April 2016.

“Remote Sensing with Nighttime Lights.” Remote Sensing special issue, Vol. 7 (2015), C. Elvidge, editor.

The National Park Boundary dataset was provided by the Earth Data Analysis Center (EDAC), Resource Geographic Information System (RGIS), University of New Mexico. Data retrieved from Data.gov (http://www.data.gov).





Can I Subset an Image Using a Shapefile?

Author: Jason Wolfe

In this article I will show how to subset an image from a polygon shapefile that has been converted to a region of interest (ROI). A typical use case is to process only the pixels that are within a geographic boundary, as the following image shows. This is not a “clip” operation but rather a masking procedure whereby the pixels outside of the shapefile boundary are set to values of NoData.

This example uses a Landsat 5 TM image and a polygon shapefile of the Lake Maurepas watershed boundary in the Mississippi River coastal delta region of Louisiana.

First, you need to create an ROI from the polygon shapefile. Follow these steps:

  1. From the ENVI menu bar, select File > New Region of Interest.
  2. In the ROI Tool menu bar, select File > Import from Vector.
  3. Select the polygon shapefile when prompted.
  4. In the Convert Vector to ROI dialog, choose All Records to a single ROI or Each record to a separate ROI, depending on how you want to export the shapefile records.
  5. From the ROI Tool menu bar, select File > Save As and save the ROI to an XML file.

Now you are ready to subset the image. In addition to manually defining the spatial extents of the subset, ENVI offers other options for spatial subsetting such as Subset by Raster, Subset by Vector, and Subset by ROI. The latter two options produce a rectangular subset that is based on the geographic extents of the shapefile or ROI. The subset includes pixels that are outside of the shapefile or ROI, for example:

To create a subset that contains only data pixels inside the ROI boundary (see the example at the beginning of this article), you need to create a mask from the ROI. Follow these steps to create a mask and to subset the image by ROI:

  1. From the ENVI menu bar, select File > Save As > Save As (ENVI, NITF, TIFF, DTED).
  2. Select the image file, then click the Spatial Subset button.
  3. Click the Subset by ROI button.

  4. Select the ROI when prompted.
  5. In the File Selection dialog, click the Mask button.
  6. In the Mask Selection dialog, select the ROI file and click OK.

  7. Click OK in the File Selection dialog.
  8. Save the masked raster to a file on disk.

The result will look similar to the image at the beginning of this article. The white pixels are filled with NoData values. Also note that the coordinate systems differ between the Landsat image and the ROI; however, we did not need to take extra steps to reproject one dataset to the other. ENVI automatically reprojected the ROI to match the coordinate system of the Landsat image.
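Conceptually, the masking step sets every pixel outside the ROI to the NoData value while leaving the pixels inside untouched. A minimal sketch in Python with NumPy, using a hypothetical band and boolean ROI mask in place of ENVI's ROI file:

```python
import numpy as np

# Hypothetical raster band and a boolean ROI mask (True = inside the polygon).
band = np.arange(16, dtype=np.float64).reshape(4, 4)
roi_mask = np.zeros((4, 4), dtype=bool)
roi_mask[1:3, 1:3] = True                  # interior 2x2 block is "inside"

NODATA = -9999.0
subset = np.where(roi_mask, band, NODATA)  # pixels outside the ROI -> NoData
print(np.count_nonzero(subset != NODATA))  # 4 data pixels remain
```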

For more information on subsetting options and creating masks, please refer to the ENVI Help.


Landsat imagery was provided by the U.S. Geological Survey.

The Lake Maurepas shapefile was provided as part of the U.S. Watershed Boundary Dataset (WBD). The WBD is a coordinated effort between the United States Department of Agriculture-Natural Resources Conservation Service (USDA-NRCS), the United States Geological Survey (USGS), and the Environmental Protection Agency (EPA). The WBD was created from a variety of sources from each state and aggregated into a standard national layer for use in strategic planning and accountability. The WBD data for Louisiana is available from http://datagateway.nrcs.usda.gov [Accessed 02/29/2016].





Where in the World Am I?

Author: Jason Wolfe

When analyzing satellite or airborne imagery, we often know the general location of our study area. But we may not know exactly where a particular image is located relative to nearby cities or geographic features (oceans, mountain ranges, and so on). This article provides some tips for finding your way around in ENVI that you may not have been aware of.

Here is an example. At first glance, you probably wouldn't know where this image was located:

Landsat 8 color-infrared composite, acquired from USGS EarthExplorer. Data available from the U.S. Geological Survey.

Landsat images are georeferenced to a standard map projection, so you can display geographic coordinates in the ENVI status bar as you explore the image. The status bar is located along the bottom of the ENVI application. Right-click in any segment of the Status bar and select Lat/Lon Decimal Degrees.

The Cursor Value tool offers a similar method. From the geographic coordinates, you can see that the image is at a high latitude (68ºN). The image reveals geomorphic features that are common in coastal floodplains such as braided river networks and thousands of melt ponds and lakes. 

True-color snapshots of various floodplain features from the Landsat 8 image

Still, this is not enough information to tell where the image is located. Enter the Reference Map Link tool.

This handy feature is located under the Views menu, or you can use the Alt+M keyboard shortcut to start it. The underlying technology and base maps were developed by Esri® via their ArcGIS API for JavaScript. The Reference Map Link dialog displays a base map with a blue dot that corresponds to the location of the scene center:

The lower-left corner of the map displays the updated coordinates as you move around the map. Here are some tips for using the Reference Map Link:

  • Click the + and - buttons, or use the mouse wheel, to zoom in or out.
  • Click and drag to pan around the map.
  • Click and drag the lower-right corner of the window to resize it.
  • Click the Switch Basemap menu to select a different base map.
  • The Reference Map Link is actively linked to the ENVI view. If you change the location in the view, or if you zoom or rotate the image, the map will update to reflect that change.

As you zoom out, a blue border shows the view extent:

This particular image is located in the Mackenzie River drainage basin, Northwest Territories, Canada. The view extent and scene center are shown in the following figure that uses a National Geographic base map:

The Reference Map Link is the simplest way to determine the geographic location of your imagery. You can also overlay vector coastlines, countries, and provinces on an image using the File > Open World Data menu options. Or, use the menu option File > Chip View to > Google Earth to open the extents of the current image view in Google Earth.

Have fun exploring!






















© 2017 Exelis Visual Information Solutions, Inc., a subsidiary of Harris Corporation