Author: Joe Peters

Joe is a Solutions Engineer on the County Government team at Esri. Joe was previously a Technical Solutions Engineer at Exelis VIS. He holds a B.S. in Environmental Science/Geology from the University of Vermont and an M.E. in GIS from the University of Colorado Denver. Joe has a strong background in remote sensing and image analysis. He has a keen interest in solving complex geospatial problems with the latest in enterprise technologies. You can follow Joe on Twitter @JPeterPeters.
For the past several months, I have been hearing from ENVI users about a new set of provisional Landsat products provided by the USGS. These products, known as Surface Reflectance High Level Data Products, are generated by specialized software originally developed by NASA. The software applies radiometric calibration and atmospheric correction algorithms to Level-1 Landsat data products. For users of Landsat data, this means you can access data that has been pre-processed to top-of-atmosphere (TOA) reflectance, surface reflectance, or (in the case of thermal bands) brightness temperature. The data products also include masks for clouds, cloud shadows, adjacent clouds, land, and water, and users can order several spectral index products for their Landsat scenes. This is an interesting development: although this sort of pre-processing is bread and butter for ENVI, the ability to order a product that has been pre-processed using proven radiometric calibration and atmospheric correction algorithms helps ensure accurate analysis and can ultimately save users a lot of time.
This past week, I was finally able to dig into these data products to see what we could do with them in ENVI. The good news is that for the most part, there's not much a user needs to do to start working with the data products in ENVI. The first step is simply ordering the products that you would like. To order the products you must have an account with the USGS (if you don't have one already, it's easy to get one). You can order the products from the USGS Earth Resources Observation and Science (EROS) Center Science Processing Architecture (ESPA) On Demand Interface. This interface is pretty nice. The only real trick is that you have to put the Landsat scene IDs for the scenes you would like to order into a text file that you will upload to the interface. I typically order Landsat scenes through GloVis, so I used GloVis to search for some Landsat 8 scenes that I wanted to order and copied the scene IDs into a text file once I found the ones I wanted (here's an example of a Landsat 8 scene ID: LC80330342013269LGN00). The screenshot below shows an example of the ordering interface.
Once I uploaded the text file to the interface, there were a lot of options to choose from. Since I wanted to check out all of the possibilities, I ordered quite a few products. For starters, I ordered the original Level-1 input products and metadata. This was nice because I got the raw data for each band and the *MTL.txt file that ENVI can read to open the original Level-1 product with all of the metadata and band information you would expect. I also ordered TOA reflectance products, surface reflectance products, brightness temperature products, and a few of the spectral indices that are available. Users also have the ability to reproject all of the images in their order to a common coordinate system, modify the image extents, and resize pixels. I chose not to do this, but it is really nice that this is available. The last choice is the output format for your products. There are three formats to choose from: GeoTIFF, ENVI, and HDF-EOS2. I have not tested the HDF-EOS2 format, but I have confirmed that both the GeoTIFF and ENVI formats are supported in ENVI. Once I received my order, I used 7-Zip to untar and unzip my files. The following screenshot shows an example of the files I received from my GeoTIFF order.
So, what can we do with all of this in ENVI? For starters, if you ordered the original input images, you can drag and drop the *MTL.txt file into the ENVI display (or do File > Open and select the *MTL.txt file). This opens all of the GeoTIFF files associated with the original Level-1 product in ENVI, with all of the appropriate metadata and band combinations you would expect, allowing you to do all of the pre-processing and analysis that ENVI is capable of yourself.
But, let's be honest, if we wanted to do this ourselves, we wouldn't have ordered the higher level data products to begin with! While I can't touch on everything we can do with these products, I'd like to focus on what I felt was the most important thing to support: working with the surface reflectance images for the multispectral bands. These hold the most value for our users because they are the radiometrically calibrated and atmospherically corrected products, which are highly useful for calculating spectral indices, performing land use classification, and determining land change from multitemporal datasets. The problem I encountered is that the metadata for these bands is contained in a .xml file that covers all of the files for the scene. ENVI does not natively read this file (though it is possible we will support it in the future if customers show enough interest). For now, users can manually load the individual surface reflectance images for each band into ENVI and use the Layer Stacking tool to combine the individual files into one multispectral image containing all the bands, as shown below.
Once the bands have been layer stacked into one file, the Edit ENVI Header tool can be used to add appropriate metadata, such as band names, band wavelengths, acquisition time, sensor information, and so on. This information can be found in the *MTL.txt file, in the *.xml file, or gleaned from the individual filenames. While this approach works, it is awfully manual, and manual approaches take time. So, what I have come up with this week is a new set of extensions that automatically stack the multispectral surface reflectance bands and add the appropriate metadata. So far, I have developed extensions that work with Landsat 8 in ENVI and GeoTIFF formats, but soon I plan to have extensions that do this for all Landsat sensors. Once complete, I will make these tools available in the ENVI extensions library.
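For those comfortable with a little IDL, the core of that idea can be sketched in just a few lines. Everything below is illustrative: the directory, the *_sr_band*.tif filenames, and the use of ENVIMetaspectralRaster (a virtual band-stacking function available in newer releases of the ENVI API) are assumptions you would adapt to your own order and ENVI version.

```idl
; Sketch: programmatically stack Landsat 8 surface reflectance GeoTIFFs.
; The path and filenames below are illustrative; adapt them to your order.
compile_opt idl2
e = ENVI(/CURRENT)
dir = 'C:\Data\LC80330342013269LGN00\'
files = dir + 'LC80330342013269LGN00_sr_band' + STRTRIM(INDGEN(7) + 1, 2) + '.tif'
; Open each single-band surface reflectance file
rasters = OBJARR(7)
FOR i = 0, 6 DO rasters[i] = e.OpenRaster(files[i])
; Stack the single-band rasters into one virtual multispectral raster
; (assumes ENVIMetaspectralRaster is available in your ENVI version)
stacked = ENVIMetaspectralRaster(rasters, SPATIALREF=rasters[0].SPATIALREF)
; Band names, wavelengths, etc. can then be added with the Edit ENVI Header tool
```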
If people show enough interest, I can develop similar tools that will automatically stack TOA reflectance bands and add the appropriate metadata. Once the multispectral surface reflectance images are stacked into a single file, users can carry on with whatever analysis they choose, such as performing Tasseled Cap, Principal Component Analysis, choosing from one of ENVI's 65 built-in spectral indices, analyzing multitemporal changes, or creating classification images. I welcome feedback since I would like to know what other uses people might have for these datasets and how we can best support them in ENVI.
Last week I co-taught our very first Advanced Geospatial Analytics (AGA) course in our Boulder, Colorado, office. For this course, we decided to develop the content completely from scratch. This allowed us to focus on some of my favorite new tools available in ENVI + IDL for pre-processing, analyzing, and visualizing multitemporal data. The concept of performing multitemporal analysis is not new to remote sensing, but in the latest release of ENVI we have built out a lot of new tools that are powerful in their simplicity and ease-of-use. If you ever have occasion to view and compare multiple images of the same place taken at different times, these tools might be worth knowing about.
Top of my list is the ENVITask system, a relatively new method for performing discrete bits of image processing programmatically through the ENVI object-based API. This programmatic approach to image processing can save a lot of time because you can easily chain together multiple ENVITasks, allowing the output from one ENVITask to become the input to the next. You can also loop through multiple files, so that whatever processing you choose to do, you can repeat for multiple images all at once. ENVITasks were introduced in ENVI 5.1 and the number of available ENVITasks has steadily grown with each subsequent release. As of ENVI 5.3, there are over 100 ENVITasks available in the ENVI API. To run an ENVITask programmatically, a user simply defines the ENVITask to be executed, defines its parameters (input files, output file locations, and any additional task parameters), and executes the ENVITask. For a full list of available ENVITasks, along with examples for calling each individual ENVITask from the ENVI API, please consult our online Documentation Center - http://www.exelisvis.com/docs/ENVITask.html.
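In practice, those three steps look like this. The SpectralIndex task shown here is one of the documented tasks, but treat the input path as illustrative:

```idl
; Run a single ENVITask: calculate an NDVI image from an input raster
compile_opt idl2
e = ENVI(/CURRENT)
raster = e.OpenRaster('C:\Data\scene.dat') ; illustrative path
; 1) Define the ENVITask to be executed
task = ENVITask('SpectralIndex')
; 2) Define the ENVITask parameters
task.INPUT_RASTER = raster
task.INDEX = 'Normalized Difference Vegetation Index'
task.OUTPUT_RASTER_URI = e.GetTemporaryFilename()
; 3) Execute the ENVITask
task.Execute
```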
In the case of our AGA course, we used high resolution satellite imagery provided by Airbus Defense and Space to look at agricultural fields in the Central Valley of California. The imagery was collected by the Pleiades 1A and 1B sensors. The following code example shows how we can:
1. Create a list of input Pleiades images to perform processing on
2. Call the ENVIRadiometricCalibrationTask
3. Call the ENVIQUACTask
4. Call the ENVISpectralIndicesTask
5. Open each file and perform each step of processing
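A condensed sketch of that processing chain might look like the following. The task names match those listed above, but the search path is illustrative, and you should confirm parameter names against the documentation for your ENVI release.

```idl
; Sketch: chain radiometric calibration, QUAC, and NDVI for a batch of images.
; The search path is illustrative; confirm task parameter names in the docs.
compile_opt idl2
e = ENVI(/CURRENT)
; 1. Create a list of input Pleiades images
files = FILE_SEARCH('C:\Data\Pleiades', '*.JP2')
FOREACH file, files DO BEGIN
  ; 5. Open each file and perform each step of processing
  raster = e.OpenRaster(file)
  ; 2. Radiometric calibration
  calTask = ENVITask('RadiometricCalibration')
  calTask.INPUT_RASTER = raster
  calTask.OUTPUT_RASTER_URI = e.GetTemporaryFilename()
  calTask.Execute
  ; 3. QUAC atmospheric correction
  quacTask = ENVITask('QUAC')
  quacTask.INPUT_RASTER = calTask.OUTPUT_RASTER
  quacTask.OUTPUT_RASTER_URI = e.GetTemporaryFilename()
  quacTask.Execute
  ; 4. Spectral indices (NDVI) on the corrected image
  siTask = ENVITask('SpectralIndices')
  siTask.INPUT_RASTER = quacTask.OUTPUT_RASTER
  siTask.INDEX = ['Normalized Difference Vegetation Index']
  siTask.OUTPUT_RASTER_URI = e.GetTemporaryFilename()
  siTask.Execute
ENDFOREACH
```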
While coding in the ENVI API might not be for everyone, there are some serious advantages to digging in and becoming familiar with the ENVITask system. In the example shown above, we were able to perform radiometric calibration, atmospheric correction, and create an output NDVI image for 9 Pleiades 1A and 1B images simply by clicking "Go". I could go on and on about some of the exciting details of the ENVITask system, but in the interest of time I am going to move along.
Next on my list is the ability to build a series of images (called a raster series) for spatiotemporal analysis, then view the images incrementally. In the image below, a raster series has been created from the results of running an Optimized Soil Adjusted Vegetation Index (OSAVI) on the 9 images provided by Airbus Defense and Space. Using the Series Manager, we can quickly step forwards or backwards through our series of images to get an idea of vegetation health for any given pixel in our scene throughout time. You can also quickly annotate the date and time of acquisition for your scenes and export your time series to video. In the example below, I have chosen to create a raster series from the results of image processing, but we could have just as easily created a time series from our original images.
The images that comprise a spatiotemporal series do not necessarily have to be in the same map projection or have the same spatial extent. An example is animating a time series to track a hurricane; in this case, it is not critical that the images have the exact same spatial extent, projection, or resolution. However, ensuring a common projection and spatial extent allows you to perform a true multitemporal analysis; for example, studying the change in vegetation over time or the change in a glacier's extent over time in a specific location. To this end, ENVI has introduced several tools for gridding a raster series to a common projection and spatial extent. These Regrid Raster Series tools can be found in the ENVI Toolbox and can also be called programmatically via the ENVITask system.
The Regrid Raster Series tools include the ability to regrid by the union of all images in the series, by the intersection of all images in the series, by the spatial extent and coordinate system of a selected image in the series, or by defining a custom grid (i.e., coordinate system, spatial extent, pixel size, etc.). These are powerful tools when you consider how much work would have been required in the past to make sure that each pixel in an image lines up exactly with each pixel in a set of corresponding images. For more information, please reference our online Documentation Center - http://www.exelisvis.com/docs/SpatiotemporalAnalysis.html.
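Programmatically, building and regridding a series might be sketched as below. Treat this strictly as a sketch: the BuildRasterSeries and RegridRasterSeriesByIntersection task names and their parameter names are my recollection of the ENVI 5.3 API, so verify them in the Spatiotemporal Analysis documentation before relying on them.

```idl
; Sketch: build a raster series and regrid it to the intersection of all extents.
; Task and parameter names here are assumptions - verify against your ENVI docs.
compile_opt idl2
e = ENVI(/CURRENT)
; Gather the OSAVI result files (illustrative path)
files = FILE_SEARCH('C:\Data\OSAVI', '*.dat')
; Build the series
buildTask = ENVITask('BuildRasterSeries')
buildTask.INPUT_RASTER_URI = files
buildTask.OUTPUT_RASTERSERIES_URI = e.GetTemporaryFilename('.series')
buildTask.Execute
; Regrid every raster in the series to a common grid (intersection of extents)
regridTask = ENVITask('RegridRasterSeriesByIntersection')
regridTask.INPUT_RASTERSERIES_URI = buildTask.OUTPUT_RASTERSERIES_URI
regridTask.OUTPUT_RASTERSERIES_URI = e.GetTemporaryFilename('.series')
regridTask.Execute
```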
Last, but certainly not least, I want to talk a bit about the new Series Profile tool in ENVI. Once a raster series has been regridded to a common coordinate system and spatial extent, we can use the Series Profile tool to truly dig into a scene to get an understanding of how any given pixel has changed over time. In the example below, we are using our Pleiades image processing results to look at an individual field throughout the growing season. Using our Series Profile tool to view an Optimized Soil Adjusted Vegetation Index, we see that crop health steadily increased until sometime in the middle of June. By June 23, crop health began to decline and we even see signs of crop loss in the northwest corner of the field. This is just one example of how ENVI multitemporal analysis tools can provide insight into conditions that might not be readily available from the ground.
It's that time of year again when many of us around our office find ourselves busy getting ready for the Esri User Conference (UC). While this time of year is most certainly stressful, I always enjoy preparing for the Esri UC because it gives me the opportunity to take stock of what I have noticed going on in our industry. There's a lot going on in the worlds of remote sensing and geography right now, but from my perspective, one of the most exciting things is the rapid proliferation of spaceborne and airborne sensors. Rising demand for remotely sensed data coupled with falling costs for building and deploying spaceborne and airborne sensors means that we have a lot more data to work with.
With spaceborne sensors, there seems to be a divergence in the commercial market. Some companies are deploying larger satellites carrying multiple sensors capable of capturing images with higher spatial and higher spectral resolution than ever before. Other companies are deploying constellations of much smaller satellites with high spatial resolution multispectral sensors in order to capture as much of the earth's surface as possible at any given time. Having such a constellation of satellites also allows for rapid revisit times, which can be very beneficial for many applications, such as agricultural crop monitoring. The good news for us, the users of remotely sensed data, is that we benefit from both of these approaches.
The image above, captured by WorldView-3 from DigitalGlobe, shows a Short Wave Infrared (SWIR) band combination of Cuprite, Nevada. The high spatial resolution SWIR bands available from WorldView-3 are unique in the commercial remote sensing industry and can be used for a myriad of applications, including material identification, land use classification, and mining.
The image above shows an Optimized Soil Adjusted Vegetation Index (OSAVI) calculated from multiple Pleiades images from Airbus Defense and Space. The Pleiades satellites offer high spatial resolution multispectral imagery. The Airbus Defense and Space constellation of satellites allows for rapid revisit times, which is ideal for such uses as monitoring crop health throughout a growing season.
This year, the Federal Aviation Administration (FAA) selected six test sites for Unmanned Aerial Systems (UAS) and also began granting exemptions to a range of companies flying UAS. There is no doubt that the emerging UAS market will have a huge impact on the remote sensing industry. We are already beginning to see the use of UAS data for such applications as natural disaster monitoring, wildlife observation, precision agriculture, surveying, and more. In the image below, a point cloud has been generated from high spatial resolution multispectral images captured by a UAS. This point cloud can further be used to generate a Digital Elevation Model (DEM) and contour lines for the area of acquisition. This is pretty cool stuff and it's just one of the many applications for UAS data.
Speaking of point clouds, there have also been some advancements in the realm of LiDAR. Geiger-mode LiDAR is an interesting new commercial technology that enables the collection of data at higher resolutions and from higher elevations than traditional linear-mode LiDAR collection. Harris recently introduced the world's first commercial Geiger-mode LiDAR sensor. This interesting method of LiDAR point cloud collection means that we can collect higher resolution point clouds over broader areas at a much lower cost than traditional LiDAR collection methods. The point cloud in the image below was captured by a Geiger-mode LiDAR sensor. It might be a little hard to see, but I circled the point cloud density in the image below. In the most dense areas of the scene, the point cloud density is almost 200 points per square meter, with an average density throughout the scene of around 75 points per square meter. That's pretty dense!
There are, of course, too many new sources of remotely sensed data to mention in this blog post, but that's sort of my point. I have said nothing of publicly-funded earth observing sensors, such as the Advanced Baseline Imager (ABI) being built for the next generation of Geostationary Operational Environmental Satellites (GOES), or the Magnetospheric Multiscale Mission, which will investigate how the Sun's and Earth's magnetic fields connect and disconnect. This is all really cool stuff, but perhaps too much to go into detail here.
With all of these sensors producing all of this data comes the need to store, process, and make sense of it all. This is where I like to play and I am excited about some of the new tools we will be showing at this year's Esri UC - from new methods of processing and visualizing data in our ENVI desktop software to new ways to process data in enterprise and web-based environments. If you happen to be headed to this year's Esri UC, please stop by the Harris booth to say hello.
This past week, Exelis and DigitalGlobe, Inc. agreed to provide a new commercial offering of cloud-based ENVI analytics for the DigitalGlobe Geospatial Big Data Platform (GBD). The agreement will enable imagery users to easily combine powerful geospatial analytics with the vast DigitalGlobe image library to solve challenging environmental, natural resource and global security problems.
In response to the devastating 7.8 magnitude earthquake that struck central Nepal on April 25, DigitalGlobe has made high resolution satellite imagery of the affected areas freely available online to all groups involved in the response and recovery effort. This imagery can be accessed via http://services.digitalglobe.com. For more information on how to access DigitalGlobe high resolution imagery to aid in recovery efforts, see this blog post.
In addition to the offerings that DigitalGlobe has made available, DigitalGlobe and Exelis have put their new partnership to good use by setting up Amazon instances allowing free access to the data and ENVI image analysis software for anyone who would like to lend their image analysis skills to generate useful products for response and recovery efforts. If you are interested in getting access to one of these Amazon instances, please contact the DigitalGlobe GBD team via http://geobigdata.launchrock.com. You can also contact us at firstname.lastname@example.org.
Next week I will be teaching a class here in our Boulder, Colorado, office called Extending ENVI with IDL. Many people may not be aware that our geospatial image analysis software (ENVI) is built on a powerful programming language called IDL. IDL is used across disciplines to extract meaningful information and to create visualizations from complex numerical data. Predecessor versions of IDL were written for analysis of data from NASA missions such as Mariner and the International Ultraviolet Explorer. These predecessor versions of IDL were developed in the 1970s at the Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado, Boulder. The first official version of IDL was released in 1977 by Research Systems, Inc. (RSI). IDL has come a long way since it was first created in the 1970s. Our engineers are constantly adding new functionality to IDL to keep it modern, flexible, and user-friendly.
Because IDL has its roots as a programming language geared towards scientists and engineers working with multidimensional arrays of scientific data, it lends itself very naturally to working with raster data from remote sensing systems. In 1994, RSI released the first version of ENVI. Since then, ENVI and IDL have been like inseparable best friends. Whatever IDL can do, ENVI can too. ENVI has an API written in IDL that can be used to create custom algorithms, custom workflows, and for batch processing of remotely sensed data.
The ability of ENVI to be extended using IDL is, in my opinion, the single most powerful aspect of ENVI. The ENVI API has a quick learning curve and once you figure it out, the sky is the limit to what you can do with your data. This includes powerful visualizations of geographic data, such as the one I created below using the CONTOUR function.
One of the more powerful aspects of the ENVI API is the ability to implement custom algorithms. This allows users to apply an algorithm that they have read about in scientific literature or to experiment with their own image analysis algorithms. Let's go ahead and take a look at how one might implement a custom algorithm in ENVI using IDL. In this example, we will use the Tasseled Cap Transformation (TCT) for Landsat 8. The TCT was originally designed for the Landsat Multi-Spectral Scanner (MSS), which was launched in 1972. Subsequent adaptations of the TCT have been published in scientific literature for the Landsat Thematic Mapper (TM), the Landsat Enhanced Thematic Mapper (ETM) and the Landsat Enhanced Thematic Mapper Plus (ETM+) sensors. The ability to perform the TCT on Landsat MSS, TM, ETM and ETM+ images is available out-of-the-box with ENVI. However, the TCT algorithm for the Landsat Operational Land Imager (OLI) sensor was only recently published and has not yet made it into the ENVI Toolbox. But, because the algorithm is now available in the scientific literature, we can apply it to Landsat 8 data by using the ENVI API. This algorithm takes a Landsat 8 image and calculates a new image file containing 6 bands, with each band containing unique information about the scene, such as albedo (brightness), vegetation health (greenness), and water content (wetness).
The example code below can be implemented in ENVI + IDL to perform a TCT on Landsat 8 data, as described in an article titled, Derivations of a tasseled cap transformation based on Landsat 8 at-satellite reflectance:
; Copyright (c) 1988-2015, Exelis Visual Information Solutions, Inc. All rights reserved.

; Set compile options - a standard compatibility line recommended for all IDL code
compile_opt idl2

; Use the current session of ENVI
e = ENVI(/CURRENT)

; Path and filename to a pre-calibrated Landsat 8 image file.
; The input file should be radiometrically calibrated and optionally atmospherically corrected.
inputRaster = e.OpenRaster('C:\Data\Naples\LC80160422015028LGN00_RadCal_MS_Subset.dat')

; Subset the input raster - only the 6 reflective bands (Landsat 8 bands 2-7) are needed
; for the TCT calculation; adjust the zero-based band indices to match your input file
subsetRaster = ENVISubsetRaster(inputRaster, BANDS=[0,1,2,3,4,5])

; Set up output raster file
outputURI = e.GetTemporaryFilename()
outputRaster = ENVIRaster(URI=outputURI, INHERITS_FROM=subsetRaster)

; Create tiles - iterating through tiles is a good idea when you want to minimize the
; impact of image processing on your computer's resources. In spectral mode, each tile
; holds one image line across all bands: [columns, bands].
tiles = subsetRaster.CreateTileIterator(MODE='spectral')

; Iterate through tiles
FOREACH tile, tiles DO BEGIN
  b1 = tile[*,0] ; band 1 of the subset (Landsat 8 band 2, blue)
  b2 = tile[*,1] ; band 2 (green)
  b3 = tile[*,2] ; band 3 (red)
  b4 = tile[*,3] ; band 4 (NIR)
  b5 = tile[*,4] ; band 5 (SWIR 1)
  b6 = tile[*,5] ; band 6 (SWIR 2)
  ; Allocate a floating-point output tile with the same dimensions as the input tile
  dims = SIZE(tile, /DIMENSIONS)
  data = MAKE_ARRAY(dims, /FLOAT)
  ; Use IDL to do the TCT calculations (coefficients from Baig et al., 2014)
  ; Calculate brightness
  data[*,0] = (b1 * 0.3029) + (b2 * 0.2786) + (b3 * 0.4733) + (b4 * 0.5599) + (b5 * 0.508) + (b6 * 0.1872)
  ; Calculate greenness
  data[*,1] = (b1 * (-0.2941)) + (b2 * (-0.243)) + (b3 * (-0.5424)) + (b4 * 0.7276) + (b5 * 0.0713) + (b6 * (-0.1608))
  ; Calculate wetness
  data[*,2] = (b1 * 0.1511) + (b2 * 0.1973) + (b3 * 0.3283) + (b4 * 0.3407) + (b5 * (-0.7117)) + (b6 * (-0.4559))
  ; Calculate TCT4 (Haze)
  data[*,3] = (b1 * (-0.8239)) + (b2 * 0.0849) + (b3 * 0.4396) + (b4 * (-0.058)) + (b5 * 0.2013) + (b6 * (-0.2773))
  ; Calculate TCT5
  data[*,4] = (b1 * (-0.3294)) + (b2 * 0.0557) + (b3 * 0.1056) + (b4 * 0.1855) + (b5 * (-0.4349)) + (b6 * 0.8085)
  ; Calculate TCT6
  data[*,5] = (b1 * 0.1079) + (b2 * (-0.9023)) + (b3 * 0.4119) + (b4 * 0.0575) + (b5 * (-0.0259)) + (b6 * 0.0252)
  ; Write to output file
  outputRaster.SetTile, data, tiles
ENDFOREACH

; Add appropriate Band Names to the HDR file
metadata = outputRaster.METADATA
metadata.UpdateItem, 'BAND NAMES', ['Brightness','Greenness','Wetness','TCT4','TCT5','TCT6']

; Save changes to the output file
outputRaster.Save

; Display the output file in the ENVI Display
view = e.GetView()
layer = view.CreateLayer(outputRaster)
The input file for this example was captured by Landsat 8 over Naples, Florida, on January 28, 2015. The screen capture below from the ENVI display shows a Color Infrared representation of the scene, along with the computed Brightness, Greenness and Wetness bands from the Tasseled Cap Transformation. This is just one example of how the power of IDL can make implementing custom algorithms so easy.
Muhammad Hasan Ali Baig, Lifu Zhang, Tong Shuai & Qingxi Tong (2014) Derivations of a tasseled cap transformation based on Landsat 8 at-satellite reflectance, Remote Sensing Letters, 5:5, 423-431, DOI: 10.1080/2150704X.2014.915434