
Adam O’Connor

Adam is a Product Manager on the ENVI & IDL software teams at Exelis Visual Information Solutions, and he has been with the company for over 17 years. Adam obtained an M.Sc. degree from the University of Colorado in Geological and Earth Sciences, with a focus on remote sensing, GIS, digital image processing, and geospatial analysis. Over his career at Exelis he has evangelized the use of GIS and remote sensing image analysis to facilitate geospatial awareness. As a product manager, Adam's focus is to characterize customer needs that lead to the detailed definition and prioritization of functional requirements for new versions of our ENVI & IDL software. In his spare time he is an avid nature photographer who enjoys all facets of digital imaging science.

7 Jun 2016

Fusing Point-Cloud Data With Imagery

Author: Adam O'Connor

By now we've all seen the power of multi-sensor data fusion to enhance situational awareness and our ability to understand and interpret a specific environment. Taking the most valuable components of disparate data sources and fusing them together can enrich contextual analysis and help us make better decisions based on the meaningful information extracted from the fused data. When working with geospatial data such as LiDAR point-clouds and high-resolution imagery, a relatively simple yet powerful technique is to use the spatial reference metadata to encode each 3D point with the corresponding image pixel values based on data geopositioning. This enables more realistic 3D visualization of the point-cloud data, since the points can be displayed using colors derived from an alternate raster data source.

Fortunately, the LAS format specification provides the ability to store RGB color information for every point stored in a *.las file. However, a LiDAR data collection project does not always include cotemporal image acquisition, so the process of coloring a point-cloud may need to be executed at a later time using raster data from a variety of sensors (e.g. EO/IR, SAR). For example, some of the 3D Elevation Program (3DEP) source data available for download from The National Map does not include RGB color information, so it can be beneficial to also download corresponding High Resolution Orthoimagery (HRO) and fuse the two datasets together.

With this in mind we have been working diligently on a new "Color Point Cloud" tool (and corresponding programmatic API task) within the upcoming ENVI 6.0 software version planned for release later this year. The new "Color Point Cloud" tool+task will allow users to process 3D point-cloud data along with any geographically overlapping raster dataset to generate a new output LAS 1.2 format *.las file that is RGB-encoded with pixel values from user-selected image bands. This new processing capability also allows the user to decide how to handle points that fall outside the spatial extent of the raster imagery: either remove them from the generated output *.las result or simply color them all black (RGB=0,0,0):
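Under the hood, this style of point coloring boils down to mapping each point's X/Y map coordinate into a pixel index using the raster's georeferencing. Here is a minimal sketch in Python (the function name, array layouts, and the simple axis-aligned geotransform are illustrative assumptions, not the ENVI API):

```python
import numpy as np

def color_points(points, rgb_raster, origin, pixel_size, drop_outside=True):
    """Assign RGB values to 3D points by mapping each point's X/Y map
    coordinate to a pixel in a georeferenced raster.

    points      -- (N, 3) array of X, Y, Z map coordinates
    rgb_raster  -- (rows, cols, 3) image array
    origin      -- (x0, y0) map coordinate of the raster's upper-left corner
    pixel_size  -- (dx, dy) pixel dimensions in map units (both positive)
    """
    rows, cols, _ = rgb_raster.shape
    col = np.floor((points[:, 0] - origin[0]) / pixel_size[0]).astype(int)
    row = np.floor((origin[1] - points[:, 1]) / pixel_size[1]).astype(int)
    inside = (col >= 0) & (col < cols) & (row >= 0) & (row < rows)
    if drop_outside:
        # Remove points that fall outside the raster extent.
        return points[inside], rgb_raster[row[inside], col[inside]]
    # Keep every point; those outside the extent stay black (RGB=0,0,0).
    colors = np.zeros((len(points), 3), dtype=rgb_raster.dtype)
    colors[inside] = rgb_raster[row[inside], col[inside]]
    return points, colors
```

The `drop_outside` flag mirrors the tool's two options for points beyond the raster extent: drop them entirely, or keep them colored black.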


Screenshot of ENVI's new "Color Point Cloud" Tool

Consider the USGS LiDAR Point-Cloud (LPC) source data that can be downloaded from The National Map for San Francisco, CA. Since the LAS datasets do not include RGB encoding, a 3D point-cloud visualization will typically involve a simple colormap based on height attributes, perhaps with shading based on intensity. While specific features are clearly visible in this style of data visualization, it can be difficult to visually interpret the point-cloud:


Data downloaded from The National Map courtesy of USGS

Fusing this point-cloud data with the 1-foot resolution imagery also available for this region yields a much more realistic visual representation:


Data downloaded from The National Map courtesy of USGS

Keep in mind there's no rule that says the point-cloud RGB encoding must come from red, green, and blue image channels, which is why ENVI's "Color Point Cloud" tool+task is very flexible and allows the user to select any three bands from any raster dataset. For example, users can also utilize infrared bands from multispectral or hyperspectral datasets to obtain more complex coloring of the point-cloud data, such as a color-infrared (CIR) representation:



Data downloaded from The National Map courtesy of USGS

Moving forward we plan to support other point-cloud storage formats, such as BPF (Binary Point File) and SIPC (Sensor Independent Point Cloud), which can store even more per-point auxiliary attribute data. This will enable not just visualization but also specialized algorithm development for automated analysis of fused 3D data products.


Categories: ENVI Blog | Imagery Speaks


22 Mar 2016

IDL & ENVI License Server Security Patch

Author: Adam O'Connor

Harris Geospatial Solutions (formerly Exelis VIS) was recently made aware of a security vulnerability in the Flexera FlexNet Publisher technology that is utilized for license management of the IDL & ENVI software products. This vulnerability applies to all current and older versions of our software in scenarios where a license server is utilized, which includes network floating (FL) type licenses and some node-locked (SN) type licenses that use a license manager.
 
The security vulnerability is limited to computers running a license manager server and should not be an issue when the license server components are only exposed on a trusted network. Nevertheless, we recognize the severity of this issue and are committed to making sure our customers can use the IDL & ENVI software in a secure fashion. Consequently, to address this vulnerability Harris has acquired the latest FlexNet Publisher (v11.13.1.3) and built new lmgrd and vendor daemon components that are being released as a security patch for the license manager. Harris recommends that all customers using a license manager for IDL & ENVI products update their license server installations by following the instructions in the corresponding help article on our website.

Please note that we are also working diligently on our upcoming IDL 9.0 and ENVI 6.0 software versions for release later this year, which will include these license manager component updates out of the box. We greatly appreciate your business and patience while we produced a software patch to resolve this security vulnerability. If you have questions or need assistance related to this security patch, please refer to the Getting Help section of the aforementioned help article for instructions on how to contact our technical support.


18 Feb 2016

Color Image Display Using Bands From Different Datasets

Author: Adam O'Connor

One of the more powerful new features introduced in the recent ENVI 5.3 Service Pack 1 release is the ability to display a color RGB raster layer (i.e. image) using bands from different raster datasets. This style of color image visualization is useful in a wide variety of scenarios, including displaying multiple derived raster products together or creating a composite image of data from different times (i.e. temporal analysis). This functionality was previously available through the Available Bands List within the ENVI Classic application, and as of the latest service pack release users can create the same raster visualization in the new ENVI application.
 
In previous versions of ENVI (5.3 and older) if you tried to create a color RGB raster layer using bands from different datasets you would have seen the following warning message:
 

The selected bands are from multiple files.
Creating a layer from multiple files is not supported.
 
Now, from ENVI 5.3 SP1 onward, you can select bands from any raster dataset when choosing what to load into the Red | Green | Blue channels of an image display within the Data Manager dialog, for example:
 

 
The steps to create a new RGB layer consisting of three bands from different files are described in greater detail within the "Manage Raster Layers" topic of the ENVI software documentation help system. The same restriction applies as before: the image bands must all have the same spatial size in terms of number of pixels in the X and Y dimensions ("Columns x Rows" or "Samples x Lines" depending on your preferred terminology) in order to display together as a single RGB color image. In most scenarios it also helps if the raster datasets are coregistered with the same geospatial extent and pixel size, but this is not a requirement enforced by the ENVI software.
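Conceptually, building the RGB layer is just stacking three equally sized band arrays into the red, green, and blue channels, with the size restriction checked up front. A minimal illustration in Python (illustrative function, not the ENVI API):

```python
import numpy as np

def composite_rgb(red, green, blue):
    """Stack three single-band images (possibly read from different files)
    into one RGB display array; the bands must all share the same
    spatial size (samples x lines)."""
    if not (red.shape == green.shape == blue.shape):
        raise ValueError("All three bands must have the same samples x lines")
    # dstack produces a (lines, samples, 3) array suitable for RGB display.
    return np.dstack([red, green, blue])
```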
 
It is important to note that this capability only creates a raster layer visualization for on-screen image display within the ENVI software. If you wish to create a new file-on-disk dataset that is a coregistered stack of multiple input rasters there are tools in the ENVI Toolbox (and concomitant programmatic API routines) including the Image Registration workflow and Layer Stacking tool that provide this functionality.
 
For the purpose of this blog post I want to illustrate a simple example where the results of three different processing tools used in a target detection scenario can be displayed as a single RGB color image. In this scenario I utilized some of the ProSpecTIR-VS hyperspectral data acquired over Avon, NY as part of the RIT SHARE 2012 project. For the target detection training data I used a spectrally pure endmember pixel from one of the bright red targets present within this HSI scene:
 

Hyperspectral image data provided courtesy of Rochester Institute of Technology
 
In the image animation below you will see the True Color image followed by a RGB composite image comprised of the processing results from three different spectral mapping algorithms:
 
R = Adaptive Coherence Estimator (ACE)
G = Constrained Energy Minimization (CEM)
B = Matched Filtering (MF)
 
In the RGB composite image display, the pixels that match the input target strongly in all three spectral mapping algorithms will end up pure white. Consequently, if you keep your eye on the bright red pixels from the input hyperspectral image, you will notice they are "bright" in the RGB composite of the target detection results. What is interesting to notice is how the red targets in shadow next to the trees at the southern end of the open field have a distinct magenta appearance, which suggests that the ACE and MF algorithms did the best job identifying these targets (i.e. red + blue = magenta). Furthermore, one of the other bright targets in the parking lot toward the top of this scene appears bright green in the RGB composite image, suggesting that the CEM algorithm identified this region as a match, which is probably a false positive in this target detection scenario.
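The color interpretation follows directly from additive mixing of the three stretched score images. Here is a hypothetical sketch in Python (illustrative score arrays and a simple linear stretch, not the actual ENVI display stretch):

```python
import numpy as np

def scores_to_rgb(ace, cem, mf):
    """Linearly stretch three detection-score images to 8-bit and load them
    into the R, G, B channels: a pixel scoring high in all three renders
    white, while high ACE + MF but low CEM renders magenta (red + blue)."""
    def to_byte(band):
        lo, hi = band.min(), band.max()
        if hi == lo:
            return np.zeros(band.shape, dtype=np.uint8)
        return ((band - lo) / (hi - lo) * 255).astype(np.uint8)
    return np.dstack([to_byte(ace), to_byte(cem), to_byte(mf)])
```

With hypothetical score arrays, a pixel scoring high in ACE and MF but low in CEM comes out magenta, matching the shadowed-target signature described above.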
 

Hyperspectral image data provided courtesy of Rochester Institute of Technology


3 Nov 2015

Automated Cloud and Cloud Shadow Detection

Author: Adam O'Connor

One of the most exciting new features in the upcoming ENVI 5.3 Service Pack 1 release is an implementation of the popular Fmask (Function of mask) algorithm that provides automated cloud and cloud shadow detection in multispectral images. The initial focus of the ENVI implementation is on the generation of a cloud mask raster that can be used in subsequent image processing analysis to mask-out all cloud+shadow pixels. Furthermore, the ability to invert the mask in tools such as the Classification Workflow will allow users who are actually interested in analyzing the clouds to mask-out all non-cloud pixels.
 
The Fmask (3.2) algorithm will be exposed in both a new "Calculate Cloud Mask Using Fmask Algorithm" desktop application tool and associated "ENVICalculateCloudMaskUsingFmaskTask" routine in the programmatic API. Both the GUI tool and API task will create a cloud mask for Landsat 4-5 TM, Landsat 7 ETM+, Landsat 8 OLI/TIRS and NPP VIIRS M-Band datasets (we plan to expand support to include Sentinel-2 in a future release). This tool/task requires the following inputs:
 
- An image containing multispectral bands calibrated to top-of-atmosphere (TOA) reflectance
- A thermal-band image calibrated to brightness temperatures (in Kelvins)
- A cirrus-band image calibrated to TOA reflectance (applicable to Landsat 8 only)
 
Here is an example input Landsat 7 ETM+ scene (with what I call "popcorn" clouds) that has been calibrated to top-of-atmosphere (TOA) reflectance using the "Radiometric Calibration" tool in ENVI:
 

Image data downloaded from USGS EarthExplorer
 
Here is the output mask raster generated by the new "Calculate Cloud Mask Using Fmask Algorithm" tool with the cloud pixels displayed with a Cyan color (by default Masked pixels in a Mask raster are displayed Black but this color can be changed by the user):
 

Image data downloaded from USGS EarthExplorer
 
It is also worth mentioning that the output mask raster will have the calculated scene cloud cover percentage captured in the new 'cloud cover' metadata, which can be viewed in the View Metadata dialog:
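The cloud cover percentage itself is simply the fraction of flagged pixels in the binary mask. For a mask where 1 = cloud, the computation amounts to (an illustrative calculation, not the ENVI metadata mechanism):

```python
import numpy as np

def cloud_cover_percent(mask):
    """Scene cloud cover as a percentage, given a binary cloud mask
    where 1 = cloud and 0 = clear."""
    return 100.0 * np.count_nonzero(mask) / mask.size
```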
 

 
The output mask raster can then be used as a mask in a subsequent land use/cover classification so that the cloud+shadow pixels do not impact the image processing. In the resulting classification image the cloud+shadow pixels will be designated into a "Masked Pixels" class displayed as dark gray (RGB=64,64,64):
 

Image data downloaded from USGS EarthExplorer
 
Here is another example using a mosaic of two NPP VIIRS moderate resolution M-Band scenes from 23 Oct 2015 when Hurricane Patricia was making landfall on the coast of Mexico:
 

Image data downloaded from NOAA CLASS
 
Although the Classification Workflow has a convenient "Inverse Mask" option checkbox, there are situations where it can be beneficial to invert the mask raster and save it to a new file-on-disk that can be used independently. It just so happens that you can use the Band Math tool with a clever expression involving a binary operator, such as "B1 LT 1", to invert the values of a binary mask raster. In this case mask inversion results in the cloud pixels being "Not Masked" (i.e. On, and displayed as White):
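The "B1 LT 1" expression works because a binary mask contains only 0s and 1s, so "less than 1" is true exactly where the mask is 0. The same inversion expressed in Python with NumPy (illustrative, not the Band Math engine):

```python
import numpy as np

# A binary mask where 1 = masked (cloud) and 0 = not masked.
mask = np.array([[1, 0],
                 [0, 1]], dtype=np.uint8)

# "B1 LT 1" evaluates to 1 wherever the band value is less than 1,
# flipping every 0 to 1 and every 1 to 0.
inverted = (mask < 1).astype(np.uint8)
```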
 

Image data downloaded from NOAA CLASS
 
A subsequent unsupervised classification of the NPP VIIRS M-Band data using this mask will actually result in a classification image of the clouds:
 

Image data downloaded from NOAA CLASS
 
CREDIT
The ENVI software uses the Fmask algorithm cited in the following references:
 
Zhu, Z., S. Wang, and C. E. Woodcock. "Improvement and Expansion of the Fmask Algorithm: Cloud, Cloud Shadow, and Snow Detection for Landsats 4-7, 8, and Sentinel 2 Images." Remote Sensing of Environment 159 (2015): 269-277, doi:10.1016/j.rse.2014.12.014 (paper for Fmask version 3.2).
 
Zhu, Z., and C. E. Woodcock. "Object-Based Cloud and Cloud Shadow Detection in Landsat Imagery." Remote Sensing of Environment 118 (2012): 83-94, doi:10.1016/j.rse.2011.10.028 (paper for Fmask version 1.6).



24 Sep 2015

Making Image-to-Image Alignment Simpler

Author: Adam O'Connor

A very common geospatial processing task is to receive a new image product that needs to be orthorectified and coregistered to an existing controlled base orthoimagery reference that overlaps the geographic extent of the new dataset. This can be accomplished through a variety of multi-step workflows, such as RPC orthorectification with manual ground control point (GCP) definition, potentially followed by image-to-image coregistration with interactive tie-point refinement. The process becomes even more laborious if there is a significant temporal difference between the datasets and a substantial amount of change has occurred between acquisitions. The primary issue with this approach is that it involves a considerable amount of human-in-the-loop software interaction, which does not lend itself to headless automation or scalable big-data processing deployments.
 
Since many of the pieces of this processing puzzle already exist in our ENVI software, one of our engineers (Dr. Xiaoying Jin) developed a much simpler and more automated solution that will be introduced in our upcoming ENVI 5.3 SP1 release as two new tools (with corresponding programmatic API tasks):
 
RPC Orthorectification Using Reference Image – performs a refined RPC orthorectification by automatically generating ground control points (GCPs) from an orthorectified reference image with elevation derived from an auxiliary DEM raster dataset
 
Generate GCPs From Reference Image – generates and exports the ground control points (GCPs) in a format that can be used with other processing tools such as Image-to-Map Registration, Rigorous Orthorectification, DEM Extraction, and RPC Orthorectification workflows (e.g. edit the GCPs or review error statistics in an interactive environment)
 
Consider the following scenario for Castle Rock, CO, where we have a historical QuickBird scene acquired in 2002 (data provided courtesy of DigitalGlobe) and a more recent High Resolution Orthoimagery scene acquired in 2012 (data downloaded from the USGS National Map). In order to perform an accurate change detection analysis over this ten-year period, the two image datasets must be properly aligned. However, the georeferencing for the original raw datasets clearly shows a significant spatial offset between the two images:
 

Image data provided courtesy of DigitalGlobe and USGS
 
Even after performing an RPC orthorectification of the QuickBird Level 1B product (without ground control), there are still offsets of several pixels relative to the reference image we are trying to match. Fortunately, in ENVI 5.3 SP1 a user can now input these two image datasets and a DEM elevation source into a single tool, where the QuickBird dataset is orthorectified and coregistered to the High Resolution Orthoimagery in one quick-n-easy processing step:
 

 
Another benefit of this new ENVI software functionality is that the user does not need to be concerned with the spatial extent of image overlap or differences in coordinate system and pixel size; the software handles these processing complexities automatically. Here is a screenshot of the processing output, which shows nearly perfect pixel alignment between the two image datasets:
 

Image data provided courtesy of DigitalGlobe and USGS






© 2017 Exelis Visual Information Solutions, Inc., a subsidiary of Harris Corporation