Adam is a Product Manager on the ENVI & IDL software teams at Exelis Visual Information Solutions, where he has worked for over 17 years. Adam holds an M.Sc. from the University of Colorado in Geological and Earth Sciences with a focus on remote sensing, GIS, digital image processing, and geospatial analysis. Over his career at Exelis he has evangelized the use of GIS and remote sensing image analysis to facilitate geospatial awareness. As a product manager, Adam's focus is to characterize customer needs that lead to the detailed definition and prioritization of functional requirements for new versions of our ENVI & IDL software. In his spare time he is an avid nature photographer who enjoys all facets of digital imaging science.
Author: Adam O'Connor
By now we've all seen the power of multi-sensor data fusion to facilitate situational awareness, enhancing our ability to understand and interpret a specific environment. Taking the most valuable components of disparate data sources and fusing them together can enrich contextual analysis and help us make better decisions based on the extraction of meaningful information from the fused data. When working with geospatial data such as LiDAR point-clouds and high-resolution imagery, a relatively simple yet powerful technique is to use the georeferencing (spatial reference) metadata to encode each 3D point with corresponding image pixel values based on data geopositioning. This enables more realistic 3D visualization of the point-cloud data, since the points can be displayed using colors derived from an alternate raster data source.
Fortunately, the LAS format specification provides the ability to store RGB color information for every point in a *.las file. However, a LiDAR data collection project does not always include cotemporal image acquisition, so the process of coloring a point-cloud may need to be executed at a later time using raster data from a variety of sensors (e.g. EO/IR, SAR, etc.). For example, some of the Elevation Source Data (3DEP) available for download from The National Map does not include RGB color information, so it can be beneficial to also download the corresponding High-Resolution Orthoimagery (HRO) and then fuse the two datasets together.
With this in mind we have been working diligently on a new "Color Point Cloud" tool (and corresponding programmatic API task) within the upcoming ENVI 6.0 software version planned for release later this year. The new "Color Point Cloud" tool+task will allow users to process 3D point-cloud data along with any geographically overlapping raster dataset to generate a new output LAS 1.2 format *.las file that is RGB-encoded with pixel values from user-selected image bands. This new processing capability also allows the user to decide how to handle points that fall outside the spatial extent of the raster imagery: either remove them from the generated output *.las result or simply color them all black (RGB=0,0,0):
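Under the hood, the core of this kind of fusion is a simple georeferenced lookup: map each point's X/Y coordinate into the raster's pixel grid and sample the color there. Here is a minimal Python/NumPy sketch of the idea, assuming a north-up raster and a hypothetical `color_points` helper (the actual ENVI tool handles map projections, rotated transforms, and LAS I/O for you):

```python
import numpy as np

def color_points(points_xy, rgb, origin, pixel_size, default=(0, 0, 0)):
    """Sample RGB pixel values for each point using a north-up affine
    geotransform (hypothetical helper, for illustration only).
    Points outside the raster extent keep the default color."""
    rows, cols = rgb.shape[:2]
    out = np.full((len(points_xy), 3), default, dtype=np.uint16)
    for i, (x, y) in enumerate(points_xy):
        c = int((x - origin[0]) / pixel_size)   # column from easting
        r = int((origin[1] - y) / pixel_size)   # row from northing
        if 0 <= r < rows and 0 <= c < cols:
            # LAS stores 16-bit color channels, so 8-bit imagery is
            # conventionally scaled up by 256 (left-shifted 8 bits)
            out[i] = rgb[r, c].astype(np.uint16) * 256
    return out
```

The `default=(0, 0, 0)` argument mirrors the "color them all black" option; dropping those points instead would correspond to the tool's removal option.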
Consider the USGS LiDAR Point Cloud (LPC) source data that can be downloaded from The National Map for San Francisco, CA. Since these LAS datasets do not include RGB encoding, a 3D point-cloud visualization will typically involve a simple colormap based on height attributes, perhaps with shading based on intensity. While specific features are clearly visible in this style of data visualization, it can be difficult to visually interpret the point-cloud:
Fusing this point-cloud data with the 1-foot resolution imagery also available for this region yields a much more realistic visual representation:
Keep in mind there's no rule that says the point-cloud RGB encoding must come from the Red | Green | Blue image channels, which is why ENVI's "Color Point Cloud" tool+task is very flexible and allows the user to select any 3 bands from any raster dataset. For example, users can also utilize infrared bands from multispectral or hyperspectral datasets to obtain more complex coloring of the point-cloud data, such as a color-infrared (CIR) representation:
Moving forward we plan to support other point-cloud storage formats such as BPF (Binary Point File) and SIPC (Sensor Independent Point Cloud) that provide the ability to store even more per-point auxiliary attribute data that will enable not just visualization but also specialized algorithm development for automated analysis of fused 3D data products.
Categories: ENVI Blog | Imagery Speaks
Harris Geospatial Solutions (Exelis VIS) was recently made aware of a security vulnerability in the Flexera FlexNet Publisher technology that is utilized for license management of the IDL & ENVI software products. This security vulnerability applies to all current and older versions of our software, and to scenarios where a license server is utilized, which includes network floating (FL) type licenses and some node-locked (SN) type licenses that use a license manager.
The security vulnerability is limited to computers running a license manager server and should not be an issue when the license server components are only exposed on a trusted network. Nevertheless, we recognize the severity of this issue and are committed to making sure our customers can use the IDL & ENVI software in a secure fashion. Consequently, to address this vulnerability Harris has acquired the latest FlexNet Publisher release and built new lmgrd and vendor daemon components that are being released as a security patch for the license manager. Harris recommends that all customers using a license manager for IDL & ENVI products update their license server installations by following the instructions in the following help article on our website:
Categories: ENVI Blog | Imagery Speaks, IDL Blog | IDL Data Point
One of the more powerful new features introduced in the recent ENVI 5.3 Service Pack 1 release is the ability to display a color RGB raster layer (i.e. image) using bands from different raster datasets. This style of color image visualization is useful in a wide variety of scenarios, including displaying multiple derived raster products together or creating a composite image of data from different times (i.e. temporal analysis). This functionality was available using the Available Bands List within the ENVI Classic application, and as of the latest service pack release users can create the same raster visualization in the new ENVI application.
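Conceptually, such a composite simply takes one band from each source, stretches each band independently, and stacks them into the R, G, and B display channels. A minimal Python/NumPy sketch of the idea (a hypothetical helper with a simple min-max stretch, not the ENVI implementation, which applies its own default stretch and handles reprojection):

```python
import numpy as np

def composite_rgb(band_r, band_g, band_b):
    """Build an RGB display array from three bands that may come from
    different files or acquisition dates. Each band is independently
    min-max stretched to 0-255 (illustrative only)."""
    def stretch(b):
        b = b.astype(np.float64)
        lo, hi = b.min(), b.max()
        if hi == lo:  # flat band: avoid divide-by-zero
            return np.zeros_like(b, dtype=np.uint8)
        return ((b - lo) / (hi - lo) * 255).astype(np.uint8)
    return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])
```

For a temporal composite, for example, the same band from three different dates could be passed as the three channels, so change shows up as color.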
In previous versions of ENVI (5.3 and older) if you tried to create a color RGB raster layer using bands from different datasets you would have seen the following warning message:
The selected bands are from multiple files.
Creating a layer from multiple files is not supported.
Tags: ENVI, ENVI 5.3 Service Pack, RGB raster layer
One of the most exciting new features in the upcoming ENVI 5.3 Service Pack 1 release is an implementation of the popular Fmask (Function of mask) algorithm, which provides automated cloud and cloud shadow detection in multispectral images. The initial focus of the ENVI implementation is on the generation of a cloud mask raster that can be used in subsequent image processing analysis to mask out all cloud+shadow pixels. Furthermore, the ability to invert the mask in tools such as the Classification Workflow will allow users who are actually interested in analyzing the clouds to mask out all non-cloud pixels.
The Fmask (3.2) algorithm will be exposed in both a new "Calculate Cloud Mask Using Fmask Algorithm" desktop application tool and associated "ENVICalculateCloudMaskUsingFmaskTask" routine in the programmatic API. Both the GUI tool and API task will create a cloud mask for Landsat 4-5 TM, Landsat 7 ETM+, Landsat 8 OLI/TIRS and NPP VIIRS M-Band datasets (we plan to expand support to include Sentinel-2 in a future release). This tool/task requires the following inputs:
- An image containing multispectral bands calibrated to top-of-atmosphere (TOA) reflectance
- A thermal-band image calibrated to brightness temperatures (in Kelvins)
- A cirrus-band image calibrated to TOA reflectance (applicable to Landsat 8 only)
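For context, the published Fmask algorithm (Zhu & Woodcock, 2012) begins with a conservative first-pass "basic test" on exactly these calibrated inputs. Here is a simplified NumPy sketch of just that first test; the actual algorithm (and the ENVI implementation) layers many additional spectral, cirrus, and probability tests on top of it:

```python
import numpy as np

def fmask_basic_test(red, green, nir, swir1, swir2, bt_celsius):
    """Simplified sketch of the first-pass 'basic test' from Fmask
    (Zhu & Woodcock, 2012): a pixel is a potential cloud if it is
    bright in SWIR2, cold, and neither snow-like nor vegetation-like.
    Inputs are TOA reflectance bands plus brightness temperature in
    degrees C. Illustration only, not the ENVI implementation."""
    ndvi = (nir - red) / (nir + red)        # vegetation index
    ndsi = (green - swir1) / (green + swir1)  # snow index
    return (swir2 > 0.03) & (bt_celsius < 27.0) & (ndsi < 0.8) & (ndvi < 0.8)
```

Pixels passing this test are only *potential* clouds; the remaining passes of the algorithm refine them into the final cloud and cloud shadow masks.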
Here is an example input Landsat 7 ETM+ scene with what I call popcorn clouds that has been calibrated to top-of-atmosphere (TOA) reflectance using the "Radiometric Calibration" tool in ENVI:
A very common geospatial processing task is to receive a new image product that needs to be orthorectified and coregistered to existing controlled base orthoimagery that overlaps the geographic extent of the new dataset. This can be accomplished through a variety of multi-step workflows, such as RPC orthorectification with manual ground control point (GCP) definition, potentially followed by image-to-image coregistration with interactive tie-point refinement. The process becomes even more laborious if there is a significant temporal difference between the datasets and a substantial amount of change has occurred between acquisitions. The primary issue with this approach is that it involves a considerable amount of human-in-the-loop software interaction, which does not lend itself to headless automation or scalable big-data processing deployments.
Since many components of this processing puzzle already exist in our ENVI software, one of our engineers (Dr. Xiaoying Jin) developed a much simpler and more automated solution that will be introduced in our upcoming ENVI 5.3 SP1 release as two new tools (with corresponding programmatic API tasks):
RPC Orthorectification Using Reference Image – performs a refined RPC orthorectification by automatically generating ground control points (GCPs) from an orthorectified reference image with elevation derived from an auxiliary DEM raster dataset
Generate GCPs From Reference Image – generates and exports the ground control points (GCPs) in a format that can be used with other processing tools such as Image-to-Map Registration, Rigorous Orthorectification, DEM Extraction, and RPC Orthorectification workflows (e.g. edit the GCPs or review error statistics in an interactive environment)
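The core idea behind generating GCPs from a reference image is straightforward: automatically match tie points between the input and reference images, then convert each reference-image pixel location into map coordinates via the reference geotransform and pull the elevation from the DEM. A hypothetical Python sketch of that conversion step, assuming north-up rasters and pre-matched tie points (ENVI's task performs the matching, projection handling, and export format for you):

```python
import numpy as np

def tiepoints_to_gcps(ref_pixels, input_pixels,
                      ref_origin, ref_pixel_size,
                      dem, dem_origin, dem_pixel_size):
    """Convert matched tie points into GCPs (illustrative helper).
    Each match pairs a reference-image pixel (col, row) with an
    input-image pixel; the reference geotransform yields map X/Y and
    the DEM supplies the Z value."""
    gcps = []
    for (rc, rr), (ic, ir) in zip(ref_pixels, input_pixels):
        x = ref_origin[0] + rc * ref_pixel_size   # map easting
        y = ref_origin[1] - rr * ref_pixel_size   # map northing
        dc = int((x - dem_origin[0]) / dem_pixel_size)
        dr = int((dem_origin[1] - y) / dem_pixel_size)
        z = float(dem[dr, dc])                    # elevation from DEM
        gcps.append((x, y, z, ic, ir))            # map XYZ + input pixel
    return gcps
```

The resulting (X, Y, Z, sample, line) tuples are exactly the kind of control points that the RPC orthorectification and image-to-map registration workflows consume.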
Consider the following scenario for Castle Rock, CO, where we have a historical QuickBird scene acquired in 2002 (data provided courtesy of DigitalGlobe) and more recent High Resolution Orthoimagery acquired in 2012 (data downloaded from the USGS National Map). In order to perform an accurate change detection analysis over this ten-year period, the two image datasets must be properly aligned. However, the georeferencing of the original raw datasets clearly shows a significant spatial offset between the two images: