Using Deep Learning for Feature Extraction

Author: Barrett Sather

In August, I talked about how to pull features out of images using known spatial properties about an object. Specifically, in that post, I used rule-based feature extraction to pull stoplights out of an image.

Today, I’d like to look into a new way of doing feature extraction using deep learning technology. With our deep learning tools developed here in house, we can use examples of target data to find similar objects in other images.

To train the system, we will need three different kinds of examples so the deep learning network can learn what to look for: targets, non-targets, and confusers. These examples are patches cut out of similar images, and the patches are all the same size. For this exercise, I've picked a size of 50 by 50 pixels.
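MEGA handles patch management itself, but the basic idea of cutting fixed-size training chips out of an image can be sketched in a few lines. This is an illustration only (NumPy, not part of the ENVI/MEGA workflow), with a made-up image size:

```python
import numpy as np

def extract_patch(image, center_row, center_col, size=50):
    """Cut a size-by-size patch centered on (center_row, center_col).

    Returns None if the patch would fall outside the image bounds.
    """
    half = size // 2
    r0, c0 = center_row - half, center_col - half
    r1, c1 = r0 + size, c0 + size
    if r0 < 0 or c0 < 0 or r1 > image.shape[0] or c1 > image.shape[1]:
        return None
    return image[r0:r1, c0:c1]

# A fake 200 x 300 grayscale image standing in for a street scene.
image = np.zeros((200, 300))
patch = extract_patch(image, 100, 150)
print(patch.shape)  # (50, 50)
```

Patches cut this way from target, non-target, and confuser locations become the three example sets described above.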

The first patch type is actual target data – I’ll be looking for illuminated traffic lights. For the model to work well, we’ll need different kinds of traffic signals, lighting conditions, and camera angles. This will help the network generalize what the object looks like.

Next, we’ll need negative data, or data that does not contain the object. This will be the areas surrounding the target, plus other features likely to appear in the background of the image. In our case of traffic lights, this includes cars, streetlights, road signs, foliage, and other background objects.

For the final patch type, I went through some images and marked things that may confuse the system. These are called confusers: objects with a similar size, color, and/or shape to the target. In our case, this could be other signals like red arrows or a “don’t walk” hand. I’ve also included some bright road signs and a distant stop sign.

Once we have all of these patches, we can use our machine learning tool known as MEGA to train a neural network that can be used to identify similar objects in other images. 

Do note that I have many more patches created than just the ones displayed. With more examples, and more diverse examples, MEGA has a better chance of accurately classifying target vs non-target in an image.

In our case here, we’ll only have three possible outcomes as we look through the image: light, not-light, and looks-like-a-light. If you have many different objects in your scene, you can even produce something more like a classification image, since MEGA can be used to identify as many objects in an image as you like. If we wanted to extend this idea, we could look for red lights, green lights, street lights, lane markers, or other cars. (This is a simple example of how deep learning would be used in autonomous cars!)
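The scan-and-classify step can be pictured as a sliding window over the image, with the trained network voting on each window. The sketch below is conceptual Python, not MEGA's actual API; the window stride, image size, and stand-in classifier are all made up for illustration:

```python
import numpy as np

CLASSES = ["light", "not_light", "looks_like_a_light"]

def classify_image(image, classifier, size=50, stride=25):
    """Slide a size-by-size window across the image and record the
    winning class at each window position."""
    labels = {}
    for row in range(0, image.shape[0] - size + 1, stride):
        for col in range(0, image.shape[1] - size + 1, stride):
            scores = classifier(image[row:row + size, col:col + size])
            labels[(row, col)] = CLASSES[int(np.argmax(scores))]
    return labels

# A stand-in "network" that always votes for the non-target class.
dummy = lambda patch: [0.1, 0.8, 0.1]
result = classify_image(np.zeros((100, 100)), dummy)
```

With a real network in place of `dummy`, the positions labeled "light" are the extracted features.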

To learn more about MEGA and what it can do in your analysis stack, contact our custom solutions group for more details! For my next post, we’ll look at the output from the trained neural network, and analyze the results.


Categories: ENVI Blog | Imagery Speaks





ENVI: Adding custom polygons to the display

Author: Daryl Atencio

A recent project I worked on required custom polygons, controlled by my application, to be added to the ENVI display.  The following code defines an object class that allows the user to place a polygon in the ENVI display using window coordinates.  To run the application:

  1. Save the code that follows to a file named envi_polygon_example.pro
  2. Open and compile the file in the IDLDE
  3. Execute the following at the IDL command prompt:

     envi_polygon_example



; Lifecycle method for destroying the object


pro my_polygon::Cleanup

  compile_opt idl2, logical_predicate

  ; Release member variables, then let the superclass clean up
  self->my_polygon::Destruct
  self->IDLmiManipGraphicOverlay::Cleanup

end






; Cleans up the member variables of the object class


pro my_polygon::Destruct

  compile_opt idl2, logical_predicate

  ; Null the dataspace reference held by this object
  self._oTargetDS = obj_new()

end








; Lifecycle method for initializing the class


; :Returns:

;   1 if the object initializes successfully and 0 otherwise


; :Params:

;   xy: in, optional, type="int"

;     a [2,n] array containing the [x,y] points in window coordinates


; :Keywords:

;   POLYGONS: in, optional, type="int"

;     An integer array of one or more polygon descriptions.  See IDL help for

;     IDLgrPolygon for more information

;   _REF_EXTRA: Used to set properties of the inherited IDLgrPolygon


function my_polygon::Init, xy, $
  POLYGONS=poly, $
  _REF_EXTRA=refExtra

  compile_opt idl2, logical_predicate

  void = self->IDLmiManipGraphicOverlay::Init(_EXTRA=refExtra)

  if ~void then begin
    return, 0
  endif

  if n_elements(xy) then begin
    self->SetProperty, DATA=xy, POLYGONS=poly
  endif

  if n_elements(refExtra) then begin
    self->SetProperty, _EXTRA=refExtra
  endif

  return, 1

end





; This method initializes the data space used by the graphics layer


pro my_polygon::InitializeDataspace

  compile_opt idl2, logical_predicate

  e = envi(/CURRENT)

  eView = e->GetView()

  eView->GetProperty, _COMPONENT=ecfViewGroup

  oDS = ecfViewGroup->GetDescendants(BY_TYPE='DATASPACE', /FIRST_ONLY)

  self._oTargetDS = oDS

end





; Initializes the graphics components of the class


pro my_polygon::InitializeGraphics

  compile_opt idl2, logical_predicate

  void = self->IDLgrPolygon::Init(COLOR=[255,0,0], /PRIVATE, THICK=2)

  self._oGrOverlay->IDLmiContainer::Add, self

end





; This method is for setting class properties


pro my_polygon::SetProperty, $
  DATA=data, $
  POLYGONS=poly, $
  _REF_EXTRA=refExtra

  compile_opt idl2, logical_predicate

  if n_elements(data) then begin
    self->SetData, data, POLYGONS=poly
  endif

  if n_elements(refExtra) then begin
    self->IDLgrPolygon::SetProperty, _EXTRA=refExtra
    self->IDLmiManipLayer::SetProperty, _EXTRA=refExtra
  endif

end






; This method maps the points from window coordinates to map coordinates and

; adds the mapped points to the IDLgrPolygon.


; :Params:

;   xy: in, required, type="int"

;     A [2,n] array of points (In window coordinates) to be added to the polygon


; :Keywords:

;   POLYGONS: in, optional, type="int"

;     An integer array of one or more polygon descriptions.  See IDL help for

;     IDLgrPolygon for more information


pro my_polygon::SetData, xy, $
  POLYGONS=poly

  compile_opt idl2, logical_predicate

  self._oTargetDS->WindowToVis, reform(xy[0,*]), reform(xy[1,*]), xVis, yVis

  self->IDLgrPolygon::SetProperty, DATA=transpose([[xVis],[yVis]]), $
    POLYGONS=poly

end





; Class structure definition


pro my_polygon__define

  compile_opt idl2, logical_predicate

  void = {my_polygon                    $
    , inherits IDLmiManipGraphicOverlay $
    , inherits IDLmiManipLayer          $
    , inherits IDLgrPolygon             $
    , _oTargetDS: obj_new()             $
    }

end







pro envi_polygon_example

  compile_opt idl2, logical_predicate

  e = envi(/CURRENT)

  if ~isa(e, 'envi') then begin
    e = envi()
  endif

  file = FILEPATH('qb_boulder_msi', ROOT_DIR=e.ROOT_DIR, SUBDIRECTORY=['data'])

  eRaster = e->OpenRaster(file)

  eView = e->GetView()

  eLayer = eView->CreateLayer(eRaster)

  ; Two four-vertex polygons; the second square's coordinates were lost
  ; from the original post, so illustrative values are used here
  xy = [[470,140],[560,140],[560,230],[470,230], $
        [600,300],[690,300],[690,390],[600,390]]

  conn = [4,0,1,2,3,4,4,5,6,7]

  oPolygon = obj_new('my_polygon', xy, LINESTYLE=5, POLYGONS=conn, STYLE=1)

end






Customizing the Geospatial Services Framework with Node.js

Author: Daniel Platt

One of the greatest features of the Geospatial Services Framework (GSF), in my opinion, is the fact that it is built upon Node.js. There are many reasons why this is great, but I wanted to talk about one in particular: the amount of customization it provides.

Below I will show a simple but powerful example of this customization, but before I get there, I want to give a quick overview of both GSF and Node.js. 

Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. With that said, what is important about Node.js in the context of this blog post is that it is a powerful, scalable backend for a web application that is written in the same language almost every website uses, JavaScript.

We have improved ENVI Service Engine by adding GSF – a lightweight, but powerful framework based on Node.js that can provide scalable geospatial intelligence for any size organization. I like to describe it as a “messenger” system that provides a way to communicate between the web and our various products or “engines” that provide analytics such as ENVI, IDL and more. 

GSF by design has its Node.js code exposed. This allows the product to be customized to fit whatever architecture it needs to reside in. It is a modular system, with modules that can be toggled on and off, or even duplicated and extended. This makes customization easy and safe.

One of these modules is called a Request Handler. This module serves up endpoints for GSF. These can be simple REST-based calls, a webpage, or, even better, both.

While developing an ENVI web client demo that takes Amazon S3 hosted raster data and passes it to GSF to run analytics on, I found that I didn’t have a way to simply list what data is available in my S3 storage. While exploring ways to solve this problem, I realized that I could simply use the power of Node.js to accomplish the task.

After importing the aws-sdk package that is already installed with GSF into my request handler, I just wrote a simple function to use that package to list any .dat files in my S3 storage data and return that information to my front end web application to be ingested and displayed.

Here is the slightly modified request handler code with comments explaining each part.

//Import express, used by Node.js to set up REST endpoints
var express = require('express');
//Import AWS, used to interact with Amazon Web Services including S3 storage
var aws = require('aws-sdk');
//Extra tools that should be included in request handlers
var extend = require('util')._extend;
var defaultConfig = require('./config.json');

/**
 * Dynamic Request Handler
 */
function DynUiHandler() {
    var s3Bucket, s3;
    //Set up config options
    var config = {};
    extend(config, defaultConfig);
    //Grab information for S3 from config.json
    s3Workspace = config.S3Root;
    s3Bucket = config.S3Bucket;

    //Set up S3
    s3 = new aws.S3(config);

    /**
     * List files in the S3 bucket.
     * @param {object} req - The Express request object.
     * @param {object} res - The Express response object.
     */
    function listS3Data(req, res) {
        //Take the parameter passed from the REST call and set that as the
        //bucket to be accessed in the call to S3
        var params = {
            Bucket: req.params.bucket
        };
        //Call the S3 package's built-in listObjects function to return what
        //is available in the specified bucket
        s3.listObjects(params, function(err, data) {
            //Check if an error occurred and, if so, halt and report it to the client
            if (err) {
                var code = err.code || 500;
                var message = err.message || err;
                res.status(code).send({
                    error: message
                });
            }
            //If no error, push every object found in the response with a '.dat'
            //extension to an array that will be returned to the client
            else {
                var files = [];
                //Look at each file contained in data returned by s3.listObjects()
                data.Contents.forEach(function(file) {
                    //Search for files with .dat in the bucket requested
                    if (file.Key.indexOf('.dat') !== -1) {
                        //If found, store that file information in the files array
                        files.push(file);
                    }
                });
                //Send the files array containing metadata of all .dat files found
                res.send(files);
            }
        });
    }

    //Initialize the request handler; runs when GSF starts up
    this.init = function(app) {
        //Set up the request handler to host the html subdirectory as a webpage
        app.use('/dynui/', require('express').static(__dirname + '/html/'));
        //Set up a REST call that runs listS3Data and supplies the accompanying bucket parameter
        app.get('/s3data/:bucket', listS3Data);
    };
}
module.exports = DynUiHandler;

After restarting the server, I was able to hit my REST endpoint by pasting a URL of the form <GSF server>/s3data/mybucket into my browser (the route comes from the app.get call above; the host and port depend on your GSF configuration).

This would return me JSON of the contents of “mybucket”

Using this information I was able to give a user of my application real time information right inside of the interface about which data they have available to them. 

This is a simple example of how I customized a stock request handler to make my web demo more powerful. However, the ability to modify the source code of such a powerful system in a modular fashion provides a safe way to mold GSF into the exact form and shape you need to fit into your own architecture. 

It might just be me but I much prefer that over restructuring my own existing architecture to fit closed source code I know nothing about.





The Future of Weather Observation is Coming Soon! GOES-R

Author: Joey Griebel

If all goes according to plan, next month the GOES-R weather satellite will launch, and the next generation of weather forecasting, solar activity monitoring, and lightning detection will be here. This advanced satellite will change how quickly and accurately we are able to monitor and predict hazardous weather, and help give those in harm’s way the time needed to prepare and evacuate. The GOES-R satellite will include:

Advanced Baseline Imager (ABI) – an advanced imager with 3 times more channels, 4 times better resolution, and 5 times faster scanning than before. All of this leads to better observation of severe storms, fire, smoke, aerosols, and volcanic ash.

Geostationary Lightning Mapper – The lightning mapper will allow mapping of lightning strikes on the ground, as well as lightning in the atmosphere. Researchers have found that an increase in lightning activity may be a sign of tornadoes forming, thus providing the data to detect tornadoes faster.

Space Weather Observation – GOES-R will work with NOAA instruments to gather information on radiation hazards from the sun that can interfere with communication and navigation systems, damage satellites, and threaten power utilities.


Harris Corporation has been supporting NOAA with some aspects on the GOES-R Satellite construction and will be providing the Ground System Support, as well as the 16.4 Meter triband antenna needed to stay in touch with it.

Now what does this have to do with ENVI? Once GOES-R is operational and collecting data, ENVI will be working to support the Harris Weather group’s WXconnect systems for data validation and visualization, supporting ABI data directly in ENVI, and working with NOAA to help create advanced products in the future. It is an exciting time for NOAA with this milestone launch, and an exciting time to be working with ENVI on the advanced data that will be coming down!

Below are the baseline products, as well as some future products that can be expected.




Baseline products

Advanced Baseline Imager (ABI)
  Aerosol Detection (Including Smoke and Dust)
  Aerosol Optical Depth (AOD)
  Clear Sky Masks
  Cloud and Moisture Imagery
  Cloud Optical Depth
  Cloud Particle Size Distribution
  Cloud Top Height
  Cloud Top Phase
  Cloud Top Pressure
  Cloud Top Temperature
  Derived Motion Winds
  Derived Stability Indices
  Downward Shortwave Radiation: Surface
  Fire/Hot Spot Characterization
  Hurricane Intensity Estimation
  Land Surface Temperature (Skin)
  Legacy Vertical Moisture Profile
  Legacy Vertical Temperature Profile
  Rainfall Rate / QPE
  Reflected Shortwave Radiation: TOA
  Sea Surface Temperature (Skin)
  Snow Cover
  Total Precipitable Water
  Volcanic Ash: Detection and Height

Geostationary Lightning Mapper (GLM)
  Lightning Detection: Events, Groups & Flashes

Space Environment In-Situ Suite (SEISS)
  Energetic Heavy Ions
  Magnetospheric Electrons & Protons: Low Energy
  Magnetospheric Electrons & Protons: Med & High Energy
  Solar & Galactic Protons

Magnetometer (MAG)
  Geomagnetic Field

Extreme Ultraviolet and X-ray Irradiance Suite (EXIS)
  Solar Flux: EUV
  Solar Flux: X-ray Irradiance

Solar Ultraviolet Imager (SUVI)
  Solar EUV Imagery

Future products (ABI)
  Absorbed Shortwave Radiation: Surface
  Aerosol Particle Size
  Aircraft Icing Threat
  Cloud Ice Water Path
  Cloud Layers/Heights
  Cloud Liquid Water
  Cloud Type
  Convective Initiation
  Currents: Offshore
  Downward Longwave Radiation: Surface
  Enhanced "V" / Overshooting Top Detection
  Flood/Standing Water
  Ice Cover
  Low Cloud and Fog
  Ozone Total
  Probability of Rainfall
  Rainfall Potential
  Sea and Lake Ice: Age
  Sea and Lake Ice: Concentration
  Sea and Lake Ice: Motion
  Snow Depth (Over Plains)
  SO2 Detection
  Surface Albedo
  Surface Emissivity
  Tropopause Folding Turbulence Prediction
  Upward Longwave Radiation: Surface
  Upward Longwave Radiation: TOA
  Vegetation Fraction: Green
  Vegetation Index



Categories: ENVI Blog | Imagery Speaks





Continuum Removal

Author: Austin Coates

Recently, I was given the chance to practice some spectroscopy, and in preparation for the project I realized that I did not have a simple way to visualize the variations in absorption features between very discrete wavelengths. The method I elected to employ for this task is called continuum removal (Kokaly, Despain, Clark, & Livo, 2007). It essentially normalizes the data so that different spectra can be more easily compared with one another.

To use the algorithm, you first select the region you are interested in (for me, this was between 550 nm and 700 nm, the region of my spectra that deals with chlorophyll absorption and pigment). A linear model is then fit between the two endpoints; this line is called the continuum. The continuum represents the hypothetical background absorption onto which the target absorption features are superimposed, and it acts as the baseline the features are compared against (Clark, 1999). Once the continuum has been set, the continuum removal can be performed on all spectra in question using the following equation (Kokaly, Despain, Clark, & Livo, 2007).

RC = Ro / RL, where RC is the resulting continuum-removed spectrum, RL is the continuum line, and Ro is the original reflectance value.

Figure 1: Original spectra of two healthy plants. The dotted line denotes the continuum line. The x axis shows wavelengths in nm and the y axis represents reflectance.

Figure 2: The continuum removal for wavelengths 550 nm - 700 nm.
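The arithmetic behind the figures is compact enough to sketch outside of IDL. The Python below is an illustration only (the synthetic sloped spectrum and 10 nm sampling are made up); the full IDL program follows:

```python
import numpy as np

def continuum_removal(wavelengths, reflectance, left=550.0, right=700.0):
    """Divide the reflectance in [left, right] by the straight line
    (the continuum, RL) joining the spectrum's values at the endpoints."""
    mask = (wavelengths >= left) & (wavelengths <= right)
    wl = wavelengths[mask]
    Ro = reflectance[mask]
    # Continuum line RL through the two endpoints of the window
    slope = (Ro[-1] - Ro[0]) / (wl[-1] - wl[0])
    RL = Ro[0] + slope * (wl - wl[0])
    return wl, Ro / RL   # RC = Ro / RL

# A synthetic, featureless sloped spectrum: with no absorption feature,
# the continuum divides out and RC is ~1 everywhere.
wl = np.arange(400.0, 900.0, 10.0)
refl = 0.3 + 0.0002 * (wl - 400.0)
x, rc = continuum_removal(wl, refl)
```

A real absorption feature inside the window would show up as RC dipping below 1, which is what Figure 2 visualizes.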


The resulting code gives you a tool that will take in two spectral libraries, with one spectra per library, and return two plots similar to what is shown in Figure 1 and Figure 2.


pro Continuum_Removal

compile_opt IDL2


; Paths to the two spectral library files (one spectrum per library);
; fill in your own .sli paths here
Spectra_File_1 = ''

Spectra_File_2 = ''


; Find the bounds for the feature

FB_left = 550

FB_right =700


; Open Spectra 1

oSLI1 = ENVISpectralLibrary(Spectra_File_1)

spectra_name = oSLI1.SPECTRA_NAMES

Spectra_Info_1 = oSLI1.GetSpectrum(spectra_name)


; Open Spectra 2

oSLI2 = ENVISpectralLibrary(Spectra_File_2)

spectra_name = oSLI2.SPECTRA_NAMES

Spectra_Info_2 = oSLI2.GetSpectrum(spectra_name)


; Get the wavelengths

wl = Spectra_Info_1.wavelengths


; Create Bad Bands List (this removes some regions of the spectra associated with water vapor absorption)

bb_range = [[926,970],[1350,1432],[1796,1972],[2349,2500]]

bbl = fltarr(n_elements(wl))+1

dims = size(bb_range, /DIMENSIONS)

for i = 0 , dims[1]-1 do begin

  range = bb_range[*,i]

  p1 = where(wl eq range[0])

  p2 = where(wl eq range[1])

  bbl[p1:p2] = !VALUES.F_Nan

endfor



;Plot oSLI1 / oSLI2

p = plot(wl, Spectra_Info_1.spectrum*bbl, xrange = [min(wl, /nan),max(wl, /nan)],$

  yrange=[0,max([Spectra_Info_1.spectrum*bbl,Spectra_Info_2.spectrum*bbl], /nan)], thick = 2, color = 'blue')


p = plot(wl, Spectra_Info_2.spectrum*bbl, /overplot, thick = 2, color = 'green')


; create the linear segment

Spectra_1_y1 = Spectra_Info_1.spectrum[where( wl eq FB_left)]

Spectra_1_y2 = Spectra_Info_1.spectrum[where( wl eq FB_right)]

pl_1 = POLYLINE([FB_left,FB_right], [Spectra_1_y1, Spectra_1_y2], /overplot, /data, thick = 2, LINESTYLE = '--')

Spectra_2_y1 = Spectra_Info_2.spectrum[where( wl eq FB_left)]

Spectra_2_y2 = Spectra_Info_2.spectrum[where( wl eq FB_right)]

pl_2 = POLYLINE([FB_left,FB_right], [Spectra_2_y1, Spectra_2_y2], /overplot, /data, thick = 2, LINESTYLE = '--')


; Get the equation of the line

LF_1 = LINFIT([FB_left,FB_right], [Spectra_1_y1, Spectra_1_y2])

LF_2 = LINFIT([FB_left,FB_right], [Spectra_2_y1, Spectra_2_y2])


; Get the values between the lower and upper bounds

x_vals = wl [ where(wl eq FB_left) : where(wl eq FB_right)]


; Compute the continuum line

RL_1 = LF_1[0] + LF_1[1]* x_vals

RL_2 = LF_2[0] + LF_2[1]* x_vals


; Perform Continuum Removal

Ro_1 = Spectra_Info_1.spectrum[ where(wl eq FB_left) : where(wl eq FB_right)]

RC_1 =  Ro_1 / RL_1

Ro_2 = Spectra_Info_2.spectrum[ where(wl eq FB_left) : where(wl eq FB_right)]

RC_2 = Ro_2 / RL_2


; Plot the new Continuum Removal Spectra

pl_RC_1 = plot(x_vals, RC_1, color = 'Blue', xrange = [min(x_vals, /NAN), max(x_vals, /NAN)] )

pl_RC_2 = plot(x_vals, RC_2, color = 'Green', /overplot)

end




Kokaly, R. F., Despain, D. G., Clark, R. N., & Livo, K. E. (2007). Spectral analysis of absorption features for mapping vegetation cover and microbial communities in Yellowstone National Park using AVIRIS data.

Clark, R. N. (1999). Spectroscopy of rocks and minerals, and principles of spectroscopy. Manual of Remote Sensing, 3, 3-58.


Categories: IDL Blog | IDL Data Point

























© 2017 Exelis Visual Information Solutions, Inc., a subsidiary of Harris Corporation