1 Dec 2016

Optimizing Max Kernel Operation in IDL

Author: Atle Borsholm

I found this optimization question on the comp.lang.idl-pvwave newsgroup and decided to give it a try. The question was how to implement an algorithm that replaces every element in a 2-D array with its neighborhood maximum value. In this case the neighborhood size was 101x101 (i.e., -50 to +50 pixels in each direction), and the array size was 3200x3248. The nested FOR loop approach looks like the following code snippet.

  data = randomu(seed,3200,3248)
  dim = size(data,/dimension)
  nx = dim[0]
  ny = dim[1]
 
  t0 = tic('Nested FOR')
  result2 = data
  for i=0,nx-1 do begin
    for j=0,ny-1 do begin
      result2[i,j] = max(data[(i-50)>0:(i+50)<(nx-1),(j-50)>0:(j+50)<(ny-1)])
    endfor
  endfor
  toc,t0
 

My first thought was to use the > operator, which returns the element-wise maximum of its 2 arguments. It operates on arrays, and in conjunction with the SHIFT function it returns the larger of 2 neighbors. The other trick is that, since we are looking for a 101x101 neighborhood maximum, we can combine smaller neighborhood maxima in a structured way to build up the exact 101x101 neighborhood size.
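As a minimal 1-D sketch of the cascading idea (the array contents and the index tested here are arbitrary), each pass combines an array with two shifted copies of itself, so a width-3 window grows to width 9:

  a = randomu(seed, 20)
  m3 = shift(a,1) > a > shift(a,-1)      ; width-3 running max (offsets -1..+1)
  m9 = shift(m3,3) > m3 > shift(m3,-3)   ; width-9 running max (offsets -4..+4)
  print, m9[10] eq max(a[6:14])          ; prints 1: matches a direct 9-wide max

SHIFT wraps around at the array edges, which is why the full version first embeds the data in a larger array padded with min(data). The code that I ended up with after some trial and error was the following.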

  t0 = tic('Iterative >')
  ; Using SHIFT and > in an iterative way; pad with the array minimum so
  ; that SHIFT's wrap-around values never win a comparison
  padded = replicate(min(data),size(data,/dimension)+100)
  padded[50,50] = data
  ; Each step widens the neighborhood maximum along X:
  tmp3 = shift(padded,1,0) > padded > shift(padded,-1,0)   ; offsets +/-1, width 3
  tmp9 = shift(tmp3,3,0) > tmp3 > shift(tmp3,-3,0)         ; offsets +/-4, width 9
  tmp27 = shift(tmp9,9,0) > tmp9 > shift(tmp9,-9,0)        ; offsets +/-13, width 27
  tmp81 = shift(tmp27,27,0) > tmp27 > shift(temporary(tmp27),-27,0)            ; offsets +/-40, width 81
  tmp99 = shift(tmp9,44,0) > temporary(tmp81) > shift(temporary(tmp9),-44,0)   ; offsets +/-48 (width 97, despite the name)
  tmp101 = shift(tmp3,49,0) > temporary(tmp99) > shift(temporary(tmp3),-49,0)  ; offsets +/-50, width 101
 
  ; Same cascade for the Y dimension
  tmp3 = shift(tmp101,0,1) > tmp101 > shift(temporary(tmp101),0,-1)
  tmp9 = shift(tmp3,0,3) > tmp3 > shift(tmp3,0,-3)
  tmp27 = shift(tmp9,0,9) > tmp9 > shift(tmp9,0,-9)
  tmp81 = shift(tmp27,0,27) > tmp27 > shift(temporary(tmp27),0,-27)
  tmp99 = shift(tmp9,0,44) > temporary(tmp81) > shift(temporary(tmp9),0,-44)
  tmp101 = shift(tmp3,0,49) > temporary(tmp99) > shift(temporary(tmp3),0,-49)
 
  ; Trim the padding back off
  result1 = (temporary(tmp101))[50:50+nx-1,50:50+ny-1]
  toc,t0
 

I never said that optimized code looks pretty, but the goal here is to run fast. Next, add a comparison check to make sure the two results are equivalent.

 
  print, array_equal(result1,result2) ? 'Results are matching' : 'SOMETHING went wrong'
 

Finally, here are the timing results, which show an impressive speed-up: the execution time went from 189.4 seconds to 1.4 seconds, roughly a 130x improvement, and the results are identical:

  % Time elapsed Nested FOR: 189.39470 seconds.
  % Time elapsed Iterative >: 1.4241931 seconds.
  Results are matching
 
 
 


17 Nov 2016

New in ENVI 5.4: ENVITask Returning a Virtual Raster

Author: Brian Griglak

Since the introduction of custom ENVITasks in ENVI 5.2 SP1, there has been the requirement that your procedure commit all output objects to disk. The procedure wrapped by the task had to have an input keyword telling it the filename to use when writing the object. In the task template, you would have an output parameter that mapped to that keyword, and during task loading the framework would magically create an input parameter for you, mapped to the same keyword, with TYPE set to "ENVIURI".

The automatic creation of the input parameter and the internal correlation of the two parameters were done with the best of intentions, to simplify the process of creating custom tasks. Alas, the user feedback on this feature wasn't always as rosy as we had hoped.

So in ENVI 5.4 we're shaking things up and giving you, the task developer, more control. If you still use the "version" property in the task template, and have it set to "5.2.1", "5.3", or "5.3.1", then you'll get the old behavior. But if you switch to using "schema" set to the value "envitask_3.0", then a new set of rules applies to the procedure and what you can do inside it. In the new paradigm, your procedure has separate keywords for the input filename and the output object reference. If you like, you can skip the filename keyword completely and return an object that hasn't been tethered to disk at all. This makes life much easier for types like ENVIGCPSet and ENVITiePointSet, but it also allows for a procedure that constructs a complex virtual raster chain based on the other input parameters, as sketched below.
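For illustration, here is a minimal sketch of a new-style procedure (the task name and the gain values are hypothetical, not part of the shipping API); the output keyword simply receives an object reference, and no filename keyword is required:

pro my_virtual_output_task, INPUT_RASTER=raster, OUTPUT_RASTER=outRaster
  compile_opt idl2, hidden
  ; Hypothetical example: return a virtual raster that doubles every pixel
  ; value, without ever committing anything to disk
  gains = fltarr(raster.NBANDS) + 2.0
  offsets = fltarr(raster.NBANDS)
  outRaster = ENVIGainOffsetRaster(raster, gains, offsets)
end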

You might be asking why you would want a task to return a virtual raster. With the new Classification Framework that is part of ENVI 5.4, you need to prepare the data you run through a trained classifier the same way you prepared your training data. One way to do this is to create a task that returns the preprocessed data in a consistent way. If you can get away with not saving that preprocessed attribute raster to disk, why not take advantage of the time and space savings of a virtual raster?

The following example is a modified version of the new Custom Task Tutorial in the ENVI 5.4 release.  That task wraps a procedure that uses a number of other tasks internally to perform all the preprocessing.  Here I’ve modified it to use virtual rasters and other ENVI API function calls to avoid ever writing a file to disk.

The code goes through a number of steps, which I will describe after showing it:

pro SentinelVegIndices_blog, INPUT_RASTER_10M=raster10m, $
                             INPUT_RASTER_20M=raster20m, $
                             OUTPUT_RASTER=outputRaster
  compile_opt idl2, hidden
 
  ; Get the spatial reference of the 10-meter raster
  spatialRef = raster10m.SPATIALREF
  coordSys = ENVICoordSys(COORD_SYS_CODE=spatialRef.COORD_SYS_CODE)
 
  ; Create a spatial grid definition
  grid = ENVIGridDefinition(coordSys, $
                            PIXEL_SIZE=spatialRef.PIXEL_SIZE, $
                            NCOLUMNS = raster10m.NCOLUMNS, $
                            NROWS = raster10m.NROWS, $
                            TIE_POINT_MAP=spatialRef.TIE_POINT_MAP, $
                            TIE_POINT_PIXEL = spatialRef.TIE_POINT_PIXEL)
 
  ; Regrid the 20-meter bands to 10 meters
  regriddedRaster = ENVISpatialGridRaster(raster20m, GRID_DEFINITION=grid)
 
  ; Create a layer stack
  layerStack = ENVIMetaspectralRaster([raster10m, regriddedRaster], $
                                      SPATIALREF=spatialRef)
 
  ; Compute image statistics
  stats = ENVIRasterStatistics(layerStack)
 
  ; Perform dark subtraction as an alternative to atmospheric correction
  bandRasters = ObjArr(layerStack.nBands)
  for i = 1, layerStack.nBands do begin
    expression = 'b' + i.ToString() + ' - ' + stats.Min[i-1].ToString()
    bandRasters[i-1] = ENVIPixelwiseBandMathRaster(layerStack, expression)
  endfor
  bandStackRaster = ENVIMetaspectralRaster(bandRasters, $
                                           SPATIALREF=spatialRef)
 
  ; we need to put the wavelengths back into the band stack,
  ; they were removed by the band math
  metadata = ENVIRasterMetadata()
  metadata['wavelength'] = layerStack.Metadata['wavelength']
  metadata['wavelength units'] = layerStack.Metadata['wavelength units']
  correctedRaster = ENVIMetadataOverrideRaster(bandStackRaster, $
                                               METADATA_OVERRIDE=metadata)
 
  ; Scale pixel values from 0 to 1
  gains = MAKE_ARRAY(layerStack.NBANDS, /FLOAT, VALUE=0.0001)
  offsets = FLTARR(layerStack.NBANDS)
  scaledRaster = ENVIGainOffsetRaster(correctedRaster, gains, offsets)
 
 
  ; Create vegetation indices
  indices = [ 'Enhanced Vegetation Index', $
              'Global Environmental Monitoring Index', $
              'Leaf Area Index', $
              'Plant Senescence Reflectance Index', $
              'Red Edge Normalized Difference Vegetation Index' ]
  outputRaster = ENVISpectralIndexRaster(scaledRaster, indices)
end

The first step is to upsample the 20m raster to 10m pixels, which in the tutorial is performed using the ENVIRegridRasterTask.  This can be done with the ENVISpatialGridRaster virtual raster, once we have constructed an ENVIGridDefinition with the appropriate mixing of properties from the 10m and 20m rasters.

Next, the tutorial uses the ENVIBuildBandStackTask to build a metaspectral stack of all the now-10m bands.  Here we use the ENVIMetaspectralRaster virtual raster, though we have to pass in the original 10m raster's spatial reference to keep the result registered on the map.

Next come the image statistics needed for the dark subtraction.  The tutorial uses the ENVIRasterStatisticsTask, but here we just use the API function ENVIRasterStatistics() to accomplish the same thing.

The band minimum values are then used to perform the dark-object subtraction.  The tutorial uses the ENVIDarkSubtractionCorrectionTask, which handles this as a single raster.  Here I had to build a separate band math expression for each band, because the ENVIPixelwiseBandMathRaster virtual raster always returns a single-band output; I then metaspectrally stack the results, again passing in the spatial reference.

One pitfall of the band math is that it removes most of the spectral metadata, so I have to put the wavelength metadata back into the raster so the spectral index calculations select the correct bands.  I do this with ENVIMetadataOverrideRaster(), using a copy of the original metaspectral raster’s metadata values.

The raster is then scaled down by a factor of 10000.0 with ENVIGainOffsetRaster, to better approximate atmospheric correction and produce more accurate spectral index values.  Lastly, the scaled raster is passed into ENVISpectralIndexRaster to calculate 5 different spectral indices.

Once this has been written, we can use an almost identical .task file to wrap the procedure.  The main differences are that we no longer need to specify the OUTPUT_URI parameter, and that the procedure name is slightly different.  This task's output raster is never committed to disk, but it can still be used as input to another task by calling ENVIRaster::Dehydrate() on it, which yields a 25KB JSON representation.
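As a hedged sketch of that round trip (the variable names are mine, and this assumes the task file below has been installed):

; Run the custom task, then dehydrate its virtual output raster so it can
; be passed along, stored, or rebuilt later
task = ENVITask('SentinelVegIndices_blog')
task.INPUT_RASTER_10M = raster10m
task.INPUT_RASTER_20M = raster20m
task.Execute
rasterDef = task.OUTPUT_RASTER.Dehydrate()   ; hash describing the raster chain
restored = ENVIHydrate(rasterDef)            ; reconstructs the virtual raster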

Here is the updated .task file:

{
  "name": "SentinelVegIndices_blog",
  "base_class": "ENVITaskFromProcedure",
  "routine": "SentinelVegIndices_blog",
  "display_name": "Compute Sentinel-2A Vegetation Indices",
  "description": "This task regrids the visible and near-infrared bands of a Sentinel-2A Level-1C 20-meter image to 10 meters. It creates a layer stack of all bands. It applies dark-object subtraction as a simple alternative to atmospheric correction. It scales the reflectance pixel values from 0 to 1, then computes a select group of vegetation indices.",
  "schema": "envitask_3.0",
  "parameters": [
    {
    "name": "INPUT_RASTER_10M",
    "display_name": "Select a 10-meter image",
    "type": "ENVIRASTER",
    "direction": "input",
    "required": true,
    "description": "Select a Sentinel Level-1C 10-meter image."
    },
    {
    "name": "INPUT_RASTER_20M",
    "display_name": "Select a 20-meter image",
    "type": "ENVIRASTER",
    "direction": "input",
    "required": true,
    "description": "Select a Sentinel Level-1C 20-meter image."
    },
    {
    "name": "OUTPUT_RASTER",
    "display_name": "Output image",
    "type": "ENVIRASTER",
    "direction": "output",
    "required": true,
    "description": "This is a reference to the output raster."
    }
  ]
}


14 Nov 2016

Using Deep Learning for Feature Extraction

Author: Barrett Sather

In August, I talked about how to pull features out of images using known spatial properties of an object. Specifically, in that post, I used rule-based feature extraction to pull stoplights out of an image.

Today, I'd like to look into a new way of doing feature extraction using deep learning technology. With our deep learning tools developed in house, we can use examples of target data to find similar objects in other images.

In order to train the system, we will need 3 different kinds of examples so that the deep learning network can learn what to look for: target, non-target, and confusers. These examples are patches cut out of similar images, and the patches are all the same size; for this exercise, I've picked a size of 50 by 50 pixels.
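As a minimal sketch of cutting one such 50 by 50 patch out of an image in IDL (the file name and patch center are hypothetical):

  sz = 50                               ; patch size in pixels
  img = read_image('street_scene.jpg')  ; hypothetical RGB image, [3, w, h]
  cx = 300 & cy = 200                   ; hypothetical patch center
  patch = img[*, cx-sz/2 : cx+sz/2-1, cy-sz/2 : cy+sz/2-1]
  write_image, 'patch_0001.png', 'PNG', patch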

The first patch type is actual target data; I'll be looking for illuminated traffic lights. For the model to work well, we'll need different kinds of traffic signals, lighting conditions, and camera angles. This will help the network generalize what the object looks like.

Next, we’ll need negative data, or data that does not contain the object. This will be the areas surrounding the target, and other features that will possibly appear in the background of the image. In our case for traffic lights, this will include cars, streetlights, road signs, foliage, and others.

For the final patch type, I went through some images and marked things that might confuse the system. These are called confusers, and they are objects with a similar size, color, and/or shape to the target. In our case, this could be other signals like red arrows or a "don't walk" hand. I've also included some bright road signs and a distant stop sign.

Once we have all of these patches, we can use our machine learning tool known as MEGA to train a neural network that can be used to identify similar objects in other images. 

Do note that I created many more patches than just the ones displayed. With more examples, and more diverse examples, MEGA has a better chance of accurately classifying target vs. non-target in an image.

In our case here, we'll only have three possible outcomes as we look through the image: the light, non-light, and looks-like-a-light classes. If you have many different objects in your scene, you can even get something more like a classification image, as MEGA can identify as many objects in an image as you like. If we wanted to extend the idea here, we could look for red lights, green lights, street lights, lane markers, or other cars. (This is a simple example of how deep learning could be used in autonomous cars!)

To learn more about MEGA and what it can do in your analysis stack, contact our custom solutions group for more details! In my next post, we'll look at the output from the trained neural network and analyze the results.


28 Oct 2016

ENVI: Adding custom polygons to the display

Author: Daryl Atencio

A recent project I worked on required custom polygons, controlled by my application, to be added to the ENVI display.  The following code defines an object class that lets the user place a polygon in the ENVI display using window coordinates.  To run the application:

  1. Save the code that follows to a file named envi_polygon_example.pro
  2. Open and compile the file in the IDLDE
  3. Execute the following at the IDL command prompt:

     envi_polygon_example

;------------------------------------------------------------------------------
;+
; Lifecycle method for destroying the object
;-
pro my_polygon::Cleanup
  compile_opt idl2, logical_predicate
  self->Destruct
end

;------------------------------------------------------------------------------
;+
; Cleans up the member variables of the object class
;-
pro my_polygon::Destruct
  compile_opt idl2, logical_predicate
  self->IDLmiManipGraphicOverlay::Cleanup
  self->IDLmiManipLayer::Cleanup
  self->IDLgrPolygon::Cleanup
end

;------------------------------------------------------------------------------
;+
; Lifecycle method for initializing the class
;
; :Returns:
;   1 if the object initializes successfully and 0 otherwise
;
; :Params:
;   xy: in, optional, type="int"
;     a [2,n] array containing the [x,y] points in window coordinates
;
; :Keywords:
;   POLYGONS: in, optional, type="int"
;     An integer array of one or more polygon descriptions.  See IDL help for
;     IDLgrPolygon for more information
;   _REF_EXTRA: Used to set properties of the inherited IDLgrPolygon
;-
function my_polygon::Init, xy, $
  POLYGONS=poly, $
  _REF_EXTRA=refExtra
  compile_opt idl2, logical_predicate
  void = self->IDLmiManipGraphicOverlay::Init(_EXTRA=refExtra)
  if ~void then begin
    return, 0
  endif
  self->InitializeDataspace
  self->SetupManipulatorGraphics
  self->InitializeGraphics
  if n_elements(xy) then begin
    self->SetProperty, DATA=xy, POLYGONS=poly
  endif
  if n_elements(refExtra) then begin
    self->SetProperty, _EXTRA=refExtra
  endif
  return, 1
end

;------------------------------------------------------------------------------
;+
; This method initializes the data space used by the graphics layer
;-
pro my_polygon::InitializeDataspace
  compile_opt idl2, logical_predicate
  e = envi(/CURRENT)
  eView = e->GetView()
  eView->GetProperty, _COMPONENT=ecfViewGroup
  oDS = ecfViewGroup->GetDescendants(BY_TYPE='DATASPACE', /FIRST_ONLY)
  self._oTargetDS = oDS
end

;------------------------------------------------------------------------------
;+
; Initializes the graphics components of the class
;-
pro my_polygon::InitializeGraphics
  compile_opt idl2, logical_predicate
  void = self->IDLgrPolygon::Init(COLOR=[255,0,0], /PRIVATE, THICK=2)
  self._oGrOverlay->IDLmiContainer::Add, self
end

;------------------------------------------------------------------------------
;+
; This method is for setting class properties
;-
pro my_polygon::SetProperty, $
  DATA=data, $
  POLYGONS=poly, $
  _REF_EXTRA=refExtra
  compile_opt idl2, logical_predicate
  if n_elements(data) then begin
    self->SetData, data, POLYGONS=poly
  endif
  if n_elements(refExtra) then begin
    self->IDLgrPolygon::SetProperty, _EXTRA=refExtra
    self->IDLmiManipLayer::SetProperty, _EXTRA=refExtra
  endif
end

;------------------------------------------------------------------------------
;+
; This method maps the points from window coordinates to map coordinates and
; adds the mapped points to the IDLgrPolygon.
;
; :Params:
;   xy: in, required, type="int"
;     A [2,n] array of points (in window coordinates) to be added to the polygon
;
; :Keywords:
;   POLYGONS: in, optional, type="int"
;     An integer array of one or more polygon descriptions.  See IDL help for
;     IDLgrPolygon for more information
;-
pro my_polygon::SetData, xy, $
  POLYGONS=poly
  compile_opt idl2, logical_predicate
  self._oTargetDS->WindowToVis, reform(xy[0,*]), reform(xy[1,*]), xVis, yVis
  self->IDLgrPolygon::SetProperty, DATA=transpose([[xVis],[yVis]]), $
    POLYGONS=poly
end

;------------------------------------------------------------------------------
;+
; Class structure definition
;-
pro my_polygon__define
  compile_opt idl2, logical_predicate
  void = {my_polygon                    $
    , inherits IDLmiManipGraphicOverlay $
    , inherits IDLmiManipLayer          $
    , inherits IDLgrPolygon             $
    }
end

;------------------------------------------------------------------------------
;+
; Example routine: opens a raster in ENVI and overlays two custom polygons
;-
pro envi_polygon_example
  compile_opt idl2, logical_predicate
  e = envi(/CURRENT)
  if ~isa(e, 'envi') then begin
    e = envi()
  endif
  file = FILEPATH('qb_boulder_msi', ROOT_DIR=e.ROOT_DIR, SUBDIRECTORY=['data'])
  eRaster = e->OpenRaster(file)
  eView = e->GetView()
  eLayer = eView->CreateLayer(eRaster)
  xy = [[470,140],[560,140],[560,230],[470,230], $
    [750,115],[1200,115],[1200,665],[750,665]]
  conn = [4,0,1,2,3,4,4,5,6,7]
  oPolygon = obj_new('my_polygon', xy, LINESTYLE=5, POLYGONS=conn, STYLE=1)
end


27 Oct 2016

Customizing the Geospatial Services Framework with Node.js

Author: Daniel Platt

One of the greatest features of the Geospatial Services Framework (GSF), in my opinion, is the fact that it is built on Node.js. There are many reasons why this is great, but I want to focus on one in particular: the amount of customization it provides.

Below I will show a simple but powerful example of this customization, but before I get there, I want to give a quick overview of both GSF and Node.js. 

Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. With that said, what is important about Node.js in the context of this blog post is that it is a powerful, scalable backend for a web application that is written in the same language almost every website uses, JavaScript.

We have improved ENVI Service Engine by adding GSF, a lightweight but powerful framework based on Node.js that can provide scalable geospatial intelligence for any size of organization. I like to describe it as a "messenger" system: it provides a way to communicate between the web and the various products, or "engines", that provide analytics, such as ENVI and IDL.

GSF by design has its Node.js code exposed. This allows the product to be customized almost without limit to fit whatever architecture it needs to reside in. It is a modular system, with different modules that can be toggled on and off, or even duplicated and extended. This makes customization easy and safe.

One of these modules is called a Request Handler. This module serves up endpoints for GSF. These can be simple REST-based calls, a web page, or, even better, both.

While developing an ENVI web client demo that passes Amazon S3-hosted raster data to GSF to run analytics on, I found that I didn't have a simple way to list what data was available in my S3 storage. While exploring ways to solve this problem, I realized that I could simply use the power of Node.js to accomplish the task.

After importing the aws-sdk package, which is already installed with GSF, into my request handler, I wrote a simple function that uses the package to list any .dat files in my S3 storage and return that information to my front-end web application to be ingested and displayed.

Here is the slightly modified request handler code with comments explaining each part.


//Import express, used by node.js to setup rest endpoints
var express = require('express');
//Import AWS, used to interact with Amazon Web Services including S3 storage
var aws = require('aws-sdk');
//Extra tools that should be included in request handlers
var extend = require('util')._extend;
var defaultConfig = require('./config.json');

/**
 * Dynamic Request Handler
 */

function DynUiHandler() {
    var s3Bucket, s3Workspace, s3;
    //Setup config options
    var config = {};
    extend(config, defaultConfig);
    //Grab information for S3 from config.json
    s3Workspace = config.S3Root;
    s3Bucket = config.S3Bucket;

    // Setup S3
    s3 = new aws.S3(config);


    /**
     * List files in the s3 bucket.
     * @param {object} req - The Express request object.
     * @param {object} res - The Express response object.
     */
    function listS3Data(req, res) {
        //Take the parameter passed from the REST call and set that as the bucket to be accessed.
        var params = {
            Bucket: req.params.bucket
        };
        //Call the S3 package's built-in listObjects function to return what is available in the specified bucket
        s3.listObjects(params, function(err, data) {
            //Check if an error occurred and, if so, halt and report it to the client.
            if (err) {
                var code = err.code || 500;
                var message = err.message || err;
                res.status(code).send({
                    error: message
                });
            }
            //If no error, push every object found in the response with a '.dat'
            //extension to an array that will be returned to the client.
            else {
                var files = [];
                //Look at each file contained in the data returned by s3.listObjects()
                data.Contents.forEach(function(file) {
                    //Search for files with a .dat extension in the requested bucket
                    if (file.Key.endsWith('.dat')) {
                        //If found, store that file's information in the files array.
                        files.push(file);
                    }
                });
                //Send the files array containing metadata for all .dat files found.
                res.send(files);
            }
        });
    }

    //Initialize request handler, run when GSF starts up.
    this.init = function(app) {
        // Set up request handler to host the html subdirectory as a webpage.
        app.use('/dynui/', require('express').static(__dirname + '/html/'));
        // Set up a rest call that runs listS3Data and supplies the accompanying bucket parameter.
        app.get('/s3data/:bucket', listS3Data);
    };
}
module.exports = DynUiHandler;

After restarting the server, I was able to hit my REST endpoint by pasting the following into my browser:
http://localhost:9191/s3data/mybucket

This returns JSON describing the contents of "mybucket".

 
Using this information, I was able to give users of my application real-time information, right inside the interface, about which data is available to them.
 

This is a simple example of how I customized a stock request handler to make my web demo more powerful. The ability to modify the source code of such a powerful system in a modular fashion provides a safe way to mold GSF into the exact form and shape you need to fit your own architecture.

It might just be me, but I much prefer that to restructuring my own existing architecture to fit closed-source code I know nothing about.

