9 Feb 2017

First Valley Shadow Detection

Author: Austin Coates

Shadows can be difficult to deal with when introduced into an analysis pipeline. There are many ways to deal with them, but I’ve found that one of the easiest is simply to mask them out. I work primarily with agricultural data, and row-planted crops in particular. In this type of data, the shadows are cast on the bare earth between rows, so they can be masked out without much heartache, assuming that your target material is the vegetation itself. Depending on what type of imagery is being used, shadows are normally among the darkest, if not the darkest, surfaces in the image. This assumption can be used to our advantage: in a histogram of the imagery, the shadows should fall at the lower end.

For example, in this image of an orange orchard, the shadows are the darkest feature and thus fall at the bottom of the histogram.

Figure 1: NIR image of orange orchard

Figure 2: Histogram of figure 1

By looking at the histogram, you can see that there is a distinctive valley between the two major features. Using the NIR First Valley Method (Jan-Chang, Yi-Ta, Chaur-Tzuhn, & Shou-Tsung, 2016), you can isolate that local minimum and then use it as the upper bound of a threshold for shadow masking. The local minimum can be derived easily using the method presented in the support document entitled “Finding multiple local max/min values for 2D plots with IDL”.

Figure 3: Shadows displayed in red

Figure 4: First valley identified
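Before the full IDL listing below, the core of the method can be sketched in a few lines of Python (a minimal NumPy sketch; the function name and the smoothing width are my own choices, not part of the published method):

```python
import numpy as np

def first_valley(counts, smooth=7):
    """Return the index of the first valley in a 1-D histogram:
    walk to the first local maximum (the shadow peak), then to the
    first local minimum after it. A boxcar average suppresses noise."""
    kernel = np.ones(smooth) / smooth
    c = np.convolve(np.asarray(counts, dtype=float), kernel, mode="same")
    i = 1
    while i < len(c) - 1 and not (c[i - 1] < c[i] >= c[i + 1]):
        i += 1  # still climbing toward the first peak
    while i < len(c) - 1 and not (c[i - 1] > c[i] <= c[i + 1]):
        i += 1  # descending; stop at the first local minimum
    return i
```

Pixels whose values fall at or below the bin at that index would then be flagged as shadow.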

function local_max_finder, datax, datay, minima = minima
  compile_opt idl2
 
  ;initialize list
  max_points = list()
 
  data_x = datax
  data_y = datay
  
  ;check for keyword, flip the sign of the y values
  if keyword_set(minima) then data_y = -datay
 
  ;iterate through elements
  for i=1, n_elements(data_y)-2 do begin
    ;previous point less than i-th point and next point less than i-th point
    if ( (data_y[i-1] le data_y[i]) AND (data_y[i] ge data_y[i+1])) then max_points.add, i
  endfor
 
  ;return an array of the indices where the extrema occur
  return, max_points.toarray()
 
end

        
pro ShadowFinder
  compile_opt IDL2
 
  ; Select your image
  e = envi(/current)
  oRaster = e.UI.SelectInputData(/Raster, bands = bands)
 
  ; Check for single band
  if N_ELEMENTS(bands) gt 1 then begin
    MESSAGE, 'Input raster may only contain 1 band', /INFORMATIONAL
    return
  endif
 
  ; Pull the data out of the image
  data = oRaster.GetData(bands = bands, pixel_state = pixel_state)
 
  ; Byte-scale, then convert to float so background pixels can be set to NaN
  data = float(bytscl(data))
 
  ; Mask out background pixels flagged in pixel_state
  bkgrd_pos = where(pixel_state ne 0, n_bkgrd)
  if (n_bkgrd gt 0) then data[bkgrd_pos] = !VALUES.F_NAN
 
  ; find all local minima based off the histogram of the image
  h = histogram(data, LOCATIONS=xbin)
 
  ; Remove zero values
  non_zeros = where(h ne 0)
  h = h[non_zeros]
  xbin = xbin[non_zeros]
 
  ; Get the number of points
  n_pts = n_elements(h)
 
  ; Smooth data (7-point moving average); average the raw counts so that
  ; already-smoothed values do not feed back into the filter
  boxcar = 7
  p1 = (boxcar - 1) / 2
  h_raw = h
  for i = 0, n_pts-1 do begin
    pos = [i-p1:i+p1]
    pos = pos[where((pos ge 0) and (pos le n_pts-1))]
    h[i] = mean(h_raw[pos])
  endfor
 
  
  MINIMA = local_max_finder(xbin, h, /MINIMA)
  x_extrema = xbin[MINIMA]
  y_extrema = h[MINIMA]
 
  ; Plot all local minima
  p = plot(xbin, h)
  p3 = scatterplot(x_extrema, y_extrema, /current, /overplot, $
    symbol = 'o', sym_color = 'b', sym_thick = 2)
 
  ; Create a shadow mask
  mask = bytarr(oRaster.NCOLUMNS, oRaster.NROWS)
  mask[where(data le x_extrema[0])] = 1
 
  ; create a new metadata object
  metadata = ENVIRasterMetadata()
  metadata.AddItem, 'classes', 2
  metadata.AddItem, 'class names', ['Background', 'Shadow']
  metadata.AddItem, 'class lookup', [[0,0,0],[255,0,0]]
  metadata.AddItem, 'data ignore value', 0
  metadata.AddItem, 'band names', 'Shadows'
 
  ; Create a classification image
  oClass = e.CreateRaster(e.GetTemporaryFileName(), mask, SPATIALREF = oRaster.SPATIALREF, $
    data_type = 1, metadata = metadata)
  oClass.save
 
  ; Add the new class image to envi
  e.data.add, oClass
 
 
end

References

Jan-Chang, C., Yi-Ta, H., Chaur-Tzuhn, C., & Shou-Tsung, W. (2016). Evaluation of Automatic Shadow Detection Approaches Using ADS-40 High Radiometric Resolution Aerial Images at High Mountainous Region. Journal of Remote Sensing & GIS.

 

 


Categories: IDL Blog | IDL Data Point


2 Feb 2017

Base 60 encoding of positive floating point numbers in IDL

Author: Atle Borsholm

Here is an example of representing numbers efficiently using a restricted set of symbols. I am using a set of 60 symbols (characters) to encode floating-point numbers as strings of any selected length. The longer the strings, the more precise the numbers can be.

Here is an example of such a representation; it is restricted to positive numbers in order to keep the example short.
 
IDL> a=[14.33, 3.1415, 12345]
IDL> a
       14.330000       3.1415000       12345.000
IDL> base60(a)
FotV*
FdiDx
HdzS*
IDL> base60(a, precision=8)
FotV**aO
FdiDx*^c
HdzS****
IDL> base60(base60(a)) - a
 -4.5533356836102712e-006 -4.6258149324351905e-006    -0.016666666666424135
IDL> base60(base60(a, precision=8)) - a
 -9.2104102122902987e-012 -4.6052051061451493e-013 -7.7159711509011686e-008
 
In this example, it can be seen that the 5-digit representations are not as close to the original numbers as the 8-digit representations.
 
The code example for the base60 function is listed below.
;
; Converts from a numeric type to a base 60 representation
; Converts from a base 60 string to a floating point representation
; PRECISION is only used to determine how many symbols to use when encoding,
; and is ignored for decoding.
function Base60, input, precision=precision
  compile_opt idl2,logical_predicate
 
  ; set default precision of 5 digits for encoding only
  if ~keyword_set(precision) then precision = 5
 
  ; base 60 symbology
  symbols = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*'
  base = strlen(symbols)
 
  ; fast conversion from symbol to value
  lut = bytarr(256)
  lut[byte(symbols)] = bindgen(base)
 
  if isa(input, /string) then begin
    ; convert from base60 string to float
    ; find exponent first
    scale = replicate(double(base),n_elements(input)) ^ $
      (lut[byte(strmid(input,0,1))] - base/2)
    res = dblarr(n_elements(input))
    for i=max(strlen(input))-1,1,-1 do begin
      dig = lut[byte(strmid(input,i,1))]
      res += dig
      res /= base
    endfor
    res *= scale
  endif else begin
    ; convert from float to base60 strings
    ; encode exponent(scale) first
    ex = intarr(n_elements(input))
    arr = input
    dbase = double(base)
    repeat begin
      dec = fix(arr ge 1)
      ex += dec
      arr *= dbase ^ (-dec)
      inc = fix(arr lt 1/dbase)
      ex -= inc
      arr *= dbase ^ inc
    endrep until array_equal(arr lt 1 and arr ge 1/dbase,1b)
    if max(ex) ge base/2 || min(ex) lt -base/2 then begin
      message, 'Number is outside representable range'
    endif
    bsym = byte(symbols)
    res = string(bsym[reform(ex+base/2,1,n_elements(ex))])
    for i=1,precision-1 do begin
      arr *= base
      fl = floor(arr)
      arr -= fl
      res += string(bsym[reform(fl,1,n_elements(fl))])
    endfor
  endelse
  return, res
end
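For comparison outside IDL, here is a minimal Python sketch of the same scheme (scalars only, positive numbers only; the function names are my own):

```python
SYMBOLS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*"
BASE = len(SYMBOLS)  # 60

def encode60(x, precision=5):
    """One biased-exponent symbol, then precision-1 base-60 fraction digits."""
    exp = 0
    while x >= 1:            # normalize x into [1/BASE, 1)
        x /= BASE
        exp += 1
    while x < 1 / BASE:
        x *= BASE
        exp -= 1
    out = SYMBOLS[exp + BASE // 2]   # biased exponent symbol
    for _ in range(precision - 1):
        x *= BASE                    # peel off one base-60 digit at a time
        d = int(x)
        x -= d
        out += SYMBOLS[d]
    return out

def decode60(s):
    """Invert encode60: rebuild the fraction, then apply the exponent."""
    exp = SYMBOLS.index(s[0]) - BASE // 2
    frac = 0.0
    for ch in reversed(s[1:]):
        frac = (frac + SYMBOLS.index(ch)) / BASE
    return frac * BASE ** exp
```

As in the IDL version, each extra symbol adds roughly log2(60), or about 5.9 bits, of precision.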
 
 


Categories: IDL Blog | IDL Data Point


26 Jan 2017

When Might I Use An IDL Task? IDL As a Key to Data Analysis in a Heterogeneous Computing Environment

Author: Jim Pendleton

In IDL 8.6 we've exposed a new feature that standardizes a way for IDL to be called from any environment which can communicate between processes on a single operating system via standard input, standard output, and standard error. 

In our Harris Geospatial Custom Solutions Group, we look forward to deploying this new feature extensively to help our clients expose even more analysis capabilities into large, heterogeneous processing environments.

Although in earlier IDL releases a programmer could accomplish the goal of calling an IDL routine from an operating-system-level script in an ad hoc way, using a combination of the IDL Runtime executable, the IDL function COMMAND_LINE_ARGS, and other techniques, the new IDL Task architecture adds a level of commonality and standardization.

In the past, you might have written individual IDL Runtime applications to execute atomic processes on data in this type of environment. Your architecture would package up arguments and make a call to the idlrt.exe with those arguments passed on the standard input command line, via a system(), fork(), or another language's equivalent to IDL's SPAWN, along with a path to the IDL SAVE file containing your "task" to execute.

With the IDL Task architecture, you write procedural wrappers to your functionality using standard IDL, in combination with a simple JSON file which defines the arguments for your task, their data types, transfer directions, etc.

Placing the compiled IDL task code along with the JSON in your IDL session's search path exposes the tasks to the IDL Task Engine. This is essentially a stateless application that wraps an IDL Runtime interpreter. It performs the essential bits of validating input and output arguments and packaging them up before calling your IDL routine.

Your job distribution system, such as Harris' Geospatial Framework, will call the IDL Task Engine with JSON that represents the name of the task to be executed along with the task's arguments, written to the task script's standard input.

The task engine starts an independent IDL interpreter process for each task, allowing multiple tasks to be executed in parallel, up to the number of available processing licenses.

The arguments to and from the IDL Task must use data types that can be represented in JSON.  That restriction precludes arguments that are disallowed from crossing process boundaries, such as references to objects or pointers, as defined either in IDL or in another language.

An Example - Generating a Summary Report From Multiple Images

Let's say through some mechanism outside IDL and ENVI you have generated a directory of image files. Perhaps you own a fleet of UAVs with sensors, or a satellite or two. These image files contain data that, when distilled through various processing algorithms, can produce a single intelligence report.

You want to fold this workflow into a larger processing sequence of services that consists of multiple steps, only one of which involves the generation of the reports.

For the IDL portion, let's say you already have an object class in IDL that takes as input the path to a directory of images, performs classifications and time series analysis and outputs a PDF report with a summary of the results. Because it's just that simple in IDL. 

Let's call this class IntelReportGenerator. We will look at this class first, outside the context of the IDL Task Engine. For simplicity, the class I will describe will only have two methods, an ::Init method and a ::GenerateReport method.

This class is super-efficient and only has a handful of member variables.

Pro IntelReportGenerator__Define
!null = {IntelReportGenerator,  $
    ImageDirectory : '', $ ; path to the images to be read, input
    OutputReport   : '', $ ; path to the report file to be written, input
    Author         : '', $ ; a name to be applied to the report, input
    Debug          : !false $ ; a flag to toggle more detailed debugging information
}
End

PSA: I highly recommend adding a debug flag to each class. Debugging might not be enabled in an operational environment, but it's always nice to know it can be turned on without a modification and redeployment of the code.

The ::Init method of the class is primarily used to populate the member variables with the keyword parameters.

Function IntelReportGenerator::Init, $
    Image_Directory = Image_Directory, $
    Author = Author, $
    Output_Report = Output_Report, $
    Debug = Debug, $
    Status = Status, $
    Error = Error
On_Error, 2
Status = !false ; Assume failure
Error = !null ; Clear any error string on input
Catch, ErrorNumber ; Handle any unexpected error conditions
If (ErrorNumber ne 0) then Begin
    Catch, /Cancel
    If (self.Debug) then Begin
        ; Return a complete traceback if debugging is enabled
        Help, /Last_Message, Output = Error
    EndIf Else Begin
        ; Return a summary error instead of a traceback
        Error = !error_state.msg
    EndElse
    Return, Status
EndIf
self.Debug = Keyword_Set(Debug)
self.Author = Author ne !null ? Author : 'UNKNOWN'
If (~File_Test(Image_Directory, /Dir)) then Message, 'Image directory does not exist.'
self.ImageDirectory = Image_Directory
; ... More here.  you get the idea.
Status = !true
Return, 1
End

Next, let's consider the ::GenerateReport method. It's a simple matter of programming. We loop over the files in the input image directory, magic occurs, and an output file is generated. I relish the elegance of a simple design, don't you?

Pro IntelReportGenerator::GenerateReport, $
    Status = Status, $
    Error = Error
On_Error, 2
Status = !false
Error = !null
Catch, ErrorNumber
If (ErrorNumber ne 0) then Begin
    Catch, /Cancel
    If (self.Debug) then Begin
        Help, /Last_Message, Output = Error
    EndIf Else Begin
        Error = !error_state.msg
    EndElse
    Return
EndIf
Files = File_Search(self.ImageDirectory, '*')
ForEach File, Files Do Begin
  ;... Magic analysis here.  Batteries not included.
EndFor
; Magic report-writing here.  Nope, still no batteries.
Status = !true
End

All this should look familiar to you thus far if you have written any IDL code, especially the magic bits.

In order to put this functionality into an IDL Task workflow, we will need to write a procedural wrapper for our class that will instantiate an object with the appropriate keywords, then execute the method to generate the report. We will name this new routine IntelReportTask.

Pro IntelReportTask, $
    Image_Directory = Image_Directory, $
    Author = Author, $
    Output_Report = Output_Report, $
    Debug = Debug, $
    Status = Status, $ ; an output from this procedure, 0 = failure, 1 = success
    Error = Error ; An error string if Status is 0, or null on return otherwise
On_Error, 2
Error = !null ; Clear any error string
Status = !false ; assume failure
; ALWAYS include a CATCH handler to manage unexpected
; exception conditions.
Catch, ErrorNumber
If (ErrorNumber ne 0) then Begin
    Catch, /Cancel
    If (Keyword_Set(Debug)) then Begin
        ; Return a complete traceback if debugging is enabled
        Help, /Last_Message, Output = Error
    EndIf Else Begin
        ; Return only a summary error without traceback if debugging is off
        Error = !error_state.msg
    EndElse
    Return
EndIf
; Attempt to create the report-generation object, passing through the keywords.
o = IntelReportGenerator( $
    Image_Directory = Image_Directory, $
    Author = Author, $
    Output_Report = Output_Report, $
    Status = Status, $
    Error = Error, $
    Debug = Debug)
If (Obj_Valid(o)) then Begin
    ; Call the method to generate the report
    o.GenerateReport, Status = Status, Error = Error
EndIf
End

An IDL Task routine definition is required to pass all its arguments via keywords. Other than that restriction, it is a standard IDL procedure. There is no magic required.

The new piece of functionality is the requirement of a JSON task definition file. Within this file we define the name of the task (which corresponds to the IDL procedure name) and the type definitions associated with each of the keywords.

The argument type definitions allow the IDL Task Engine itself to perform parameter type checking and validation before your procedure is even called, relieving you of the burden of writing code to ensure, for example, that a directory path that should be a string is not being populated by a floating-point number instead. For some pedants of certain schools of computer-science thought, IDL's weak data-type validation at compile time is a turn-off rather than a strength. Wrapping pure IDL in a task with stricter argument types enforced by the Task Engine is one way to assuage such opinions, perhaps as a stepping stone to more illuminated paths to consciousness.

Of course, this also makes your IDL Tasks less generic than they are within IDL itself. A single IDL routine that may operate on any data type from byte values to double-precision numbers may require two or more different IDL Task routines as wrappers if you want to expose more than one. Another option is to write your task with multiple keywords to accept different data types, then pass the input to a common processing algorithm.

The general JSON syntax of a Custom IDL Task is described here.

The JSON associated with the IDL task follows.

{
  "name": "IntelReportTask",
  "description": "Generates a report from a directory of images.",
  "base_class": "IDLTaskFromProcedure",
  "routine": "intelreporttask",
  "schema": "idltask_1.0",
  "parameters": [
    {
      "name": "IMAGE_DIRECTORY",
      "description": "URI to the directory containing image files",
      "type": "STRING",
      "direction": "input",
      "required": true
    },
    {
      "name": "AUTHOR",
      "description": "Label to apply as the author to the output report",
      "type": "STRING",
      "direction": "input",
      "required": false,
      "default": "UNKNOWN"
    },
    {
      "name": "OUTPUT_REPORT",
      "description": "URI to the output report file",
      "type": "STRING",
      "direction": "input",
      "required": true
    },
    {
      "name": "DEBUG",
      "description": "Flag to enable verbose debugging information during errors or processing",
      "type": "BOOLEAN",
      "direction": "input",
      "required": false,
      "default": false
    },
    {
      "name": "STATUS",
      "description": "Status of the report generation request at completion, or error.",
      "type": "BOOLEAN",
      "direction": "output",
      "required": false
    },
    {
      "name": "ERROR",
      "description": "Any error text generated during processing",
      "type": "STRINGARRAY",
      "dimensions": "[*]",
      "direction": "output",
      "required": false
    }
  ]
}

Here, we have identified optional and required keywords, their input/output directions, and data types, among other things.

In the IDL documentation, we show some examples of calling a procedure within the context of an IDLTask object within IDL itself. In truth, this has limited utility outside of debugging. If you're even a semi-competent IDL wizard (which I assume you are if you have read this far), you will recognize that within the context of IDL, the IDLTask class and the task wrapper you have written are simply adding some overhead to a call you could make directly to your intended "worker" routine.

The real value of an IDL Task is shown when you insert your functionality into a heterogeneous workflow, outside of IDL itself.

In this environment, your framework will launch a command line-level script to execute your task.

On Windows, the default location for the script is in the installation directory, "C:\Program Files\Harris\idl86\bin\bin.x86_64\idltaskengine.bat".

On Linux, the default path is /usr/local/harris/idl/bin/idltaskengine.

The input to the idltaskengine script is JSON-format text that represents the name of the task along with the parameters.  The JSON may be passed to the script's standard input either through redirection from a file (<) or a pipe (|), for example,

<installpath>\idltaskengine.bat < <filepath>\my_intel_report_request.json

or

echo '{"taskName":"IntelReportTask","inputParameters":{"IMAGE_DIRECTORY":"<imagespath>"}, etc.}' | <installpath>/idltaskengine

It is the responsibility of your framework to construct the appropriate JSON object to be passed to the task engine script.

For our current example, the JSON might be constructed like this:

{
	"taskName": "IntelReportTask",
	"inputParameters": {
		"IMAGE_DIRECTORY": "/path-to-data/",
		"AUTHOR": "MELENDY",
		"OUTPUT_REPORT": "/path-to-report/myreport.pdf"
	}
}

Any parameters defined as having an output direction will be written to standard output in JSON format. In our example, the output might be returned in this general format if a handled error was encountered:

{
    "outputParameters": [{
        "STATUS": false
    }, {
        "ERROR": [
            "% SWAP_ENDIAN: Unable to swap object reference data type",
            "% Execution halted at: SWAP_ENDIAN        99 C:\\Program Files\\Harris\\IDL86\\lib\\swap_endian.pro",
            "%                      $MAIN$"
        ]
    }]
}

In the event of a truly wretched error, one that was unable to populate the JSON, the stderr return from the call to the IDL Task Engine script should be queried as well. See the "Exit Status" section of the online help topic, at the bottom of the page.

Your surrounding framework should be designed to validate the status return from the IDL Task Engine script on standard error first, then check for and parse any JSON returned on standard output.
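As a sketch of what that framework code might look like, here is a hypothetical Python wrapper (the helper names and the engine-path argument are my own; the output format follows the example above):

```python
import json
import subprocess

def parse_task_output(stdout_text):
    """Flatten the engine's outputParameters list into a single dict."""
    reply = json.loads(stdout_text)
    params = {}
    for entry in reply.get("outputParameters", []):
        params.update(entry)
    return params

def run_idl_task(engine_path, request):
    """Write a task request to the IDL Task Engine's standard input,
    check the exit status / standard error first, then parse standard output."""
    proc = subprocess.run([engine_path], input=json.dumps(request),
                          capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr.strip() or "IDL Task Engine failed")
    return parse_task_output(proc.stdout)
```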

More Examples

Additional IDL Task examples can be found here.

Geospatial Framework (GSF)

The Harris Geospatial Framework product (GSF) is just one example implementation of a distributed processing architecture into which IDL Tasks might be "snapped". Despite its marketing name, it is not limited to processing geospatial data.




Categories: IDL Blog | IDL Data Point


25 Jan 2017

Don’t forget to Stretch! Using ENVI’s stretch tools to see things our eyes can’t.

Author: Jeff McKissick

Living in Boulder, we mountain people like to do a lot of physical activities, whether it’s hiking, skiing, or yoga. Everyone knows the first thing you have to do before any physical activity is STRETCH! This applies in ENVI as well! Over the past few months I have worked on various projects where, had I applied one of ENVI’s stretches first, I would have saved myself a lot of time. Today’s example is a dataset of a large grass field in which the user was looking for an invasive weed species.

You can see from the figure above that EVERYTHING LOOKS GREEN! How can you pick out a weed when everything looks like grass? With a little help from the customer, we were able to get access to a shapefile they provided that showed us areas in the scene that actually were the weed we were looking for. Still, even with these shapefiles everything looks the same color. This is where, before you start any of your preprocessing or classification workflows, you stretch!

ENVI has some really great stretch tools to choose from, but seeing their names doesn’t tell you what they actually do. For this example we used a few different linear percent stretches to help accentuate some of our features. What these percent stretches do is trim the X% of extreme values at each end of the histogram.

So, for example, if you look at our three images with the histogram stretch plot shown, you can see in the first image with no stretch that our pixel values span 0-255, which is standard for byte data. If you look at the Linear 2% and 5% stretched images respectively, you see the pixel values get trimmed at each end of each color band.
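As a rough illustration of what a linear percent stretch does (a minimal NumPy sketch, not ENVI's implementation):

```python
import numpy as np

def linear_percent_stretch(band, percent=2.0):
    """Trim `percent`% of values at each end of the histogram and
    rescale what remains to the full 0-255 display range."""
    lo, hi = np.percentile(band, [percent, 100.0 - percent])
    clipped = np.clip(band.astype(float), lo, hi)
    return np.round((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)
```

Applied per band, this is what pushes nearly identical greens apart so a feature like the weed can stand out.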

From here we were easily able to identify the invasive weed in our scene and compare it to the shapefiles provided for us, so that we could run a classification workflow and extract the features we wanted. Our shapefiles, not shown here, were all around the areas in the scene above that are a very dark green. These stretches allowed us to make more accurate ROIs (Regions of Interest) for our classification, which in turn gave us a more accurate result.

So remember, DON’T FORGET TO STRETCH!


Categories: ENVI Blog | Imagery Speaks


19 Jan 2017

Extending ENVI Extensions

Author: Zachary Norman

One of my favorite parts of being able to extend ENVI with IDL is the ability to add custom buttons to ENVI's toolbox. These buttons are called extensions, and I have made many of them during my time with Harris Geospatial. I have created buttons for our tradeshow demos, examples for customers, and even for the Precision Ag Toolkit. Extensions in ENVI can be anything that you want them to be. Most of the time this means using ENVI's dynamic UI to create simple widgets that allow a user to select all the inputs and outputs for a task, but you can also call IDL directly and instantiate some other GUI-based application if you want.

While extensions are very easy to make (I just copy/paste old code and tweak it), they can be cumbersome when you have many to create all at once. In addition, if there is something you want added to all of your extensions, you have to go through each one by hand to change it. This is where getting creative with IDL programming comes in really handy, because IDL can be used to automate the generation of buttons in ENVI. For this example, I decided to finally take a crack at dynamically creating any number of buttons from ENVI tasks without needing to write a separate extension for each one.

The basic idea is that I wanted a dynamic extension that would allow me to pass in any task, create a button, run the right task when I press the button, and offer the ability to display any and all results from the task that are ENVIRasters, ENVIROIs, or ENVIVectors (or shapefiles). If you're unfamiliar with how to add extensions to ENVI, here is a link to the docs, which provides some background information:

http://www.harrisgeospatial.com/docs/ENVI__AddExtension.html

Following the example in the docs, I decided to use the UVALUE of each button to pass in information about which task to create a dynamic UI for. For my data structure I used an orderedhash: the key is the task name, and the value is a two-element string array containing the folder in which the button should appear and the name of the button you will see in ENVI's toolbox. Here is an example of what the data structure looks like:

;create an orderedhash of buttons that we want to create
buttons = orderedhash() 
buttons['ROIMaskRaster'] = ['/Regions of Interest', 'ROI Mask Raster']

At this point it just came down to looping over the hash correctly so that the buttons would be made in the right place. I chose the foreach loop because it makes it easy to get the key and the value from a hash or orderedhash at the same time. Here is what that code looks like:

foreach val, buttons, key do begin
  e.AddExtension, val[1], 'better_extensions', $
    PATH = '/Better Extensions' + val[0], $
    UVALUE = key 
endforeach

That short code block dynamically goes through every entry in my orderedhash and creates the buttons with the right placement and the right task to execute when clicked. Note that I created a subfolder called 'Better Extensions' to contain all the fancy buttons the extension creates. Once I had this code figured out, I just needed to write the actual procedure that runs the extension (I called it "better_extensions").

You can see the complete code for better_extensions below, but there are a few key points to mention about the code:

  • To get the task name for the button that was clicked, I used the UVALUE keyword to store the name when creating the extension. Then, in the better_extensions procedure, I receive the event as an argument and retrieve the button's UVALUE with widget_control.

  • To create the task UI all I needed to do was the following:

        ;create a dialogue for the rest of the task
        ok = e.UI.SelectTaskParameters(task)
    
  • After that, I just needed to go through each task parameter, check for INPUT or OUTPUT and, if it was output, check for ENVIRaster, ENVIVector, or ENVIROI. I then saved these and displayed them in ENVI's current view if they were found in the output.
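The parameter sweep in that last bullet is just a filter on direction and type. A language-neutral Python sketch of the same idea (hypothetical parameter dicts standing in for ENVITaskParameter objects):

```python
def collect_displayable_outputs(parameters):
    """Group OUTPUT parameters by the data types we know how to display."""
    groups = {"ENVIRASTER": [], "ENVIROI": [], "ENVIVECTOR": []}
    for p in parameters:
        if p["direction"] == "OUTPUT" and p["type"] in groups:
            groups[p["type"]].append(p["value"])
    return groups
```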

The main goal here was to show that, with a little bit of IDL programming, you can take your ENVI analytics to the next level. The code for the extension can be found below. Cheers!

Also, keep an eye out for my next blog where I'm going to talk about giving an old extension (from the online extensions library) a much needed update!

; Add the extension to the toolbox. Called automatically on ENVI startup.
pro better_extensions_extensions_init

  ; Set compile options
  compile_opt idl2

  ; Get ENVI session
  e = envi(/CURRENT)
  
  ;create an orderedhash of buttons that we want to create
  buttons = orderedhash() 
  
  ;sample ROI tools
  buttons['ROIMaskRaster'] = $
    ['/Regions of Interest', 'ROI Mask Raster']
  buttons['ROIToClassification'] = $
    ['/Regions of Interest', 'Classification Image from ROIs']
  
  ;classification tools
  buttons['ISODataClassification'] = $
    ['/Classification/Unsupervised Classification', 'ISOData Classification']
  buttons['ClassificationAggregation'] = $
    ['/Classification/Post Classification', 'Classification Aggregation']
  buttons['ClassificationClumping'] = $
    ['/Classification/Post Classification', 'Clump Classes']
  buttons['ClassificationSieving'] = $
    ['/Classification/Post Classification', 'Sieve Classes']
  buttons['ClassificationSmoothing'] = $
    ['/Classification/Post Classification', 'Classification Smoothing']
  buttons['ClassificationToShapefile'] = $
    ['/Classification/Post Classification', 'Classification to vector']
  
  ;radiometric correction tools
  buttons['DarkSubtractionCorrection'] = $
    ['/Radiometric Correction', 'Dark Subtraction']
  buttons['ApplyGainOffset'] = $
    ['/Radiometric Correction', 'Apply Gain and Offset']

  ;filters
  buttons['BitErrorAdaptiveFilter'] = $
    ['/Filters', 'Bit Errors Filter']
  buttons['GaussianHighPassFilter'] = $
    ['/Filters', 'Gaussian High Pass Filter']
  buttons['GaussianLowPassFilter'] = $
    ['/Filters', 'Gaussian Low Pass Filter']
  buttons['HighPassFilter'] = $
    ['/Filters', 'High Pass Filter']
  buttons['LowPassFilter'] = $
    ['/Filters', 'Low Pass Filter']
  
  ;add the buttons
  foreach val, buttons, key do begin
    e.AddExtension, val[1], 'better_extensions', $
      PATH = '/Better Extensions' + val[0], $
      UVALUE = key 
  endforeach

end

; ENVI Extension code. Called when the toolbox item is chosen.
pro better_extensions, event

  ; Set compile options
  compile_opt idl2

  ;Get ENVI session
  e = envi(/CURRENT)
  if (e eq !NULL) then begin
    e = envi()
  endif

  CATCH, Error_status
  IF Error_status NE 0 then begin
    catch, /CANCEL
    help, /LAST_MESSAGE, output = err_txt
    p = dialog_message(err_txt)
    return
  ENDIF

  ;get the directory that our extension lives in
  WIDGET_CONTROL, event.id, GET_UVALUE = taskName
  
  ;create the task object
  task = ENVITask(taskname)
  
  ;add a parameter asking whether to display the results
  displayParam = ENVITaskParameter(NAME='DISPLAY_RESULT', $
    DISPLAY_NAME = 'Display Result',$
    DEFAULT = !FALSE,$
    TYPE='bool', $
    DIRECTION='input', $
    REQUIRED = 1)
  ;add the new parameter to the task
  task.AddParameter, displayParam

  ;create a dialogue for the rest of the task
  ok = e.UI.SelectTaskParameters(task)

  ;user selected OK
  if (ok eq 'OK') then begin
    ;save the display flag, then remove our extra parameter
    display = task.DISPLAY_RESULT
    task.RemoveParameter, 'DISPLAY_RESULT'

    ;run the task
    task.execute

    ;check if we want to display things
    if display then begin
      ;things to display
      rasters = list()
      rois = list()
      shapefiles = list()
      
      ;check for output datatypes that are rasters, vectors, or rois
      foreach paramName, task.ParameterNames() do begin
        param = task.parameter(paramName)
        
        if (param.direction eq 'OUTPUT') then begin
          if (param.TYPE eq 'ENVIRASTER') then begin
            e.data.add, param.VALUE
            rasters.add, param.VALUE
          endif
          if (param.TYPE eq 'ENVIROI') then begin
            e.data.add, param.VALUE
            rois.add, param.VALUE
          endif
          if (param.TYPE eq 'ENVIVECTOR') then begin
            e.data.add, param.VALUE
            shapefiles.add, param.VALUE
          endif
        endif
      endforeach
      
      ;get envi's view
      View1 = e.GetView()
      
      ;disable refresh
      e.refresh, /DISABLE
      
      ;check for rasters and ROIs
      if (n_elements(rasters) gt 0) then begin
        
        ;display each raster in the view
        foreach raster, rasters do begin
          rasterLayer = View1.CreateLayer(raster)
        endforeach
        
        ;only display ROIs if there is a raster layer
        foreach roiarr, rois do begin
          ;handle arrays of rois
          foreach roi, roiarr do begin
            roilayer = rasterlayer.AddROI(roi)
          endforeach
        endforeach
      endif
      
      ;check for vectors and display each one
      foreach vector, shapefiles do begin
        ; Create a vector layer
        vectorLayer = View1.CreateLayer(vector)
      endforeach
      
      ;refresh display again
      e.refresh
    endif
  endif
end


Categories: IDL Blog | IDL Data Point





© 2017 Exelis Visual Information Solutions, Inc., a subsidiary of Harris Corporation