
3 Aug 2016

### Take Your Analytics to the Data

Author: Joey Griebel

ENVI Services Engine: Change Detection

I was recently on NASA’s Earth Observatory, reading about the shrinking glaciers in Montana’s Glacier National Park. They acquired eight images, dating from 1984 through 2015, focused near Lake McDonald, where the Jackson and Blackfoot Glaciers are clearly visible in false-color imagery. They put together a slideshow showing a time series of the glacier loss due to global warming and the changing climate. The one thing that stuck out in my mind was how time consuming it must have been to search for cloud-free images, then download the data… and after that you would still need to run your analysis or comparison.

In the past few years we have started to see a shift toward users wanting to run analysis where the data resides, for instance on platforms like Amazon Web Services. As we continue to implement more and more ENVI tasks on Amazon Web Services, you can truly take your analysis to where the data is. In the change detection example below, one of our engineers put together a quick interface utilizing ESRI basemaps to define the area of interest. By linking to ESRI’s endpoint (Landsat.arcgis.com) you can stream in the Landsat data available for that area of interest. In this example, we can search for the area we are interested in seeing (Glacier National Park), see what data is available during different years, filter out the data based upon cloud coverage, and then apply spectral indices if wanted:

Once you have found the two scenes you want, you simply click Change Detection, and the ENVI tasks run through the steps of the normal analysis, providing updates along the way:

In no time at all, you are given the results of the quick change detection analysis. Dark red marks areas that were present in the time 1 image but are gone by time 2, while blue marks areas that are new to the scene; in these scenes, the blue looks like snowpack that hasn’t quite melted. If you take a look at the examples provided on Earth Observatory, they focus on the Jackson and Blackfoot Glaciers, which are the areas you see in dark red below:

This gives you an idea of how you can go beyond the original visual comparison and create shapefiles to highlight the glacier loss, without having to take the time to comb through data for the right set and then download it for analysis. The possibilities for applications like this are really endless as we continue to wrap ENVI functionality into ENVI tasks. ENVI Services Engine allows you to quickly and easily take the analysis to where the data is and save time on downloading, as well as utilize powerful processing tools.
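The underlying comparison can be sketched in a few lines. The following is a hypothetical Python/NumPy sketch, not the actual ENVI task chain: it simply thresholds the difference of two co-registered, normalized scenes to flag "gone" and "new" pixels, mirroring the dark-red/blue convention above. The function name and threshold are illustrative only.

```python
import numpy as np

def simple_change_map(time1, time2, threshold=0.2):
    """Classify change between two co-registered, normalized scenes.

    Returns an int array: 1 = "gone" (bright at time 1, dark at time 2,
    shown dark red), 2 = "new" (shown blue), 0 = no significant change.
    """
    diff = time2.astype(float) - time1.astype(float)
    change = np.zeros(diff.shape, dtype=int)
    change[diff < -threshold] = 1  # feature disappeared between dates
    change[diff > threshold] = 2   # feature appeared between dates
    return change

t1 = np.array([[0.9, 0.1], [0.5, 0.5]])  # e.g. glacier pixel at (0,0)
t2 = np.array([[0.2, 0.8], [0.5, 0.5]])  # glacier gone; new snow at (0,1)
print(simple_change_map(t1, t2))  # [[1 2] [0 0]]
```

A real workflow would add radiometric normalization and cloud masking first, which is exactly the tedium the hosted ENVI tasks take care of.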

Give it a try yourself here


Categories: ENVI Blog | Imagery Speaks


28 Jul 2016

### Hough Transform Demo GUI

Author: Daryl Atencio

Hough Transform Demo GUI

The following code demonstrates the Hough transform.  It provides one interactive display for moving points in an image domain and a second display showing the resulting Hough transform.  To run the application:

1. Save the code that follows to a file named hough_demo.pro
2. Open and compile the file in the IDLDE
3. Execute the following at the IDL command prompt

hough_demo

The optional keyword NUM_POINTS may be used to set the number of points in the image-domain display.  This number must be in the range [2,100].

Here is a brief description of the graphical user interface (launched using the command hough_demo, NUM_POINTS=10).

The points in the image domain (left) can be moved by clicking and dragging.  The image in the Hough domain (right) updates as a point moves.  When moving the mouse over the Hough domain, a representative line for that point appears in the image domain.  For instance, hovering over the point in the Hough domain where many of the sinusoidal curves intersect draws a line that crosses several points in the image domain.  (Sorry, you can’t see the cursor in the image.)

Additionally, two line equations are displayed:

1. Line Equation – The equation of the line between the first two points (red), in order from left to right, in the image domain.
2. Hough Line Equation – The equation of the line defined by a point in the Hough domain.  This updates as the mouse cursor moves in the Hough-domain window.
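For reference, the Hough Line Equation comes from rearranging the normal form of a line, rho = x·cos(θ) + y·sin(θ), into slope/intercept form: y = (-cos θ / sin θ)·x + rho/sin θ.  The demo's GetSlopeIntercept method applies the same formula, additionally offset by the window's default MINX/MINY values.  A quick Python check of the rearrangement (the function name here is illustrative, not part of the demo):

```python
import math

def hough_line_to_slope_intercept(rho, theta):
    """Convert a Hough-domain point (rho, theta) to slope/intercept form.

    The line rho = x*cos(theta) + y*sin(theta) becomes y = m*x + b with
    m = -cos(theta)/sin(theta) and b = rho/sin(theta).
    """
    m = -math.cos(theta) / math.sin(theta)
    b = rho / math.sin(theta)
    return m, b

# A horizontal line at y = 3 corresponds to theta = 90 degrees, rho = 3
m, b = hough_line_to_slope_intercept(3.0, math.pi / 2)
print(m, b)  # slope ~0, intercept ~3
```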

The source code:

;------------------------------------------------------------------------------

;+

; Main event handler

;

; :Params:

;   sEvent: in, required, type="structure"

;     An IDL widget event structure

;

;-

pro hough_demo_event, sEvent

compile_opt idl2, logical_predicate

widget_control, sEvent.top, GET_UVALUE=oHoughDemo

oHoughDemo->Event, sEvent

end

;------------------------------------------------------------------------------

;+

; This method is called when the Hough demo GUI has been realized

;

; :Params:

;   tlb: in, required, type="long"

;     The widget ID of the top-level-base

;-

pro hough_demo_realize, tlb

compile_opt idl2, logical_predicate

widget_control, tlb, GET_UVALUE=oHoughDemo

oHoughDemo->NotifyRealize

end

;------------------------------------------------------------------------------

;+

; Lifecycle method called when the object is destroyed via OBJ_DESTROY.

;

;-

pro hough_demo::Cleanup

compile_opt idl2, logical_predicate

self->Destruct

end

;------------------------------------------------------------------------------

;+

; Constructs the GUI for the demo application

;-

pro hough_demo::ConstructGUI

compile_opt idl2, logical_predicate

self.tlb = widget_base(EVENT_PRO='hough_demo_event', MAP=0, $
  NOTIFY_REALIZE='hough_demo_realize', /ROW, TITLE='Hough Demo', $
  /TLB_KILL_REQUEST_EVENTS)

wBase = widget_base(self.tlb, /COLUMN)

wDraw = widget_draw(wBase, /BUTTON_EVENTS, GRAPHICS_LEVEL=2, /MOTION_EVENTS, $
  UNAME='draw_points', XSIZE=300, YSIZE=300)
wLabel = widget_label(wBase, VALUE='Line Equation')
wLabel = widget_label(wBase, /DYNAMIC_RESIZE, UNAME='label_line', VALUE='')
wLabel = widget_label(wBase, VALUE='Hough Line Equation')
wLabel = widget_label(wBase, /DYNAMIC_RESIZE, UNAME='label_hough', VALUE='')
wBase = widget_base(self.tlb, /COLUMN)
wDraw = widget_draw(wBase, GRAPHICS_LEVEL=2, /MOTION_EVENTS, $
  /TRACKING_EVENTS, UNAME='draw_hough', XSIZE=500, YSIZE=400)

widget_control, self.tlb, /REALIZE, SET_UVALUE=self

end

;------------------------------------------------------------------------------

;+

; Cleans up any heap member variables

;-

pro hough_demo::Destruct

compile_opt idl2, logical_predicate

if widget_info(self.tlb, /VALID_ID) then begin

void = self->GetObject('model/points', WINDOW=oWindow)

obj_destroy, oWindow

void = self->GetObject('model/image', /HOUGH, WINDOW=oWindow)

obj_destroy, oWindow

widget_control, self.tlb, /DESTROY

endif

ptr_free, [self.pRho, self.pTheta]

end

;------------------------------------------------------------------------------

;+

; Main entry point for widget events

;

; :Params:

;   sEvent: in, required, type="structure"

;     An IDL widget event structure

;-

pro hough_demo::Event, sEvent

compile_opt idl2, logical_predicate

case tag_names(sEvent, /STRUCTURE_NAME) of

'WIDGET_DRAW': self->EventDraw, sEvent

'WIDGET_KILL_REQUEST': self->Destruct

'WIDGET_SLIDER': self->EventSlider, sEvent

'WIDGET_TRACKING': self->EventTracking, sEvent

else: help, sEvent

endcase

end

;------------------------------------------------------------------------------

;+

; This method handles events from draw widgets.  It simply passes the event

; to the event handler for the specific draw widget.

;

; :Params:

;   sEvent: in, required, type="structure"

;     An IDL {WIDGET_DRAW} structure

;-

pro hough_demo::EventDraw, sEvent

compile_opt idl2, logical_predicate

case widget_info(sEvent.id, /UNAME) of

'draw_hough': self->EventDrawHough, sEvent

'draw_points': self->EventDrawPoints, sEvent

else:

endcase

end

;------------------------------------------------------------------------------

;+

; This method handles events from the draw widget displaying the Hough domain.

;

; :Params:

;   sEvent: in, required, type="structure"

;     An IDL {WIDGET_DRAW} structure

;-

pro hough_demo::EventDrawHough, sEvent

compile_opt idl2, logical_predicate

case sEvent.type of

2: self->UpdateLinePlot, [sEvent.x,sEvent.y]

else:

endcase

end

;------------------------------------------------------------------------------

;+

; This method handles events from the draw widget displaying the image-domain

; points.

;

; :Params:

;   sEvent: in, required, type="structure"

;     An IDL {WIDGET_DRAW} structure

;-

pro hough_demo::EventDrawPoints, sEvent

compile_opt idl2, logical_predicate

widget_control, sEvent.id, GET_VALUE=oWindow

case sEvent.type of

0: begin

; Press event

oWindow->GetProperty, GRAPHICS_TREE=oView

oSelect = oWindow->Select(oView, [sEvent.x,sEvent.y])

if obj_valid(oSelect[0]) then begin

self.oSelect = oSelect[0]

self.oSelect->GetProperty, DATA=pts

void = min(abs(sEvent.x-pts[0,*]) + abs(sEvent.y-pts[1,*]), index)

self.oSelect->SetProperty, UVALUE=index

endif

end

1: begin

; Release event

self.oSelect = obj_new()

return

end

2: begin

; Motion event

if obj_valid(self.oSelect) then begin

widget_control, sEvent.id, GET_VALUE=oWindow

oWindow->GetProperty, DIMENSIONS=dims

if (sEvent.x LT 0) || (sEvent.y LT 0) $
  || (sEvent.x GT dims[0]) || (sEvent.y GT dims[1]) then begin
return
endif
self.oSelect->GetProperty, DATA=pts, UVALUE=index
pts[*,index] = [sEvent.x,sEvent.y]
self.oSelect->SetProperty, DATA=pts
self->UpdateHough
endif
end
else: return
endcase
oWindow->Draw
end

;------------------------------------------------------------------------------
;+
; This method handles tracking events
;
; :Params:
;   sEvent: in, required, type="structure"
;     An IDL {WIDGET_TRACKING} structure
;-
pro hough_demo::EventTracking, sEvent
compile_opt idl2, logical_predicate
case widget_info(sEvent.id, /UNAME) of
'draw_hough': begin
; Hide the representative line and clear the equation label when the
; cursor leaves the Hough-domain window
if ~sEvent.enter then begin
oPlot = self->GetObject('model/plot', WINDOW=oWindow)
oPlot->SetProperty, HIDE=1
oWindow->Draw
widget_control, self->GetWID('label_hough'), SET_VALUE=''
endif
end
else:
endcase
end

;------------------------------------------------------------------------------
;+
; Calculates the default MINX and MINY values used by the HOUGH algorithm
;
; :Keywords:
;   X: out, optional, type="float"
;     Set this keyword to a named variable to retrieve the default MINX value
;     used by HOUGH
;   Y: out, optional, type="float"
;     Set this keyword to a named variable to retrieve the default MINY value
;     used by HOUGH
;-
pro hough_demo::GetMinXY, $
  X=minX, Y=minY

compile_opt idl2, logical_predicate

void = self->GetObject('model/points', WINDOW=oWindow)

oWindow->GetProperty, DIMENSIONS=dims

minX = -(dims[0]-1)/2

minY = -(dims[1]-1)/2

end

;------------------------------------------------------------------------------

;+

; This method is for accessing objects in the object-graphics tree

;

; :Returns:

;   A reference to the object with the specified name.  If no match is

;   found, a null object will be returned.

;

; :Params:

;   name: in, required, type="string"

;     The name of the object to be retrieved.

;

; :Keywords:

;   HOUGH: in, optional, type="boolean"

;     Set this keyword to have the object retrieved from the Hough-domain

;     graphics window.  By default, the object will be retrieved from the

;     image-domain graphics window.

;   VIEW: out, optional, type="objref"

;     Set this keyword to a named variable to retrieve a reference to the

;     IDLGRVIEW object from the graphics window.

;   WINDOW: out, optional, type="objref"

;     Set this keyword to a named variable to retrieve a reference to the

;     IDLGRWINDOW object

;-

function hough_demo::GetObject, name, $
  HOUGH=hough, $
  VIEW=oView, $
  WINDOW=oWindow
compile_opt idl2, logical_predicate
uName = keyword_set(hough) ? 'draw_hough' : 'draw_points'
widget_control, self->GetWID(uName), GET_VALUE=oWindow
oWindow->GetProperty, GRAPHICS_TREE=oView
return, oView->GetByName(name)
end

;------------------------------------------------------------------------------
;+
; Calculates the slope and y-intercept of a line in either of the graphics
; windows
;
; :Returns:
;   A two-element, floating-point vector containing, in order, the slope and
;   y-intercept of the line
;
; :Params:
;   xy: in, optional, type="float"
;     Set this to the (x,y)-position of the cursor in the HOUGH window.  This
;     information is only used when the HOUGH keyword is set.
;
; :Keywords:
;   HOUGH: in, optional, type="boolean"
;     Set this keyword to have the slope and intercept calculated from a point
;     in the Hough domain.  By default, the first two points in the image-
;     domain are used.
;-
function hough_demo::GetSlopeIntercept, xy, $
  HOUGH=hough
compile_opt idl2, logical_predicate

mb = fltarr(2)

if keyword_set(hough) then begin

rho = (*self.pRho)[xy[1]]

theta = (*self.pTheta)[xy[0]]

self->GetMinXY, X=xMin, Y=yMin

mb[0] = -1.0/tan(theta)

mb[1] = (rho - xMin*cos(theta) - yMin*sin(theta)) / sin(theta)

endif else begin

oPoints = self->GetObject('model/points')

oPoints->GetProperty, DATA=pts

mb[0] = (pts[1,1]-pts[1,0])/(pts[0,1]-pts[0,0])

mb[1] = pts[1,0] - mb[0]*pts[0,0]

endelse

return, mb

end

;------------------------------------------------------------------------------

;+

; This method returns the ID of the widget that uses the specified name as
; its UNAME.

;

; :Returns:

;   The ID of the widget using the specified UNAME.  If no widget is using the

;   UNAME then 0 is returned

;

; :Params:

;   name: in, required, type="string"

;       The UNAME of the widget whose ID is to be returned

;

; :Keywords:

;   PARENT: in, optional, type="integer"

;       The widget ID of the parent widget.  If not set, self.tlb will be used.

;-

function hough_demo::GetWID, name, $
  PARENT=wParent
compile_opt idl2, logical_predicate
if ~n_elements(wParent) then wParent = self.tlb
if ~widget_info(wParent, /VALID_ID) then return, 0
return, widget_info(wParent, FIND_BY_UNAME=name)
end

;------------------------------------------------------------------------------
;+
; Lifecycle method for initializing an instance of the object via OBJ_NEW()
;
; :Returns:
;   1 if the object is successfully initialized
;
; :Keywords:
;   NUM_POINTS: in, optional, type="integer"
;     Sets the number of points to be displayed in the image-domain display.
;     The default is 2.  If more than 2 points are displayed, the first two
;     will be red and the rest will be blue.  This is because the slope/
;     intercept calculation for this window is done using the first two points.
;-
function hough_demo::Init, $
  NUM_POINTS=nPoints

compile_opt idl2, logical_predicate

self.pRho = ptr_new(/ALLOCATE_HEAP)

self.pTheta = ptr_new(/ALLOCATE_HEAP)

self.nPoints = n_elements(nPoints) ? nPoints : 2

return, 1

end

;------------------------------------------------------------------------------

;+

; This method initializes the object graphics used by the Hough demo

;-

pro hough_demo::InitializeGraphics

compile_opt idl2, logical_predicate

; Image domain (Points)

widget_control, self->GetWID('draw_points'), GET_VALUE=oWindowPoint

oWindowPoint->GetProperty, DIMENSIONS=dims

oSymbol = obj_new('IDLgrSymbol', 1, THICK=2, SIZE=5)

x = fix(randomu(s,self.nPoints)*dims[0])

y = fix(randomu(s,self.nPoints)*dims[1])

colors = bytarr(3,self.nPoints)

colors[0,0:1] = 255

if (self.nPoints GT 2) then colors[2,2:self.nPoints-1] = 255

oPoints = obj_new('IDLgrPolyline', x, y, LINESTYLE=6, NAME='points', $
  SYMBOL=oSymbol, VERT_COLORS=colors)
oPlot = obj_new('IDLgrPlot', HIDE=1, NAME='plot')
oModelPoints = obj_new('IDLgrModel', NAME='model')
oModelPoints->Add, [oPoints, oPlot]
oViewPoints = obj_new('IDLgrView', COLOR=[255,255,255], $
  VIEWPLANE_RECT=[0,0,dims])
oViewPoints->Add, oModelPoints
oWindowPoint->SetProperty, GRAPHICS_TREE=oViewPoints
oWindowPoint->Draw

; Hough domain
widget_control, self->GetWID('draw_hough'), GET_VALUE=oWindowHough
oImage = obj_new('IDLgrImage', NAME='image')
oModelHough = obj_new('IDLgrModel', NAME='model')
oModelHough->Add, oImage
oViewHough = obj_new('IDLgrView', COLOR=[255,255,255])
oViewHough->Add, oModelHough
oWindowHough->SetProperty, GRAPHICS_TREE=oViewHough
oWindowHough->Draw
end

;------------------------------------------------------------------------------
;+
; This method is called when the GUI is realized.  It finishes up some
; initialization and starts XMANAGER.
;-
pro hough_demo::NotifyRealize
compile_opt idl2, logical_predicate
ss = get_screen_size()
self->InitializeGraphics
self->UpdateHough, /RESIZE_WINDOW
wGeom = widget_info(self.tlb, /GEOMETRY)
widget_control, self.tlb, XOFFSET=(ss[0]-wGeom.scr_xSize)/2, $
  YOFFSET=(ss[1]-wGeom.scr_ySize)/2
widget_control, self.tlb, MAP=1
xmanager, 'hough_demo', self.tlb, /NO_BLOCK
end

;------------------------------------------------------------------------------
;+
; This method updates the image in the Hough domain according to the points
; in the image domain.
;
; :Keywords:
;   RESIZE_WINDOW: in, optional, type="boolean"
;     Set this keyword to have the Hough-domain display resized to the size
;     of the output from HOUGH.  This is only done the first time a Hough
;     image is calculated.
;-
pro hough_demo::UpdateHough, $
  RESIZE_WINDOW=resize

compile_opt idl2, logical_predicate

oPoints = self->GetObject('model/points', WINDOW=oWindowPoints)

oWindowPoints->GetProperty, DIMENSIONS=dims

oPoints->GetProperty, DATA=pts

temp = make_array(dims, /FLOAT, VALUE=0.0)

pts = transpose(pts)

temp[pts[*,0],pts[*,1]] = 1.0

self->GetMinXY, X=xMin, Y=yMin

imgHough = hough(temporary(temp), RHO=rho, THETA=theta)

*self.pRho = rho

*self.pTheta = theta

dimHough = size(imgHough, /DIMENSIONS)

oImage = self->GetObject('model/image', /HOUGH, VIEW=oViewHough, WINDOW=oWindowHough)

if keyword_set(resize) then begin

widget_control, self->GetWID('draw_hough'), XSIZE=dimHough[0], YSIZE=dimHough[1]

oViewHough->SetProperty, VIEWPLANE_RECT=[0,0,dimHough]

endif

oImage->SetProperty, DATA=255-bytscl(imgHough)

oWindowHough->Draw

self->UpdateLineEquation

end

;------------------------------------------------------------------------------

;+

; This method updates the Hough line-equation on the GUI.

;

; :Params:

;   xy: in, required, type="float"

;     A 2-element vector containing the xy-location in the Hough domain for

;     which the line will be calculated.

;-

pro hough_demo::UpdateHoughEquation, xy

compile_opt idl2, logical_predicate

mb = self->GetSlopeIntercept(xy, /HOUGH)

str = 'y = '+string(mb[0], FORMAT='(f0.2)')+' * x + '+string(mb[1], FORMAT='(f0.2)')

widget_control, self->GetWID('label_hough'), SET_VALUE=str

end

;------------------------------------------------------------------------------

;+

; This method updates the image-domain line equation on the GUI according to

; the first two points.

;-

pro hough_demo::UpdateLineEquation

compile_opt idl2, logical_predicate

mb = self->GetSlopeIntercept()

str = 'y = '+string(mb[0], FORMAT='(f0.2)')+' * x + '+string(mb[1], FORMAT='(f0.2)')

widget_control, self->GetWID('label_line'), SET_VALUE=str

end

;------------------------------------------------------------------------------

;+

; This method updates the line plot in the image-domain display using a point

; in the Hough domain.

;

; :Params:

;   xy: in, required, type="long"

;     A two-element vector containing the xy-location in the Hough domain for

;     which the line will be drawn in the image-domain display

;-

pro hough_demo::UpdateLinePlot, xy

compile_opt idl2, logical_predicate

oPlot = self->GetObject('model/plot', WINDOW=oWindow)

oWindow->GetProperty, DIMENSIONS=dims

rho = (*self.pRho)[xy[1]]

theta = (*self.pTheta)[xy[0]]

self->GetMinXY, X=xMin, Y=yMin

m = -1.0/tan(theta)

b = (rho - xMin*cos(theta) - yMin*sin(theta)) / sin(theta)

x = findgen(dims[0])

y = m*x+b

oPlot->SetProperty, DATAX=x, DATAY=y, HIDE=0

oWindow->Draw

self->UpdateHoughEquation, xy

end

;------------------------------------------------------------------------------

;+

; Class structure definition

;

; :Fields:

;   nPoints: The number of points in the image-domain display

;   oSelect: A reference to the selected graphics object.  This is used for

;     moving points in the image-domain display

;   pRho: Holds the rho values for the hough transform

;   pTheta: Holds the theta values for the hough transform

;   tlb: The widget ID of the top-level-base of the GUI

;-

pro hough_demo__define

compile_opt idl2, logical_predicate

void = {hough_demo          $
       ,nPoints : 0         $
       ,oSelect : obj_new() $
       ,pRho    : ptr_new() $
       ,pTheta  : ptr_new() $
       ,tlb     : 0L        $
       }

end

;------------------------------------------------------------------------------

;+

; This routine starts the Hough demo

;

; :Keywords:

;   NUM_POINTS: in, optional, type="integer"

;     Sets the number of points to be displayed in the image-domain display.

;     The allowed range is [2,100].  The default is 2.  If more than 2 points

;     are displayed the first two will be red and the rest will be blue.  This

;     is because the slope/intercept calculation for this window is done using

;     the first two points.

;-

pro hough_demo, NUM_POINTS=nPoints

compile_opt idl2, logical_predicate

if n_elements(nPoints) && ((nPoints LT 2) || (nPoints GT 100)) then begin

void = dialog_message('The number of points must be in [2,100]', /ERROR)

return

endif

oHoughDemo = obj_new('hough_demo', NUM_POINTS=nPoints)

oHoughDemo->ConstructGUI

end


Categories: IDL Blog | IDL Data Point


26 Jul 2016

### Creating Image Tiles in ENVI

Author: Jason Wolfe

I want to preview a new feature that will be available in the upcoming ENVI 6.0 release: a series of “Dice Raster” tools that can separate images into multiple tiles.

You will have four different options for specifying how to separate (or dice) images. The first is to indicate how many tiles to create in the X and Y directions. For example, I diced the above image into four tiles in the X direction and three tiles in the Y direction, for a total of 12 tiles.

Another option is to create tiles based on linear distance. In this case, the image must be georeferenced to a standard or RPC map projection and you must know the units associated with the projection. The following example shows an image georeferenced to a UTM Zone 13N WGS-84 projection, split into tiles that are 1000 x 1000 meters.

In most cases the tiles in the last row and column will be smaller than the specified distance, as this example shows.

Another option is to dice an image by a given number of pixels in the X and Y direction. This is similar to the tile distance option.
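The tiling arithmetic behind the pixel-based option is straightforward. Here is a hypothetical Python sketch, not the ENVI implementation, that computes tile windows for a requested pixel size, with the last row and column absorbing the remainder just as in the distance example above:

```python
def tile_windows(ncols, nrows, tile_x, tile_y):
    """Return (x0, y0, width, height) windows covering an image.

    Tiles in the last row/column may be smaller than the requested
    size when the image dimensions are not exact multiples of it.
    """
    windows = []
    for y0 in range(0, nrows, tile_y):
        for x0 in range(0, ncols, tile_x):
            windows.append((x0, y0,
                            min(tile_x, ncols - x0),
                            min(tile_y, nrows - y0)))
    return windows

# A 2500 x 2000 pixel image diced into 1000 x 1000 pixel tiles
# yields a 3 x 2 grid; the right column is only 500 pixels wide.
print(len(tile_windows(2500, 2000, 1000, 1000)))  # 6
```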

Finally, you can create image tiles based on the spatial extent of vector records in a shapefile. Here is an example of how you might use this feature. Let’s say you have a Landsat scene and you want to separate it into tiles that correspond to USGS 7.5-minute topographic quadrangles. This screen capture shows a Landsat TM scene of the Grand Canyon along with a shapefile of Arizona quadrangle map boundaries (available from Data.gov).

Since there are so many polygons in this shapefile, I created a vector subset of only the records that overlap the Landsat image. Open the Attribute Viewer, then click in the display to select the records that you want to keep. The Attribute Viewer highlights the selected records.

Then choose the File > Save Selected Records to a new Shapefile menu option. Now we have a shapefile that only contains quadrangle map boundaries around the Landsat scene.

Now we’re ready to create the tiles. The Dice Raster tools are available from the Toolbox under Raster Management > Raster Dicer. Use the Dice Raster by Vector tool to separate the Landsat image into separate tiles based on the quadrangle boundaries.

Since the shapefile had 270 polygon records, the Dice Raster tool will create 270 separate images. Here is one of the resulting image tiles that corresponds to the House Rock Springs Quadrangle boundary, displayed with map grid line annotations (also available in ENVI 6.0):

You can also use the Dice Raster tools, for example, to split an image of a large study area into smaller areas of analysis. They are just one of many versatile and easy-to-use features that will be available in the next release of ENVI.


21 Jul 2016

### Getting Electric with ENVI and IDL

Author: Zachary Norman

A few days ago we were lucky to have a pretty awesome lightning storm east of where I live (near Boulder, CO). The storm was far enough east that we weren't getting any rain, but the distance provided a great view of the amazing lightning show going on in the clouds. Here is an animation of some pictures I took of the storm:

From my vantage point, you could see tons of lightning (one or two strikes each second), so I decided to grab my camera and snap a few pictures. Those pictures then became the data for this blog post. Because there was so much lightning, I wondered whether I could take advantage of ENVI's analytics to go through my images and detect the lightning in them.

This turned out to be pretty easy to code up with three tasks and a raster series. To find the lightning, I decided to use anomaly detection, which works really well for finding features that stand out from their surroundings (similar to feature extraction). Anomaly detection locates local areas that differ from their background, which was perfect for this scenario because lightning is bright compared to the clouds behind it. Once you find the anomalies, you just turn the anomalous raster into a classification raster and, optionally, perform classification cleanup. The only cleanup I performed was classification sieving, to throw out single pixels that were flagged as anomalies. In the end, the tasks I used were:

RXAnomalyDetection

PercentThresholdClassification

ClassificationSieving

To make the results friendlier to visualize, I took one additional step: combining the two raster series into one. I did this by taking the data from each raster (the original image and the lightning pixels) and producing a single raster containing both datasets. This lets you visualize the original data and the processed data side by side, so you can see where the lightning should be in the images.
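The combination step amounts to stacking the two frames along one spatial dimension. A minimal NumPy sketch of the idea (the IDL combine_series procedure later in this post does the equivalent with a pre-allocated byte array; the function name here is illustrative):

```python
import numpy as np

def stack_for_comparison(image_rgb, classified_rgb):
    """Stack an original frame together with its classification result.

    Both inputs are (rows, cols, 3) byte arrays; the output is
    (2*rows, cols, 3), so the pair can be viewed as a single frame.
    """
    assert image_rgb.shape == classified_rgb.shape
    return np.concatenate([image_rgb, classified_rgb], axis=0)

frame = np.zeros((4, 6, 3), dtype=np.uint8)       # stand-in photo
result = np.full((4, 6, 3), 255, dtype=np.uint8)  # stand-in class image
print(stack_for_comparison(frame, result).shape)  # (8, 6, 3)
```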

After I had that extra piece of code, it just came down to running it, which took a bit of time, but produced some cool output. Here is what the lightning-detector produced for a final product:

From the animation, you can see the detector does a pretty good job of finding the lightning. If the clouds behind the lightning are really bright, the lightning isn't always found, but the detector works great when the clouds are dark.

In case anyone wants to take a stab at this on their own, below is the IDL code that I used to generate the raster series behind the images above. To try it with your own data, create a raster series ahead of time; it becomes the INPUT_RASTER_SERIES for the find_lightning procedure. There are three procedures. The first, run_find_lightning, drives the other two. To use it, just change 'C:\path\to\raster\series' to the path to your raster series.

pro run_find_lightning
compile_opt idl2

;start ENVI if it hasn't opened already
e = envi(/current)
if (e eq !NULL) then begin
  e = envi(/headless)
endif

;check if we have opened our raster series yet
if ISA(series, 'ENVIRASTERSERIES') then begin
  ;return to first raster if opened already
  series.first
endif else begin
  series = ENVIRasterSeries('C:\path\to\raster\series')
endelse

;find where the lightning is in the images
find_lightning, $
  INPUT_RASTER_SERIES = series, $
  OUTPUT_RASTER_SERIES = output_raster_series

;return both raster series to their first raster in case they aren't already
series.first
output_raster_series.first

;combine our raster series images together to produce a mega raster series!
combine_series, $
  INPUT_SERIES_1 = series, $
  INPUT_SERIES_2 = output_raster_series, $
  OUTPUT_COMBINED_SERIES = output_combined_series
end

There are two more procedures needed by run_find_lightning. Here is the first, find_lightning.

;find lightning!
pro find_lightning, $
  INPUT_RASTER_SERIES = input_raster_series, $
  OUTPUT_RASTER_SERIES = output_raster_series
compile_opt idl2

;get current session of ENVI, start UI if not open
e = envi(/current)
if (e eq !NULL) then begin
  e = envi()
endif

;create list to hold our lightning raster files
lightning_raster_uris = list()

;iterate over each raster in the series
nrasters = input_raster_series.COUNT
foreach raster, input_raster_series, count do begin
  ;perform anomaly detection
  AnomTask = ENVITask('RXAnomalyDetection')
  AnomTask.INPUT_RASTER = raster
  AnomTask.ANOMALY_DETECTION_METHOD = 'RXD'
  AnomTask.MEAN_CALCULATION_METHOD = 'local'
  AnomTask.KERNEL_SIZE = 15
  AnomTask.execute

  ;open output
  anom_raster = e.openraster(AnomTask.OUTPUT_RASTER_URI)

  ;threshold anomalies to classification
  ThreshTask = ENVITask('PercentThresholdClassification')
  ThreshTask.INPUT_RASTER = anom_raster
  ThreshTask.THRESHOLD_PERCENT = .05
  ThreshTask.execute

  ;open output
  thresh_raster = e.openraster(ThreshTask.OUTPUT_RASTER_URI)

  ;sieve the results
  SieveTask = ENVITask('ClassificationSieving')
  SieveTask.INPUT_RASTER = thresh_raster
  SieveTask.MINIMUM_SIZE = 40
  SieveTask.Execute

  ;open output
  sieve_raster = e.openraster(SieveTask.OUTPUT_RASTER_URI)

  ;save result
  lightning_raster_uris.add, sieve_raster.URI

  ;close intermediate rasters
  anom_raster.close
  thresh_raster.close
  sieve_raster.close

  ;print some info about how many images we have processed
  print, string(9b) + 'Completed lightning finder on ' + strtrim(count+1,2) + $
    ' of ' + strtrim(nrasters,2) + ' rasters'
endforeach

;convert lightning raster uris to an array
lightning_raster_uris = lightning_raster_uris.toarray()

;create a raster series
SeriesTask = ENVITask('BuildRasterSeries')
SeriesTask.INPUT_RASTER_URI = lightning_raster_uris
SeriesTask.Execute
output_raster_series = ENVIRasterSeries(SeriesTask.OUTPUT_RASTERSERIES_URI)
end

Here is the last procedure, combine_series, which combines the two raster series into one image.

pro combine_series, $
  INPUT_SERIES_1 = input_series_1, $
  INPUT_SERIES_2 = input_series_2, $
  OUTPUT_COMBINED_SERIES = output_combined_series

compile_opt idl2

e = envi(/current)
if (e eq !NULL) then begin
e = envi()
endif

;save combined image URIs
combined_uris = list()

;combine the images together into one MEGA series
nrasters = input_series_1.COUNT
for i=0, nrasters-1 do begin
;get rasters
image_raster = input_series_1.RASTER
lightning_raster = input_series_2.RASTER

;get the class colors from the classification raster metadata
lightning_meta = lightning_raster.METADATA
lightning_colors = lightning_meta['CLASS LOOKUP']

;pre-allocate a byte array to hold the data from each raster
if (i eq 0) then begin
dims = [image_raster.NCOLUMNS, image_raster.NROWS]
data = make_array(dims[0], 2*dims[1], 3, TYPE=1, VALUE=0)
endif

;get image data
image_data = image_raster.GetData(INTERLEAVE='BSQ')

;get lightning data
lightning_data = lightning_raster.GetData()
;convert lightning data to RGB
temp = reform(lightning_colors[*, lightning_data], 3, dims[0], dims[1])
;change interleave
lightning_data = transpose(temp, [1, 2, 0])

;fill preallocated array with data
data[*,0:dims[1]-1,*] = image_data
data[*,dims[1]:*,*] = lightning_data

;make a new raster
outraster = ENVIRaster(data)
outraster.save

;save output raster URI
combined_uris.add, outraster.URI
;step to next rasters
input_series_1.next
input_series_2.next
print, string(9b) + 'Combined ' + strtrim(i+1,2) + ' of ' + strtrim(nrasters,2) + ' total rasters'

endfor

;convert list to array
combined_uris = combined_uris.toarray()

;create another raster series with our combined images
SeriesTask = ENVITask('BuildRasterSeries')
SeriesTask.INPUT_RASTER_URI = combined_uris
SeriesTask.Execute

;open the output raster series
output_combined_series = ENVIRasterSeries(SeriesTask.OUTPUT_RASTERSERIES_URI)
end
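To tie the pieces together, a minimal driver might look like the sketch below. Note the names here are assumptions, not part of the code above: it assumes the lightning-finder procedure shown earlier is called find_lightning, and that image_series already holds an ENVIRasterSeries of the original scenes. Adjust both to match your own code.

```idl
;hypothetical driver -- find_lightning and image_series are assumed names
pro run_lightning_workflow, image_series
compile_opt idl2

;get current session of ENVI, start UI if not open
e = envi(/current)
if (e eq !NULL) then begin
e = envi()
endif

;flag lightning strikes in every raster of the input series
find_lightning, $
INPUT_RASTER_SERIES = image_series, $
OUTPUT_RASTER_SERIES = lightning_series

;stack each original scene with its lightning classification
combine_series, $
INPUT_SERIES_1 = image_series, $
INPUT_SERIES_2 = lightning_series, $
OUTPUT_COMBINED_SERIES = combined_series

end
```

Because combine_series steps through both series with .next, make sure each series is at its first raster before the call (ENVIRasterSeries starts there when freshly opened).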


Happy lightning hunting!

Categories: ENVI Blog | Imagery Speaks

15 Jul 2016

### TWI Finish Line

Author: Guss Wright

Hello Everyone,

This is officially my last blog as an Army Training with Industry (TWI) student, although you will certainly hear from me again. Can you believe a year has passed already? When I came on board at Harris Geospatial last August, the goal was to facilitate mutually improved understanding, strengthen the partnership, and better learn ENVI to ultimately enhance the US Army’s combat effectiveness. With all that has been learned, shared, and documented in the last 12 months, I think we’ve accomplished what we set out to do and more. I was made to feel as though I was a part of the Harris Geospatial team. To reciprocate this hospitality, a few Army Challenge Coins have been passed out this week. If you are a Soldier, then you know what that means. Harris is a part of the team, so when you see a member at the upcoming ENVI Analytics Symposium or any other conference or encounter, challenge them to show you their coin or beverages are on them; just kidding about the beverages.

With respect to the past twelve months, I’d say it has been an absolute marathon of learning. When I first arrived, my experience with ENVI was novice at best. I had successfully implemented solutions such as anomaly detection and change detection during tours in Operation Iraqi Freedom and Operation New Dawn. However, like many other Defense & Intelligence users, I was still heavily reliant on other software suites to perform certain workflows, such as mosaicking, orthorectification, and producing specialized Compressed Arc-Digitized Raster Graphics (CADRG). This wasn’t because ENVI couldn’t accomplish these tasks, but rather because Soldiers like me just didn’t know how to accomplish them using ENVI.

I’m confident enough to now say that this knowledge gap has been bridged for the D&I community with the help of the ENVI Pocket Guides, VOLUME 1 | BASICS and the recently finished VOLUME 2 | INTERMEDIATE.

Volume 1 provides succinct instructions on how to perform the following tasks using ENVI:

1. Understand the Interface
2. Mosaic data
3. Subset data
4. Orthorectify data
5. Pan Sharpen data
6. Perform change detection
7. Perform anomaly detection
8. Produce a terrain categorization (TERCAT)
9. Export data to CADRG

Volume 2 builds on the basics by providing succinct steps on how to perform the following tasks using ENVI and IDL:

1. Add Grid reference & count features
2. Perform Band Math
3. Layer Stack images
4. Exploit Shortwave Infrared (SWIR) bands
5. Perform Spectral Analysis in general
6. Perform Image Calibration/Atmospheric Correction
7. Extract features from LiDAR
8. Batch Processing using IDL

Keep an eye on this blog for a hyperlink to VOLUME 2 of the Pocket Guide. It’s currently being formatted and printed.

TWI has been an honor and a privilege. I strongly recommend continuation of this program by both Harris Geospatial and the Army. I can certainly say that Army Technicians’ and Noncommissioned Officers’ development yearns for such opportunities. There is absolutely no way I could have learned enough to compile the Pocket Guides in any other setting. Again, it has been a marathon, but I’d do it again in a heartbeat. Thanks for the hospitality and opportunity from the bottom of my heart to the good folks at ENVI.

Chief Wright~ Out for Now.

Categories: ENVI Blog | Imagery Speaks

© 2017 Exelis Visual Information Solutions, Inc., a subsidiary of Harris Corporation