28 Sep 2016

The Amazing Race!

Author: Zachary Norman

It wasn't so long ago that the IDL-Python bridge was introduced in IDL 8.5. It was with this new version that I got my first experience programming with Python and testing the IDL-Python bridge. Over the past year it has been exciting to see the new changes and improvements that have become part of the bridge.



Some of these new features include:

-Better error catching with the IDL-Python bridge

-Enhanced Jupyter notebook that allows for the development of full IDL programs and Python code in the same environment

-Improved support for variables passing back and forth


With all the time I have spent with Python, I have often wondered how it and IDL really compare. One thing I have commonly heard from engineers is that IDL is much faster than Python. For this blog, I decided to put that claim to the test and see how Python and IDL really compare to one another.



Before getting to the test, I want to explain a bit about how it was set up and mention some caveats about the processing times that will be shown. I did my best to choose tests that were comparable between IDL and Python. Since I'm no expert at Python, there may well be other methods that are faster than what I show here. Most of the pieces I included in the test are things I found easily with a web search, meaning that the approaches I used are the common programming methods people are likely using. In other words, the comparison reflects how IDL stacks up against a typical program someone might write in Python, not a highly optimized one.





The test:

Here is what was actually tested between IDL and Python, using a [10000, 10000] array (10000 x 10000 elements):

-Array creation time

-Type conversion times

-Index array creation times (i.e. [0,1,2,3,4...,n-1])

-Incrementing all array elements by 1

-A complex math expression on the array (exact equation: sqrt(sin(arr*arr))), run both single-threaded and multithreaded in IDL

-Array element access times (i.e. setting y = arr[i])

-Simple image processing filter times (filters: sobel, roberts, prewitt)
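As a rough illustration of the benchmarking pattern (each test is run several times and the mean, sample standard deviation, and total wall-clock time are reported), here is how the array creation test looks in pure Python with NumPy. The array size is reduced from 10000 x 10000 so the sketch runs quickly, and `ddof=1` matches IDL's sample-based STDDEV:

```python
import time
import numpy as np

nloops = 3     # number of repetitions, as in the IDL procedure below
dims = 1000    # the blog post used 10000

times = np.zeros(nloops)
for i in range(nloops):
    t0 = time.perf_counter()
    arr = np.ones((dims, dims))          # the array creation test
    times[i] = time.perf_counter() - t0

# ddof=1 gives the sample standard deviation, matching IDL's STDDEV
print("mean  %.6f +/- %.6f seconds" % (times.mean(), times.std(ddof=1)))
print("total %.6f seconds" % times.sum())
```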




The results:

Average array creation time (seconds):

    Python : 0.213000 +/- 0.00953933

    IDL    : 0.0936666 +/- 0.0155028

Total time (seconds):

    Python : 0.639000

    IDL    : 0.281000

Python/IDL time ratio: 2.27402

Average array data type conversion time (seconds):

    Python : 0.171333 +/- 0.0155028

    IDL    : 0.0730000 +/- 0.00866031

Total time (seconds):

    Python : 0.514000

    IDL    : 0.219000

Python/IDL time ratio: 2.34703





Average index array creation time (seconds):

    Python : 0.229000 +/- 0.00866031

    IDL    : 0.124667 +/- 0.0160104

Total time (seconds):

    Python : 0.687000

    IDL    : 0.374000

Python/IDL time ratio: 1.83690





Average increasing array value time (seconds):

    Python : 0.0933333 +/- 0.000577446

    IDL    : 0.0313334 +/- 0.000577377

Total time (seconds):

    Python : 0.280000

    IDL    : 0.0940001

Python/IDL time ratio: 2.97872
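One detail worth noting about the increment test: the benchmark uses NumPy's in-place operator. `arr += 1` updates the existing buffer, while `arr = arr + 1` allocates a whole new array, so the in-place form is the fair comparison to IDL's `arr += 1`. A quick check:

```python
import numpy as np

arr = np.ones((1000, 1000))

# In-place add: the existing buffer is modified, no new allocation
before = id(arr)
arr += 1
assert id(arr) == before      # still the same array object

# Copying add: a fresh array is allocated for the result
arr2 = arr + 1
assert id(arr2) != before

print(arr[0, 0], arr2[0, 0])  # 2.0 3.0
```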





Average complex math statements (1 thread) time (seconds):

    Python : 6.36967 +/- 0.0645319

    IDL    : 8.34667 +/- 0.0155028

Total time (seconds):

    Python : 19.1090

    IDL    : 25.0400

Python/IDL time ratio: 0.763139





Average complex math statements (8 thread) time (seconds):

    Python : 6.34400 +/- 0.0321871

    IDL    : 1.93933 +/- 0.00923762

Total time (seconds):

    Python : 19.0320

    IDL    : 5.81800

Python/IDL time ratio: 3.27123





Average loop through array element time (seconds):

    Python : 11.5290 +/- NaN

    IDL    : 3.29100 +/- NaN

Total time (seconds):

    Python : 11.5290

    IDL    : 3.29100

Python/IDL time ratio: 3.50319
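The element-access numbers above show why explicit per-element loops are discouraged in both languages. Here is a hedged sketch (on a much smaller array than the benchmark) comparing the looped pattern with its vectorized equivalent:

```python
import time
import numpy as np

arr = np.arange(100_000, dtype=np.float64)

# Element-by-element loop, the pattern the benchmark times
t0 = time.perf_counter()
total_loop = 0.0
for x in np.nditer(arr):
    total_loop += float(x)
loop_time = time.perf_counter() - t0

# Vectorized equivalent: a single call into compiled code
t0 = time.perf_counter()
total_vec = float(arr.sum())
vec_time = time.perf_counter() - t0

print(total_loop == total_vec)  # True
```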





Average image processing routines time (seconds):

    Python : 15.3660 +/- 0.0829635

    IDL    : 1.39900 +/- 0.0238955

Total time (seconds):

    Python : 46.0980

    IDL    : 4.19700

Python/IDL time ratio: 10.9836
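The image test runs scikit-image's `sobel`, `roberts`, and `prewitt` on the Python side (see the code below). Purely as an illustration of what such a filter computes, and not the implementation either library actually uses, here is a minimal 3x3 Sobel gradient magnitude in pure NumPy, applied to the same kind of test image as the benchmark (a square of ones in a field of zeros):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude with 3x3 Sobel kernels (border pixels left at zero)."""
    p = img.astype(np.float64)
    gx = np.zeros_like(p)
    gy = np.zeros_like(p)
    # Horizontal gradient: right column minus left column, weighted 1-2-1
    gx[1:-1, 1:-1] = (
        (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:])
        - (p[:-2, :-2] + 2 * p[1:-1, :-2] + p[2:, :-2])
    )
    # Vertical gradient: bottom row minus top row, weighted 1-2-1
    gy[1:-1, 1:-1] = (
        (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:])
        - (p[:-2, :-2] + 2 * p[:-2, 1:-1] + p[:-2, 2:])
    )
    return np.sqrt(gx**2 + gy**2)

# A square of ones in the middle of a zero image, like the benchmark input
im = np.zeros((100, 100))
im[40:60, 40:60] = 1
edges = sobel_magnitude(im)
# Response is concentrated on the square's border, zero elsewhere
print(edges[40, 50] > 0, edges[50, 50] == 0, edges[10, 10] == 0)
```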







Conclusion:



In short, IDL significantly outperformed Python in all the speed tests apart from the single-threaded complex math statement. However, IDL has built-in multithreading for large arrays, and with multithreading enabled IDL outperforms Python significantly when using all available cores.



Below is the IDL code used to compare the processing speed of IDL and Python. To use it, you will need a few Python modules, which are imported at the beginning of the procedure "python_test".

IDL Code:
pro printTimes, pythontimes, idltimes, TIMED = timed
  compile_opt idl2
  
  ;check if we have a name for the process to print with the mean
  if ~keyword_set(timed) then begin
    add = ' time (seconds):'
  endif else begin
    add = timed + ' time (seconds):'
  endelse
  
  print, 'Average ' + add 
  print, '  Python : ' + strtrim(pythontimes.mean(),2) + ' +/- ' + strtrim(stddev(pythontimes),2)
  print, '  IDL    : ' + strtrim(idltimes.mean(),2) + ' +/- ' + strtrim(stddev(idltimes),2)
  print, 'Total time (seconds):'
  print, '  Python : ' + strtrim(total(pythontimes),2)
  print, '  IDL    : ' + strtrim(total(idltimes),2)
  print, 'Python/IDL time ratio: ' + strtrim(total(pythontimes)/total(idltimes),2)
  
end

pro python_test
  compile_opt idl2
  
  
  ;initialize Python
  >>> 'import numpy as np'
  >>> 'import time'
  >>> 'import math'
  >>> 'from idlpy import *'
  >>> 'from skimage.filters import roberts, sobel, prewitt'
  
  ;number of times we want to run each test
  nloops = 3
  
  ;array dimensions for iterator tests
  dims = 10000
  ;>>>'dims = ' + strtrim(dims,2)
  python.dims = dims
  
  ;array dimensions for filter tests
  dims_filter = 10000
  python.dims_filter = dims_filter
  
  ;initialize arrays to hold Python and IDL times
  pythontimes = fltarr(nloops)
  idltimes = fltarr(nloops)
  
  ;test array creation in Python and IDL
  for i=0,nloops-1 do begin
    tStart = systime(/seconds)
    >>> 'arr = np.ones((dims,dims))'
    tEndPython = systime(/seconds)
    arr = fltarr(dims,dims)
    tEndIDL = systime(/seconds)
    
    ;save the times
    pythontimes[i] = tEndPython - tStart
    idltimes[i] = tEndIDL - tEndPython
  endfor
  print
  printTimes, pythontimes, idltimes, TIMED = 'array creation'

  ;check type conversion times
  for i=0,nloops-1 do begin
    tStart = systime(/seconds)
    >>> 'arr2 = arr.astype(int)'
    tEndPython = systime(/seconds)
    arr2 = fix(arr)
    tEndIDL = systime(/seconds)
    
    ;save the times
    pythontimes[i] = tEndPython - tStart
    idltimes[i] = tEndIDL - tEndPython
  endfor
  print
  printTimes, pythontimes, idltimes, TIMED = 'array data type conversion'

  ;check index array creation times
  for i=0,nloops-1 do begin
    tStart = systime(/seconds)
    >>> 'arr = np.arange(0, dims*dims, dtype=long)'
    tEndPython = systime(/seconds)
    arr = lindgen(dims*dims)
    tEndIDL = systime(/seconds)

    ;save the times
    pythontimes[i] = tEndPython - tStart
    idltimes[i] = tEndIDL - tEndPython
  endfor
  print
  printTimes, pythontimes, idltimes, TIMED = 'index array creation'

  ;check adding values to array
  for i=0,nloops-1 do begin
    tStart = systime(/seconds)
    >>> 'arr += 1'
    tEndPython = systime(/seconds)
    arr += 1
    tEndIDL = systime(/seconds)

    ;save the times
    pythontimes[i] = tEndPython - tStart
    idltimes[i] = tEndIDL - tEndPython
  endfor
  print
  printTimes, pythontimes, idltimes, TIMED = 'increasing array value'

  ;check complex math expressions with a single thread
  pref_set, 'IDL_CPU_TPOOL_NTHREADS', 1, /commit
  for i=0,nloops-1 do begin
    tStart = systime(/seconds)
    >>> 'y = np.sin(arr*arr)**.5'
    tEndPython = systime(/seconds)
    y = sqrt(sin(arr*arr))
    tEndIDL = systime(/seconds)

    ;save the times
    pythontimes[i] = tEndPython - tStart
    idltimes[i] = tEndIDL - tEndPython
  endfor
  print
  printTimes, pythontimes, idltimes, TIMED = 'complex math statements (' + strtrim(!CPU.TPOOL_NTHREADS,2) + ' thread)'
  pref_set, 'IDL_CPU_TPOOL_NTHREADS', /default, /commit
  
  ;check complex math expressions with all threads
  for i=0,nloops-1 do begin
    tStart = systime(/seconds)
    >>> 'y = np.sin(arr*arr)**.5'
    tEndPython = systime(/seconds)
    y = sqrt(sin(arr*arr))
    tEndIDL = systime(/seconds)

    ;save the times
    pythontimes[i] = tEndPython - tStart
    idltimes[i] = tEndIDL - tEndPython
  endfor
  print
  printTimes, pythontimes, idltimes, TIMED = 'complex math statements (' + strtrim(!CPU.TPOOL_NTHREADS,2) + ' thread)'

  ;check array element access times
  nhere = 1
  for i=0,nhere-1 do begin
    tStart = systime(/seconds)
    >>>'for x in np.nditer(arr):\n    y=x'
    tEndPython = systime(/seconds)
    foreach x, arr do y = x
    tEndIDL = systime(/seconds)

    ;save the times
    pythontimes[i] = tEndPython - tStart
    idltimes[i] = tEndIDL - tEndPython
  endfor
  print
  printTimes, pythontimes[0:nhere-1], idltimes[0:nhere-1], TIMED = 'loop through array element'

  ;check times for image processing
  im_dat = lonarr(dims_filter, dims_filter)
  ;set a square in the middle of the image to 1
  im_dat[.4*dims_filter:.6*dims_filter,.4*dims_filter:.6*dims_filter] = 1
  ;send the array to Python as well
  python.im_dat = im_dat
  
  for i=0,nloops-1 do begin
    tStart = systime(/seconds)
    >>> 'edge_sobel = sobel(im_dat)'
    >>> 'edge_roberts = roberts(im_dat)'
    >>> 'edge_prewitt = prewitt(im_dat)'
    tEndPython = systime(/seconds)
    edge_sobel = sobel(im_dat)
    edge_roberts = roberts(im_dat)
    edge_prewitt = prewitt(im_dat)
    tEndIDL = systime(/seconds)

    ;save the times
    pythontimes[i] = tEndPython - tStart
    idltimes[i] = tEndIDL - tEndPython
  endfor
  print
  printTimes, pythontimes, idltimes, TIMED = 'image processing routines'

  stop
end


Categories: IDL Blog | IDL Data Point


21 Jul 2016

Getting Electric with ENVI and IDL

Author: Zachary Norman

A few days ago we were lucky to have a pretty awesome lightning storm east of where I live (near Boulder, CO). The storm was far enough east that we weren't getting any rain, but the distance also provided a great view of the amazing lightning show going on in the clouds. Here is an animation of some pictures I took of the storm:

From my vantage point, you could see tons of lightning (one or two strikes each second), so I grabbed my camera and snapped some great pictures. Those pictures then became my data for this blog post. Because there was so much lightning, I wondered if I could take advantage of ENVI's analytics to go through my images and detect the lightning in them.

This turned out to be pretty easy to code up using three tasks and a raster series. To find the lightning, I decided to use anomaly detection, which works really well for finding features that stand out from their surroundings (similar to feature extraction). Anomaly detection flags local areas that differ from their neighborhood, which was perfect for this scenario because lightning is bright compared to the clouds behind it. Once you find your anomalies, you just have to turn the anomalous raster into a classification raster and, optionally, perform classification cleanup. The only cleanup I performed was classification sieving to throw out single pixels that were flagged as anomalies. In the end, the tasks I used were:

RXAnomalyDetection

PercentThresholdClassification

ClassificationSieving
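ENVI provides these steps as tasks, so you never write the math yourself. Purely as an illustration of the underlying idea (not ENVI's actual RXD algorithm), here is a toy version in NumPy: score each pixel by its deviation from a local mean, then keep only the brightest fraction of pixels, roughly how RXAnomalyDetection and PercentThresholdClassification work together below. The kernel size of 15 mirrors the KERNEL_SIZE used later; the image and "lightning" streak are synthetic.

```python
import numpy as np

def local_mean(img, k):
    """k x k moving average (k odd) computed with an integral image."""
    r = k // 2
    pad = np.pad(img.astype(np.float64), r, mode="edge")
    S = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    S[1:, 1:] = pad.cumsum(axis=0).cumsum(axis=1)
    sums = S[k:, k:] - S[:-k, k:] - S[k:, :-k] + S[:-k, :-k]
    return sums / (k * k)

def rx_like_anomalies(img, k=15, percent=0.05):
    """Score = |pixel - local mean|; keep the top `percent` % of pixels."""
    score = np.abs(img - local_mean(img, k))
    cut = np.percentile(score, 100 - percent)
    return score >= cut

# Dark "clouds" with one bright, lightning-like vertical streak
rng = np.random.default_rng(0)
img = rng.normal(10.0, 1.0, (200, 200))
img[80:120, 100] = 50.0
mask = rx_like_anomalies(img, k=15, percent=0.2)
print(mask[100, 100], int(mask.sum()))
```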

To make the results easier to visualize, I took one additional step and combined the images from the two raster series into one. I did this by taking the data from each raster (the original image and the lightning pixels) and producing a single raster containing both datasets. This lets you visualize the original and processed data side by side, so you can see where the lightning appears in the images.
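The combination step itself is simple array stacking. Here is a hedged NumPy sketch with hypothetical stand-ins for one frame from each series (the procedure below does the same thing per band with ENVIRaster objects):

```python
import numpy as np

# Hypothetical stand-ins for one frame of each series:
# an RGB photo and an RGB rendering of the lightning classification
photo = np.zeros((480, 640, 3), dtype=np.uint8)
classes = np.full((480, 640, 3), 255, dtype=np.uint8)

# Stack the two frames top-to-bottom into a single image,
# matching the [columns, 2*rows, 3] layout used in combine_series
combined = np.concatenate([photo, classes], axis=0)
print(combined.shape)  # (960, 640, 3)
```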

After I had that extra piece of code, it just came down to running it, which took a bit of time but produced some cool output. Here is the final product from the lightning detector:

From the animation, you can see it does a pretty good job of finding the lightning. If the clouds are really bright behind the lightning, the lightning isn't always found, but the detector works great when the clouds are dark.

In case anyone wants to take a stab at this on their own, below is the IDL code I used to generate the raster series behind the images above. To use it with your own data, first create a raster series, which becomes the INPUT_RASTER_SERIES for the find_lightning procedure. There are three procedures in total. The first, run_find_lightning, drives the other two; to use it, just replace 'C:\path\to\raster\series' with the path to your raster series.

pro run_find_lightning
  compile_opt idl2
  
  ;start ENVI if it hasn't opened already
  e = envi(/current)
  if (e eq !NULL) then begin
    e = envi(/headless)
  endif
  ;check if we have opened our raster series yet
  if ISA(series, 'ENVIRASTERSERIES') then begin
    ;return to first raster if opened already
    series.first
  endif else begin
    series = ENVIRasterSeries('C:\path\to\raster\series')
  endelse
  
  ;find where the lightning is in the images
  find_lightning,$
    INPUT_RASTER_SERIES = series,$
    OUTPUT_RASTER_SERIES = output_raster_series
  
  ;return both raster series to their first raster in case they aren't already
  series.first
  output_raster_series.first
  
  ;combine our raster series images together to produce a mega raster series!
  combine_series,$
    INPUT_SERIES_1 = series,$
    INPUT_SERIES_2 = output_raster_series,$
    OUTPUT_COMBINED_SERIES = output_combined_series

end

There are two more procedures needed by run_find_lightning. Here is the first, find_lightning.

;find lightning!
pro find_lightning, $
  INPUT_RASTER_SERIES = input_raster_series,$
  OUTPUT_RASTER_SERIES = output_raster_series
  
  compile_opt idl2
  
  ;get current session of ENVI, start UI if not open
  e = envi(/current)
  if (e eq !NULL) then begin
    e = envi()
  endif
  
  ;create list to hold our lightning raster files
  lightning_raster_uris = list()
  
  ;iterate over each raster series
  nrasters = input_raster_series.COUNT
  foreach raster, input_raster_series, count do begin
   
    
    ;perform anomaly detection
    AnomTask = ENVITask('RXAnomalyDetection')
    AnomTask.INPUT_RASTER = raster
    AnomTask.ANOMALY_DETECTION_METHOD = 'RXD'
    AnomTask.MEAN_CALCULATION_METHOD = 'local'
    AnomTask.KERNEL_SIZE = 15
    AnomTask.execute
    
    ;open output
    anom_raster = e.openraster(AnomTask.OUTPUT_RASTER_URI)
    
    ;threshold anomalies to classification
    ThreshTask = ENVITask('PercentThresholdClassification')
    ThreshTask.INPUT_RASTER = anom_raster
    ThreshTask.THRESHOLD_PERCENT = .05
    ThreshTask.execute
    
    ;open output
    thresh_raster = e.openraster(ThreshTask.OUTPUT_RASTER_URI)
    
    ;sieve the results
    SieveTask = ENVITask('ClassificationSieving')
    SieveTask.INPUT_RASTER = thresh_raster
    SieveTask.MINIMUM_SIZE = 40
    SieveTask.Execute
    
    ;open output
    sieve_raster = e.openraster(SieveTask.OUTPUT_RASTER_URI)
    
    ;save result
    lightning_raster_uris.add, sieve_raster.URI
    
    ;close intermediate rasters
    anom_raster.close
    thresh_raster.close
    sieve_raster.close
    
    ;print some info about how many images we have processed
    print, string(9b) + 'Completed lightning finder on ' + strtrim(count+1,2) + ' of ' + strtrim(nrasters,2) + ' rasters'
    
  endforeach
  
  ;convert lightning raster uris to an array
  lightning_raster_uris = lightning_raster_uris.toarray()
  
  ;create a raster series
  SeriesTask = ENVITask('BuildRasterSeries')
  SeriesTask.INPUT_RASTER_URI = lightning_raster_uris
  SeriesTask.Execute
  
  output_raster_series = ENVIRasterSeries(SeriesTask.OUTPUT_RASTERSERIES_URI)
  
end

Here is the last procedure, combine_series, which combines the two raster series together in one image.

pro combine_series,$
  INPUT_SERIES_1 = input_series_1,$
  INPUT_SERIES_2 = input_series_2,$
  OUTPUT_COMBINED_SERIES = output_combined_series
  
  compile_opt idl2
  
  e = envi(/current)
  if (e eq !NULL) then begin
    e = envi()
  endif
  
  ;save combined image URIs
  combined_uris = list()

  ;combine the images together into one MEGA series
  nrasters = input_series_1.COUNT
  for i=0, nrasters-1 do begin
    ;get rasters
    image_raster = input_series_1.RASTER
    lightning_raster = input_series_2.RASTER
    lightning_meta = lightning_raster.metadata.dehydrate()
    lightning_colors = lightning_meta['CLASS LOOKUP']

    ;pre-allocate a byte array to hold the data from each raster
    if (i eq 0) then begin
      dims = [image_raster.NCOLUMNS, image_raster.NROWS]
      data = make_array(dims[0], 2*dims[1], 3, TYPE=1, VALUE=0)
    endif

    ;get image data
    image_data = image_raster.GetData(INTERLEAVE='BSQ')

    ;get ligtning data
    lightning_data = lightning_raster.GetData()
    ;convert lightning data to RGB
    temp = reform(lightning_colors[*, lightning_data], 3, dims[0], dims[1])
    ;change interleave
    lightning_data = transpose(temp, [1, 2, 0])

    ;fill preallocated array with data
    data[*,0:dims[1]-1,*] = image_data
    data[*,dims[1]:*,*] = lightning_data

    ;make a new raster
    outraster = ENVIRaster(data)
    outraster.save

    ;save output raster URI
    combined_uris.add, outraster.URI

    ;step to next rasters
    input_series_1.next
    input_series_2.next
    print, string(9b) + 'Combined ' + strtrim(i+1,2) + ' of ' + strtrim(nrasters,2) + ' total rasters'

  endfor

  ;convert list to array
  combined_uris = combined_uris.toarray()

  ;create another raster series with our combined images
  SeriesTask = ENVITask('BuildRasterSeries')
  SeriesTask.INPUT_RASTER_URI = combined_uris
  SeriesTask.Execute

  ;open the output raster series
  output_combined_series = ENVIRasterSeries(SeriesTask.OUTPUT_RASTERSERIES_URI)

end


Happy lightning hunting!


Categories: ENVI Blog | Imagery Speaks


24 Jun 2016

Crop Counting on Mars!?

Author: Zachary Norman

Believe it or not, there is a new market for precision agriculture on our big red neighbor. However, this is not your conventional agriculture analysis, but rather an innovative use of the crop counter tool in the Precision Ag Toolkit.

Our crop of choice is not alive, but rather the lifeless scars on the surface of Mars. Because Mars has such a thin atmosphere (about 0.6% of Earth's mean sea-level atmospheric pressure), smaller meteorites don't necessarily burn up in the atmosphere like they do in our night sky. Instead, the surface of Mars is covered in small, round craters, which make ideal objects to find with the crop counter tool. These craters range in size from a few meters in diameter to kilometers across. For this case, I'm focusing on craters that are about 40-50 meters in diameter.



Step One: Getting the Data

The coolest thing about all of the international space missions is that most of the data collected is easy to find and freely available. For Mars, I decided to use a DEM (Digital Elevation Model) derived from stereo pairs from the HiRISE (High Resolution Imaging Science Experiment) sensor on board the Mars Reconnaissance Orbiter. The HiRISE sensor takes images with up to 0.3 meter spatial resolution, which captures a lot of detail. The DEM I used has a spatial resolution of 0.99 meters per pixel. The exact DEM I decided to go with can be found here

Here is a screenshot of a portion of the image that I used to count craters:


I chose this image because it has lots of uniform, round craters. After exploring them a bit, I decided to look for craters between 35 and 45 meters across. Before I could count the craters, I needed to figure out how I wanted to count them.



Step Two: Preprocessing to get the Data Analysis Ready

For the crop counter, the best way to get accurate results is to preprocess your data and do some thresholding. The first reason is that thresholding helps prevent false positives from being counted. Second, thresholding drastically increases the speed of finding crop centers, because the crop counter skips pixels that are NaN. For crater counting, this preprocessing step becomes a matter of making the craters pop out of the image.
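The NaN trick is easy to picture in NumPy terms: pixels below the threshold are set to NaN so a downstream tool can skip them entirely. A small hedged sketch with made-up depth values:

```python
import numpy as np

# Hypothetical crater-depth values for a tiny 3x3 patch
depth = np.array([[0.20, 0.90, 0.10],
                  [1.30, 0.40, 0.80],
                  [0.05, 0.76, 2.00]])

# Keep only pixels deeper than the threshold; mark the rest NaN
# so a downstream counter can skip them
thresh = 0.75
masked = np.where(depth > thresh, depth, np.nan)

print(int(np.isnan(masked).sum()))  # 4
```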

After some thinking, I decided that to isolate the crater pits themselves I would use some smoothing, band math, and then thresholding. Here are the exact steps I took to count craters:

1) Smooth the data with a kernel that is twice the size of the largest crater we are looking for.

Because we are looking for craters between 35 and 45 meters in diameter (at roughly 1 meter per pixel), the kernel size for smoothing was 91. Below you can see the smoothed DEM with a similar color table as above:

2) Take the difference between the smoothed data set and the actual DEM. 

With the smoothed DEM, the crater features we are looking for are essentially erased. This means that the difference between the smoothed image and the original image approximates the crater depth. Because the crop counting tool works with bright images, we can use this crater depth image with the plant counter after one more step. Below you will see the difference between the DEM and the smoothed image. The green areas have a higher value (height in meters) and the red areas a lower value.

3) Threshold our crater depth image

This step is very important for reducing false positives because the difference image is quite noisy in places. It eliminates smaller craters and noise by requiring the crater depth to be greater than 'x' meters. I used 0.75 meters, which really isolates the craters I was looking for. The image below illustrates this: it shows the original DEM with the thresholded difference image overlaid. At this point we have isolated most of the craters, with a few spots of noise here and there.
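The three steps above can be sketched in NumPy. This is illustrative only: a synthetic DEM with one Gaussian pit stands in for the HiRISE data, and a 41-pixel box smooth stands in for the 91-pixel kernel (a box filter is an assumption, not necessarily the smoothing ENVI applies):

```python
import numpy as np

def box_smooth(img, k):
    """k x k moving average (k odd) via an integral image."""
    r = k // 2
    pad = np.pad(img.astype(np.float64), r, mode="edge")
    S = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    S[1:, 1:] = pad.cumsum(axis=0).cumsum(axis=1)
    return (S[k:, k:] - S[:-k, k:] - S[k:, :-k] + S[:-k, :-k]) / (k * k)

# Synthetic DEM: gentle regional slope with one crater-like pit
y, x = np.mgrid[0:200, 0:200]
dem = 0.01 * x
dem -= 3.0 * np.exp(-((x - 100)**2 + (y - 100)**2) / (2 * 8.0**2))

# 1) smooth with a kernel larger than the crater
smoothed = box_smooth(dem, 41)
# 2) the difference approximates crater depth
depth = smoothed - dem
# 3) threshold to isolate real craters
craters = depth > 0.75

print(craters[100, 100], craters[20, 20])  # True False
```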

Step Three: Using the Crop Counter!

Now that we have preprocessed our data and have a great image to count craters in, it's time to use the crop counter. Here are the settings I used in the crop counter to find the crater locations:

CropCountTask = ENVITask('AgCropCount')
CropCountTask.INPUT_RASTER = ThresholdRaster
CropCountTask.WIDTH_IN_METERS = 0
CropCountTask.PERCENT_OVERLAP = .01
CropCountTask.SMOOTH_DATA = 1
CropCountTask.SEARCH_WIDTH = [35,45,1]
CropCountTask.MAX_TILE_SIZE = [400,400]


After running the task, here is what the results look like. The image is a grayscale version of the DEM with the detected crater locations shown as cyan circles.

As you can see, the crop counting tool does a pretty good job of finding the actual crater locations!


Categories: ENVI Blog | Imagery Speaks


6 May 2016

Getting creative with ENVI + IDL

Author: Zachary Norman

When it comes to ENVI + IDL, I like to get creative and see what I can accomplish with all of the awesome tools I have available. For some background: a few months ago I got my first DSLR camera. When I wanted to take pictures of larger scenes, I realized I didn't have the tools I needed to generate mosaics or panoramas of vast landscapes.

In a previous blog post, Creating Mosaics from Pluto Imagery, I used a very simple approach to reference images with respect to one another: setting a constant offset between each image. If you are taking pictures by hand, that is harder to do, so I turned to my great friend IDL for some help.

With IDL's function graphics you can create interactive graphics windows. I decided to use these to position my images, because I can interactively move the different images around and put them where they should be. Then, based on their final positions in the window, I can create spatial references so that ENVI has an idea of where the images are relative to one another. This is important because ENVI's Seamless Mosaic tool and Image Registration Workflow both need spatial references for your images.
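The spatial-reference idea reduces to simple arithmetic: once an image's final position rectangle in the normalized window is known, a tie point and pixel sizes follow directly. A hedged sketch of that arithmetic (the helper name and the `ground_size` mapping are illustrative assumptions, not the widget's actual code):

```python
def spatial_ref_from_position(position, ncols, nrows, ground_size=1000.0):
    """position = [x0, y0, x1, y1] in normalized window coordinates.
    Returns (tie_point_x, tie_point_y, x_pixel_size, y_pixel_size),
    with the tie point at the image's upper-left corner and the whole
    window assumed to span `ground_size` ground units on each axis."""
    x0, y0, x1, y1 = position
    x_size = (x1 - x0) * ground_size / ncols   # ground units per pixel
    y_size = (y1 - y0) * ground_size / nrows
    tie_x = x0 * ground_size
    tie_y = y1 * ground_size                   # upper-left corner
    return tie_x, tie_y, x_size, y_size

# A 500x500-pixel image occupying the middle half of the window
ref = spatial_ref_from_position([0.25, 0.25, 0.75, 0.75], 500, 500)
print(ref)  # (250.0, 750.0, 1.0, 1.0)
```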

Here is a screenshot of what the interface looks like for the widget I created with some sample images:

In order to use the tool you will need to download the code (see below). Once you run the procedure, a dialog will appear asking you to select the images you want to load; any format readable by IDL's READ_IMAGE function is allowed. After you select your images, the widget interface above will appear with different actions you can perform on the selected image (an image is selected when there is a blue box around it). To load an image into the display, double-click the file name in the top right corner. If an image ends up in a goofy position, orientation, or zoom, just select it and press the 'Reset Properties' button.

Once you are done, press the 'Done!' button. This creates ENVI-formatted files for every image in the display. When that finishes, a prompt will ask whether you want to open the images in ENVI. If you select yes, you will see something like the following after the images load:

At this point, you might need to use the Image Registration Workflow to really get the images lined up better before using the Seamless Mosaic tool. Have fun mosaicing! 

Here is the code for the widget, which I call IDLMosaic. Feel free to edit or modify the code in any way you want. Copy and paste the code into IDL and save it as "idlmosaic.pro".

;+

;  

;   Program designed to create an interactive environment for lining up images

;   relative to one another so that they can be georeferenced in ENVI. When you

;   press the run button in IDL, a dialog will pop up that prompts you to select

;   images that will be read into IDL. THe supported file formats are the same

;   for the READ_IMAGE function in IDL.

;

; :Author: Zachary Norman

;-

function ImageInfo::init, oWin, MAX_WIDTH = max_width

  compile_opt idl2

  self.ImageHash = orderedhash()

  self.oImages = list()

  self.oFiles = list()

  ;check if a max width was supplied

  if (max_width eq !NULL) then begin

    self.max_width = .33d

  endif else begin

    self.max_width = double(max_width)

  endelse

  ;check if a window reference was supplied, if not make one

  if ~isa(oWin, 'GRAPHICSWIN')then begin

    self.oWin = window()

  endif else begin

    self.oWin = oWin

  endelse

  return, 1

end

pro ImageInfo::AddImage, drawfile, CURRENT = current

  compile_opt idl2

  ;overall hash with the image info

  info_hash = self.ImageHash

  oImages = self.oImages

  oFiles = self.oFiles

  max_width = self.max_width

  ;only draw the image if we have not yet

  if ~info_hash.haskey(strlowcase(drawfile)) then begin

    ;save the drawfile

    oFiles.add,drawfile

    ;new hash to add for file

    file_hash = orderedhash()

    ;only check if we have drawn something to the screen

    if (n_elements(oDrawnAlready) gt 0) then begin

      idx = (where(drawfile eq oDrawnAlready.ToArray()))[0]

      if (idx ne -1) then begin

        print,'Image drawn already, returning...'

        return

      endif

    endif

    ;read the image data

    dat = read_image(drawfile)

    dims = size(dat, /DIMENSIONS)

    ;remember the iamge diensions

    file_hash['DIMS'] = dims

    ;assume the number of channels is the smallest dimension

    ;if we have more than one, then we need to account for interleave change to BSQ

    if (n_elements(dims) eq 2) then begin

      nchannels = 1

      ;calculate aspect ratio

      aspect = float(dims[1])/dims[0]

     

      ;resize data so we use less memory

      dat = congrid(dat, dims[0]/4, dims[1]/4, /INTERP)

    endif else begin

      nchannels = min(dims)

      channel = (where(dims eq nchannels))[0]

      ;change image interleave to BSQ

      case channel of

        0:dat = transpose(dat, [1,2,0])

        1:dat = transpose(dat, [0,2,1])

        2:;do nothing

      endcase

     

      dims = size(dat, /DIMENSIONS)

      ;calculate aspect ratio

      aspect = float(dims[1])/dims[0]

      ;resize data to consume less memory

      dat = congrid(dat, dims[0]/4, dims[1]/4, nchannels, /INTERP)

    endelse

 

    ;add a random offset to the image so they don't all line up

    ;on top of one another

    center = [.5, .5] + randomu(!NULL)/10

    ;determine the image position

    if (aspect gt 1) then begin

      xrange = center[0] + (max_width/aspect)*[-.5, .5]

      yrange = center[1] + max_width*[-.5, .5]

    endif else begin

      xrange = center[0] + max_width*[-.5, .5]

      yrange = center[1] + (aspect*max_width)*[-.5, .5]

    endelse

    img_pos = [xrange[0], yrange[0], xrange[1], yrange[1]]

    img = image(dat, CURRENT = current, POSITION = img_pos)

    oImages.add, img

    ;get the original scale factors, scale centers, and positions

    file_hash['SCALE_CENTER'] = img.SCALE_CENTER

    file_hash['SCALE_FACTOR'] = img.SCALE_FACTOR

    file_hash['POSITION'] = img.POSITION

    file_hash['XRANGE'] = img.XRANGE

    file_hash['YRANGE'] = img.YRANGE

   

    ;default rotation to 0

    file_hash['ROTATION'] = 0

    ;save the image information

    info_hash[strlowcase(drawfile)] = file_hash

  endif else begin

    print, 'Image drawn already, skipping...'

  endelse

end

 

pro ImageInfo::DeselectAllImages

  compile_opt idl2

  ;disable window update

  self.oWin.Refresh, /disable

  foreach img, self.oImages do img.Select, /CLEAR

  ;refresh the window

  self.oWin.Refresh

end

pro Imageinfo::ResetImages

  compile_opt idl2

  ;get the reference to the images that are selected

  oImages = self.oImages

  ;disable window update

  self.oWin.Refresh, /disable

  selected = self.oWin.GetSelect()

  keys = self.ImageHash.keys()

  foreach img, selected do begin

    idx = (where(img eq self.oImages.toarray()))[0]

    ;reset rotation

    img.rotate, /reset

    ;return properties to their initial values

    img.SCALE_CENTER = (self.ImageHash[keys[idx]])['SCALE_CENTER']

    img.SCALE_FACTOR = (self.ImageHash[keys[idx]])['SCALE_FACTOR']

    img.XRANGE = (self.ImageHash[keys[idx]])['XRANGE']

    img.YRANGE = (self.ImageHash[keys[idx]])['YRANGE']

    img.POSITION = (self.ImageHash[keys[idx]])['POSITION']

    img.POSITION = (self.ImageHash[keys[idx]])['POSITION']

    img.Order, /BRING_TO_FRONT

  endforeach

  ;refresh the window

  self.oWin.Refresh

end

pro ImageInfo::BringImagesForward

  compile_opt idl2

  foreach selected, self.oWin.GetSelect() do selected.Order, /BRING_FORWARD

end

pro ImageInfo::RemoveImages

  compile_opt idl2

  ;get the selected images

  selected = self.oWin.GetSelect()

  ;disable window update

  self.oWin.Refresh, /disable

  foreach img, selected do begin

    ;get the reference to the image list

    oImages = self.oImages

    oFiles = self.oFiles

    ;get the leftover hash keys

    keys = self.ImageHash.keys()

    ;figure out which image was clicked on

    idx = (where(img eq oImages.toarray()))[0]

    imageref = oImages[idx]

    ;delete graphics

    imageref.Delete

    ;remove reference from the list

    oImages.Remove, idx

    oFiles.Remove, idx

    ;remove the key from the hash

    self.ImageHash.Remove, keys[idx]

  endforeach

  ;refresh the display

  self.oWin.Refresh

end

pro ImageInfo::ResetImageRotations

  compile_opt idl2

  ;get the reference to the images that are selected

  oImages = self.oImages

  imagehash = self.imagehash

  keys = imagehash.keys()

  ;disable window update

  self.oWin.Refresh, /disable

  selected = self.oWin.GetSelect()

  foreach img, selected do begin

    ;find the index for which image is selected

    img_idx = (where(oImages.toarray() eq img))[0]

    ;reset rotation in the image hash

    (imagehash[keys[img_idx]])['ROTATION'] = 0

    ;reset rotation

    img.rotate, /reset

  endforeach

  ;refresh the window

  self.oWin.Refresh

end

pro ImageInfo::ResetImageZooms

  compile_opt idl2

  ;get the reference to the images that are selected

  oImages = self.oImages

 

  ;disable window update

  self.oWin.Refresh, /disable

  selected = self.oWin.GetSelect()

  keys = self.ImageHash.keys()

  foreach img, selected do begin

    idx = (where(img eq self.oImages.toarray()))[0]

    ;return properties to their initial values

    img.SCALE_CENTER = (self.ImageHash[keys[idx]])['SCALE_CENTER']

    img.SCALE_FACTOR = (self.ImageHash[keys[idx]])['SCALE_FACTOR']

    img.XRANGE = (self.ImageHash[keys[idx]])['XRANGE']

    img.YRANGE = (self.ImageHash[keys[idx]])['YRANGE']

  endforeach

  ;refresh the window

  self.oWin.Refresh

end

pro ImageInfo::RotateImages, theta

  compile_opt idl2

  ;get the reference to the images that are selected

  oImages = self.oImages

  ;disable window update

  self.oWin.Refresh, /disable

  selected = self.oWin.GetSelect()

  keys = self.ImageHash.keys()

  foreach img, selected do begin

    ;figure out which image was clicked on

    idx = (where(img eq self.oImages.toarray()))[0]

    imagehash = self.ImageHash[keys[idx]]

    imageref = oImages[idx]

    ;reset the image, rotate by theta

    imageref.Rotate, /RESET

    imagehash['ROTATION']+=theta

    imageref.Rotate, imagehash['ROTATION']

  endforeach

  self.oWin.Refresh

end

pro ImageInfo::SelectAllImages

  compile_opt idl2

  ;disable window update

  self.oWin.Refresh, /disable

  foreach img, self.oImages do img.Select, /ADD

  ;refresh the window

  self.oWin.Refresh

end

pro ImageInfo::SendImagesBack

  compile_opt idl2

  foreach selected, self.oWin.GetSelect() do selected.Order, /SEND_BACKWARD

end

 

;method to convert images to ENVI formatted files; OUTFILES is an output keyword, not an input

pro ImageInfo::StitchImages, DOWNSIZE = downsize, OUTFILES = outfiles

  compile_opt idl2

  ;select all of the images that are displayed

  images = self.oImages.toarray()

  files = self.oFiles.toarray()

  keys = self.ImageHash.keys()

 

  ;make sure we have some images to process

  if (n_elements(files) gt 0) then begin

    outfiles = strarr(n_elements(images))

    ;start ENVI

    e = envi(/current)

    if (e eq !NULL) then begin

      e = envi(/headless)

      nokill = 1

    endif

    ; create ENVI formatted files

    print, 'Converting images to ENVI formatted files...'

    for i=0,n_elements(images)-1 do begin

      img = images[i]

      pos = img.position

      top_left = [pos[0], pos[3]] - .5d

      imagehash = self.imagehash[keys[i]]

      dims = imagehash['DIMS']

      dims_2d = dims[where(dims ne 3)]

      pix_size = [pos[2] - pos[0],pos[3] - pos[1]]/(dims_2d/downsize)

      outfile = strmid(files[i], 0, strpos(files[i],'.', /REVERSE_SEARCH)) + '_envi.dat'

      ;check if file exists and delete it

      if file_test(outfile) then FILE_DELETE, outfile, /QUIET

      ;if it still exists after deleting, then another program has a lock on the file

      if file_test(outfile) then begin

        print, 'File "' + outfile + '" locked by another program, skipping...'

        continue

      endif

      outfiles[i] = outfile

      dat = read_image(files[i])

      dims = size(dat, /DIMENSIONS)

      nchannels=1

      ;change interleave to BSQ

      if (n_elements(dims) gt 2) then begin

        nchannels = min(dims)

        channel = (where(dims eq nchannels))[0]

        ;change image interleave to BSQ

        case channel of

          0:dat = transpose(dat, [1,2,0])

          1:dat = transpose(dat, [0,2,1])

          2:;do nothing

        endcase

        dims = size(dat, /DIMENSIONS)

      endif

      ;resize the image if needed

      if (downsize gt 1) then begin

        if (nchannels eq 1) then begin

          dat = congrid(dat, floor(dims[0]/downsize), floor(dims[1]/downsize), /INTERP)

        endif else begin

          dat = congrid(dat, floor(dims[0]/downsize), floor(dims[1]/downsize), nchannels, /INTERP)

        endelse

      endif

      ;rotate the image if the user rotated the image with the degree buttons

      if (imagehash['ROTATION'] ne 0) then begin

        print, string(9b) + 'Rotating image, may take a minute or two depending on size...'

        dat = RotateImage(dat, imagehash['ROTATION'])

      endif

      ;assume the number of channels is the smallest dimension

      ;if we have more than one, then we need to account for interleave change to BSQ

      if (n_elements(dims) eq 2) then begin

        nchannels = 1

        ;rotate image to have ENVI orientation

        dat = rotate(dat, 7)

      endif else begin

        nchannels = min(dims)

        channel = (where(dims eq nchannels))[0]

        ;change image interleave to BSQ

        case channel of

          0:dat = transpose(dat, [1,2,0])

          1:dat = transpose(dat, [0,2,1])

          2:;do nothing

        endcase

        for z=0,nchannels-1 do begin

          dat[*,*,z] = rotate(reform(dat[*,*,z]),7)

        endfor

      endelse

     

      ;create a spatial reference

      spatialref = ENVIStandardRasterSpatialRef($

        coord_sys_code = 4326, $

        /GEOGCS,$

        pixel_size = pix_size, $

        tie_point_pixel = [0, 0], $

        tie_point_map = top_left)

      newraster = ENVIRaster(dat, URI = outfile, SPATIALREF = spatialref)

      ;add a data ignore value if we had rotation

      if (imagehash['ROTATION'] ne 0) then begin

        newraster.METADATA.AddItem, 'data ignore value', 0

      endif

     

      ;save and close the raster

      newraster.save

      newraster.close

      print, string(9b) + 'Converted ' + strtrim(i+1,2) + ' of ' + strtrim(n_elements(images),2) + ' images, outfile:'

      print, string(9b) + string(9b) + outfile

    endfor

    ;close ENVI if we started it during this method

    if (nokill ne !NULL) then e.close

  endif else begin

    print, 'No images loaded, returning'

    return

  endelse

end

pro ImageInfo__define

  compile_opt idl2

  struct = {$

    IMAGEINFO,$

    ImageHash:orderedhash(),$

    oImages:list(),$

    oFiles:list(),$

    oWin:obj_new(),$

    max_width:1d}

end

function RotateImage, img_dat, theta

  compile_opt idl2, hidden

  ;get the data dimensions

  dims = size(img_dat, /DIMENSIONS)

  nchannels = 1

 


  ;apply rotation for nchannel images

  ndims = n_elements(dims)

  if (ndims gt 2) then begin

    nchannels = min(dims)

    channel = (where(dims eq nchannels))[0]

    ;change image interleave to BSQ

    case channel of

      0:img_dat = transpose(img_dat, [1,2,0])

      1:img_dat = transpose(img_dat, [0,2,1])

      2:;do nothing

    endcase

    dims = size(img_dat, /DIMENSIONS)

  endif

  ;calculate our rotation matrix

  theta*=!DTOR

  r_mat = [[ cos(theta), sin(theta), 0],$

    [-sin(theta), cos(theta), 0],$

    [     0     ,      0    , 1]]

  ;pre-allocate an array for the initial pixel locations

  xy0 = make_array(3, dims[0]*dims[1], TYPE = 12, /NOZERO)

  ;fill xy0 with values

  indices = lindgen(dims[0]*dims[1])

  xy0[0,*] = indices mod dims[0]

  xy0[1,*] = indices/dims[0]

  xy0[2,*] = replicate(1.0,1,dims[0]*dims[1])

  indices = !NULL ;free memory (DELVAR only works at the main level)

  ;rotate our initial positions to find out the new pixel locations

  xy1 = matrix_multiply(r_mat, xy0)

  ;grab the individual xy new locations

  xind = xy1[0,*]

  yind = xy1[1,*]

  ;create output grid for out image

  xout = floor(xind.min()) + dindgen(ceil(xind.max() - xind.min()))

  yout = floor(yind.min()) + dindgen(ceil(yind.max() - yind.min()))

  ;use custom, amazing interpolation

  new_dims = [n_elements(xout), n_elements(yout)]

  out_dat = make_array(new_dims[0], new_dims[1], nchannels,TYPE = img_dat.typecode, VALUE = 0)

  for i=0, nchannels-1 do begin

    channel_dat = reform(out_dat[*,*,i])

    channel_dat[xind - xind.min(), yind-yind.min()] = img_dat[*,*,i]

    ;fill in missing holes

    zero = where(channel_dat eq 0, zero_count)

    if (zero_count gt 0) then begin

      ;find the 2D locations of the missing pixels

      xidx = zero mod new_dims[0]

      yidx = zero / new_dims[0]

      for z=0, zero_count-1 do begin

        ;check if we have neighbors that are not equal to 0

        sub = channel_dat[xidx[z]-1 > 0:xidx[z]+1 < (new_dims[0]-1), yidx[z]-1 > 0:yidx[z]+1 < (new_dims[1]-1)]

        close_nozero = where(sub ne 0, nozero_count)

        if (nozero_count gt 6) then channel_dat[xidx[z],yidx[z]] = total(sub[close_nozero])/nozero_count

      endfor

    endif

    out_dat[*,*,i] = channel_dat

  endfor

  ;remove any irrelevant indices

  return, reform(out_dat)

end

pro IDLMosaic_event, event

  compile_opt idl2

  ;help, event

  widget_control, event.top, get_uvalue=info

  ;get the image properties lists

  oImageInfo = info.oImageInfo

  ;turn on the hourglass

  widget_control, HOURGLASS=1

  ;check if we have a kill request event

  case TAG_NAMES(event, /STRUCTURE_NAME) of

    'WIDGET_KILL_REQUEST':BEGIN

      !except = info.orig_except

      WIDGET_CONTROL, event.top, /DESTROY

      retall

    END

    ELSE: ;do nothing

  endcase

  ;handle the event by the widget that was pressed

  case event.id of

    ;user double clicked on a list element for all of the images

    ;so we will draw it

    info.wList:begin

      if (event.CLICKS eq 2) then begin

        ;get the list of files

        files = info.files

        ;draw the image

        oImageInfo.AddImage, files[event.index], CURRENT = info.oWin

      endif

    end

    ;user wants to zoom out

    info.wZoomOut:begin

      info.oWin.ZoomOut

      info.oWin.Refresh

    end

    ;user wants to zoom in

    info.wZoomIn:begin

      info.oWin.ZoomIn

      info.oWin.Refresh

    end

   

    ;bring selected items forward

    info.wBringForward:begin

      info.oImageInfo.BringImagesForward

    end

    ;send selected items back

    info.wSendBack:begin

      info.oImageInfo.SendImagesBack

    end

    ;reset the image to have all of its original properties

    info.wResetImage:begin

      info.oImageInfo.ResetImages

    end

    ;reset the image zoom only

    info.wResetZoom:begin

      info.oImageInfo.ResetImageZooms

    end

    ;reset the image rotation

    info.wResetRotation:begin

      info.oImageInfo.ResetImageRotations

    end

    ;deselect all images

    info.wDeselectAll:begin

      info.oImageInfo.DeselectAllImages

    end

    ;select all images

    info.wSelectAll:begin

      info.oImageInfo.SelectAllImages

    end

    ;remove selected images

    info.wDeleteImage:begin

      info.oImageInfo.RemoveImages

    end

    ;images are rotated

    info.wRotCSmall:begin

      info.oImageInfo.RotateImages, 0.2

    end

    info.wRotCCSmall:begin

      info.oImageInfo.RotateImages, -0.2

    end

    info.wRotCMedium:begin

      info.oImageInfo.RotateImages, 1.0

    end

    info.wRotCCMedium:begin

      info.oImageInfo.RotateImages, -1.0

    end

    info.wRotCLarge:begin

      info.oImageInfo.RotateImages, 5.0

    end

    info.wRotCCLarge:begin

      info.oImageInfo.RotateImages, -5.0

    end

    ;convert images to ENVI formatted files

    info.wStitchImages:begin

      info.oImageInfo.StitchImages, DOWNSIZE = info.downsize, OUTFILES = outfiles

      ;check if the ENVI workbench is open, if so then ask if the user wants to open up the images

      ;if ENVI is not open, then open it now

      msg = dialog_message('Would you like to open the ENVI workbench and display the images in ENVI?', /QUESTION)

      if (msg eq 'Yes') then begin

        e = envi(/current)

        if (e eq !NULL) then begin

          e = envi()

        endif else begin

          ;open up the ENVI Workbench if not open already

          if (e.widget_id eq 0) then begin

            e.close

            e = envi()

          endif

        endelse

        ;disable ENVI Workbench update

        e.refresh, /disable

        ;add each image to ENVI's view

        view = e.GetView()

        foreach file, outfiles do begin

          if (file eq '') then continue

          raster = e.openraster(file)

          layer = view.createlayer(raster)

        endforeach

        ;refresh ENVI display

        e.refresh

      endif

      ;destroy the widget

      WIDGET_CONTROL, event.top, /DESTROY

    end

    ;user resized the base

    info.wBase:begin

      widget_control, info.wDraw, xSize=event.y, ySize=event.y

    end

    else:;print, 'dunno what to do with id ', event.id

  endcase

  widget_control, HOURGLASS=0

end

;+

;

;

; :Keywords:

;    DOWNSIZE: in, optional, type=int, default=4

;       Set this keyword to an integer which represents the resize factor when the

;       images are created as ENVI formatted files. For example, setting this keyword to 3

;       will regrid the results to have 1/3 the X and Y dimensions that they originally have.

;    EXTENSION: in, optional, type=string, default='jpg'

;       This keyword represents the file extensions that will be searched for if you set the

;       `PICKDIR` keyword.

;    MAX_WIDTH: in, optional, type=float, default=.33, min=.1, max=.5

;       Setting this keyword will control the maximum size that an image can be when added to

;       the widget. Setting this value to a smaller number will allow for more images to be added

;       to the display.

;    PICKDIR: in, optional, type=string

;       This optional keyword can be set to a directory that contains images you want to use

;       with IDLMosaic. The directory that this keyword is set to will be searched for files

;       ending with `EXTENSION`. Make sure you change

;

; :Author: Norm3596

;-

pro IDLMosaic, DOWNSIZE = downsize, EXTENSION = extension, MAX_WIDTH = max_width, PICKDIR = pickdir

  compile_opt idl2

  ireset, /no_prompt

  ;select a directory of files

  if keyword_set(pickdir) then begin

    if (extension eq !NULL) then extension = 'jpg'

    if ~file_test(pickdir) then message, 'PICKDIR specified, but directory does not exist!'

    cd, pickdir, CURRENT = first_dir

    files = file_search('*' + extension, COUNT=nfiles)

    cd, first_dir

    if (nfiles eq 0) then message, 'No files found with extension!'

 

    ;sort the found files and prepend the input directory

    files = (pickdir + path_sep() + files).sort()

  ;default is to use dialog pickfile to select images

  endif else begin

    ;use dialog pickfile to select images

    files = dialog_pickfile(/MULTIPLE_FILES)

    nfiles = n_elements(files)

    if (files[0] eq '') then message, 'No files chosen!'

  endelse

  ;check if we want to downsample the rasters

  ;by default this is true because the size in a .dat file of a JPEG is much

  ;greater than just the jpeg image itself

  if ~keyword_set(downsize) then downsize = 4

  ;keyword for setting the maximum width of the images in the display window

  if ~keyword_set(max_width) then max_width = .33

  ;make sure max_width falls within realistic values to fit in our window

  max_width>=.1

  max_width<=.5

  ;create starting widgets

  wBase = widget_base(/row, title='IDLMosaic', tlb_size_events=0)

  ;create the draw widget

  window_size = [800, 800]

  wDraw = widget_window(wBase, xsize = window_size[0]+1, ysize = window_size[1]+1, $

    X_SCROLL_SIZE = window_size[0], Y_SCROLL_SIZE = window_size[1])

  wButtonCol = widget_base(wBase, /col)

  wStatus = widget_label(wButtonCol,  /dynamic_resize, value='Images:')

  ;create a list widget

  wList = widget_list(wButtonCol, VALUE = file_basename(files), YSIZE = nfiles<10)

  ;create buttons for zooming in and out

  wZoomIn = widget_button(wButtonCol, /dynamic_resize, value='Zoom In')

  wZoomOut = widget_button(wButtonCol, /dynamic_resize, value='Zoom Out')

  ;reset the view for the image

  wImageText = widget_label(wButtonCol, value='Image Actions:')

  wResetImage = widget_button(wButtonCol, /dynamic_resize, value='Reset Properties')

  wResetZoom = widget_button(wButtonCol, /dynamic_resize, value='Reset Zoom')

  wResetRotation = widget_button(wButtonCol, /dynamic_resize, value='Reset Rotation')

 

  ;create buttons for changing the image stacking order

  wBringForward = widget_button(wButtonCol, /dynamic_resize, value='Bring Forward')

  wSendBack = widget_button(wButtonCol, /dynamic_resize, value='Send Back')

  ;add buttons for rotating the images

  wRotateClockwise = widget_label(wButtonCol, value='Rotate Clockwise:')

  wBaseClockwise = widget_base(wButtonCol, /ROW, /ALIGN_CENTER)

  wRotCSmall = widget_button(wBaseClockwise, /dynamic_resize, value='0.2')

  wRotCMedium = widget_button(wBaseClockwise, /dynamic_resize, value='1.0')

  wRotCLarge = widget_button(wBaseClockwise, /dynamic_resize, value='5.0')

  wRotateCounterClockwise= widget_label(wButtonCol, value='Rotate Counter-clockwise:')

  wBaseCClockwise = widget_base(wButtonCol, /ROW, /ALIGN_CENTER)

  wRotCCSmall = widget_button(wBaseCClockwise, /dynamic_resize, value='0.2')

  wRotCCMedium = widget_button(wBaseCClockwise, /dynamic_resize, value='1.0')

  wRotCCLarge = widget_button(wBaseCClockwise, /dynamic_resize, value='5.0')

  ;create buttons for selecting and removing images

  wSelectAll = widget_button(wButtonCol, /dynamic_resize, value='Select All')

  wDeselectAll = widget_button(wButtonCol, /dynamic_resize, value='Deselect All')

  wDeleteImage = widget_button(wButtonCol, /DYNAMIC_RESIZE, VALUE='Remove Images')

  wStitchImages = widget_button(wButtonCol, /DYNAMIC_RESIZE, VALUE='Done!')

  widget_control, wBase, /realize

  ; Retrieve the newly-created Window object.

  widget_control, wDraw, get_value=oWin

  ;remember the original value of !EXCEPT

  orig_except = !EXCEPT

  !EXCEPT=0

 

  info = {$

    wBase:wBase,$

    wDraw:wDraw,$

    wStatus:wStatus,$

    wList:wList,$

    wZoomOut:wZoomOut,$

    wZoomIn:wZoomIn,$

    wBringForward:wBringForward,$

    wSendBack:wSendBack,$

    wResetImage:wResetImage,$

    wResetZoom:wResetZoom,$

    wResetRotation:wResetRotation,$

    wSelectAll:wSelectAll,$

    wDeselectAll:wDeselectAll,$

    wDeleteImage:wDeleteImage,$

    wStitchImages:wStitchImages,$

    wRotCSmall:wRotCSmall,$

    wRotCCSmall:wRotCCSmall,$

    wRotCMedium:wRotCMedium,$

    wRotCCMedium:wRotCCMedium,$

    wRotCLarge:wRotCLarge,$

    wRotCCLarge:wRotCCLarge,$

    oWin:oWin,$

    oImageInfo:ImageInfo(oWin),$

    orig_except:orig_except,$

    files:files,$

    downsize:downsize,$

    max_width:max_width}

   

  ;set the uvalue

  widget_control, wBase, set_uvalue=info

  ;register the widget with our event handler

  ;use /NO_BLOCK so the widget does not block the command line

  xmanager, 'IDLMosaic', wBase, /NO_BLOCK

end


17

Mar

2016

Spatiotemporal Analysis: Red is Fled, Blue is New!

Author: Zachary Norman

A great way to extract additional information from imagery is to add changes over time to your analysis or workflow, and that is the focus of this blog. Spatiotemporal analysis has some potentially useful applications; one example is determining when it is time to harvest a crop. Another is detecting where objects have appeared or disappeared between images. For this post, I'm going to outline the workflow I created to detect where airplanes appeared or disappeared from an airport.

The data that I had available was five WorldView-2 images over an airport in Rio de Janeiro. Here is a context map showing where the images are located:


Below is an animation showing what each image looks like in the data series. Note that you can see the buildings move on the left side of the image because of the change in the orientation of the satellite. This introduces some false positives in the change detection workflow, which can be seen in the results.



To perform the change detection on these images, I used a pixel-based change detection that is very similar to the Image Change Workflow, but written with the ENVI API and IDL. I used the API because the analysis involves many steps, and it is much easier to script a workflow in IDL than to run each of the separate tools in the ENVI Workbench on many images. Here is the approach that I took to perform the analysis.

1) Open Time One image and Time Two image for preprocessing using the following tasks:

RadiometricCalibration (for Top-of-Atmosphere Reflectance)
NNDiffusePanSharpening
RPCOrthorectification
SubsetRaster (with an ROI)

2) Register the two images together with the tasks:
GenerateTiePointsByCrossCorrelation
FilterTiePointsByGlobalTransform
ImageToImageRegistration

3) Find the intersection of the rasters
From steps 1 and 2 above, we can get some differences in the image sizes for Time 1 and Time 2. Although this difference is small, the pixel-based change detection cannot happen without the images having exactly the same dimensions. To find the intersection of the two rasters and regrid each one to the same dimensions, I followed the example outlined here, which uses the intersection method for ENVIGridDefinition objects.
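As a rough illustration of the intersection idea (a conceptual pure-Python sketch with hypothetical names, not the ENVIGridDefinition API), one can compute the overlapping extent of two north-up rasters with the same pixel size and the pixel slices that crop each raster to that common grid:

```python
def intersect_grids(origin_a, origin_b, shape_a, shape_b, pix):
    """Conceptual stand-in for ENVIGridDefinition's intersection method:
    given two north-up rasters with the same pixel size, return the
    (row, column) slices that crop each raster to the common extent."""
    ax, ay = origin_a  # map coordinates of each raster's upper-left corner
    bx, by = origin_b
    # intersection of the two bounding boxes (x grows east, y grows north)
    left = max(ax, bx)
    top = min(ay, by)
    right = min(ax + shape_a[1] * pix, bx + shape_b[1] * pix)
    bottom = max(ay - shape_a[0] * pix, by - shape_b[0] * pix)
    nrows = int(round((top - bottom) / pix))
    ncols = int(round((right - left) / pix))
    # pixel offset of the intersection inside each raster
    a_row, a_col = int(round((ay - top) / pix)), int(round((left - ax) / pix))
    b_row, b_col = int(round((by - top) / pix)), int(round((left - bx) / pix))
    return ((slice(a_row, a_row + nrows), slice(a_col, a_col + ncols)),
            (slice(b_row, b_row + nrows), slice(b_col, b_col + ncols)))
```

Cropping both time steps with the returned slices guarantees identical dimensions, which the pixel-based change detection requires.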

4) Perform the pixel-based change detection with the following tasks (taken from the Image Change Workflow)
RadiometricNormalization (Time 2 normalized to Time 1)
ImageBandDifference
AutoChangeThresholdClassification (Kapur threshold method)
ClassificationSmoothing
ClassificationAggregation

After applying the changes to each pair of images, I produced 4 change detection images and the results are shown below. The red pixels correspond to pixel values decreasing and blue represents increases in a pixels value. An easy way to remember this is "red is fled, blue is new." Note how there are quite a few false-positives around the edges of the images due to the differences in the satellite's orientation. Apart from this, the change detection does a very good job of finding where planes have moved.




© 2017 Exelis Visual Information Solutions, Inc., a subsidiary of Harris Corporation