14 Jul 2016

Coming in IDL 8.6 - Repeating Timers

Author: Jim Pendleton

A couple years ago I wrote a blog article on the topic of the new TIMER class in IDL 8.3. In the upcoming IDL 8.6 release, a new REPEAT keyword will be added to the Timer::Set method.

We like to write device control software in the Professional Services Group here at Harris, so this new feature will come in handy.

The typical case for using a timer with REPEAT arises from the need to ensure processing at a regular interval, for example in the display of an animation at a specific frames-per-second rate, or in the consistent collection of data from a device.

The REPEAT keyword forces the timer to fire at the specified interval, relieving the callback code of the need to explicitly reset the timer while it processes the preceding event. The result is a truly regular "heartbeat" of events.

It is most appropriate to use this style of timer when the processing time of the callback is less than the timer's interval. If the timer events cannot be drained at a rate greater than the interval at which they're added to the queue, this solution is not generally recommended, or the interval needs to be adjusted accordingly.

The IDL Code Profiler should be used to determine if there is a potential for conflict between the callback execution time and the sampling rate. Also be aware that your measurements will depend on the current platform and system load, so be judicious when deploying solutions like this to other computers with different performance characteristics from the device under test.
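If you have not used the profiler before, a minimal session looks something like the sketch below; the routine being exercised is just a placeholder for your own timer-driven code, and you would compile the routines of interest before enabling the profiler.

PROFILER            ; add all currently compiled user routines to the profile list
PROFILER, /SYSTEM   ; optionally include IDL system routines as well
my_timer_driven_app ; placeholder for the code that exercises your callback
PROFILER, /REPORT   ; print per-routine call counts and accumulated times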

A timer callback can be thought of as a higher-priority interrupt to the main IDL interpreter loop. It will take precedence over the otherwise-executing code.

When the interpreter is between execution of lines of .pro code, if a timer event is found on the queue the timer's callback routine will be executed first, taking precedence over the next line of IDL code. This also applies to compiled code in IDL SAVE files.

When the callback is completed, the interpreter returns to its regularly-scheduled execution, which may include the processing of additional timer callbacks.

In other words, the IDL interpreter remains single-threaded. The timer callback will be executed as an interruption to the currently-executing context. In fact, a timer will interrupt even modal WIDGET_BASE dialogs. In many cases, it may be wise to use a mechanism such as TIMER's ::Block/::Unblock methods or a system semaphore to prevent the possibility of re-entrance.
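As a minimal sketch of the Block/Unblock approach, a callback can suspend its own timer while it works and resume it on the way out. The callback name and the work it does are placeholders, and the calls assume the static Timer::Block and Timer::Unblock methods are invoked as procedures that take the timer ID.

PRO myGuardedCallback, id, userdata
  COMPILE_OPT IDL2
  ; Suspend delivery of further events from this timer while we work,
  ; so a long-running iteration cannot be re-entered
  Timer.Block, id
  ; ... do the potentially slow work here ...
  ; Resume delivery once the work is finished
  Timer.Unblock, id
END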

Example

The following source code shows the behavior of the timer event queue when /REPEAT is set.

PRO myTimerCallback3, id, c
COMPILE_OPT IDL2
t = systime(1)
c.dt.add, t - c.t
c.t = t
c.counter++
wait, c.counter ge 20 && c.counter le 40 ? .51 : .001
print, 'in callback ', c.counter
done = c.counter eq 100
if (done) then begin
    p  = plot(c.dt.toarray(), ytitle = 'dt', xtitle = 'iteration', $
        title = 'Execution time per iteration')
    void = Timer.Cancel(id)
endif
END


PRO async_timer_example_repeat3
COMPILE_OPT IDL2
; Create a timer that will fire 5 times per second.
c = {counter : 0L, t : systime(1), dt : list()} 
id = Timer.Set( 0.2, 'myTimerCallback3', c, /REPEAT )
END

A new timer object is created that fires off events at an interval of 0.2 seconds, regardless of what else may be going on in the IDL main thread.

In the callback, the first 19 iterations are executed with just a 1-millisecond WAIT delay, simulating a light CPU load.

Iterations 20 through 40 are executed with a 0.51-second delay, over twice the timer object's repetition interval.  This simulates a heavier CPU load and helps to show that the timer continues to run and queue events at its own rate in the background.

The remaining steps revert to the shorter wait time.

Internally, the code keeps track of the clock time between callback execution at each step.

The plot of the result is shown below.

The labels indicate the dominant driver for the execution rate at each interval.

Initially, the timer's heartbeat keeps us on track with a regular 0.2 second interval between callbacks.

The next set of iterations are dominated by the 0.51 second WAIT inside each callback.  If we had been displaying a video playback, the frame rate would be decreased substantially during this time.

The third set of iterations shows the effect of emptying the queue of the accumulated timer events that could not be processed immediately at the time they were fired, due to the WAIT statements in the previous block. If this had been video playback, these frames would have all been displayed very rapidly.

After the queue is emptied, the regular heartbeat is attained again. Video playback would return to "real time".

If you were to remove the /REPEAT keyword and add a call to Timer::Set in the callback routine, you could compare the effects. Your choice in your own code will be based on your prioritization of "real time" versus "lagging but consistent" sampling.
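For reference, here is a minimal sketch of that alternative pattern: the callback re-arms the timer itself instead of relying on /REPEAT, so each new interval starts only after the current callback has run, and any processing time is added on top of the delay.

PRO myTimerCallbackManual, id, c
  COMPILE_OPT IDL2
  ; ... process the event here ...
  c.counter++
  if (c.counter lt 100) then begin
    ; Re-arm the timer for the next event. Time spent in this callback
    ; is added on top of the 0.2 second delay, so the cadence can lag
    ; but events never pile up in the queue.
    void = Timer.Set(0.2, 'myTimerCallbackManual', c)
  endif
END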

Categories: IDL Blog | IDL Data Point

7 Jul 2016

Upsampling images with Lanczos kernel

Author: Atle Borsholm

Resampling images is a very common operation in IDL, and it can happen both implicitly and explicitly. Implicit resampling happens with IDLgrImage rendering: when the destination rendering area contains fewer pixels than the original image, downsampling occurs; when the destination area is larger than the original image, upsampling occurs. There are many options for upsampling algorithms. The simplest is pure pixel replication to fill in the gaps, which is useful when there is a need to look closely at the original data. However, if the goal is to look for details in the scene that may be approaching the limits of the image resolution, then a more sophisticated resampling algorithm should be chosen instead. A few options are commonly used. Bilinear interpolation and cubic spline interpolation are both available through the CONGRID function in IDL. Lanczos and Lagrange resampling are two other options that are more computationally intensive. In the code below, I am showing an example comparing the Lanczos resampling kernel with bilinear interpolation and pixel replication. Lanczos resampling is often preferred because of its ability to preserve and even enhance local contrast, whereas bilinear interpolation tends to have a blurring effect.

function lanczos, data
  compile_opt idl2

  ; Lanczos-3 kernel sampled at quarter-pixel spacing:
  ; sinc(x) * sinc(x/3) = 3*sin(!pi*x)*sin(!pi*x/3) / (!pi^2 * x^2)
  xval = [-3:3:.25]
  lanc3 = 3*sin(!pi*xval)*(sin(!pi*xval/3d)/!pi/!pi/xval/xval)
  lanc3[where(xval eq 0)] = 1 ; limit of the kernel at x = 0
  l2d = lanc3 # lanc3
  ; high resolution version: place the original pixels on every 4th
  ; sample of a 4x grid, then fill the gaps with the 2D kernel
  msk = fltarr(data.dim*4)
  msk[0:*:4,0:*:4] = data
  hi = convol(msk, l2d, /edge_trunc)
  hi = byte(round(hi>0<255))
  return, hi
end
 
pro upsample_example
  compile_opt idl2,logical_predicate
 
  ; Read the original image data
  f = filepath('moon_landing.png', subdir=['examples','data'])
  data = read_png(f)
  dim = data.dim
 
  window, xsize=dim[0], ysize=dim[1], 0, title='Original full size'
  tv, data
 
  ; Define a zoomed-in area on the flag.
  xs = 200
  ys = 165
  dx = 40
  dy = 40
 
  ; display upsampled 4x with pixel replication
  window, xsize=160, ysize=160, 1, title='CONGRID pixel-replication'
  tv, congrid(data[xs:xs+dx-1,ys:ys+dy-1],160,160)
 
  ; display upsampled 4x with bilinear interpolation
  window, xsize=160, ysize=160, 2, title='CONGRID linear'
  tv, congrid(data[xs:xs+dx-1,ys:ys+dy-1],160,160,/interp)
 
  ; display upsampled 4x with Lanczos convolution
  window, xsize=160, ysize=160, 3, title='Lanczos'
  tv, (lanczos(data))[xs*4:xs*4+dx*4-1,ys*4:ys*4+dy*4-1]
end

 

The results are shown here, starting with the original image, then the 4x zoomed area with pixel replication, then the 4x zoomed with bilinear interpolation, and finally the 4x zoomed with Lanczos convolution. The Lanczos convolution has the advantage of retaining good contrast while avoiding looking too pixelated.

   

Categories: IDL Blog | IDL Data Point

6 Jul 2016

Ordering from the Marketplace Just Got Better

Author: Jon Coursey

In a blog post I wrote a few months ago, I introduced the Instant Satellite Imagery Portal section of the Harris Geospatial Marketplace. This service offers a faster, less expensive alternative to purchasing DigitalGlobe imagery directly from a sales representative. At the time, however, it could still take up to 48 hours for an image to be delivered following purchase. This is no longer the case. Thanks to some savvy engineering by our technical team, the wait time for imagery download has been reduced to just a few minutes.

Oftentimes, someone will contact us with a request for imagery but with a deadline that would normally be impossible to meet. Even when ordering directly from a provider, the rush delivery options can still mean up to a 48-hour delivery time. This service was created with these types of requests in mind. Someone new to purchasing imagery may not know the typical delivery times for satellite captures, or the need for an image may only be realized toward the very end of a project, with too little time left to actually receive one. Whatever the case, our Instant Satellite Imagery Portal aims to offer a solution for such issues.

While automating the Portal, it occurred to our technical team that the same approach might work for other products. That idea recently came to fruition, with many of our other datasets becoming immediately downloadable following purchase. Currently, the following products are included:

- NAIP Imagery
- CityOrtho Imagery
- GeoOrtho Imagery
- SRTM Elevation Data
- NED Elevation Data
- Harris Gap Filled Elevation Data

These datasets can be ordered through the same process on the Portal. In our Aerial Imagery and Digital Elevation Model sections, immediate download is available for anything with a price listed.

Although only a few products have been set up for this service so far, we are working on making this available for any data that does not require custom processing. This will make the Harris Geospatial Marketplace one of the quickest and easiest sources for downloadable data. Vector data and topographic maps will be available very soon, with hopes of adding more sources for satellite imagery and elevation data by the end of 2016. For any project facing a deadline that requires some sort of geospatial data set, the Marketplace will soon be the solution.

 

Categories: ENVI Blog | Imagery Speaks

30 Jun 2016

Brute Force PNG Image Sizing

Author: Austin Coates

Recently, I was working on a project that had some pretty strict requirements for the maximum size of the output .png file. This did not seem like a big deal initially, until I realized that it is extremely difficult to calculate the final size of a .png file prior to creating the image. Long story short, this led me to create a brute-force method for finding the optimal size of an output image.

As a side note, I know what you are thinking: "if it is so difficult, why don't you just choose a couple of sizes, try those out, and then create a linear model so that in the future you no longer have to deal with this?" Well, I tried it, and it was a no-go. As you can see from figures 1 and 2, it is not a simple linear relationship, nor is it an easily predictable one. The blue lines in figures 1 and 2 show the file size as a function of the number of pixels in the image, whereas the red line is a simple linear regression. There is a pretty distinct deviation from the trend line.

Figure 1

Figure 2

Back to the task at hand: the brute force method. The method is relatively simple:

1. Save the image at its native size. If it is smaller than the maximum allowable size, you are done.

2. Reduce the original image dimensions by one half and check the file size.

   a. If the file is still larger than the allowable size, try one quarter of the original image dimensions; in other words, go halfway between the last dimensions that were too large (50% of the original) and the last dimensions known to be small enough (none has been computed yet on the first iteration, so use zero).

   b. If the file is smaller than the allowable size, try three quarters of the original dimensions; in other words, go halfway between the last dimensions that were too large (here, the original dimensions) and the last dimensions that were small enough (one half of the original).

3. Continue bisecting between the value just computed and whichever bound lies on the appropriate side. Each time, make sure the number of lines and samples is computed as an integer or long value.

4. After each iteration, check whether the same image dimensions have already been tested. If so, stop: that is your final image size and you have found the optimal solution.
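To make the loop itself easier to follow before diving into the ENVI tasks, here is a minimal sketch of the same bisection idea in plain IDL, operating on an ordinary 2D byte array with WRITE_PNG and CONGRID. The procedure name, arguments, and the use of a 1D key to detect repeated dimensions are illustrative assumptions, not part of the original example.

pro png_size_bisect_sketch, img, limit, outfile
  compile_opt idl2

  dims = long(img.dim)
  min_dims = [0L, 0L]   ; largest dimensions known to be small enough (none yet)
  max_dims = dims       ; smallest dimensions known (or assumed) to be too large
  ns = dims[0]
  nl = dims[1]
  keys = [-1LL]         ; 1D keys of dimension pairs already tested

  ; Try the native size first; if it already fits, there is nothing to do.
  write_png, outfile, img
  fsize = (file_info(outfile)).size
  if fsize le limit then return

  ; Stop as soon as a dimension pair repeats.
  while total(keys eq (long64(nl) * dims[0] + ns)) eq 0 do begin
    keys = [keys, long64(nl) * dims[0] + ns]

    ; Move whichever bound the last test landed on, then bisect.
    if fsize gt limit then max_dims = [ns, nl] $
    else min_dims = [ns, nl]
    ns = ((max_dims[0] + min_dims[0]) / 2) > 1
    nl = ((max_dims[1] + min_dims[1]) / 2) > 1

    ; Resample, write, and measure the new candidate.
    write_png, outfile, congrid(img, ns, nl)
    fsize = (file_info(outfile)).size
  endwhile
end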

The full ENVI-based example below shows how this is performed in practice.

;Start the application
e = ENVI()

;Open an input file
File = Filepath('qb_boulder_msi', Subdir=['data'], $
  Root_Dir=e.Root_Dir)
Raster = e.OpenRaster(File)

;Get the task from the catalog of ENVITasks
Task = ENVITask('ISODATAClassification')

;Define inputs
Task.Input_Raster = Raster

;Run the task
Task.Execute

;Pull out the new ISOData raster
ISORaster = Task.OUTPUT_RASTER

;Get the metadata
metadata = ISORaster.Metadata

;Get the information about the new file
nb = ISORaster.nb
Raster_ns = ISORaster.ns
Raster_nl = ISORaster.nl

;Set the filename for the output file
output_file = 'C:\Output\PNG_Test.png'

;Set the desired output size in bytes
output_size = 150000

;Delete any old files with the same name
FILE_DELETE, output_file, /ALLOW_NONEXISTENT

;Save the image out as our initial test

;Get the task from the catalog of ENVITasks
ERTP_Task = ENVITask('ExportRasterToPNG')

;Define inputs
ERTP_Task.INPUT_RASTER = ISORaster

;Define outputs
ERTP_Task.OUTPUT_URI = output_file

;Run the task
ERTP_Task.Execute

;Close the PNG raster
ERTP_Task.Output_Raster.close

;Get the new file size
file_size = ((file_info(output_file)).size) * 1.

;Create a container for the maximum number of samples and lines
max_ns = Raster_ns
max_nl = Raster_nl

;Create a container for the current number of samples and lines
nl = Raster_nl
ns = Raster_ns

;Create a container for the minimum number of samples and lines
min_ns = 0
min_nl = 0

;Create a container for the 1D index value of the lower right corner
;of the image with regards to the maximum image size
loc_1d = []

;Check to make sure the image is currently too large
if file_size gt output_size then begin

  ;Check to see if the same image size has come up more than once.
  ;If so, stop iterating.
  while ((where(loc_1d eq (nl * Raster_ns + ns)))[0] eq -1) do begin

    ;Record the image width and height in 1D
    loc_1d = [loc_1d, nl * Raster_ns + ns]

    ;Decide which side of the value should be divided in half
    if (file_size gt output_size) then side = 'right' else side = 'left'

    case side of

      ;The image is too large and must be reduced in size
      'right' : begin

        ;Determine the new size of the image (i.e. halfway between the
        ;current lower and upper bounds)
        ns = max_ns - ((max_ns - min_ns) * .5)
        nl = max_nl - ((max_nl - min_nl) * .5)

        ;Delete the old image
        FILE_DELETE, output_file, /ALLOW_NONEXISTENT

        ;Shrink the original image

        ;Get the task from the catalog of ENVITasks
        DRR_Task = ENVITask('DimensionsResampleRaster')

        ;Define inputs
        DRR_Task.INPUT_RASTER = ISORaster
        DRR_Task.DIMENSIONS = [ns, nl]

        ;Run the task
        DRR_Task.Execute

        ;Record the dimensions of the new image
        ns = DRR_Task.Output_Raster.ns
        nl = DRR_Task.Output_Raster.nl

        ;Save the new image and test the size

        ;Get the task from the catalog of ENVITasks
        ERTP_Task = ENVITask('ExportRasterToPNG')

        ;Define inputs
        ERTP_Task.INPUT_RASTER = DRR_Task.Output_Raster

        ;Define outputs
        ERTP_Task.OUTPUT_URI = output_file

        ;Run the task
        ERTP_Task.Execute

        ;Close the resampled raster
        DRR_Task.Output_Raster.close

        ;Close the PNG raster
        ERTP_Task.Output_Raster.close

        ;Get the new file size
        file_size = ((file_info(output_file)).size) * 1.

        ;Record the new dimensions as the upper bound
        max_ns = ns
        max_nl = nl

      end

      ;The image is too small and can be enlarged
      'left' : begin

        ;Determine the new size of the image (i.e. go halfway back up
        ;toward the previously tested larger size)
        ns = ((max_ns - min_ns) * .5) + max_ns
        nl = ((max_nl - min_nl) * .5) + max_nl

        ;Delete the old image
        FILE_DELETE, output_file, /ALLOW_NONEXISTENT

        ;Resize the original image

        ;Get the task from the catalog of ENVITasks
        DRR_Task = ENVITask('DimensionsResampleRaster')

        ;Define inputs
        DRR_Task.INPUT_RASTER = ISORaster
        DRR_Task.DIMENSIONS = [ns, nl]

        ;Run the task
        DRR_Task.Execute

        ;Record the dimensions of the new image
        ns = DRR_Task.OUTPUT_RASTER.ns
        nl = DRR_Task.OUTPUT_RASTER.nl

        ;Save the new image and test the size

        ;Get the task from the catalog of ENVITasks
        ERTP_Task = ENVITask('ExportRasterToPNG')

        ;Define inputs
        ERTP_Task.INPUT_RASTER = DRR_Task.Output_Raster

        ;Define outputs
        ERTP_Task.OUTPUT_URI = output_file

        ;Run the task
        ERTP_Task.Execute

        ;Close the resampled raster
        DRR_Task.Output_Raster.close

        ;Close the PNG raster
        ERTP_Task.Output_Raster.close

        ;Record the new file size
        file_size = ((file_info(output_file)).size) * 1.

        ;Record the old dimensions as the lower bound
        min_ns = max_ns
        min_nl = max_nl

        ;Record the new dimensions as the upper bound
        max_ns = ns
        max_nl = nl

      end
    endcase
  endwhile
endif

;Close the initial raster
ISORaster.close


Categories: IDL Blog | IDL Data Point

24 Jun 2016

IDL Keyword Forwarding Perils

Author: Brian Griglak

You have to be careful when you add keywords to a routine just so you can forward them on to another routine that it invokes. It's fine when the keywords are always present, but if they are optional output keywords, you may end up copying around data that the caller of the outermost routine never sees. Here is a short example:

pro layer2, FOO=foo, BAR=bar
  compile_opt idl2
 
  print, 'Layer2, N_ELEMENTS(foo) = ' + StrTrim(N_ELEMENTS(foo),2)
  print, 'Layer2, N_ELEMENTS(bar) = ' + StrTrim(N_ELEMENTS(bar),2)
  print, 'Layer2, ARG_PRESENT(foo) = ' + StrTrim(ARG_PRESENT(foo),2)
  print, 'Layer2, ARG_PRESENT(bar) = ' + StrTrim(ARG_PRESENT(bar),2)
end
 
pro layer1, FOO=foo, BAR=bar
  compile_opt idl2
 
  print, 'Layer1, N_ELEMENTS(foo) = ' + StrTrim(N_ELEMENTS(foo),2)
  print, 'Layer1, N_ELEMENTS(bar) = ' + StrTrim(N_ELEMENTS(bar),2)
  print, 'Layer1, ARG_PRESENT(foo) = ' + StrTrim(ARG_PRESENT(foo),2)
  print, 'Layer1, ARG_PRESENT(bar) = ' + StrTrim(ARG_PRESENT(bar),2)
 
  layer2, FOO=foo, BAR=bar
end

When I call layer1 with no keywords, we see that FOO and BAR are not present in layer1, but they appear to be present in layer2:

IDL> layer1
Layer1, N_ELEMENTS(foo) = 0
Layer1, N_ELEMENTS(bar) = 0
Layer1, ARG_PRESENT(foo) = 0
Layer1, ARG_PRESENT(bar) = 0
Layer2, N_ELEMENTS(foo) = 0
Layer2, N_ELEMENTS(bar) = 0
Layer2, ARG_PRESENT(foo) = 1
Layer2, ARG_PRESENT(bar) = 1

This is because the interpreter only looks at the way layer1 invokes layer2, and has to assume that layer1 is using FOO and BAR as output keywords. While it is certainly possible that the interpreter could inspect how the foo and bar variables are used, and try to optimize them away when it realizes that these variables aren't used elsewhere inside layer1 and weren't passed into it, that would have a major impact on performance. Compiled languages can do this with an optimizing linker, but interpreted languages like IDL cannot afford to take that time every time they run a line of code.

One way to interpret the values of N_ELEMENTS() and ARG_PRESENT() is shown in this table:

                     N_ELEMENTS() EQ 0   N_ELEMENTS() GT 0
ARG_PRESENT() EQ 0   Keyword not used    Input keyword
ARG_PRESENT() EQ 1   Output keyword      In/out keyword

If I call layer1 with FOO and BAR set to a literal and an undefined variable, we see different output:

IDL> layer1, FOO=1, BAR=b
Layer1, N_ELEMENTS(foo) = 1
Layer1, N_ELEMENTS(bar) = 0
Layer1, ARG_PRESENT(foo) = 0
Layer1, ARG_PRESENT(bar) = 1
Layer2, N_ELEMENTS(foo) = 1
Layer2, N_ELEMENTS(bar) = 0
Layer2, ARG_PRESENT(foo) = 1
Layer2, ARG_PRESENT(bar) = 1

Here we see that FOO is an input to layer1 and BAR is an output. The more interesting part is in layer2, where FOO is an in/out keyword while BAR is still only an output keyword. The situation changes a little when I call layer1 with a defined variable for one of the keywords:

IDL> b2 = 2
IDL> layer1, FOO=1, BAR=b2
Layer1, N_ELEMENTS(foo) = 1
Layer1, N_ELEMENTS(bar) = 1
Layer1, ARG_PRESENT(foo) = 0
Layer1, ARG_PRESENT(bar) = 1
Layer2, N_ELEMENTS(foo) = 1
Layer2, N_ELEMENTS(bar) = 1
Layer2, ARG_PRESENT(foo) = 1
Layer2, ARG_PRESENT(bar) = 1

This time FOO is an input to layer1 and in/out to layer2, but BAR is an in/out keyword to both layer1 and layer2.

Now you may be wondering what the point of this whole analysis is. It matters when an output keyword takes a lot of time to build or a lot of space to store, and is incorrectly identified as being present. Imagine that BAR uses a gigabyte of memory: if the user calls layer1 without BAR, layer2 will still allocate that memory and return it to layer1, only for it to be thrown away when layer1 returns to its caller.

How do we defensively implement the code to prevent this waste of time and space? Unfortunately, the best solutions I've come up with all involve adding logic to layer1 to conditionally forward keywords to layer2. It matters what type of keyword we are dealing with: input vs. output vs. in/out. Pure input keywords can be forwarded blindly, while output and in/out keywords need the extra logic.

pro layer2, INPUT=input, OUTPUT=output, INOUT=inout
  compile_opt idl2
 
  print, 'Layer2, N_ELEMENTS(input) = ' + StrTrim(N_ELEMENTS(input),2)
  print, 'Layer2, N_ELEMENTS(output) = ' + StrTrim(N_ELEMENTS(output),2)
  print, 'Layer2, N_ELEMENTS(inout) = ' + StrTrim(N_ELEMENTS(inout),2)
  print, 'Layer2, ARG_PRESENT(input) = ' + StrTrim(ARG_PRESENT(input),2)
  print, 'Layer2, ARG_PRESENT(output) = ' + StrTrim(ARG_PRESENT(output),2)
  print, 'Layer2, ARG_PRESENT(inout) = ' + StrTrim(ARG_PRESENT(inout),2)
 
  output = 'output'
  inout = ISA(inout) ? inout+1 : 'new inout'
end
 
pro layer1, INPUT=input, OUTPUT=output, INOUT=inout
  compile_opt idl2
 
  print, 'Layer1, N_ELEMENTS(input) = ' + StrTrim(N_ELEMENTS(input),2)
  print, 'Layer1, N_ELEMENTS(output) = ' + StrTrim(N_ELEMENTS(output),2)
  print, 'Layer1, N_ELEMENTS(inout) = ' + StrTrim(N_ELEMENTS(inout),2)
  print, 'Layer1, ARG_PRESENT(input) = ' + StrTrim(ARG_PRESENT(input),2)
  print, 'Layer1, ARG_PRESENT(output) = ' + StrTrim(ARG_PRESENT(output),2)
  print, 'Layer1, ARG_PRESENT(inout) = ' + StrTrim(ARG_PRESENT(inout),2)
 
  if (ARG_PRESENT(output)) then begin
    if (ARG_PRESENT(inout)) then begin
      layer2, INPUT=input, OUTPUT=output, INOUT=inout
    endif else begin
      layer2, INPUT=input, OUTPUT=output
    endelse
  endif else begin
    if (ARG_PRESENT(inout)) then begin
      layer2, INPUT=input, INOUT=inout
    endif else begin
      layer2, INPUT=input
    endelse
  endelse
end

The big drawback with this approach, as you can expect, is that it is exponential in the number of keywords and quickly becomes untenable. We need a solution that is O(n) instead of O(2^n), and I've come up with two completely different solutions, each with its own pros and cons. First the simpler solution, which uses the EXECUTE() function:

pro layer1, INPUT=input, OUTPUT=output, INOUT=inout
  compile_opt idl2
 
  print, 'Layer1, N_ELEMENTS(input) = ' + StrTrim(N_ELEMENTS(input),2)
  print, 'Layer1, N_ELEMENTS(output) = ' + StrTrim(N_ELEMENTS(output),2)
  print, 'Layer1, N_ELEMENTS(inout) = ' + StrTrim(N_ELEMENTS(inout),2)
  print, 'Layer1, ARG_PRESENT(input) = ' + StrTrim(ARG_PRESENT(input),2)
  print, 'Layer1, ARG_PRESENT(output) = ' + StrTrim(ARG_PRESENT(output),2)
  print, 'Layer1, ARG_PRESENT(inout) = ' + StrTrim(ARG_PRESENT(inout),2)
 
  cmd = 'layer2'
 
  if (N_ELEMENTS(input)) then begin
    cmd += ', INPUT=input'
  endif
  if (ARG_PRESENT(output)) then begin
    cmd += ', OUTPUT=output'
  endif
  if (N_ELEMENTS(inout) || ARG_PRESENT(inout)) then begin
    cmd += ', INOUT=inout'
  endif
 
  !null = EXECUTE(cmd)
end

This approach builds up a command string to execute, starting with the name of the layer2 routine it invokes. It then conditionally appends the keywords to the command string, using N_ELEMENTS() and/or ARG_PRESENT(), depending on what type of keyword each one is. This doesn't add a lot of new code, but it takes a performance hit because EXECUTE() has to compile the command string and then execute it. One might point out that using CALL_PROCEDURE would be more efficient, but it doesn't support output and in/out keywords, so we have to use EXECUTE().

The other approach avoids the performance hit of EXECUTE(), but it is a bit more involved, requiring the creation of a new wrapper function and the use of SCOPE_VARFETCH(), which some people are leery of using.

function layer2Wrapper, _REF_EXTRA=extra
  compile_opt idl2
 
  layer2, _EXTRA=extra
 
  if (N_ELEMENTS(extra) eq 0) then return, Hash()
 
  retVal = Hash()
  foreach key, extra do begin
    retVal[key] = SCOPE_VARFETCH(key, /REF_EXTRA)
  endforeach
  return, retVal
end
 
pro layer1, INPUT=input, OUTPUT=output, INOUT=inout
  compile_opt idl2
 
  print, 'Layer1, N_ELEMENTS(input) = ' + StrTrim(N_ELEMENTS(input),2)
  print, 'Layer1, N_ELEMENTS(output) = ' + StrTrim(N_ELEMENTS(output),2)
  print, 'Layer1, N_ELEMENTS(inout) = ' + StrTrim(N_ELEMENTS(inout),2)
  print, 'Layer1, ARG_PRESENT(input) = ' + StrTrim(ARG_PRESENT(input),2)
  print, 'Layer1, ARG_PRESENT(output) = ' + StrTrim(ARG_PRESENT(output),2)
  print, 'Layer1, ARG_PRESENT(inout) = ' + StrTrim(ARG_PRESENT(inout),2)
 
  keywords = Hash()
 
  if (N_ELEMENTS(input)) then begin
    keywords['input'] = input
  endif
  if (ARG_PRESENT(output)) then begin
    myOutput = 0
    keywords['output'] = myOutput
  endif
  if (N_ELEMENTS(inout)) then begin
    keywords['inout'] = inout
  endif else if (ARG_PRESENT(inout)) then begin
    myInout = 0
    keywords['inout'] = myInout
  endif
 
  outKeys = layer2Wrapper(_EXTRA=keywords.ToStruct())
  if (outKeys.HasKey('OUTPUT')) then output = outKeys['OUTPUT']
  if (outKeys.HasKey('INOUT')) then inout = outKeys['INOUT']
end

This version of layer1 conditionally adds the keywords to a Hash, which is converted to a struct when calling the new wrapper function. By assigning the struct to the _EXTRA keyword during invocation, the keywords are properly mapped. One will note that the variables put into the Hash depend on whether it is an input or output keyword. For input keywords, I just used the variables assigned to those keywords in layer1’s signature, but for output keywords I had to create a local variable and put that in the Hash. The reason for this is that IDL structs cannot contain !NULL or undefined values, so I had to assign them a value. In this case I used 0, but depending on the expectations of layer2 a special “not set” value may need to be used.

The new layer2Wrapper() function is worth examining more closely. It is declared using _REF_EXTRA, so that we get pass by reference semantics instead of pass by value. That _REF_EXTRA bag is simply passed along to layer2, using the _EXTRA keyword like we’re supposed to. After calling layer2, the wrapper then uses SCOPE_VARFETCH() with the /REF_EXTRA keyword to grab each keyword’s value and put it into a Hash that is returned to its caller. It’s this SCOPE_VARFETCH() trick that necessitates the creation of this wrapper routine. Once layer2Wrapper returns that Hash to layer1, we then copy any of the existing values out of the Hash into the appropriate output keyword variables.

Both solutions may seem like a lot of work for what seems like a minor inconvenience, but there are real world cases where we could waste a lot of time and/or memory on keywords that only exist for parts of the callstack and are never specified by the original caller. A prime example of this is in ENVIRaster::GetData(). This method always returns an array of pixel values, but there is the optional PIXEL_STATE keyword that can be used to retrieve a byte array of the same dimensions that indicates which pixels are good and which are bad. Without using a solution like the ones I’ve presented, we would end up with the PIXEL_STATE array being allocated and calculated every time a user calls GetData(), whether they specified that keyword or not.
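To make that concrete, here is a minimal sketch of how a routine of your own that wraps ENVIRaster::GetData might conditionally forward PIXEL_STATE, so the byte array is only computed when the outermost caller actually asked for it. The routine name and its arguments are hypothetical; only the GetData keywords come from the ENVI API.

function get_subset, raster, sub_rect, PIXEL_STATE=pixelState
  compile_opt idl2

  if (ARG_PRESENT(pixelState)) then begin
    ; The caller wants the pixel-state array, so let GetData compute it
    return, raster.GetData(SUB_RECT=sub_rect, PIXEL_STATE=pixelState)
  endif

  ; The caller did not ask for it, so don't force GetData to build it
  return, raster.GetData(SUB_RECT=sub_rect)
end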

Categories: IDL Blog | IDL Data Point
