28 Aug 2016

Upsampling images using a Lagrange polynomial method

Author: Atle Borsholm

A few weeks ago I posted about using the Lanczos kernel for resampling images to a higher resolution. This week I am continuing with the same example, but adding in the Lagrange resampling method. Both Lagrange and Lanczos have some similar characteristics in that they show better detail than a purely linear interpolation. Both methods can also be adapted to an irregularly gridded dataset instead of the raster images used in my examples here. The code produces 4 upsampled images using different methods, and the results are shown below.
 
function lanczos, data
  compile_opt idl2

  ; Lanczos-3 kernel sampled at 1/4-pixel spacing:
  ; L(x) = sinc(x) * sinc(x/3) = 3*sin(pi*x)*sin(pi*x/3) / (pi^2 * x^2)
  xval = [-3:3:.25]
  lanc3 = 3*sin(!pi*xval)*(sin(!pi*xval/3d)/!pi/!pi/xval/xval)
  lanc3[where(xval eq 0)] = 1
  l2d = lanc3 # lanc3
  ; high resolution version: place the original samples on a 4x grid of
  ; zeros, then convolve with the 2-D kernel to fill in the gaps
  msk = fltarr(data.dim*4)
  msk[0:*:4,0:*:4] = data
  hi = convol(msk, l2d, /edge_trunc)
  hi = byte(round(hi>0<255))
  return, hi
end
 
 
function lagrange, a, x, y
  compile_opt idl2, logical_predicate

  ; Integer part of the sample coordinates and the fractional remainders
  xf = floor(x)
  yf = floor(y)
  x1 = x - xf
  y1 = y - yf
  ; 4-point stencil around each requested location
  off = [-1,0,1,2]
  retval = replicate(0., size(x, /DIM))
  ; Build the 1-D Lagrange basis weights in x and y for every output pixel
  weightx = replicate(1., [size(x1, /DIM),4])
  weighty = replicate(1., [size(x1, /DIM),4])
  for i=0,3 do begin
    for j=0,3 do begin
      if i ne j then begin
        weightx[*,*,i] *= (x1 - off[j]) / (off[i] - off[j])
        weighty[*,*,i] *= (y1 - off[j]) / (off[i] - off[j])
      endif
    endfor
  endfor
  ; Accumulate the weighted 4x4 neighborhood for every output pixel
  for i=0,3 do begin
    for j=0,3 do begin
      retval += weightx[*,*,j] * weighty[*,*,i] * a[xf+off[j], yf+off[i]]
    endfor
  endfor
  return, retval
end
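As a quick sanity check on the weight construction (this check procedure is my own sketch, not part of the original post), the 1-D Lagrange weights should reduce to selecting the nearest sample when the fractional offset is zero, and they always sum to 1:

pro lagrange_weight_check
  compile_opt idl2, logical_predicate

  ; 4-point Lagrange basis weights for a single fractional offset x1,
  ; using the same stencil offsets as the lagrange function above
  x1 = 0.0
  off = [-1, 0, 1, 2]
  w = replicate(1d, 4)
  for i=0,3 do begin
    for j=0,3 do begin
      if i ne j then w[i] *= (x1 - off[j]) / (off[i] - off[j])
    endfor
  endfor
  print, w         ; expect [0, 1, 0, 0] for x1 = 0
  print, total(w)  ; the basis weights sum to 1 for any x1
end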
 
pro upsample_example
  compile_opt idl2,logical_predicate
 
  ; Read the original image data
  f = filepath('moon_landing.png', subdir=['examples','data'])
  data = read_png(f)
  dim = data.dim
 
  window, xsize=dim[0], ysize=dim[1], 0, title='Original full size'
  tv, data
 
  ; Define a zoomed-in area on the flag.
  xs = 120
  ys = 105
  dx = 60
  dy = 100
 
  ; display upsampled 4x with pixel replication
  window, xsize=4*dx, ysize=4*dy, 1, title='CONGRID pixel-replication'
  tv, congrid(data[xs:xs+dx-1,ys:ys+dy-1],4*dx,4*dy)
  write_png,'moon-pixel-replication.png',tvrd()
 
  ; display upsampled 4x with bilinear interpolation
  window, xsize=4*dx, ysize=4*dy, 2, title='CONGRID linear'
  tv, congrid(data[xs:xs+dx-1,ys:ys+dy-1],4*dx,4*dy,/interp)
  write_png,'moon-bilinear.png',tvrd()
 
  ; display upsampled 4x with Lanczos convolution
  window, xsize=4*dx, ysize=4*dy, 3, title='Lanczos'
  tv, (lanczos(data))[xs*4:xs*4+dx*4-1,ys*4:ys*4+dy*4-1]
  write_png,'moon-lanczos.png',tvrd()
 
  ; Lagrange
  window, xsize=4*dx, ysize=4*dy, 4, title='Lagrange'
  xcoord = [float(xs):xs+dx:0.25]
  ycoord = [float(ys):ys+dy:0.25]
  tv, byte(lagrange(float(data), $
    xcoord # replicate(1,1,ycoord.length), $
    replicate(1,xcoord.length) # ycoord)>0<255)
  write_png,'moon-lagrange.png',tvrd()
end
 
 
[Result images: pixel replication, bi-linear interpolation, Lanczos resampling, and Lagrange resampling]


Categories: IDL Blog | IDL Data Point


18 Aug 2016

Minimum Area Bounding Box

Author: Austin Coates

I find myself drawing bounding boxes around things a lot. I don't know why I do it so much, but for whatever reason I do, and as of late I wanted to up my bounding box game. In the past, I have simply used the global min and max in both the x and y directions to get the coordinates to form the bounding box; however, this is not always the most elegant solution. For example, when my data follows somewhat of a linear trend, I am left with ample white space not filled by any valuable information.

Figure 1: Simple Bounding Box

Figure 2: Minimum Area Bounding Box

This got me thinking: why am I not simply drawing a bounding box around only the data? Sounds great, right? The only problem was I had no idea how to do this. Luckily, there is this thing called the internet, and it has vast stores of information and ideas to pull from. I found a very elegant solution by Jesse Buesking on stackoverflow.com, posted on November 9, 2015. The solution was written in Python, which I converted to IDL. My goal in posting this is to show an awesome way to draw a bounding box and also an example of translating between Python and IDL.

function bounding_box, pts = pts, plot_results = plot_results
  compile_opt idl2

  ; Get the x and y coordinates
  xs = pts[0,*]
  ys = pts[1,*]

  ; Find the convex hull of the points
  Triangulate, xs, ys, triangles, hull, CONNECTIVITY=connectivity

  ; Order the hull points in a [2,n] array
  hull_points = [[xs[hull]] ## 1, [ys[hull]] ## 1]

  ; Calculate edge angles
  edges = hull_points[*,1:-1] - hull_points[*,0:-2]
  angles = atan(edges[1,*], edges[0,*])
  pi2 = !DPI/2.

  angles = abs(angles - floor(angles / pi2) * pi2)
  angles = angles[sort(angles)]
  angles = angles[UNIQ(angles)]

  ; Build a rotation matrix for each candidate angle
  rotations = TRANSPOSE([[cos(angles)],[cos(angles-pi2)],[cos(angles+pi2)],[cos(angles)]])
  rotations = REFORM(rotations, [2,2,n_elements(angles)])

  ; Apply each rotation to the hull points
  rot_points = fltarr(n_elements(hull_points)/2, 2, n_elements(angles))
  size_rot = size(rotations)
  for group = 0, size_rot[3]-1 do begin
    for row = 0, size_rot[2]-1 do begin
      rot_points[*,row,group] = TRANSPOSE(rotations[*,row,group]) # hull_points
    endfor
  endfor

  ; Find the bounding extents for each rotation
  min_x = min(rot_points[*,0,*], DIMENSION=1, /NAN)
  max_x = max(rot_points[*,0,*], DIMENSION=1, /NAN)
  min_y = min(rot_points[*,1,*], DIMENSION=1, /NAN)
  max_y = max(rot_points[*,1,*], DIMENSION=1, /NAN)

  ; Find the box with the smallest area
  areas = (max_x - min_x) * (max_y - min_y)
  min_val = min(areas, best_idx)

  ; Rotate the corners of the best box back to the original frame
  x1 = max_x[best_idx]
  x2 = min_x[best_idx]
  y1 = max_y[best_idx]
  y2 = min_y[best_idx]
  r = rotations[*,*,best_idx]

  rval = fltarr(2,4)
  rval[*,0] = TRANSPOSE(TRANSPOSE([x1, y2]) # TRANSPOSE(r))
  rval[*,1] = TRANSPOSE(TRANSPOSE([x2, y2]) # TRANSPOSE(r))
  rval[*,2] = TRANSPOSE(TRANSPOSE([x2, y1]) # TRANSPOSE(r))
  rval[*,3] = TRANSPOSE(TRANSPOSE([x1, y1]) # TRANSPOSE(r))

  ; Display the results
  if KEYWORD_SET(plot_results) then begin
    p = SCATTERPLOT(xs, ys, SYM_COLOR='Red', SYM_FILL_COLOR='Red', SYM_FILLED=1, $
      XRANGE=[min(rval[0,*])-1, max(rval[0,*])+1], YRANGE=[min(rval[1,*])-1, max(rval[1,*])+1])
    p = POLYGON(rval, /FILL_BACKGROUND, $
      FILL_COLOR="light steel blue", PATTERN_ORIENTATION=45, $
      PATTERN_SPACING=4, /DATA)
  endif

  return, rval
end
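Here is a quick usage sketch (the point set and names are illustrative, not from the original post) that generates noisy points along a linear trend, like the data in Figures 1 and 2, and draws the fitted box:

pro bounding_box_example
  compile_opt idl2

  ; Points that roughly follow a linear trend
  n = 200
  x = randomu(seed, n) * 10
  y = 0.5 * x + randomn(seed, n) * 0.5
  pts = transpose([[x], [y]])   ; [2,n] array expected by bounding_box

  box = bounding_box(pts=pts, /plot_results)
  print, 'Box corners ([2,4] array):'
  print, box
end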

Source of the original Python code: http://stackoverflow.com/questions/13542855/python-help-to-implement-an-algorithm-to-find-the-minimum-area-rectangle-for-gi/33619018#33619018


Categories: IDL Blog | IDL Data Point


12 Aug 2016

Compressing hyperspectral data using Motion JPEG2000

Author: Brian Griglak

As I've blogged about before, you have to pay attention to a raster's interleave so that you can iterate over the data efficiently. That post was ENVI specific; this time I'm here to present a way to store a data cube for efficient BIL-mode access that is pure IDL.

As more hyperspectral (HSI) sensors are produced, data cubes with hundreds or even thousands of bands will become more common.  These sensors are usually pushbroom scanners, with a fixed number of samples and bands and a highly variable number of lines collected.  The collection modality lends itself to the BIP and BIL storage interleaves, so each scanline of pixels can be written to disk without having to use large amounts of memory to hold many lines for BSQ or tiled output.  If you are using HSI data, you are probably also interested in most if not all of the bands for your analysis, so BIP and BIL interleaves are more efficient for those data access patterns.

The challenge of HSI data is that it can be large, due to the high number of bands.  Some form of data compression would be useful, as long as it is lossless – why spend the money collecting all those bands if you are going to lose the details with compression or interpolation artifacts?

There are a few options for compressing a datacube in IDL:
  • zip the original file with FILE_GZIP or FILE_ZIP
  • write it out as a compressed TIFF with WRITE_TIFF
  • write it as a Motion JPEG2000 file with the IDLffMJPEG2000 object

The zip option is not attractive in that you have to unzip the entire file to use it, so there aren't any space savings when you actually work with the data; in fact it's worse than using the original ENVI file. The IDL support for the TIFF format allows for 4 different compression modes:
  • None
  • LZW
  • PackBits
  • JPEG

The None option is pointless, since it doesn’t perform any compression.  The JPEG option is lossy and would introduce artifacts, but it can’t be used for this data in the first place.  Firstly, it only supports 8-bit values, so any other datatypes will fail.  Secondly, if you send more than 3 bands of data into the method, you get back the curious error message “Bogus input colorspace”.  This message actually comes from the libTIFF library, trying to tell you it can’t JPEG compress more than 3 band data.
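A PackBits variant, used for the timing table below, presumably differs only in the COMPRESSION value passed to WRITE_TIFF (0 = none, 1 = LZW, 2 = PackBits, 3 = JPEG); the helper here is my own sketch, not code from the original post:

pro compress_to_tiff_packbits, raster, file
  compile_opt idl2

  ; Same pattern as compress_to_tiff below, but with PackBits compression
  dat = raster.GetData()
  WRITE_TIFF, file, dat, /SIGNED, /SHORT, COMPRESSION=2
end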

The IDLffMJPEG2000 object is IDL's interface to the Motion JPEG2000 compression format. The Motion JPEG2000 standard is an extension of JPEG2000, the wavelet-based successor to the venerable JPEG format. The JPEG2000 format supports both lossy and lossless compression, and we'll use the latter here to avoid introducing any artifacts to the data. Unlike most video formats, Motion JPEG2000 only performs intraframe compression, no interframe compression. While this likely results in lower compression ratios than those other formats, it also means that decompressing a single frame of the animation is faster because it doesn't need to load other frames for interframe interpolation. There are a few degrees of freedom available when creating Motion JPEG2000 files: BIT_RATE, N_LAYERS, N_LEVELS, PROGRESSION, and TILE_DIMENSIONS. I did not experiment with how changing these values affects the compression ratio or the time needed to decompress, so playing with these on your own may be worthwhile.

First I'll present the code I used to create the different compressed formats, and then show the compression ratios and the time to compress for a couple of different datacubes.

pro compress_to_zip, raster, file
  compile_opt idl2
 
  FILE_GZIP, raster.URI, file
end
 
pro compress_to_tiff, raster, file
  compile_opt idl2
 
  dat = raster.GetData()
  WRITE_TIFF, file, dat, /SIGNED, /SHORT, COMPRESSION=1
end
 
pro compress_to_mj2k, raster, file
  compile_opt idl2
 
  dat = raster.GetData()
  signed = !True
  if (raster.Data_Type eq 'byte') then begin
    depth = 8
  endif else if (raster.Data_Type eq 'int') then begin
    depth = 16
  endif else if (raster.Data_Type eq 'long') then begin
    depth = 24
  endif else if (raster.Data_Type eq 'uint') then begin
    depth = 16
    signed = !False
  endif else if (raster.Data_Type eq 'ulong') then begin
    depth = 24
    signed = !False
  endif else begin
    Message, 'Unsupported pixel datatype : ' + raster.Data_Type
  endelse
  mj2k = IDLffMJPEG2000(file, /WRITE, /REVERSIBLE, $
                        BIT_DEPTH=depth, SIGNED=signed, $
                        DIMENSIONS=[raster.nColumns, raster.nBands])
  for row = 0, raster.nRows-1 do begin
    !null = mj2k.SetData(Reform(dat[*, *, row]))
  endfor
  print, mj2k.Commit(30000)
end

 

The IDLffMJPEG2000 class requires you to add data one frame at a time (or a subset of a frame), hence the for loop over the number of rows in the raster. I did have to stick a Reform() call in there to collapse the 3-D slice down to 2-D, since the last dimension is always 1. This object is actually threaded, so the SetData() calls don't block; they queue up the data to be compressed into the output file. That is why the Commit() function has an optional timeout argument, which is how many milliseconds to wait before returning. In my testing it doesn't take much time at all to do the commit; the processor can keep up with the compression requests.

Now the interesting part, compression performance:


Each entry below lists the compressed size in bytes, the size relative to the original, and the time to compress.

Hyperion, 254 cols, 242 bands, 1310 rows, original size 161046160 bytes:
  GZIP            96668737  (60.0%,   7.2 sec)
  ZIP             96668847  (60.0%,  10.6 sec)
  TIFF LZW       118720382  (73.7%,   4.8 sec)
  TIFF PackBits  133119274  (82.7%,   2.8 sec)
  MJP2            64289348  (39.9%,  13.2 sec)

AVIRIS, 614 cols, 224 bands, 1024 rows, original size 281674956 bytes:
  GZIP           213266080  (75.7%,  15.2 sec)
  ZIP            213266228  (75.7%,  24.0 sec)
  TIFF LZW       271876484  (96.5%,   7.8 sec)
  TIFF PackBits  277500090  (98.5%,   3.8 sec)
  MJP2           129922225  (60.9%)

AVIRIS, 952 cols, 224 bands, 3830 rows, original size 1633479680 bytes:
  GZIP           784610303  (48.0%,  60.5 sec)
  ZIP            784610437  (48.0%,  92.3 sec)
  TIFF LZW      1015098818  (62.1%,  44.8 sec)
  TIFF PackBits 1076409756  (65.9%,  25.4 sec)
  MJP2           486091874  (29.8%, 109.1 sec)

 

It isn't surprising that the GZIP and ZIP results are almost identical, though GZIP is way faster. It also isn't surprising that the LZW mode for TIFF is more efficient than the PackBits mode, though slower. It is somewhat surprising that the LZW compression was so much worse than GZIP and ZIP, and that Motion JPEG2000 is a bit better than GZIP and ZIP.

Now let’s say you wanted to read one of the Motion JPEG2000 compressed files back, but only need a spatial and/or spectral subset of it.  Here is a function that will do just that:

function read_mj2k, file, SUB_RECT=subRect, BANDS=bands
  compile_opt idl2
 
  mj2k = IDLffMJPEG2000(file)
 
  mj2k.GetProperty, DIMENSIONS=dims, N_FRAMES=nFrames, $
                    BIT_DEPTH=depth, SIGNED=signed
 
  if (~ISA(subRect)) then begin
    subRect = [0, 0, dims[0]-1, nFrames-1]
  endif
  if (~ISA(bands)) then begin
    bands = INDGEN(dims[1])
  endif
 
  startFrame = subRect[1]
  endFrame = subRect[3]
  leftCol = subRect[0]
  rightCol = subRect[2]
  topRow = MIN(bands)
  bottomRow = MAX(bands)
 
  if (depth le 8) then begin
    type = 1 ; Byte
  endif else if (depth le 16) then begin
    type = signed ? 2 : 12 ; Int or UInt
  endif else begin
    type = signed ? 3 : 13 ; Long or ULong
  endelse
 
  outData = MAKE_ARRAY(rightCol-leftCol+1, N_ELEMENTS(bands), $
                       endFrame-startFrame+1, TYPE=type)
  region = [leftCol, 0, rightCol-leftCol+1, dims[1]]
 
  for frame = 0, endFrame-startFrame do begin
    dat = mj2k.GetData(frame+startFrame, REGION=region)
    outData[*, *, frame] = dat[*, bands]
  endfor
 
  return, outData
end

 

I used the same keyword interpretation as ENVIRaster::GetData():

  • SUB_RECT = [x1, y1, x2, y2]
  • BANDS – any array of nonrepeating band indices
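For example (the file name and subset ranges here are hypothetical), reading back a 100 x 100 pixel spatial chip and the first 50 bands would look something like this:

; hypothetical file name and subset, for illustration only
cube = read_mj2k('aviris_subset.mj2', $
                 SUB_RECT=[200, 300, 299, 399], $   ; x1, y1, x2, y2
                 BANDS=indgen(50))                  ; first 50 bands
help, cube   ; [100 columns, 50 bands, 100 lines] in BIL order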


Categories: IDL Blog | IDL Data Point


9 Aug 2016

Basic feature extraction with ENVI ROIs

Author: Barrett Sather

The ENVI Feature Extraction module allows a user to extract certain features from an image. The first step is to segment the image into regions. Once this is done, you can use spatial and spectral qualities of the regions to pull out targets.

For this example, let’s find the traffic lights in this image taken from Wikipedia’s page on “traffic light”. https://en.wikipedia.org/wiki/Traffic_light

The first step is to do the segmentation with ENVI's FXSegmentation task. I've set a merge value of 85 and a segment value of 95, which I determined to be good values for this dataset through manual inspection in ENVI.

Once these segments are created, they can be converted to ENVI ROIs. In this case, I used the ImageThresholdToROI task.

From here, we can look at qualities of each region and create a scoring algorithm for how likely it is that each ROI is the feature we are looking for. I felt that looking for the color red was not viable, since the algorithm would break down when the light changes. So instead, because the target regions we want to find are circular, let's look for that.

There are two things we know about circles that we can check for. We know that they are symmetrical, and we know the relationship of the area and circumference as the radius of the circle increases. To start, I’ve set up the pixel coordinates of each ROI so that the origin is the average x/y location for that region. I’ve also calculated the distance away from the origin at each pixel.

x_avg = total(addr[0,*])/n_pix
y_avg = total(addr[1,*])/n_pix
x_coords = reform(addr[0,*] - x_avg)
y_coords = reform(addr[1,*] - y_avg)
power = (x_coords^2 + y_coords^2)

To test for symmetry, take the min and max of x and y, and add them together. The closer this value is to zero, the more symmetric the ROI is.

abs(max(x_coords)+min(x_coords))+ abs(max(y_coords)+min(y_coords))

To test how circular the region is, we can look at how the squared distance from the origin grows across the region. For a filled circle, the number of pixels within a given squared distance of the center grows linearly with that squared distance (the enclosed area is proportional to it), so sorting the power values and plotting them against pixel index gives an approximately straight line, and we know roughly what slope to expect:

Slope ≈ 1/3

Because a perfect symmetry score is zero, let's use the same scoring convention for this measure: the closer the value is to zero, the better.

line = linfit(lindgen(n_elements(power)), power[sort(power)])
score = abs(line[1] - .3333)

The final step is to assign weights (I used weights of 1) and calculate an overall score. The full code for extracting traffic lights (or any other circles) can be found at the bottom of this post.
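As a minimal sketch (the weight variables are illustrative; the full program below simply adds the two terms, which is the same as using weights of 1), the combined score could look like:

; hypothetical explicit weights; with both set to 1 this matches the full program below
w_symmetry = 1.0
w_circle   = 1.0
symmetry_score = abs(max(x_coords) + min(x_coords)) + abs(max(y_coords) + min(y_coords))
circle_score   = abs(line[1] - .3333)
this_score = w_symmetry * symmetry_score + w_circle * circle_score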

This method for feature extraction takes a while to develop and perfect, which leads me to some exciting news for those who need to quickly develop feature extraction models. There is another method we have been developing here in house to do feature extraction, called MEGA, which is available through our custom solutions group. This is a machine learning tool that takes in examples of a feature you are looking for and then generates a heat map of where that feature is likely to be located in an image.

Stay tuned for an in-depth look at how this new method compares to classic feature extraction techniques like the one I've presented above.

 

pro traffic_light_fx
  compile_opt idl2

  file = 'C:\Blogs\FX_to_ROIs\Stevens_Creek_Blvd_traffic_light.jpg'
  e = envi()
  view = e.GetView()

  ; Segment the image with FX segmentation only. High edge and merge settings.
  t0 = ENVITask('FXSegmentation')
  raster = e.OpenRaster(file)
  t0.INPUT_RASTER = raster
  layer = view.CreateLayer(raster)
  t0.OUTPUT_RASTER_URI = e.GetTemporaryFilename('.dat')
  t0.MERGE_VALUE = 85.0
  t0.SEGMENT_VALUE = 90.0
  t0.Execute

  ; Convert each segment label into an ENVI ROI
  t1 = ENVITask('ImageThresholdToROI')
  t1.INPUT_RASTER = t0.OUTPUT_RASTER
  t1.OUTPUT_ROI_URI = e.GetTemporaryFilename('.xml')
  loop_max = max((t0.OUTPUT_RASTER).GetData(), MIN=loop_min)
  num_areas = loop_max - loop_min + 1
  t1.ROI_NAME = 'FX_Area_' + strtrim(indgen(num_areas)+1, 2)
  t1.ROI_COLOR = transpose([[bytarr(num_areas)],[bytarr(num_areas)],[bytarr(num_areas)+255]])
  arr = indgen(num_areas) + loop_min
  t1.THRESHOLD = transpose([[arr],[arr],[intarr(num_areas)]])
  t1.Execute

  ; Score each sufficiently large ROI for symmetry and circularity
  allROIs = t1.OUTPUT_ROI
  c_scores = []
  c_loc = []
  for i=0, n_elements(allROIs)-1 do begin
    addr = (allROIs[i]).PixelAddresses(raster)
    n_pix = n_elements(addr)/2
    if n_pix gt 60 then begin
      x_avg = total(addr[0,*])/n_pix
      y_avg = total(addr[1,*])/n_pix
      x_coords = reform(addr[0,*] - x_avg)
      y_coords = reform(addr[1,*] - y_avg)
      power = (x_coords^2 + y_coords^2)
      line = linfit(lindgen(n_elements(power)), power[sort(power)])
      this_score = abs(max(x_coords) + min(x_coords)) + $
                   abs(max(y_coords) + min(y_coords)) + abs(line[1]-.3333)
      c_scores = [c_scores, this_score]
      c_loc = [c_loc, i]
    endif
  endfor

  ; Display the four best-scoring (lowest) ROIs
  idx = sort(c_scores)
  sort_scores = c_scores[idx]
  sort_locs = c_loc[idx]
  for i=0, 3 do begin
    lightROI = allROIs[sort_locs[i]]
    !null = layer.AddRoi(lightROI)
  endfor
end


Categories: ENVI Blog | Imagery Speaks


4 Aug 2016

Beware of the Diamond Relationship: The danger of multiple class inheritances

Author: Benjamin Foreback

When developing object-oriented IDL programs, a commonly used class is IDL_Object. This convenient superclass provides the subclass that inherits it with IDL's _overload functionality. However, you can run into a problem if you try to write a class that inherits more than one superclass that in turn inherits IDL_Object.

Here are some class definition routines that illustrate the problem:

pro Superclass_A__define
  !null = {Superclass_A, $
    inherits IDL_Object}
end

pro Superclass_B__define
  !null = {Superclass_B, $
    inherits IDL_Object}
end

pro Subclass_S__define
  !null = {Subclass_S, $
    inherits Superclass_A, $
    inherits Superclass_B}
end

This can be called a diamond relationship because of the diamond that it makes in a class diagram.

Unfortunately, attempting to run this code would result in an error.

% Conflicting or duplicate structure tag definition: IDL_OBJECT_TOP.
% Execution halted at: SUBCLASS_S__DEFINE

Since there is more than one way to write code, there are a couple of alternatives to avoid this unwanted behavior resulting from the diamond relationship.

Only inherit base level classes (horizontal relationships)

If you want to avoid confusion as to what is inherited by inherited classes, then you could structure the classes so that you only inherit base classes, which are classes that don't inherit anything else. In this model, Superclass_A and Superclass_B would be the base classes. You wouldn't instantiate or call methods on them directly; instead you would write new classes, Class_A and Class_B, that inherit both the base class and IDL_Object (also a base class). These new subclasses would then be the ones you would interact with.

 

Now Subclass_S would simply inherit the two base classes plus IDL_Object.
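A minimal sketch of what those definitions could look like (the field names are placeholders, not from the original post):

pro Superclass_A__define
  ; base class: holds its own data, does not inherit IDL_Object
  !null = {Superclass_A, a_data: 0L}
end

pro Superclass_B__define
  !null = {Superclass_B, b_data: 0L}
end

pro Class_A__define
  ; the class you interact with: base class plus IDL_Object
  !null = {Class_A, inherits Superclass_A, inherits IDL_Object}
end

pro Class_B__define
  !null = {Class_B, inherits Superclass_B, inherits IDL_Object}
end

pro Subclass_S__define
  ; IDL_Object appears exactly once in the inheritance tree
  !null = {Subclass_S, inherits Superclass_A, inherits Superclass_B, inherits IDL_Object}
end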

 

Use a linear inheritance structure (vertical relationships)

Another option is to use an inheritance structure that is completely linear. This means that each class inherits one and only one superclass.

This structure is simple and very safe in regards to avoiding conflicting superclasses. The drawback is that when you define Superclass_B to inherit Superclass_A, you are stating that B can be identified as A. In other words, Obj_Isa(B, 'Superclass_A') would return true. You may not necessarily want to associate the two together, in which case this would not be a suitable way to define your classes.
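A sketch of the linear arrangement (again illustrative, not code from the original post):

pro Superclass_A__define
  !null = {Superclass_A, inherits IDL_Object}
end

pro Superclass_B__define
  ; B "is a" A, so Obj_Isa(b, 'Superclass_A') returns true
  !null = {Superclass_B, inherits Superclass_A}
end

pro Subclass_S__define
  !null = {Subclass_S, inherits Superclass_B}
end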

Other diamond relationships

In addition to IDL_Object, the diamond relationship can cause problems in many other cases. For instance, when the superclass at the top of the diamond contains state or a member variable, there are possible conflicts when the intermediate classes try to access or modify that single shared superclass. In general, it's a relationship that you want to avoid.

These are not the only two solutions but just a couple of suggestions for how to build class relationships that avoid conflicts. If all else fails, draw a class diagram. After all, a picture is worth a thousand lines of code, right?


Categories: IDL Blog | IDL Data Point





© 2017 Exelis Visual Information Solutions, Inc., a subsidiary of Harris Corporation