Continuum Removal

Author: Austin Coates

Recently, I was given the chance to practice some spectroscopy, and in preparation for the project I realized that I did not have a simple way to visualize the variations in different absorption features between very discrete wavelengths. The method that I elected to employ for this task is called continuum removal (Kokaly, Despain, Clark, & Livo, 2007). It essentially normalizes the data so that different spectra can be more easily compared with one another.

To use the algorithm, you first select the region that you are interested in (for me this was between 550 nm and 700 nm; this is the region of my spectra that deals with chlorophyll absorption and pigment). Once the region is selected, a linear model is fit between the two endpoints; this line is called the continuum. The continuum represents the hypothetical background absorption, artifact, or other absorption feature that acts as the baseline against which the target features are compared (Clark, 1999). Once the continuum has been set, the continuum removal process can be performed on all spectra in question using the following equation (Kokaly, Despain, Clark, & Livo, 2007).

RC = Ro / RL

where RC is the resulting continuum-removed spectrum, RL is the continuum line, and Ro is the original reflectance value.
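The arithmetic is simple enough to sketch outside of IDL. Below is a minimal Python illustration (not the ENVI/IDL tool shown later in this post) of fitting the continuum line between the two feature endpoints and dividing it out; the wavelengths and reflectance values are made up for the example.

```python
def continuum_removal(wl, refl, left, right):
    """Fit the straight continuum line RL between the two feature endpoints,
    then divide each reflectance Ro by it: RC = Ro / RL."""
    i0, i1 = wl.index(left), wl.index(right)
    y0, y1 = refl[i0], refl[i1]
    slope = (y1 - y0) / (right - left)
    rc = []
    for i in range(i0, i1 + 1):
        rl = y0 + slope * (wl[i] - left)  # continuum value at this wavelength
        rc.append(refl[i] / rl)           # continuum-removed reflectance
    return rc

# Made-up reflectances with an absorption dip between 550 nm and 700 nm
wl = [550, 600, 650, 700]
refl = [0.10, 0.06, 0.08, 0.30]
rc = continuum_removal(wl, refl, 550, 700)
```

Note that the two endpoints always map to exactly 1.0, and the absorption feature shows up as values below 1.0, which is what makes continuum-removed spectra directly comparable.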

Figure 1: Original spectra of two healthy plants. The dotted line denotes the continuum line. The x axis shows wavelengths in nm and the y axis represents reflectance.

Figure 2: The continuum removal for wavelengths 550 nm - 700 nm.


The resulting code gives you a tool that will take in two spectral libraries, each containing one spectrum, and return two plots similar to what is shown in Figure 1 and Figure 2.


pro Continuum_Removal

compile_opt IDL2


; Paths to the two input spectral libraries (set these before running)

Spectra_File_1  =

Spectra_File_2 =


; Find the bounds for the feature

FB_left = 550

FB_right = 700


; Open Spectra 1

oSLI1 = ENVISpectralLibrary(Spectra_File_1)

spectra_name = oSLI1.SPECTRA_NAMES

Spectra_Info_1 = oSLI1.GetSpectrum(spectra_name)


; Open Spectra 2

oSLI2 = ENVISpectralLibrary(Spectra_File_2)

spectra_name = oSLI2.SPECTRA_NAMES

Spectra_Info_2 = oSLI2.GetSpectrum(spectra_name)


; Get the wavelengths

wl = Spectra_Info_1.wavelengths


; Create Bad Bands List (this removes some regions of the spectra associated with water vapor absorption)

bb_range = [[926,970],[1350,1432],[1796,1972],[2349,2500]]

bbl = fltarr(n_elements(wl))+1

dims = size(bb_range, /DIMENSIONS)

for i = 0 , dims[1]-1 do begin

  range = bb_range[*,i]

  p1 = where(wl eq range[0])

  p2 = where(wl eq range[1])

  bbl[p1:p2] = !VALUES.F_Nan

endfor



;Plot oSLI1 / oSLI2

p = plot(wl, Spectra_Info_1.spectrum*bbl, xrange = [min(wl, /nan),max(wl, /nan)],$

  yrange=[0,max([Spectra_Info_1.spectrum*bbl,Spectra_Info_2.spectrum*bbl], /nan)], thick = 2, color = 'blue')


p = plot(wl, Spectra_Info_2.spectrum*bbl, /overplot, thick = 2, color = 'green')


; create the linear segment

Spectra_1_y1 = Spectra_Info_1.spectrum[where( wl eq FB_left)]

Spectra_1_y2 = Spectra_Info_1.spectrum[where( wl eq FB_right)]

pl_1 = POLYLINE([FB_left,FB_right], [Spectra_1_y1, Spectra_1_y2], /overplot, /data, thick = 2, LINESTYLE = '--')

Spectra_2_y1 = Spectra_Info_2.spectrum[where( wl eq FB_left)]

Spectra_2_y2 = Spectra_Info_2.spectrum[where( wl eq FB_right)]

pl_2 = POLYLINE([FB_left,FB_right], [Spectra_2_y1, Spectra_2_y2], /overplot, /data, thick = 2, LINESTYLE = '--')


; Get the equation of the line

LF_1 = LINFIT([FB_left,FB_right], [Spectra_1_y1, Spectra_1_y2])

LF_2 = LINFIT([FB_left,FB_right], [Spectra_2_y1, Spectra_2_y2])


; Get the values between the lower and upper bounds

x_vals = wl[where(wl eq FB_left) : where(wl eq FB_right)]


; Compute the continuum line

RL_1 = LF_1[0] + LF_1[1]* x_vals

RL_2 = LF_2[0] + LF_2[1]* x_vals


; Perform Continuum Removal

Ro_1 = Spectra_Info_1.spectrum[ where(wl eq FB_left) : where(wl eq FB_right)]

RC_1 =  Ro_1 / RL_1

Ro_2 = Spectra_Info_2.spectrum[ where(wl eq FB_left) : where(wl eq FB_right)]

RC_2 = Ro_2 / RL_2


; Plot the new Continuum Removal Spectra

pl_RC_1 = plot(x_vals, RC_1, color = 'Blue', xrange = [min(x_vals, /NAN), max(x_vals, /NAN)] )

pl_RC_2 = plot(x_vals, RC_2, color = 'Green', /overplot)

end




Kokaly, R. F., Despain, D. G., Clark, R. N., & Livo, K. E. (2007). Spectral analysis of absorption features for mapping vegetation cover and microbial communities in Yellowstone National Park using AVIRIS data.

Clark, R. N. (1999). Spectroscopy of rocks and minerals, and principles of spectroscopy. Manual of Remote Sensing, 3, 3-58.


Categories: IDL Blog | IDL Data Point





Just an analyst looking at the world

Author: James Lewis

As a geospatial analyst, I mostly use ENVI and IDL to look at the world in a rather flat, two-dimensional way when it comes to data analytics. However, LiDAR, drones, advancements in photogrammetry, and temporal analytics within ENVI have given me the ability to analyze data in ways that were previously too time consuming, too difficult, or simply not available.

I can use tools like the photogrammetry suite to create dense point clouds over areas that would be difficult to cover at the same fidelity with airborne platforms, or even to fly over at all. As small satellites come online, they will give us coverage and revisit times unlike anything we've had before!

What will we do with all that data?  

Utilizing the spatiotemporal tools in ENVI, we can gain information from days, weeks, months, or years’ worth of collects that classic change detection analytics would not support.

Point cloud drone collect | Temporal analysis


LiDAR data has become more available to analysts, and we need to be able to exploit it beyond looking at a bunch of points. Using the ENVI LiDAR tools, you can quickly and easily build robust elevation models to be used for everything from creating an ortho-mosaic to a line-of-sight analysis. We can also extract 3D models for modeling and simulation, or for building scenes for decision making.


Collada models from ENVI LiDAR

DSM and building vectors from LiDAR point cloud

I can also take advantage of IDL to build out custom tools that enable me to become a more efficient analyst. For example, I can create a tool to ingest a point cloud that gives me multiple products, where before I would have needed to create them individually. I also won’t have to rewrite my code over again when I’m ready to move to the enterprise thanks to the new ENVI task system.

Custom tool combining multiple tasks

ENVI is not only allowing analysts to respond to various problem sets more efficiently, but also enabling them to do things that just weren't possible a few years ago. Though the types of data and their exploitation are forever changing at a rapid pace, ENVI is leading the charge to remain an industry standard for analytics.


Categories: ENVI Blog | Imagery Speaks





Creating Animations and Videos in IDL

Author: Jim Pendleton

Oftentimes it is easier for us to perceive change in images if we animate the images in a video.

These comparisons could be between images of the same subject taken at different times, or images acquired contemporaneously but with different techniques or wavelengths.

We in the Harris Geospatial Solutions Custom Solutions Group frequently include video production and display tools in the applications and solutions we provide to our clients.

Let's say you've been monitoring for optical changes in the galactic cluster Abell 115, an image of which is located in your IDL examples/demo/demodata directory.

IDL> read_jpeg, filepath('abell115.jpg', subdir = ['examples','demo','demodata']), image1

IDL> i = image(image1)

Let's make a copy of the image, then alter a region of interest to simulate the discovery of a supernova.

IDL> image2 = image1

IDL> image2[*, 250:260, 250:260] = image2[*, 265:275, 350:360]

IDL> i.setdata, image2

The change is fairly small, but you should see it centered in the image.

To make the "discovery" more apparent, we can animate the changes.

One display technique involves a fade operation in which images dissolve into one another over time. Each pixel transitions over time from one color to another. Our human visual systems are drawn to areas of change in animations. Depending on the effect that's desired for the viewer, all pixels may transition equally and in parallel, or some may transition more slowly or rapidly than others.

First, select a number of individual "frames" over which the change will occur. The number of frames defines how smoothly the transition takes place between images. A value of 2, for example, would produce a blinking effect with no interpolated transition.

IDL> nframes = 32

Next, loop over the frames, weighting the displayed image by proportional contributions between the two input images at each time step.

IDL> for f = 1., nframes do i.setdata, byte(image2*(f/nframes) + image1*(1-f/nframes))

(For the purpose of code compactness in this example, I'm looping using a floating point value but in general this is not recommended.)
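The same linear cross-fade weighting is easy to sketch in Python with NumPy; the tiny arrays below are stand-ins for the Abell 115 images, not the actual data.

```python
import numpy as np

image1 = np.zeros((2, 2), dtype=np.uint8)      # "before" frame
image2 = np.full((2, 2), 200, dtype=np.uint8)  # "after" frame
nframes = 32

frames = []
for f in range(1, nframes + 1):
    w = f / nframes                            # contribution of image2
    blend = image2 * w + image1 * (1 - w)      # float intermediate
    frames.append(blend.astype(np.uint8))      # back to byte, as IDL's byte()
```

The last frame is pure image2, and intermediate frames interpolate pixel values between the two inputs, which is what produces the smooth dissolve.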

Next you may wish to share your "discovery" with your colleagues in the form of a video. The IDL class IDLffVideoWrite lets you create video output files in a number of different formats.

There are three basic steps you will use to create a video. Create an object that is associated with an output file, create a video stream with the appropriate frame dimensions and frame rate for display, then add each image frame to the stream.

Get the dimensions of the image and define a frame rate, in units of frames per second.

IDL> d = size(image1, /dimensions)

IDL> fps = 30

The division of the number of frames by the frame rate will determine the elapsed time of your animation.

Create a video object. In this example, I will write a file to my temporary directory.

IDL> a = filepath('abell115.avi', /tmp)

IDL> v = idlffvideowrite(a)

The IDLffVideoWrite object determines the default format for the video output according to the file name extension, in this case ".avi". See the online documentation for alternative formats.

Create a video stream within the video object, specifying the X and Y dimensions of each image frame, along with the frame rate, using the IDLffVideoWrite::AddVideoStream method.

IDL> vs = v.addvideostream(d[1], d[2], fps)

Rather than animating to the graphics windows, we'll loop over the frames writing data to the video stream instead using the IDLffVideoWrite::Put method.

IDL> for f = 1., nframes do !null = v.put(vs, byte(image2*(f/nframes) + image1*(1-f/nframes)))

There is no explicit call to close the action of writing frames to the file. Instead, we signal the completion of writing the data by simply destroying the object reference.

IDL> obj_destroy, v

The video file is now closed and available for sharing with our colleagues. To test the contents, simply SPAWN the path to the video file, assuming the file name extension is known to the shell.

IDL> spawn, a, /hide, /nowait

In order to make the video more interesting, you might consider adding an annotation such as TEXT or an ARROW to your IMAGE display, then copying the display data as your frame input rather than displaying only the raw data directly.  Additionally, see the documentation for the method IMAGE::CopyWindow, used below.

You can copy and paste the following to your IDL command line to see the result.

IDL> v = idlffvideowrite(a)

IDL> vs = v.addvideostream(d[1], d[2], fps)

IDL> i = image(image1, dimensions = d[1:2])

IDL> ar = arrow([200, 250], [200, 250], color='blue', /data, /current)

IDL> t = text(150, 180, 'Eureka!', color = 'green', /data, /current)

IDL> for f = 1., nframes do begin & $

IDL>   i.setdata, byte(image2*(f/nframes) + image1*(1-f/nframes)) & $

IDL>   !null = v.put(vs, i.copywindow()) & $

IDL> endfor

IDL> obj_destroy, v

IDL> spawn, a, /nowait, /hide



Categories: IDL Blog | IDL Data Point






Could satellites be the secret to detecting water leaks?

Author: Cherie Muleh

As a Channel Manager for Harris Geospatial, I look after our distributors in Australia, Latin America, and Southeast Asia. Distributors sell our products in their respective regions; they handle support and training, and also uncover new uses for our products. Esri Australia's Principal Consultant for Remote Sensing and Imagery, Dipak Paudyal, is driving new insights with data and has started investigating a new way that satellites could help detect water leaks.

Water utility companies routinely face millions of dollars in lost revenue from wasted water and leaks in their pipeline infrastructure. In a recent blog, Dipak explores whether satellite data and location analytics can help reduce water loss and enable utility companies to better identify cracks in their systems. As Dipak notes, “…water utilities that are willing to think outside of the box and investigate new technologies such as SAR imagery will be guaranteed to stay ahead of the game.”

Analyzing all of this data requires specialty tools like ENVI SARscape, which help users transform raw SAR data into easy-to-interpret images for further analysis. Check out Dipak's blog and let us know what you think: can we help the water industry better map their resources with this type of technology?





Minimizing Rounding Errors with IDL's TOTAL function

Author: Atle Borsholm

The idea for my blog topic this week came from a question from Ron Kneusel about the loss of precision using IDL’s TOTAL function. IDL functions are often implemented in a way that makes them run fast, since processing speed is important. This is certainly true for the TOTAL function. If you are not using any of the keywords and additional arguments, then it is basically adding to the sum in the input array order. The exception is if you have enough elements in the array to trigger the multi-threaded code path, which I will ignore for now.
The issue with loss of precision is therefore best seen if you start with the largest term and keep adding smaller and smaller terms. At some point, the small term gets so small compared to the large sum that the additional terms get lost in round off.
Here is an example of a sum that we know should approach 128 (with a deviation in the 52nd decimal place):
IDL> data = (1-1d/128) ^ lindgen(15000)
IDL> 128 - total(data)
So, we are getting an error in the 13th decimal place in this case. This array starts with the largest term and ends with the smallest term, so that is the worst case for the TOTAL implementation. If we reverse the order of the terms, we get:
IDL> 128 - total(reverse(data))
The error gets 10 times smaller in this case. I decided to compare the algorithm that Ron suggested, which is called Kahan sum, with a divide-and-conquer scheme that I have previously used for execution on a massively parallel GPU (which runs fast with 10000’s of independent threads). The Kahan sum algorithm can be found on Wikipedia and the IDL code is pretty short, (but very slow):
; Kahan algorithm, from Wikipedia
function KahanSum, data
  compile_opt idl2
  sum = 0d
  c = 0d
  for i=0, data.length-1 do begin
    y = data[i] - c
    t = sum + y
    c = (t - sum) - y
    sum = t
  endfor
  return, sum
end
The massively parallel algorithm will run much faster than the Kahan Sum, and the IDL code is listed here:
; Divide-and-conquer total;
; the algorithm lends itself to massively parallel execution
function total_mp, data
  compile_opt idl2, logical_predicate
  ; pad the array out to the next power-of-two length
  n = ishft(1ull, total(ishft(1ull, lindgen(63)) lt n_elements(data), /integer))
  pad = make_array(n, type=size(data,/type))
  pad[0] = data[*]
  ; repeatedly fold the upper half onto the lower half
  while n gt 1 do begin
    pad[0:n/2-1] += pad[n/2:n-1]
    n /= 2
  endwhile
  return, pad[0]
end
Here are the results with the decreasing-magnitude terms:
IDL> 128 - KahanSum(data)
IDL> 128 - total_mp(data)
So, in this case the error is similar to the best-case sorted input to TOTAL. As an independent test, I also tried randomly ordering the terms, and both KahanSum and TOTAL_MP are still consistent on the order of 5e-14:
IDL> order = sort(randomu(seed, data.length))
IDL> 128 - KahanSum(data[order])
IDL> 128 - total_mp(data[order])
IDL> 128 - total(data[order])
My conclusion is that the TOTAL_MP example is just as accurate as the Kahan sum, and has the added advantage of allowing for massively parallel execution, as opposed to the highly sequential execution needed for the Kahan algorithm.
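For readers without IDL handy, the same three strategies can be compared in a short Python sketch; this is a reimplementation of the ideas above, not Atle's code, using the same geometric series with an exact sum of essentially 128.

```python
def naive_sum(xs):
    # Sequential left-to-right accumulation, like TOTAL's default path
    s = 0.0
    for x in xs:
        s += x
    return s

def kahan_sum(xs):
    # Kahan compensated summation: carry the rounding error forward
    s = 0.0
    c = 0.0
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

def pairwise_sum(xs):
    # Divide-and-conquer (pairwise) summation, as in TOTAL_MP
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])

# Terms decrease geometrically; the exact sum differs from 128
# only in roughly the 52nd decimal place
data = [(1 - 1 / 128) ** k for k in range(15000)]
```

Comparing `128 - naive_sum(data)` against the Kahan and pairwise results reproduces the pattern in the post: the sequential sum loses roughly an extra decimal digit relative to the compensated and divide-and-conquer versions.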


Categories: IDL Blog | IDL Data Point
























© 2016 Exelis Visual Information Solutions, Inc., a subsidiary of Harris Corporation