Author: Austin Coates
Recently, I was given the chance to practice some spectroscopy, and in preparing for the project I realized that I did not have a simple way to visualize the variations in different absorption features between discrete wavelengths. The method that I elected to employ for this task is called continuum removal (Kokaly, Despain, Clark, & Livo, 2007). This method essentially normalizes the data so that different spectra can be compared with one another more easily.
To use the algorithm, you first select the region that you are interested in (for me this was between 550 nm and 700 nm, the region of my spectra that deals with chlorophyll absorption and pigment). Once the region is selected, a linear model is fit between the two endpoints; this line is called the continuum. The continuum is the hypothetical background absorption, artifact, or other absorption feature that acts as the baseline against which the target features are compared (Clark, 1999). Once the continuum has been set, the continuum removal process can be performed on all spectra in question using the following equation (Kokaly, Despain, Clark, & Livo, 2007):

Rc = Ro / RL

where Rc is the resulting continuum-removed spectrum, RL is the continuum line, and Ro is the original reflectance value.
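For readers who want to see the arithmetic outside of IDL, here is a small Python/NumPy sketch of the same computation. The wavelength grid and reflectance values below are invented for illustration; only the method (fit a line between the endpoints, then divide) comes from the post.

```python
import numpy as np

# Made-up wavelength grid (nm) and reflectance spectrum for illustration
wl = np.arange(550, 701, 10)                      # 550, 560, ..., 700
Ro = 0.5 - 0.3 * np.exp(-((wl - 625) / 30.0)**2)  # absorption dip near 625 nm

# Continuum: straight line between the reflectance values at the endpoints
RL = np.interp(wl, [wl[0], wl[-1]], [Ro[0], Ro[-1]])

# Continuum removal: Rc = Ro / RL
Rc = Ro / RL

print(round(Rc[0], 3), round(Rc[-1], 3))  # 1.0 1.0 at the endpoints
print(Rc.min() < 1.0)                     # True: the absorption feature dips below 1
```

By construction the continuum-removed spectrum equals 1.0 at the two endpoints, and the absorption feature shows up as values below 1, which is what makes spectra from different plants directly comparable.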
Figure 1: Original spectra of two healthy plants. The dotted line denotes the continuum line. The x-axis shows wavelength in nm and the y-axis represents reflectance.
Figure 2: The continuum removal for wavelengths 550 nm - 700 nm.
The resulting code gives you a tool that takes in two spectral libraries, each containing one spectrum, and returns two plots similar to what is shown in Figure 1 and Figure 2.
; Find the bounds for the feature
FB_left = 550
FB_right = 700

; Start the ENVI API (required for ENVISpectralLibrary)
e = envi(/headless)

; Open Spectra 1 (Spectra_File_1 / Spectra_File_2 are the paths
; to the two input spectral libraries)
oSLI1 = ENVISpectralLibrary(Spectra_File_1)

; Open Spectra 2
oSLI2 = ENVISpectralLibrary(Spectra_File_2)

; Get the single spectrum stored in each library
Spectra_Info_1 = oSLI1.GetSpectrum((oSLI1.SPECTRA_NAMES)[0])
Spectra_Info_2 = oSLI2.GetSpectrum((oSLI2.SPECTRA_NAMES)[0])
wl = Spectra_Info_1.WAVELENGTHS

; Create Bad Bands List (this removes some regions of the spectra
; associated with water vapor absorption)
bb_range = [[926,970],[1350,1432],[1796,1972],[2349,2500]]
bbl = fltarr(n_elements(wl)) + 1
dims = size(bb_range, /DIMENSIONS)
for i = 0, dims[1]-1 do begin
  p1 = (where(wl eq bb_range[0,i]))[0]
  p2 = (where(wl eq bb_range[1,i]))[0]
  bbl[p1:p2] = !VALUES.F_NAN
endfor

; Plot the original spectra from oSLI1 / oSLI2
p = plot(wl, Spectra_Info_1.spectrum*bbl, $
  xrange = [min(wl, /nan), max(wl, /nan)], $
  thick = 2, color = 'blue')
p = plot(wl, Spectra_Info_2.spectrum*bbl, $
  /overplot, thick = 2, color = 'green')

; Indices and reflectance values at the feature bounds
i1 = (where(wl eq FB_left))[0]
i2 = (where(wl eq FB_right))[0]
Spectra_1_y1 = Spectra_Info_1.spectrum[i1]
Spectra_1_y2 = Spectra_Info_1.spectrum[i2]
Spectra_2_y1 = Spectra_Info_2.spectrum[i1]
Spectra_2_y2 = Spectra_Info_2.spectrum[i2]

; Create the linear segment (the continuum line)
pl_1 = POLYLINE([FB_left,FB_right], [Spectra_1_y1, Spectra_1_y2], $
  /overplot, /data, thick = 2, LINESTYLE = '--')
pl_2 = POLYLINE([FB_left,FB_right], [Spectra_2_y1, Spectra_2_y2], $
  /overplot, /data, thick = 2, LINESTYLE = '--')

; The equation of each line (LINFIT returns [intercept, slope])
LF_1 = LINFIT([FB_left,FB_right], [Spectra_1_y1, Spectra_1_y2])
LF_2 = LINFIT([FB_left,FB_right], [Spectra_2_y1, Spectra_2_y2])

; The wavelength values between the lower and upper bounds
x_vals = wl[i1:i2]

; Compute the continuum line
RL_1 = LF_1[0] + LF_1[1]*x_vals
RL_2 = LF_2[0] + LF_2[1]*x_vals

; Perform Continuum Removal
Ro_1 = Spectra_Info_1.spectrum[i1:i2]
RC_1 = Ro_1 / RL_1
Ro_2 = Spectra_Info_2.spectrum[i1:i2]
RC_2 = Ro_2 / RL_2

; Plot the new Continuum Removal Spectra
pl_RC_1 = plot(x_vals, RC_1, color = 'Blue', $
  xrange = [min(x_vals, /NAN), max(x_vals, /NAN)])
pl_RC_2 = plot(x_vals, RC_2, color = 'Green', /overplot)
Kokaly, R. F., Despain, D. G., Clark, R. N., & Livo, K. E. (2007). Spectral analysis of absorption features for mapping vegetation cover and microbial communities in Yellowstone National Park using AVIRIS data.
Clark, R. N. (1999). Spectroscopy of rocks and minerals, and principles of spectroscopy. Manual of Remote Sensing, 3.
Categories: IDL Blog | IDL Data Point
Author: James Lewis
In most cases, as a geospatial analyst, I use ENVI and IDL to look at the world in a rather flat, two-dimensional way when it comes to data analytics. However, LiDAR, drones, advancements in photogrammetry, and temporal analytics within ENVI have given me the ability to analyze data in new ways that were previously too time-consuming, too difficult, or simply not available.
I can use tools like the photogrammetry suite to create dense point clouds over areas where it would be difficult to attain the same fidelity with airborne platforms, or even to fly over at all. As small satellites come online, they will give us coverage and revisit times unlike anything we've had before.
What will we do with all that data?
Utilizing the spatiotemporal tools in ENVI, we can gain
information from days, weeks, months, or years’ worth of collects that classic
change detection analytics would not support.
Point cloud from a drone collect | Temporal analysis
LiDAR data has become more available to analysts, and we need to be able to exploit it beyond looking at a bunch of points. Using the ENVI LiDAR tools, you can quickly and easily build robust elevation models to be used for everything from creating an ortho-mosaic to performing a line-of-sight analysis. We can also extract 3D models for modeling and simulation, or build scenes for decision making.
Collada models from ENVI LiDAR
DSM and building vectors from LiDAR point cloud
I can also take advantage of IDL to build custom tools that enable me to become a more efficient analyst. For example, I can create a tool that ingests a point cloud and gives me multiple products, where before I would have needed to create them individually. I also won't have to rewrite my code when I'm ready to move to the enterprise, thanks to the new ENVI task framework.
Custom tool combining multiple tasks
ENVI is not only allowing analysts to respond to various problem sets more efficiently; it enables analysts to do things that just weren't possible a few years ago. Though the types of data and their exploitation are changing at a rapid pace, ENVI is leading the charge to remain an industry standard for analytics.
Categories: ENVI Blog | Imagery Speaks
Author: Jim Pendleton
Oftentimes it is easier for us to perceive change in images if we animate the images in a video.
These comparisons could be between images of the same subject taken at different times, or images acquired contemporaneously but with different techniques or wavelengths.
We in the Harris Geospatial Solutions Custom Solutions Group frequently include video production and display tools in the applications and solutions we provide to our clients.
Let's say you've been monitoring for optical changes in the galactic cluster Abell 115, an image of which is located in your IDL examples/demo/demodata directory.
IDL> read_jpeg, filepath('abell115.jpg', subdir = ['examples','demo','demodata']), image1
IDL> i = image(image1)
Let's copy a region of interest from one place to another to simulate the discovery of a supernova. First make a copy of the original image to modify.
IDL> image2 = image1
IDL> image2[*, 250:260, 250:260] = image2[*, 265:275, 350:360]
IDL> i.setdata, image2
The change is fairly small, but you should see it centered in the image.
To make the "discovery" more apparent, we can animate the changes.
One display technique involves a fade operation in which images dissolve into one another over time. Each pixel transitions over time from one color to another. Our human visual systems are drawn to areas of change in animations. Depending on the effect that's desired for the viewer, all pixels may transition equally and in parallel, or some may transition more slowly or rapidly than others.
First, select a number of individual "frames" over which the change will occur. The number of frames defines how smoothly the transition takes place between images. A value of 2, for example, would produce a blinking effect with no interpolated transition.
IDL> nframes = 32
Next, loop over the frames, weighting the displayed image by proportional contributions between the two input images at each time step.
IDL> for f = 1., nframes do i.setdata, byte(image2*(f/nframes) + image1*(1-f/nframes))
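The weighting in that loop can also be sketched outside of IDL. Here is an illustrative Python/NumPy version of the same linear cross-fade; the two images and their sizes are synthetic stand-ins invented for the example:

```python
import numpy as np

def fade_frames(image1, image2, nframes):
    """Linearly blend image1 into image2 over nframes steps."""
    frames = []
    for f in range(1, nframes + 1):
        w = f / nframes  # weight of image2 grows from 1/nframes up to 1
        frame = (image2 * w + image1 * (1 - w)).astype(np.uint8)
        frames.append(frame)
    return frames

# Two synthetic 3-channel images: all-black fading to a uniform gray
image1 = np.zeros((3, 8, 8), dtype=np.uint8)
image2 = np.full((3, 8, 8), 200, dtype=np.uint8)

frames = fade_frames(image1, image2, nframes=32)
print(len(frames))          # 32
print(frames[-1][0, 0, 0])  # 200: the final frame equals image2
```

Because the weight reaches exactly 1 on the last iteration, the animation ends on the second image, just as the IDL loop above ends with `i.setdata` showing `image2`.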
(For the purpose of code compactness in this example, I'm looping using a floating point value but in general this is not recommended.)
Next you may wish to share your "discovery" with your colleagues in the form of a video. The IDL class IDLffVideoWrite lets you create video output files in a number of different formats.
There are three basic steps you will use to create a video. Create an object that is associated with an output file, create a video stream with the appropriate frame dimensions and frame rate for display, then add each image frame to the stream.
Get the dimensions of the image and define a frame rate, in units of frames per second.
IDL> d = size(image1, /dimensions)
IDL> fps = 30
Create a video object. In this example, I will write a file to my temporary directory.
IDL> a = filepath('abell115.avi', /tmp)
IDL> v = idlffvideowrite(a)
The IDLffVideoWrite object determines the default format for the video output according to the file name extension, in this case ".avi". See the online documentation for alternative formats.
Create a video stream within the video object, specifying the X and Y dimensions of each image frame, along with the frame rate, using the IDLffVideoWrite::AddVideoStream method. Since read_jpeg returns the image interleaved as [3, x, y], the dimensions we need are d[1] and d[2].
IDL> vs = v.addvideostream(d[1], d[2], fps)
Rather than animating to the graphics windows, we'll loop over the frames writing data to the video stream instead using the IDLffVideoWrite::Put method.
IDL> for f = 1., nframes do !null = v.put(vs, byte(image2*(f/nframes) + image1*(1-f/nframes)))
There is no explicit call to close the action of writing frames to the file. Instead, we signal the completion of writing the data by simply destroying the object reference.
IDL> obj_destroy, v
The video file is now closed and available for sharing with our colleagues. To test the contents, simply SPAWN the path to the video file, assuming the file name extension is known to the shell.
IDL> spawn, a, /hide, /nowait
In order to make the video more interesting, you might consider adding an annotation such as TEXT or an ARROW to your IMAGE display, then copying the display data as your frame input rather than using only the raw data directly. See the documentation for the IMAGE::CopyWindow method, which grabs the rendered window contents as an image array.
Author: Cherie Muleh
As a Channel Manager for Harris Geospatial, I look after our distributors in Australia, Latin America, and Southeast Asia. Distributors sell our products in their respective regions, handle support, training, and also uncover new uses for our products. Esri Australia's Principal Consultant Remote Sensing and Imagery, Dipak Paudyal, is driving new insights with data, and has started investigating a new way that satellites could help detect water leaks.
Water utility companies routinely face millions of dollars in lost revenue from wasted water and leaks in their pipeline infrastructure. In a recent blog, Dipak explores whether satellite data and location analytics can help reduce water loss and enable utility companies to better identify cracks in their systems. As Dipak notes, “…water utilities that are willing to think outside of the box and investigate new technologies such as SAR imagery will be guaranteed to stay ahead of the game.”
Analyzing all of this data requires specialty tools like ENVI SARscape to help users transform raw SAR data into easy-to-interpret images for further analysis. Check out Dipak's blog and let us know what you think. Can we help the water industry better map their resources with this type of technology?
Tags: SAR, ENVI SARscape, utilities