Author: James Lewis
As a geospatial analyst, I mostly use ENVI and IDL to
look at the world in a rather flat, two-dimensional way when it comes to
data analytics. However, LiDAR, drones, advancements in photogrammetry, and
temporal analytics within ENVI have given me the ability to analyze data in
ways that were previously too time consuming, too difficult, or simply not available.
I can use tools like the photogrammetry suite to create
dense point clouds over areas where it would be difficult to attain the same
fidelity with airborne platforms, or where those platforms could not fly at all. As small satellites come
online, they will give us coverage and revisit times unlike anything we've had before.
What will we do with all that data?
Utilizing the spatiotemporal tools in ENVI, we can extract
information from days, weeks, months, or years' worth of collects that classic
change detection analytics would not support.
Point cloud from a drone collect
Temporal analysis
LiDAR data has become more available to analysts, and we need
to be able to exploit it beyond looking at a bunch of points. Using the ENVI
LiDAR tools, you can quickly and easily build robust elevation models to be
used for everything from creating an ortho-mosaic to running a line-of-sight analysis.
We can also extract 3D models for modeling and simulation, or build scenes
for decision making.
Collada models from ENVI LiDAR
DSM and building vectors from LiDAR point cloud
I can also take advantage of IDL to build custom tools
that make me a more efficient analyst. For example, I can create a
tool that ingests a point cloud and gives me multiple products, where before I would
have needed to create them individually. I also won't have to rewrite my code
when I'm ready to move to the enterprise, thanks to the new ENVI task framework.
Custom tool combining multiple tasks
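As a rough sketch of that pattern, chaining ENVI tasks so that one point-cloud input yields several products might look something like the following. The task and parameter names here are placeholders, not the actual API; check the ENVI Task reference for the exact names available in your ENVI version.

```
; Sketch only: run two ENVITasks back to back so a single point-cloud
; input produces multiple products in one pass. Task and parameter
; names below are illustrative placeholders.
e = ENVI(/HEADLESS)

; First, a hypothetical task that turns the point cloud into a DSM
dsmTask = ENVITask('PointCloudToDSM')        ; placeholder task name
dsmTask.INPUT_FILE = 'collect.las'           ; hypothetical input file
dsmTask.Execute

; Then feed that task's output into a second product, e.g. a hillshade
hsTask = ENVITask('GenerateHillshade')       ; placeholder task name
hsTask.INPUT_RASTER = e.OpenRaster(dsmTask.OUTPUT_FILE)
hsTask.Execute
```

Because each step is a task object with named parameters, the same chain runs unchanged whether it is launched from the desktop or from an enterprise environment.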
ENVI not only allows analysts to respond to various
problem sets more efficiently, it also enables them to do things
that just weren't possible a few years ago. Though the types of data and the ways
we exploit them change at a rapid pace, ENVI is leading the charge to
remain an industry standard for analytics.
Categories: ENVI Blog | Imagery Speaks
Author: Jim Pendleton
Oftentimes it is easier for us to perceive change in images if we animate the images in a video.
These comparisons could be between images of the same subject taken at different times, or images acquired contemporaneously but with different techniques or wavelengths.
We in the Harris Geospatial Solutions Custom Solutions Group frequently include video production and display tools in the applications and solutions we provide to our clients.
Let's say you've been monitoring for optical changes in the galactic cluster Abell 115, an image of which is located in your IDL examples/demo/demodata directory.
IDL> read_jpeg, filepath('abell115.jpg', subdir = ['examples','demo','demodata']), image1
IDL> i = image(image1)
Let's copy a region of interest from one place to another to simulate the discovery of a supernova.
IDL> image2 = image1
IDL> image2[*, 250:260, 250:260] = image2[*, 265:275, 350:360]
IDL> i.setdata, image2
The change is fairly small, but you should see it centered in the image.
To make the "discovery" more apparent, we can animate the changes.
One display technique involves a fade operation in which images dissolve into one another over time. Each pixel transitions over time from one color to another. Our human visual systems are drawn to areas of change in animations. Depending on the effect that's desired for the viewer, all pixels may transition equally and in parallel, or some may transition more slowly or rapidly than others.
First, select a number of individual "frames" over which the change will occur. The number of frames defines how smoothly the transition takes place between images. A value of 2, for example, would produce a blinking effect with no interpolated transition.
IDL> nframes = 32
Next, loop over the frames, weighting the displayed image by proportional contributions between the two input images at each time step.
IDL> for f = 1., nframes do i.setdata, byte(image2*(f/nframes) + image1*(1-f/nframes))
(For the purpose of code compactness in this example, I'm looping using a floating point value but in general this is not recommended.)
Next you may wish to share your "discovery" with your colleagues in the form of a video. The IDL class IDLffVideoWrite lets you create video output files in a number of different formats.
There are three basic steps you will use to create a video. Create an object that is associated with an output file, create a video stream with the appropriate frame dimensions and frame rate for display, then add each image frame to the stream.
Get the dimensions of the image and define a frame rate, in units of frames per second.
IDL> d = size(image1, /dimensions)
IDL> fps = 30
Create a video object. In this example, I will write a file to my temporary directory.
IDL> a = filepath('abell115.avi', /tmp)
IDL> v = idlffvideowrite(a)
The IDLffVideoWrite object determines the default format for the video output according to the file name extension, in this case ".avi". See the online documentation for alternative formats.
Create a video stream within the video object, specifying the X and Y dimensions of each image frame, along with the frame rate, using the IDLffVideoWrite::AddVideoStream method.
IDL> vs = v.addvideostream(d[1], d[2], fps)
Rather than animating to the graphics windows, we'll loop over the frames writing data to the video stream instead using the IDLffVideoWrite::Put method.
IDL> for f = 1., nframes do !null = v.put(vs, byte(image2*(f/nframes) + image1*(1-f/nframes)))
There is no explicit call to close the action of writing frames to the file. Instead, we signal the completion of writing the data by simply destroying the object reference.
IDL> obj_destroy, v
The video file is now closed and available for sharing with our colleagues. To test the contents, simply SPAWN the path to the video file, assuming the file name extension is known to the shell.
IDL> spawn, a, /hide, /nowait
In order to make the video more interesting, you might consider adding an annotation such as TEXT or an ARROW to your IMAGE display, then copying the display data as your frame input rather than displaying only the raw data directly. Additionally, see the documentation for the method IMAGE::CopyWindow, used below.
You can copy and paste the following to your IDL command line to see the result.
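One possible version of that snippet, reusing image1, image2, nframes, fps, and the image object i from the steps above, and writing to a new video file since the earlier one has been closed. The annotation text, its position, and the output file name are all illustrative.

```
; Annotate the display, then capture the window contents for each frame
t = text(0.5, 0.92, 'Supernova candidate', /normal, alignment=0.5, color='yellow')

; Size the video stream from the window snapshot, not the raw image,
; since CopyWindow returns the window's dimensions
frame = i.copywindow()                       ; [3, wx, wy] snapshot
fdims = size(frame, /dimensions)
v2 = idlffvideowrite(filepath('abell115_annotated.avi', /tmp))
vs2 = v2.addvideostream(fdims[1], fdims[2], fps)

; Fade between the images, writing each annotated window grab as a frame
for f = 1., nframes do begin
   i.setdata, byte(image2*(f/nframes) + image1*(1 - f/nframes))
   !null = v2.put(vs2, i.copywindow())
endfor
obj_destroy, v2
```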
The video file resulting from these steps can be found here.
Categories: IDL Blog | IDL Data Point
Author: Cherie Muleh
As a Channel Manager for Harris Geospatial, I look after our distributors in Australia, Latin America, and Southeast Asia. Distributors sell our products in their respective regions, handle support and training, and also uncover new uses for our products. Esri Australia's Principal Consultant for Remote Sensing and Imagery, Dipak Paudyal, is driving new insights with data, and has started investigating a new way that satellites could help detect water leaks.
Water utility companies routinely lose millions of dollars in revenue to wasted water and leaks in their pipeline infrastructure. In a recent blog, Dipak explores whether satellite data and location analytics can help prevent water loss and enable utility companies to better identify cracks in their systems. As Dipak notes, “…water utilities that are willing to think outside of the box and investigate new technologies such as SAR imagery will be guaranteed to stay ahead of the game.”
Analyzing all of this data requires specialty tools like ENVI SARscape, which help users transform raw SAR data into easy-to-interpret images for further analysis. Check out Dipak's blog and let us know what you think: can we help the water industry better map its resources with this type of technology?
Tags: SAR, ENVI SARscape, utilities
Author: Atle Borsholm
Author: Jon Coursey
Satellite imagery is used for a wide range of projects, but
oftentimes it is tough to figure out just which type of data is needed. In the
world of remote sensing, there are many different satellite sensors, all with
different resolutions and multispectral capabilities. In this blog, we will
explore some basics of remote sensing, what is available, and common
applications of the data.
Remote Sensing 101
As great as a natural color image can look, it is not
collected in the same way as, say, the camera on your iPhone. The process of
collecting imagery is most commonly referred to as remote sensing, which
basically means objects on Earth are detected and classified without direct
contact. This blog will focus on passive sensing, as this is what is used for
natural color and multispectral images (as opposed to active sensors such as radar). Passive sensors gather information
via radiation that is either emitted or reflected by whatever object the
satellite is trying to capture. This data is captured with whatever
spectral bands a sensor can acquire (generally red, green, blue, and near
infrared), then sent down to a ground station for processing. From here, the
natural color or multispectral image you might have ordered is put together.
A wide variety of sensors and band combinations are
available, and the best options will depend on what you are trying to view, and
the size of your project area. Below are a few situations one might face when
deciding how to go about selecting the most suitable captures.
What kind of imagery do I need for a base map?
This is generally our most common request. Firms in engineering,
architecture, and similar fields order images to view ground conditions before
building on their project areas. Essentially, they just need an accurate,
high resolution picture. In these cases, natural color is the best option.
How much resolution do I need?
This will come down to how large your project area is,
and what you need to be able to view with quality. Say your project requires
you to obtain an image of California. For this, you wouldn't want as high a
resolution as if you needed an image of, say, an airport. That would be an
extreme amount of detail for such a large area, and would result in enormous
file sizes: at 0.5 meter resolution, California's roughly 424,000 square
kilometers works out to on the order of 1.7 trillion pixels.
Additionally, higher resolution sensors typically capture smaller
areas, so many captures would be required to fill such an area. And if full
coverage even existed, it would be from images accumulated over the course of
many months or years. Lower resolution options will give a better overview of
an area of this size, and generally require fewer images to complete the coverage.
One great option is to obtain
high resolution imagery of the project site itself, and lower resolution imagery of the general
area. The images below show captures from both the RapidEye and GeoEye
sensors. RapidEye captures at 5 meter resolution (the lower of the two), and GeoEye
captures at 0.5 meter resolution. Both show coverage of the Hawaiian islands. As
you can see, RapidEye still has great detail for a larger city area, but GeoEye
can resolve specific features within such a city.
Figure 1: GeoEye image (0.5 meter resolution)
Figure 2: RapidEye image (5 meter resolution)
When would I need the multispectral bands?
The most common uses for these are measuring crop health and
performing vegetation analysis. Most sensors available at MapMart are equipped
with four bands: red, green, blue, and near-infrared (or RGB &
NIR). An image made from these four bands within the ENVI software environment
is shown below. For crop health, one measures relative reflectance along the
wavelengths of these bands. When crops are not healthy, one will notice a
decreased reflectance, mainly in the near-infrared reflectance plateau, but
also in the blue and green regions of the spectrum.
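One concrete way to quantify the reflectance behavior described above is the normalized difference vegetation index (NDVI), which needs only the red and near-infrared bands. In this IDL sketch, red_band and nir_band are hypothetical arrays standing in for bands read from your own multispectral file:

```
; Convert to floating point so the ratio is not truncated by integer math
red = float(red_band)
nir = float(nir_band)

; NDVI = (NIR - Red) / (NIR + Red); the > 1e-6 guards against divide-by-zero
ndvi = (nir - red) / ((nir + red) > 1e-6)

; Healthy, dense vegetation tends toward high NDVI values;
; stressed crops and bare soil trend lower
```

NDVI values range from -1 to 1, so the same stretch and color table can be reused across dates, which is what makes it useful for tracking crop health over a season.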
Multispectral bands can also be used to detect different
characteristics on the Earth's surface. Soil, plants, asphalt, etc. all have
different spectral signatures, which become apparent through these multispectral
bands. Depending on how many bands you have available, and where they fall along
the spectrum, you can start to see more differences within each surface.
For example, an asphalt rooftop may appear as only one color with the standard
RGB & NIR bands available. If you were working with shortwave infrared (or
SWIR) as well, you would start to see different colors within your image. This
is because the differences in the components of the asphalt are now being detected.
While many applications of this type of data exist, we have
covered the most common ones we are contacted about at MapMart. Fortunately, with
such a large archive of satellite imagery already in existence, and the
opportunity to have more collected, a solution can be found for even the most unique
geospatial project. If you need data, or help figuring out what type to
use for a project, contact us at MapMart.com.