Just an analyst looking at the world

Author: James Lewis

In most cases, as a geospatial analyst, I use ENVI and IDL to look at the world in a rather flat, two-dimensional way when it comes to data analytics. However, LiDAR, drones, advancements in photogrammetry, and temporal analytics within ENVI have given me the ability to analyze data in ways that were previously too time consuming, too difficult, or simply not available.

I can use tools like the photogrammetry suite to create dense point clouds over areas where it would be difficult to attain the same fidelity with airborne platforms, or even to fly over at all. As small satellites come online, they will give us coverage and revisit times unlike anything we’ve had before!

What will we do with all that data?  

Utilizing the spatiotemporal tools in ENVI, we can gain information from days, weeks, months, or years’ worth of collects that classic change detection analytics would not support.

Point cloud drone collect | Temporal analysis


LiDAR data has become more available to analysts, and we need to be able to exploit it beyond looking at a bunch of points. Using the ENVI LiDAR tools, you can quickly and easily build robust elevation models to be used for everything from creating an ortho-mosaic to performing a line-of-sight analysis. We can also extract 3D models for modeling and simulation, or build scenes for decision making.


Collada models from ENVI LiDAR

DSM and building vectors from LiDAR point cloud

I can also take advantage of IDL to build custom tools that make me a more efficient analyst. For example, I can create a tool that ingests a point cloud and produces multiple products at once, where before I would have needed to create them individually. Thanks to the new ENVI task system, I also won’t have to rewrite my code when I’m ready to move to the enterprise.

Custom tool combining multiple tasks

ENVI not only allows analysts to respond to various problem sets more efficiently, it also enables them to do things that just weren’t possible a few years ago. Though the types of data, and the ways we exploit them, are changing at a rapid pace, ENVI is leading the charge to remain an industry standard for analytics.


Categories: ENVI Blog | Imagery Speaks





Creating Animations and Videos in IDL

Author: Jim Pendleton

Oftentimes it is easier for us to perceive change in images if we animate the images in a video.

These comparisons could be between images of the same subject taken at different times, or images acquired contemporaneously but with different techniques or wavelengths.

We in the Harris Geospatial Solutions Custom Solutions Group frequently include video production and display tools in the applications and solutions we provide to our clients.

Let's say you've been monitoring for optical changes in the galactic cluster Abell 115, an image of which is located in your IDL examples/demo/demodata directory.

IDL> read_jpeg, filepath('abell115.jpg', subdir = ['examples','demo','demodata']), image1

IDL> i = image(image1)

Let's copy a region of interest from one place to another in a duplicate of the image to simulate the discovery of a supernova. We first make a copy so image1 stays unchanged.

IDL> image2 = image1

IDL> image2[*, 250:260, 250:260] = image2[*, 265:275, 350:360]

IDL> i.setdata, image2

The change is fairly small, but you should see it centered in the image.

To make the "discovery" more apparent, we can animate the changes.

One display technique involves a fade operation in which images dissolve into one another over time. Each pixel transitions over time from one color to another. Our human visual systems are drawn to areas of change in animations. Depending on the effect that's desired for the viewer, all pixels may transition equally and in parallel, or some may transition more slowly or rapidly than others.

First, select a number of individual "frames" over which the change will occur. The number of frames defines how smoothly the transition takes place between images. A value of 2, for example, would produce a blinking effect with no interpolated transition.

IDL> nframes = 32

Next, loop over the frames, weighting the displayed image by proportional contributions between the two input images at each time step.

IDL> for f = 1., nframes do i.setdata, byte(image2*(f/nframes) + image1*(1-f/nframes))

(For the purpose of code compactness in this example, I'm looping using a floating point value but in general this is not recommended.)
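The weighting logic itself is language-agnostic; here is a rough Python sketch of the same linear cross-fade, using tiny made-up pixel lists rather than the Abell 115 data:

```python
# Linear cross-fade: frame f (1..nframes) weights the two images by
# f/nframes and 1 - f/nframes, mirroring the IDL loop above.
def fade_frames(image1, image2, nframes):
    frames = []
    for f in range(1, nframes + 1):
        w = f / nframes  # contribution of image2 grows toward 1
        frames.append([b * w + a * (1 - w) for a, b in zip(image1, image2)])
    return frames

# Tiny made-up "images" (flat pixel lists, not real data):
img_a = [0.0, 10.0, 20.0]
img_b = [100.0, 110.0, 120.0]
frames = fade_frames(img_a, img_b, 4)
# frames[1] (f = 2) is the 50/50 blend; frames[-1] equals img_b exactly.
```

With nframes = 2 the loop produces only the halfway blend and the final image, which is the blinking effect described above.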

Next you may wish to share your "discovery" with your colleagues in the form of a video. The IDL class IDLffVideoWrite lets you create video output files in a number of different formats.

There are three basic steps to creating a video: create an object that is associated with an output file, create a video stream with the appropriate frame dimensions and frame rate for display, then add each image frame to the stream.

Get the dimensions of the image and define a frame rate, in units of frames per second.

IDL> d = size(image1, /dimensions)

IDL> fps = 30

The number of frames divided by the frame rate determines the elapsed time of your animation (for example, 32 frames at 30 frames per second yields just over one second of video).

Create a video object. In this example, I will write a file to my temporary directory.

IDL> a = filepath('abell115.avi', /tmp)

IDL> v = idlffvideowrite(a)

The IDLffVideoWrite object determines the default format for the video output according to the file name extension, in this case ".avi". See the online documentation for alternative formats.

Create a video stream within the video object, specifying the X and Y dimensions of each image frame, along with the frame rate, using the IDLffVideoWrite::AddVideoStream method.

IDL> vs = v.addvideostream(d[1], d[2], fps)

Rather than animating to the graphics windows, we'll loop over the frames writing data to the video stream instead using the IDLffVideoWrite::Put method.

IDL> for f = 1., nframes do !null = v.put(vs, byte(image2*(f/nframes) + image1*(1-f/nframes)))

There is no explicit call to close the action of writing frames to the file. Instead, we signal the completion of writing the data by simply destroying the object reference.

IDL> obj_destroy, v

The video file is now closed and available for sharing with our colleagues. To test the contents, simply SPAWN the path to the video file, assuming the file name extension is known to the shell.

IDL> spawn, a, /hide, /nowait

In order to make the video more interesting, you might consider adding an annotation such as TEXT or an ARROW to your IMAGE display, then copying the display data as your frame input rather than displaying only the raw data directly.  Additionally, see the documentation for the method IMAGE::CopyWindow, used below.

You can copy and paste the following to your IDL command line to see the result.

IDL> v = idlffvideowrite(a)

IDL> vs = v.addvideostream(d[1], d[2], fps)

IDL> i = image(image1, dimensions = d[1:2])

IDL> ar = arrow([200, 250], [200, 250], color='blue', /data, /current)

IDL> t = text(150, 180, 'Eureka!', color = 'green', /data, /current)

IDL> for f = 1., nframes do begin & $

IDL>   i.setdata, byte(image2*(f/nframes) + image1*(1-f/nframes)) & $

IDL>   !null = v.put(vs, i.copywindow()) & $

IDL> endfor

IDL> obj_destroy, v

IDL> spawn, a, /nowait, /hide

The video file resulting from these steps can be found here.


Categories: IDL Blog | IDL Data Point





Could satellites be the secret to detecting water leaks?

Author: Cherie Muleh

As a Channel Manager for Harris Geospatial, I look after our distributors in Australia, Latin America, and Southeast Asia. Distributors sell our products in their respective regions, handle support and training, and also uncover new uses for our products. Esri Australia's Principal Consultant for Remote Sensing and Imagery, Dipak Paudyal, is driving new insights with data, and has started investigating a new way that satellites could help detect water leaks.

Water utility companies routinely lose millions of dollars in revenue to wasted water and leaks in their pipeline infrastructure. In a recent blog, Dipak explores whether satellite data and location analytics can help reduce water loss and enable utility companies to better identify cracks in their systems. As Dipak notes, “…water utilities that are willing to think outside of the box and investigate new technologies such as SAR imagery will be guaranteed to stay ahead of the game.”

Analyzing all of this data requires specialty tools like ENVI SARscape, which help users transform raw SAR data into easy-to-interpret images for further analysis. Check out Dipak's blog and let us know what you think: can we help the water industry better map their resources with this type of technology?





Minimizing Rounding Errors with IDL's TOTAL function

Author: Atle Borsholm

The idea for my blog topic this week came from a question from Ron Kneusel about the loss of precision using IDL’s TOTAL function. IDL functions are often implemented in a way that makes them run fast, since processing speed is important. This is certainly true for the TOTAL function. If you are not using any of the keywords and additional arguments, then it is basically adding to the sum in the input array order. The exception is if you have enough elements in the array to trigger the multi-threaded code path, which I will ignore for now.
The issue with loss of precision is therefore best seen if you start with the largest term and keep adding smaller and smaller terms. At some point, the small term gets so small compared to the large sum that the additional terms get lost in round off.
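A quick illustration of that absorption effect, sketched here in Python with values of my own choosing (IEEE doubles behave the same way in IDL):

```python
# ulp(1e16) is 2 in IEEE double precision, so adding 1.0 to a sum that
# has already reached 1e16 changes nothing:
big = 1.0e16
lost = (big + 1.0) - big        # the 1.0 is absorbed by rounding

# Adding the small terms together *before* they meet the large sum
# preserves them, since 1e16 + 2 is exactly representable:
kept = ((1.0 + 1.0) + big) - big
```

This is exactly why summation order matters: small terms added to an already-large running total can vanish entirely.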
Here is an example of a sum that we know should approach 128 (with a deviation around the 52nd decimal place):
IDL> data = (1-1d/128) ^ lindgen(15000)
IDL> 128 - total(data)
So, we are getting an error in the 13th decimal place in this case. This array starts with the largest term and ends with the smallest term, so that is the worst case for the TOTAL implementation. If we reverse the order of the terms, we get:
IDL> 128 - total(reverse(data))
The error gets ten times smaller in this case. I decided to compare the algorithm that Ron suggested, the Kahan sum, with a divide-and-conquer scheme that I have previously used for execution on a massively parallel GPU (which runs fast with tens of thousands of independent threads). The Kahan sum algorithm can be found on Wikipedia, and the IDL code is pretty short (but very slow):
; Kahan compensated summation, from Wikipedia
function KahanSum, data
  compile_opt idl2
  sum = 0d
  c = 0d                      ; running compensation for lost low-order bits
  for i=0, data.length-1 do begin
    y = data[i] - c
    t = sum + y
    c = (t - sum) - y         ; recover what was rounded away
    sum = t
  endfor
  return, sum
end
The massively parallel algorithm will run much faster than the Kahan Sum, and the IDL code is listed here:
; Divide-and-conquer total;
; the algorithm lends itself to massively parallel execution
function total_mp, data
  compile_opt idl2, logical_predicate
  ; n = smallest power of 2 that holds the input
  n = ishft(1ull, total(ishft(1ull, lindgen(63)) lt n_elements(data), /integer))
  pad = make_array(n, type=size(data, /type))
  pad[0] = data[*]
  while n gt 1 do begin
    pad[0:n/2-1] += pad[n/2:n-1]   ; fold the upper half onto the lower half
    n /= 2
  endwhile
  return, pad[0]
end
Here are the results with the decreasing-magnitude terms:
IDL> 128 - KahanSum(data)
IDL> 128 - total_mp(data)
So, in this case the error is similar to the best-case sorted input to TOTAL. As an independent test, I also tried randomly ordering the terms (using, for example, order = sort(randomu(seed, data.length))), and both KahanSum and TOTAL_MP are still consistent on the order of 5e-14:
IDL> 128 - KahanSum(data[order])
IDL> 128 - total_mp(data[order])
IDL> 128 - total(data[order])
My conclusion is that the TOTAL_MP example is just as accurate as the Kahan sum, and has the added advantage of allowing for massively parallel execution, as opposed to the highly sequential execution needed for the Kahan algorithm.
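For readers who want to reproduce the comparison outside IDL, here is a hedged Python sketch of both algorithms on the same geometric series; math.fsum serves as the exact reference (that reference choice is mine, not from the article):

```python
import math

def kahan_sum(data):
    # Compensated (Kahan) summation: c carries the low-order bits
    # that plain addition would round away at each step.
    total = 0.0
    c = 0.0
    for x in data:
        y = x - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

def pairwise_sum(data):
    # Divide and conquer, like the TOTAL_MP example: sum each half
    # recursively, so rounding errors grow with log2(n) rather than n.
    n = len(data)
    if n == 1:
        return data[0]
    mid = n // 2
    return pairwise_sum(data[:mid]) + pairwise_sum(data[mid:])

# Same decreasing geometric series as the IDL example; it approaches 128.
data = [(1 - 1 / 128) ** k for k in range(15000)]
exact = math.fsum(data)        # exact double-precision reference

naive = 0.0                    # plain sequential sum, largest terms first
for x in data:
    naive += x

naive_err = abs(naive - exact)
kahan_err = abs(kahan_sum(data) - exact)
pairwise_err = abs(pairwise_sum(data) - exact)
```

On my runs the compensated and pairwise errors land well below the naive sequential error, consistent with the IDL results above.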


Categories: IDL Blog | IDL Data Point





Choosing the right type of imagery for your project

Author: Jon Coursey

Satellite imagery is used for a wide range of projects, but oftentimes it is tough to figure out just which type of data is needed. In the world of remote sensing there are many different satellite sensors, all with different resolutions and multispectral capabilities. In this blog we will explore some basics of remote sensing, what is available, and common applications of the data.

Remote Sensing 101

As great as a natural color image can look, it is not collected in the same way as, say, the camera on your iPhone collects a photo. The process of collecting imagery is most commonly referred to as remote sensing, which basically means objects on Earth are detected and classified without direct contact. This blog will focus on passive sensing, since this is what is used for natural color and multispectral images (radar, by contrast, is an active technique). Passive sensors gather information via radiation that is either emitted or reflected by whatever object the satellite is trying to capture. This data is captured in whatever spectral bands a sensor can acquire (generally red, green, blue, and near-infrared), then sent down to a ground station for processing. From there, the natural color or multispectral image you might have ordered is put together.

A wide variety of sensors and band combinations are available, and the best options will depend on what you are trying to view, and the size of your project area. Below are a few situations one might face when deciding how to go about selecting the most suitable captures.

What kind of imagery do I need for a base map?

This is generally our most common request. Engineering and architecture firms, among others, will order images to view ground conditions before building on their project areas. Essentially, they just need an accurate, high-resolution picture. In these cases, natural color is the best option.

How much resolution do I need?

This will come down to how large your project area is, and which features you need to view with quality. Say your project requires an image of California. For this, you wouldn’t want as high a resolution as if you needed an image of, say, an airport. That would be an extreme amount of detail for such a large area, and would result in enormous file sizes. Additionally, higher-resolution sensors typically capture smaller areas, so many captures would be required to fill such an area. And if full coverage even existed, it would be from images accumulated over the course of many months or years. Lower-resolution options will give a better overview of an area of this size, and generally use fewer images to complete the coverage.

One great option is to obtain high-resolution imagery of a project site, and lower resolution for the general area. In the images below you will see captures from both the RapidEye and GeoEye sensors. RapidEye captures at 5-meter resolution (the lower of the two), and GeoEye captures at 0.5-meter resolution. Both show coverage of the islands of Hawaii. As you can see, RapidEye still has great detail for a larger city area, but GeoEye has the ability to see specific features within such a city.

Figure 1: GeoEye image (0.5 meter resolution) | Figure 2: RapidEye image (5 meter resolution)


When would I need the multispectral bands?

The most common uses for these bands are measuring crop health and performing vegetative analysis. Most sensors available at MapMart are equipped with four bands: red, green, blue, and near-infrared (RGB & NIR). An image made from these four bands within the ENVI software environment is shown below. For crop health, one measures relative reflectance along the wavelengths of these bands. When crops are not healthy, one will notice decreased reflectance, mainly in the near-infrared reflectance plateau, but also in the blue and green regions of the spectrum.
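The article stops short of naming an index, but the standard way to turn that red/NIR contrast into a single crop-health number is the Normalized Difference Vegetation Index (NDVI). A minimal Python sketch, with made-up reflectance values that are illustrative only:

```python
# NDVI = (NIR - Red) / (NIR + Red); healthy vegetation reflects strongly
# in the near-infrared and absorbs red, pushing the index toward 1.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Hypothetical reflectance values for illustration only:
healthy = ndvi(nir=0.50, red=0.08)   # strong NIR plateau -> high NDVI
stressed = ndvi(nir=0.30, red=0.15)  # reduced NIR reflectance -> lower NDVI
```

Because the decreased NIR reflectance of stressed crops drives the index down, mapping NDVI per pixel highlights unhealthy areas of a field.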

Multispectral bands can also be used to detect different characteristics on the earth’s surface. Soil, plants, asphalt, etc. all have different spectral signatures, which would be apparent through these multispectral bands. Depending on how many bands you have available, and where they are along the light spectrum, you can start to see more differences within each surface. For example, an asphalt rooftop may appear as only one color with the standard RGB & NIR bands available. If you were working with shortwave infrared (or SWIR) as well, you would start to see different colors within your image. This is because the differences in the components of asphalt are now being detected.

While many applications of this type of data exist, we have covered the most common ones we are contacted about at MapMart. Fortunately, with such a large archive of satellite imagery already in existence, and the opportunity to have more collected, a solution can be found for even the most unique geospatial project. If you need data, or help figuring out what type to use for a project, contact us at MapMart.com.


Categories: ENVI Blog | Imagery Speaks




© 2017 Exelis Visual Information Solutions, Inc., a subsidiary of Harris Corporation