Getting Electric with ENVI and IDL

Author: Zachary Norman

A few days ago we were lucky to have a pretty awesome lightning storm east of where I live (near Boulder, CO). The storm was far enough east that we weren't getting any rain, but the distance also provided a great view of the amazing lightning show going on in the clouds. Here is an animation of some pictures I took of the storm:

From my vantage point you could see tons of lightning (one or two strikes each second), so I grabbed my camera and snapped a few pictures. Those pictures then became the data for this blog post. Because there was so much lightning, I wondered whether I could take advantage of ENVI's analytics to go through my images and detect the lightning in them.

This turned out to be pretty easy to code up with three tasks and a raster series. To find the lightning, I used anomaly detection, which works really well for finding features that stand out from their surroundings (similar to feature extraction). That was perfect for this scenario, because lightning is bright compared to the clouds behind it. Once you find your anomalies, you just convert the anomalous raster to a classification raster and, optionally, perform classification cleanup. The only cleanup I performed was classification sieving, to throw out single pixels that were flagged as anomalies. In the end, the tasks I used were:

- RXAnomalyDetection
- PercentThresholdClassification
- ClassificationSieving
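For anyone curious how that detect → threshold → sieve chain behaves, here is a rough stand-in sketched in Python with NumPy and SciPy. Note that this is a toy local-contrast score, not ENVI's actual RX algorithm, and the function name and default parameters are made up for illustration:

```python
import numpy as np
from scipy import ndimage

def find_bright_anomalies(image, kernel=15, top_fraction=0.05, min_size=40):
    """Toy stand-in for anomaly detection + percent threshold + sieving."""
    img = image.astype(float)
    # local mean and std via box filters (a crude local background model)
    mean = ndimage.uniform_filter(img, kernel)
    sq_mean = ndimage.uniform_filter(img * img, kernel)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 1e-12))
    score = (img - mean) / std
    # keep only the top fraction of anomaly scores
    thresh = np.percentile(score, 100 * (1 - top_fraction))
    mask = score > thresh
    # "sieve": drop connected regions smaller than min_size pixels
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))

# tiny example: a bright streak on a dark background
img = np.zeros((100, 100))
img[40:90, 50] = 255   # a 50-pixel "lightning" streak
img[5, 5] = 255        # a single-pixel noise spike
mask = find_bright_anomalies(img)
print(mask[60, 50], mask[5, 5])  # streak kept, lone pixel sieved out
```

The sieving step is what keeps isolated noise pixels from being reported as lightning, mirroring the MINIMUM_SIZE setting used with ClassificationSieving below.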
To make the results easier to visualize, I took one additional step and combined the images from the two raster series into a single raster series. I did this by taking the data from each raster (the original image and the lightning pixels) and producing one raster containing both datasets. This lets you view the original data and the processed data side by side, so you can see where the lightning should be in each image.
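Conceptually, that combination step is just array stacking: put the original frame in one half of a taller array and the RGB classification in the other. A minimal NumPy sketch of the idea (toy data, not the ENVI API):

```python
import numpy as np

rows, cols = 4, 6
rng = np.random.default_rng(0)
original = rng.integers(0, 256, (rows, cols, 3), dtype=np.uint8)
lightning_rgb = np.zeros((rows, cols, 3), dtype=np.uint8)
lightning_rgb[1, 2] = (0, 255, 255)   # one detected "lightning" pixel, in cyan

# stack the two frames into a single image, original on top
combined = np.concatenate([original, lightning_rgb], axis=0)
print(combined.shape)   # (8, 6, 3)
```

The IDL code further below does the same thing by pre-allocating an array twice the height of one frame and filling each half.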

After I had that extra piece of code, it just came down to running it, which took a bit of time but produced some cool output. Here is the lightning detector's final product:

From the animation, you can see it does a pretty good job of finding the lightning. If the clouds behind the lightning are very bright, the lightning isn't always found, but the detector works great when the clouds are dark.

In case anyone wants to take a stab at this on their own, below is the IDL code I used to generate the raster series behind the images above. To give it a whirl with your own data, you just need to create a raster series ahead of time, which becomes the INPUT_RASTER_SERIES for the find_lightning procedure. There are three procedures in total. The first, run_find_lightning, drives the other two; to use it, just replace 'C:\path\to\raster\series' with the path to your raster series.

pro run_find_lightning
  compile_opt idl2
  ;start ENVI if it hasn't opened already
  e = envi(/current)
  if (e eq !NULL) then begin
    e = envi(/headless)
  endif
  ;check if we have opened our raster series yet
  if ISA(series, 'ENVIRASTERSERIES') then begin
    ;return to the first raster if opened already
    series.First
  endif else begin
    series = ENVIRasterSeries('C:\path\to\raster\series')
  endelse
  ;find where the lightning is in the images
  find_lightning,$
    INPUT_RASTER_SERIES = series,$
    OUTPUT_RASTER_SERIES = output_raster_series
  ;return both raster series to their first raster in case they aren't already
  series.First
  output_raster_series.First
  ;combine our raster series images together to produce a mega raster series!
  combine_series,$
    INPUT_SERIES_1 = series,$
    INPUT_SERIES_2 = output_raster_series,$
    OUTPUT_COMBINED_SERIES = output_combined_series
end

There are two more procedures needed by run_find_lightning. Here is the first, find_lightning.

;find lightning!
pro find_lightning, $
  INPUT_RASTER_SERIES = input_raster_series,$
  OUTPUT_RASTER_SERIES = output_raster_series
  compile_opt idl2
  ;get current session of ENVI, start UI if not open
  e = envi(/current)
  if (e eq !NULL) then begin
    e = envi()
  endif
  ;create list to hold our lightning raster files
  lightning_raster_uris = list()
  ;iterate over each raster in the series
  nrasters = input_raster_series.COUNT
  foreach raster, input_raster_series, count do begin
    ;perform anomaly detection
    AnomTask = ENVITask('RXAnomalyDetection')
    AnomTask.INPUT_RASTER = raster
    AnomTask.MEAN_CALCULATION_METHOD = 'local'
    AnomTask.KERNEL_SIZE = 15
    AnomTask.execute
    ;open output
    anom_raster = e.openraster(AnomTask.OUTPUT_RASTER_URI)
    ;threshold anomalies to a classification
    ThreshTask = ENVITask('PercentThresholdClassification')
    ThreshTask.INPUT_RASTER = anom_raster
    ThreshTask.THRESHOLD_PERCENT = .05
    ThreshTask.execute
    ;open output
    thresh_raster = e.openraster(ThreshTask.OUTPUT_RASTER_URI)
    ;sieve the results
    SieveTask = ENVITask('ClassificationSieving')
    SieveTask.INPUT_RASTER = thresh_raster
    SieveTask.MINIMUM_SIZE = 40
    SieveTask.execute
    ;open output
    sieve_raster = e.openraster(SieveTask.OUTPUT_RASTER_URI)
    ;save result
    lightning_raster_uris.add, sieve_raster.URI
    ;close intermediate rasters
    anom_raster.close
    thresh_raster.close
    ;print some info about how many images we have processed
    print, string(9b) + 'Completed lightning finder on ' + strtrim(count+1,2) + ' of ' + strtrim(nrasters,2) + ' rasters'
  endforeach
  ;convert lightning raster uris to an array
  lightning_raster_uris = lightning_raster_uris.toarray()
  ;create a raster series
  SeriesTask = ENVITask('BuildRasterSeries')
  SeriesTask.INPUT_RASTER_URI = lightning_raster_uris
  SeriesTask.execute
  ;open the output raster series
  output_raster_series = ENVIRasterSeries(SeriesTask.OUTPUT_RASTERSERIES_URI)
end

Here is the last procedure, combine_series, which combines the two raster series into one image.

pro combine_series,$
  INPUT_SERIES_1 = input_series_1,$
  INPUT_SERIES_2 = input_series_2,$
  OUTPUT_COMBINED_SERIES = output_combined_series
  compile_opt idl2
  ;get current session of ENVI, start UI if not open
  e = envi(/current)
  if (e eq !NULL) then begin
    e = envi()
  endif
  ;save combined image URIs
  combined_uris = list()

  ;combine the images together into one MEGA series
  nrasters = input_series_1.COUNT
  for i=0, nrasters-1 do begin
    ;get the current raster from each series
    image_raster = input_series_1.RASTER
    lightning_raster = input_series_2.RASTER
    lightning_meta = lightning_raster.METADATA.dehydrate()
    lightning_colors = lightning_meta['CLASS LOOKUP']

    ;pre-allocate a byte array to hold the data from each raster
    if (i eq 0) then begin
      dims = [image_raster.NCOLUMNS, image_raster.NROWS]
      data = make_array(dims[0], 2*dims[1], 3, TYPE=1, VALUE=0)
    endif

    ;get image data
    image_data = image_raster.GetData(INTERLEAVE='BSQ')

    ;get lightning data
    lightning_data = lightning_raster.GetData()
    ;convert lightning data to RGB with the class lookup table
    temp = reform(lightning_colors[*, lightning_data], 3, dims[0], dims[1])
    ;change interleave
    lightning_data = transpose(temp, [1, 2, 0])

    ;fill the preallocated array with data
    data[*,0:dims[1]-1,*] = image_data
    data[*,dims[1]:*,*] = lightning_data

    ;make a new raster and save it
    outraster = ENVIRaster(data)
    outraster.save

    ;save the output raster URI
    combined_uris.add, outraster.URI

    ;step to the next rasters
    input_series_1.Next
    input_series_2.Next
    ;print some info about how many images we have combined
    print, string(9b) + 'Combined ' + strtrim(i+1,2) + ' of ' + strtrim(nrasters,2) + ' total rasters'
  endfor

  ;convert list to array
  combined_uris = combined_uris.toarray()

  ;create another raster series with our combined images
  SeriesTask = ENVITask('BuildRasterSeries')
  SeriesTask.INPUT_RASTER_URI = combined_uris
  SeriesTask.execute

  ;open the output raster series
  output_combined_series = ENVIRasterSeries(SeriesTask.OUTPUT_RASTERSERIES_URI)
end


Happy lightning hunting!


Categories: ENVI Blog | Imagery Speaks





TWI Finish Line

Author: Guss Wright

Hello Everyone,

This is officially my last blog as an Army Training with Industry (TWI) Student, although you will certainly hear from me again. Can you believe a year has passed already? When I came on board at Harris Geospatial last August, the goal was to facilitate mutually improved understanding, strengthen the partnership, and better learn ENVI to ultimately enhance the US Army's combat effectiveness. With all that has been learned, shared, and documented in the last 12 months, I think we've accomplished what we set out to do and more. I was made to feel as though I was a part of the Harris Geospatial team. To reciprocate this hospitality, a few Army Challenge Coins have been passed out this week. If you are a Soldier, then you know what that means. Harris is a part of the team, so when you see a member at the upcoming ENVI Analytics Symposium or any other conference or encounter, challenge them to show you their coin or beverages are on them; just kidding about the beverages!


With respect to the past twelve months, I'd say it has been an absolute marathon of learning. When I first arrived, my experience with ENVI was novice at best. I had successfully implemented solutions such as anomaly detection and change detection during tours in Operation Iraqi Freedom and Operation New Dawn. However, like many other Defense & Intelligence users, I was still heavily reliant on other software suites to perform certain workflows, such as mosaicking, orthorectification, and producing specialized Compressed Arc-Digitized Raster Graphics (CADRG). This wasn't because ENVI couldn't accomplish these tasks, but rather because Soldiers like me just didn't know how to do them using ENVI.

I’m confident enough to now say that this knowledge gap has been bridged for the D&I community with the help of the ENVI Pocket Guides, VOLUME 1 | BASICS and the recently finished VOLUME 2 | INTERMEDIATE.

Volume 1 provides succinct instructions on how to perform the following tasks using ENVI:

1. Understand the Interface
2. Mosaic data
3. Subset data
4. Orthorectify data
5. Pan Sharpen data
6. Perform change detection
7. Perform anomaly detection
8. Produce a terrain categorization (TERCAT)
9. Export data to CADRG

Volume 2 builds on the basics by providing succinct steps on how to perform the following tasks using ENVI and IDL:

1. Add Grid reference & count features
2. Perform Band Math
3. Layer Stack images
4. Exploit Shortwave Infrared (SWIR) bands
5. Perform Spectral Analysis in general
6. Perform Image Calibration/Atmospheric Correction
7. Extract features from LiDAR
8. Batch Processing using IDL

Keep an eye on this blog for a hyperlink to VOLUME 2 of the Pocket Guide. It’s currently being formatted and printed.

TWI has been an honor and a privilege. I strongly recommend continuation of this program by both Harris Geospatial and the Army. I can certainly say that Army Technicians and Noncommissioned Officers yearn for development opportunities like this. There is absolutely no way I could have learned enough to compile the Pocket Guides in any other setting. Again, it has been a marathon, but I'd do it again in a heartbeat. Thanks for the hospitality and opportunity, from the bottom of my heart, to the good folks at ENVI.

Chief Wright~ Out for Now.


Categories: ENVI Blog | Imagery Speaks





Ordering from the Marketplace Just Got Better

Author: Jon Coursey

In a blog post I wrote a few months ago, I introduced the Instant Satellite Imagery Portal section of the Harris Geospatial Marketplace. This service offers a faster, less expensive alternative to purchasing DigitalGlobe imagery directly from a sales representative. However, at the time, it could still take up to 48 hours for the image to be delivered following purchase. This is no longer the case. Thanks to some savvy engineering by our technical team, the wait time for imagery download has been reduced to just a few minutes.

Oftentimes, someone will contact us with a request for imagery but have a deadline that would normally be impossible to meet. Even when ordering directly from a provider, rush delivery options can still mean up to a 48-hour delivery time. This service was created with these types of requests in mind. Someone new to purchasing imagery may not know the typical delivery times for satellite captures. Additionally, the need for an image may only be realized toward the very end of a project, with too little time left to actually receive one. Whatever the case, our Instant Satellite Imagery Portal aims to offer a solution for such issues.

While in the middle of automating the Portal, it occurred to our technical team that “hey, this might just work for other products”. This idea recently came to fruition, with many of our other datasets becoming immediately downloadable following purchase. Currently, the following products have been included:

- NAIP Imagery
- CityOrtho Imagery
- GeoOrtho Imagery
- SRTM Elevation Data
- NED Elevation Data
- Harris Gap Filled Elevation Data

These datasets can be ordered via the same process on the Portal. For our Aerial Imagery and Digital Elevation Model sections, you basically have access to the immediate download option for anything with a price listed.

Although only a few products have been set up for this service so far, we are working on making this available for any data that does not require custom processing. This will make the Harris Geospatial Marketplace one of the quickest and easiest sources for downloadable data. Vector data and topographic maps will be available very soon, with hopes of adding more sources for satellite imagery and elevation data by the end of 2016. For any project facing a deadline that requires some sort of geospatial data set, the Marketplace will soon be the solution.



Categories: ENVI Blog | Imagery Speaks





Crop Counting on Mars!?

Author: Zachary Norman

Believe it or not, there is a new market for precision agriculture on our big red neighbor. However, this is not your conventional agriculture analysis, but rather an innovative use of the crop counter tool in the Precision Ag Toolkit.

Our crop of choice is not alive, but rather the lifeless scars on the surface of Mars. Because Mars has such a thin atmosphere, about 0.6% of Earth's mean sea-level atmospheric pressure, smaller meteorites don't necessarily burn up in the atmosphere like they do in our night sky. Instead, the surface of Mars is covered in small, round craters, which are ideal objects to find with the crop counter tool. These craters range in size from a few meters in diameter to kilometers across. For this case, I'm focusing on craters that are about 40-50 meters in diameter.

Step One: Getting the Data

The coolest thing about all of the international space missions is that most of the data collected is easy to find and freely available. For Mars, I decided to use a DEM (Digital Elevation Model) derived from stereo pairs captured by the HiRISE (High Resolution Imaging Science Experiment) sensor on board the Mars Reconnaissance Orbiter. HiRISE takes images with up to 0.3 meter spatial resolution, which allows for lots of detail in the captured images. The resulting DEM I used had a spatial resolution of 0.99 meters per pixel. The exact DEM I decided to go with can be found here.

Here is a screenshot of a portion of the image that I used to count craters:

I chose this image because it had lots of uniform, round craters. After exploring the craters a bit, I decided to look for craters between 35 and 45 meters in size. Before I could count the craters, I first needed to figure out how I wanted to count them.

Step Two: Preprocessing to get the Data Analysis Ready

For the crop counter, the best way to get accurate results is to preprocess your data and do some thresholding. First, thresholding helps prevent false positives from being counted. Second, it drastically speeds up finding crop centers: when pixels are NaNs, the crop counter skips them, which decreases processing time. For crater counting, this preprocessing step is just a matter of making the craters pop out of the image.

After some thinking, I decided that to isolate the crater pits themselves I would use some smoothing, band math, and then thresholding. Here are the exact steps I took to count craters:

1) Smooth the data with a kernel that is twice the size of the largest crater we are looking for.

The kernel size for smoothing the data was 91 pixels because we were looking for craters between 35 and 45 meters in diameter: at 0.99 meters per pixel, twice the largest diameter (2 × 45 m) works out to roughly 91 pixels. Below you can see the smoothed DEM with a similar color table as above:

2) Take the difference between the smoothed data set and the actual DEM. 

With the smoothed DEM, the crater features we are looking for are pretty much erased. This means that taking the difference between the smoothed image and the original image returns the approximate crater depth. Because the crop counting tool works with bright images, we can use this crater-depth image with the plant counter after one more step. Below you will see the difference between the smoothed image and the DEM. The green areas have a higher value (height in meters) and the red areas have a lower value.

3) Threshold our crater depth image

This step is very important for reducing false positives when counting craters, because the difference image is quite noisy in some places. It eliminates smaller craters and noise by requiring the crater depth to be greater than 'x' meters. I used 0.75 meters, which really isolates the craters I was looking for. To illustrate, the screenshot below shows the original DEM with the thresholded difference image on top. At this point, we have isolated most of the craters, with a few spots of noise here and there.
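For readers without ENVI, the three preprocessing steps above (smooth, difference, threshold to NaN) can be sketched in Python with NumPy/SciPy. The function name and toy DEM below are illustrative, not the HiRISE data:

```python
import numpy as np
from scipy import ndimage

def crater_depth_mask(dem, kernel=91, min_depth=0.75):
    """Smooth the DEM, difference it, and NaN out shallow pixels."""
    # step 1: smooth with a kernel about twice the largest crater diameter
    smoothed = ndimage.uniform_filter(dem.astype(float), kernel)
    # step 2: smoothed minus original is positive where the surface dips
    depth = smoothed - dem
    # step 3: threshold; NaN pixels are skipped by the counter downstream
    depth[depth < min_depth] = np.nan
    return depth

# toy DEM: a flat plain with one 2 m deep, ~40-pixel-wide crater
dem = np.full((300, 300), 100.0)
yy, xx = np.mgrid[:300, :300]
dem[(yy - 150) ** 2 + (xx - 150) ** 2 < 20 ** 2] -= 2.0
depth = crater_depth_mask(dem)
print(np.isfinite(depth[150, 150]), np.isfinite(depth[10, 10]))
```

The crater center survives the threshold (its depth relative to the smoothed surface exceeds 0.75 m) while the flat plain becomes NaN, which is exactly the behavior that keeps the crop counter fast and clean.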

Step Three: Using the Crop Counter!

Now that we have preprocessed our data and have a great image to count craters in, it's time to use the crop counter. Here are the settings I used in the crop counter to find the crater locations:

CropCountTask = ENVITask('AgCropCount')
CropCountTask.INPUT_RASTER = ThresholdRaster
CropCountTask.WIDTH_IN_METERS = 0
CropCountTask.PERCENT_OVERLAP = .01
CropCountTask.SMOOTH_DATA = 1
CropCountTask.SEARCH_WIDTH = [35,45,1]
CropCountTask.MAX_TILE_SIZE = [400,400]
CropCountTask.execute

After running the task, here is what our results look like. This image is a grayscale version of the DEM with the crop locations shown as cyan circles.

As you can see, the crop counting tool does a pretty good job of finding the actual crater locations!


Categories: ENVI Blog | Imagery Speaks





Coming soon...EAS

Author: Stuart Blundell

With the first day of Summer 2016 in the books, my thoughts are turning toward the ENVI Analytics Symposium (EAS) and our shared community goals for the geospatial analytics marketplace. As a remote sensing scientist, I know that the 2016 EAS will deliver a vigorous and lively debate on new sensors, collection platforms, cloud technology, data science, and algorithms. We will have that dialogue with world-class research scientists and industry thought leaders discussing how geospatial technology (such as spatial-temporal analytics and Small Sat platforms) is meeting the growing challenges of National Security, Global Climate Change, Health Analytics, and Precision Agriculture.

Of particular interest will be an in-depth discussion, led by Dr. Michael Steinmayer of SulSoft, on de-forestation monitoring in the Amazon rain forest using airborne SAR and spectral data.  I am also honored to announce that former NGA Director Robert B. Murrett, a professor on the faculty of the Maxwell School of Citizenship and Public Affairs at Syracuse University, and Deputy Director of the Institute for National Security and Counterterrorism (INSCT), will once again join me on the EAS stage as a moderator for our panel discussions.

This year's EAS will also explore the business drivers behind the growth in geo-solutions across a wide spectrum of consumer applications. Our track on Cloud Platforms and Geospatial Markets will feature a diverse group of companies that are investing in online marketplaces and leveraging cloud technology. Industry leaders such as DigitalGlobe, Airbus Defence and Space, Cloud EO, Harris Geospatial and Amazon will discuss their approaches, successes, and challenges in getting to the next big growth ring: commercial customers that want answers to business problems. We are adding emphasis to this commercial business theme by including an Industry Solutions track featuring innovative companies, such as exactEarth, FLoT Systems and Highland Agriculture, who will discuss their analytical approaches to the maritime, utility, and agricultural markets.

Our line-up of workshops has something for everyone including Deep Learning approaches for geospatial data, an introduction to Geiger-mode LIDAR, SAR Lunch and Learn, Small UAS data processing and many other topics of interest.  Be sure to take a look at the workshops and agenda topics for the event.  I hope you will join us this August 23rd and 24th in Boulder, Colorado, for the next chapter of the ENVI Analytics Symposium!


Categories: ENVI Blog | Imagery Speaks





















© 2016 Exelis Visual Information Solutions, Inc., a subsidiary of Harris Corporation