24 Jun 2016

Crop Counting on Mars!?

Author: Zachary Norman

Believe it or not, there is a new market for precision agriculture on our big red neighbor. However, this is not your conventional agriculture analysis, but rather an innovative use of the crop counter tool in the Precision Ag Toolkit.

Our crop of choice is not alive, but rather the lifeless scars on the surface of Mars. Because Mars has such a thin atmosphere, about 0.6% of Earth's mean sea-level atmospheric pressure, smaller meteorites don't necessarily burn up in the atmosphere like they do in our night sky. Instead, the surface of Mars is covered in small, round craters, which make ideal objects to find with the crop counter tool. These craters range from a few meters to kilometers in diameter. For this case, I'm focusing on craters that are about 40-50 meters in diameter.



Step One: Getting the Data

The coolest thing about all of the international space missions is that most of the data collected is easy to find and freely available. For Mars, I decided to use a DEM (Digital Elevation Model) derived from stereo pairs collected by the HiRISE (High Resolution Imaging Science Experiment) sensor on board the Mars Reconnaissance Orbiter. HiRISE takes images with up to 0.3 meter spatial resolution, which allows for lots of detail in the images that are captured. The resulting DEM I used had a spatial resolution of 0.99 meters per pixel. The exact DEM I decided to go with can be found here.

Here is a screenshot of a portion of the image that I used to count craters:


I chose this image because it has lots of uniform, round craters. After exploring the craters a bit, I decided to look for ones between 35 and 45 meters in diameter. Before I could count the craters, I first needed to figure out how I wanted to count them.



Step Two: Preprocessing to Get the Data Ready for Analysis

For the crop counter, the best way to get accurate results is to preprocess your data and do some thresholding. The first reason is that thresholding helps prevent false positives from being counted. The second is that thresholding drastically increases the speed of finding crop centers: when pixels are NaNs, the crop counter skips them, which decreases processing time. For crater counting, this preprocessing step becomes a matter of how to get the craters to pop out of the image.

After some thinking, I decided that to isolate the crater pits themselves I would use some smoothing, band math, and then thresholding. Here are the exact steps I took to count craters:

1) Smooth the data with a kernel that is twice the size of the largest crater we are looking for.

The kernel size for smoothing the data was 91 pixels: at roughly one meter per pixel, twice the 45 meter maximum crater diameter is 90 pixels, rounded up to the next odd number for the smoothing kernel. Below you can see the smoothed DEM with a similar color table as above:
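As a rough illustration of this step, here is a minimal IDL sketch, assuming the DEM has already been read into a floating-point array named dem (the file name and variable names are my own, not from the original workflow):

; Read the DEM into a floating-point array (path is hypothetical).
dem = float(read_tiff('mars_dem.tif'))

; Boxcar smoothing with a 91-pixel kernel: twice the largest
; crater diameter we want (45 m at ~1 m/pixel), rounded up to odd.
smoothed = smooth(dem, 91, /EDGE_TRUNCATE, /NAN)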

2) Take the difference between the smoothed data set and the actual DEM. 

With the smoothed DEM, the crater features we are looking for should be pretty much erased. This means that taking the difference between our smoothed image and our original image returns the approximate crater depth. Because the crop counting tool works with bright images, we can use the crater depth image with the plant counter after one more step is applied. Below you will see the difference between the DEM and the smoothed image. The green areas have a higher value (height in meters) and the red areas have a lower value.
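Continuing the sketch from above, the crater depth image is simply the smoothed DEM minus the original:

; Crater pits sit below the smoothed surface, so this
; difference gives the approximate crater depth in meters.
depth = smoothed - dem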

3) Threshold our crater depth image

This step is very important for reducing false positives when counting craters, because our difference image is quite noisy in some places. It eliminates smaller craters and noise in our image by requiring the crater depth to be greater than 'x' meters. I used 0.75 meters for this step, which really isolates the craters I was looking for. To illustrate how this helps, see the image below: a screenshot of the original DEM with the thresholded difference image overlaid. At this point you can see that we have isolated most of the craters, with a few spots of noise here and there.
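In the same sketch, the threshold can be applied by setting every pixel shallower than 0.75 meters to NaN, which is exactly what lets the crop counter skip those pixels:

; Keep only pixels at least 0.75 m deep; everything else
; becomes NaN so the crop counter ignores it.
craters = depth
idx = where(depth lt 0.75, count)
if (count gt 0) then craters[idx] = !values.f_nan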

Step Three: Using the Crop Counter!

Now that we have preprocessed our data and have a great image to count craters in, it's time to use the crop counter. Here are the settings I used in the crop counter to find the crater locations:

; Assumes an ENVI session is running and that ThresholdRaster
; holds the thresholded crater-depth raster from Step Two.
CropCountTask = ENVITask('AgCropCount')
CropCountTask.INPUT_RASTER = ThresholdRaster
CropCountTask.WIDTH_IN_METERS = 0
CropCountTask.PERCENT_OVERLAP = 0.01
CropCountTask.SMOOTH_DATA = 1
CropCountTask.SEARCH_WIDTH = [35, 45, 1]   ; crater diameters of 35-45 m
CropCountTask.MAX_TILE_SIZE = [400, 400]
CropCountTask.Execute


After running the task, here is what the results look like. This image is a grayscale version of the DEM with the detected crater locations shown as cyan circles.

As you can see, the crop counting tool does a pretty good job of finding the actual crater locations!


21 Jun 2016

Coming soon...EAS

Author: Stuart Blundell

With the first day of Summer 2016 in the books, my thoughts are turning towards the ENVI Analytics Symposium (EAS) and our shared community goals regarding the geospatial analytics marketplace. As a remote sensing scientist, I know that the 2016 EAS will deliver a vigorous and lively debate on new sensors, collection platforms, cloud technology, data science and algorithms. We will have that dialogue with world-class research scientists and industry thought leaders discussing how geospatial technology (such as spatial-temporal analytics and Small Sat platforms) is meeting the growing challenges of National Security, Global Climate Change, Health Analytics and Precision Agriculture.

Of particular interest will be an in-depth discussion, led by Dr. Michael Steinmayer of SulSoft, on de-forestation monitoring in the Amazon rain forest using airborne SAR and spectral data.  I am also honored to announce that former NGA Director Robert B. Murrett, a professor on the faculty of the Maxwell School of Citizenship and Public Affairs at Syracuse University, and Deputy Director of the Institute for National Security and Counterterrorism (INSCT), will once again join me on the EAS stage as a moderator for our panel discussions.

This year’s EAS will also explore the business drivers behind the growth in geo-solutions across a wide spectrum of consumer applications. Our track on Cloud Platforms and Geospatial Markets will feature a diverse group of companies that are investing in online marketplaces and leveraging cloud technology. Industry leaders such as DigitalGlobe, Airbus Defence and Space, Cloud EO, Harris Geospatial and Amazon will discuss their approaches, successes and challenges in getting to the next big growth ring: commercial customers that want answers to business problems. We are adding emphasis to this commercial business theme by including an Industry Solutions track featuring innovative companies, such as exactEarth, FLoT Systems and Highland Agriculture, who will discuss their analytical approaches to the maritime, utility and agricultural markets.

Our line-up of workshops has something for everyone, including Deep Learning approaches for geospatial data, an introduction to Geiger-mode LiDAR, a SAR Lunch and Learn, Small UAS data processing and many other topics of interest. Be sure to take a look at the workshops and agenda topics for the event. I hope you will join us this August 23rd and 24th in Boulder, Colorado, for the next chapter of the ENVI Analytics Symposium!


16 Jun 2016

I love it when a plan comes together….

Author: Amanda O'Connor

In the last few weeks, I’ve been working with a lot of precision agriculture data of ALL kinds. No two projects are the same, and if I revealed anything about them, well, I’d have to kill you. Except for one: our Airbus DS co-marketing activities, those I can discuss. I recently delivered a webinar on using Pleiades imagery with our ENVI Toolkit for Precision Agriculture. The webinar focused on plant counting and on using USDA Cropland Data Layers to find fields of interest. Sky Rubin of Airbus presented as well, and we’ll be showing demos of this at the ESRI UC, so be sure to find us (or we’ll find you).

Anyway, when I first got my hands on the plant counter it was an earlier version, and, being new to it, I was perhaps not keying in the best parameters, as seen below. But I was still getting 95% accuracy on a grape vineyard. The plant centers aren’t quite perfect, but still decent enough to get a count.

Enter the new and improved 1.0 version—I dare you to find a missed plant. Austin Coates, one of our consultants and Plant Finder auteur, tightened up a few things to get these results.

Then Zach Norman, sales engineer extraordinaire, asked: what if we could pull an average vegetation index value for each plant? Drops mic.
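To give a flavor of that idea, here is a minimal IDL sketch that pulls the mean NDVI in a small window around each plant center. Everything here, including the variable names, the 3x3 window, and the assumption that plant centers are already in pixel coordinates, is my own illustration rather than the toolkit's API:

; ndvi: a 2-D NDVI array; centers: a [2, n] array of plant
; pixel coordinates (both hypothetical inputs).
n = (size(centers, /DIMENSIONS))[1]
avg_ndvi = fltarr(n)
for i = 0, n - 1 do begin
  x = round(centers[0, i])
  y = round(centers[1, i])
  ; Mean NDVI in a 3x3 window; no bounds checking for brevity.
  avg_ndvi[i] = mean(ndvi[x-1:x+1, y-1:y+1], /NAN)
endfor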

When you do things like this and people see your work, you realize the value of being in a collaborative environment: you can feed off of each other, motivate each other to ask questions, and find different ways of doing things. Here we go from OK counts, to better counts, to water/fertilize/do something to that grape plant right now! In boutique crops like viticulture, where the value of each plant is so high, a view like this can be incredibly useful for plant monitoring. For a lower-value crop like corn, more regional information is fine and useful because management occurs at the field level rather than the plant level.

And the ENVI Toolkit for Precision Agriculture is available. It’s a library that runs with ENVI+IDL or ENVI Services Engine. Cost is $3,500. You can use it yourself or our services team can integrate it into your existing cloud or ArcGIS for Server environment. Drop me a line with questions.

Follow me @asoconnor


9 Jun 2016

Accessing Features Only Available to 32-bit IDL from 64-bit IDL

Author: Jim Pendleton

Not all functionality available to IDL and ENVI in 32-bit mode is available in 64-bit mode, and vice versa.

There are multiple tables in our on-line documentation that list support for various platforms.

If you're on a 64-bit platform, you have the option of launching IDL in either 32- or 64-bit mode. But that doesn't really solve the problem.

For example, let's say you have a main application that executes in 64-bit IDL, but you want to have access to data in DXF-format files. If you attempt to create an instance of an IDLffDXF object that parses this file format, you'll get an error:

IDL> heart = obj_new('idlffdxf', filepath('heart.dxf', subdir = ['examples', 'data']))
% OBJ_NEW: Dynamically loadable module is unavailable on this platform: DXF.
% Execution halted at: $MAIN$          

We could fire up a second command line or Workbench session of IDL in 32-bit to parse the file, but a more convenient method, and the way we would want to implement a solution within a compiled routine, is through an IDL_IDLBridge object. There's a special keyword named OPS (technically, "out-of-process server") which allows us to set whether the bridge process should run in 32- or 64-bit mode.

Here, we'll start a 32-bit IDL process from our 64-bit IDL session.

IDL> b = idl_idlbridge(ops = 32)
% Loaded DLM: IDL_IDLBRIDGE.

Obviously, if you're on a 32-bit platform (still?!) you cannot simply create a 64-bit process via the magic of an IDL keyword.

We can construct a command to be executed in our 32-bit process to read the data.

IDL> command = "heart = obj_new('idlffdxf', filepath('heart.dxf', subdir = ['examples','data']))"
IDL> b->execute, command

Now we can proceed with an example from the IDLffDXF::GetEntity method's documentation, transferring the data back to our main process for display.

IDL> b->execute,  "heartTypes = heart->getcontents()"
IDL> b->execute, "tissue = heart->getentity(heartTypes[1])"
IDL> b->execute, "connectivity = *tissue.connectivity"
IDL> b->execute, "vertices = *tissue.vertices"
IDL> vertices = b.getvar('vertices')
IDL> connectivity = b.getvar('connectivity')

Now that we have local copies in our 64-bit process of the vertices and connectivity list data from the 32-bit process, we can display the result.

IDL> poly = idlgrpolygon(vertices, poly = connectivity, style = 2, color = !color.red)
IDL> xobjview, poly
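One last housekeeping step worth adding (not shown in the original example): when the 32-bit child process is no longer needed, destroy the bridge object to shut it down.

IDL> obj_destroy, b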


7 Jun 2016

Fusing Point-Cloud Data With Imagery

Author: Adam O'Connor

By now we've all seen the power of multi-sensor data fusion to facilitate situational awareness, enhancing our ability to understand and interpret a specific environment. Taking the most valuable components of disparate data sources and fusing them together can enrich contextual analysis and help us make better decisions based on the meaningful information extracted from the fused data. When working with geospatial data such as LiDAR point clouds and high-resolution imagery, a relatively simple yet powerful technique is to use the georeferencing spatial reference metadata to encode each 3D point with the corresponding image pixel values based on data geopositioning. This enables more realistic 3D visualization of the point-cloud data, since the points can be displayed using colors derived from an alternate raster data source.

Fortunately, the LAS format specification provides the ability to store RGB color information for every point stored in a *.las file. However, a LiDAR data collection project does not always include cotemporal image acquisition, so the process of coloring a point cloud may need to be executed at a later time using raster data from a variety of sensors (e.g. EO/IR, SAR, etc.). For example, some of the Elevation Source Data (3DEP) available for download from The National Map does not include RGB color information, so it can be beneficial to also download the corresponding High-Resolution Orthoimagery (HRO) and then fuse the two datasets together.

With this in mind, we have been working diligently on a new "Color Point Cloud" tool (and corresponding programmatic API task) within the upcoming ENVI 6.0 software version planned for release later this year. The new "Color Point Cloud" tool+task will allow users to process 3D point-cloud data along with any geographically overlapping raster dataset to generate a new output LAS 1.2 format *.las file that is RGB-encoded with pixel values from user-selected image bands. This new processing capability also allows the user to decide how to handle points that fall outside the spatial extent of the raster imagery, either by removing them from the generated output *.las result or by simply coloring them all black (RGB=0,0,0):


Screenshot of ENVI's new "Color Point Cloud" Tool
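Since ENVI 6.0 isn't released yet, the exact programmatic interface may change, but based on the description above a task invocation might look something like this sketch. The task name ('ColorPointCloud') and all parameter names here are my assumptions, not a published API:

; Hypothetical task and parameter names; the ENVI 6.0 API
; had not been published at the time of writing.
Task = ENVITask('ColorPointCloud')
Task.INPUT_POINT_CLOUD = PointCloud     ; source LAS point cloud
Task.INPUT_RASTER = OrthoRaster         ; overlapping imagery
Task.BAND_INDICES = [0, 1, 2]           ; user-selected image bands
Task.Execute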

Consider the USGS LiDAR Point Cloud (LPC) source data that can be downloaded from The National Map for San Francisco, CA. Since these LAS datasets do not include RGB encoding, a 3D point-cloud visualization will typically involve a simple colormap based on height attributes, perhaps with shading based on intensity. While specific features are clearly visible in this style of data visualization, it can be difficult to visually interpret the point cloud:


Data downloaded from The National Map courtesy of USGS

Fusing this point-cloud data with the 1-foot resolution imagery also available for this region yields a much more realistic visual representation:


Data downloaded from The National Map courtesy of USGS

Keep in mind there's no rule that says the point-cloud RGB encoding must come from the red, green, and blue image channels, which is why ENVI's "Color Point Cloud" tool+task is very flexible and allows the user to select any 3 bands from any raster dataset. For example, users can also utilize infrared bands from multispectral or hyperspectral datasets to obtain more complex coloring of the point-cloud data, such as a CIR representation:



Data downloaded from The National Map courtesy of USGS

Moving forward we plan to support other point-cloud storage formats such as BPF (Binary Point File) and SIPC (Sensor Independent Point Cloud) that provide the ability to store even more per-point auxiliary attribute data that will enable not just visualization but also specialized algorithm development for automated analysis of fused 3D data products.

