22 Feb 2017

Machine Learning Training for Automatic Target Detection

Author: Pedro Rodriguez

This blog offers a deeper dive into the machine learning training process for performing automatic target detection. Samples of automatic target detection were recently presented at the Machine Learning: Automate Remote Sensing Analytics to Gain a Competitive Advantage webinar.

Machine learning (ML) applications, from object recognition and caption generation to automatic language translation and driverless cars, have increased dramatically over the last few years, driven mainly by increased computing power (GPUs), the reduced cost of storage, the wider availability of training data, and the development of new training techniques for machine learning models.

In the last five years, Harris Corporation has made a multi-million-dollar investment in applying machine learning to solve customer challenges using remote sensing data. In response to the increased interest from our customers in evaluating how machine learning can solve their problems using geospatial data, I set out to train some of my coworkers on how to build an ML model to perform automatic feature detection on 2D overhead imagery. This training was crucial for our Solutions Engineers (SEs) to be able to prototype custom solutions for our customers and/or to integrate machine learning with our other powerful image analytics software, like ENVI/IDL.

Figure 1: From left to right, Jeff McKissick, Zach Norman, Pedro Rodriguez, Rebecca Lasica, and Dan Platt are pictured here in the lobby of the Harris Broomfield, CO office

 

For this particular automatic target detection training, I chose to build an ML classifier to identify all the crosswalks in a subset image of São Paulo, Brazil. São Paulo is notorious for congested streets that are woefully unfriendly to pedestrians and often lack zebra-style crosswalk markings. This application demonstrates how city officials can use ML to automatically find all the crosswalks in the city for urban planning purposes. For example, knowing where the crosswalks are makes it possible to determine how many are missing and to accurately gauge the labor and material required to improve pedestrian safety.

In just a few hours, and with each trainee using a small Red Hat virtual machine with 4 GB of RAM and 2 CPUs, we were able to complete the entire process: gathering the training data, building the ML model, and finally classifying a subset of the selected raster dataset.

For the raster dataset we used a high-resolution satellite image of São Paulo, Brazil from DigitalGlobe, Inc. (WorldView-3, 0.3 m GSD, 4-band RGBN). To gather the training data (positives and negatives), we used a custom ENVI extension to chip 35 x 35-pixel samples and augment the training data, as shown in Figure 2 below.

Figure 2: “Chip From Points” ENVI extension to gather training data

For data augmentation, we simply rotated each image chip in 90-degree increments (4 rotations). For the sake of time, we selected only 100 positives and 200 negatives, which after data augmentation gave us 1,200 training chips (400 positives and 800 negatives). Of the 1,200 training chips, 10% were used for validation, 20% for testing, and 70% for the actual training of the ML model. As seen in the heatmap in Figure 3 below, the ML classifier resulting from this limited training dataset (1,200 samples x 70% training = 840 training samples) performed very poorly: it produced many false negatives (missed detections) and some false positives (wrong detections).

Figure 3: Heatmap from a ML classifier using 100 positives and 200 negatives with 4 rotations

In order to highlight the true potential of our ML technology, I decided to train the crosswalk classifier with a larger training data set. I increased the training data by a factor of 5, so instead of 100 positives and 200 negatives, the new training set had 500 positives and 1,000 negatives. I also rotated each image chip in 10-degree increments (36 rotations) instead of every 90 degrees (4 rotations), which increased the total number of image samples to 54,000. Table 1 below summarizes the data sets used in both cases.

Table 1: Data Set Characteristics

Data Set | Positives | Negatives | Rotations | Total Samples | Training Samples (70%)
Small    | 100       | 200       | 4         | 1,200         | 840
Large    | 500       | 1,000     | 36        | 54,000        | 37,800
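The sample counts in Table 1 follow directly from the number of chips collected and the rotations applied. Here is a minimal sketch of that arithmetic (plain JavaScript, not part of the ENVI tooling):

// Total chips = (positives + negatives) x rotations; the training set is 70% of the total.
function sampleCounts(positives, negatives, rotations, trainingFraction) {
    var total = (positives + negatives) * rotations;
    return { total: total, training: Math.round(total * trainingFraction) };
}

console.log(sampleCounts(100, 200, 4, 0.7));    // { total: 1200, training: 840 }   -> "Small"
console.log(sampleCounts(500, 1000, 36, 0.7));  // { total: 54000, training: 37800 } -> "Large"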

 

The next step was to determine the number of iterations (mini-batch updates) that were needed to complete an epoch. One epoch consists of one full training cycle on the training data set. To calculate iterations per epoch we use the following formula:

Iterations per Epoch = TS / BS

where TS = Training Samples and BS = Batch Size.

It’s difficult to prescribe a minimum number of epochs for training a new model, since it will vary depending on the difficulty of the problem, the quality of the data, the chosen network architecture, and so on. As a starting point, I began with 42 epochs, and to calculate the total number of iterations I used the following formula:

Total Iterations = Number of Epochs × (TS / BS)
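Put in code, the two formulas look like this; the batch size below is only a placeholder, since the post does not state the value that was actually used:

// Iterations per epoch and total iterations, given the training-set size and mini-batch size.
function iterationsPerEpoch(trainingSamples, batchSize) {
    return Math.ceil(trainingSamples / batchSize); // round up to a whole mini-batch
}

function totalIterations(epochs, trainingSamples, batchSize) {
    return epochs * iterationsPerEpoch(trainingSamples, batchSize);
}

// Example with the large training set (37,800 samples) and an assumed batch size of 160:
// about 237 iterations per epoch, so 42 epochs comes out near 10,000 iterations.
console.log(totalIterations(42, 37800, 160));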

The required number of epochs can be determined by watching the validation accuracy as the training proceeds through an increasing number of iterations. This validation accuracy can be plotted with IDL as Receiver Operating Characteristic (ROC) curves, as seen in Figure 4 below:

 

Figure 4: ROC Curves with 10k, 20k, and 30k iterations

ROC curves plot the false positive rate on the X axis and the true positive rate on the Y axis. This means that the top left corner of the plot represents the “ideal” machine learning classifier, which has a false positive rate of zero and a true positive rate of one. As Figure 4 above shows, the accuracy of the crosswalk classifier increased with the number of iterations. At 30,000 iterations (about 126 epochs), the ROC curve indicated that enough training had been achieved, with an overall accuracy (ACC) of 98.84%. Figures 5 and 6 below show the results of the 30k-iteration classifier in a dense urban scene and in a highway scene, respectively. This crosswalk classifier proved to be very robust against confusers, like other similar street markings, and occlusions, like crosswalks partially hidden in shadow. I challenge you to find the crosswalks manually in the “before” (left) scenes of Figures 5 and 6. You can then validate your answers against the “after” (right) scenes that were analyzed using ML. Can you imagine manually identifying all the crosswalks in the city of São Paulo?
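If you have not worked with ROC curves before, each point on the curve comes from sweeping a decision threshold over the classifier's output scores. A generic sketch of that computation (this is not the IDL code used to produce Figure 4):

// Compute (false positive rate, true positive rate) pairs from scores and ground-truth labels.
// scores[i] is the classifier's confidence for sample i; labels[i] is 1 (crosswalk) or 0 (background).
function rocPoints(scores, labels, thresholds) {
    var positives = labels.filter(function(l) { return l === 1; }).length;
    var negatives = labels.length - positives;
    return thresholds.map(function(t) {
        var tp = 0, fp = 0;
        scores.forEach(function(s, i) {
            if (s >= t) {
                if (labels[i] === 1) { tp++; } else { fp++; }
            }
        });
        return { threshold: t, fpr: fp / negatives, tpr: tp / positives };
    });
}

// Example: rocPoints([0.9, 0.2, 0.7, 0.4], [1, 0, 1, 0], [0.1, 0.5, 0.8]);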


Figure 5: Urban Scene, before and after crosswalk detection


Figure 6: Highway Scene, before and after crosswalk detection


Automatic target detection is one of our most basic ML solutions; it usually involves searching for particular features in a large dataset, which makes it applicable to many real-world challenges. This type of solution is even more relevant with the “Big Data” surge, in which studies indicate that only 0.5% of all data generated is ever used or analyzed (1). It is clear that future business advantages in every industry will arise when companies are able to automatically analyze this surge of data. Machine learning is not meant to replace industry professionals, but to offload some of the tedious tasks to the computer, so they can focus their expert attention on analysis rather than on slowly combing through large datasets in search of particular features. ML can also run 24/7 and is highly scalable to the available computing resources.

I want to emphasize that at Harris Corp. we are not merely delivering software on a disc, but an end-to-end solution that delivers answers to specific industry problems: answers to questions like “How many utility poles need servicing?”, “Which blades in a wind farm have damage?”, or “How are the road conditions near me?” All of these are questions we have been able to accurately answer for our customers.

If you would like to know more about how we have implemented machine learning to address other real-world problems, watch the webinar that my co-worker Will Rorrer and I hosted in January 2017: Machine Learning: Automate Remote Sensing Analytics to Gain a Competitive Advantage.

Download a printer-friendly PDF of this blog here.

For any other questions, please contact our Software Sales Manager:

Kevin Wells

Kevin.wells@harris.com

Office: 303-413-3954

 

References:

(1)    https://www.technologyreview.com/s/530371/big-data-creating-the-power-to-move-heaven-and-earth/


25 Jan 2017

Don’t forget to Stretch! Using ENVI’s stretch tools to see things our eyes can’t.

Author: Jeff McKissick

Living in Boulder, we mountain people like to do a lot of physical activities, whether it's hiking, skiing, or yoga. Everyone knows the first thing you have to do before any physical activity is STRETCH! This applies in ENVI as well! Over the past few months I have worked on various projects where, had I applied one of ENVI's stretches first, I would have saved myself a lot of time. Today's example is a dataset of a large grass field in which the user was looking for an invasive weed species within the field.

You can see from the figure above that EVERYTHING LOOKS GREEN! How can you pick out a weed when everything looks like grass? With a little help from the customer, we got access to a shapefile they provided that showed areas in the scene that actually contained the weed we were looking for. Still, even with these shapefiles, everything looks the same color. This is where, before you start any of your preprocessing or classification workflows, you stretch!

ENVI has some really great stretch tools to choose from, but simply seeing them listed doesn't tell you what they actually do. For this example we used a few different linear percent stretches to help accentuate some of our features. What a percent stretch does is trim the X% of extreme values at each end of the histogram.

So, for example, if you look at our three images with the histogram stretch plot shown, you can see in the first image, with no stretch, that our pixel values span the standard 0-255 range. If we look at the Linear 2% and 5% stretched images respectively, you can see the pixel values get trimmed on each end of each color band.
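ENVI applies these stretches for you, but to make the idea concrete, here is a rough sketch of a linear percent stretch on a single band of byte data (this is not ENVI's implementation):

// Trim `percent` of the extreme values from EACH end of the histogram (e.g. 0.02 for a 2% stretch),
// then linearly rescale the remaining range to 0-255.
function linearPercentStretch(pixels, percent) {
    var sorted = pixels.slice().sort(function(a, b) { return a - b; });
    var low = sorted[Math.floor(sorted.length * percent)];
    var high = sorted[Math.ceil(sorted.length * (1 - percent)) - 1];
    return pixels.map(function(p) {
        var scaled = ((p - low) / (high - low)) * 255;          // rescale between the cutoffs
        return Math.max(0, Math.min(255, Math.round(scaled)));  // clamp the trimmed extremes
    });
}

// Repeat per band; subtle dark-green weed pixels end up spread over a much wider range of display values.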

From here we were easily able to identify the invasive weed in our scene and compare it to the shapefiles provided for us, so that we could run a classification workflow and extract the features we wanted. Our shapefiles, not shown here, were all around the areas in the scene above that were a very dark green. These stretches allowed us to make more accurate ROIs (Regions of Interest) for our classification, which in turn gave us a more accurate result.

So remember, DON’T FORGET TO STRETCH!


14 Nov 2016

Using Deep Learning for Feature Extraction

Author: Barrett Sather

In August, I talked about how to pull features out of images using known spatial properties of an object. Specifically, in that post, I used rule-based feature extraction to pull stoplights out of an image.

Today, I’d like to look into a new way of doing feature extraction using deep learning technology. With our deep learning tools developed here in house, we can use examples of target data to find similar objects in other images.

In order to train the system, we need three different kinds of examples for the deep learning network to learn what to look for: target, non-target, and confusers. These examples are patches cut out of similar images, and the patches are all the same size. For this exercise, I've picked a size of 50 by 50 pixels.

The first patch type is actual target data – I’ll be looking for illuminated traffic lights. For the model to work well, we’ll need different kinds of traffic signals, lighting conditions, and camera angles. This helps the network generalize what the object looks like.

Next, we’ll need negative data, or data that does not contain the object. This will be the areas surrounding the target, and other features that will possibly appear in the background of the image. In our case for traffic lights, this will include cars, streetlights, road signs, foliage, and others.

For the final patch type, I went through some images and marked things that may confuse the system. These are called confusers, and they are objects with a similar size, color, and/or shape to the target. In our case, this could be other signals like red arrows or a “don’t walk” hand. I’ve also included some bright road signs and a distant stop sign.
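The patch collection itself is simple bookkeeping: every example is a fixed-size window of pixels plus a class label. A hypothetical sketch of chipping a 50 x 50 patch around a marked point from a single-band, row-major pixel array (our actual chipping tools are in-house):

// Cut a size x size patch centered on (x, y); returns null if the window falls outside the image.
function chipPatch(data, width, height, x, y, size) {
    var half = Math.floor(size / 2);
    if (x - half < 0 || y - half < 0 || x - half + size > width || y - half + size > height) {
        return null;
    }
    var patch = [];
    for (var row = y - half; row < y - half + size; row++) {
        for (var col = x - half; col < x - half + size; col++) {
            patch.push(data[row * width + col]);
        }
    }
    return patch;
}

// Every training example pairs a patch with one of the three classes, e.g.:
// { patch: chipPatch(band, 1024, 768, 300, 220, 50), label: 'target' }  // or 'non-target' / 'confuser'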

Once we have all of these patches, we can use our machine learning tool known as MEGA to train a neural network that can be used to identify similar objects in other images. 

Do note that I have many more patches created than just the ones displayed. With more examples, and more diverse examples, MEGA has a better chance of accurately classifying target vs non-target in an image.

In our case here, we’ll only have three possible outcomes as we look through the image: light, not-a-light, and looks-like-a-light. If you have many different objects in your scene, you can even get something more like a classification image, as MEGA can be used to identify as many objects in an image as you like. If we wanted to extend this idea, we could look for red lights, green lights, street lights, lane markers, or other cars. (This is a simple example of how deep learning could be used in autonomous cars!)

To learn more about MEGA and what it can do in your analysis stack, contact our custom solutions group for more details! For my next post, we’ll look at the output from the trained neural network, and analyze the results.


27 Oct 2016

Customizing the Geospatial Services Framework with Node.js

Author: Daniel Platt

One of the greatest features of the Geospatial Services Framework (GSF), in my opinion, is the fact that it is built on Node.js. There are many reasons why this is great, but I want to talk about one in particular: the amount of customization it provides.

Below I will show a simple but powerful example of this customization, but before I get there, I want to give a quick overview of both GSF and Node.js. 

Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. With that said, what is important about Node.js in the context of this blog post is that it is a powerful, scalable backend for a web application that is written in the same language almost every website uses, JavaScript.

We have improved ENVI Service Engine by adding GSF – a lightweight, but powerful framework based on Node.js that can provide scalable geospatial intelligence for any size organization. I like to describe it as a “messenger” system that provides a way to communicate between the web and our various products or “engines” that provide analytics such as ENVI, IDL and more. 

GSF by design has its Node.js code exposed. This allows the product to be customized endlessly to fit whatever architecture it needs to reside in. It is a modular system, with different modules that can be toggled on/off or even duplicated and extended. This makes customization easy and safe.

One of these modules is called a Request Handler. This module serves up endpoints for GSF. These can be simple REST-based calls, a web page, or, even better, both.

While developing an ENVI web client demo that takes Amazon S3-hosted raster data and passes it to GSF to run analytics on, I found that I didn’t have a way to simply list what data was available in my S3 storage. While exploring ways to solve this problem, I realized that I could simply use the power of Node.js to accomplish the task.

After importing the aws-sdk package (already installed with GSF) into my request handler, I wrote a simple function that uses the package to list any .dat files in my S3 storage and return that information to my front-end web application to be ingested and displayed.

Here is the slightly modified request handler code with comments explaining each part.


//Import express, used by node.js to setup rest endpoints
var express = require('express');
//Import AWS, used to interact with Amazon Web Services including S3 storage
var aws = require('aws-sdk');
//Extra tools that should be included in request handlers
var extend = require('util')._extend;
var defaultConfig = require('./config.json');

/**
 * Dynamic Request Handler
 */

function DynUiHandler() {
    var s3Bucket, s3Workspace, s3;
    //Setup config options
    var config = {};
    extend(config, defaultConfig);
    //Grab information for S3 from config.json
    s3Workspace = config.S3Root;
    s3Bucket = config.S3Bucket;

    // Setup S3
    s3 = new aws.S3(config);


    /**
     * List files in the s3 bucket.
     * @param {object} req - The Express request object.
     * @param {object} res - The Express response object.
     */
    function listS3Data(req, res) {
        //Take the bucket parameter passed from the REST call and set that as the bucket to be accessed in the call to S3.
        var params = {
            Bucket: req.params.bucket
        };
        //Call the S3 package's built-in listObjects function to return what is available in the specified bucket.
        s3.listObjects(params, function(err, data) {
            //Check whether an error occurred and, if so, halt and report it to the client.
            if (err) {
                var code = err.statusCode || 500; //HTTP status code from the AWS error, if present
                var message = err.message || err;
                res.status(code).send({
                    error: message
                });
                return;
            }
            //Otherwise, push every object in the response with a '.dat' extension to an array that will be returned to the client.
            var files = [];
            //Look at each file contained in the data returned by s3.listObjects().
            data.Contents.forEach(function(file) {
                //Search for files with a .dat extension in the requested bucket...
                if (file.Key.endsWith('.dat')) {
                    //...and, if found, store that file's metadata in the files array.
                    files.push(file);
                }
            });
            //Send the files array containing the metadata of all .dat files found.
            res.send(files);
        });
    }

    //Initialize request handler, run when GSF starts up.
    this.init = function(app) {
        // Set up request handler to host the html subdirectory as a webpage.
        app.use('/dynui/', require('express').static(__dirname + '/html/'));
        // Set up a REST call that runs listS3Data and supplies the accompanying bucket parameter.
        app.get('/s3data/:bucket', listS3Data);
    };
}
//
module.exports = DynUiHandler;

After restarting the server, I was able to hit my REST endpoint by pasting the following into my browser:
http://localhost:9191/s3data/mybucket

This returns JSON describing the contents of “mybucket”.
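On the client side, consuming the endpoint is just as small. A hypothetical front-end snippet (the demo's actual UI code is not shown here):

// Ask the request handler which .dat files are available in a bucket, then list their keys.
fetch('http://localhost:9191/s3data/mybucket')
    .then(function(response) {
        if (!response.ok) {
            throw new Error('Request failed: ' + response.status);
        }
        return response.json();
    })
    .then(function(files) {
        files.forEach(function(file) {
            console.log(file.Key, file.Size); // listObjects returns Key, Size, LastModified, etc.
        });
    })
    .catch(function(err) {
        console.error(err);
    });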

 
Using this information, I was able to give users of my application real-time information, right inside the interface, about which data is available to them.
 

This is a simple example of how I customized a stock request handler to make my web demo more powerful. However, the ability to modify the source code of such a powerful system in a modular fashion provides a safe way to mold GSF into the exact form and shape you need to fit into your own architecture. 

It might just be me, but I much prefer that to restructuring my own existing architecture to fit closed-source code I know nothing about.



25 Oct 2016

The Future of Weather Observation is Coming Soon! GOES-R

Author: Joey Griebel

If all goes according to plan, the GOES-R weather satellite will launch next month, and the next generation of weather forecasting, solar activity monitoring, and lightning detection will be here. This advanced satellite will change how quickly and accurately we are able to monitor and predict hazardous weather, and it will help give those in harm’s way the time needed to prepare and evacuate. The GOES-R satellite will include:

Advanced Baseline Imager (ABI) – an advanced imager with 3 times more channels and 4 times better resolution that scans 5 times faster than before. All of this leads to better observation of severe storms, fire, smoke, aerosols, and volcanic ash.

Geostationary Lightning Mapper – the lightning mapper will allow mapping of lightning strikes on the ground, as well as lightning in the atmosphere. Researchers have found that an increase in lightning activity may be a sign of tornadoes forming, providing the data needed to detect tornadoes faster.

Space Weather Observation – GOES-R will work with NOAA instruments to gather information on radiation hazards from the sun that can interfere with communication and navigation systems, damage satellites, and threaten power utilities.

 

Harris Corporation has been supporting NOAA on aspects of the GOES-R satellite’s construction and will be providing the ground system support, as well as the 16.4-meter tri-band antenna needed to stay in touch with it.

Now, what does this have to do with ENVI? Once GOES-R is operational and collecting data, ENVI will support the Harris Weather group’s WXconnect systems for data validation and visualization, support ABI data directly in ENVI, and work with NOAA to continue creating advanced products in the future. It is an exciting time for NOAA with this milestone launch, and an exciting time to be working with ENVI on the advanced data that will be coming down!

Below are the baseline products, as well as some future products that can be expected.

 

BASELINE PRODUCTS

Advanced Baseline Imager (ABI)

Aerosol Detection (Including Smoke and Dust)
Aerosol Optical Depth (AOD)
Clear Sky Masks
Cloud and Moisture Imagery
Cloud Optical Depth
Cloud Particle Size Distribution
Cloud Top Height
Cloud Top Phase
Cloud Top Pressure
Cloud Top Temperature
Derived Motion Winds
Derived Stability Indices
Downward Shortwave Radiation: Surface
Fire/Hot Spot Characterization
Hurricane Intensity Estimation
Land Surface Temperature (Skin)
Legacy Vertical Moisture Profile
Legacy Vertical Temperature Profile
Radiances
Rainfall Rate / QPE
Reflected Shortwave Radiation: TOA
Sea Surface Temperature (Skin)
Snow Cover
Total Precipitable Water
Volcanic Ash: Detection and Height

Geostationary Lightning Mapper (GLM)

Lightning Detection: Events, Groups & Flashes

Space Environment In-Situ Suite (SEISS)

Energetic Heavy Ions
Magnetospheric Electrons & Protons: Low Energy
Magnetospheric Electrons & Protons: Med & High Energy
Solar & Galactic Protons

Magnetometer (MAG)

Geomagnetic Field

Extreme Ultraviolet and X-ray Irradiance Suite (EXIS)

Solar Flux: EUV
Solar Flux: X-ray Irradiance

Solar Ultraviolet Imager (SUVI)

Solar EUV Imagery

FUTURE PRODUCTS

Absorbed Shortwave Radiation: Surface
Aerosol Particle Size
Aircraft Icing Threat
Cloud Ice Water Path
Cloud Layers/Heights
Cloud Liquid Water
Cloud Type
Convective Initiation
Currents
Currents: Offshore
Downward Longwave Radiation: Surface
Enhanced "V" / Overshooting Top Detection
Flood/Standing Water
Ice Cover
Low Cloud and Fog
Ozone Total
Probability of Rainfall
Rainfall Potential
Sea and Lake Ice: Age
Sea and Lake Ice: Concentration
Sea and Lake Ice: Motion
Snow Depth (Over Plains)
SO2 Detection
Surface Albedo
Surface Emissivity
Tropopause Folding Turbulence Prediction
Upward Longwave Radiation: Surface
Upward Longwave Radiation: TOA
Vegetation Fraction: Green
Vegetation Index
Visibility

 


