14 Nov 2016

Using Deep Learning for Feature Extraction

Author: Barrett Sather

In August, I talked about how to pull features out of images using known spatial properties about an object. Specifically, in that post, I used rule-based feature extraction to pull stoplights out of an image.

Today, I’d like to look into a new way of doing feature extraction using deep learning technology. With our deep learning tools developed in house, we can use examples of target data to find similar objects in other images.

To train the system, we will need three different kinds of examples so the deep learning network can learn what to look for: target, non-target, and confusers. These examples are patches cut out of similar images, and all of the patches are the same size. For this exercise, I’ve picked a size of 50 by 50 pixels.
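The patch-cutting step can be sketched in a few lines. This is a minimal illustration in plain JavaScript, not the actual in-house tooling; the flat, row-major grayscale image layout and the coordinates are assumptions for the sake of the example:

```javascript
// Minimal sketch (not the in-house tooling): cut a size x size patch out of
// a grayscale image stored as a flat, row-major array of pixel values.
function extractPatch(image, imageWidth, left, top, size) {
  const patch = new Uint8Array(size * size);
  for (let row = 0; row < size; row++) {
    for (let col = 0; col < size; col++) {
      // Copy one pixel from the source image into the patch
      patch[row * size + col] = image[(top + row) * imageWidth + (left + col)];
    }
  }
  return patch;
}
```

In practice, each 50 x 50 patch would be saved out and sorted into target, non-target, and confuser sets before training.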

The first patch type is actual target data – I’ll be looking for illuminated traffic lights. For the model to work well, we’ll need examples with different kinds of traffic signals, lighting conditions, and camera angles. This will help the network generalize what the object looks like.

Next, we’ll need negative data – data that does not contain the object. This will be the areas surrounding the target, plus other features likely to appear in the background of the image. In our case for traffic lights, this includes cars, streetlights, road signs, and foliage, among others.

For the final patch type, I went through some images and marked things that might confuse the system. These are called confusers: objects with a similar size, color, and/or shape to the target. In our case, this could be other signals like red arrows or a “don’t walk” hand. I’ve also included some bright road signs and a distant stop sign.

Once we have all of these patches, we can use our machine learning tool known as MEGA to train a neural network that can be used to identify similar objects in other images. 

Do note that I have created many more patches than the ones displayed here. With more examples, and more diverse examples, MEGA has a better chance of accurately classifying target vs. non-target in an image.

In our case here, there are only three possible outcomes as we look through the image: light, not-light, and looks-like-a-light. If you have many different objects in your scene, you can even get something more like a classification image, as MEGA can be used to identify as many objects in an image as you like. If we wanted to extend this idea, we could look for red lights, green lights, street lights, lane markers, or other cars. (This is a simple example of how deep learning would be used in autonomous cars!)
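In spirit, the trained network is slid across the image as a window, and each window is scored into one of those three classes. A rough sketch of that scanning loop in plain JavaScript, where the `classifyPatch` stub stands in for the trained network (which is not shown here):

```javascript
// Slide a patchSize x patchSize window across an image and classify each
// window as 'light', 'not-light', or 'looks-like-a-light'.
// classifyPatch is a stand-in for the trained network's inference call.
function scanImage(width, height, patchSize, stride, classifyPatch) {
  const detections = [];
  for (let top = 0; top + patchSize <= height; top += stride) {
    for (let left = 0; left + patchSize <= width; left += stride) {
      const label = classifyPatch(left, top); // network inference goes here
      if (label === 'light') {
        // Keep only the windows classified as actual lights
        detections.push({ left, top, label });
      }
    }
  }
  return detections;
}
```

A real detector would typically use a smaller stride than the patch size, plus some overlap handling, but the scanning idea is the same.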

To learn more about MEGA and what it can do in your analysis stack, contact our custom solutions group for more details! For my next post, we’ll look at the output from the trained neural network, and analyze the results.


Categories: ENVI Blog | Imagery Speaks


27 Oct 2016

Customizing the Geospatial Services Framework with Node.js

Author: Daniel Platt

One of the greatest features of the Geospatial Services Framework (GSF), in my opinion, is that it is built on Node.js. There are many reasons why this is great, but I want to focus on one in particular: the amount of customization it provides.

Below I will show a simple but powerful example of this customization, but before I get there, I want to give a quick overview of both GSF and Node.js. 

Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. What is important about Node.js in the context of this blog post is that it is a powerful, scalable backend for web applications, written in the same language almost every website uses: JavaScript.

We have improved ENVI Services Engine by adding GSF – a lightweight but powerful framework based on Node.js that can provide scalable geospatial intelligence for any size organization. I like to describe it as a “messenger” system that provides a way to communicate between the web and our various products or “engines” that provide analytics, such as ENVI, IDL, and more.

GSF, by design, has its Node.js code exposed. This allows the product to be customized to fit whatever architecture it needs to reside in. It is a modular system: modules can be toggled on and off, or even duplicated and extended. This makes customization easy and safe.

One of these modules is the Request Handler, which serves up endpoints for GSF. These can be REST calls, a webpage, or, even better, both.

While developing an ENVI web client demo that passes Amazon S3 hosted raster data to GSF for analysis, I found that I didn’t have a simple way to list what data was available in my S3 storage. While exploring ways to solve this problem, I realized I could use the power of Node.js to accomplish the task.

After importing the aws-sdk package (already installed with GSF) into my request handler, I wrote a simple function that uses the package to list any .dat files in my S3 storage and return that information to my front-end web application to be ingested and displayed.

Here is the slightly modified request handler code with comments explaining each part.


//Import express, used by Node.js to set up REST endpoints
var express = require('express');
//Import the AWS SDK, used to interact with Amazon Web Services, including S3 storage
var aws = require('aws-sdk');
//Extra tools that should be included in request handlers
var extend = require('util')._extend;
var defaultConfig = require('./config.json');

/**
 * Dynamic Request Handler
 */
function DynUiHandler() {
    var s3Workspace, s3Bucket, s3;
    //Set up config options
    var config = {};
    extend(config, defaultConfig);
    //Grab information for S3 from config.json
    s3Workspace = config.S3Root;
    s3Bucket = config.S3Bucket;

    //Set up S3
    s3 = new aws.S3(config);

    /**
     * List .dat files in an S3 bucket.
     * @param {object} req - The Express request object.
     * @param {object} res - The Express response object.
     */
    function listS3Data(req, res) {
        //Take the parameter passed in the REST call and use it as the bucket to access
        var params = {
            Bucket: req.params.bucket
        };
        //Call the S3 package's built-in listObjects function to return what is available in the specified bucket
        s3.listObjects(params, function(err, data) {
            //If an error occurred, halt and report it to the client
            if (err) {
                var code = err.statusCode || 500;
                var message = err.message || err;
                res.status(code).send({
                    error: message
                });
                return;
            }
            //Otherwise, collect every object with a '.dat' extension into an array to return to the client
            var files = [];
            //Look at each file contained in the data returned by s3.listObjects()
            data.Contents.forEach(function(file) {
                //Keep files with a .dat extension in the requested bucket
                if (file.Key.endsWith('.dat')) {
                    //If found, store that file's information in the files array
                    files.push(file);
                }
            });
            //Send the files array containing metadata for all .dat files found
            res.send(files);
        });
    }

    //Initialize the request handler; runs when GSF starts up
    this.init = function(app) {
        //Host the html subdirectory as a static webpage
        app.use('/dynui/', express.static(__dirname + '/html/'));
        //Set up a REST call that runs listS3Data with the accompanying bucket parameter
        app.get('/s3data/:bucket', listS3Data);
    };
}

module.exports = DynUiHandler;

After restarting the server, I was able to hit my REST endpoint by pasting the following into my browser:
http://localhost:9191/s3data/mybucket

This returns the contents of “mybucket” as JSON.

 
Using this information, I was able to give users of my application real-time information, right inside the interface, about which data is available to them.
 

This is a simple example of how I customized a stock request handler to make my web demo more powerful. However, the ability to modify the source code of such a powerful system in a modular fashion provides a safe way to mold GSF into the exact form and shape you need to fit into your own architecture. 

It might just be me, but I much prefer that to restructuring my own existing architecture to fit closed-source code I know nothing about.



25 Oct 2016

The Future of Weather Observation is Coming Soon! GOES-R

Author: Joey Griebel

If all goes according to plan, the GOES-R weather satellite will launch next month, and the next generation of weather forecasting, solar activity monitoring, and lightning detection will be here. This advanced satellite will change how quickly and accurately we can monitor and predict hazardous weather, and help give those in harm’s way the time needed to prepare and evacuate. The GOES-R satellite will include:

Advanced Baseline Imager (ABI) – an advanced imager with 3 times more channels, 4 times better resolution, and 5 times faster coverage than its predecessor. All of this leads to better observation of severe storms, fire, smoke, aerosols, and volcanic ash.

Geostationary Lightning Mapper – the lightning mapper will map lightning strikes on the ground, as well as lightning in the atmosphere. Researchers have found that an increase in lightning activity may be a sign of tornadoes forming, so this data could help detect tornadoes faster.

Space Weather Observation – GOES-R will work with NOAA instruments to gather information on radiation hazards from the sun that can interfere with communication and navigation systems, damage satellites, and threaten power utilities.

Harris Corporation has been supporting NOAA on aspects of the GOES-R satellite’s construction and will provide the ground system support, as well as the 16.4-meter tri-band antenna needed to stay in touch with it.

Now, what does this have to do with ENVI? Once GOES-R is operational and collecting data, ENVI will support the Harris Weather group’s WXconnect systems for data validation and visualization, support ABI data directly in ENVI, and work with NOAA to help create advanced products in the future. It is an exciting time for NOAA with this milestone launch, and an exciting time to be working with ENVI on the advanced data that will be coming down!

Below are the baseline products, as well as some future products that can be expected.

 

BASELINE PRODUCTS

Advanced Baseline Imager (ABI):
Aerosol Detection (Including Smoke and Dust)
Aerosol Optical Depth (AOD)
Clear Sky Masks
Cloud and Moisture Imagery
Cloud Optical Depth
Cloud Particle Size Distribution
Cloud Top Height
Cloud Top Phase
Cloud Top Pressure
Cloud Top Temperature
Derived Motion Winds
Derived Stability Indices
Downward Shortwave Radiation: Surface
Fire/Hot Spot Characterization
Hurricane Intensity Estimation
Land Surface Temperature (Skin)
Legacy Vertical Moisture Profile
Legacy Vertical Temperature Profile
Radiances
Rainfall Rate / QPE
Reflected Shortwave Radiation: TOA
Sea Surface Temperature (Skin)
Snow Cover
Total Precipitable Water
Volcanic Ash: Detection and Height

Geostationary Lightning Mapper (GLM):
Lightning Detection: Events, Groups & Flashes

Space Environment In-Situ Suite (SEISS):
Energetic Heavy Ions
Magnetospheric Electrons & Protons: Low Energy
Magnetospheric Electrons & Protons: Med & High Energy
Solar & Galactic Protons

Magnetometer (MAG):
Geomagnetic Field

Extreme Ultraviolet and X-ray Irradiance Suite (EXIS):
Solar Flux: EUV
Solar Flux: X-ray Irradiance

Solar Ultraviolet Imager (SUVI):
Solar EUV Imagery

FUTURE PRODUCTS

Absorbed Shortwave Radiation: Surface
Aerosol Particle Size
Aircraft Icing Threat
Cloud Ice Water Path
Cloud Layers/Heights
Cloud Liquid Water
Cloud Type
Convective Initiation
Currents
Currents: Offshore
Downward Longwave Radiation: Surface
Enhanced "V" / Overshooting Top Detection
Flood/Standing Water
Ice Cover
Low Cloud and Fog
Ozone Total
Probability of Rainfall
Rainfall Potential
Sea and Lake Ice: Age
Sea and Lake Ice: Concentration
Sea and Lake Ice: Motion
Snow Depth (Over Plains)
SO2 Detection
Surface Albedo
Surface Emissivity
Tropopause Folding Turbulence Prediction
Upward Longwave Radiation: Surface
Upward Longwave Radiation: TOA
Vegetation Fraction: Green
Vegetation Index
Visibility


Categories: ENVI Blog | Imagery Speaks


17 Oct 2016

Just an analyst looking at the world

Author: James Lewis

In most cases, as a geospatial analyst, I use ENVI and IDL to look at the world in a rather flat, two-dimensional way when it comes to data analytics. However, LiDAR, drones, advancements in photogrammetry, and temporal analytics within ENVI have given me the ability to analyze data in ways that were previously too time consuming, too difficult, or simply not available.

I can use tools like the photogrammetry suite to create dense point clouds over areas where it would be difficult to attain the same fidelity with airborne platforms, or that such platforms cannot fly over at all. As small satellites come online, they will give us coverage and revisit times unlike anything we’ve had before!

What will we do with all that data?  

Utilizing the spatiotemporal tools in ENVI, we can gain information from days, weeks, months, or years’ worth of collects that classic change detection analytics would not support.

[Figures: point cloud from a drone collect; temporal analysis]

LiDAR data has become more available to analysts, and we need to be able to exploit it beyond looking at a bunch of points. Using the ENVI LiDAR tools, you can quickly and easily build robust elevation models for everything from creating an ortho-mosaic to performing a line-of-sight analysis. We can also extract 3D models for modeling and simulation, or build scenes for decision making.

[Figures: Collada models from ENVI LiDAR; DSM and building vectors from a LiDAR point cloud]
I can also take advantage of IDL to build custom tools that make me a more efficient analyst. For example, I can create a tool that ingests a point cloud and gives me multiple products, where before I would have needed to create them individually. And thanks to the new ENVI task system, I won’t have to rewrite my code when I’m ready to move to the enterprise.

[Figure: custom tool combining multiple tasks]
ENVI is not only allowing analysts to respond to various problem sets more efficiently; it also enables them to do things that just weren’t possible a few years ago. Though the types of data and their exploitation are changing at a rapid pace, ENVI is leading the charge to remain an industry standard for analytics.


Categories: ENVI Blog | Imagery Speaks


11 Oct 2016

Could satellites be the secret to detecting water leaks?

Author: Cherie Muleh

As a Channel Manager for Harris Geospatial, I look after our distributors in Australia, Latin America, and Southeast Asia. Distributors sell our products in their respective regions, handle support and training, and also uncover new uses for our products. Esri Australia’s Principal Consultant for Remote Sensing and Imagery, Dipak Paudyal, is driving new insights with data, and has started investigating a new way that satellites could help detect water leaks.

Water utility companies routinely face millions of dollars in lost revenue from wasted water and leaks in their pipeline infrastructure. In a recent blog, Dipak explores whether satellite data and location analytics can help reduce water loss and enable utility companies to better identify cracks in their systems. As Dipak notes, “…water utilities that are willing to think outside of the box and investigate new technologies such as SAR imagery will be guaranteed to stay ahead of the game.”

Analyzing all of this data requires specialty tools like ENVI SARscape, which helps users transform raw SAR data into easy-to-interpret images for further analysis. Check out Dipak’s blog and let us know what you think: can we help the water industry better map their resources with this type of technology?


© 2016 Exelis Visual Information Solutions, Inc., a subsidiary of Harris Corporation