14 Nov 2016

Using Deep Learning for Feature Extraction

Author: Barrett Sather

In August, I talked about how to pull features out of images using an object's known spatial properties. Specifically, in that post, I used rule-based feature extraction to pull stoplights out of an image.

Today, I’d like to look into a new way of doing feature extraction using deep learning technology. With our deep learning tools developed here in house, we can use examples of target data to find similar objects in other images.

In order to train the system, we will need three different kinds of examples for the deep learning network to learn what to look for: target, non-target, and confusers. These examples are patches cut out of similar images, and the patches will all be the same size. For this exercise, I've picked a size of 50 by 50 pixels.
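To make that concrete, here is a minimal IDL sketch of how same-size patches might be cut out around hand-marked center points. The input file, the center coordinates, and the output naming are all hypothetical stand-ins, not MEGA's actual ingest code.

; Cut 50x50 patches around hand-picked centers and save each one.
; 'training_scene.png' and the coordinates below are hypothetical.
img = read_png('training_scene.png')      ; [3, cols, rows] RGB array
centers = list([412, 198], [655, 240])    ; [x, y] patch centers
half = 25                                 ; half of the 50-pixel patch size
foreach c, centers, k do begin
  patch = img[*, c[0]-half : c[0]+half-1, c[1]-half : c[1]+half-1]
  write_png, string(k, FORMAT='("patch_", I03, ".png")'), patch
endforeach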

The first patch type is actual target data: I’ll be looking for illuminated traffic lights. For the model to work well, we’ll need different kinds of traffic signals, lighting conditions, and camera angles. This will help the network generalize what the object looks like.

Next, we’ll need negative data, or data that does not contain the object. This will be the areas surrounding the target, plus other features that are likely to appear in the background of the image. In our case for traffic lights, this includes cars, streetlights, road signs, and foliage.

For the final patch type, I went through some images and marked things that might confuse the system. These are called confusers: objects with a size, color, and/or shape similar to the target. In our case, this could be other signals such as red arrows or a “don’t walk” hand. I’ve also included some bright road signs and a distant stop sign.

Once we have all of these patches, we can use our machine learning tool known as MEGA to train a neural network that identifies similar objects in other images.

Note that I created many more patches than the ones shown here. With more examples, and more diverse examples, MEGA has a better chance of accurately classifying target versus non-target in an image.

In our case here, we’ll only have three possible outcomes as we look through the image: the light, not-light, and looks-like-a-light classes. If you have many different objects in your scene, you can even produce something more like a classification image, as MEGA can be used to identify as many objects in an image as you like. If we wanted to extend this idea, we could look for red lights, green lights, street lights, lane markers, or other cars. (This is a simple example of how deep learning can be used in autonomous cars!)

To learn more about MEGA and what it can do in your analysis stack, contact our custom solutions group for more details! In my next post, we’ll look at the output from the trained neural network and analyze the results.


9 Aug 2016

Basic feature extraction with ENVI ROIs

Author: Barrett Sather

The ENVI feature extraction module allows a user to extract certain features from an image. The first step is to segment the image into regions. Once this is done, you can use the spatial and spectral qualities of those regions to pull out targets.

For this example, let’s find the traffic lights in this image, taken from Wikipedia’s page on “traffic light”: https://en.wikipedia.org/wiki/Traffic_light

The first step is to do the segmentation with ENVI’s FXSegmentation task. I’ve set a merge value of 85 and a segment value of 90, which I determined to be good values for this dataset through manual inspection in ENVI.

Once these segments are created, they can be converted to ENVI ROIs. In this case, I used the ImageThresholdToROI task.

From here, we can look at the qualities of each region and create a scoring algorithm for how likely each ROI is to be the feature we are looking for. Keying on the color red did not seem viable, since that algorithm breaks down when the light changes. Instead, because the target regions are circular, let’s look for that.

There are two things we know about circles that we can check for: they are symmetrical, and the area grows in a fixed relationship to the circumference as the radius increases. To start, I’ve set up the pixel coordinates of each ROI so that the origin is the average x/y location of that region. I’ve also calculated the squared distance from the origin at each pixel.

x_avg = total(addr[0,*])/n_pix
y_avg = total(addr[1,*])/n_pix
x_coords = reform(addr[0,*] - x_avg)
y_coords = reform(addr[1,*] - y_avg)
power = (x_coords^2 + y_coords^2)

To test for symmetry, add the minimum and maximum of x together, and do the same for y. The closer these sums are to zero, the more symmetric the ROI is.

abs(max(x_coords) + min(x_coords)) + abs(max(y_coords) + min(y_coords))

To test how circular the area is, we can look at the slope of the sorted squared distances. A filled circle of radius R contains roughly N = πR^2 pixels (its area), and sorting the squared distances gives values that climb linearly from 0 to R^2, so the slope we are looking for is

slope = R^2 / N = R^2 / (π R^2) = 1/π ≈ 1/3
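If you want to check that value, here is a quick standalone sanity check in plain IDL (not part of the extraction code): build a synthetic filled disk and fit the same line. The fitted slope comes out near 1/π ≈ 0.318, which the scoring below rounds to 1/3.

; Synthetic filled disk of radius 100 on a 201x201 grid.
n = 201
xx = rebin(findgen(n) - 100, n, n)                ; x coordinate of each pixel
yy = rebin(reform(findgen(n) - 100, 1, n), n, n)  ; y coordinate of each pixel
r2 = xx^2 + yy^2
power = r2[where(r2 le 100.0^2)]                  ; keep pixels inside the disk
line = linfit(lindgen(n_elements(power)), power[sort(power)])
print, line[1]                                    ; ~0.318, i.e. about 1/pi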

Because the best possible symmetry score is zero, let’s use the same convention for this measure and score the absolute distance of the fitted slope from 1/3.

line = linfit(lindgen(n_elements(power)), power[sort(power)])
score = abs(line[1] - 0.3333)

The final step is to assign weights to the two measures (I used weights of 1) and calculate an overall score. The full code for extracting traffic lights (or any other circles) can be found at the bottom of this post.

This method of feature extraction takes a while to develop and perfect, which leads me to some exciting news for those who need to develop feature extraction models quickly. There is another method we have been developing here in house called MEGA, which is available through our custom solutions group. It is a machine learning tool that takes in examples of a feature you are looking for, and then generates a heat map of where that feature is likely to be located in an image.

Stay tuned for an in-depth look at how this new method compares to classic feature extraction techniques like the one I’ve presented above.

 

pro traffic_light_fx
  compile_opt idl2

  file = 'C:\Blogs\FX_to_ROIs\Stevens_Creek_Blvd_traffic_light.jpg'
  e = envi()
  view = e.GetView()

  ; Segment the image with FX segmentation. High edge and merge settings,
  ; found through manual inspection.
  t0 = ENVITask('FXSegmentation')
  raster = e.OpenRaster(file)
  t0.INPUT_RASTER = raster
  layer = view.CreateLayer(raster)
  t0.OUTPUT_RASTER_URI = e.GetTemporaryFilename('.dat')
  t0.MERGE_VALUE = 85.0
  t0.SEGMENT_VALUE = 90.0
  t0.Execute

  ; Turn each segment label into its own ROI.
  t1 = ENVITask('ImageThresholdToROI')
  t1.INPUT_RASTER = t0.OUTPUT_RASTER
  t1.OUTPUT_ROI_URI = e.GetTemporaryFilename('.xml')
  loop_max = max((t0.OUTPUT_RASTER).GetData(), MIN=loop_min)
  num_areas = loop_max - loop_min + 1
  t1.ROI_NAME = 'FX_Area_' + strtrim(indgen(num_areas)+1, 2)
  t1.ROI_COLOR = transpose([[bytarr(num_areas)],[bytarr(num_areas)],[bytarr(num_areas)+255]])
  arr = indgen(num_areas) + loop_min
  t1.THRESHOLD = transpose([[arr],[arr],[intarr(num_areas)]])
  t1.Execute

  ; Score each ROI: symmetry of the x/y extents plus the circularity slope test.
  allROIs = t1.OUTPUT_ROI
  c_scores = []
  c_loc = []
  for i=0, n_elements(allROIs)-1 do begin
    addr = (allROIs[i]).PixelAddresses(raster)
    n_pix = n_elements(addr)/2
    if n_pix gt 60 then begin   ; skip regions too small to score reliably
      x_avg = total(addr[0,*])/n_pix
      y_avg = total(addr[1,*])/n_pix
      x_coords = reform(addr[0,*] - x_avg)
      y_coords = reform(addr[1,*] - y_avg)
      power = (x_coords^2 + y_coords^2)
      line = linfit(lindgen(n_elements(power)), power[sort(power)])
      this_score = abs(max(x_coords) + min(x_coords)) + $
                   abs(max(y_coords) + min(y_coords)) + abs(line[1] - 0.3333)
      c_scores = [c_scores, this_score]
      c_loc = [c_loc, i]
    endif
  endfor

  ; Display the four best (lowest) scoring ROIs.
  idx = sort(c_scores)
  sort_scores = c_scores[idx]
  sort_locs = c_loc[idx]
  for i=0, 3 do begin
    lightROI = allROIs[sort_locs[i]]
    !null = layer.AddRoi(lightROI)
  endfor
end


2 Jun 2016

The Perks of Being a Beta Tester

Author: Barrett Sather

As part of the Commercial Services Group here at Harris, I have a unique opportunity to use our latest and greatest tech, often before it even goes public (making me a bit of a beta tester at times). Over the past couple of months, I’ve had the pleasure of working with some of our newest technology, and it’s very exciting. The two pieces I’ve had my hands on are our Geospatial Framework (GSF), and our machine learning tool known as MEGA.

The Geospatial Framework has given me a lot of power to set up large amounts of processing, either locally or on a server. It allows for clustering, supports many different processing engines, and even lets you hook it into custom websites. From what I’ve seen, it’s very flexible and powerful tech.

MEGA, which I’m also extremely excited about, is a new way to tackle feature extraction problems. Though I don’t work on the underlying artificial intelligence itself, I have had hands-on experience training this deep learning system. While the code is complex, the idea is not: feed the system examples of what you are looking for in imagery, and once the system is trained, the software will tell you locations and confidences for where those objects of interest are.

Image source: http://stats.stackexchange.com/questions/tagged/deep-learning

Using these two technologies together, I’ve been classifying large amounts of image data quickly through a deep learning system, and the sky really is the limit. With GSF, the ability to scale up just depends on how much hardware you can allocate to a task.

The more I use and learn about these technologies, the more excited I become about them. The best part is that they are very new, so they have room to become more and more powerful and robust. Can’t wait!!


11 Feb 2016

Using Reports in a Cloud Based Environment

Author: Barrett Sather

One hurdle I often find challenging when working in the cloud is the question "Where is the data?" This applies not just to the inputs and the processing chain, but also to the output data and how a person can (and should) access it.

In a server environment, it is uncommon for someone accessing the server to run processing and then download the full final files for inspection, simply because of their size. Instead, it's much simpler to generate a smaller file that retains the necessary information. The smaller file can then be downloaded, or even just viewed on the server with a web client.

This was our motivation in the Professional Services Group when we built the report generator for the Precision AG Toolkit. The system takes in parameters in XML format as key-value pairs (e.g. stretch="2% linear" or color_table="39"). With these parameters, the reports are not only highly customizable, but it is also easy to change them or generate new ones.
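As a rough illustration of the idea (the file name and element layout here are hypothetical, not the toolkit's actual schema), key-value pairs like those can be read in IDL with its built-in XML DOM parser:

; Hypothetical template line: <image stretch="2% linear" color_table="39"/>
oDoc = obj_new('IDLffXMLDOMDocument', FILENAME='report_template.xml')
oImages = oDoc->GetElementsByTagName('image')
oElem = oImages->Item(0)
stretch = oElem->GetAttribute('stretch')           ; '2% linear'
ctable = fix(oElem->GetAttribute('color_table'))   ; 39
obj_destroy, oDoc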

Once results are generated on the server, the image is placed into a document (currently only PDF is supported) and the extra graphics are added in. These can be objects like text, colorbars, and scale information. Below is an example that we generated using the Precision AG Toolkit's Hotspot Analysis tool and a template made specifically for reporting these hotspots.

The Hotspot Analysis tool uses a scoring system to determine the most important "hotspots" in an image generated from a vegetation index. In this case we used NDVI for plant health. The large green region at the top of the image is a golf course.
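For reference, NDVI is a simple per-pixel band ratio of the near-infrared and red bands. A minimal IDL version, assuming nir and red are float arrays of those two bands:

; NDVI ranges over [-1, 1]; healthy vegetation reflects NIR strongly and scores high.
ndvi = (nir - red) / ((nir + red) > 1e-6)   ; the > operator guards against divide-by-zero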

This portable format is viewable very quickly, since the document is a fraction of the actual file size. Because of this, reporting is much faster than the traditional method of downloading and analyzing the full data. It's also an extra perk that the files are small enough to send in an e-mail! (Imagine that....)

Total size of the above example? 57 KB.


15 Dec 2015

Precision Agriculture with a Scalable System

Author: Barrett Sather

Oftentimes in the world of remote sensing, there is a need to run multiple algorithms on a single file. It is also advantageous to be able to run the same algorithms on many different files. Many of these algorithms and workflows take a few minutes, so an analyst has to wait for each step to finish before proceeding to the next task. This, however, is not the case when using a server environment like ENVI Services Engine.

When using a server configuration, you can make a request to the server to start some processing, and that job will be placed in a queue. When the machine hosting the server has the resources for the job, it will begin processing. This allows a user to set up a whole bunch of tasks on different files while the processing runs separately from the client.

Using the ENVI Services Engine, our Services Group built a tool that takes in imagery and runs multiple algorithms and vegetation indices on it. It has a web page as a client, so you can input different parameters for different files and click go! There is also an option to perform atmospheric correction (QUAC) on the data before running the algorithms. Once a job has completed, the web page notifies you that the task is finished and links to the results in PDF format. Here's what the version I have right now looks like in Mozilla Firefox.

Depending on the amount of processing you need to do, this system is very easy to scale up and down. If you need to process more files in less time, simply adding resources to the server will increase the number of jobs you can run at a time. This shortens the time jobs spend in the queue and allows for a faster processing chain.

This is one example of a tool that our Services Group is able to build. ENVI Services Engine is a very flexible tool that allows you to incorporate any algorithm you can write, and the controller for the algorithm can be set up in an infinite number of ways, just like webpages across the internet. If there is a custom workflow that you find cumbersome and clunky, let our experts know at consult@exelisvis.com. We work to provide you with the tools you need to run your processing with less wait and less hassle. If you need even more speed, check out this article on GPU acceleration! Also, to see more of our custom web tools in action, check out our demo page!
