Neural Net

Use Neural Net to apply a layered feed-forward neural network classification technique. The Neural Net technique uses standard backpropagation for supervised learning. You can select the number of hidden layers to use, and you can choose between a logistic and a hyperbolic activation function. Learning occurs by adjusting the weights in the nodes to minimize the difference between the output node activations and the desired output. The error is backpropagated through the network, and weight adjustments are made using a recursive method. You can use Neural Net classification to perform non-linear classification.
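As a rough illustration of how a single pixel is classified (a minimal sketch, not ENVI's internal implementation), the example below passes a pixel's band values through one hidden layer with a logistic activation and assigns the class with the highest output-node activation. The layer sizes, weights, and function names are hypothetical; the weight-update details used during training are sketched after the procedure below.

```python
import numpy as np

def logistic(x):
    # Logistic (sigmoid) activation; ENVI also offers a hyperbolic option.
    return 1.0 / (1.0 + np.exp(-x))

def classify_pixel(bands, weights, biases):
    """Forward pass of a layered feed-forward network (illustrative sketch).

    bands   : 1-D array of the pixel's spectral band values (input layer).
    weights : list of 2-D arrays, one per layer (hidden layers, then output layer).
    biases  : list of 1-D arrays matching `weights`.
    Returns the index of the class with the highest output-node activation.
    """
    activation = bands
    for w, b in zip(weights, biases):
        activation = logistic(activation @ w + b)
    return int(np.argmax(activation))

# Hypothetical example: 6 input bands, one hidden layer of 8 nodes, 4 classes.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(6, 8)), rng.normal(size=(8, 4))]
biases = [np.zeros(8), np.zeros(4)]
print(classify_pixel(rng.random(6), weights, biases))
```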

  1. Use the ROI Tool to define training regions for each class. The more pixels and classes, the better the results will be.
  2. Use the ROI Tool to save the ROIs to an .roi file.
  3. Display the input image you will use for Neural Net classification, along with the ROI file.
  4. From the Toolbox, select Classification > Supervised Classification > Neural Net Classification. The Classification Input File dialog appears.
  5. Select the input file and perform optional spatial and spectral subsetting, and/or masking, then click OK. The Neural Net Parameters dialog appears.
  6. In the Select Classes from Regions list, select ROIs and/or vectors as training classes. The ROIs listed are derived from the available ROIs in the ROI Tool dialog. The vectors listed are derived from the open vectors in the Available Vectors List.
  7. Select the activation method (logistic or hyperbolic) from one of the Activation radio buttons. Both functions are sketched after this procedure.
  8. In the Training Threshold Contribution field, enter a value from 0 to 1.0. The training threshold contribution determines the size of the contribution of the internal weight with respect to the activation level of the node. It is used to adjust the changes to a node’s internal weight. The training algorithm iteratively adjusts the weights between nodes and, optionally, the node thresholds to minimize the error between the output layer and the desired response. Setting the Training Threshold Contribution to zero does not adjust the nodes’ internal weights. Adjusting the nodes’ internal weights could lead to better classifications, but too many weights could also lead to poor generalization. (The updates governed by the parameters in steps 8 through 11 are sketched after this procedure.)
  9. In the Training Rate field, enter a value from 0 to 1.0. The training rate determines the magnitude of the adjustment of the weights. A higher rate will speed up the training, but will also increase the risk of oscillations or non-convergence of the training result.
  10. In the Training Momentum field, enter a value from 0 to 1.0. Entering a momentum rate greater than zero allows you to set a higher training rate without oscillations. A higher momentum rate trains with larger steps than a lower momentum rate. Its effect is to encourage weight changes along the current direction.
  11. In the Training RMS Exit Criteria field, enter the RMS error value at which the training should stop.

    If the RMS error, which is shown in the plot during training, falls below the entered value, the training will stop, even if the number of training iterations has not been reached. The classification will then be executed.

  12. Enter the Number of Hidden Layers to use. For a linear classification, enter a value of 0. With no hidden layers, the different input regions must be linearly separable with a single hyperplane. Non-linear classifications are performed by setting the Number of Hidden Layers to a value of 1 or greater. When the input regions are linearly inseparable and require two hyperplanes to separate the classes, you must have at least one hidden layer to solve the problem (see the example after this procedure). Two hidden layers are used to classify input space where the different elements are neither contiguous nor connected.
  13. Enter the Number of Training Iterations.
  14. Optionally, enter a value in the Min Output Activation Threshold field. If the activation value of the pixel being classified is less than this threshold value, that pixel is labeled unclassified in the output (see the sketch after this procedure).
  15. Select classification output to File or Memory.
  16. Use the Output Rule Images? toggle button to select whether or not to create rule images. Use rule images to create intermediate classification image results before final assignment of classes. You can later use rule images in the Rule Classifier to create a new classification image without having to recalculate the entire classification.
  17. If you selected Yes to output rule images, select output to File or Memory.
  18. Click OK. During training, a plot window appears showing the RMS error at each iteration. The error should decrease and approach a steady low value if training is proceeding properly. If the error oscillates and does not converge, try a lower training rate or different ROIs. When the classification completes, ENVI adds the resulting classification image, and rule images if you chose to output them, to the Layer Manager.
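The two Activation choices in step 7 correspond to the standard logistic and hyperbolic tangent functions. A minimal sketch of both:

```python
import numpy as np

def logistic(x):
    # Logistic activation: output in (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def hyperbolic(x):
    # Hyperbolic (tanh) activation: output in (-1, 1).
    return np.tanh(x)
```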
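The parameters in steps 8 through 11 map onto the pieces of a standard backpropagation update. The sketch below is a simplified, assumed formulation (a single hidden layer, logistic activations, batch updates); the names training_rate, momentum, threshold_contribution, and rms_exit mirror the dialog fields, but the code is illustrative rather than ENVI's implementation. In particular, treating the threshold contribution as a scale factor on the bias (threshold) adjustments is an assumption based on the description in step 8.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, T, n_hidden, training_rate, momentum, threshold_contribution,
          rms_exit, max_iterations):
    """Backpropagation training with one hidden layer (illustrative sketch).

    X : (n_samples, n_bands) training pixels.
    T : (n_samples, n_classes) target outputs, one column per class.
    """
    rng = np.random.default_rng(0)
    w1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=0.1, size=(n_hidden, T.shape[1]))
    b2 = np.zeros(T.shape[1])
    dw1 = np.zeros_like(w1)
    dw2 = np.zeros_like(w2)
    db1 = np.zeros_like(b1)
    db2 = np.zeros_like(b2)

    for iteration in range(max_iterations):
        # Forward pass.
        h = logistic(X @ w1 + b1)
        y = logistic(h @ w2 + b2)

        # RMS error over all output nodes; stop once it falls below the
        # Training RMS Exit Criteria (step 11).
        rms = np.sqrt(np.mean((T - y) ** 2))
        if rms < rms_exit:
            break

        # Backpropagate the error (the logistic derivative is y * (1 - y)).
        delta_out = (T - y) * y * (1.0 - y)
        delta_hid = (delta_out @ w2.T) * h * (1.0 - h)

        # Weight changes: the training rate scales the step (step 9); momentum
        # re-applies a fraction of the previous change to damp oscillations (step 10).
        dw2 = training_rate * (h.T @ delta_out) + momentum * dw2
        dw1 = training_rate * (X.T @ delta_hid) + momentum * dw1
        w2 += dw2
        w1 += dw1

        # Node thresholds (biases) are adjusted in proportion to the training
        # threshold contribution (an assumed reading of step 8); zero leaves them unchanged.
        db2 = training_rate * threshold_contribution * delta_out.sum(axis=0) + momentum * db2
        db1 = training_rate * threshold_contribution * delta_hid.sum(axis=0) + momentum * db1
        b2 += db2
        b1 += db1

    return w1, b1, w2, b2
```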
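As a usage example of the training sketch above, and to illustrate the separability point in step 12: an XOR-style class layout cannot be split by a single hyperplane, so a network with no hidden layers cannot learn it, while one hidden layer can.

```python
# XOR-style layout: not linearly separable, so at least one hidden layer is
# required (step 12). Uses the train() and logistic() sketches above.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[1., 0.], [0., 1.], [0., 1.], [1., 0.]])  # one column per class

w1, b1, w2, b2 = train(X, T, n_hidden=8, training_rate=0.5, momentum=0.9,
                       threshold_contribution=0.9, rms_exit=0.05,
                       max_iterations=20000)
h = logistic(X @ w1 + b1)
print(np.argmax(logistic(h @ w2 + b2), axis=1))  # typically [0, 1, 1, 0] once training converges
```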
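The Min Output Activation Threshold in step 14 can be thought of as a cutoff on the winning class's activation. A hypothetical sketch:

```python
import numpy as np

def assign_class(output_activations, min_activation):
    """Label a pixel, or mark it unclassified, from its output-node activations.

    output_activations : 1-D array with one activation per class (the rule values).
    Returns the winning class index, or -1 (unclassified) when the winning
    activation falls below the minimum output activation threshold.
    """
    winner = int(np.argmax(output_activations))
    if output_activations[winner] < min_activation:
        return -1  # unclassified
    return winner

print(assign_class(np.array([0.10, 0.35, 0.20]), min_activation=0.5))  # -1: unclassified
print(assign_class(np.array([0.10, 0.80, 0.20]), min_activation=0.5))  # 1
```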


