
An Overview of LiDAR and Hyperspectral Data Fusion

Jason Wolfe

We often hear about the concept of data fusion in remote sensing, but what does it mean? In general, it refers to the process of combining data from multiple sensors to produce a dataset that contains more detailed information than each of the individual sources. A sensor by itself offers a unique perspective and is designed for a specific purpose. However, complementary data from multiple sensors can significantly improve the accuracy and interpretation of features.

Three common levels of data fusion exist (Zhang, 2010):

  • Pixel-level fusion: Combines raw pixel data from multiple source images into a single image.
  • Feature-level fusion: Extracts different objects from multiple data sources to yield feature maps for subsequent processing in change detection, image segmentation, etc.
  • Decision-level fusion: Fuses the results of multiple algorithms to yield a final decision, using statistical or fuzzy logic methods.
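To make pixel-level fusion concrete, here is a minimal sketch: once two rasters are co-registered on the same grid, they can be stacked into a single multi-band array. The arrays below are synthetic placeholders, not real imagery, and the layer choices (a vegetation index and a canopy height raster) are just illustrative.

```python
import numpy as np

# Hypothetical co-registered 1 m rasters (synthetic stand-ins for real data)
rows, cols = 100, 100
ndvi = np.random.rand(rows, cols)                 # vegetation index from hyperspectral
canopy_height = np.random.rand(rows, cols) * 30   # meters, derived from LiDAR

# Pixel-level fusion: stack the sources into one multi-band image
fused = np.stack([ndvi, canopy_height], axis=0)
print(fused.shape)  # (2, 100, 100)
```

Feature- and decision-level fusion operate further downstream, on extracted objects and on classifier outputs respectively, rather than on raw pixels like this.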

Let’s say that I want to study different tree species in an urban environment. I could take a field trip to identify each tree in person, but this would be impractical in a large city. Many studies have shown that fusing multiple airborne or satellite data sources is an efficient way to approach this problem. See the References section at the end of this article for some examples.

Light detection and ranging (LiDAR) data provides information on the 3D structure of trees, such as canopy height and volume. Because of its high sampling rate, LiDAR data can be used to estimate these metrics at the individual-tree level.
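One common way to derive canopy height from LiDAR is to subtract a digital terrain model (bare earth) from a digital surface model (first returns). A minimal sketch, using synthetic arrays in place of real gridded LiDAR products:

```python
import numpy as np

# Synthetic 1 m gridded LiDAR products (placeholders for real rasters)
dtm = np.full((50, 50), 1450.0)                  # bare-earth elevation, meters
dsm = dtm + np.random.uniform(0, 25, (50, 50))   # first-return surface elevation

# Canopy height model: surface elevation minus ground elevation
chm = dsm - dtm
print(chm.min(), chm.max())
```

Individual-tree metrics such as crown volume are typically estimated by segmenting local maxima in a canopy height model like this one.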

NEON LiDAR point clouds, colored by class in ENVI LiDAR

Cross section of NEON point cloud data, colored by height in ENVI LiDAR

Imaging spectroscopy (also referred to as hyperspectral imagery) provides information about the spectral characteristics of different objects. It can help to distinguish urban features from vegetation and even to discriminate between different tree species. Urban planners can use imaging spectroscopy to map different roofing materials, streets, and open spaces.

NEON hyperspectral image with spectral profile and 3D cube created in ENVI

By combining LiDAR and hyperspectral data, you can:

  • Create more accurate classification images of urban features
  • Identify individual tree species
  • Estimate forest biomass

I wanted to learn more about these techniques myself, so I acquired some sample LiDAR and hyperspectral imagery of the city of Fruita, Colorado, from the National Ecological Observatory Network (NEON). These datasets work well with data fusion techniques:

  • Airborne imaging spectrometer reflectance data in 428 bands extending from 380 to 2510 nanometers, with a spectral sampling of 5 nanometers. Spatial resolution is approximately 1 meter.
  • Airborne LiDAR point-cloud data, along with derived gridded products such as digital surface models, canopy height models, slope, aspect, etc.

From the reflectance image I can derive raster products such as:

  • Principal components
  • Classification image created with the Support Vector Machine (SVM) algorithm and in-scene spectra (since ground-truth data are not yet available for this area)
  • Spectral indices to indicate vegetation health or to separate man-made features from the rest of the scene
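As a sketch of the spectral-index idea, NDVI can be computed by picking the bands nearest the conventional red (~670 nm) and near-infrared (~800 nm) wavelengths from the reflectance cube. The cube below is synthetic, with a wavelength grid mimicking the NEON sampling described above (5 nm spacing starting at 380 nm):

```python
import numpy as np

# Synthetic reflectance cube: 428 bands at 5 nm spacing from 380 nm, values in [0, 1]
wavelengths = 380 + 5 * np.arange(428)   # nanometers
cube = np.random.rand(428, 20, 20)       # (bands, rows, cols)

# Select the bands closest to red (~670 nm) and near-infrared (~800 nm)
red = cube[np.argmin(np.abs(wavelengths - 670))]
nir = cube[np.argmin(np.abs(wavelengths - 800))]

# NDVI: high for healthy vegetation, low or negative for man-made surfaces
ndvi = (nir - red) / (nir + red + 1e-10)
print(ndvi.shape)  # (20, 20)
```

Other indices follow the same band-math pattern; they differ only in which wavelengths they combine and how.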

Multi-view display in ENVI showing hyperspectral- and LiDAR-derived products

The next step would be to create a raster layer stack in ENVI that consists of various images derived from the LiDAR and hyperspectral data. I would need to co-register the images before creating a layer stack. I could open the layer stack in ENVI Feature Extraction to extract objects that meet certain criteria. If I wanted to extract buildings above a certain height, I could create a rule that identifies a specific range of height values from the digital surface model along with low NDVI values to exclude vegetation. Or, I could use the hyperspectral reflectance image along with the canopy height raster to identify different tree species. The possibilities are endless!
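The building-extraction rule described above can be sketched as a simple boolean mask over two co-registered layers. The thresholds (buildings taller than 5 m, NDVI below 0.2) and the synthetic rasters are illustrative assumptions, not values from the NEON data:

```python
import numpy as np

# Synthetic co-registered layers (stand-ins for DSM-derived height and NDVI rasters)
rng = np.random.default_rng(0)
height_above_ground = rng.uniform(0, 30, (100, 100))   # meters
ndvi = rng.uniform(-0.2, 0.9, (100, 100))

# Rule: tall objects (> 5 m) with low NDVI (< 0.2) are candidate buildings;
# the NDVI condition excludes tall vegetation such as tree canopies
buildings = (height_above_ground > 5.0) & (ndvi < 0.2)
print(buildings.sum(), "candidate building pixels")
```

In a tool like ENVI Feature Extraction the same logic would be expressed as rules on attribute ranges rather than hand-written array operations, but the underlying idea is the same.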

This article provided a quick overview of a complex subject, but hopefully it gives you some ideas for potential applications.

References

Abbasi, B., H. Arefi, B. Bigdeli, M. Motagh, and S. Roessner. “Fusion of Hyperspectral and LiDAR Data Based on Dimension Reduction and Maximum Likelihood.” The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 36th International Symposium on Remote Sensing of Environment, Volume XL-7/W3 (2015).

Liu, L., Y. Pang, W. Fan, Z. Li, and M. Li. “Fusion of Airborne Hyperspectral and LiDAR Data for Tree Species Classification in the Temperate Forest of Northeast China.” 19th International Conference on Geoinformatics (2011), doi: 10.1109/GeoInformatics.2011.5981118.

Man, Q., P. Dong, H. Guo, G. Liu, and R. Shi. “Light Detection and Ranging and Hyperspectral Data for Estimation of Forest Biomass: A Review.” Journal of Applied Remote Sensing 8 (2014): 081598.

National Ecological Observatory Network. 2014. Data accessed on 14 September 2015. Available online (http://data.neoninc.org/) from National Ecological Observatory Network, Boulder, CO, USA.

Ramdani, F. “Urban Vegetation Mapping from Fused Hyperspectral Image and LiDAR Data with Application to Monitor Urban Tree Heights.” Journal of Geographic Information System 5 (2013): 404-408.

Sugumaran, R., and M. Voss. “Object-Oriented Classification of LiDAR-Fused Hyperspectral Imagery for Tree Species Identification in an Urban Environment.” Urban Remote Sensing Joint Event (2007), doi: 10.1109/URS.2007.371845.

Zhang, J. “Multi-source Remote Sensing Data Fusion: Status and Trends.” International Journal of Image and Data Fusion 1 (2010): 5-24. doi: 10.1080/19479830903561035.
