Howard Oh – Species Level Classification from UAV Imagery using Object Based Image Analysis with eCognition


    Howard Oh

    California State University Long Beach

    Introduction

    Invasive, non-native plants pose a major threat to the native biodiversity and ecosystem processes of oceanic islands.  Due to the high rate at which non-native plant species are introduced to the Hawaiian Islands, Hawaii is a central focus of the invasive plant threat at the national and international levels (Kueffer & Loope, 2008).  For example, the invasive Australian tree fern (Cyathea cooperi) naturally dominates the native Hawaiian endemic tree ferns (Cibotium spp.) by using soil nitrogen more efficiently and producing more fertile fronds every month (Kueffer & Loope, 2008).  Hawaii has one of the highest numbers of problematic invasive species, yet it lacks the tools needed to track and record which species are present or incoming before they become naturalized.  Accurate mapping of vegetation cover and observation of dynamic changes provide the scientific foundation for biodiversity protection and restoration (Kueffer & Loope, 2008).  Remote sensing is a powerful tool that has aided vegetation classification for decades; however, further research is required to accurately map vegetation at the detailed community level and eventually at the species level.

    This research aims to supplement species-level vegetation classification from commercial satellite imagery, specifically eight-band WorldView-2 imagery, with imagery collected by Unmanned Aerial Vehicles (UAVs), taking advantage of the UAVs' higher spatial resolution.  UAV imagery allows for more detailed quantitative analysis of vegetation and biodiversity.  The goal of this research is to eventually produce high spatial resolution vegetation and land cover products that can contribute to Hawaii's Gap Analysis Program (GAP), which deals with habitat protection.  The techniques derived from this research can be further applied to monitor the biodiversity of oceanic islands around the globe.

    WorldView-2 doubles the number of spectral bands over earlier four-band sensors, opening the possibility of more in-depth vegetation analysis, but satellite technology still has inherent limits.  Remote sensing from satellites continues to face the same challenges it has from the very beginning: satellites are constrained by high operational costs, limited spatial resolution, long revisit times, and environmental factors such as cloud cover (Moran et al., 2003; Lebourgeois et al., 2012).  Although satellite imagery has improved greatly in the last decade, it still provides insufficient data to support in-depth vegetation analysis in areas of the world that are not constantly monitored (Berni et al., 2009).  Satellite sensors lack the combined spatial, spectral, and temporal resolution required for accurate vegetation analysis (Berni et al., 2009; Lebourgeois et al., 2012).  Because vegetation conditions fluctuate between satellite revisit times, satellite images cannot accurately assess vegetation growth and health throughout the year.

    In recent years, research has sought to decrease the cost of remote sensing while increasing its quality through Unmanned Aerial Vehicles (UAVs).  This new technology is powerful but still faces many challenges.  One of the main challenges these imaging systems face is the imprecise or poor division of wavelength bands in modified commercial cameras, which are the standard sensors on most UAS.  Commercial off-the-shelf (COTS) cameras, even those modified to collect infrared wavelengths, are not as finely tuned as the multispectral scanners on satellites such as WorldView-2, which has eight discrete bands with precise cut-off points.  For this reason, attempts are being made to replicate satellite multispectral scanners for use on UAS.  A research team at Montana State University succeeded in creating a low-cost multispectral imaging system using multiple commercial cameras and filter combinations on tethered balloons (Shaw et al., 2012).  Like many recent experiments, to compensate for limitations of UAS, such as the difficulty of aligning time-sequential images in different spectral bands and the need for interference filters, Shaw et al. used post-processing imaging software to overcome the image distortion and the spectral shift toward shorter wavelengths caused by interference filters (Shaw et al., 2012).  Other researchers have reached the same conclusion: modified commercial cameras yield narrower bands and reduced spectral sensitivity in the visible and near-infrared compared to satellites (Lebourgeois et al., 2012).  This issue limits the quantitative analysis that can be done with such imagery.

    During my time on Oahu, I had access to visible (RGB) and near-infrared (NIR) imagery, so I wanted to explore the potential of both.  A multispectral sensor was flown over areas of the Ka'a'awa Valley, but due to problems in the field that imagery could not be acquired in time for analysis.  In addition, distortion issues with the Photoscan-stitched imagery prevented proper analysis of the generated mosaics.  The distortion stemmed from several factors: user error with the Photoscan software; limited processing power for generating geometry and texture at high quality settings; insufficient overlapping coverage of the area, since the PENTAX camera's shortest interval setting is 10 seconds; and weather, as rain limited the number of flights that could be flown and strong winds disrupted the planned flight paths of the DJI Phantom quadcopter.  Because of these issues I decided to focus my classification on a single raw image from a fixed-wing UAV (X8).  Working with a single image also proved beneficial by reducing the processing time of the multiresolution segmentation and texture tools in eCognition.

    The scope of this research is a comparison of the challenges that normal visible-light RGB cameras and NIR-modified cameras face in classifying individual species within mixed vegetation.  It focuses on the eCognition methods that most accurately classify vegetation from the RGB and NIR imagery.  I worked on classifying the Kuku'i tree but concentrated primarily on the native Hala plant, because Kuku'i can be seen in the WV-2 imagery while Hala cannot: Hala typically has a smaller canopy and less color distinction from other vegetation.  The Hala plant is visible in the UAV imagery, making it possible to accurately classify Hala against the surrounding vegetation.  Through a grant provided by the National Science Foundation, I had the opportunity to conduct my own field research from June 2nd to June 29th, 2013.

    Methodology

    A comparison of the RGB and NIR imagery from the X8 UAV was made by selecting an RGB image and an NIR image covering the same area of the valley.  The raw images were then georectified onto the WV-2 image before analysis in eCognition.
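    The georectification itself amounts to matching features visible in both the UAV frame and the georeferenced WV-2 scene.  For readers who want to script this step, a minimal sketch using the open-source rasterio library is shown below; the tie-point coordinates and file names are hypothetical placeholders, not values from this project.

        import rasterio
        from rasterio.control import GroundControlPoint
        from rasterio.transform import from_gcps

        # Hypothetical tie points: (row, col) pixel positions in the raw UAV
        # frame matched to (x, y) map coordinates read off the WV-2 scene.
        gcps = [
            GroundControlPoint(row=120, col=150,  x=-157.852, y=21.554),
            GroundControlPoint(row=900, col=210,  x=-157.853, y=21.548),
            GroundControlPoint(row=450, col=1800, x=-157.845, y=21.551),
            GroundControlPoint(row=60,  col=1700, x=-157.846, y=21.555),
        ]

        with rasterio.open("uav_frame.tif") as src:  # raw UAV image (hypothetical)
            data = src.read()
            profile = src.profile

        # Fit an affine transform to the tie points and save a georeferenced copy.
        profile.update(transform=from_gcps(gcps), crs="EPSG:4326")
        with rasterio.open("uav_frame_georef.tif", "w", **profile) as dst:
            dst.write(data)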

    Figure 1.  The sample site selected in the Ka'a'awa Valley, shown in the lower left corner.

    The multiresolution segmentation of the images used layer weights of 1 for the RGB and NIR layers, a scale parameter of 200, a shape setting of 0.1, and a compactness setting of 0.3.  The process tree applied feature extraction tools including mean RGB and brightness, relative border to, and texture (GLCM Homogeneity).
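    eCognition's multiresolution segmentation is proprietary, so it cannot be reproduced exactly outside the software, but a loosely analogous bottom-up segmentation is available in the open-source scikit-image library.  The sketch below uses Felzenszwalb's graph-based method, whose scale argument plays a role broadly similar to eCognition's scale parameter (larger values yield larger, fewer objects); the parameter values and file name are illustrative assumptions, not the project's eCognition settings.

        import numpy as np
        from skimage import io
        from skimage.segmentation import felzenszwalb

        rgb = io.imread("uav_frame_georef.tif")[:, :, :3]  # hypothetical file

        # Larger 'scale' merges pixels into larger, fewer image objects,
        # loosely analogous to raising eCognition's scale parameter.
        segments = felzenszwalb(rgb, scale=200, sigma=0.8, min_size=100)
        print(segments.max() + 1, "image objects")

        # Per-object brightness (mean of R, G, B), one of the features named above.
        flat = segments.ravel()
        counts = np.bincount(flat)
        brightness = sum(
            np.bincount(flat, weights=rgb[:, :, b].ravel().astype(float))
            for b in range(3)
        ) / (3 * counts)
        # brightness[i] is now the mean brightness of image object i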

    Figure 2.  (GLCM Homogeneity of pasture)  Screenshot of the GLCM Homogeneity texture feature used to classify pasture from mixed vegetation.  This feature scores objects by how homogeneous the texture is within each object.  In this figure, pasture receives a higher value because the smooth texture of grass differs from the coarser texture of the mixed vegetation.
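    GLCM Homogeneity can also be computed outside eCognition.  A minimal sketch with scikit-image is shown below: it quantizes a grayscale patch, builds a gray-level co-occurrence matrix, and averages the homogeneity score over four directions.  Smooth textures such as pasture score close to 1 while coarse mixed canopy scores lower; the quantization level and offsets here are assumptions, not the exact eCognition settings.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def patch_homogeneity(gray_patch, levels=32):
            """GLCM homogeneity of one image-object patch (0..1, higher = smoother)."""
            # Quantize to a few gray levels so the co-occurrence matrix stays small.
            q = np.floor(gray_patch.astype(float) / 256 * levels).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1],
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=levels, symmetric=True, normed=True)
            return graycoprops(glcm, "homogeneity").mean()

        # A flat pasture-like patch scores ~1; random noise scores much lower.
        smooth = np.full((64, 64), 120, dtype=np.uint8)
        rough = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
        print(patch_homogeneity(smooth), patch_homogeneity(rough))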

    Results

    Overall, the RGB image proved better at classifying vegetation than the NIR image.  During segmentation, when extracting the Hala and Kuku'i from the mixed vegetation, the RGB image allowed both plant species to be extracted separately from the rest of the mixed vegetation.  The NIR image was less successful because there was less difference in NIR reflectance between the mixed vegetation, Hala, and Kuku'i; however, NIR showed better results in differentiating mixed vegetation from soil.


    Figure 3.  The image on the left (RGB) shows an accurate segmentation of the Kuku'i, whereas the image on the right (NIR) shows the Kuku'i segmentation pulling in surrounding mixed vegetation.


    Figure 4.  The image on the left (RGB) shows the segmentation of the Hala pulling in areas of the surrounding soil, whereas the image on the right (NIR) shows a better segmentation of mixed vegetation from the soil.

    Figure 5.  The RGB classification was able to extract the plant species Hala and Kuku'i, whereas the NIR classification over-classified the Kuku'i and was unable to extract the Hala from the rest of the mixed vegetation.


    An accuracy assessment of the two classifications gave the RGB image a better overall accuracy of 93%, compared with an overall accuracy of 85% for the NIR image.
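    This page does not record how the error matrix was tallied.  For completeness, a minimal sketch of computing a confusion matrix and overall accuracy from reference and classified labels is shown below, along with the kappa statistic that remote sensing accuracy assessments often report; the sample labels are hypothetical.

        import numpy as np
        from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                                     confusion_matrix)

        classes = ["Hala", "Kuku'i", "mixed vegetation", "pasture", "soil"]
        # Hypothetical reference (ground truth) and classified labels, one per
        # assessment point; a real assessment would use many more points.
        reference = np.array(["Hala", "Hala", "Kuku'i", "mixed vegetation",
                              "soil", "pasture"])
        predicted = np.array(["Hala", "mixed vegetation", "Kuku'i",
                              "mixed vegetation", "soil", "pasture"])

        print(confusion_matrix(reference, predicted, labels=classes))
        print("overall accuracy:", accuracy_score(reference, predicted))
        print("kappa:", cohen_kappa_score(reference, predicted))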

    Conclusion

    These initial steps toward a species-level classification show the potential to classify individual plants such as Hala over a larger area.  However, strong processing power is essential for analyzing high spatial resolution imagery in Photoscan and eCognition (using the multiresolution segmentation and texture tools).  For future study, it would be interesting to apply the same methods to a full mosaic of the valley to test the Kuku'i and Hala classification.  Multispectral sensor payloads should be explored to improve the classification process.  A further study could examine change detection over the decades while also extending detection to other plant species.