Peter Nasuti – Comparing Applications of UAV and Satellite Imagery Utilizing Object-Based Classification of Aleurites moluccana


    Peter Nasuti
    California State University, Long Beach
    Appalachian State University
    National Science Foundation

    I. Abstract

    Aleurites moluccana, commonly known as candlenut or kuku’i, is found extensively throughout the Ka’a’awa Valley on the eastern shore of Oahu. The goal of this project was to develop a methodology for object-based, species-level classification from both global satellite observation platforms and site-specific UAV imaging technologies, and to evaluate the effectiveness of each for kuku’i identification. Ground-truthing sites were collected throughout the valley to confirm the locations of kuku’i stands, which corresponded with the vegetation’s pale green signature on aerial images. Both WorldView 2 and UAV imagery of the valley were then classified using Definiens eCognition to build polygons containing the kuku’i species. A mosaic of UAV imagery spanning the center of the valley served as a test site for evaluating both classification methods. The recent UAV orthophotos were digitized to create a control dataset of the plant’s locations in the study area. The coefficient of areal correlation was then calculated to assess the effectiveness of each classification. In this test site, object-based classification of UAV imagery, with higher spatial and lower spectral resolution, showed a much stronger relationship with the actual locations than classification of WorldView 2 satellite imagery, with lower spatial and higher spectral resolution.

    II. Introduction

    The overall objective of this research is to compare the capabilities of satellite imagery and small-scale UAV imagery by building methods for object-based identification of this plant species and evaluating the effectiveness of each by calculating areal correlation against the true locations. Object-based image classification is “in many cases superior to traditional per-pixel methods” (Blaschke et al. 2001), which often generate excessive noise that makes interpretation nearly impossible. This object-oriented method groups similar pixels into polygons, which the user then assigns to classes through either statistical classification or feature extraction. In other locations with similar vegetation features on the landscape, the methodology from this study could be applied to quickly extract kuku’i from surrounding objects. While specific feature-extraction parameters will change according to many variables throughout the process, the general workflow could aid other studies interested in finding or excluding kuku’i in a remotely sensed image. Aleurites moluccana is the state tree of Hawaii and often plays a dominant role both in the lowland wet forest and in channels descending from high-elevation regions. With the advent of low-cost UAV imaging technologies, extensive opportunities for applied solutions have opened to the public across a variety of disciplines. By examining the effectiveness of these technologies for applications such as kuku’i identification, the strengths and weaknesses of UAV image capture can be identified and applied to other problems.

    III. Methods

    Ground-truthing points were collected throughout the valley using a Trimble GeoXH 6000, with each point representing either a stand of kuku’i or a neighboring species used to highlight differentiation between the two. These points were post-processed and plotted in ArcMap over both the WorldView 2 imagery and the UAV mosaic of the valley; their locations confirmed the distinct, bright green appearance of kuku’i in aerial and satellite imagery. The test site was selected from a UAV flight on June 17, 2013; the platform was a Skywalker X-8 FPV flown at roughly 100 meters, carrying a Ricoh visible-light (RGB) camera with a UV filter. The images collected on this mission were processed into an orthophoto in Agisoft Photoscan, and regions of visual distortion were then subsetted out of the orthophoto in Erdas Imagine.

    Before quantification could be completed in ArcMap, the classifications had to be built in object-oriented software. Definiens eCognition was used for both the WorldView 2 satellite image and the Ricoh true-color orthophoto from the UAV flight. Various scale parameters, shape and compactness values, and layer weights were tested across multiple classifications to build an optimal segmentation of the image objects of interest. Feature extraction, which proved much more successful than statistical classification, was used to classify the kuku’i polygons. The mean and standard deviation of the eight layer values were the primary tools for feature extraction; relational border, rectangular fit, brightness, and pixel length were also used to tease out difficult polygons. The specific extraction settings depend on image subset size, segmentation parameters, time of image capture, image mosaic settings, and more; therefore, identical settings could not be applied to other regions. However, the workflow and the band differences used for differentiation could be applied to similar projects with minor modifications.
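    The eCognition rule sets themselves cannot be reproduced outside that software, but the general segment-then-extract workflow can be illustrated with open-source tools. The following is a minimal sketch in Python, assuming a scikit-image SLIC segmentation as a rough stand-in for eCognition’s segmentation and using illustrative band-statistic thresholds rather than the values tuned in this study; the file name is a hypothetical placeholder.

    import numpy as np
    from skimage import io
    from skimage.segmentation import slic

    # Hypothetical RGB subset of the UAV orthophoto.
    image = io.imread("uav_orthophoto_subset.tif")

    # Segment the image into objects; n_segments and compactness play a role
    # loosely analogous to the scale and shape/compactness parameters tuned
    # in eCognition.
    segments = slic(image, n_segments=2000, compactness=10, start_label=1)

    # Classify each object by simple band statistics (mean and standard
    # deviation), mirroring the feature-extraction approach described above.
    # The rule and thresholds below are illustrative placeholders; the study
    # itself relied on values such as mean red and blue-band standard deviation.
    kukui_mask = np.zeros(segments.shape, dtype=bool)
    for seg_id in np.unique(segments):
        pixels = image[segments == seg_id]   # (n_pixels, 3) array of RGB values
        mean_rgb = pixels.mean(axis=0)
        std_rgb = pixels.std(axis=0)
        # Pale-green objects: green dominant over red and blue, with low
        # blue-band variability.
        if mean_rgb[1] > mean_rgb[0] > mean_rgb[2] and std_rgb[2] < 20:
            kukui_mask[segments == seg_id] = True

    The resulting mask could then be vectorized and exported as a shapefile to feed the areal-correlation step described below.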

    After building and optimizing these classifications, shapefiles were exported to ArcMap, and kuku’i polygons were extracted from the full classification. These polygons were then clipped to the UAV study area of interest, and their area was calculated with the Calculate Geometry tool. Polygons were hand-digitized on the UAV image to provide accurate locations of the vegetation of interest. An intersect was then performed between the digitized polygons and the eCognition kuku’i output, followed by a union of the digitized polygons and the eCognition output, and area was again calculated for these two new shapefiles. The coefficient of areal correlation was quantified by dividing the total area of the intersect by the total area of the union. This provides a measure of overlap between the eCognition output and the true locations of kuku’i, which assesses the relative effectiveness of the classification strategy; it can also be interpreted as reflecting the value of the different imagery types for identifying this species. The overlaps can be seen in Figure 1, where purple represents the hand-digitized UAV regions, red is the intersect between the digitized regions and the eCognition output, and yellow is the union of the digitized regions and the eCognition output. The overall workflow is summarized in Figure 2.

    Figure 1. ArcMap Quantifications of Accuracy

    Figure 2. Quantification Workflow
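    As a cross-check on the ArcMap quantification, the coefficient of areal correlation can also be computed outside ArcMap. The sketch below uses the open-source shapely and fiona libraries; the file names are hypothetical placeholders for the hand-digitized control polygons and the eCognition kuku’i output, both already clipped to the UAV study area.

    import fiona
    from shapely.geometry import shape
    from shapely.ops import unary_union

    def load_dissolved(path):
        # Read a polygon shapefile and dissolve all features into one geometry.
        with fiona.open(path) as src:
            return unary_union([shape(feature["geometry"]) for feature in src])

    digitized = load_dissolved("kukui_digitized_uav.shp")       # hypothetical path
    classified = load_dissolved("kukui_ecognition_output.shp")  # hypothetical path

    intersect = digitized.intersection(classified)
    union = digitized.union(classified)

    # Coefficient of areal correlation: shared area divided by combined area.
    areal_correlation = intersect.area / union.area
    print(f"coefficient of areal correlation = {areal_correlation:.2f}")

    Because the coefficient is the intersect area divided by the union area, it ranges from 0 (no overlap) to 1 (perfect agreement), the scale on which the 0.84 and 0.35 values reported below are expressed.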

    IV. Results / Discussion

    It was found that the classification of the UAV image had a much higher areal correlation with the digitized kuku’i polygons than the WorldView 2 classification: 0.84 for the UAV image classification versus 0.35 for the WorldView 2 classification. This was visible even before quantification, when observing the classifications in Definiens eCognition. In the WorldView 2 image, certain sections of pasture overlapped with kuku’i in both mean and standard deviation of value on all eight bands. Despite extensive experimentation with feature-extraction methods, attempts to isolate one region would inadvertently misclassify another. After the essential layer-value extractions were made, relational border values were used to extract difficult-to-identify kuku’i stands. Table 1 provides the total area of each polygon layer.

    Table 1. Areal Correlation Results

    Certain biases in this assessment must be acknowledged before discussing the effectiveness of the two options. First, the intersect and, subsequently, the union layers are based on the kuku’i polygons digitized from the X-8 UAV image. That image was captured on June 17, 2013, whereas the WorldView 2 image was captured in 2011, so the UAV classification will inherently have somewhat more intersect area. However, the digitized UAV polygons were cross-referenced with the WorldView 2 image, and the only differences found were minor and could plausibly be explained by two years of vegetation change. Overall, the major kuku’i stands occupy the same places in both years, although some change in individual locations or canopy extent over two years is certain, even if minor.

    Despite these differences, UAV image classification was clearly a much more effective method of determining kuku’i locations in the landscape. The major limitation of this technology is that planning and flying an aerial imaging mission requires more training and time than simply purchasing satellite imagery of the area of interest. In addition, the images captured in flight must be post-processed into mosaics in a program such as Agisoft Photoscan, a process with demanding hardware requirements. There are also limitations such as relief displacement at image edges and the potential for gaps or skew in sections of the imagery caused by variables such as wind or flight-plan errors.

    V. Future Directions

    Unfortunately, there was not sufficient UAV imagery of the high-elevation channels on the north mountain in the Ka’a’awa Valley. Imagery of this area was captured, but only with a NIR camera on the Gatewing X100 platform, and in Definiens eCognition the kuku’i stands could not be extracted from the surrounding landscape without severe overlap into other vegetation classes. A future project could expand on this work by varying these factors to broaden the assessment of satellite and UAV applications to this type of vegetation identification. Another project might assess a different satellite platform with different spatial and spectral resolutions. A focus could be placed on high-elevation test sites, where slopes are steep enough that ground access is not feasible and remote sensing is critically needed. It would also be feasible to use this kuku’i extraction methodology with UAV imagery as a foundation for studying connections to other processes in the landscape.

    VI. Conclusions

    Overall, UAV imagery was much more successful for object-based identification of the kuku’i plant species than the WorldView 2 satellite platform. This runs counter to conventional remote sensing logic, which holds that higher spatial resolution is not always better when attempting to isolate objects or build a classification: excess spatial resolution usually picks up smaller variations in the appearance of surface objects and makes classification more difficult. It is also generally assumed that more bands are better than fewer when building feature extractions in eCognition. Here, however, kuku’i in the three-band RGB UAV orthophoto was quickly extracted using the mean red-band value and the standard deviation of the blue band, while the satellite image required a prohibitive number of commands to classify and still produced an unsuitable classification. Despite the shortcomings of the satellite image classification, it is clear that UAV imagery combined with object-based classification represents a new dimension in remote sensing that can facilitate further scientific research.

    VII. References
    Blaschke, Thomas, and Josef Strobl. “What’s wrong with pixels? Some recent developments interfacing remote sensing and GIS.” GIS (2001): 12-17.