
Mapping Vertical Terrain

By Greg Hosilyk
California State University, Long Beach in cooperation with University of Hawaii and Kualoa Ranch
Funded by the National Science Foundation

Abstract

Current map systems are limited in how they display vertical terrain. This study used a quadcopter carrying a geotagging camera aimed at the vertical terrain to collect imagery for 3D reconstruction, in order to better represent vertical terrain in a map system. Using Agisoft PhotoScan, the imagery was assembled into a digital elevation model, an orthographic photo, and a Google Earth file. These products were then merged with existing terrain models to produce a unified product. The results were somewhat successful; however, the produced models lacked good georeferencing and, due to software limitations, failed to reduce the melted effect of imagery stretched over the terrain.

Introduction

Representations of vertical terrain in current map systems are often flawed in a variety of ways. Typically, the best current map systems offer is an aerial photo stretched over the terrain, which usually results in a skewed representation of the actual surface. Current technology, with its top-down orientation, cannot adequately capture vertical environments. Accurately representing vertical terrain can aid in analyzing data relevant to a variety of disciplines, such as resource management, archaeology, hydrology, and vegetation and habitat studies.

We aimed to solve this problem by taking systematic photographs of the vertical terrain with a geotagging camera attached to a quadcopter and aimed at the cliff face. We processed the imagery in Agisoft PhotoScan to produce digital elevation models, orthographic photos, and Google Earth files, then merged the results with current terrain maps to produce a unified terrain model.

Methods and Tools

We selected the southeast portion of the cliff walls surrounding the Ka’a’awa Valley as our subject area. The area was selected for its ease of access, its current underrepresentation in available terrain maps, and its steep slope. The DJI Phantom quadcopter (Figure 1) was our flight platform; with its GPS-enabled stability control and its ability to carry a variety of payloads, it provided a good place to start. Using a Pentax Optio WG-II camera (Figure 2) set to take a picture every 10 seconds, up to a maximum of 35 pictures, I captured imagery of the cliff walls in as systematic a way as possible. I planned multiple flights to cover the area; on each flight, two transects of the subject area were captured by flying straight up, over, and back down at a constant pace and at a constant distance from the cliff wall. Flights were limited to 5 minutes due to the quadcopter’s battery constraints. Photos were collected and stored by day, then filtered for useful images; photos of the ground, the pilot, or the co-pilot, and photos outside of the subject area, were removed (a sketch of this filtering step follows the figures below).

[Figure 1: DJI Phantom quadcopter]

[Figure 2: Pentax Optio WG-II camera]
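Since the camera geotags each frame, the “outside the subject area” filter can be partly automated. Below is a minimal sketch, assuming geotagged JPEGs and a hypothetical SUBJECT_BBOX around the cliff area; photos of the ground, pilot, or co-pilot would still need manual review, since those can share the subject area’s coordinates.

```python
# Minimal photo-filtering sketch. SUBJECT_BBOX and the folder paths are
# hypothetical; EXIF GPS parsing details vary across Pillow versions.
import shutil
from pathlib import Path
from PIL import Image

GPS_IFD = 0x8825                                  # EXIF pointer to the GPS IFD
SUBJECT_BBOX = (21.53, -157.86, 21.55, -157.84)   # hypothetical (S, W, N, E)

def dms_to_deg(dms, ref):
    """Convert EXIF degrees/minutes/seconds rationals to signed decimal degrees."""
    deg = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -deg if ref in ("S", "W") else deg

def photo_position(path):
    gps = Image.open(path).getexif().get_ifd(GPS_IFD)
    if not gps:
        return None                   # no geotag: flag for manual review
    lat = dms_to_deg(gps[2], gps[1])  # GPS tags 1/2: latitude ref/value
    lon = dms_to_deg(gps[4], gps[3])  # GPS tags 3/4: longitude ref/value
    return lat, lon

s, w, n, e = SUBJECT_BBOX
for photo in Path("photos/2013-09-16").glob("*.JPG"):
    pos = photo_position(photo)
    if pos and s <= pos[0] <= n and w <= pos[1] <= e:
        shutil.copy(photo, Path("filtered") / photo.name)
```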

The photo sets were then added to Agisoft PhotoScan. Using PhotoScan’s “Import EXIF Data” feature, the pictures were placed in three-dimensional space. The “Align Photos” workflow was then used to produce a sparse point cloud; I used the “High” setting for “Accuracy” and “Ground Control” for “Pair Preselection”. After this, many iterations of the next step were needed before finding settings that worked well for our case. Unfortunately, PhotoScan offers little guidance on the settings and options it presents, so much time and effort went into trying and understanding the different options for the next step, “Build Geometry”. The “Build Geometry” phase creates a 3D mesh out of the sparse point cloud. The data set included 193 photos, and I found that the following settings (Figure 3) worked well; a scripted version of these steps is sketched after Figure 3.

[Figure 3: “Build Geometry” settings]
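PhotoScan also exposes this workflow through a Python scripting API. The sketch below uses the naming of the current API (the product is now Agisoft Metashape; the 2013-era module was named PhotoScan and its keyword arguments differed), and the parameter values are illustrative rather than the exact settings in Figure 3.

```python
# Alignment and mesh-building sketch against the modern Agisoft Metashape
# Python API; paths and parameter values are illustrative assumptions.
import Metashape
from pathlib import Path

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos([str(p) for p in Path("filtered").glob("*.JPG")])

# "Align Photos": downscale=1 corresponds to the "High" accuracy setting;
# reference_preselection uses the cameras' geotags to pick likely photo
# pairs, analogous to the old "Ground Control" pair preselection.
chunk.matchPhotos(downscale=1, generic_preselection=True,
                  reference_preselection=True)
chunk.alignCameras()

# "Build Geometry": an Arbitrary surface type suits vertical terrain,
# since a height field assumes a single elevation per (x, y) location.
chunk.buildDepthMaps(downscale=4)
chunk.buildModel(source_data=Metashape.DepthMapsData,
                 surface_type=Metashape.Arbitrary,
                 face_count=Metashape.HighFaceCount)
doc.save("kaaawa_cliffs.psx")
```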

After the mesh was created I had to clean up the edges: using the “Freeform Select Tool”, I selected invalid or unwanted areas along the edges of the model and deleted them. The next step in the process is to create the texture for the model. In this step PhotoScan uses the photographs to properly texture the 3D model, instead of stretching the photos over it like most technology does. Because PhotoScan has detailed knowledge of both the photos and the 3D model, it knows how to skew the images properly so they are not simply stretched over the model. For this step I used the “Build Texture” workflow and found the following settings (Figure 4) to work well for the data set.

[Figure 4: “Build Texture” settings]
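Continuing the scripted sketch, texturing maps UV coordinates and then blends the photos onto the mesh. Mosaic blending, which picks the best-viewing photo per surface patch instead of averaging, is one plausible choice here, and the texture size shown is illustrative rather than the Figure 4 value.

```python
# Texturing sketch (continues the previous script); parameter
# values are illustrative assumptions, not the Figure 4 settings.
chunk.buildUV()
chunk.buildTexture(blending_mode=Metashape.MosaicBlending,
                   texture_size=4096)
doc.save()
```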

After the texture was applied I had some further editing to do; a few more edges needed to be cleaned up, and then we were ready for export. I first exported a digital elevation model using the following settings (Figure 5), using the “Estimate” button to fill in the boundary data.

[Figure 5: DEM export settings]

Then I used PhotoScan to create an orthophoto TIFF file using the following settings (Figure 6), again using the “Estimate” button to determine the boundary.

[Figure 6: Orthophoto export settings]
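In scripted form, both exports might look like the sketch below. Note this again assumes the current API, where the DEM and orthomosaic are built as separate products before export rather than exported directly from the model with an “Estimate” boundary button; the output file names are hypothetical.

```python
# DEM and orthophoto export sketch (continues the previous script);
# output file names are hypothetical.
chunk.buildDem()                                    # rasterize elevation
chunk.buildOrthomosaic(surface_data=Metashape.ModelData)
chunk.exportRaster("kaaawa_dem.tif",
                   source_data=Metashape.ElevationData)
chunk.exportRaster("kaaawa_ortho.tif",
                   source_data=Metashape.OrthomosaicData)
```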

After the products were created, I brought them into ESRI ArcScene, floating the orthophoto on the DEM.

Results

The results from PhotoScan (Figure 7) were an improvement over current mapping systems, showing detail in the model with limited skewing of the imagery. However, the exported products did not display well in other software.

[Figure 7: Textured model in PhotoScan]

In ESRI ArcScene (Figure 8), I saw that the orthophoto was not properly georeferenced, which I believe was due to the lack of ground control points. The orthophoto and DEM were close, but they were off in elevation and slightly off in orientation. I also noticed that the imagery was significantly skewed, just like in current mapping technology. One way to quantify such offsets is sketched after Figure 8.

[Figure 8: Orthophoto and DEM in ArcScene]
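The exported rasters’ georeferencing can be read directly, for example with the GDAL Python bindings, to measure how far apart the two products actually sit; the file names below carry over from the hypothetical export sketch above.

```python
# Diagnostic sketch: print each exported raster's spatial reference and
# corner coordinates to compare the DEM and orthophoto georeferencing.
from osgeo import gdal

for path in ("kaaawa_dem.tif", "kaaawa_ortho.tif"):
    ds = gdal.Open(path)
    ulx, xres, _, uly, _, yres = ds.GetGeoTransform()
    lrx = ulx + ds.RasterXSize * xres   # lower-right x from pixel size
    lry = uly + ds.RasterYSize * yres   # yres is negative for north-up
    print(path, ds.GetProjection()[:60])
    print("  upper-left:", (ulx, uly), " lower-right:", (lrx, lry))
```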

I also generated the KML form of the orthophoto; when opened in Google Earth (Figure 9), it displayed in a similar manner to the ArcScene results.

[Figure 9: Orthophoto in Google Earth]

Conclusions

There seems to be a disconnect between PhotoScan and other products. I am not sure where the problem lies; while the results look desirable in PhotoScan, the exported products do not work well in their destination software. Also, the results were not well georeferenced, but I believe this is due to the complete lack of ground control points. Unfortunately, it is logistically challenging to collect ground control points for vertical terrain due to its inaccessibility. Without ground control points, PhotoScan has to estimate the location and orientation of the model, resulting in significant error. Perhaps in future studies of this nature, ground control points can be collected by expanding the study area to include accessible locations. It is also our hope that other software will advance to take advantage of the detail provided by PhotoScan.