Oblique Imagery and 3D Models

Introduction

In this lab I was subjected to the tedious work that is processing oblique imagery. The process involved manually removing pixels from images so that only certain objects remained once a 3D model was processed using Pix4D.

Oblique imagery is imagery collected by taking pictures at an angle (rather than straight down, as with nadir imagery). Nadir imagery is normally used for mapping large areas, while oblique imagery is good for seeing the sides of three-dimensional objects like structures or landmarks.

Annotation in Pix4D is removal of unnecessary parts of images that are not desired in the final product. There are three different types of annotation in Pix4D that are discussed in the tutorial.

Mask: The selected pixels are not used for processing. The description for this one is vague and, quite frankly, I'm not entirely sure what it is meant to do. Its listed uses include removing an object that appears in a few images and removing the background of an orthoplane. Neither of these uses applies to this project.

Carve: "All the 3D points located on the rays connecting the camera center and the annotated pixels are not used for processing". This is basically like shooting a laser through the pixel you are annotating: it removes the point you are aiming at along with anything behind it from the perspective of the camera.
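The carve behavior can be sketched in a few lines: treat the annotation as a ray from the camera center through the annotated pixel, and drop every point-cloud point close to that ray. This is a toy illustration, not Pix4D's actual implementation; the function name and the angular tolerance are my own.

```python
import numpy as np

def carve(points, camera_center, ray_dir, angle_tol_deg=0.1):
    """Drop every 3D point lying (approximately) on the ray from the
    camera center through an annotated pixel -- a toy version of what
    Pix4D's Carve annotation does."""
    ray = ray_dir / np.linalg.norm(ray_dir)
    vecs = points - camera_center                    # camera -> point vectors
    dists = np.linalg.norm(vecs, axis=1)
    cos_angle = (vecs @ ray) / dists                 # angle between vector and ray
    in_front = (vecs @ ray) > 0                      # only points ahead of the camera
    on_ray = cos_angle > np.cos(np.radians(angle_tol_deg))
    return points[~(in_front & on_ray)]              # keep everything off the ray
```

Note that everything behind the first hit gets removed too, which is exactly why carving from several different camera angles is needed to isolate an object cleanly.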

Global Mask: "The pixels annotated are propagated to all the images. All these pixels are not used for processing." This can apparently be useful for removing consistent obstacles in the images, like if the drone's landing gear can be seen in the images.
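Conceptually, a global mask is just one pixel mask applied to every image before processing. A hypothetical sketch (the blanked-out value of 0 here is just a stand-in for "ignored"):

```python
import numpy as np

def apply_global_mask(images, mask):
    """Propagate a single annotation to every image: wherever `mask` is
    True, the pixel is blanked out so it is ignored downstream -- a toy
    stand-in for Pix4D's Global Mask annotation."""
    return [np.where(mask, 0, img) for img in images]
```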

Methods

There were three image sets offered: an aerial view of a parked truck, a view of a light pole, and a view of a cell tower. I chose to process the images of the truck and the light pole.

I used this tutorial:
https://support.pix4d.com/hc/en-us/articles/202560549-How-to-Annotate-Images-in-the-rayCloud

In Pix4D I opened a new project with the chosen data set and set the project to be a 3D model.

Then, after initial processing (~30min), I selected a picture to process and used the annotate tool to begin the process of removing pixels. Figure 1 is an example of what the masking process looks like.
Figure 1: Masking in progress
The masking tool seems to function very similarly to the "magic wand" tool in Photoshop: each click selects a region of pixels around the clicked point, with the size of the region depending on the similarity of neighboring pixels. The tolerance for the selection, however, is determined by how zoomed in the picture is rather than a user-selected value, which was pretty annoying to use.
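That selection behavior is classic region growing (a flood fill with a tolerance). A minimal sketch on a grayscale image, with the tolerance exposed as a parameter the way Photoshop does it rather than tied to zoom level as in Pix4D:

```python
import numpy as np
from collections import deque

def magic_wand(img, seed, tol=10):
    """Region-growing selection: starting at `seed`, flood-fill across
    4-connected neighbors whose value is within `tol` of the seed pixel.
    Returns a boolean mask of the selected region."""
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(int(img[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```

A low tolerance explains the truck-versus-asphalt problem later on: when the object and background values are within `tol` of each other, the fill leaks across the boundary.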

I went through four pictures from different angles and removed the pixels that were not the truck using the mask option. After several attempts (and several hours) at using the mask tool to remove the background from the truck, I ended up switching to the carve tool, which worked better. In hindsight, I should have realized that the masking tool was not right for this particular use and just started with the carving tool.

To save time (not wanting to carve the entire image every time), I would select a pixel in the 3D point cloud, and Pix4D would show which image that pixel was found in and where. From there I could annotate only that portion of the image and skip re-annotating pixels that I had already done.
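Jumping from a clicked 3D point to its location in a source image amounts to reprojecting that point through the camera model. A minimal pinhole-camera sketch with made-up parameters (Pix4D's actual camera model also accounts for lens distortion):

```python
import numpy as np

def project(point, R, t, f, cx, cy):
    """Project a 3D world point into pixel coordinates with a minimal
    pinhole camera model (rotation R, translation t, focal length f,
    principal point (cx, cy)).  A toy version of the reprojection done
    when a rayCloud point is highlighted in a source image."""
    p_cam = R @ point + t            # world -> camera coordinates
    if p_cam[2] <= 0:                # behind the camera: not visible
        return None
    u = f * p_cam[0] / p_cam[2] + cx
    v = f * p_cam[1] / p_cam[2] + cy
    return u, v
```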

Then, after a round of Point Cloud and mesh processing, the final annotated product was generated and can probably be found in the video below.




Figure 2 also shows the finished product in case the video does not work.

Figure 2: Hupy's Annotated Truck
It seems that the program has some trouble deciding what to do with windows, since their reflections change depending on the angle the picture was taken from, and their apparent opacity changes with the lighting. The underside of the truck was also a bit of a problem because there was so little of it to work with in each picture. A few images from really low angles, capturing more of the underside of the truck, would have improved the quality of the underside in the model. That's not to say that the quality of the model was completely out of my hands, but I was working with limited time and decided to focus more effort on the general outline of the truck rather than the underside.

Next was the image set containing the light post. This time I immediately started with the carve tool, which saved me a lot of time (and maybe took less time to process). Figure 3 is an example of the carve tool in use; its selection is red instead of purple.

Figure 3: Carving up the light pole
Overall this image set did not seem too difficult to annotate, with the exception of a couple of pesky images at lower angles that captured a lot of the landscape. The light pole was a very different color from the background, so it was much easier to annotate than the truck, which had a tendency to blend in with the similarly colored asphalt.

In the end the light pole ended up looking like this:
Obviously not an ideal outcome. I'm not entirely sure what happened to the model, but it is clearly not in the best shape that it could be. The image set is smaller, which may have contributed to the quality, since there aren't as many matching pixels for the program to work with.

One problem with the image set is that it fails to capture the underside of the lights, but that doesn't explain the loss of several pixels on the sides of the lights.

Conclusion

The annotation tool in Pix4D has its flaws but is generally very useful. As of right now, it is probably the easiest annotation tool to use and performs its job well enough. The relative lack of control in the tool can make it frustrating to use and acts as an obstacle in an already very time-consuming task when annotation must be performed several times in an image set to achieve usable results.
