
Photogrammetry 2: Using masks to improve reconstructions of artifacts in archaeological photogrammetry

In my last post, I hinted at how masking can improve processing in artifact photogrammetry.

In today’s blog post, I plan to give a couple examples of this, including important items to note when preparing to take your photographs.

The truth about photogrammetry

I have a pet peeve when it comes to how we talk about archaeological photogrammetry, and it’s something I am guilty of myself.

First off, no, creating a photogrammetric reconstruction is not necessarily a hard endeavour. However, the software involved in Structure from Motion (SfM) photogrammetry is built on sophisticated machine learning algorithms that I don’t fully understand.

There is a whole literature on SfM photogrammetry; here I only intend to cover a bit of the practical how-to rather than the theory of how it works.

My pet peeve is…

Photogrammetry is often simplified when practitioners explain our work to others. We aim to help fellow archaeologists and heritage professionals develop an interest in valuable applications of the technique. I worry, though, that emphasizing the simplicity of processing undersells both the technical complexity of using the software and the pragmatic planning a photogrammetry project requires. Often the sentiment is:


"Take a bunch of photos and the software stitches them together. Easy as pie!"


Again, it isn’t rocket science, but the above statement is a bit of a fib. At least it is when we are talking about artifacts. An excavation surface processed with no associated geospatial information is usually a different story, as it is technically only a 2.5D surface rather than a true 3D object.

Digression aside, reconstructing an object in full 3D requires some planning when using photogrammetry. You’ll likely have a fair bit of post-processing to undertake as well. Agisoft Metashape has some interesting tools to reduce the time spent in post-processing and produce a better model.

"Metashape's Magic Wand!"

The Magic Wand tool “is used to select uniform areas of the image.” For this reason, I always suggest shooting photos with a uniformly coloured background that highly contrasts with the colour of the object being modelled. This allows for effective use of the Magic Wand and dramatically decreases the time spent creating masks.

The greyed-out background indicates the masked area.
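To see why contrast matters so much, here is a minimal sketch outside of Metashape itself, using Python with OpenCV. This is my own illustration, not part of the Metashape workflow, and the file names are hypothetical. With a strongly contrasting backdrop the image histogram is essentially bimodal, so even a simple automatic threshold separates object from background:

```python
import cv2

# Hypothetical input: an artifact photographed against a uniform,
# highly contrasting backdrop.
img = cv2.imread("IMG_0001.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu's method picks the threshold automatically; it works well
# precisely when object and background form two distinct tonal groups.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Convention assumed here: white pixels = object (keep),
# black pixels = background (ignore). If your backdrop is lighter
# than the object, invert with cv2.bitwise_not(mask).
cv2.imwrite("IMG_0001_mask.png", mask)
```

With a low-contrast backdrop, no automatic threshold will split the histogram cleanly, and you are back to hand-tuned tolerances, exactly the fiddling the Magic Wand’s tolerance setting exists to manage.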

If you have only one takeaway from this post, let it be this:


Choose a backdrop colour that highly contrasts the colour(s) of the object being modelled.


So how do we use masks in processing? And how do they actually make photogrammetric processing more effective?

Do you need masking?

So you’ve captured your photos and are ready to begin processing. In my own experience, the first thing to do after loading the photos into the software is to run the Align Photos step to see how well the photos align without any manual intervention.

In some cases, I luck out and the object stitches together into a perfectly clean sparse point cloud, where the only manual intervention needed is to remove points not associated with the object. No masking is required here. You can likely just clean the model by removing noisy/inaccurate points and continue on to the later processing steps.


Clean sparse point cloud.
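If you prefer scripting to the GUI, the same first-pass check can be run through Metashape’s Python API (available in the Professional edition). Below is a minimal sketch, assuming a Metashape 1.6-era API and a hypothetical folder of photos; double-check parameter names against the Python reference for your version:

```python
import Metashape
from pathlib import Path

# Hypothetical capture folder; substitute your own photo set.
photos = [str(p) for p in Path("artifact_photos").glob("*.jpg")]

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(photos)

# First pass with no masks: just see how well the photos align.
chunk.matchPhotos(downscale=1, generic_preselection=True)
chunk.alignCameras()

doc.save("artifact.psx")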

More frequently, the resulting object stitches together fairly well on one side, but so does the background. In this case, you will likely need to tell the software to ignore the background in order to capture the whole object.


Poorly aligned sparse point cloud. The software only aligned points on one side of the object.

You do need masking.

To create the masks, double-click the photo you want to mask. Then select the Magic Wand tool from the drop-down under the Selection button (or just hit the ‘W’ hotkey) and click the background of your image. With any luck, thanks to your highly contrasting background, Magic Wand will select all of the pixels in the photo that are not your object.

Drop-down menu under the *Selection* button.

If you run into issues because Magic Wand is selecting pixels within your object, you may have to decrease the selection tolerance by choosing Magic Wand Options, just below Magic Wand in the Selection button’s drop-down. After decreasing the tolerance, hold CTRL while clicking to select multiple areas with Magic Wand.

After making a selection using either of the above workflows, add your current selection to the mask by clicking the Add Selection button.

Add selection button (above).


Suitable mask applied to image in Agisoft Metashape.
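Clicking through photos one by one works for small sets, but masks can also be loaded in bulk through the Python API. The sketch below assumes you have one mask file per photo, named like the earlier OpenCV example (IMG_0001_mask.png beside IMG_0001.jpg); the path template and enum names follow my reading of the Metashape 1.6 Python reference, so verify them against your version:

```python
import Metashape

doc = Metashape.app.document  # the currently open project
chunk = doc.chunk             # the active chunk

# Load one pre-made binary mask per photo. '{filename}' is expanded
# by Metashape to each camera's image name.
chunk.importMasks(
    path="{filename}_mask.png",
    source=Metashape.MaskSourceFile,
    operation=Metashape.MaskOperationReplacement,
)
```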

Deploying your mask(s)

From here you have two choices when running Align Photos.

*Align Photos* pop-up window.

First, you can apply masks to Key Points. This necessitates a mask on every photo, because a key point is derived from a pixel on a single image that the software identifies as having a high likelihood of being matched in another photo. This was the method required in Agisoft PhotoScan, and it relies on the user creating a mask for every image, a time-consuming endeavour.

However, Agisoft Metashape brings a new, second approach to deploying masks during processing: applying masks to Tie Points. This means that with only a few representative masks (e.g., masking all background pixels on a handful of images) you can ignore all of the background information in your photos. This method works extremely well when using a rotating object and a static background, i.e., a turntable. In any case, using masks adds a little processing time but greatly reduces the chance of the software aligning only one side of the object.
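In the Python API, these two choices surface as parameters to matchPhotos. My understanding, worth verifying against the API reference for your version, is that filter_mask corresponds to masking key points and mask_tiepoints to masking tie points. A sketch:

```python
import Metashape

chunk = Metashape.app.document.chunk

# Option 1: masks applied to Key Points -- every photo needs a mask.
chunk.matchPhotos(downscale=1, filter_mask=True, mask_tiepoints=False)

# Option 2: masks applied to Tie Points -- a few representative
# masks suffice (e.g., a static turntable background).
# chunk.matchPhotos(downscale=1, filter_mask=False, mask_tiepoints=True)

chunk.alignCameras()
```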

There you have it! You’ve made a mask that directs Metashape to ignore pixel information in your photo that isn’t the object itself. In doing so, you’ll spend less time cleaning up in post-processing and be able to model objects in full 3D!

After this you should be able to build Depth Maps, a Dense Cloud, a Mesh, and a Textured Mesh. This general workflow will produce a photorealistic reconstruction of an object, assuming the photos are captured correctly.
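For completeness, those remaining steps look roughly like this in the Python API. The parameter values here are illustrative defaults from the 1.6-era reference, not recommendations tuned to any particular artifact:

```python
import Metashape

doc = Metashape.app.document
chunk = doc.chunk

# Depth maps feed both the dense cloud and the mesh.
chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)
chunk.buildDenseCloud()

# Mesh from depth maps, then UVs and a photorealistic texture.
chunk.buildModel(surface_type=Metashape.Arbitrary,
                 source_data=Metashape.DepthMapsData)
chunk.buildUV()
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=4096)

# Assumes the project has already been saved once, so a path exists.
doc.save()
```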

Examples of models made using masks

The model of the clay tobacco pipe shown last week used masks during processing, but below are two more examples where masking allowed the models to be fully represented in 3D. The first has been shown in previous examples: four sherds of feather-edge pearlware mended into a single sherd. The second is a recently produced model of a point I flintknapped last summer.



In my next post, I will present a workflow that increases the quality of your artifact models while decreasing their resolution. In turn, we can easily share and display higher-quality, lower-resolution 3D models that retain their geometry. If this is something that interests you or any of your colleagues, be sure to check back, or direct them here on February 4th when the post is released.

Wesley Weatherbee

Written on January 21, 2020

DigitalArchNS by Wesley Weatherbee is licensed under a Creative Commons Attribution 4.0 International License.