<p>
Photogrammetry is the technique of reconstructing digital 3D models from photos.<br />
To reconstruct a 3D model of an object, photos are captured from different angles.
</p>
<h3>Contents</h3>
<ul>
<li><a href="#input">Input</a>
<ul>
<li><a href="#input_photos">Photos</a></li>
<li><a href="#input_video">Video</a></li>
<li><a href="#input_preparation">Preparation</a></li>
</ul>
</li>
<li><a href="#colmap">COLMAP</a></li>
<li><a href="#openmvs">OpenMVS</a></li>
<li><a href="#meshlab">MeshLab</a>
<ul>
<li><a href="#meshlab_filters">Filters</a></li>
<li><a href="#meshlab_pymeshlab">PyMeshLab</a></li>
</ul>
</li>
<li><a href="#blender">Blender</a></li>
<li><a href="#other">Other Photogrammetry Software</a>
<ul>
<li><a href="#other_meshroom">Meshroom</a></li>
</ul>
</li>
<li><a href="#links">External Links</a></li>
</ul>
<h3 id="input">Input</h3>
<h4 id="input_photos">Photos</h4>
<p>
Tips for capturing photos:
</p>
<ul>
<li>As a general rule in photography:<br />
The better the illumination, the better the captured photo.</li>
<li>Avoid reflective surfaces.</li>
<li>For photogrammetry, the scenery and the objects have to stand still.<br />
Even slight movements of the objects cause distortions.<br />
Move the camera to obtain photos from different angles.</li>
<li>Move the camera only slightly between two photos.<br />
This ensures that the same keypoints can be found in the photos.</li>
<li>It is recommended to capture roughly 100 photos of an object.</li>
</ul>
<h4 id="input_video">Video</h4>
<p>
A set of images can be extracted from a video too. Note that a set of photos<br />
has (minor) advantages over a set of images extracted from a video:
</p>
<ul>
<li>Photos contain meta information like focal length and sensor width<br />
(see the sketch after this list).</li>
<li>A moving camera may introduce motion blur into the extracted images.</li>
</ul>
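<p>
As a minimal sketch of what this metadata looks like, the EXIF tags of a photo<br />
can be inspected in Python (assuming the Pillow library; the file name is hypothetical):
</p>
<pre><code class="language-python">from PIL import Image

img = Image.open('img_0001.jpg')    # hypothetical file name
exif = img.getexif()
print('Model:', exif.get(272))      # 272 = EXIF tag 'Model'
sub_ifd = exif.get_ifd(0x8769)      # Exif sub-IFD with the camera settings
print('FocalLength:', sub_ifd.get(37386))  # 37386 = EXIF tag 'FocalLength'
</code></pre>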
<p>
<code>ffmpeg</code> can extract images from a video at a specified frame rate (FPS):
</p>
<pre><code class="language-bash">ffmpeg -i "$VIDEO" -r "$FPS" -qmin 1 -qscale:v 1 img_%04d.jpg
</code></pre>
<h4 id="input_preparation">Preparation</h4>
<p>
Photos stored in JPG files usually contain meta information in EXIF tags.<br />
<code>exiftran</code> losslessly rotates photos according to the EXIF tag <code>Orientation</code>:
</p>
<pre><code class="language-bash">exiftran -ai *.jpg
</code></pre>
<p>
<code>rembg</code> removes the background in images.<br />
To replace the background with black:
</p>
<pre><code class="language-bash">rembg p -bgc 0 0 0 255 "$INPUT_DIR" "$OUTPUT_DIR"
</code></pre>
<h3 id="colmap">COLMAP</h3>
<p>
COLMAP reconstructs a point cloud from images of the same object<br />
captured from different angles.
</p>
<p>
Put all photos into a folder called <code>images</code> inside a named project folder:
</p>
<pre><code class="language-bash">PROJECT="my_photogrammetry_experiment"
mkdir -p "$PROJECT/images"
cp *.jpg "$PROJECT/images"
</code></pre>
<p>
Providing image masks is optional. If the input images contain background,<br />
it is highly recommended to create an image mask for each image.<br />
Image masks tell COLMAP where to look for keypoints during feature extraction.<br />
For an image <code>images/0123.jpg</code>, the corresponding mask has to be named <code>masks/0123.jpg.png</code>.<br />
The size of the mask (width and height) has to match the size of the image.<br />
No features will be extracted in regions where the mask is black (pixel value 0).<br />
Set the regions of interest to white (fully opaque, without an alpha channel).
</p>
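<p>
A minimal sketch for creating such masks, assuming <code>rembg</code> was run without<br />
the <code>-bgc</code> option so its output PNGs keep their alpha channel<br />
(the folder names and the Pillow dependency are assumptions):
</p>
<pre><code class="language-python">from pathlib import Path
from PIL import Image

project = Path('my_photogrammetry_experiment')
rembg_dir = Path('rembg_output')    # hypothetical rembg output folder
mask_dir = project / 'masks'
mask_dir.mkdir(parents=True, exist_ok=True)

for png in sorted(rembg_dir.glob('*.png')):
    # Take the alpha channel of the background-removed image ...
    alpha = Image.open(png).getchannel('A')
    # ... and binarize it: regions of interest white, background black.
    mask = alpha.point(lambda v: 255 if v > 0 else 0)
    # For images/0123.jpg the mask must be named masks/0123.jpg.png.
    mask.save(mask_dir / f'{png.stem}.jpg.png')
</code></pre>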
<p>
Execute the reconstruction:
</p>
<pre><code class="language-bash">colmap automatic_reconstructor \
--workspace_path "$PROJECT" \
--image_path "$PROJECT/images" \
--mask_path "$PROJECT/masks"
</code></pre>
<p>
For a dense point cloud reconstruction, COLMAP requires CUDA.<br />
Without CUDA, only a sparse point cloud is generated.
</p>
<p>
The reconstructed point cloud is written to:
</p>
<pre><code class="language-bash">$PROJECT/dense/0/fused.ply
</code></pre>
<p>
The reconstructed mesh is written to:
</p>
<pre><code class="language-bash">$PROJECT/dense/0/meshed-poisson.ply
</code></pre>
<p>
COLMAP reconstructs a mesh with vertex colors.<br />
It does not create textures for the mesh.
</p>
<h3 id="openmvs">OpenMVS</h3>
<p>
OpenMVS reads a sparse point cloud and the corresponding images to generate<br />
a textured mesh. It is controlled via a command line interface (CLI).
</p>
<p>
The OpenMVS command line tools must reside in one of the directories listed<br />
in the <code>PATH</code> variable. Extend the environment variable <code>PATH</code> and<br />
change to the project directory:
</p>
<pre><code class="language-bash">export PATH="/usr/bin/OpenMVS:$PATH"
cd "$PROJECT"
</code></pre>
<p>
Read the sparse point cloud and the corresponding images from COLMAP:
</p>
<pre><code class="language-bash">colmap image_undistorter \
--image_path images \
--input_path sparse/0 \
--output_path dense \
--output_type COLMAP
InterfaceCOLMAP -i dense -o scene.mvs --image-folder dense/images
</code></pre>
<p>
Run the OpenMVS pipeline:
</p>
<pre><code class="language-bash">DensifyPointCloud scene.mvs
ReconstructMesh scene_dense.mvs -p scene_dense.ply
RefineMesh scene_dense.mvs -m scene_dense_mesh.ply
TextureMesh scene_dense.mvs -m scene_dense_refine.ply --empty-color 16777215 # = 0xFFFFFF
</code></pre>
<p>
The reconstructed textured mesh is written to:
</p>
<pre><code class="language-bash">$PROJECT/scene_dense_texture.ply
$PROJECT/scene_dense_texture0.png
</code></pre>
<p>
A shell script that executes the whole OpenMVS pipeline can be found<br />
in the <a href="#files">files</a> section.
</p>
<h3 id="meshlab">MeshLab</h3>
<p>
MeshLab is a graphical user interface to visualize point clouds and meshes and<br />
to apply various operations to them. These operations are organized as filters.
</p>
<p>
MeshLab can import point clouds and meshes stored in<br />
the Stanford Polygon File Format (*.ply):
</p>
<pre><code class="language-plaintext">File > Import Mesh...
</code></pre>
<p>
Because there is no undo function, it is recommended to duplicate the mesh<br />
before applying a filter. On the right side of the window, right-click on<br />
the mesh name and choose <code>Duplicate Current Layer</code> to copy the mesh.<br />
Filters are applied to the selected layer / mesh.
</p>
<h4 id="meshlab_filters">Filters</h4>
<p>
There are various filters available in MeshLab. Only a few are listed here.
</p>
<p>
A mesh can be computed from the point cloud with the following filter:
</p>
<pre><code class="language-plaintext">Remeshing, Simplification and Reconstruction > Surface Reconstruction: Screened Poisson
Reconstruction Depth: 12
</code></pre>
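<p>
The same reconstruction can be scripted with PyMeshLab (see below).<br />
A hedged sketch, assuming the filter name used by recent PyMeshLab releases<br />
and a point cloud that carries normals (as COLMAP's <code>fused.ply</code> does):
</p>
<pre><code class="language-python">import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh('fused.ply')   # path to COLMAP's fused point cloud (adjust as needed)
# Remeshing, Simplification and Reconstruction > Surface Reconstruction: Screened Poisson
ms.generate_surface_reconstruction_screened_poisson(depth=12)
ms.save_current_mesh('meshed.ply')
</code></pre>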
<p>
There is a filter to remove small unconnected objects:
</p>
<pre><code class="language-plaintext">Cleaning and Repairing > Remove Isolated pieces (wrt Face num.)
Enter minimum conn. comp size: 1000
</code></pre>
<p>
In COLMAP, the y-axis is pointing downwards. To rotate the mesh into<br />
a coordinate system where the z-axis is pointing upwards:
</p>
<pre><code class="language-plaintext">Normals, Curvatures and Orientationn > Transform: Rotate
Rotation Angle: -90
</code></pre>
<p>
To move the mesh:
</p>
<pre><code class="language-plaintext">Normals, Curvatures and Orientation > Transform: Translate, Center, set Origin
Z Axis: 1
</code></pre>
<p>
To cut the mesh with a plane:
</p>
<pre><code class="language-plaintext">Quality Measure and Computations > Compute Planar Section
Plane perpendicular to: Z Axis
Create also section surface: On
Create also split surfaces: On
</code></pre>
<p>
The newly created section surface and the split surface need to be merged<br />
into one mesh. Set only these two surfaces/meshes to be visible.<br />
Visibility is set by clicking on the eye next to the mesh name on the right side.<br />
After visibility has been set correctly, right-click on the mesh name and<br />
select <code>Flatten Visible Layers</code> to merge the visible meshes into one mesh.
</p>
<p>
Remaining holes in the surface should be closed:
</p>
<pre><code class="language-plaintext">Remeshing, Simplification and Reconstruction > Close Holes
Max size to be closed
</code></pre>
<p>
Most applications need a 'watertight' mesh, i.e. a mesh that forms<br />
a closed surface. To check the quality of the mesh:
</p>
<pre><code class="language-plaintext">Quality Measure and Computations > Compute Geometric Measures
</code></pre>
<p>
The output is shown in the log view in the lower right corner of the window.
</p>
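<p>
A hedged PyMeshLab sketch of the same check (the key names are assumed from<br />
the PyMeshLab documentation; a volume is only reported for a watertight mesh):
</p>
<pre><code class="language-python">import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh('meshed.ply')   # hypothetical file name
# Quality Measure and Computations > Compute Geometric Measures
measures = ms.get_geometric_measures()
print(measures.get('surface_area'))
print(measures.get('mesh_volume'))   # absent if the mesh is not watertight
</code></pre>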
<p>
There is also a filter for mesh simplification.<br />
Before simplification, the mesh should be smoothed.
</p>
<pre><code class="language-plaintext">Remeshing, Simplification and Reconstruction > Simplification: Quadric Edge Collapse Decimation
Target number of faces
</code></pre>
<h4 id="meshlab_pymeshlab">PyMeshLab</h4>
<p>
All filters available in MeshLab can be used in Python scripts too. Writing a<br />
PyMeshLab script is very useful for applying the same filters to different meshes<br />
(see the batch sketch after the following example).
</p>
<pre><code class="language-python">import pymeshlab
ms = pymeshlab.MeshSet()
ms.load_new_mesh('input.ply')
# Cleaning and Repairing > Remove Isolated pieces (wrt Face num.)
ms.meshing_remove_connected_component_by_face_number(mincomponentsize=1000)
# Normals, Curvatures and Orientation > Transform: Rotate
ms.compute_matrix_from_rotation(rotaxis='X axis', angle=-90.0)
# Smoothing, Fairing and Deformation > Laplacian Smooth
ms.apply_coord_laplacian_smoothing(stepsmoothnum=5)
# Texture > Transfer: Texture to Vertex Color (1 or 2 meshes)
ms.transfer_texture_to_color_per_vertex()
ms.save_current_mesh('output.ply')
</code></pre>
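<p>
A minimal batch sketch that wraps the recipe above in a function and applies<br />
it to several meshes (the file names are hypothetical):
</p>
<pre><code class="language-python">import pymeshlab

def clean_mesh(in_path, out_path):
    ms = pymeshlab.MeshSet()
    ms.load_new_mesh(in_path)
    # Cleaning and Repairing > Remove Isolated pieces (wrt Face num.)
    ms.meshing_remove_connected_component_by_face_number(mincomponentsize=1000)
    # Smoothing, Fairing and Deformation > Laplacian Smooth
    ms.apply_coord_laplacian_smoothing(stepsmoothnum=5)
    ms.save_current_mesh(out_path)

for name in ['scan_a', 'scan_b', 'scan_c']:
    clean_mesh(f'{name}.ply', f'{name}_clean.ply')
</code></pre>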
<p>
In case a functionality is missing in MeshLab, write a function in Python.<br />
See the <a href="#files">files</a> section for an example.
</p>
<h3 id="blender">Blender</h3>
<p>
Blender can import meshes stored in Stanford PLY files:
</p>
<pre><code class="language-plaintext">File > Import > Stanford PLY (.ply)
</code></pre>
<p>
To manually smooth the mesh at selected positions, open the <code>Sculpting</code> view.<br />
In the <code>Sculpting</code> view, the tool <code>Smooth</code> is found in the toolbar at the bottom.
</p>
<h3 id="other">Other Photogrammetry Software</h3>
<h4 id="other_meshroom">Meshroom</h4>
<p>
Using Meshroom 2025.1.0, the created mesh was not as detailed<br />
as a mesh reconstructed with COLMAP 3.12.6.
</p>
<h3 id="links">External Links</h3>
<dl>
<dt>Rembg</dt>
<dd><a href="https://github.com/danielgatis/rembg" target="_blank">
https://github.com/danielgatis/rembg</a></dd>
<dt>COLMAP</dt>
<dd><a href="https://colmap.github.io/" target="_blank">
https://colmap.github.io/</a></dd>
<dt>OpenMVS</dt>
<dd><a href="https://cdcseacave.github.io/" target="_blank">
https://cdcseacave.github.io/</a></dd>
<dt>MeshLab</dt>
<dd><a href="https://www.meshlab.net/" target="_blank">
https://www.meshlab.net/</a></dd>
<dd><a href="https://pymeshlab.readthedocs.io/" target="_blank">
https://pymeshlab.readthedocs.io/</a></dd>
<dd><a href="https://www.youtube.com/user/MrPMeshLabTutorials" target="_blank">
https://www.youtube.com/user/MrPMeshLabTutorials</a></dd>
<dt>Blender</dt>
<dd><a href="https://www.blender.org/" target="_blank">
https://www.blender.org/</a></dd>
<dt>Meshroom</dt>
<dd><a href="https://alicevision.org/#meshroom" target="_blank">
https://alicevision.org/#meshroom</a></dd>
<dt>Comparison of photogrammetry software</dt>
<dd><a href="https://www.reddit.com/r/photogrammetry/comments/nlkxfd/colmap_openmvs_is_my_favourite_free_combination/" target="_blank">
https://www.reddit.com/r/photogrammetry/comments/nlkxfd/colmap_openmvs_is_my_favourite_free_combination/</a></dd>
</dl>