Welcome to the tutorial for py_sfm_depth
If you use this tutorial and the accompanying software, please cite the following paper:
Dietrich JT. 2017. Bathymetric Structure-from-Motion: extracting shallow stream bathymetry from multi-view stereo photogrammetry. Earth Surface Processes and Landforms 42 : 355–364. DOI: 10.1002/esp.4060 (LINK)
The above article focused on UAV-based imagery collection, but ground-based imagery should be usable as long as the images do not include the horizon (i.e. the camera is pointing mainly down).
Because this is a long tutorial, I've added some table of contents links throughout for easy navigation between the sections.
Top | SfM Collection and Processing | Point Cloud Processing | Data Prep | Running the Script
Prerequisites
In order to use this tutorial, you will need:
- Optimal site conditions:
- Clear water is first and foremost. High levels of suspended sediment (cloudy water) or tannic conditions (dark brown water) will inhibit the use of SfM to measure depths. The rule is if you can’t see the bottom of the stream, neither can the camera…and that’s going to cause problems.
- Minimal surface waves. Both wind-driven waves and hydraulic waves (riffles and standing waves) will increase the “noise” in the SfM point cloud. This will lead to inaccuracies/errors in the final outputs.
- Cloudy, hazy, or smoky weather conditions create a lot of surface reflections on the water, which will inhibit accurate measurements.
- A Structure from Motion (or other photogrammetry) software package that can:
- Export a georeferenced (or scaled) point cloud dataset
- Export the calculated camera positions (x,y,z) and orientations (pitch, roll, yaw)
- A point cloud/3D data processing software:
- In this tutorial, I will be using my favorite (which is free and offers cross-platform support) – CloudCompare (cloudcompare.org)
- An installation of Python (python.org) that has a number of libraries installed
- I use a scientific python distribution, Anaconda (https://www.continuum.io/downloads), that has most of the needed libraries.
- For version 1.0 of the code (Python 2.7) you will need wxPython, which you can install via conda:
conda install -c anaconda wxpython=3.0.0.0
- As with any SfM collection technique, practice is your best friend. Do not expect perfect results from the first attempt to correct SfM bathymetry.
Top | SfM Collection and Processing | Point Cloud Processing | Data Prep | Running the Script
SfM Collection and Processing
Collect imagery for your site
I’m not going to go into the details here; I’m assuming you have knowledge of and experience with imagery collection for SfM.
Some tips for bathymetric data collection:
- I’ll reiterate: clear water is one of the most important factors, and calmer wind conditions are ideal (i.e. fewer surface waves).
- Imagery should be collected at low-oblique angles (~20° off nadir) in convergent, overlapping patterns. Collecting at two heights/altitudes is also advantageous.
- Using a polarizing filter, adjusted to reduce glare, will help the camera see through the water column to the bottom.
- As much as possible, keep the sun behind the sensor.
- Collect images at a time of day that minimizes shadows from the banks/riparian vegetation. Noon sun is not required and it is better to optimize for the site.
Surveying
Accurate ground control points are critical for any SfM collection. RTK-GPS or a Total Station are the recommended survey methods.
- Additional surveying requirements:
- In-water validation points:
- You should collect a number of validation points (100 is a fairly easy number to reach) at a variety of the depths present in the study area. These are for error checking/error statistics.
- Water’s edge points:
- You will need to collect a number of survey points on/at the water’s edge (for establishing the water surface elevations).
- You can do an extensive survey of the water’s edge (one point every 3-5 meters, plus extras at elevation transitions) or take a sample survey (one point every 20-30 meters, plus extras at elevation transitions).
- Either way, you will need water’s edge points for every part of the stream where you want to map the bathymetry.
- With sample surveys, you will need to fill in by digitizing points on the water’s edge (more on that later).
- For streams with steep banks, this will be a challenge. However, surveying is the best option because digitizing the water level on steep banks is notoriously difficult.
Processing
Go through your software’s processing chain to the point where you have a georeferenced, dense point cloud (any additional processing, orthophotos or DEMs, is optional).
- Export your point cloud (LAS files are good, but it’s up to you)
- Export the calculated/estimated camera positions and orientations
- The key here is that you need:
- The x,y,z coordinates of the cameras in the same reference system as our points
- The orientation angles of the cameras (pitch, roll, and yaw)
- Pitch should be 0° = nadir (straight down); you may need to convert pitch angles if your software uses a different 0° reference.
- Roll in most software should be 0° = horizontal.
- Yaw should be a compass angle, 0° = North
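If your software reports pitch in a different convention (for example, 0° = horizontal with a straight-down camera at −90°, which is common in flight logs), the conversion is a simple offset. A minimal sketch, assuming that particular export convention (verify the sign convention of your own software before using this):

```python
# Hypothetical conversion: many flight logs report pitch with
# 0 deg = horizontal and -90 deg = pointing straight down. The script
# expects 0 deg = nadir, so we shift by 90 degrees. Verify the sign
# convention of your own export before relying on this.
def horizontal_to_nadir_pitch(pitch_deg):
    """Convert pitch from '0 = horizontal' to '0 = nadir (straight down)'."""
    return pitch_deg + 90.0

print(horizontal_to_nadir_pitch(-90.0))  # a straight-down camera becomes 0.0
print(horizontal_to_nadir_pitch(-70.0))  # 20 degrees off nadir
```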
Top | SfM Collection and Processing | Point Cloud Processing | Data Prep | Running the Script
Point Cloud Processing
The instructions in this section use the tools in CloudCompare (CC), so assume that the tool names are for CloudCompare. If you are using a different software, you’ll need to read through and translate the instructions/tools to your specific software. Info on how to use the specific tools in CC can be found on the CloudCompare Wiki (http://www.cloudcompare.org/doc/wiki/index.php?title=Main_Page)
Water Surface Processing
Open your point cloud in CloudCompare
If you are prompted to apply coordinate offsets, you should accept the default X and Y, HOWEVER…set the Z offset to zero (it makes the calculations easier)
Import your surveyed water’s edge points (with the same offset as the point cloud)
- If you did an extensive survey of the water’s edge you can skip down to the "Create a Delaunay Mesh..." step
Select your point cloud in the DB Tree
- Zoom into an area with surveyed water’s edge points as a reference for where the water’s edge appears in the point cloud. The obvious attribute to observe is the color change from dry to wet.
- Select the “Point List Picking” tool, which lets you digitize multiple points
- With this tool you can digitize (pick) additional points along the water’s edge. If you make a mistake, you can use the back button on the picking list window.
- For many of my sites, I go crazy picking points every 2-3 meters. You will need water’s edge points for every part of the stream where you want to have corrected data points.
- For study areas with “infinite” water surfaces (large lakes/ocean), I would suggest creating “synthetic” water’s edge points. These are just additional points with arbitrary X and Y coordinates and Z coordinates set to the water surface elevation. These points should extend slightly beyond the study area. You can make the points in a text editor or Excel and import them.
- Once you are satisfied with the point selections, click the green check mark in the toolbox to save the points as a sub-entity of your point cloud. Select the point cloud and re-open the Point List Picking tool (the points should reappear in the picking list).
- Click the drop-down arrow by the “Save” icon and select “new cloud”, which will save the points as a separate point cloud in the tree with the name ‘Picking list’
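Rather than typing synthetic water’s edge points by hand, a short script can generate them for import. A sketch, where the grid extent, spacing, elevation, and file name are all placeholder values you would replace with your own:

```python
# Hypothetical generator for "synthetic" water's edge points: a regular
# x,y grid, every point at the known water surface elevation z, written
# as a CSV that CloudCompare can import. All values below are placeholders.
import csv

def write_synthetic_edge_points(path, x_range, y_range, z, spacing):
    """Write a regular grid of x,y points, all at water surface elevation z."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["x", "y", "z"])
        x = x_range[0]
        while x <= x_range[1]:
            y = y_range[0]
            while y <= y_range[1]:
                writer.writerow([x, y, z])
                y += spacing
            x += spacing

# Placeholder values: a 100 m x 50 m area at water surface elevation 101.25 m
write_synthetic_edge_points("synthetic_edge.csv", (0.0, 100.0), (0.0, 50.0), 101.25, 10.0)
```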
Select and merge the new ‘Picking list’ cloud with the surveyed points using the “Merge Multiple Clouds” tool.
- Select the new merged point cloud and add the Z values as a new scalar field (SF) attribute
- Tools…Projection…Export coordinates(s) to SF(s)…check ‘Z’…OK
Create a Delaunay Mesh using the merged edge points
Select the merged water surface points in the DB Tree.
Select Edit…Mesh…Delaunay 2.5D (XY plane)
- Max edge length = your point spacing (2-5m, this limits wild interpolation)
The mesh should appear with colors representing elevation.
- Check the mesh to make sure that there are not anomalous, abrupt changes in elevation along the edges (e.g. color changes).
- If there are errors, you can edit the points (delete/re-pick) and recreate the mesh.
Subsampling the point cloud
In order to limit the influence of noise in the point cloud, we need to subsample the point cloud to a uniform point spacing using a minimum elevation filter. The point spacing will need to be determined by you and the requirements/limitations of the site and your specific research question(s).
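Conceptually, the minimum-elevation filter keeps the single lowest point inside each grid cell. This is what CloudCompare’s Rasterize tool does for you below; the following is only an illustrative sketch of that operation, with made-up points:

```python
# Illustrative sketch (not CloudCompare itself): subsample a point cloud
# to a uniform grid spacing by keeping the lowest-elevation point in each
# cell, mirroring the Rasterize tool's "minimum height" setting.
import math

def min_z_subsample(points, cell_size):
    """points: list of (x, y, z) tuples; returns one lowest point per cell."""
    lowest = {}
    for x, y, z in points:
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        if key not in lowest or z < lowest[key][2]:
            lowest[key] = (x, y, z)
    return sorted(lowest.values())

pts = [(0.2, 0.3, 5.0), (0.4, 0.1, 4.2), (1.5, 0.2, 6.0)]
print(min_z_subsample(pts, 1.0))  # the 5.0 m point in cell (0, 0) is dropped
```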
Delete extra scalar fields
- Select your SfM point cloud in the DB Tree
- In the ‘Properties’ window, scroll down to the ‘Scalar Field’ section
- If you have extra scalar fields in the point cloud (e.g. Point Source ID or Scan Direction) that are not needed…
- Use the drop-down to select them and press the ‘Delete Current Scalar Field’ button on the toolbar. Repeat for all the fields you do not need.
With the SfM point cloud selected, launch the ‘Rasterize tool’:
- Tools…Projection…Rasterize…
- In the Grid step box, enter the distance between points you want
- Active layer = Height grid values
- Projection, direction = Z
- Cell height = minimum height
- No checks in ‘interpolate SF’ or ‘resample input cloud’
- Empty Cells, Fill with = leave empty
Click the ‘Update Grid’ button at the top
- At the bottom, select the ‘Export tab’
- In the Export per-cell statistics section, I would recommend you check all the boxes. This will export the statistics for each point and may be useful in a final error analysis (e.g. correction error vs. roughness (std. dev.)).
- Click the ‘Cloud button’ to export the resampled points as a new point cloud
The new cloud will appear in the DB Tree with the suffix *- Cloud.raster(#), where # is the spacing you specified.
Transferring attributes to the subsampled point cloud
Select your new subsampled point cloud in the DB Tree...
Calculate the point-to-mesh distances
Select both the subsampled point cloud and the water surface mesh
- On the toolbar click ‘Calculate point/mesh distance’ OR Tools…Distances…Point/Mesh Dist
- Leave the options on the default
- This will add a new scalar field to the point cloud called ‘C2M Signed Distance’ which represents the “depth” of the points below the water surface
Filtering the “underwater” points
Select the subsampled point cloud
Click the ‘Filter points by value’ tool on the toolbar
- Change the Min value to the maximum depth for your area, as a negative number (round to a whole number)
- Change the Max value to zero
- Click OK
The underwater points will be extracted to a new cloud with a suffix *- Cloud.extract
- The edges outside the water surface mesh will likely be included in the extract
- Use the editing tools to delete these edges, cropped to just inside the water surface mesh.
Convert depth to water surface elevation
With the extracted point cloud selected, click the Calculator icon on the toolbar (“Add, subtract, multiply, or divide two scalar fields”). Here we will add the calculated depth back to the elevations to get the water surface elevation.
- Set SF 1 = Coord. Z
- Set operation = minus
- Set Sf 2 = C2M Signed Distances
- Uncheck ‘Update SF1 directly’
- …OK
This subtracts the negative depth value from the SfM elevation value (i.e. adds the depth magnitude back) to give the water surface elevation for each point.
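The scalar-field arithmetic above, in plain numbers (made-up values for a single point):

```python
# Worked example with made-up numbers: the C2M signed distance is
# negative for points below the water surface mesh, so "Z minus C2M"
# adds the depth back onto the apparent bed elevation.
sfm_z = 99.5          # apparent (uncorrected) bed elevation, in meters
c2m_distance = -0.5   # signed distance to the water surface mesh (0.5 m deep)
w_surf = sfm_z - c2m_distance
print(w_surf)  # 100.0, the water surface elevation at this point
```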
Rename scalar fields
In the properties window…change the active SF to ‘C2M Signed Distances’ and delete this field
- Switch to ‘Height grid values’ and rename it to sfm_z
- Edit…Scalar fields…Rename…[name]…OK
- Switch to ‘(SF# - SF#)’ and rename it to w_surf
Export the point cloud
Select the edited point cloud
- File…Save...
- Choose your file path
- File name = [your_file_name].csv
- Save as type = ‘ASCII cloud’
Optional/Recommended…Save all of the processing data
Click in the DB Tree and select all (Ctrl+A)
- File…Save...
- Choose your file path
- File name = [name]
- Save as type = ‘CloudCompare entities (*.bin)’
Top | SfM Collection and Processing | Point Cloud Processing | Data Prep | Running the Script
Data Prep
The inputs for the Python script are the edited point cloud, the camera locations/orientations, and a file with camera sensor parameters. These all need to be comma-delimited (*.csv) files. You will need to open all of the files in a text editor or Excel to double-check the header names (they are case-sensitive). There are examples included in the "Sample Data" folder in the Git repository.
Point cloud
- The required columns/headers are
x,y,sfm_z,w_surf
- Any extra columns like color [r,g,b] will be copied to the output point cloud without any modification.
- If you followed the instructions above, you will have an extra column named ‘z’ in the point cloud; it is the same as the ‘sfm_z’ column, so you can delete it if you like.
Camera positions
- The required columns/headers are
x,y,z,pitch,roll,yaw
- Camera/photo names can be included, but are not required
Sensor properties
- The required columns/headers are
focal,sensor_x,sensor_y
- focal is the focal length of the camera in millimeters
- sensor_x & sensor_y are the physical sensor dimensions in millimeters
- You may have to dig around the internet to find these values if they are not listed with the technical specifications of the camera.
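These three values matter because focal length and physical sensor size together determine the camera’s angular field of view. A quick sanity check you can run on your own numbers; the 16 mm lens and 23.5 × 15.6 mm (APS-C) sensor below are example values only:

```python
# Field of view from focal length and sensor dimensions, both in mm.
# The lens and sensor values used here are examples, not requirements.
import math

def field_of_view_deg(focal_mm, sensor_dim_mm):
    """Angular field of view across one sensor dimension, in degrees."""
    return math.degrees(2.0 * math.atan(sensor_dim_mm / (2.0 * focal_mm)))

print(round(field_of_view_deg(16.0, 23.5), 1))  # horizontal FOV, degrees
print(round(field_of_view_deg(16.0, 15.6), 1))  # vertical FOV, degrees
```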
Top | SfM Collection and Processing | Point Cloud Processing | Data Prep | Running the Script
Running the script
A quick note, there is very little error checking/handling in the first version of this script. Most of the errors that are likely to occur will be because of incorrect header names in the CSV files, double check them against the sample data files. If the output is empty or looks weird, double check that the camera orientation values are in the correct directions and that the point cloud has correct water surface elevations.
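Since mismatched headers are the most common failure, a small pre-flight check can save a run. A sketch (the required column sets come from the Data Prep section; the file names are placeholders for your own files):

```python
# Hypothetical pre-flight check for the most common failure mode:
# wrong or misspelled (case-sensitive) header names in the input CSVs.
import csv

REQUIRED = {
    "point_cloud": {"x", "y", "sfm_z", "w_surf"},
    "cameras": {"x", "y", "z", "pitch", "roll", "yaw"},
    "sensor": {"focal", "sensor_x", "sensor_y"},
}

def missing_headers(csv_path, required):
    """Return the set of required column names absent from the CSV header row."""
    with open(csv_path, newline="") as f:
        header = set(next(csv.reader(f)))
    return required - header

# Example (placeholder file name): an empty set means the file is ready.
# missing_headers("siteName_cloud.csv", REQUIRED["point_cloud"])
```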
Download the Git repository
- Unzip the files into a convenient spot (near your exported data is always a good choice)
Launch the script
- If you are using Anaconda:
- Open the Spyder editor
- Load the Python file (py_sfm_depth_v1_0.py)
- Run the script with the green “Play” button on the toolbar (no editing required)
- If you want to use the command line:
- Open your command line/terminal program of choice
- Change your directory to the place you unzipped the files
- Execute the script with:
python py_sfm_depth_v1-0.py
Load files
- The first window that pops up will ask for the point cloud (i.e. the one you exported from CloudCompare).
- Navigate to the directory and select the file (the subsequent windows will return to this directory)
- Second is the camera position/orientation file.
- Third is the sensor file.
- Fourth is the output file. I use something like
siteName_CORRECTED.csv
- If the output file window pops back up, there is already a file with that name and you’ll need to choose another name.
The script will start…
- Processing the camera's instantaneous fields of view first.
- Then processing points in batches of 7,500.
When the script is finished, it will report how many points were processed and approximately how long it took. You can exit the editor or terminal.
Examining the results
Open the output file in CloudCompare
- The ‘Open ASCII File’ import window will open
- The header names will be in the first line, note the column number of ‘corElev_avg’
- At the bottom, set the separator to comma (,), set skip lines = 1, and check ‘extract scalar fields from first line’
- CC will try to make the third column ‘coord Z’. With the drop-downs, reassign ‘coord Z’ to the ‘corElev_avg’ column
- If you have extra columns…
- You can set RGB columns, if they are not automatically interpreted
- Make sure none of the columns are interpreted as Nx,Ny,Nz. These are the “normal” directions (the surface orientation of the point); change these columns to “Scalar”. (That is, unless you do have normals stored as part of the point cloud, but you most likely will not.)
The cloud should be shown in the viewer. You can change the scalar field to ‘h,avg’ to display depth, or change the colors to RGB (if you have them in your data).
And that’s your data corrected for refraction. I would suggest doing some error analysis before going too far, but that’s a topic for another tutorial…
Top | SfM Collection and Processing | Point Cloud Processing | Data Prep | Running the Script