Welcome to pyBathySfM

If you plan to use this tutorial and the accompanying software, I would suggest you read (and cite):

Dietrich JT. 2017. Bathymetric Structure-from-Motion: extracting shallow stream bathymetry from multi-view stereo photogrammetry. Earth Surface Processes and Landforms 42 : 355–364. DOI: 10.1002/esp.4060 Also available on ResearchGate

The above article focused on UAV-based imagery collection, but ground-based imagery should also work as long as the images do not include the horizon (i.e. the camera is pointing mainly down).

The instructions here are very similar to those for the original software, py_sfm_depth.

Because this is a long tutorial, I’ve added some table of contents links throughout for easy navigation between the sections.


Top : SfM Collection and Processing : Point Cloud Processing : Data Prep : Running the Script

Quick Start

If you’re already familiar with the processing workflow, you can run the code (load the GUI) from the command line. Change the directory to the location of the downloaded code and run: python py_BathySfM.py

Prerequisites

To use this tutorial, you will need:

  1. Optimal site conditions:
    • Clear water is first and foremost. High levels of suspended sediment (cloudy water) or tannic conditions (dark brown water) will inhibit the use of SfM to measure depths. The rule is if you can’t see the bottom of the stream, neither can the camera…and that’s going to cause problems.
    • Minimal surface waves. Both wind-driven waves and hydraulic waves (riffles and standing waves) will increase the “noise” in the SfM point cloud. This will lead to inaccuracies/errors in the final outputs.
    • Cloudy, hazy, or smoky weather conditions create a lot of surface reflections on the water, which will inhibit accurate measurements.
  2. A Structure from Motion (or other photogrammetry) software package that can:
    • Export a georeferenced (or scaled) point cloud dataset
    • Export the calculated camera positions (x,y,z) and orientations (pitch, roll, yaw)
  3. A point cloud/3D data processing software:
    • In this tutorial, I will be using my favorite (which is free and offers cross-platform support) – CloudCompare (cloudcompare.org)
  4. An installation of Python (python.org) that has a number of libraries installed
    • I use a scientific Python distribution, Anaconda (https://www.continuum.io/downloads), which has most of the needed libraries.
    • The minimum libraries are: PyQt5, numpy, pandas, sympy, matplotlib (a quick import check is sketched just after this list)
  5. As with any SfM collection technique, practice is your best friend. Do not expect perfect results from the first attempt to collect and correct SfM bathymetry.
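
If you want to confirm your environment before launching the GUI, a minimal import check is sketched below (the module names are taken from the list above; adjust them if your setup differs).

```python
import importlib

# Quick sanity check that the required libraries can be imported.
for lib in ("PyQt5", "numpy", "pandas", "sympy", "matplotlib"):
    try:
        importlib.import_module(lib)
        print(f"{lib}: OK")
    except ImportError:
        print(f"{lib}: MISSING - install it before running py_BathySfM.py")
```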

Top : SfM Collection and Processing : Point Cloud Processing : Data Prep : Running the Script

SfM Collection and Processing

Collect imagery for your site

I’m not going to go into the details here; I’m assuming you have knowledge of/experience with imagery collection for SfM.

Some tips for bathymetric data collection:

Surveying

Accurate ground control points are critical for any SfM collection. RTK-GPS or a Total Station are the recommended survey methods.

Processing

Go through your software’s processing chain to the point where you have a georeferenced, dense point cloud (any additional processing, orthophotos or DEMs, is optional).

  1. Export your point cloud (LAS files are good, but it’s up to you)
  2. Export the calculated/estimated camera positions and orientations
    • The key here is that you need:
      • The x,y,z coordinates of the cameras in the same reference system as our points
      • The orientation angles of the cameras (pitch, roll, and yaw)
      • Pitch should be 0° = nadir (straight down); you may need to convert pitch angles if your software uses a different 0° reference (see the sketch after this list).
      • Roll in most software should be 0° = horizontal.
      • Yaw should be a compass angle, 0° = North
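
If your SfM package exports angles in a different convention, a small conversion step can fix them before the camera file goes to the script. The sketch below is only an example: it assumes a CSV with x,y,z,pitch,roll,yaw columns and a pitch convention of 0° = horizontal / −90° = straight down, so check your software’s actual convention before applying any shift.

```python
import pandas as pd

# Example conversion to the conventions described above (assumptions noted in the text).
cams = pd.read_csv("cameras_raw.csv")   # assumed columns: x, y, z, pitch, roll, yaw

cams["pitch"] = cams["pitch"] + 90.0    # shift so 0 deg = nadir (straight down)
cams["yaw"] = cams["yaw"] % 360.0       # wrap yaw into a 0-360 compass range (0 = North)

cams.to_csv("cameras_for_pyBathySfM.csv", index=False)
```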

Top : SfM Collection and Processing : Point Cloud Processing : Data Prep : Running the Script

Point Cloud Processing

The instructions in this section use the tools in CloudCompare (CC), so the tool names refer to CloudCompare. If you are using different software, you’ll need to read through and translate the instructions/tools to your specific package. Info on how to use the specific tools in CC can be found on the CloudCompare Wiki (http://www.cloudcompare.org/doc/wiki/index.php?title=Main_Page)

Open your point cloud in CloudCompare

If you are prompted to apply coordinate offsets, you should accept the default X and Y, HOWEVER…set the Z offset to zero (it makes the calculations easier)

Water Surface Processing

Water surface elevations are one of the primary sources of error! Accurate water surfaces are critical to the success of this method

Create a Delaunay Mesh using your edge points

Import your surveyed or digitized water’s edge points (with the same offset as the point cloud)

Select the water surface points in the DB Tree and add the Z values as a new scalar field (SF) attribute

Select Edit…Mesh…Delaunay 2.5D (XY plane)

The mesh should appear with colors representing elevation.
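
For readers working outside CloudCompare, the same water-surface model can be approximated in Python. This is only a conceptual sketch (it uses scipy, which is not in the minimum library list above, and assumes edge is an N×3 array of x, y, z water’s-edge points):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Conceptual equivalent of the Delaunay 2.5D water-surface mesh: triangulate the
# edge points in the XY plane and linearly interpolate the water surface elevation.
def water_surface_interpolator(edge: np.ndarray) -> LinearNDInterpolator:
    return LinearNDInterpolator(edge[:, :2], edge[:, 2])

# Usage sketch: apparent depth = water surface elevation - submerged point elevation
# w_surf = water_surface_interpolator(edge)(pt_x, pt_y)
# depth  = w_surf - pt_z
```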

Subsampling the point cloud

In order to limit the influence of noise in the point cloud, we need to subsample the point cloud to a uniform point spacing using a minimum elevation filter. The point spacing will need to be determined by you and the requirements/limitations of the site and your specific research question(s).

Delete extra scalar fields (Optional)

With the SfM point cloud selected, launch the ‘Rasterize tool’:

  1. Tools…Projection…Rasterize…
  2. In the Grid step box, enter the distance between points you want
  3. Active layer = Height grid values
  4. Projection, direction = Z
  5. Cell height = minimum height
  6. No checks in ‘interpolate SF’ or ‘resample input cloud’
  7. Empty Cells, Fill with = leave empty
  8. Click the ‘Update Grid’ button at the top

The new cloud will appear in the DB Tree with the suffix *- Cloud.raster(#), where # is the spacing you specified.
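
For reference, the Rasterize step above is conceptually a minimum-elevation grid filter. A rough Python equivalent (a sketch only, assuming a pandas DataFrame pts with x, y, z columns):

```python
import numpy as np
import pandas as pd

# Keep only the lowest point in each grid cell, mimicking Rasterize with
# "cell height = minimum height" to suppress noise above the bed.
def min_z_subsample(pts: pd.DataFrame, cell: float) -> pd.DataFrame:
    ix = np.floor(pts["x"] / cell).astype(int)   # grid column index
    iy = np.floor(pts["y"] / cell).astype(int)   # grid row index
    keep = pts.groupby([ix, iy])["z"].idxmin()   # lowest point in each occupied cell
    return pts.loc[keep].reset_index(drop=True)
```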

Transferring attributes to the subsampled point cloud

Select your new subsampled point cloud in the DB Tree…

Calculate the point-to-mesh distances (points to water surface = apparent depth)

Select both the subsampled point cloud and the water surface mesh

Filtering the “underwater” points

Select the subsampled point cloud

Click the ‘Filter points by value’ tool on the toolbar

The underwater points will be extracted to a new cloud with a suffix *- Cloud.extract

Convert depth to water surface elevation

With the extracted point cloud selected, click the Calculator icon on the toolbar (“Add, subtract, multiply, or divide two scalar fields”). Here we will add the calculated depth back to the elevations to get the water surface elevation.

Because the point-to-mesh distance is negative for underwater points, subtracting it from the SfM elevation (i.e. adding the depth back) gives the water surface elevation for each point.
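
For example (hypothetical numbers): a submerged point with an SfM elevation of 101.20 m and a point-to-mesh distance of −0.45 m gets a water surface elevation of 101.20 − (−0.45) = 101.65 m.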

Rename scalar fields

In the properties window…change the active SF to ‘C2M Signed Distances’ and delete this field

Export the point cloud

Select the edited point cloud

Optional/Recommended…Save all of the processing data

Click in the DB Tree and select all (Ctrl+A)


Top : SfM Collection and Processing : Point Cloud Processing : Data Prep : Running the Script

Data Prep

The inputs for the Python script are the edited point cloud, the camera locations/orientations, and a file with camera sensor parameters. These all need to be comma-delimited (*.csv) files. You will need to open all of the files in a text editor or Excel to double-check the header names (they are case-sensitive). There are examples included in the “Sample Data” folder in the Git repository.

Point cloud

Camera positions

Sensor properties
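
A quick way to catch header problems is to check the files in Python before launching the GUI. The column names in this sketch are illustrative only (w_surf from the water-surface step and the camera columns from the export step); always confirm the exact, case-sensitive names, including those for the sensor file, against the Sample Data files.

```python
import pandas as pd

# Illustrative header check -- verify the exact names against the Sample Data files.
expected = {
    "point_cloud.csv": ["x", "y", "z", "w_surf"],                 # assumed point cloud headers
    "cameras.csv":     ["x", "y", "z", "pitch", "roll", "yaw"],   # from the camera export step
}
for path, cols in expected.items():
    header = pd.read_csv(path, nrows=0).columns
    missing = [c for c in cols if c not in header]
    print(path, "OK" if not missing else f"missing columns: {missing}")
```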


Top : SfM Collection and Processing : Point Cloud Processing : Data Prep : Running the Script

Running the script

A quick note: there is very little error checking/handling in this program. Most of the errors that are likely to occur will be because of incorrect header names in the CSV files; double-check them against the sample data files. If the output is empty or looks weird, double-check that the camera orientation values are in the correct directions and that the point cloud has correct water surface elevations.

Download the Git repository

Launch the script

  1. Input Point Cloud: Your point cloud (i.e. the one you exported from CloudCompare).
    • Navigate to the directory and select the CSV file
  2. Camera Position/Orientation: CSV file with camera x,y,z,pitch,roll,yaw
  3. Camera Options:
    • Export Camera File: This will output a Pickle (*.pkl) file in the Output Directory with the calculated camera fields of view (useful for avoiding the FOV calculation wait on subsequent runs)
    • Use Precalc’d Camera: If you exported the camera file above in a previous run, you can input the Pickle file here to bypass the camera FOV processing.
  4. Sensor File: The sensor CSV file.
  5. Output File: The name and location of the corrected output file.
    • I use something like siteName_CORRECTED.csv
  6. Extra Output Options:
    • Small Angle Approx: This will add an extra column to the output using the small-angle approximation for refraction correction, 1.34 × apparent depth (Woodget et al., 2015); see the example after this list.
    • Extra Point Stat: This will add extra columns with additional statistics about the corrected depth (h).
      • Standard Dev., Median, Min/Max, a range of percentile ranks, and interquartile ranges
    • Extra Camera Outputs: Useful for error analysis (see below), this outputs a number of different files relating to the master camera visibility calculations, including raw outputs for corrected depths and elevations. This is meant as a debugging output, so only the first batch of 10,000 points is output.
    • Filtered depth values: IMPORTANT
      • Add additional columns: h_filt and corElev_avg_filt
      • Recent trials/research (by me) have demonstrated that filtering out some of the extreme values (high camera angles, points far from the camera) significantly improves accuracy. I’d suggest trying the defaults, 35° off nadir and 100 meters, as a first cut, but feel free to experiment…
  7. The ‘OK’ button will start the processing…
    • The cameras’ instantaneous fields of view are processed first (this takes a while…)
    • Then the points are processed in batches.
    • The progress bars are mostly for show and don’t generally work; feedback on the progress is printed to the command line.
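
A note on the Small Angle Approx option above: the small-angle approximation simply multiplies the apparent depth by the refractive index of clear water (~1.34). For example (hypothetical numbers), an apparent depth of 0.50 m corrects to 1.34 × 0.50 ≈ 0.67 m.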

When the script is finished, it will report how many points were processed and approximately how long it took. You can exit the window or hit the ‘Cancel’ button.

Examining the results

Open the output file in CloudCompare

The cloud should be shown in the viewer. You can change the scalar field to ‘h_avg’ to display depth, or change the colors to RGB (if you have them in your data).

And that’s your data, corrected for refraction. I would suggest doing some error analysis before going too far; a full discussion of error, however, is a topic for another tutorial…

Basic Error Analysis

You will need independently collected elevation data for the bed of your stream/waterbody

  1. Load your surveyed validation points into CloudCompare
  2. Follow a similar workflow to the Point Cloud Processing above.
    • Add a point-to-mesh distance to the water surface (name the scalar field w_surf)
  3. Add a scalar field for the uncorrected elevation from the SfM dataset to your validation points
    • Can be done in the raster world through a GIS or some other nearest neighbor attribute transfer
    • Make sure you name the attribute sfm_z
  4. You can then run the enhanced validation points through the bathymetric correction like you would a full point cloud.
  5. Ta-da! You have directly comparable error stats.
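
Once the corrected validation points come back out of the script, the comparison itself is a few lines of Python. This is a sketch only: the file name and the corElev_avg/z column names are assumptions, so match them to your actual output headers.

```python
import numpy as np
import pandas as pd

# Compare refraction-corrected SfM elevations to the surveyed validation elevations.
pts = pd.read_csv("validation_CORRECTED.csv")    # assumed output file name
err = pts["corElev_avg"] - pts["z"]              # corrected elevation minus surveyed elevation

print(f"Mean error: {err.mean():.3f} m")
print(f"RMSE:       {np.sqrt((err ** 2).mean()):.3f} m")
```
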
References

Top : SfM Collection and Processing : Point Cloud Processing : Data Prep : Running the Script