Convert Images to Point Cloud Pixels-to-Points™ BETA

The Pixels-to-Points tool takes in photos with overlapping coverage and generates a 3D point cloud output using the photogrammetric method of Structure from Motion (SfM). It can also generate an ortho-rectified image. This technique uses overlapping photographs to derive the three-dimensional structure of the landscape and the objects on it, producing a 3D point cloud. The resulting point cloud is sometimes referred to as PhoDar or Fodar because it can generate a point cloud similar to traditional Lidar data collection. This photogrammetric point cloud can then be analyzed with other Lidar processing tools. This tool is designed to work with sets of many overlapping images that contain geotags (EXIF data), including those collected from UAV or drone flights.

The Convert Images to Point Cloud Pixels-to-Points tool is accessible from the Lidar toolbar. It requires a Lidar Module license.

Note: The image-to-point-cloud process is memory intensive and may take several hours depending on the input data and quality setting. It is recommended to perform this process on a dedicated machine with at least 16 GB of RAM. This tool requires a 64-bit operating system. For more information about the requirements, see System Requirements.

Press the Convert Images to Point Cloud Pixels-to-Points button to display the Pixels-to-Points tool dialog.

The Calculating Cloud/Mesh from Images dialog will display the progress of the process and the estimated completion time.

When finished, a dialog will display the log file location, the settings used, and a summary of the location error.

Data Recommendations

This tool requires an input of many overlapping images. At least 60% overlap in image extents is recommended for successful point cloud generation. Evenly distributed photos taken from varying angles are also recommended. See Data Collection Recommendations for Pixels-to-Points™ BETA for more information.
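
As a rough pre-flight check, the forward overlap between consecutive nadir photos can be estimated from the flight altitude, camera geometry, and distance between exposures. The Python sketch below is only an illustration of that arithmetic (it is not part of the tool), and the altitude, focal length, sensor height, and shot spacing values are hypothetical examples.

    def forward_overlap(altitude_m, focal_length_mm, sensor_height_mm, shot_spacing_m):
        """Estimate forward overlap between consecutive nadir photos as a fraction."""
        # Along-track ground footprint of a single image
        footprint_m = altitude_m * sensor_height_mm / focal_length_mm
        return max(0.0, 1.0 - shot_spacing_m / footprint_m)

    # Example: 80 m altitude, 8.8 mm focal length, 8.8 mm sensor height, 20 m between shots
    print("Forward overlap: {:.0%}".format(forward_overlap(80, 8.8, 8.8, 20)))
    # -> Forward overlap: 75% (above the recommended 60% minimum)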

Input Image Files

The input image files section lists the photos that will be processed by the tool. Load input images using the buttons below the input image files list. Highlight a loaded image to display it in the Image Preview window to the right of the list.

The loaded images will be sorted based on the time stamp of the images. During the initial processing, all images are converted to JPG and copied to a temporary location on the system.

The input image file list will display information about each input image based on the metadata and EXIF tags associated with the input files.
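
The same EXIF fields the tool relies on (capture time and GPS geotag) can also be inspected outside the tool if needed. The following Python sketch uses the Pillow library (version 9.4 or newer) purely as an illustration; the "flight/*.JPG" path is a hypothetical example and is not something the tool requires.

    import glob
    from PIL import Image, ExifTags

    def read_capture_info(path):
        """Return the DateTimeOriginal string and GPS tag dictionary from an image's EXIF data."""
        exif = Image.open(path).getexif()
        timestamp = exif.get_ifd(ExifTags.IFD.Exif).get(ExifTags.Base.DateTimeOriginal)
        gps_tags = dict(exif.get_ifd(ExifTags.IFD.GPSInfo))
        return timestamp, gps_tags

    # Sort a hypothetical folder of drone photos by capture time, as the tool does internally
    photos = sorted(glob.glob("flight/*.JPG"), key=lambda p: read_capture_info(p)[0] or "")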

Add File(s)... — Press the Add Files button to select image files to add to the input image files list. This list automatically filters to show just JPG, PNG and TIFF files. Use the SHIFT and CTRL keys to select multiple files to load from the file browser.

Add Folder... — Press the Add Folder button to add all files from a directory.

Add Loaded... — If the image files are already loaded into the main map view as picture points, press the Add Loaded button to add them to the input image files list. If a subset of the picture points has been selected with the digitizer tool, an option will appear to add only the selected points. Press Yes to add only the pictures selected with the digitizer tool to the Input Image Files list. Press No to add all loaded picture points to the Input Image Files list.

Remove Selected - Highlight images in the Input Image Files list, then press the Remove Selected button to delete the highlighted images from the input list. Use the SHIFT and CTRL keys to highlight multiple images.

Load Images in Main Map - Press the Load Images in Main Map button to add the images to the main map display as picture points. This will also display directional arrows indicating the orientation of the image, and a Flight Path line feature connecting the images in the order they were captured.

Camera Sensor Width

If the image metadata indicates a camera that is not in the built-in camera database or previously added, a dialog will appear asking for the sensor width. Check the manufacturer information for the appropriate value. The user-added camera sensor width data is stored in the user data folder in a sensor_width_camera_database.txt file.
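
The sensor width matters because, together with the focal length and image width, it determines the ground footprint of each pixel. The sketch below shows that relationship for a nadir photo; the camera and altitude values are hypothetical examples, not values used by the tool.

    def ground_sample_distance_cm(sensor_width_mm, focal_length_mm, image_width_px, altitude_m):
        """Approximate ground footprint of one pixel (GSD) for a nadir photo, in centimeters."""
        return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

    # Example: 13.2 mm sensor width, 8.8 mm focal length, 5472 px image width, 80 m altitude
    print(round(ground_sample_distance_cm(13.2, 8.8, 5472, 80), 2), "cm/pixel")  # ~2.19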

Output Files

A point cloud output will be generated automatically and create a layer in the active workspace. Check the Save to GMP File option and enter a filename to save the point cloud output to a package file, rather than only storing it in a temporary file. An orthoimage can also be generated and saved to the same file or to a separate GMP file.

Output files are generated in the current display projection if the projection is planar. When there is no loaded data in the map or the display projection does not contain linear units (such as a Geographic projection), the point cloud and orthoimage outputs will be generated in the appropriate UTM zone.
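
For reference, the standard UTM zone number for a given longitude follows the usual convention shown below; this is general map projection arithmetic, not a description of the tool's internal code, and it ignores the polar and Norway/Svalbard exceptions.

    def utm_zone(longitude_deg):
        """Standard UTM zone number for a longitude in degrees."""
        return int((longitude_deg + 180.0) // 6) + 1

    print(utm_zone(-105.0))  # -> 13 (central Colorado)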

Point Cloud Output

A point cloud is generated automatically. If no output GMP file is specified, it will be saved in a temporary file and loaded in the workspace. Check the Save to GMP File option and press the Select... button to specify the name and location of the output point cloud file. In the Layer Description box specify the name of the point cloud layer when it is loaded into the workspace. The default output layer name for the point cloud is Generated Point Cloud.

The generated point cloud is treated as a lidar point cloud and may be further processed with additional Automated Lidar Analysis Tools. The point cloud will contain the RGB colors from the images, and an intensity value that represents the grayscale color value (note this is not a true intensity value since there was no active remote sensing performed).
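
The exact grayscale conversion used for the intensity value is not documented here; a common assumption is a weighted average of the RGB components, such as the Rec. 601 luma weights illustrated below.

    def grayscale_intensity(r, g, b):
        """Grayscale value from RGB using Rec. 601 luma weights (an assumption for illustration)."""
        return round(0.299 * r + 0.587 * g + 0.114 * b)

    print(grayscale_intensity(200, 150, 100))  # -> 159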

The point cloud output will contain metadata parameters reflecting the settings used in the tool. These special metadata parameters are stored in the generated output layer, and are only saved in a Global Mapper package file export.

Create Point Cloud by Resampling Mesh (3D Model)

Select this option to produce a point cloud from the mesh. This creates a less noisy point cloud. Checking this option will generate a mesh feature, whether it is saved as an output or not, which will increase the processing time. A point cloud can also be created from a saved mesh at a later point. See Create Point Cloud from Mesh.

Orthoimage Output

Check the Create Orthoimage GMP file option to produce an ortho-rectified image layer as an additional output of the tool. This output can be saved to the same Global Mapper Package file as the point cloud, or press the Select... button to specify a different output package file.

Note: An orthoimage output can also be created later from the generated point cloud after the tool has run, using the Create Elevation Grid tool and selecting a Grid Type of Color (RGB).

The standard orthoimage output is calculated using a binning method of gridding, which selects the color of the highest elevation point for each output pixel. (See below for the option to generate the orthoimage from the mesh.)
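
Conceptually, this binning keeps only the highest point that falls inside each output cell and uses its color for that pixel. The Python sketch below illustrates the idea with a few hypothetical points; it is not the tool's implementation.

    def bin_highest_point(points, cell_size, origin_x, origin_y):
        """For each grid cell, keep the RGB color of the highest-elevation point in it.

        points: iterable of (x, y, z, (r, g, b)) tuples."""
        cells = {}
        for x, y, z, rgb in points:
            key = (int((x - origin_x) // cell_size), int((y - origin_y) // cell_size))
            if key not in cells or z > cells[key][0]:
                cells[key] = (z, rgb)
        return {key: rgb for key, (z, rgb) in cells.items()}

    pts = [(0.2, 0.3, 101.5, (120, 130, 90)),   # ground point
           (0.4, 0.1, 104.0, (200, 60, 50))]    # higher rooftop point in the same cell
    print(bin_highest_point(pts, cell_size=1.0, origin_x=0.0, origin_y=0.0))
    # -> {(0, 0): (200, 60, 50)}  (the higher point's color wins)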

Resampling (for Noise Removal)

Specify the resampling method that will be set when the orthoimage is loaded into the workspace. The default value is a Noise Filter that takes the median value in a 3x3 neighborhood. This filter affects the display of the orthoimage and will be remembered as the display setting any time the Global Mapper Package containing the orthoimage is loaded. The noise filter is the default because it helps reduce some of the noise in the data that may be particularly noticeable in areas where the generated point cloud is less dense, or at the edges of above-ground objects like buildings. The resampling method can also be changed later in the layer display settings.
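
The noise filter is a standard 3x3 median filter. The snippet below shows the same operation applied to one band of an arbitrary example array using SciPy; it only illustrates the filtering concept and is unrelated to Global Mapper's own code.

    import numpy as np
    from scipy.ndimage import median_filter

    band = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)  # one example image band
    filtered = median_filter(band, size=3)  # median of each pixel's 3x3 neighborhood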

Resolution

Specify the pixel resolution of the output orthoimage. This grid spacing setting can be specified using a multiple of the calculated average point spacing, or using an explicit linear resolution in Feet or Meters.
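
If the multiple-of-point-spacing option is used, the resulting pixel size scales with the average spacing of the generated points. The sketch below shows one simple way to approximate that spacing from a point count and covered area; the numbers are hypothetical, and the tool computes its own estimate internally.

    import math

    def average_point_spacing(point_count, area_m2):
        """Rough average spacing (m), assuming points are spread evenly over the covered area."""
        return math.sqrt(area_m2 / point_count)

    spacing = average_point_spacing(point_count=12_000_000, area_m2=250_000)
    print(round(spacing, 3), "m")      # ~0.144 m average spacing
    print(round(4 * spacing, 3), "m")  # pixel size if a 4x multiple is chosen (~0.577 m)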

For more information about the output file projection, see Output Files above.

Create Higher Quality Orthoimage from Mesh

Check this option to create the orthoimage from the mesh. When checked, a mesh is always created internally (even if it is not being saved), which will increase processing time. This option typically results in a better quality orthoimage.

Note: This can also be generated later from a saved textured mesh file. See Create Image from Mesh.

Mesh (3D Model) Output

Select this option to create a simplified and texturized 3D mesh output. This export requires additional processing time, since additional steps are performed to build and texture the mesh.

Save to Format

Specify the format of the output mesh file. The mesh can be saved as a Wavefront OBJ file, or a Global Mapper Package file. From there it can be converted into other 3D model formats via Export 3D Formats.

The Global Mapper package export will contain projection information and other 3D model orientation settings.

The Wavefront OBJ file will have an external *.prj file that Global Mapper recognizes as the coordinate reference system when loading the file. The Wavefront OBJ file is exported with a Y-Up orientation, so when loading the model, if prompted with the 3D File Import Options dialog, do not check the 'Load Z-up Model as Y-up' setting; the model is already oriented Y-Up.
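
For context, converting between Z-up and Y-up conventions is a 90-degree rotation about the X axis; one common sign convention is shown in the sketch below (3D packages differ on the sign). Applying such a conversion to a model that is already Y-up would tip it on its side, which is why the setting should be left unchecked for this export.

    def z_up_to_y_up(x, y, z):
        """Convert a Z-up coordinate to a Y-up convention (one common sign choice)."""
        return (x, z, -y)

    print(z_up_to_y_up(1.0, 2.0, 5.0))  # -> (1.0, 5.0, -2.0); the 'up' value 5.0 becomes Y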

The mesh file will use the same projection as the generated point cloud. If no display projection was set prior to running Pixels-to-Points, the mesh will be generated in an appropriate UTM zone. The export will also create a texture image stored in an external image file (*.jpg), and the material will be defined in a *.mtl file.

Log/Statistics Output

Choose to output a log and statistics file from the process. This file will also contain residual error calculations. If a log/statistics folder is selected, the log will be generated while the tool is running and saved to a temporary folder that will be listed at the bottom of the Calculating Cloud/Mesh from Images dialog.

Ground Control Points

Ground Control Points are not required. The point cloud may also be adjusted after it is generated, using rectification or shifting for horizontal adjustment. Vertical adjustment can be done on the resulting point cloud using Alter Elevation Values for a fixed offset, or the Lidar QC tool for vertical control point comparison and alignment. For more information on adding ground control points, see Ground Control Points with Pixels-to-Points™ BETA.

Options

Reduce Image Size (Faster/Less Memory) by a Factor

This setting will downsample the input images (i.e. reduce their pixel resolution) before processing them. The input image files will be resampled using a box average based on the scale factor size.

This setting has the greatest impact on the speed of the Pixels-to-Points processing. Reducing the number of pixels in the input images decreases the processing time dramatically. It will also slightly reduce the number of points in the generated point cloud, but by a smaller factor than the initial image reduction.
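
A box-average reduction of this kind can also be applied to images outside the tool. The Pillow-based sketch below (Pillow 9.1 or newer) shows the same general operation purely as an illustration; the file names are hypothetical, and the tool performs its own resampling internally.

    from PIL import Image

    def reduce_image(path, factor, out_path):
        """Downsample an image by an integer factor using a box (area-average) filter."""
        img = Image.open(path)
        new_size = (img.width // factor, img.height // factor)
        img.resize(new_size, resample=Image.Resampling.BOX).save(out_path)

    # Hypothetical usage: halve the resolution of a drone photo
    # reduce_image("DJI_0001.JPG", 2, "DJI_0001_half.JPG")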

Use Relative Altitude Based on Ground Height

The relative altitude setting will vertically shift the point cloud. Elevation measurements in GPS data are not very precise, so this setting overrides them by specifying a starting ground height for the first input image. For example, if the launch site is surveyed during UAV data collection, that precise elevation value can be set as the ground height. Subsequent calculated points in the output point cloud will then use the relative altitude to calculate their vertical positions.
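
Under a simplified reading of this setting, each output elevation becomes the specified ground height plus the camera's altitude relative to the first image, rather than the raw GPS elevation. The small worked example below only illustrates that idea; the numbers are hypothetical, and this is not the tool's internal computation.

    ground_height = 1623.4           # surveyed elevation of the launch point (m)
    first_image_gps_alt = 1640.0     # GPS altitude reported by the first image (m)
    later_image_gps_alt = 1695.0     # GPS altitude reported by a later image (m)

    relative_altitude = later_image_gps_alt - first_image_gps_alt  # 55.0 m above the launch point
    corrected_altitude = ground_height + relative_altitude         # 1678.4 m used for output heights
    print(relative_altitude, corrected_altitude)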

The Relative Altitude value will automatically populate from terrain sources to recommend a ground height for the image. The tool first checks loaded terrain data; if none is present, it queries the 10m National Elevation Dataset (NED), then the ASTER GDEM, and finally the 30m SRTM data, stopping at the first source that returns a valid elevation value.

Note: Obtaining the ground height from terrain data like NED, ASTER, and SRTM requires an internet connection.

Analysis Method

Specify the method used for the Structure from Motion (SfM) analysis. For more information see Analysis Method.

Quality

The quality setting controls how much examination is done to identify matching features in the initial sparse point cloud generation (for the incremental method only), as well as the resolution used in the point cloud densification process. In most cases, the difference between the Normal and High settings is not significantly noticeable, but using the High setting will increase the processing time.

Normal (Default)

The normal setting impacts the amount of feature identification and matching performed when first detecting feature points. This is only used in the Incremental Analysis Method. During the densification process, a setting of Normal will use half the full image resolution.

High

The high setting does additional feature identification and matching when first calculating the sparse point cloud. This setting also uses the full image resolution during the point cloud densification and mesh creation process. The high setting requires more memory, and if the densification and mesh creation run out of memory, the tool will attempt to rerun with the normal setting (half-resolution images for densification and mesh generation).

Camera Type

The camera type accounts for distortion in the image. Most consumer cameras are pinhole cameras, where the image can be mapped onto a planar surface. The camera type needs to be known in order to accurately reconstruct the three-dimensional structure. It typically only needs to be modified if the camera has a fisheye or wide field of view lens. The default value, Pinhole Radial 3, will calculate a best-fit model with 3 radial distortion parameters when locating the pixels in three-dimensional space.
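
A three-parameter radial model of this kind typically scales an ideal (undistorted) normalized image coordinate by a polynomial in the squared radial distance. The sketch below illustrates that general form; the coefficients are made-up values for demonstration, not parameters produced by the tool.

    def apply_radial3(x, y, k1, k2, k3):
        """Apply a 3-parameter radial distortion to normalized image coordinates (x, y)."""
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        return x * scale, y * scale

    # Hypothetical coefficients for illustration only
    print(apply_radial3(0.30, 0.20, k1=-0.12, k2=0.015, k3=-0.001))  # ~(0.2954, 0.1969)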

Save to File...

This option will save the Pixels-to-Points settings, input image references, and any added ground control points to a *.gmi2c file.

A *.gmi2c file is an ASCII based Global Mapper file that stores the settings, entered control points, and referenced files specific to the Pixels-to-Points tool.

Load From File...

Select this option to load a saved Pixels-to-Points tool setup from a *.gmi2c file.

Run

Once the input images have been loaded and all of the desired settings are selected, press the Run button to start the conversion process.

When the Run button is pressed, before beginning the processing, the application will check the expected memory requirements based on the input data and settings. If the process is expected to require more memory than the machine has available, a warning dialog will be displayed. This dialog will suggest an image reduction factor to facilitate reasonable processing of the input data based on the machine's resources.

Calculating Cloud/Mesh from Images

Once the process is run, this calculation dialog will display the progress. It will show the log as the process is running, as well as a progress bar and estimated finish time. The bottom of the dialog lists the path to the log file.

Results

A dialog will display the point cloud process summary when the process is complete. The output files will be loaded into the current workspace.