There are two issues with the default point2dem grid:
The current algorithm sets the grid size to the "typical" ground sample distance (GSD), that is, the estimated distance on the ground between two neighboring camera pixels. That is not right; a good rule of thumb is that the DEM resolution should be about 4x this ground sample distance.
Currently, this "typical" ground sample distance is obtained by taking the median of all estimated distances between two neighboring pixels on the ground. That usually works, but for LRO NAC and other linescan cameras, the ground sample distance in one direction and the other can be very different, e.g., 0.5 m along a scanline, and 2 m across scanlines. The median of a set of numbers some of which are 2 and some are 0.5 will be one of the two, while the correct solution here is more like 1 m. I think given the list of all GSD values encountered by poinit2dem, one should take the ones in the 25% to 75% range, and average them, rather than taking the median. This will avoid worst outliers and the final result in this example should be on the order of 1 m.
There is already some logic like this in mapproject; perhaps it can be factored out and used in both places.
point2dem also currently gives bad results close to the poles when used with default options. It should switch to a polar stereographic projection (unless the user explicitly chooses a projection) when, say, within 20 degrees of a pole (the precise threshold is to be determined).
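A minimal sketch of the selection logic (again not ASP code; the function name, the 20-degree cutoff, and the PROJ strings are assumptions for illustration):

```python
def auto_projection(median_lat_deg, median_lon_deg, user_proj=None, pole_cutoff_deg=20.0):
    """Pick a default projection for the output DEM.

    Honor a user-specified projection. Otherwise, if the point cloud is
    within pole_cutoff_deg degrees of a pole, use a polar stereographic
    projection centered on that pole. The cutoff value is a placeholder.
    """
    if user_proj is not None:
        return user_proj
    if median_lat_deg >= 90.0 - pole_cutoff_deg:
        return "+proj=stere +lat_0=90 +lon_0=%g" % median_lon_deg
    if median_lat_deg <= -90.0 + pole_cutoff_deg:
        return "+proj=stere +lat_0=-90 +lon_0=%g" % median_lon_deg
    return None  # fall back to the existing default behavior
```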
Also, a single outlier not filtered out by the intersection error can result in a huge DEM mostly filled with empty space (or point2dem can crash after running out of memory). A simple percentile-based filter (like the one we already use for the triangulation error), applied to each of the projected_x, projected_y, and height_above_datum coordinates of each point, should be able to throw out points that are far outside the distribution of most of the values. Exposed parameters should control this new outlier removal.
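A sketch of that filter, loosely modeled on the percentile/factor style used for the triangulation error (the function name and the pct/factor parameters are hypothetical stand-ins for whatever gets exposed):

```python
import numpy as np

def percentile_outlier_filter(points, pct=75.0, factor=3.0):
    """Drop points far outside the bulk of the distribution.

    'points' is an N x 3 array of (projected_x, projected_y,
    height_above_datum). For each coordinate independently, compute the
    [100 - pct, pct] percentile range, inflate it by 'factor', and keep
    only the points inside the inflated range.
    """
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for col in range(pts.shape[1]):
        lo, hi = np.percentile(pts[:, col], [100.0 - pct, pct])
        mid = 0.5 * (lo + hi)
        half = 0.5 * (hi - lo) * factor
        keep &= (pts[:, col] >= mid - half) & (pts[:, col] <= mid + half)
    return pts[keep]
```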