LiDAR vs Photogrammetry | Part 2 – Comparison
In the first part of this mini blog series, I discussed the use of airborne LiDAR (Light Detection And Ranging) versus photogrammetry, and what we can expect from advances in sensors and post-processing. In part two, I will elaborate on the topic and compare the two technologies.
For most applications of airborne LiDAR or camera data, accuracy is the most important requirement. Accuracy comes in two forms: absolute and relative. Absolute accuracy is how accurate the point cloud is in relation to known points in a given coordinate system. Relative accuracy is how accurate the point cloud is relative to itself: if you have good relative accuracy, points in your point cloud are where they are supposed to be in relation to the other points in the same cloud. This means you can have good relative accuracy but terrible absolute accuracy. It also means that good absolute accuracy depends on good relative accuracy.
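To make the distinction concrete, here is a minimal sketch (with made-up coordinates) of a point cloud whose internal geometry is perfect but which is shifted by a constant offset from its surveyed positions — good relative accuracy, poor absolute accuracy:

```python
import math

# Hypothetical coordinates: four surveyed points, and the same points as
# reconstructed in a point cloud that is internally consistent but shifted
# by a constant 0.50 m in x (a georeferencing error).
true_pts = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
cloud_pts = [(x + 0.50, y) for x, y in true_pts]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Absolute accuracy: distance of each cloud point from its known position.
absolute_errors = [dist(p, q) for p, q in zip(cloud_pts, true_pts)]

# Relative accuracy: compare a distance *between* points in each data set.
relative_error = abs(dist(cloud_pts[0], cloud_pts[1]) - dist(true_pts[0], true_pts[1]))

print(absolute_errors)  # every point is 0.5 m off: poor absolute accuracy
print(relative_error)   # 0.0: the internal geometry is perfect
```

The cloud would fit its surveyed points perfectly after a single constant shift, which is exactly why good relative accuracy is a precondition for good absolute accuracy.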
There is a popular belief that LiDAR has unparalleled accuracy in applications where the deliverable is a point cloud. While this is definitely true for terrestrial LiDAR, and a well-used argument from LiDAR manufacturers, when it comes to airborne LiDAR it is nowhere near as obvious. Airborne LiDAR uses a very accurate distance- and angle-measuring device, but it relies on the comparatively inaccurate position and orientation measurements of the UAV, which means the final data is of lower accuracy. Furthermore, airborne LiDAR also makes it difficult to gather high-resolution data in comparison to photogrammetry.
Airborne LiDAR sends out laser pulses and measures the time it takes for each pulse to return, but since the pulse's position and direction in space can only be determined relative to the aircraft's coordinate system, given by IMU data and GPS positioning, there are dependencies very similar to those in photogrammetry. Photogrammetry uses triangulation and pixel matching in post-processing software to establish point clouds, and does a very good job of creating an accurate 3D model. Another difference between photogrammetry and LiDAR is that LiDAR scans progressively through the scene rather than taking a snapshot of it at one moment, so any discrepancy in its measured movement during the scan distorts the resulting data further. Think of it as a coordinate system inside a coordinate system: while the inner coordinate system (produced by the LiDAR itself) may be accurate, the outer coordinate system (determined by the aircraft) is not only less accurate, it is constantly changing position during a progressive scan, making it difficult to achieve high accuracy.
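The ranging principle itself is simple: the distance to the target is half the round-trip travel time multiplied by the speed of light. A minimal sketch (the pulse timing used here is an illustrative value):

```python
# LiDAR ranging by time of flight: distance = speed of light * round-trip time / 2.
C = 299_792_458.0  # speed of light in m/s

def range_from_tof(round_trip_seconds):
    return C * round_trip_seconds / 2.0

# A pulse that returns after roughly 667 nanoseconds has travelled to a
# target about 100 m away and back.
print(range_from_tof(667e-9))
```

This measurement is extremely precise on its own; the accuracy problems discussed above come from placing it in the aircraft's moving, less precisely known coordinate system.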
The positional information can be made more accurate by using RTK/PPK GPS systems. A much bigger problem, though, is the angular measurement. The rate and accuracy with which normal drone IMUs measure angular movement is nowhere near what the LiDAR itself is capable of. Even assuming you knew precisely where the sensor was positioned, and used top-of-the-range military-spec angular measurement sensors (unlikely, as it would require an enormous budget), the best you can generally obtain is on the order of 1/100th of a degree. This sounds quite precise but results in a large error when you are flying high over a site. The lightweight sensors currently used on drones are typically unable to achieve better than 1/10th of a degree of angular error.
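The reason altitude matters is simple trigonometry: the ground error caused by an angular error is roughly the flying height times the tangent of that error. A short sketch (the 100 m altitude is an illustrative value):

```python
import math

# Ground error caused by a pointing (angular) error grows with flying height:
# error ≈ altitude * tan(angular_error).
def ground_error(altitude_m, angular_error_deg):
    return altitude_m * math.tan(math.radians(angular_error_deg))

# 1/100th of a degree (high-end sensor) vs 1/10th of a degree (typical
# lightweight drone IMU) at 100 m above ground level:
print(round(ground_error(100.0, 0.01), 3))  # about 0.017 m
print(round(ground_error(100.0, 0.1), 3))   # about 0.175 m
```

So even a military-spec sensor contributes a couple of centimetres of error at a modest flying height, and a typical drone IMU contributes roughly ten times that.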
With photogrammetry, the accuracy comes from each photo being a momentary, complete and precise record of everything seen within the frame, so it does not require such accurate position and orientation data from the sensors. Within each image there is always a precise relationship between the angles of different pixels, so once you know the internal lens and camera parameters there is no substantial source of inaccuracy within an image.
A potential source of error is the process of stitching many different images together to make the 3D model. However, this can be countered by a high degree of image overlap and the use of Ground Control Points (GCPs). GCPs are positions marked and surveyed on the ground, visible in the aerial imagery, to which the data is precisely fitted during the processing stage. This ensures that the entire 3D model is accurate, typically to around 5 cm across the site, and accuracies as high as 5 mm have been demonstrated.
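As a toy illustration of the fitting step: real photogrammetry software solves for a full transform (and camera parameters) in a bundle adjustment, but the core idea can be sketched by estimating just a constant translation from the residuals between surveyed GCPs and the same marks found in the model (all coordinates below are made up):

```python
# Surveyed GCP positions, and where the same marks ended up in the model.
gcps = [(100.0, 200.0), (150.0, 260.0), (120.0, 240.0)]
model = [(100.4, 199.7), (150.4, 259.7), (120.4, 239.7)]

# Least-squares translation: the best-fit constant shift is the mean residual.
n = len(gcps)
dx = sum(g[0] - m[0] for g, m in zip(gcps, model)) / n
dy = sum(g[1] - m[1] for g, m in zip(gcps, model)) / n

# Apply the fitted shift to the model coordinates (here just the GCP marks).
corrected = [(x + dx, y + dy) for x, y in model]
```

With a consistent model and well-surveyed GCPs, the residuals after fitting shrink to the relative accuracy of the reconstruction, which is why GCPs lift absolute accuracy so effectively.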
Pros and cons
As we have seen above, there are limitations as well as advantages to both airborne LiDAR and photogrammetry.
Photogrammetry relies on ambient light, so low-light conditions are detrimental to producing a useful deliverable. It also needs surface texture to identify unique points across several pictures, so snow-covered ground or a large asphalt area is difficult, if not impossible, to post-process. Photogrammetry cannot penetrate vegetation canopy, making it impossible to produce an accurate Digital Terrain Model (DTM). Modeling narrow objects such as transmission lines, pipes and sharp-edged features with photogrammetry also results in low conformance, because photogrammetry makes rough approximations and then smooths the model out to remove noise. An unfortunate side effect is that conformance at edges declines.
Airborne LiDAR is an expensive choice if you are measuring bare-earth mine sites, earthworks projects, and other areas that are not occluded by vegetation canopy. LiDAR models are monochrome and generally have much lower resolution/point density. Accuracy is not as good as with photogrammetry, and positioning accuracy can be as poor as 15 to 20 cm.
While accuracy is usually priority number one, it is often followed closely by cost. Airborne LiDAR is more expensive than photogrammetry, period. This is mostly because airborne LiDAR is an active scanner requiring a high-precision GNSS receiver and an equally advanced IMU, while photogrammetry uses passive sensors that rely on available ambient light and ground control points, and its post-processing places fewer demands on the GNSS and IMU.
There is a distinct difference between the deliverables from airborne LiDAR and photogrammetry. While both can generate point clouds and 3D models, airborne LiDAR data is often of significantly lower resolution and lacks the color information provided by an RGB camera. Finally, water bodies and transparent surfaces are not accurately captured by either photogrammetry or LiDAR; however, this is rarely a problem, as you can interpolate from the nearest shoreline, water being effectively a level surface.
Photogrammetry is well suited to mapping mine sites, earthworks, quarries, and other areas that are not occluded by vegetation. Typical use cases include surveying, generating Digital Surface Models (DSM), and volume calculations. There are of course many other use cases that involve sensors other than RGB cameras and that use photogrammetry to achieve their end results; these are, however, not relevant in a comparison with LiDAR.
LiDAR, while capable of doing what photogrammetry does (disregarding the previously discussed limitations), is particularly well suited to mapping areas occluded by ground vegetation, mapping in low-light conditions, and modeling narrow objects such as transmission lines, pipes and sharp-edged features. Typical use cases include generating Digital Terrain Models (DTM) and calculating biomass in forestry.
In comparing these two technologies, it is essential to understand that neither is simply better than the other. Both have their applications, and both have their limitations. It boils down to understanding the differences in approach and the capabilities and limitations of the two technologies, and then making an educated decision based on business need, cost and ROI.
Some of what I have written above is subject to change. Once focal plane array (FPA) LiDAR, or solid-state LiDAR, is available on the market, LiDAR will be able to frame a shot much like taking a photo, capturing a momentary scene with an array of detectors, as opposed to the progressive scan discussed earlier. FPA LiDARs are expected to become cheaper, smaller and more geometrically accurate than current LiDAR technology.
Roger Öhlund, CMO SmartPlanes