QA/QC Challenges with Lidar
Howard is the co-founder and CEO of CompassData, one of three businesses he has founded based on GIS, GNSS and wireless technology. He developed the innovative idea of owning and licensing Ground Control Points (GCP), and created the CompassData GCP Archive to provide standardized, accurate GCPs at five precision levels. Howard presents frequently at seminars and conferences around the world and has authored articles for technical journals on geology, survey, GIS and fleet management topics. Memberships include GITA, ASPRS, CSIA, APWA, and MAPPS.
To ensure a quality product in any LiDAR project, good Quality Assurance and Quality Control (QA/QC) processes must be in place from the beginning. That is a common-sense statement, but the devil is in the details. A QC program should be part of the initial project design, measuring and checking each step of the process. Then, to verify results, QA is conducted during the process and again before delivery to ensure the project meets client requirements.
LiDAR is a rapidly evolving technology in all aspects. The hardware, software, and processes continue to develop, and client expectations are rising, too. Implementing a QA/QC program in every project will help to ensure continuous quality improvement, increased productivity, and decreased cost from project to project with best practices that are repeatable and measurable.
The huge volume of data that is generated by LiDAR can be overwhelming. So, having smart processes designed to build in quality, such as using referential Ground Control, aids in processing the raw data correctly in the first place.
It’s a good idea to base your QA/QC plan on known standards and professional judgments, both internal and third party, and to include those in the process plan. The importance of good professional judgment cannot be overstated, as the standards themselves are evolving along with the technology.
In fact, QA/QC is needed twice. First comes Fundamental Vertical Accuracy (FVA) testing of the initial data, which verifies that the data are ready for ‘bare earth’ processing, building recognition, or whatever next steps have been determined (see Figure 1 for an example). This is followed by a Consolidated Vertical Accuracy (CVA) test, performed to verify the bare-earth model.
By finalizing with an independent data validation, project managers can assure that they are producing the very best for their clients.
In LiDAR, just as in photogrammetry, there are typical points of failure, and because we know where those generally fall, we can usually catch them through standardized QA/QC so they can be corrected.
Figure 1: A Fundamental Vertical Accuracy (FVA) test, which provides a check on LiDAR accuracy against ASPRS accuracy standards. All the points in columns A through I are surveyed values, supplied either from Ground Control Points or from new field data collection. The LiDAR vendor provides the data in column J; the difference is then calculated, and the differences drive a statistical summary of the LiDAR quality.
Coordinate system: UTM Zone 14 North, NAD83, NAVD88. All units in meters where applicable. MSL = Geoid09.
[Statistical summary table not reproduced; recoverable values include Z Min = −0.15 and a vertical accuracy of 0.140 computed as RMSE(z) × 1.9600.]
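As a rough illustration of the statistics behind a test like Figure 1, the sketch below pairs surveyed checkpoint elevations with the LiDAR-reported elevations, computes the differences, and summarizes them. The 1.9600 multiplier on RMSE(z) gives vertical accuracy at the 95% confidence level. The function name and sample values are hypothetical, for illustration only.

```python
import math

def fva_summary(surveyed_z, lidar_z):
    """Summarize vertical differences between surveyed checkpoints
    and LiDAR-reported elevations (all values in meters)."""
    diffs = [l - s for s, l in zip(surveyed_z, lidar_z)]
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {
        "z_min": min(diffs),
        "z_max": max(diffs),
        "mean": sum(diffs) / len(diffs),
        "rmse_z": rmse,
        # Vertical accuracy at the 95% confidence level: RMSE(z) x 1.9600
        "accuracy_z_95": 1.9600 * rmse,
    }

# Hypothetical checkpoint data (meters)
surveyed = [251.32, 248.10, 253.77, 249.05]
lidar = [251.40, 248.02, 253.70, 249.16]
print(fva_summary(surveyed, lidar))
```

A real FVA report would carry many more checkpoints, distributed across open, unobstructed terrain, but the arithmetic is the same.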
Common Error-Producing Issues in LiDAR Projects
When the LiDAR collection is first planned, project managers design the flight lines to have small overlaps in the data. The overlap is crucial for a well-balanced and controlled texture, ensuring that the data come together smoothly. If overlap is missing, it is difficult to stabilize the flight lines by sewing them to each other. For example, in a grid with six flight lines, line four is tested against lines three and five. If there is a gap between the lines, the final results lack stability.
Another typical source of error in LiDAR data has to do with the technical components of the collection and with project realities such as topography and weather during collection. It is a complex system, pulling together the GNSS from the airplane, the IMU, and the LiDAR sensor. These components have to work flawlessly together as the plane flies quickly over the landscape.
The GNSS technology is solid, but cycle slips or similar failures still occur occasionally. The IMU component also works very well, but there are some technical problems that can occur here too: moving bias errors (for instance, ‘sensor drift’) and random errors. The GNSS unit helps to stabilize the IMU, but minute problems can happen.
Related to the IMU: before every LiDAR collection flight there is a calibration protocol, but it is not always done consistently or completely. This is a sensitive thing to point out, because it brings human error into the equation. Consider that the calibration protocol for a particular project takes around 15 minutes before every single flight. Occasionally, the pilot doesn’t want to wait the full 15 minutes. It has gone fine for every previous flight, every day for several days, so why not take off after eight minutes, or nine?
Different IMU models require different calibration procedures, and atmospheric conditions at the time of collection can affect the calibration (when the plane compensates for strong wind, for instance). A shortened or skipped calibration does not create an error by itself, but it can lead to one. And finally, the laser unit is a complex piece of technology in its own right, and problems can occur there too.
None of the above issues is unusual, and most can be fixed through data processing as long as a stable set of Ground Control Points (GCP) is available to ensure the collected data are accurate. For instance, an evenly distributed set of GCPs, surveyed in the field, strengthens the entire texture of the collection. The line four mentioned above might not contain a GCP, but because it is tied to lines three and five, the GCPs help stabilize the entire collection. GCP surveys themselves can be a source of error if not performed properly, and must be reviewed and measured as part of the QA/QC process.
Once the LiDAR data are collected, it is up to solid QA/QC to find and correct the issues mentioned above.
Millions of points are gathered in a LiDAR collection, so in processing the point cloud is usually cut into tiles to create manageable amounts of data.
LiDAR offers a number of different uses, so for purposes of this article, let’s assume we’re working with a LiDAR collection to map flood plains. Within the tiles of data, the operators work through map layers to remove the vegetation and the man-made structures from the data. We use the software to remove those items and to calculate areas prone to flooding, looking for points on the ground among the vegetation in order to find the actual ground elevation: ‘bare earth.’
The operator’s competency is very important at this stage, in order to direct the software algorithms correctly. Other uses of this technology that require a different density of points and slightly different processing include looking for power lines and other items at a height above the ground, urban infrastructure modeling, etc.
Once the processing has been done to remove the vegetation, man-made structures and artifacts from the data, the QA/QC process should be brought back into the picture. We look at random points within the tiles, asking what elevation the model reports here, or here, and we reference the model against Ground Control Points. Using GCPs distributed through the series of tiles, across the full project, we can assemble a verification of the elevations. We can verify that the model matches reality and is not skewed by dense vegetation or a crumbled structure that the software did not recognize and therefore modeled inaccurately.
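A minimal sketch of this spot-check, assuming each tile is a simple list of classified bare-earth (x, y, z) points and using a nearest-neighbor lookup (a production workflow would interpolate a TIN or gridded surface instead). The function name, tolerance, and sample coordinates are hypothetical:

```python
def check_gcps(bare_earth_points, gcps, tolerance=0.15):
    """Compare each GCP elevation to the nearest bare-earth point.
    bare_earth_points: list of (x, y, z); gcps: list of (x, y, z).
    Returns a list of (gcp, dz, within_tolerance) tuples, dz in meters."""
    results = []
    for gx, gy, gz in gcps:
        # Nearest point by squared horizontal distance
        nearest = min(bare_earth_points,
                      key=lambda p: (p[0] - gx) ** 2 + (p[1] - gy) ** 2)
        dz = nearest[2] - gz  # model elevation minus surveyed elevation
        results.append(((gx, gy, gz), dz, abs(dz) <= tolerance))
    return results

# Hypothetical tile and GCPs (meters)
tile = [(500.0, 100.0, 251.40), (505.0, 102.0, 251.10), (510.0, 98.0, 250.95)]
gcps = [(500.2, 100.1, 251.32), (509.8, 98.3, 251.40)]
for gcp, dz, ok in check_gcps(tile, gcps):
    print(gcp, round(dz, 3), "PASS" if ok else "FLAG")
```

Flagged points are then inspected: a large dz near a GCP often marks exactly the kind of residual vegetation or collapsed structure the article describes.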
Standards in QA/QC
The basic tenet of standards such as FEMA’s PM61 is that Ground Control should be used to develop the surface and that, once collection is done, an independent QA/QC should be conducted to verify the results; that remains a valid requirement. Current standards generally reference each other, and with rapidly changing LiDAR technology they are constantly in a state of revision. This changing environment requires professional judgment in the development of each program.
Regardless of how careful the initial collection is and how thoroughly the project is planned, it’s still vitally important to require independent QA/QC performed by a neutral independent third party before turning a project over to a client.
“Just as we use ground control to improve the product, using independent checkpoints to validate, we should use independent software to check the data for quality,” said Dr. Chuck O’Hara, President of Spatial Information Solutions. “LiDAR production happens through a series of teams: the flight itself, the surveying team providing ground control, then the processing team, and an independent check on all of that. It’s rare that the same people are working on it across the board, which makes an independent QA/QC process even more crucial.”
O’Hara’s product automates the verification process using ground check points and produces a report that meets requirements of such organizations as FEMA.
Whether from a software supplier, an independent third party firm, or as part of your own internal QA/QC, it’s best practice to check your results through the FVA and CVA processes and to use Ground Control to validate throughout. Constant process improvement means successful projects and happy clients.
In the final analysis, it’s the client requirements that we should be most concerned about. If we produce consistently accurate results, using standardized processes so that we can do this faster and smarter each time, the clients will continue to order the projects, and the data we collect and share with the world will continue to be seen as valuable. QA/QC processes ensure this.