Winter 2011

Lidar Update

Terrestrial Explodes, Software Still Lags

Fig. 1

Power lines outside Toronto, Ontario, Canada, courtesy of Optech. This image is also on the cover.

Fig. 2

Taylor Creek Reservoir, Orange and Osceola counties, near the St. John’s River in Florida. The image shows the full LiDAR point cloud displayed by elevation, collected in February 2010 for the St. John’s Water Management District. The project was captured with Terrapoint’s helicopter-mounted sensor at approximately 14-18 points per square meter. Courtesy of Dewberry.

Fig. 3

Lynx mobile LiDAR scanner, courtesy of Optech.

Fig. 4

Toronto, Ontario, with data collected by the Lynx alone, using a dual-sensor-head V200 system, courtesy of Optech.

Fig. 5

I-25 and E-470 interchange, south of Denver, Colorado, flyover showing color elevation, courtesy of Woolpert.

Fig. 6

LYNX Mobile Mapper data of a series of tracks near Tulsa, Oklahoma, courtesy of Optech Inc. (rendered in LP360 by QCoherent Software, LLC).

By , Writer
Portland, Ore.

Looking at an image of the underside of a bridge derived from data collected by a boat-mounted LiDAR system, engineers can detect spalling, a type of deterioration in concrete. Surveyors can collect data to within a few centimeters while driving in traffic with a truck-mounted LiDAR system, at much less risk than by standing with a tripod on the side of a busy freeway (where sub-centimeter accuracy is achievable). By scanning facades with LiDAR and fusing this data with aerial digital imagery, companies are creating detailed 3D building models that will soon be commonplace in digital maps on smart phones and car navigation systems. These are just a few of the rapidly growing number of applications made possible by continuing improvements in LiDAR technology. “LiDAR systems are getting more sophisticated and standardized. The American Society for Photogrammetry and Remote Sensing (ASPRS) is considering a proposal to establish a LiDAR division,” says Jim Van Rens, President of Riegl USA.

LiDAR has three advantages, says Paul DiGiacobbe, Assistant Vice President and National Data Acquisition Manager for HNTB: safety, because it allows data collection without putting people in harm’s way; speed, because the design-build approach cuts the survey portion of projects from months to days; and survey-grade accuracy. “All of that has been there for the past year and a half or so. Initially, everyone jumped on board. Now, we are starting to develop best practices,” he says. “Expectations for horizontal and vertical accuracies have skyrocketed,” says Lewis Graham, President and CTO, GeoCue. “For aerial LiDAR, we used to be happy with 10-centimeter accuracy; now I’d like to get 2 centimeters or better.”

Due to hardware improvements, the point density of LiDAR data has been rising steeply. “Ten years ago, the Federal Emergency Management Agency (FEMA) was collecting points at 5-meter intervals,” says Dave Maune, Senior Project Manager for Mapping and GPS/GIS Services at Dewberry. “High density sensors now allow us to collect 8 points per square meter.” Other recent advances, Maune says, include the use of intermediate returns, which is due to software improvements; ground-based and mobile LiDAR, rare a couple of years ago and now very popular, which is due mainly to hardware improvements; and the use of full waveform LiDAR data, which is due mainly to hardware improvements, but takes good software to fully exploit.
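To put those figures in perspective, a back-of-the-envelope comparison (a sketch using only the numbers quoted above) shows how far nominal point density has come:

```python
# Rough comparison of the nominal point densities mentioned above.
# A 5-meter post spacing corresponds to one point per 25 square meters.
legacy_spacing_m = 5.0
legacy_density = 1.0 / (legacy_spacing_m ** 2)   # points per square meter
modern_density = 8.0                             # points per square meter

print(f"Legacy density: {legacy_density:.2f} pts/m^2")
print(f"Modern density: {modern_density:.0f} pts/m^2")
print(f"Increase: roughly {modern_density / legacy_density:.0f}x")
# -> roughly 200x more points per square meter
```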

Choosing a Sensor

As they have improved, LiDAR scanners have also diversified, to the point that deciding which one to buy “requires quite an analysis of a 3D matrix of technical choices,” says Alastair MacDonald, Managing Director of TMS International Ltd. “For example, Optech’s Pegasus uses multiple channels and multiple lasers, while Leica will soon be releasing a single-laser system with a beam splitter. Different devices use different scanning patterns (sawtooth, sinusoidal, raster, elliptical, etc.) and different wavelengths. Wavelengths can be green, red, IR, short IR, etc. The laser can be solid state or semi-conductor. The power levels vary as well and that affects the operational flying height.”

Manufacturers’ prices also vary widely, according to Brian Bailey, Regional Sales Manager for Optech. “However,” he adds, “the customer has to be educated that for less money you get significantly less. The price for survey-grade accuracies has come down. Even with the new technologies, prices are slowly coming down, so companies are able to buy more sensors.” Large, established service companies tend to have strong allegiances to particular LiDAR vendors, while new companies are far more amenable to persuasion.


LiDAR hardware is still advancing rapidly, driven by customer demand and competition. Besides increases in the repetition rate (colloquially, the “rep rate”), significant recent hardware innovations include the maturing integration of direct georeferencing, which yields many more points and more precise relative and absolute positioning, and greater miniaturization, which makes the sensors more portable and adaptable to different types of platforms, says Karen Schuckman, an instructor in the Department of Geography at Pennsylvania State University. Measurements of the intensity of the returns have also improved significantly, adds Graham, so that LiDAR data is starting to look like imagery. “For example,” he says, “you can now use mobile LiDAR to read the letters in a traffic sign.”

In September 2010, Optech launched the Lynx M1, which has a 500-kHz rep rate, a solid state drive, and multiple cameras. “We designed the Lynx Mobile Mapper from the ground up for mobile mapping applications,” says Bailey. “It is the first LiDAR sensor with a 360-degree unobstructed field of view. We are pushing the edge with an accuracy of 8 millimeters at a 200-meter range. In addition to being a GIS and mapping instrument, the Lynx allows surveyors to work at highway speeds, putting them out of harm’s way.”

“The point density for what we do is very good; we don’t need any more,” says Michael Frecks, President of Terrametrix. “It is now up to 600,000 points per second with our system. For some things in the future, we could use more points, but it is not our focus now. The accuracy is also good.”

Of course, more data means larger files, which require more computing power to process. “Our LiDAR files are typically 250 megabytes to 1 gigabyte in size,” says DiGiacobbe. “For PCs, you need high-end graphics, a 64-bit operating system, and as much RAM as possible; otherwise you may run out of memory when you are trying to load the file.”
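As a rough illustration of why RAM becomes the limiting factor, the in-memory footprint of a point cloud can be estimated from the point count and the per-point attributes kept. The byte counts below are assumptions for illustration, not a description of any particular software package.

```python
# Hedged sketch: estimate the in-memory size of a LiDAR point cloud once it
# is expanded from a compact file format into working arrays.
def estimate_memory_gb(num_points: int,
                       bytes_per_coord: int = 8,   # x, y, z as 64-bit floats
                       extra_bytes: int = 8) -> float:
    """Crude per-point budget: 3 coordinates plus intensity, return number,
    classification, GPS time, etc. lumped into `extra_bytes`."""
    per_point = 3 * bytes_per_coord + extra_bytes
    return num_points * per_point / 1024 ** 3

# A 1 GB LAS file at roughly 30 bytes per record holds on the order of
# 35 million points...
points_in_file = 35_000_000
print(f"~{estimate_memory_gb(points_in_file):.1f} GB in memory")
# ...so working copies can match or exceed the file size once attributes are
# unpacked, hence the advice about 64-bit operating systems and ample RAM.
```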

ADSP and Firmware

From Riegl’s perspective, says Van Rens, the big step forward is the switch to advanced digital signal processing (ADSP), much as was done extensively for radar beginning in the 1960s. ADSP, in turn, is dependent on firmware, which is getting much more sophisticated. “In electronics and computing, firmware is generally a small program, data structure, or control loop that operates a device,” Van Rens explains. “We now incorporate the advances in digital signal processing in the laser scanner’s firmware, enabling us to more effectively take advantage of these advances. In 2003, when we introduced full waveform processing, we had to create a sophisticated (for the time) software package that would process the waveform. We have moved on to pull this processing back into the firmware of the LiDAR scanner, so that it operates internally, in real time, as opposed to outside, in post-processing software.”

We need the LiDAR equivalent of photogrammetric aerial triangulation. It would be a software application, though there may be ways for hardware to improve that as well.

–Dave Maune, Dewberry

For the intensity information, analog systems initially used 8-bit channels, then 12-bit channels, and now 16-bit channels. These allow users to display the digital intensity levels on a scale of 0 to 1, 0 to 255, or, with 16-bit channels, 0 to 65,535. “ADSP,” says Van Rens, “enables us to calibrate the sensor to get up to 3,160,000 intensity levels, which provides dramatic image response. For example, in open pit mines, the derived front reflectance detects disturbed earth and allows operators to locate strips of high-quality coal.”
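Displaying those intensity channels means rescaling whatever bit depth the sensor records into the range a screen can show. The snippet below is a generic NumPy illustration of that rescaling, not a description of any vendor’s calibration; the percentile stretch is an arbitrary, illustrative choice.

```python
import numpy as np

def scale_intensity_for_display(intensity: np.ndarray, bit_depth: int) -> np.ndarray:
    """Map raw intensity values (8-, 12-, or 16-bit) to 0-255 for display.

    A percentile stretch keeps a few very bright returns from washing out
    the rest of the image.
    """
    max_val = 2 ** bit_depth - 1
    vals = np.clip(intensity, 0, max_val).astype(float)
    lo, hi = np.percentile(vals, [2, 98])
    stretched = np.clip((vals - lo) / max(hi - lo, 1.0), 0.0, 1.0)
    return (stretched * 255).astype(np.uint8)

# Example: synthetic 16-bit intensities (0..65535)
raw = np.random.randint(0, 2**16, size=1_000_000)
display = scale_intensity_for_display(raw, bit_depth=16)
print(display.min(), display.max())  # 0 255
```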

ADSP technology is in the products that are being fielded right now. However, there is always a lag on the software side. Therefore, Van Rens says, Riegl is studying each software package and showing its clients how to change the range that the software uses to display the intensity information by making a programming change.


Advances in hardware are of little use unless data processing software keeps up. “Software is the key,” says Frecks. “Even before ground-based mobile mapping, software was the bottleneck for aerial LiDAR,” says Bailey. “We can only progress the technology as fast as computer manufacturers and software vendors progress it. We spend a lot of time working with software vendors.” Tim Blak, Project Manager for Geodesy & Remote Sensing at Dewberry, puts it even more strongly: “We continue to see the same that we’ve seen for the past year: higher repetition rates and integrations with cameras and RGB sensors, but no new revolutionary software that increases production. The software is not keeping up at all.”

Blak also points to another problem: “In the production environment, a single software product does probably 80 percent of the work. It works well, but it is not terribly efficient. We need to get to 95 percent, and we have to do the rest of the work by hand. We now see a lot of this processing done overseas, especially in India and China, because we cannot afford the cost of the manual processes. Their first products were not that good; however, they are learning and constantly improving to where we will soon not be able to compete with such a highly skilled workforce.”

Historically, most LiDAR software was developed by vendors who needed it for their own in-house use, says Schuckman. “Most of the new commercial off-the-shelf (COTS) software was optimized for LiDAR,” she explains, “because the unique topology of point clouds and the things you do with LiDAR data (such as turn points on/off by attribute) requires fairly specialized processing. It is challenging for end users to purchase software packages that are specific to LiDAR data and integrate them into their organization, but it is a necessary investment. There are some open source tools, but we still have a ways to go to develop open source tools that users can access and customize.”

There are about ten software packages on the market to process LiDAR data but, according to MacDonald, they are all different and not all are able to deal with the types of data that may be input to them. “Some manufacturers don’t sell software; some do but it is not always as sophisticated as what you can buy from other vendors who are active in mapping operations,” he says.

The primary design platforms for engineers, such as Bentley’s MicroStation, can now display LiDAR point clouds inside CAD, which is helping make LiDAR more mainstream. “Until three months ago,” says DiGiacobbe, “only specialists were able to view LiDAR point clouds using advanced software packages; now every CAD platform becomes a LiDAR viewer.”


According to Layton Hobbs, CP, Practice Leader at Woolpert Inc., airborne LiDAR is still not fully mature and will continue to evolve as waveform digitizing takes hold and new classes of sensors emerge that will move the bar even higher. Most experts agree that, as its spatial resolution and accuracy continue to improve, airborne LiDAR is undergoing evolution, rather than revolution.

Terrestrial/Mobile Mapping

Experts seem unanimous that mobile mapping — which has emerged over the past couple of years and uses LiDAR scanners mounted on cars, trucks, and trains — is “where the most excitement is these days” (Schuckman), “spreading like wildfire” (Charles Toth, Senior Research Scientist, Center for Mapping at The Ohio State University), “revolutionary” (Graham), “the biggest recent advancement” (Blak), and “the biggest step change I’ve ever seen in the marketplace, opening up a new array of business opportunities” (MacDonald).

Mobile mapping is enabling survey-grade accuracy under extremely challenging conditions. For example, Graham points out, it is now possible to survey the underside of a bridge using a mobile mapper mounted on a watercraft and collect the data with an accuracy of a couple of centimeters, despite the rocking of the boat. It is very popular with state and local departments of transportation, because it allows them to get surveyors out of danger on busy streets and highways and greatly reduces the impact of surveys on traffic flow. Therefore, according to Van Rens, mobile mapping might make big advances into the transportation infrastructure sector. Additionally, Frecks points out, “it is making it more cost effective to do things we have already done before and is allowing us to do things with the laser data that have not been done before because of cost.”

Three or four years ago, Graham points out, there were no commercially available metric systems; now there are half a dozen vendors selling mobile systems. Microsoft and Google are probably testing LiDAR capability on their vehicles for even better street and city views, says Toth.

One perfect application for mobile LiDAR is for big power companies to fully map their distribution systems, as required under new rules by the North American Electric Reliability Corporation (NERC) and the Federal Energy Regulatory Commission (FERC). Three of the biggest power companies have to map about 30,000 miles of power lines by early next year, says Van Rens.

What triggered the spread in mobile mapping? Prior to Optech’s introduction of the Lynx system, there were no high-accuracy commercial systems. According to one theory, after Optech launched Lynx, other vendors, surprised at the speed at which the system was adopted, decided that it was safe to buy into this technology. Riegl was the next vendor to introduce a mobile LiDAR system, then Trimble, using a laser scanner from Riegl.

Graham points to three challenges that mobile mapping has introduced. First, on the ground, GPS outages (in urban canyons, in tunnels, under bridges, etc.) require supplemental ground control. Second, now that mobile mapping has enabled LiDAR to collect assets and features at very high accuracy, it has heightened the importance of fusing LiDAR data, which is used to locate features, with imagery, which is used to identify them. Third, the accuracy and density of the points collected by mobile LiDAR are so much higher than for airborne LiDAR (for example, 2,000 pulses per meter, an order of magnitude more than what you can do from a helicopter) that it is now possible to do real 3D. For example, it is now possible to collect the elevation of highways, railroad tracks, curbs, etc., which cannot be done with airborne LiDAR.

To avoid excessive sampling of the same area when a data collection vehicle is stopped, Toth recommends that the LiDAR systems be coordinated with vehicle speed, a feature that is already available in some systems.
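Where a system does not coordinate the scanner with vehicle speed, a similar thinning can be approximated in post-processing. A minimal sketch, assuming per-point GPS time stamps and a synchronized trajectory file with speeds, might simply drop points captured while the vehicle was effectively stationary:

```python
import numpy as np

def moving_points_mask(point_times: np.ndarray,
                       traj_times: np.ndarray,
                       traj_speeds: np.ndarray,
                       min_speed_m_s: float = 0.5) -> np.ndarray:
    """Return a boolean mask keeping only points collected while the vehicle
    was moving faster than `min_speed_m_s`.

    `traj_times` (increasing) and `traj_speeds` are assumed to come from the
    navigation solution; speeds are interpolated to each point's time stamp.
    """
    speeds_at_points = np.interp(point_times, traj_times, traj_speeds)
    return speeds_at_points >= min_speed_m_s

# Usage sketch:
# keep = moving_points_mask(t_points, t_nav, v_nav)
# points = points[keep]
```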

Aerial vs. Terrestrial

Opinions differ as to whether airborne or ground-based mobile LiDAR will expand faster. According to Blak, mobile will expand faster than airborne, because the platforms are cheaper and include platforms of opportunity, such as trains. Van Rens, however, argues that in the short term, airborne LiDAR will grow more because it has been around longer than terrestrial LiDAR and, therefore, is more mature and has more sophisticated users.

Certainly aerial LiDAR and photogrammetry are very complementary with ground-based mobile mapping. The aerial view is seamless and synoptic, while the street perspective provides more details. For example, by fusing the top view of a building and its façade, it is possible to generate true 3D models, Blak points out. The same applies to views from the top and the side of a cliff. Urban and highway data models still need some of the accurate measurements that can only be obtained from an aerial survey, says MacDonald.

“In mapping a transportation corridor,” says Schuckman, “the ground view can capture the elevation, as well as appliances, assets, etc. that would not be visible in a vertical aerial perspective. From the air you cannot reliably capture vertically-oriented features such as poles; you might at most get one hit on the top if your point density is sufficiently high. However, if you sweep from the ground, you will capture all of these vertical features. For effective 3D you need both.” Airborne LiDAR can also be used to expand the coverage or fill the gaps between mobile collections, Hobbs points out.

There is some difference of opinion as to the necessity of using ground control points in areas where GPS signals are denied. “LiDAR sensors have GPS, INS, and wheel encoders, but that is not enough when you are trying to get accuracies of two centimeters or less,” says Graham. “Thus, high accuracy projects require supplemental ground control.” DiGiacobbe agrees: “To create a dataset that is engineering grade, we have to do a control survey as well. This helps to geocoordinate the data and to tie multiple passes together.”
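One simple form of that supplemental control is a least-squares shift: compare the LiDAR-derived coordinates of surveyed targets with their known values and remove the average offset. The sketch below shows only that trivial translation-only case, as an illustration; real adjustments that tie multiple passes together are considerably more involved.

```python
import numpy as np

def control_point_shift(lidar_xyz: np.ndarray, surveyed_xyz: np.ndarray) -> np.ndarray:
    """Least-squares estimate of a constant 3D offset between LiDAR-derived
    control-target coordinates and their surveyed values (both N x 3 arrays).

    For a pure translation model, the least-squares solution is simply the
    mean of the per-target differences.
    """
    return np.mean(surveyed_xyz - lidar_xyz, axis=0)

# Apply the correction to the whole point cloud (translation-only model):
# shift = control_point_shift(targets_from_lidar, targets_from_survey)
# cloud_xyz_corrected = cloud_xyz + shift
```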

Hobbs, however, says, “We have been in the forefront of using GPS for many years and know how to deal with GPS outages. We plan for and anticipate GPS-denied areas on our mobile mapping projects and vary or supplement the collection appropriately.” MacDonald agrees. “With mobile mapping, you have so many sensors that you don’t need control points. This improves your operational safety. If you do need control points, you can get them from aerial photogrammetry.”

Finally, Van Rens points out that, while collecting LiDAR data from the air is much safer than from the ground on busy corridors, such as interstate highways, terrestrial surveys get higher accuracy and tighter point spacing.

Full Waveform

In the past, LiDAR sensors measured only the peaks and intensities of the returns, and multiple returns were primarily used to remove vegetation and extract the bare earth. Now, instead, users are increasingly taking advantage of the full waveform, which previously could not be done because of the massive amounts of data involved. Besides the standard LiDAR parameters, Toth explains, waveform may give users additional information with regard to both the geometry and classification of the target. “All the LiDAR system manufacturers offer waveform capability,” he says, “though processing software hardly exists. Now some of the data providers are taking notice. There is a growing interest in waveform in all sectors, and the government provides funding for applied research.”

Full waveform data, Toth points out, has two advantages. First, it enables a better and finer description of the geometry of the object space being scanned. In forested areas, it allows users to accurately measure the amount of biomass; in urban areas, it allows users to better recover various shapes from different reflections, such as building facades. Second, it can give users a signature of the surface — for example, concrete vs. asphalt vs. grass — which may contribute to land cover classification.

Use of the full waveform, as well as higher point density, greatly improves many applications, says Maune. “Analysts with the Natural Resources Conservation Service (NRCS), which advises farmers on how to terrace, now can do better validation from the office than they used to be able to do in the field,” he says. “NRCS can now generate soils maps from LiDAR data, because it also provides slope, aspect, and curvature, which are the variables you need to know to determine water flows, soil wetness, and soil types. In mapping the Channel Islands in California, we were able to pull up every crevice, nook, and cranny.”

Full waveform is wonderful on complex structures such as buildings, says Van Rens, because it allows users to identify the orientation of the laser beam. “We have a library of what returns look like (flat, tilted, etc.),” he says. “The additional information from the full waveform allows us to better analyze and fit the pulse, lower the signal-to-noise ratio, and filter much more effectively.”

“The current practical use of full waveform data is to move the pulse discrimination from hardware into software,” Graham explains, “but it is a hard problem. Embedded in the hardware, there are algorithms that determine the exact range at which a pulse is declared. The hope is that by tweaking these algorithms in the software you can remove some of the noise. This approach is showing promise, but it is also adding to the complexity of the problem. For example, it raises the question: Am I going to tweak it differently in different areas? It is adding the ability for users to adjust some parameters, but it is also adding another level of complexity.”
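A common way to express that software-side pulse discrimination is to fit a sum of Gaussian pulses to the digitized waveform and treat each fitted peak position as a return. The sketch below does this with SciPy on a synthetic two-return waveform; it is a generic illustration of the technique, not any vendor’s algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, t1, s1, a2, t2, s2):
    """Sum of two Gaussian pulses sampled at times t."""
    return (a1 * np.exp(-0.5 * ((t - t1) / s1) ** 2) +
            a2 * np.exp(-0.5 * ((t - t2) / s2) ** 2))

# Synthetic digitized waveform: two overlapping returns plus noise.
t = np.linspace(0, 100, 500)                         # nanoseconds
truth = two_gaussians(t, 1.0, 40.0, 3.0, 0.6, 55.0, 3.0)
waveform = truth + np.random.normal(0, 0.02, t.size)

# Fit; initial guesses would normally come from simple peak detection.
p0 = [1.0, 38.0, 2.0, 0.5, 57.0, 2.0]
params, _ = curve_fit(two_gaussians, t, waveform, p0=p0)
print("return times (ns):", params[1], params[4])
# Converting each fitted pulse time to a range uses r = c * t / 2.
```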

Full waveform is handled by firmware inside the scanner because third-party software packages just want to see discrete geo-referenced point clouds. “People doing road work just want to see the same thing that they’ve seen for the past 40 to 50 years,” says Van Rens. “Point cloud software doesn’t want to see the full waveform. Point clouds are the geometric intermediaries. People still just want to see a point cloud, but full waveform provides superior results for it.”


At this point in the development of LiDAR technology, the key bottlenecks are in transforming the massive amounts of data collected into useful visual products in a reasonable time and in providing end users with data that their systems can fully exploit. Regarding the former, MacDonald points out that the same happened years ago with marine multi-beam technology. “You collect millions of data points per second and per square meter. Then it is challenging to produce charts in a reasonable time.” Regarding the latter point, Hobbs points out that many water modeling software applications can’t utilize high fidelity LiDAR data and must be fed a downgraded or interpolated version of the original dataset.

“Traditional triangulated surface modeling methods use individual points and are processing-intensive,” says Hobbs. “The challenge is to generate math models that retain LiDAR accuracy but greatly reduce the number of discrete points required to display complex features. For example, it might require one million points to model a river channel, but you can display the same feature with only a few hundred points using complex math models such as NURBS (non-uniform rational B-spline) surfaces.”
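The idea can be illustrated with a least-squares B-spline surface (a simpler relative of the NURBS surfaces Hobbs mentions): many thousands of scattered points are reduced to a small set of spline coefficients that can reproduce the surface on demand. A sketch using SciPy on a synthetic channel-like surface, with knot placement chosen purely for illustration:

```python
import numpy as np
from scipy.interpolate import LSQBivariateSpline

# Synthetic "river channel" strip sampled by 100,000 scattered points.
rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 100_000)                  # along-channel (m)
y = rng.uniform(0, 20, 100_000)                   # across-channel (m)
z = (5.0 - 2.0 * np.exp(-((y - 10.0) / 4.0) ** 2)
     + 0.01 * x + rng.normal(0, 0.02, x.size))

# Interior knots define the compact model; their count is an illustrative choice.
tx = np.linspace(10, 90, 8)
ty = np.linspace(2, 18, 5)
surf = LSQBivariateSpline(x, y, z, tx, ty, kx=3, ky=3)

print("input points:       ", x.size)
print("spline coefficients: ", surf.get_coeffs().size)   # ~100 numbers
print("RMS misfit (m):      ",
      float(np.sqrt(np.mean((surf.ev(x, y) - z) ** 2))))
```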

To Graham, however, the increasingly larger data volumes collected by LiDAR sensors are no problem at all for processing software — the intelligence is what is lacking. “For example,” he says, “software today can auto-extract the ground from the data cloud, but it leaves too many things unfiltered. The real complaint from the users is that the intelligence of the software is lagging behind their needs. This is not a new problem. LiDAR data is not fundamentally different from the correlated stereo data collected in traditional aerial photogrammetry. The customer base has never been satisfied with data extraction tools used in that domain either, for similar reasons. These tools are in the realm of artificial intelligence (AI), a vexing field that has failed to deliver on expectations.” On the same note, Frecks says, “Automated processes have to be further refined to approach 99.9 percent accuracy. What happens if automated processes that identify features are incorrect 5 percent of the time?”
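Graham’s point about lagging intelligence is easy to appreciate from how crude a basic automated ground filter can be. The sketch below implements the simplest possible approach (keep every point within a tolerance of the lowest return in its grid cell); it is a toy, well short of the progressive filters used in production software, which is exactly the gap users complain about.

```python
import numpy as np

def naive_ground_mask(xyz: np.ndarray, cell: float = 2.0, tol: float = 0.3) -> np.ndarray:
    """Toy ground classifier: a point is 'ground' if it lies within `tol`
    meters of the lowest point in its `cell` x `cell` meter grid cell.
    Real filters add slope tests, iteration, and morphology; this does not.
    """
    ix = np.floor(xyz[:, 0] / cell).astype(np.int64)
    iy = np.floor(xyz[:, 1] / cell).astype(np.int64)
    # Map each (ix, iy) pair to a flat cell id.
    _, cell_id = np.unique(np.stack([ix, iy], axis=1), axis=0, return_inverse=True)
    # Lowest elevation per cell.
    z_min = np.full(cell_id.max() + 1, np.inf)
    np.minimum.at(z_min, cell_id, xyz[:, 2])
    return xyz[:, 2] <= z_min[cell_id] + tol

# Usage sketch: ground_points = points[naive_ground_mask(points)]
```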

Schuckman agrees. “The challenge,” she says, “is in correctly identifying and classifying objects and creating the topology so that they know what they are — a road, a building, a sign, a river, etc. — and behave correctly as polygons, lines, etc. It is very hard to automate that to work at 100 percent correctness.”

Michael Blakeman, CEO of Moedus, notes that we are at a point in the evolution of LiDAR technology where it is essential to convert LiDAR data into usable 3D models that support CAD, GIS, and BIM. “The convergence of these applications is due to LiDAR technologies.”

GeoNav does terrestrial-based LiDAR and spherical video collection, mostly for rural power companies, according to Guner Gardenhire. “We collect the data for use in precision measurement and attribution. We were manually extracting the features for an electrical distribution system. We partnered with SpaceNav, which develops complex algorithms and solutions for the aerospace industry. Together, we have automated the extraction of electric distribution systems infrastructure to include power poles, conductors, transformers, etc.”

He continued, “We are also developing tools for FEMA analysis (that project is in its infancy) that would allow for a comparison of geographic areas that have been affected by natural disasters, and we are in discussions to expand our collection and extraction efforts to allow CONUS military installations to manage their critical infrastructure as it relates to operations, security, and maintenance. We asked SpaceNav to develop a catalog of tools to extract features through script processing.” Matt Duncan, president of SpaceNav, stated that the integration of spherical video allows them to create rule sets that enhance feature extraction and data fusion beyond using LiDAR by itself.

Automatic feature extraction is still the holy grail, but semi-automated feature extraction is moving to the forefront, according to Van Rens.

Maune has still a different concern: “We don’t yet have a good way to tie flight lines together and know how accurate they are. We need a better way to test the overall accuracy of LiDAR data and its relative accuracy. How consistent is it when we have overlapping lines? We need the LiDAR equivalent of photogrammetric aerial triangulation. It would be a software application, though there may be ways for hardware to improve that as well.”

“It would also be helpful if we could do a better job of merging LiDAR data from overlapping flight lines,” Maune adds. “With 50 percent sidelap, all areas are viewed from two perspectives and there is better penetration of vegetation. However, the perspective closer to nadir is normally the more accurate and I believe we need to find ways to apply a higher weight to observations nearer to the nadir and lesser weights to observations farther away from nadir. There is value to sidelap and the ability to see the terrain from two different perspectives, but we don’t want to weigh those points equally if one perspective is more accurate than the other. Consistency between adjoining flight lines is a good indicator of accuracy.”
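One way to express that idea is to weight each observation by how close to nadir it was collected, for example with a cosine-squared falloff, and then take the weighted mean of the overlapping elevations. The weighting function below is purely an illustrative assumption; Maune does not prescribe a specific scheme.

```python
import numpy as np

def weighted_elevation(z: np.ndarray, scan_angle_deg: np.ndarray) -> float:
    """Merge overlapping elevation observations of the same spot, giving more
    weight to observations collected closer to nadir.

    The cos^2 weighting is an assumption for illustration; any monotone
    falloff with off-nadir angle expresses the same idea.
    """
    w = np.cos(np.radians(scan_angle_deg)) ** 2
    return float(np.average(z, weights=w))

# Example: a near-nadir strip and an edge-of-swath strip see the same spot.
z_obs = np.array([101.32, 101.48])          # meters
angles = np.array([3.0, 24.0])              # degrees off nadir
print(weighted_elevation(z_obs, angles))    # pulled toward the near-nadir value
```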

To DiGiacobbe, what are most needed are more accurate datasets. “We collect the data as lat/long in NAVD 88 and then have to process it to generate a geometrically corrected solution,” he says. “It would be better if the navigated solution were more accurate. We have more points than we can handle. Now, we would like manufacturers to turn their attention to creating a more accurate POS (GPS + IMU), which would give us a more geometrically accurate solution. That would help us tremendously.”

Finally, DiGiacobbe worries about data storage and distribution. “We have people in 50 offices across the United States,” he says. “Now, to transfer data, we’ll ship a hard drive. We want to create a central application that people can access via the Web, so that they can see the whole picture and work on their piece of it. That’s the new frontier. You spend 18 months working on a project and, at the end, have a huge amount of data. We need to compress it and make it usable.”
