Beyond Terrain Models: LiDAR Enters the Geospatial Mainstream

Processing Challenges Remain

Editor's Note:
This article is a follow-up to the in-depth LiDAR story in the Spring 2008 issue, which can be found in our online archive.

Also, we apologize for an error in the printed issue, where the Optech sensor was described as "the first commercial 4-channel LiDAR mapping system," instead of 2-channel. The error has been corrected in the online version.

By Matteo Luccio
Portland, Oregon

First used by NASA in the mid-1980s, light detection and ranging (LiDAR) has become an essential complement to photogrammetry for mapping and analyzing a vast range of surfaces. LiDAR started as a topographic tool on large-scale projects, such as flood plain mapping, but in the past few years it has gained wide acceptance throughout the geospatial industry for a myriad of projects – including 3D urban modeling, feature extraction, tree identification and volumetrics, mapping bare earth under thick canopy, road delineation, forward-looking sensing for vehicles, mobile mapping, and carbon inventory. Ground-based LiDAR is becoming as common as its aerial counterpart.

The pulse rate of LiDAR sensors continues to increase rapidly, producing huge datasets. This growth in data is outpacing the ability to analyze it, and software applications will need to catch up. Meanwhile, LiDAR data is now routinely fused with data from other sensors, especially multi-spectral and hyper-spectral cameras.


The increase in the pulse rate of LiDAR sensors will continue as long as LiDAR vendors find creative ways around system limitations, according to Matt Bethel, manager of systems engineering at Merrick. At the ASPRS/MAPPS Fall Conference in San Antonio, Texas, in November, Optech released its new ALTM Pegasus HD400 active imaging system. According to the company, it is the first commercial 2-channel LiDAR mapping system and, at 400 kHz, it has the highest sampling rate in the industry. It is coupled with digital cameras ranging from 5 to 60 megapixels. This combination of sensors allows users to colorize the LiDAR point cloud and then model it in 3D, using the LAS 1.2 data standard.

“We are promoting the Pegasus platform as an open concept,” says Mike Sitar, Optech’s product manager for airborne survey products. “It allows clients to come up with a configuration. The HD400 is the first version; it produces density for the sake of density. Down the line, it opens up the possibility of using different wavelengths. In the past, to get more density you had to increase the rep rate and fly lower. So, we came up with continuous multi-pulse that allows us to keep track of two pulses in the air and to fly twice as high at the same rep rate. Another approach is to fly two sensors together. With Pegasus’ multichannel approach, both sensors use a single mirror, so the quality of the data is better.”

The increase in the sensors’ pulse rate creates new challenges, explains Bethel. Because the maximum allowable field of view is fixed, one typically can’t scan wider to cover more ground in a single pass. Flying higher is one way to make the most of the increasing pulse rates, but it means a higher percentage of down days due to clouds. Higher above ground levels (AGLs) also mean a decrease in IMU (inertial measurement unit) positional accuracy; higher-accuracy IMUs could allow better use of higher-pulsing LiDAR systems flown at higher AGLs. So instead, as pulse rates increase and assuming the project specs don’t change, collection flights can be flown faster at the typical AGLs. This reduces the time it takes to acquire a project, until the airplane’s maximum speed is reached. At that point, one must consider upgrading to a higher class of aircraft to make best use of the increasing LiDAR pulse rates.
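The tradeoffs Bethel describes can be sketched with simple geometry: swath width grows with flying height and field of view, and nominal point density is pulses per second divided by the ground area swept per second. The following sketch uses hypothetical numbers (pulse rates, AGL, FOV, and speeds are illustrative, not from any particular sensor):

```python
import math

def swath_width(agl_m, fov_deg):
    """Ground swath width for a given flying height (AGL) and full scan FOV."""
    return 2.0 * agl_m * math.tan(math.radians(fov_deg / 2.0))

def point_density(pulse_rate_hz, agl_m, fov_deg, speed_mps):
    """Nominal points per square meter: pulses emitted per second divided by
    the ground area swept per second (swath width x forward speed)."""
    return pulse_rate_hz / (swath_width(agl_m, fov_deg) * speed_mps)

# Doubling the pulse rate at a fixed AGL and FOV lets the aircraft fly
# twice as fast for the same point density -- the faster-collection case
# Bethel describes. All values here are hypothetical.
d_slow = point_density(200_000, 1500, 40, 60)    # 200 kHz at 60 m/s
d_fast = point_density(400_000, 1500, 40, 120)   # 400 kHz at 120 m/s
```

Since the density is unchanged while ground speed doubles, the same project is acquired in roughly half the flight time, which is exactly where the aircraft's maximum speed becomes the next bottleneck.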


The key function of LiDAR software is to manage efficiently the massive amounts of data collected by LiDAR sensors. “As sensors advance, we are constantly dealing with the challenge of building software that can handle the volume of data they produce,” explains Torin Haskell, director of sales and marketing for QCoherent Software (acquired in December 2009 by GeoCue). “Until recently, we were dealing only with discrete points. Now, as an increasing number of our clients want to use the entire waveform, we have an enormous data management challenge. A huge step forward has been the expansion of the LAS standard to incorporate an option to include waveform properties. Once that format is incorporated into production and all vendors are using it, it will be easier for us to incorporate it into products for end users.” QCoherent’s core product is LP360, which allows users to import point clouds into GIS, so that they can work with the data directly.
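The LAS format Haskell refers to is a binary standard with a fixed-layout public header. As a minimal illustration, the sketch below writes and parses a handful of fields of the LAS 1.2 public header block using byte offsets from the published ASPRS specification; it is deliberately simplified (no variable-length records or point data) and is not a full reader:

```python
import struct

def make_header(point_format=1, n_points=0, scale=(0.01, 0.01, 0.01)):
    """Build a minimal 227-byte LAS 1.2 public header (most fields zeroed)."""
    buf = bytearray(227)
    buf[0:4] = b"LASF"                               # file signature
    struct.pack_into("<BB", buf, 24, 1, 2)           # version major, minor
    struct.pack_into("<H", buf, 94, 227)             # header size
    struct.pack_into("<I", buf, 96, 227)             # offset to point data
    struct.pack_into("<BHI", buf, 104,               # point data format id,
                     point_format, 28, n_points)     # record length, count
    struct.pack_into("<3d", buf, 131, *scale)        # x/y/z scale factors
    return bytes(buf)

def parse_las_header(buf):
    """Read back the same handful of fields; raises on a bad signature."""
    if buf[0:4] != b"LASF":
        raise ValueError("not a LAS file")
    ver = struct.unpack_from("<BB", buf, 24)
    point_format, rec_len, n_points = struct.unpack_from("<BHI", buf, 104)
    scale = struct.unpack_from("<3d", buf, 131)
    return {"version": ver, "point_format": point_format,
            "record_length": rec_len, "n_points": n_points, "scale": scale}

hdr = make_header(point_format=1, n_points=5)
info = parse_las_header(hdr)
```

The scaled-integer coordinate scheme implied by those scale factors is also what makes the waveform extension backward-compatible: new content is appended without disturbing the fixed header layout.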

Another way in which QCoherent Software is addressing the data management challenge is with its LiDAR Server product, which allows different software to look at LiDAR data on a server. For example, says Haskell, it allows state-wide data to be hosted on a single server, rather than having to be chopped up and distributed to users. Also, two OGC standards, WMS and WCS, allow users to manage massive LiDAR data over the Web.
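The OGC standards Haskell mentions work through plain HTTP requests with parameter names fixed by the specification. As a sketch, the function below assembles a WMS 1.1.1 GetMap request; the server endpoint and layer name are hypothetical, while the query parameters come from the spec:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width, height,
                   srs="EPSG:4326", fmt="image/png"):
    """Build an OGC WMS 1.1.1 GetMap request URL.
    bbox is (minx, miny, maxx, maxy) in the given SRS."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": srs,
        "BBOX": ",".join(f"{v:.6f}" for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical statewide-elevation layer served from a single host.
url = wms_getmap_url("http://example.com/lidarserver", "statewide_elevation",
                     (-84.9, 38.0, -80.5, 42.0), 1024, 768)
```

The point is architectural: any WMS- or WCS-aware client can pull rendered views or coverages from the hosted dataset without the data ever being chopped up and shipped to users.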

Acquired a year ago by Northrop Grumman, 3001 International uses commercial off-the-shelf (COTS) hardware and software, says Bart Bailey, Director for Northrop Grumman Information Systems and formerly 3001 International’s CEO. However, “…the software doesn’t always do what you want it to, so inevitably we end up building add-ons and work-arounds. For 3D building extraction, the software we bought did not work automatically as well as we wanted, so we built our own software.”

To Sitar, the problem is that third-party software is lagging behind the explosion in LiDAR data and applications. For example, he points out, “There has been a huge adoption of mobile LiDAR, but there are few commercial packages that can exploit it effectively. For a department of transportation that needs road delineation, LiDAR is a very effective tool, but the software does not necessarily exist for that application.” However, that’s changing, he admits.

Many local governments in recent years have spent a lot of money on collecting LiDAR data but lack the software and expertise to make full use of it, says Kevin Opitz, Sales Operations Manager for Overwatch Systems. Likewise, Bethel points out that the majority of clients don’t fully understand how best to use LiDAR data. “A ground sampling distance of 10-15 feet used to be standard ten years ago, but now we are at three-to-five feet and in the near future we might be down to one-to-three feet. We go through so much effort to collect and keep good accuracy, but when we deliver we might find out that our client is making 10- to 20-foot grids because the software that they use cannot handle the density that they specified. We might collect data at one-to-five meter resolution; then they might plug it into flood modeling software at much lower resolution.” For this reason, Merrick’s approach is to deliver the data together with software that can enable its clients to use it.
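The density mismatch Bethel describes amounts to thinning a dense cloud down to the cell size a downstream model can handle. A minimal sketch of that step, keeping the lowest return per grid cell (a common convention when the target is a bare-earth elevation grid; the spacings here are hypothetical):

```python
def grid_lowest(points, cell_size):
    """Thin (x, y, z) points to one per grid cell, keeping the lowest z --
    the lowest return is the best bare-earth candidate in each cell."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        if key not in cells or z < cells[key][2]:
            cells[key] = (x, y, z)
    return list(cells.values())

# Hypothetical 1-ft-spaced returns over a 20 x 20 ft area, thinned to a
# 10-ft grid: 400 input points collapse to 4 representative cells.
dense = [(i * 1.0, j * 1.0, 100.0 + (i + j) % 3)
         for i in range(20) for j in range(20)]
coarse = grid_lowest(dense, 10.0)
```

Throwing away 99 percent of the points this way is exactly the waste Bethel laments: the client paid for the density and then discards it at import time.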

Another way to deal with massive amounts of data is through compression. Jon Skiffington, an engineer with LizardTech, a company known for its work with image compression software used in GIS applications, compares the challenge posed by the increasing popularity of LiDAR data with that posed a few years ago by large raster images. “Our customers have been asking us over the past three years if we can do something similar with LiDAR data. File sizes are now in the hundreds of gigabytes. We can compress them to a quarter of their size, so that customers can use that point cloud in their normal applications. We released the initial version of LiDAR Compressor in the summer. We are now trying to make sure that third-party applications – such as ESRI, Global Mapper, and Merrick – will support our compressor, and we are expecting others to add support in the near future.”
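Why do point clouds compress so well? Neighboring returns are close together, so coordinates stored as scaled integers (the LAS convention) have small, repetitive deltas. The sketch below is a generic illustration of that idea, not LizardTech's algorithm: quantize, delta-encode, then deflate with zlib, and round-trip losslessly at the quantization step size.

```python
import struct
import zlib

def pack_deltas(values, scale=0.01):
    """Quantize to integers (as LAS stores coordinates) and delta-encode;
    small repetitive deltas are what the deflate stage exploits."""
    ints = [round(v / scale) for v in values]
    deltas = [ints[0]] + [b - a for a, b in zip(ints, ints[1:])]
    return struct.pack(f"<{len(deltas)}i", *deltas)

def unpack_deltas(blob, scale=0.01):
    """Invert pack_deltas: cumulative-sum the deltas, then rescale."""
    deltas = struct.unpack(f"<{len(blob) // 4}i", blob)
    total, out = 0, []
    for d in deltas:
        total += d
        out.append(total * scale)
    return out

# Hypothetical along-track easting values at a regular 0.35 m spacing.
xs = [500000.0 + 0.35 * i for i in range(10_000)]
raw = struct.pack(f"<{len(xs)}d", *xs)            # 8 bytes per value, uncompressed
packed = zlib.compress(pack_deltas(xs), 9)
restored = unpack_deltas(zlib.decompress(packed))
```

On this synthetic, highly regular data the compressed stream is far smaller than the raw doubles; real point clouds are noisier, which is consistent with the roughly four-to-one ratio Skiffington cites.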

ITT VIS takes yet a different approach. “We are working on making data size irrelevant, via intelligent pre-processing and visualization,” says Beau Legeer, the company’s director of product marketing. “Even though the dataset might have billions of points, our software will load only those that are valid at a given resolution.” The company makes the ENVI software package and will be releasing additional LiDAR support in ENVI 5.0. “Initially,” says Legeer, “LiDAR was a part of our solution only to generate 2D products: terrain models, digital surface models, intensity images, etc. It is becoming more important to retain and exploit the 3D data instead of just using it in 2D.”

ITT VIS has a three-phase strategy to evolve its approach to LiDAR, Legeer explains. First, it is utilizing all of the 3D point cloud without any decimation, loss, or limit. It is aiming to release a point cloud viewer in ENVI by the end of 2010. This is made possible by the fact that ENVI is underpinned by the IDL programming language. “Customers can use IDL to customize and enhance ENVI. In this phase, we will also allow users to write their own LiDAR algorithms,” says Legeer.

Second, over the next two years, the company will dive into the point cloud with exploitation algorithms. “We will extend feature extraction into 3D, then fuse proven multi-spectral and hyper-spectral algorithms, which we will supplement with LiDAR data. Extraction tasks, such as target detection and material identification, will become more accurate by adding LiDAR point clouds to the equation. It will make our tools better.” Finally, the third phase will depend on user feedback and where the market goes.

According to Legeer, one potential bottleneck in the strategy is the LAS format (the standard file format for LiDAR data). “As sensors grow and the data content becomes richer, the format needs to catch up. It will work itself out, but the format might not have all we need for feature extraction. We will work with ASPRS to evolve that as fast as the data is growing. All other bottlenecks are disappearing.”

Merrick uses LiDAR to collect bare earth, power lines, corridors, shorelines, borders, and rivers; to model flood plains; and to generate 3D models for very high-precision data in automated vehicle navigation. The company writes all of the software it uses, says Bethel, and sells its Merrick Advanced Remote Sensing (MARS) software designed to manage, visualize, process, and analyze LiDAR data. “We designed MARS from the ground up, to handle large amounts of LiDAR data, throughout the whole production process: checking the boresight calibration and coverage, batch processing, editing the DTM, some feature extraction, auto filtering, manual filtering, break-line collection, and exporting to different file types to produce the final deliverable.”

Woolpert, a design, engineering, and geospatial firm, uses LiDAR software mainly to calibrate the data, tie flight lines together, and classify points, says Jeff Lovin, the company’s vice president and director of photogrammetry and remote sensing. However, that has changed in the last couple of years, he points out. “We used to just want the ground points, but now we are doing more with the rest of the data, and classifying it is more important to us. We are creating derivative products, such as raster images, using the intensity of the returns. For example, in Panama a couple of years ago, due to extensive cloud cover, we collected imagery by flying LiDAR at night.”

Many people don’t realize that most LiDAR software will only allow them to visualize LiDAR data and not to analyze it, points out Opitz, of Overwatch Systems. The company makes LiDAR Analyst, which enables users to convert raw LiDAR data to a format from which they can extract features, as vectors or polygons. The software was originally developed for the U.S. military by the Advanced LiDAR Exploitation System (ALES) Consortium; then Overwatch commercialized it as its core LiDAR exploitation product. The company, Opitz says, looks at LiDAR not just as a visualization tool but as a rich data set from which to extract features – including building heights and widths. For example, he says, an army platoon might use LiDAR data to determine how tall a ladder they will need to get on the roof of a building. The company is now also working on terrestrial LiDAR, which produces very dense point clouds and large files that allow users to locate windows and doors on buildings and identify vehicle types, says Opitz.
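Opitz's ladder example boils down to a simple measurement on classified returns: the highest return inside a building footprint minus the local ground elevation. A toy sketch of that calculation, with entirely hypothetical coordinates (a production tool like LiDAR Analyst would use ground-classified points rather than the lowest return):

```python
def building_height(points, footprint):
    """Estimate structure height from (x, y, z) LiDAR returns: highest
    return inside the footprint minus the lowest, taken here as ground."""
    xmin, ymin, xmax, ymax = footprint
    zs = [z for x, y, z in points
          if xmin <= x <= xmax and ymin <= y <= ymax]
    if not zs:
        raise ValueError("no returns inside footprint")
    return max(zs) - min(zs)

# Hypothetical returns: ground near 310 ft, roof edge near 334 ft.
returns = [(5.0, 5.0, 310.2), (6.0, 4.0, 310.4),
           (5.5, 5.5, 334.1), (6.2, 5.1, 333.8)]
height_ft = building_height(returns, (4.0, 3.0, 7.0, 6.0))
```

Here the roof sits about 24 feet above ground, which is the number the platoon actually needs: this is analysis of the point cloud, not just visualization.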

Efficient processing of huge LiDAR data files also requires a lot of computing power. “We have completely re-written our LiDAR software to make it better suited for high performance and to enable faster coding,” says Bethel. “Our key recent advance has been 64-bit processing, using virtually unlimited amounts of RAM, and up to 16 co-processors per machine, all from within one program. In a production environment such as ours, we must keep all the data in a central file server, so we do all of the processing on the server side. Another growth area is the use of a GPU – a high-end graphics card – rather than a CPU for processing. The bottleneck is the disk R/W (read/write) speed.”

ITT VIS, Woolpert, and QCoherent are all working to take advantage of multiple processors. “We plan to take advantage of multi-core for pre-processing and GPU for the rendering,” says Legeer. “We use distributed processing and multiple workstations,” says Lovin. “The limiting factor,” he adds, “is how much data the software can handle. Feature extraction is constantly becoming more automated. How many points can we load at once? One square mile? Twenty?” Advances in LiDAR sensors, Bethel explains, cut the collection time rather than yielding denser datasets. Customers, however, may start asking for higher density, requiring further improvements in processing. “Divide and conquer,” he says, “is definitely the wave of the future.”
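The "divide and conquer" pattern Bethel describes is straightforward: split the cloud into chunks and process them concurrently. The sketch below illustrates the idea with a toy height-threshold classifier run across worker threads; the classifier, thresholds, and chunk sizes are all hypothetical, and production systems use the 64-bit multi-process or GPU pipelines the vendors describe rather than Python threads:

```python
from concurrent.futures import ThreadPoolExecutor

def classify_chunk(chunk, ground_z=100.0, veg_cutoff=2.0):
    """Toy classifier: label each (x, y, z) return 'ground' or 'above'
    by its height over an assumed flat ground plane."""
    return ["ground" if z - ground_z < veg_cutoff else "above"
            for x, y, z in chunk]

def classify_parallel(points, n_workers=4, chunk_size=10_000):
    """Divide the cloud into chunks and classify them concurrently."""
    chunks = [points[i:i + chunk_size]
              for i in range(0, len(points), chunk_size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(classify_chunk, chunks))
    return [label for part in results for label in part]

# Hypothetical 25,000-point cloud with z cycling 100..104.
pts = [(float(i), float(i), 100.0 + (i % 5)) for i in range(25_000)]
labels = classify_parallel(pts)
```

Because each chunk is independent, the same structure scales from threads on one machine to the distributed workstations Lovin mentions.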

Data Fusion

LiDAR and cameras are ideally complementary sensors: directly orthorectified imagery derived from LiDAR can correct raster imagery, and imagery can be used to quality-control LiDAR data. To illustrate the latter, Lovin cites the example of mapping Florida swamps: the tops of the vegetation will give such a consistent return with LiDAR that they can easily be confused for the surface, even though they are three or four feet up. Looking at photos while analyzing the LiDAR data will avoid this error. LiDAR can also miss contours and break lines, he says, so Woolpert technicians often add them by hand.

According to Sitar, more than half of the commercial mapping sensors that Optech sells go out the door with a camera. “Integrating LiDAR with medium-format cameras, which have a smaller footprint, improves the workflow,” he says. “There is a new term in the market: LiDARgrammetry, or using LiDAR to come up with two different models, then throw them both into the photogrammetry process. A 60 percent endlap is used to augment a LiDAR terrain model. Point clouds may miss breaklines, which are important, for example, for drainage delineation.”

Woolpert, Lovin says, will fly LiDAR in conjunction with imagery for any project that covers more than 10 square miles – which means nearly all of its projects – and involves any 3D modeling or feature extraction. “For the first six to eight years, we used LiDAR only to model; now we use it in nearly all photogrammetry projects,” he says. “For example, we flew all of Ohio at 1-meter resolution. The state did not specify that we should use LiDAR, they just wanted rectified imagery.”

“We are now using LiDAR on the ground, too, and fusing the data,” Lovin adds. “With ground LiDAR, the platform is moving slower, so there is greater point density and accuracy than with airborne LiDAR. The challenge is in post-processing – for example, loss of satellite lock. The hottest new thing we are doing is sensor fusing: oblique, vertical, satellite, aerial, and ground-based LiDAR. There is great potential for modeling and 3D GIS applications, true building facades, and attribution. We are spending a lot of our R&D on that. We have been doing this since 1998.”

“Customers are increasingly appreciating the point clouds but also want the imagery,” says Bailey, “so we are combining medium format camera with LiDAR. Fusing LiDAR and imagery gives a very accurate representation of what is on the ground.” They are not going to do ground-based LiDAR until they can see the return on investment, he adds.

The most common combination of sensors used by Northrop Grumman is an Optech LiDAR sensor, a Rollei medium format camera, and a CASI hyper-spectral camera. The company has used that combination, for example, on flights for the U.S. Army Corps of Engineers (USACE), a project in West Texas, and work in Central America. It also has three years of experience with Scanning Hydrographic Operational Airborne LiDAR Survey (SHOALS), a system that consists of a topographic LiDAR, a medium format camera, a hyper-spectral camera, and a bathymetric LiDAR, developed by USACE to monitor near-shore bathymetric environments.

“Integrating RGB values is in the future. It is definitely on our radar,” says Jennifer Whitacre, national account manager for LiDAR solutions at MJ Harden, a geospatial company that is owned by GeoEye.

“I see LiDAR adding its own dimensionality,” says Legeer. “For feature extraction in urban environments, we can combine LiDAR with our ENVI tool set. We talk a lot about data fusion as a focus of releases in the 2011-12 time frame. We would like to turn ENVI into a platform for multi-sensor data, so that if you bring multiple sensors to the project, we will allow you to register them together and create a rule that uses the properties of each sensor to describe the object. For example, the object is valid if it has a certain shape, temperature, size, etc. We will also add video to ENVI as one more factor in fusion-based processing.”

There are three ways of extracting LiDAR data, explains Mike Kitaif, manager of software development and owner of Cardinal Systems. You can produce a 3D model using pure photogrammetry, then overlay the LiDAR data on top of it; you can orthorectify raster data and overlay LiDAR data on top of it; or you can just bring up the point cloud in 3D.

Unlike with an image, he points out, you can look at a LiDAR point cloud from any angle. His company is developing software to fuse data from LiDAR and cameras. “It will be another three months before it is an official product, but we have shown it to many customers,” he says.

Looking Ahead

An emerging technology is that of 3D Flash LiDAR, being developed by Advanced Scientific Concepts, Inc. Its cameras can collect full 128x128 pixel frames of 3D point cloud data per single laser pulse, up to 60 frames per second, according to Thomas Laux, the company’s VP for Business Development. They have the ability to image through dust, fog, and smoke, he explains, and provide accurate 3D measurement and real-time imaging, including video. This allows for detection of dynamic hazards, making them ideal for use in moving vehicles. The cameras co-register the range and intensity of each pixel, allowing such manipulations as filtering objects in an image by their distance from the sensor. So, for example, you could display a group of firefighters, enveloped in thick smoke, who are between 60 and 90 feet from the camera, filtering out anything that is closer or more distant.
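Because each pixel in a flash-LiDAR frame carries its own range, the firefighter example reduces to a per-pixel mask on a 128x128 range image. A minimal sketch with a synthetic frame (the smoke and figure distances are hypothetical, and this is generic array filtering, not ASC's software):

```python
import numpy as np

def range_gate(range_frame, near_ft, far_ft):
    """Keep only pixels whose range falls between near_ft and far_ft;
    everything nearer or farther is blanked out (NaN)."""
    mask = (range_frame >= near_ft) & (range_frame <= far_ft)
    gated = np.where(mask, range_frame, np.nan)
    return gated, mask

# Hypothetical 128x128 frame: smoke returns near 20 ft everywhere,
# with a 20x20-pixel group of figures near 75 ft.
frame = np.full((128, 128), 20.0)
frame[40:60, 50:70] = 75.0
gated, mask = range_gate(frame, 60.0, 90.0)
```

The gated frame keeps only the 75-ft returns, which is exactly the display-the-firefighters, filter-the-smoke behavior Laux describes, running at video frame rates on real hardware.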

The current ASC 3D camera has the equivalent of 16,384 range finders on its sensor chip and, unlike with scanning LiDAR, it captures an entire frame of data from a single pulse of light. Therefore, motion and vibration of the platform or the subject do not affect the measurements, says Laux. For example, helicopter blades spinning at supersonic speed will appear stationary. Additionally, the system provides a direct calculation of range, unlike stereoscopic cameras, and is smaller, lighter, and more rugged than scanning LiDAR.

This technology allows users to acquire 3D movies at the laser pulse repetition frequency, making 3D video a reality and enabling real-time machine vision, Laux says. High frame rates allow faster acquisition of topographical mapping than with point scan technology, decreasing the amount of flight time required to scan and capture an area.

NASA has tested ASC cameras on orbit for automated rendezvous and docking and is funding further development of the technology, and iRobot has chosen them for use in unmanned ground vehicles. They plan to launch in 2010.

Looking ahead, Opitz, like Skiffington, sees compression of LiDAR data as a key development. “Anything that will make those files more manageable will make a huge difference, especially in the field,” he says. His company is also developing tools to exploit full motion video, including LiDAR data.

For the foreseeable future, advances in LiDAR software will continue to chase after the explosive growth in LiDAR data produced by advances in hardware – by improving processing speed and by finding new ways of fusing data from different kinds of sensors, including video.

Sensors & Systems | Monitoring, Analyzing and Adapting to Global Change