LiDAR Advances & Challenges

A Report from the International LiDAR Mapping Forum

Standing-room-only crowds at this year’s International LiDAR Mapping Forum (ILMF) conference in Denver attested to the technology’s growing popularity as a three-dimensional mapping technique. The two-day event, held Feb. 21-22, included both workshops for beginners and advanced presentations detailing many of the technical advantages and challenges associated with Light Detection and Ranging (LiDAR).

Rod Franklin
Denver, Colo.
This year’s conference turnout (more than double the attendance of 2007) reflected growth that in recent years has added variety to the list of players contributing to the LiDAR art. Since its genesis in the mid-1990s, LiDAR has advanced a respectable distance beyond the realm of hardware production. Affiliate roles now include system operator, support services vendor and government sponsor.

Widespread embrace of two- and three-dimensional modeling has pushed the modality into applications as diverse as archaeology, geography, geology, geomorphology, seismology, remote sensing and atmospheric physics. Yet LiDAR remains a technological work in progress. A cousin to radar, it is characterized by a few technical peculiarities that have shaped its adoption curve. Pulsed ultraviolet, visible or near-infrared light dances nimbly in the space of a few nanometers and in a fraction of a second, so any handling software used to quantify it must be able to work within those parameters.

In many scenarios, LiDAR continues to serve in an adjunctive capacity to alternate imaging techniques, rather than taking the lead role. Some of the reasons for this adjunctive role have to do with market forces and the requirements of individual projects. Others have to do with technology.

James Wilder Young, LiDAR manager for the Sanborn Map Company of Colorado Springs, profiled an industry that has evolved significantly since its commercial debut, with a number of defining parameters improving by orders of magnitude. For instance, systems that once hummed along at a meager 5 to 10 kHz now operate in the 150 to 300 kHz range. Another development has been the advent of multi-pulse LiDAR, which effectively put an end to one- and two-pulse configurations. Also, the nature of LiDAR data itself—typically millions of pulsed light elevation data points that require formatting, importation and processing—isn’t quite as vexing a problem as it used to be for the off-the-shelf software packages designed to groom it into textured vistas of urban and rural beauty.

Despite these advances, LiDAR has its limits. Young observed that mapping options such as interferometric synthetic aperture radar are better tailored for certain projects. And on a broader level, LiDAR suffers from the same kind of software development gap that plagues other high-tech industries, because programs coded to work with advanced LiDAR hardware generally fail to exploit the full potential of the device circuitry. Young showed a slide asserting that the LiDAR market "is growing all around the world, but LiDAR handling software is not," and identified three-dimensional urban modeling, automated classification and vegetation mapping as markets where this gap is especially problematic.

In an interview, Young explained that the physics of scattered light, and the return values produced when light strikes surfaces of differing reflectivity, present huge challenges to LiDAR users. Classifying a shoreline with a gradual slope requires a math model different from that used in classifying a riverbank with a steep grade. Looking at pooled water is different from looking at flowing water. Classifying clear water calls for a different algorithm than classifying water colored by silt.

Figures 1A & 1B - LiDAR imagery from Sanborn collected for client AMBAG (Association of Monterey Bay Area Governments) in California, showing the west coast shoreline with LiDAR elevations, with Figure 1B adding contours.

"To define a river and shore is extremely difficult because of the function of the amount of points that hit the river. Wherever there are rivers, there are trees and vegetation. So it’s even harder to get down through the vegetation. In terms of water, LiDAR operates at 1054 nanometers on the light spectrum…and given the characteristics of how that returns off of water…no LiDAR vendor should be quantifying that. If you have muddy water, you have a higher probability of getting better returns." See Figure 1.
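Young’s caveat about water returns can be sketched in a few lines of code. The following Python fragment is illustrative only: the intensity values and the threshold are invented for the example. It flags returns whose intensity is too weak to classify reliably, which is what tends to happen when near-infrared pulses strike clear water.

```python
import numpy as np

# Illustrative only: flag returns too weak to classify reliably.
# The threshold is invented for the example; real projects derive it
# from sensor calibration and local water conditions (silt, roughness).
def flag_weak_returns(intensity, threshold=20):
    return np.asarray(intensity) < threshold

# Near-infrared pulses are largely absorbed by clear water, so those
# returns tend to come back faint and end up masked out here.
intensity = np.array([180, 15, 8, 210, 12, 95])
print(flag_weak_returns(intensity))  # [False  True  True False  True False]
```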

Even though the intricacies of LiDAR may initially have appeared mysterious to traditional surveyors and photogrammetry vendors, Young has witnessed a change in perspective over the past several years.

"Everybody’s finally accepted it," he said. "When LiDAR first started, there was a lot of reluctance for people to use LiDAR data because they didn’t know what it was. They didn’t know how accurate it was. The photogrammetry folks were not really sold on it and they were very territorial with what they were doing. Since then, the demand for LiDAR has gone through the roof. The size of the jobs that the LiDAR companies are doing is significantly larger."

Because LiDAR data is just one of several alternatives for elevation modeling, each with its own file formats, it has had to work its way through orphan status and earn full integration into the GIS community. But now, a few more developers are recognizing that profit lies waiting for those who can master its sheer complexity.

"Obviously, there’s money to be made," said Young. "People go out and say, ‘Ok, we’re going to start developing software to target the LiDAR vendors as well as the end users so they can get the most out of their LiDAR data.’ Another thing that’s difficult for [developers] is that the LiDAR technology continues to advance, and the algorithms required and the amount of data that they have to process is significantly more. So they’re having to develop better software to keep up with the demand for new applications."

Figures 2A & 2B - The failure of the Taum Sauk Dam on December 14, 2005, caused massive flooding as more than a billion gallons of water rushed down Proffit Mountain, overwhelming the east fork of the Black River in Johnson’s Shut-Ins State Park in Missouri. The rate of flow was about 150,000 cubic feet per second, roughly the rate at which water rushes over Niagara Falls. Figure 2A shows the dam break, while 2B shows the flooded area.

Sanborn Map Company has worked diligently to overcome the limitations of first- and second-generation LiDAR software releases, chalking up some success stories in the process. A break in Missouri’s Taum Sauk Dam was modeled three dimensionally by Leica Geosystems using Sanborn LiDAR data (see Figure 2). The company has used 0.3-meter point density LiDAR in building surface modeling and refinery tank management. Sanborn has also joined others in the forestry and biomass sectors to show how LiDAR-based files produce bare earth surface models that can be cleanly separated from or overlain with accurate images of vegetation canopy.
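The bare-earth/canopy separation Sanborn demonstrates rests on per-point classification codes. A minimal sketch, assuming points already labeled with the standard ASPRS LAS codes (2 for ground; 3, 4 and 5 for low, medium and high vegetation) and using placeholder data, might split a cloud like this:

```python
import numpy as np

# Split a labeled point cloud into bare earth and canopy, assuming the
# standard ASPRS LAS classification codes (2 = ground; 3/4/5 = low,
# medium and high vegetation). Coordinates and labels are placeholders.
points_xyz = np.random.rand(1000, 3) * 100
classification = np.random.choice([2, 3, 4, 5], size=1000)

bare_earth = points_xyz[classification == 2]             # surface model input
canopy = points_xyz[np.isin(classification, [3, 4, 5])]  # vegetation layer
print(len(bare_earth), "ground points,", len(canopy), "canopy points")
```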

Others who spoke at ILMF shared Young’s view of the software issue. Jennifer Whitacre, account manager for LiDAR sales at MJ Harden Associates Inc. (now a GeoEye company) of Kansas City, cited automatic data filtering as a feature that LiDAR experts rank high on their wish list of processing functionality. Specifically, she said, they want filtering capability robust enough to "clean up" LiDAR points so topographic features in complex elevation profiles can be distinguished, but not so aggressive that the targeted data itself is stripped away.
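One conservative way to build the kind of filter Whitacre describes is to reject only points that deviate sharply from their local neighborhood. The sketch below uses a hypothetical cell size and cutoff (the kind of per-project "tweaking" she mentions later) to drop elevation spikes relative to each grid cell’s median while leaving genuine terrain variation intact.

```python
import numpy as np

# Drop only points that deviate sharply from their grid cell's median
# elevation. Cell size and cutoff are hypothetical tuning knobs.
def filter_spikes(x, y, z, cell=10.0, cutoff=5.0):
    ix = np.floor(x / cell).astype(int)
    iy = np.floor(y / cell).astype(int)
    keep = np.ones(len(z), dtype=bool)
    for cx, cy in set(zip(ix, iy)):
        sel = (ix == cx) & (iy == cy)
        keep[sel] = np.abs(z[sel] - np.median(z[sel])) < cutoff
    return keep

x, y = np.random.rand(2, 10_000) * 500   # placeholder coordinates
z = np.random.rand(10_000) * 2           # gentle terrain...
z[::500] += 50                           # ...with a few gross spikes
print((~filter_spikes(x, y, z)).sum(), "points flagged as spikes")
```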

"I think the higher technology systems are becoming so advanced that software does lag behind," said Whitacre. Her company currently uses a software package capable of providing everything from data calibration to project management, tile parsing and data filtering, but because any given mapping assignment may call for modifications that program developers can’t plan for in advance, "it’s really important to be able to go in and tweak those algorithms to make them fit your specific project," she said. "I think all software has to be tweaked."

Alternate approaches to data acquisition may help reduce the burden for those pushing the data crunch buttons. Whitacre was recently involved in an experiment with BSF Swissphoto that addressed the demands of ever-larger LiDAR projects. Instead of using a medium-format camera per normal procedure, the team flew an Optech ALTM 3100 laser scanner, along with a Vexcel UltraCam-D large-format camera, on a single platform over the city of Paris. A white paper on the experiment observed that the decision to collect LiDAR and digital images simultaneously usually depends on a number of parameters, including project size, ground sampling distance, point spacing and topography, but noted that such configurations often make great sense, due to the "great synergy between elevation and image data, (with) each helping the other achieve its full potential." More to the point, mappers using a large-format camera can benefit from reduced flight times and reduced mosaicking of the larger images that result.

"What we’re saying in this paper is that it’s not always the best solution if you have a very large project area and you’re flying a medium format camera - a smaller camera," Whitacre explained. "You’re going to have a lot of exposures, a lot more than you would receive with your large format camera…This is going to take longer to fly and it’s going to take longer to process. With the large format, you’re collecting fewer exposures, so you’re cutting back on your processing time, as well as saving money that way." See Figure 3.
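The arithmetic behind her argument is straightforward. With invented but plausible frame footprints, a frame that covers four times the ground area cuts the exposure count to roughly a quarter, before even accounting for overlap and flight-line geometry.

```python
# Back-of-the-envelope version of the exposure argument, using invented
# ground footprints and ignoring overlap and flight lines.
project_km2 = 500.0
medium_frame_km2 = 1.0   # assumed medium-format footprint per exposure
large_frame_km2 = 4.0    # assumed large-format footprint per exposure

print(project_km2 / medium_frame_km2, "medium-format exposures")  # 500.0
print(project_km2 / large_frame_km2, "large-format exposures")    # 125.0
```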

Figure 3 - This LiDAR image of Kansas City was generated by MJ Harden Associates Inc. from unedited first return data captured at 0.9-m postings. Since the raw data was not modified by point classification at this stage, building features took on a rather spiky appearance.

Young agrees with Whitacre that the trend toward larger jobs is forcing mappers and software coders to seek out greater economies of technological scale. "A large job say ten years ago was about a maximum of 700 square miles, or maybe 300 to 700 square miles," he said. "And now we’re doing statewide LiDAR programs. You have to be very efficient because it’s very competitive out there. To be able to have software that helps you be more competitive is what’s driving the market."

Software vendors themselves have acknowledged the traditionally "kludgy" approach to the processing of giant datasets. Virtual Geomatics is an Austin company that isn’t shy about commenting on the industry’s struggles in this regard. Its web site notes that workflow complexity is "often due to the limitations of software tools, complexity in the patchwork of kludged-together applications and software work-arounds needed to handle large LAS datasets, and complexity in training new employees to use the mashup of currently required tools and software. Even the best service providers deal with a plethora of unnecessary and wasteful complexity that all ends up pointing to inadequate comprehensive software solutions."

VG4D Production Manager, Virtual Geomatics’ flagship product, is designed to handle workflow management, project management, filtering, 3D visualization and special toolbox needs like gridding and point density calculation. Another up-and-comer in the industry is QCoherent Software of Colorado Springs, which has created an extension called LP360 that integrates point cloud datasets into ArcGIS without requiring an import or conversion process.
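Point density calculation, one of the toolbox functions mentioned above, amounts to counting returns per grid cell and dividing by cell area. A minimal sketch follows; the cell size is an assumed parameter and the coordinates are placeholders.

```python
import numpy as np

# Count returns per grid cell and divide by cell area to get density.
def point_density(x, y, cell=1.0):
    xbins = np.arange(x.min(), x.max() + cell, cell)
    ybins = np.arange(y.min(), y.max() + cell, cell)
    counts, _, _ = np.histogram2d(x, y, bins=[xbins, ybins])
    return counts / (cell * cell)   # points per square unit, per cell

x, y = np.random.rand(2, 100_000) * 1000   # 100k points over a 1-km square
density = point_density(x, y, cell=5.0)
print(round(density.mean(), 3), "points per square meter on average")
```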

In his ILMF presentation on feature extraction from LiDAR data, Overwatch Textron Systems’ Chief Operating Officer Stuart Blundell observed, "Software development is always chasing after advances in sensor capability." He is dedicated to resolving the specific requirements of LiDAR in three areas that create bottlenecks during the feature extraction process:

  • The use of multiple sensors during acquisition;
  • The merging of RGB color data or intensity data with XYZ data points;
  • High spatial resolutions and very large datasets.

Logjams result when the software chokes while trying to accommodate all of these characteristics of LiDAR. According to Blundell, they are most likely to occur at the data registration stage, during feature extraction, or when attributes are applied to the data so it can be useful in a GIS database. If LiDAR data can be manipulated in a way that aligns with GIS-ready vector or shapefile formats, mapping experts should be able to move beyond the mere visualization of a scene and into the extraction of smaller physical features from within that scene. See Figures 4 and 5.
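The second bottleneck on Blundell’s list, merging RGB with XYZ, reduces at its simplest to sampling an orthoimage at each point’s ground position. The sketch below assumes the common GDAL-style six-element affine geotransform; the image, transform and points are all placeholders.

```python
import numpy as np

# Sample an orthoimage at each point's ground position so every XYZ
# record picks up an RGB triple. The geotransform follows the common
# GDAL layout (origin_x, pixel_w, 0, origin_y, 0, -pixel_h).
def colorize(points_xy, image, gt):
    cols = ((points_xy[:, 0] - gt[0]) / gt[1]).astype(int)
    rows = ((points_xy[:, 1] - gt[3]) / gt[5]).astype(int)
    return image[rows, cols]        # shape (N, 3)

image = np.random.randint(0, 255, size=(100, 100, 3), dtype=np.uint8)
gt = (500000.0, 1.0, 0.0, 4400000.0, 0.0, -1.0)   # assumed 1-m pixels
points_xy = np.column_stack([500000 + np.random.rand(10) * 99,
                             4400000 - np.random.rand(10) * 99])
print(colorize(points_xy, image, gt).shape)   # (10, 3)
```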

Figure 4a - The first image shows a 7x7-kilometer tile of airborne LiDAR collected over Denver, Colo. The LiDAR data has been hill-shaded using LiDAR Analyst software to support visualization. Higher elevations appear white and lower elevations green.

Figure 4b - The second image shows the Bare Earth grid automatically extracted from the LiDAR using LiDAR Analyst. The Bare Earth is used to support terrain analysis.

Figure 4c (below) - The third image is the building footprints that are automatically extracted by LiDAR Analyst as 3D Shapefiles. LiDAR Analyst extracts simple, multi-component and complex 3D buildings from LiDAR. These 3D Shapefiles include 18 different geometric and descriptive attributes for each building such as maximum height above ground, roof type and area.

  • LiDAR Analyst is an Overwatch Textron Systems software product designed in 2004 to provide this functionality as a plug-in for ArcGIS and ERDAS Imagine. It will be available in 2008 as a plug-in for Remote View and ELT.

Blundell also emphasizes the importance of file specification standards in software design, and lauds the work that has been completed in this area by the LiDAR Committee of the American Society for Photogrammetry and Remote Sensing. "If people are using LAS (LiDAR Data Exchange Format) standards for LiDAR, regardless of the size of the dataset, that’s standardized in a way that we can interpret information about the spatial resolution and that kind of thing," he says.

The LAS 1.0 specification already is widely accepted in the industry. An update, LAS 1.1, incorporated features that allowed for more robust point marking. LiDAR committee member Lewis Graham describes LAS 1.1 as an interim update that remedies problems in the 1.0 specification, such as limitations in the encoding of flight line numbers. He expects the LAS 2.0 standard to be a major revision, with accommodations for more comprehensive encoding of terrain modeling and for emerging technologies such as waveform digitization.
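Graham’s point about standardization is what makes generic tooling possible: any LAS-aware reader can recover the specification version and per-point records without vendor-specific logic. A minimal sketch, assuming the open-source laspy library and a hypothetical file name:

```python
# Minimal LAS consumption via the open-source laspy library (an
# assumption; any LAS-aware reader would do). "tile.las" is a
# hypothetical file name.
import laspy

las = laspy.read("tile.las")
print(las.header.version)                 # e.g. 1.1 or 1.2
print(len(las.points), "point records")
print(las.x[:5], las.y[:5], las.z[:5])    # scaled real-world coordinates
print(las.classification[:5], las.intensity[:5])
```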

Figures 5a, 5b, 5c & 5d - This series from Overwatch Textron Systems shows feature extraction for identifying a dumpster.

With additional products being made available for multiple platforms and with LiDAR advancing for both airborne and terrestrial mapping projects, the industry is maturing. ILMF Conference Chairman Alastair MacDonald pointed out that record delegate attendance and 34 detailed presentations over two days reflect the technology’s growing importance to the world’s mapping industries. And the variety of talks given this year bore testimony to the increasing richness of niche-specific LiDAR deployments. For example, GIS Visualization Specialist Kevin McMaster informed attendees about the use of LiDAR to target riparian setback restoration areas. Spokesmen for Applied Research Associates Inc. gave a presentation entitled “Rapid Reconstruction of Buildings with Arbitrary Roof Topology from Airborne LiDAR Data.” Another talk focused on sea floor characterization using airborne sensors.

In one particularly novel project, Merrick & Company assisted the U.S. Army Corps of Engineers on a 22-square-mile study of the Missouri River to assess the benefits of LiDAR and sensor fusion for the preservation of eagle habitat. LiDAR was collected along with digital photography and hyperspectral imagery to determine whether new analytic methods are faster and more cost-effective than traditional mapping and field inventory techniques. The company will fuse photographic and hyperspectral imagery onto LiDAR digital elevation models to analyze land cover and vegetation species delineation factors that the Corps might then decide to incorporate into conservation planning efforts. If the approach proves feasible, it will be applied to the entire mitigation region of 735 square miles.

Denver’s ILMF conference showed that the ranks of LiDAR specialists are swelling. With more brains working in tandem and a very capable array of hardware waiting, current deficiencies in the software realm should diminish, positioning LiDAR as an ever more popular option for digital mapping experts in the years ahead.
