Figure 1: Illustration of the basic operating principles of airborne mapping LiDAR and its enabling technologies.
Figure 2: Illustration of LiDAR waveform vs. discrete recording characteristics.
Figure 3: Map showing the Vaca Plateau project area in western Belize defined by the white polygon. The colored lines represent the ground tracks of the different flights.
Figure 4: Elevation rasters generated from a sample of the dataset. A) First return digital surface model (DSM) showing the canopy and open areas, B) bare-earth digital elevation model (DEM) of the Caracol site showing 1) the “Caana” pyramid, 2) causeways to other Mayan cities, and 3) terraces.
Figure 5: Rendering from an airborne classified LiDAR point cloud of the area surrounding the Caana pyramid, the main building in the Caracol complex.
One of the main goals of archeology is to recover traces of human activity from nature’s unforgiving grip, be it in the sands of the desert, the depths of water bodies, or the vegetation of the jungle. While most people outside the field still hold to the Hollywood image of archeologists as Indiana Jones-type adventurers bearing a gun in one hand and a whip in the other, today some archeologists rely on high-tech sensors worthy of the Star Trek or Star Wars films for their explorations. Sensors such as ground-penetrating radar (GPR) to identify buried artifacts and structures, multibeam sonar for subaquatic research, computed axial tomography (CAT) scanners to analyze the internal structure of artifacts, and high-resolution light detection and ranging (LiDAR) to create three-dimensional (3D) models of artifacts, sites and topography have all offered enhanced means of revealing the secrets of the past. Of the four, LiDAR is showing enormous potential in areas of jungle canopy.
Airborne Mapping LiDAR Technology
LiDAR is also known as LaDAR (Laser Detection and Ranging), or optical radar. It is an active remote sensing technique that uses electromagnetic energy in the optical range to detect targets, determine the distance between the target and the instrument (range), and deduce physical properties of the object based on the interaction of the radiation with the target. LiDAR has many scientific, engineering and military applications. Examples include 3D mapping of the atmosphere, topography, bathymetry (underwater surface) and forest structure; the determination of air temperature and wind speed and direction; computer vision for semiautonomous or autonomous vehicle navigation and operation; and the detection of pollutants or chemical agents in the atmosphere or water. LiDAR sensors of multiple types have been deployed at fixed stations and on virtually every imaginable terrestrial, marine, submarine and aerial platform.
The first uses of airborne mapping LiDAR were as profiling altimeters by the U.S. military in the mid-1960s, and included recording transects of Arctic ice packs and detecting submarines. The military’s interest in detecting and mapping underwater targets led to the development of several airborne bathymetric LiDAR profilers during the 1970s. Funding partners for the development of those systems were the U.S. Naval Oceanographic Office (NAVOCEANO), the National Aeronautics and Space Administration (NASA) and the National Oceanic and Atmospheric Administration (NOAA).
In 1977, work began on the first airborne LiDAR system to incorporate a scanning mirror that enabled the laser beam to be steered perpendicular to the aircraft line of flight. The airborne oceanographic LiDAR (AOL), as it became known, was the first airborne LiDAR to perform mapping along swaths rather than simple profiles. The first results of topographic mapping with this system were reported in 1984. In the last two decades, the development of key enabling technologies such as satellite navigation systems, inertial measurement units (IMU), lasers and optical detectors has made LiDAR the most accurate and highest resolution geodetic imaging and mapping method available today.
The basics of airborne mapping LiDAR are illustrated in Figure 1. The core of a system is a laser source that emits pulses of laser energy, each with a typical duration of a few nanoseconds (10⁻⁹ s), repeated several thousand times per second (kHz) at what is called the pulse repetition frequency (PRF). The laser pulses are distributed in two dimensions over the area of interest. The first dimension is along the airplane flight direction and is achieved by the forward motion of the aircraft. The second dimension is obtained using a scanning mechanism, most often an oscillating mirror that steers the laser beam side to side, perpendicular to the line of flight. The combination of the aircraft motion and the optical scanning distributes the laser pulses over the ground in a sawtooth pattern. The selected scanning angle and flying height determine the swath width. The scanning frequency, in conjunction with the PRF, determines the across-track spacing of the laser pulses, or the cross-track resolution. The aircraft ground speed and scan frequency determine the down-track resolution.
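The geometric relationships above can be sketched numerically. The following is a simplified flat-terrain model, not any vendor's planning software; the 40 Hz scan rate is an assumed, illustrative value (the other parameters are typical of the survey described later in this article):

```python
import math

def scan_geometry(agl_m, half_angle_deg, prf_hz, scan_hz, ground_speed_ms):
    """Approximate swath width and laser point spacing for an
    oscillating-mirror airborne LiDAR (flat terrain, nadir-centered scan)."""
    swath = 2.0 * agl_m * math.tan(math.radians(half_angle_deg))
    # One mirror oscillation sweeps the swath twice (left-right-left),
    # so pulses per one-way sweep ~ PRF / (2 * scan rate).
    pulses_per_sweep = prf_hz / (2.0 * scan_hz)
    across_track = swath / pulses_per_sweep
    down_track = ground_speed_ms / (2.0 * scan_hz)  # distance between sweeps
    return swath, across_track, down_track

# scan_hz=40 is an assumed value; the rest follow the parameters in the text.
swath, dx, dy = scan_geometry(agl_m=800, half_angle_deg=18,
                              prf_hz=100_000, scan_hz=40, ground_speed_ms=60)
print(f"swath {swath:.0f} m, across-track {dx:.2f} m, down-track {dy:.2f} m")
```

Note the trade-off the model makes explicit: raising the scan rate tightens the down-track spacing but widens the across-track spacing for a fixed PRF.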
Figure 2 illustrates that the laser energy spreads in a conical fashion as it propagates through the atmosphere, similar to the pattern of a highly directive spotlight. This spread is determined by the laser beam divergence and, in conjunction with the flying height, defines the size of the beam footprint on the ground. Some airborne systems, such as NASA’s Laser Vegetation Imaging Sensor (LVIS), have large footprints of 10-30 m in diameter, as they are used to simulate or validate spaceborne LiDAR sensors. However, most commercial airborne LiDAR units are characterized by beam divergences that produce footprints of 15-90 cm from their typical operational altitudes, and thus are considered “small footprint” systems.
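Under the small-angle approximation, the footprint diameter is simply the flying height times the full beam divergence. A quick sketch, using an assumed (but typical commercial) divergence of 0.25 mrad:

```python
def footprint_diameter(agl_m, divergence_mrad):
    """Laser footprint diameter on flat ground directly below the aircraft,
    approximated as altitude times full beam divergence (small-angle)."""
    return agl_m * divergence_mrad * 1e-3  # mrad * km-scale height -> meters

# 0.25 mrad is an assumed illustrative value, from 800 m above ground:
print(f"{footprint_diameter(800, 0.25) * 100:.0f} cm")  # small-footprint regime
```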
Figure 2 also illustrates a time-versus-intensity plot, or waveform, of a laser pulse propagating in time at the speed of light. When the pulse exits the sensor, it generally has a nearly Gaussian profile. As the light interacts with the trees or the ground, some of the energy is reflected back towards the sensor, modifying the waveform shape according to the geometric properties of the target. The reflected photons are registered by a photodetector, and the signal generated can be recorded continuously for later analysis, as in the case of waveform digitizing systems, or analyzed on the fly by electronics that provide precise timing tags of specific waveform features, as in the case of discrete recording systems. From the analysis of the waveform or the time tags, the two-way flight times between the sensor and the reflective surfaces that the laser pulses encounter along their path are determined. Dividing these time intervals (times of flight) by 2 and multiplying by the speed of light yields the slant range to the reflective surfaces.
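The time-to-range conversion described above is one line of arithmetic; the 5.34 µs round-trip time below is an illustrative value chosen to correspond to roughly 800 m:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def slant_range(two_way_time_s):
    """Convert a round-trip pulse travel time into a slant range:
    halve the time of flight, then multiply by the speed of light."""
    return C * two_way_time_s / 2.0

# A return registered ~5.34 microseconds after emission lies ~800 m away:
print(f"{slant_range(5.34e-6):.1f} m")
```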
In order to determine coordinates for each laser return event, in addition to the range and scan angle, it is necessary to know the airplane’s position and orientation (trajectory). This is achieved by an integrated navigation system (INS) that processes observations from an IMU and a global navigation satellite system (GNSS). The IMU comprises triads of accelerometers and gyroscopes that record the linear accelerations and angular rates of the aircraft. The GNSS observations are carrier phase measurements collected from both the aircraft and fixed ground reference stations, which are processed differentially post mission.
The unique combination of wavelength, beam divergence and scanning capability allows the laser energy to penetrate through the canopy on its way to the ground and back to the sensor. The waveforms are analyzed along their entire reflection path to isolate the last laser return. These last returns, obtained from waveform or discrete LiDAR, are collectively analyzed over a given area using 3D morphological filters to classify the returns as coming from the ground or from other objects. This spatial classification allows the removal of the canopy to obtain digital elevation models (DEM) of the bare surface, revealing geological features or manmade structures hidden under the forest structure. This ability to “see” through the canopy is of high interest to archeologists studying or searching for the remains of past civilizations in heavily forested areas. One such area is the Vaca Plateau in western Belize, where the Mayan city of Caracol flourished between 250 and 900 AD and, after its abandonment, was swallowed by the tropical rainforest.
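The morphological filtering idea can be illustrated with a deliberately simplified sketch. Production ground filters iterate with growing windows and slope tests; this toy version only keeps returns close to the lowest elevation in each horizontal cell, which conveys the core intuition that last returns near the local minimum surface are likely ground:

```python
import numpy as np

def classify_ground(points, cell=5.0, tol=0.3):
    """Toy morphological ground filter: find the lowest return in each
    horizontal cell, then label any return within `tol` meters of its
    cell's minimum as ground. `points` is an (N, 3) array of x, y, z."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    keys = ij[:, 0] * 1_000_003 + ij[:, 1]  # hash each 2D cell to one key
    zmin = {}
    for k, z in zip(keys, points[:, 2]):
        zmin[k] = min(z, zmin.get(k, np.inf))
    return np.array([z - zmin[k] <= tol for k, z in zip(keys, points[:, 2])])

# Two canopy returns ~20 m above two ground returns in the same 5 m cell:
pts = np.array([[1, 1, 100.0], [2, 2, 100.2], [1, 2, 120.0], [2, 1, 121.0]])
print(classify_ground(pts))  # → [ True  True False False]
```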
Mapping the Mayan City of Caracol with Small Footprint Airborne LiDAR
The Caracol area has been extensively studied by Arlen and Diane Chase of the University of Central Florida since 1983. Over the past 25 years, small sections of the Vaca Plateau have been painstakingly surveyed using traditional field techniques. In addition, several attempts have been made to aid the large-scale mapping and exploration of the region with satellite remote sensing, in the form of synthetic aperture radar (SAR) and optical imaging from Landsat and IKONOS. However, the thick vegetation canopy limited the applicability of these remote sensing methods.
To overcome the limitations imposed by the canopy, the Chases obtained a grant from NASA in 2009 to collect airborne LiDAR data of an area spanning 200 km², covering almost the entire Vaca Plateau. In Figure 3, a white polygon outlines the mapped area. The data collection was performed by the National Science Foundation (NSF) National Center for Airborne Laser Mapping (NCALM), a mapping LiDAR research center jointly operated by personnel from the department of Civil and Environmental Engineering of the University of Houston (at the time of the project the personnel were based at the University of Florida) and by the department of Earth and Planetary Science of the University of California at Berkeley.
To maximize the probability of obtaining ground returns through the thick canopy, NCALM mission planners designed a flight plan based on two sets of perpendicular flight lines, roughly oriented north-south and east-west, each with lateral swath overlaps of 50% (see Figure 3). With this design, any given area is imaged from four different angles. In addition to the overlapping flight lines, the chance of canopy penetration was enhanced by planning for a high point density (number of laser shots fired per unit area) of almost 12 shots/m². This high density is achieved by flying low and slow, 800 m above ground level at a ground speed of 60 m/s (~117 knots), by using a narrow scan angle (±18º from nadir), and by using a high laser PRF (100 kHz).
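A back-of-the-envelope check shows how these parameters combine to reach the planned density. This is a rough flat-terrain estimate, assuming every spot is covered by four passes thanks to the 50% overlap in two perpendicular line sets:

```python
import math

# Flight parameters taken from the text
prf = 100_000          # laser pulse rate, Hz
speed = 60.0           # ground speed, m/s
agl = 800.0            # altitude above ground, m
half_angle = 18.0      # scan half-angle from nadir, degrees

swath = 2 * agl * math.tan(math.radians(half_angle))   # ~520 m
single_pass = prf / (speed * swath)                    # shots/m² in one pass

# 50% lateral overlap in two perpendicular line sets -> roughly four passes
total = 4 * single_pass
print(f"{single_pass:.1f} shots/m² per pass, ~{total:.1f} shots/m² total")
```

The estimate lands near the "almost 12 shots/m²" figure; in practice edge effects and turns make the delivered density slightly lower than this idealized number.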
The project was conducted with an Optech, Inc. (Toronto, Ontario) Gemini Airborne Laser Terrain Mapper (ALTM) mounted in a Cessna 337 Skymaster from April 25 to 30, 2009. Six flights were required to complete the mission, totaling 25.5 hours of flight time and 9.24 hours of laser-on time. Figure 3 shows the ground tracks, depicted in a unique color for each flight. A total of 2.38 billion laser shots were fired, and because the Gemini is a discrete return system capable of recording up to four return events per shot, multiple events were detected from the ground and the canopy, yielding approximately 4.28 billion points (~20 returns/m²).
After the GNSS and INS data are processed to obtain the aircraft trajectory, the first data product of the LiDAR survey is generated: an irregularly spaced point cloud, basically a list of 3D coordinates and laser return intensities for each flight line. Instrument biases are calibrated to ensure the seamless integration of all data from the different flights. All the data for the project area are then binned into 1 km x 1 km tiles for ease of processing and storage. The binned laser returns are then classified into ground and non-ground points. Once the ground points have been isolated, it is possible to generate a regularly spaced elevation raster, or digital elevation model (DEM). Elevation rasters can also be created from the first or intermediate returns, in which case they are referred to as digital surface models (DSM). Figure 4 shows shaded relief plots from elevation rasters of the first laser return (DSM) and a bare-earth DEM generated from a sample of the Vaca Plateau survey.
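The final gridding step can be sketched as follows. Production DEMs interpolate between points (e.g., via a TIN or kriging) and fill voids under dense canopy; this minimal sketch shows only the cell-averaging idea:

```python
import numpy as np

def grid_dem(ground_pts, cell=1.0):
    """Grid an irregular cloud of classified ground returns into a regular
    elevation raster by averaging the z values of the points that fall in
    each cell. `ground_pts` is an (N, 3) array of x, y, z coordinates."""
    x, y, z = ground_pts.T
    ci = ((x - x.min()) / cell).astype(int)   # column index of each point
    ri = ((y - y.min()) / cell).astype(int)   # row index of each point
    dem = np.zeros((ri.max() + 1, ci.max() + 1))
    counts = np.zeros_like(dem)
    for r, c, elev in zip(ri, ci, z):
        dem[r, c] += elev
        counts[r, c] += 1
    # Cells with no ground returns become NaN (voids to be filled later)
    return dem / np.where(counts == 0, np.nan, counts)

pts = np.array([[0.2, 0.3, 10.0], [0.8, 0.6, 12.0], [1.5, 0.4, 11.0]])
print(grid_dem(pts))  # 1 m cells: two points averaged in the first cell
```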
By comparing the shaded relief of the first surface with that of the DEM, the power of LiDAR to see through the canopy veil becomes immediately evident. The DEM shows topographic and man-made features hidden by the vegetation canopy. Besides the main architectural structures, which are highly evident, it is possible to identify the stone causeways that linked Caracol to other Mayan cities such as Puchituk, Retiro and Ramonal. Also extremely evident is the large-scale modification of the topography by the Maya in the form of slope terracing for agricultural purposes. See Figure 5.
Other features, not evident in this figure at this scale but also detectable and of high archeological relevance, are the multiple cenotes (sinkholes) and entrances to other types of subterranean structures. These are relevant because of their importance in Mayan folklore as portals to the underworld; as such, they were used for multiple ceremonial purposes, including burials, offerings and sacrifices.
The LiDAR data have been compared, where possible, with the results obtained from earlier field surveys, and the agreement is remarkably good. Throughout much of the area, the LiDAR data actually displayed a higher spatial resolution than the ground survey data, and many new features have been discovered from them. In fact, the level of detail obtainable from such high-resolution data raises concerns about releasing the information to the general public, as looters might use it to identify targets of opportunity. But with appropriate precautions taken, the many products that can be derived from LiDAR observations, including false color and shaded relief images, point clouds, DSMs, DEMs, contour maps, and 3D renderings of these ancient sites, will enable archeologists and the public alike to understand and preserve the Mayan heritage, and that of ancient cultures in other parts of the world, such as southeast Asia, where other archeological sites lie beneath veils of tropical rain forest.