Figure 1 IKONOS band 4, 3, 2 false-color composite, New Orleans, Louisiana. The area is on the border between Jefferson Parish and Orleans Parish, close to the Mississippi River. Image captured on September 2, 2005. The 4-meter pixel resolution makes it possible to identify detailed land use types. IKONOS satellite imagery courtesy of GeoEye.

Urban Land Use Classification

Applying Texture Analysis and Artificial Intelligence

Wenxue Ju, PhD Candidate
Department of Geography and Anthropology
Louisiana State University
Baton Rouge, La.
www.lsu.edu/rsgis

Nina S.-N. Lam, PhD
Professor
Department of Environmental Studies
Louisiana State University
Baton Rouge, La.
www.lsu.edu/rsgis

Urban land use classification has entered an attractive new era with the recently increased availability of high-resolution satellite imagery, such as IKONOS and QuickBird. With these advanced sensors, detailed urban components such as single-family and multi-family houses, trees, driveways, and parking lots are now identifiable, because their distinctive electromagnetic reflectance and shapes can be captured at high resolution. On one hand, this makes detailed land use type identification possible; on the other hand, it makes traditional per-pixel classification unsuitable, because the electromagnetic signatures of different land use types, such as residential, commercial, and industrial, now have large within-class variance and between-class overlap.

In contrast, people can easily separate these land use zones visually because they can make sense of image texture. As Drs. Myint and Lam argued in their article, "This spatial information needs to be extracted, in addition to its individual spectral value, to characterize the heterogeneous nature of urban features in high-resolution images."[1] Accordingly, scientists have developed texture analysis methods and artificial intelligence classifiers to improve urban land use classification performance. A recent study applied both advanced texture metrics and a genetic algorithm to classify land use/cover in New Orleans from an IKONOS image (see Figure 1).


Figure 2 1-meter resolution IKONOS panchromatic band of different neighborhoods in New Orleans. Image captured on September 2, 2005. First row, from left to right: low-density residential/Residential I, high-density residential/Residential II, woodland; second row: commercial, water, industrial; third row, from left to right: flooded Residential I, flooded Residential II, flooded industrial. IKONOS satellite imagery courtesy of GeoEye.

The City of New Orleans

The city of New Orleans, located near the Gulf of Mexico, is densely populated, with most of its urban area below sea level. New Orleans was devastated by catastrophic Category 3 Hurricane Katrina on August 29, 2005, and many neighborhoods were severely flooded. As shown in Figure 2, these neighborhoods exhibit different spatial patterns.

Texture Analysis

This research examines three cutting-edge techniques in particular: fractal dimension, lacunarity, and Moran's I spatial autocorrelation index. The fractal concept was introduced by Mandelbrot to measure the self-similarity and irregularity of complex forms such as coastlines.[2] In fractal geometry, a two-dimensional image may have a fractional dimension between 2 and 3, with a larger value representing a more self-similar and rougher surface.

Figure 3 shows a computer-simulated surface with a fractal dimension of 2.5, generated using the Image Characterization and Modeling System (ICAMS), a software package developed by Lam and her collaborators.[3] Most remote sensing images have larger fractal dimensions than this simulated image. Emerson et al. successfully applied fractal dimension to land cover classification with Landsat imagery.[4]
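For readers who want to experiment, the short Python sketch below estimates the fractal dimension of a grayscale window with differential box counting, one common estimator. It is only an illustration under that assumption and is not necessarily the method implemented in ICAMS or used in this study; the box sizes and synthetic test surfaces are arbitrary choices.

    import numpy as np

    def fractal_dimension_dbc(window, box_sizes=(2, 4, 8, 16)):
        """Estimate the fractal dimension of a grayscale window with
        differential box counting (a common estimator; ICAMS offers its own
        methods, such as isarithm and triangular prism, not shown here)."""
        g = window.astype(float)
        gmin, gmax = g.min(), g.max()
        if gmax == gmin:                        # flat surface: dimension ~2
            return 2.0
        # Rescale intensities to the spatial extent so height and width share units.
        g = (g - gmin) / (gmax - gmin) * window.shape[0]

        sizes, counts = [], []
        for r in box_sizes:
            n_boxes = 0
            for i in range(0, window.shape[0] - r + 1, r):
                for j in range(0, window.shape[1] - r + 1, r):
                    block = g[i:i + r, j:j + r]
                    # Boxes of height r needed to cover the relief in this cell.
                    n_boxes += int(np.ceil((block.max() - block.min()) / r)) + 1
            sizes.append(r)
            counts.append(n_boxes)

        # Slope of log(count) vs. log(1/size) gives the dimension estimate.
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return float(slope)

    # A pure-noise surface comes out rougher (closer to 3) than a smooth
    # gradient (close to 2).
    rng = np.random.default_rng(0)
    print(fractal_dimension_dbc(rng.integers(0, 2048, (64, 64))))
    print(fractal_dimension_dbc(np.add.outer(np.arange(64), np.arange(64))))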

Figure 3 A 500x500 fractal surface generated using ICAMS, Fractal Dimension=2.5.
Figure 4 Demonstration of the moving window technique (numbers indicating 11-bit DN values of IKONOS satellite imagery).

Figure 5 Texture curves of different land use types, using a 33x33-meter moving window (note that fractal dimension and Moran's I values are bounded between 2 and 3 and between -1 and 1, respectively, while lacunarity has no upper bound).
As a counterpart of fractal dimension, lacunarity was introduced to measure the "gap" distribution in an image.[5] Higher lacunarity generally means a more heterogeneous pattern. Lacunarity was used to characterize landscape pattern and was found to increase accuracy in urban land use/cover classification.[6,1]
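The gliding-box algorithm of Plotnick et al.[6] is straightforward to sketch for a binary map, as below. This is a minimal illustration only; the study applies lacunarity to grayscale imagery, which requires a grayscale extension not shown here, and the test patterns are hypothetical.

    import numpy as np

    def gliding_box_lacunarity(binary_map, box_size):
        """Gliding-box lacunarity (Plotnick et al., 1993) for a binary map."""
        h, w = binary_map.shape
        masses = []
        # Slide the box over every position and record its "mass"
        # (number of occupied cells inside the box).
        for i in range(h - box_size + 1):
            for j in range(w - box_size + 1):
                masses.append(binary_map[i:i + box_size, j:j + box_size].sum())
        masses = np.asarray(masses, dtype=float)
        # Lacunarity = second moment / (first moment)^2 of the box-mass distribution.
        return masses.var() / masses.mean() ** 2 + 1.0

    # A clumped pattern has higher lacunarity than an evenly spread one
    # of roughly the same density.
    rng = np.random.default_rng(1)
    even = (rng.random((60, 60)) < 0.3).astype(int)
    clumped = np.zeros((60, 60), dtype=int)
    clumped[10:43, 10:43] = 1      # similar density, concentrated in one patch
    print(gliding_box_lacunarity(even, 5), gliding_box_lacunarity(clumped, 5))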

Spatial autocorrelation describes the spatial similarity or dissimilarity of neighboring pixels.[7] A widely used measure is Moran's I index, which ranges from -1 to 1, with values near -1, 0, and 1 indicating negative, random, and positive autocorrelation, respectively. Applications of spatial autocorrelation in image classification include the use of semivariograms and Moran's I.[8]

Moving window techniques are widely used to generate local measurements of these texture metrics and produce textural layers. A window is placed on top of the image, and the texture measurement inside the window is assigned to the window's central pixel (see Figure 4). As the window moves across the entire image, a new textural layer is generated. Different land use categories exhibit different textural characteristics, and different textural indices capture different textural properties.
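As an illustration of the moving window idea, the sketch below builds a Moran's I textural layer, assuming rook (4-neighbor) contiguity weights and skipping image edges for brevity. The window size, edge handling, and synthetic pan array are illustrative choices rather than details taken from the study.

    import numpy as np

    def morans_i(window):
        """Moran's I for one window, using rook (4-neighbor) contiguity weights."""
        d = window.astype(float) - window.mean()
        denom = (d ** 2).sum()
        if denom == 0:
            return 0.0        # constant window: autocorrelation undefined; return 0 here
        # Cross-products over horizontally and vertically adjacent pairs.
        num = (d[:, :-1] * d[:, 1:]).sum() + (d[:-1, :] * d[1:, :]).sum()
        n_pairs = d[:, :-1].size + d[:-1, :].size   # number of neighbor links
        n = d.size
        # I = (n / W) * sum_ij w_ij d_i d_j / sum_i d_i^2, where W = 2 * n_pairs
        # and each link appears twice in the symmetric double sum.
        return float(n * (2 * num) / (2 * n_pairs * denom))

    def texture_layer(pan, win=33, metric=morans_i):
        """Assign each pixel the texture value of the window centered on it."""
        half = win // 2
        out = np.full(pan.shape, np.nan)
        for i in range(half, pan.shape[0] - half):
            for j in range(half, pan.shape[1] - half):
                out[i, j] = metric(pan[i - half:i + half + 1, j - half:j + half + 1])
        return out

    # Example on a small synthetic array; a real run would read the 1-m IKONOS
    # pan band from file instead (not shown here).
    rng = np.random.default_rng(2)
    pan = rng.integers(0, 2048, (120, 120))      # 11-bit DN values, as in Figure 4
    layer = texture_layer(pan, win=33)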

Figure 5 shows the textural metric curves of the land use types shown in Figure 2, and Figure 6 shows spectral and textural views of Jefferson Parish, La. Researchers can take advantage of these differences to aid classification, and significant improvements were reported by Myint and Lam in 2005.[1]

Artificial Intelligence Classifiers

The inadequacy of the traditional maximum likelihood classifier for detailed urban land use classification has inspired the adoption of artificial intelligence classifiers, such as the multi-layer perceptron neural network used by Liu and Lathrop.[9] Despite its limitations, the maximum likelihood classifier is still the classifier most widely available in commercial remote sensing and photogrammetric software, and it is reasonably reliable in most cases. It could become even more useful if further improved or better implemented.

Figure 6 Spectral (left) and textural (right) views of the same area. Textural layers representing fractal dimension, lacunarity and Moran's I are stacked for a color textural view. The area is near the industrial park in Jefferson Parish, Louisiana, on the right (east) side of the Mississippi River. IKONOS satellite imagery courtesy of GeoEye.
This research took a closer look at the widely used maximum likelihood classifier, with special attention to the often ignored prior probabilities (prior knowledge). We used a genetic algorithm to optimize these probability parameters, with higher classification accuracy as the fitness goal of an evolutionary training process (see Figure 7). The optimized solution is then used as prior knowledge, allowing the maximum likelihood classifier to produce a better classification.
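The article does not spell out the genetic algorithm's settings, so the sketch below only illustrates the general idea: candidate prior-probability vectors are evolved, with training accuracy of a Gaussian maximum likelihood classifier as the fitness function. The arrays X_train (pixel features) and y_train (integer class labels), the population size, and the crossover and mutation operators are all hypothetical assumptions, not the authors' implementation.

    import numpy as np

    def ml_discriminant(X, means, inv_covs, log_dets, log_priors):
        """Gaussian maximum likelihood class assignment with prior probabilities."""
        scores = []
        for m, ic, ld, lp in zip(means, inv_covs, log_dets, log_priors):
            d = X - m
            scores.append(lp - 0.5 * ld - 0.5 * np.einsum('ij,jk,ik->i', d, ic, d))
        return np.argmax(np.stack(scores, axis=1), axis=1)

    def evolve_priors(X_train, y_train, n_classes, pop=30, gens=50, mut=0.1, seed=0):
        """Toy genetic algorithm that evolves class priors so the maximum
        likelihood classifier fits the training data better (illustrative
        settings; the article does not specify them)."""
        rng = np.random.default_rng(seed)
        # Per-class Gaussian statistics estimated once from the training samples
        # (assumes each class has enough samples for a stable covariance).
        means, inv_covs, log_dets = [], [], []
        for k in range(n_classes):
            Xk = X_train[y_train == k]
            means.append(Xk.mean(axis=0))
            cov = np.cov(Xk, rowvar=False) + 1e-6 * np.eye(X_train.shape[1])
            inv_covs.append(np.linalg.inv(cov))
            log_dets.append(np.linalg.slogdet(cov)[1])

        def fitness(priors):
            pred = ml_discriminant(X_train, means, inv_covs, log_dets, np.log(priors))
            return (pred == y_train).mean()     # training accuracy as the GA objective

        population = rng.dirichlet(np.ones(n_classes), size=pop)
        for _ in range(gens):
            scores = np.array([fitness(p) for p in population])
            # Keep the better half, then refill with crossover and mutation.
            parents = population[np.argsort(scores)[::-1][:pop // 2]]
            children = []
            while len(children) < pop - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                child = np.where(rng.random(n_classes) < 0.5, a, b)    # uniform crossover
                child = np.abs(child + rng.normal(0, mut, n_classes))  # mutation
                children.append(child / child.sum())                   # renormalize to sum to 1
            population = np.vstack([parents, children])
        return population[np.argmax([fitness(p) for p in population])]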

Figure 7 Training curve of the classifier.

Figure 8 Classified land use/cover maps using IKONOS multispectral band 4, 3, 2 with traditional maximum likelihood method (top) and the new method (bottom).

Figure 9 Improvement of land use classification accuracy with different approaches, compared to the traditional maximum likelihood classifier.

Figure 10 Improvement of flood detection in different land use categories with texture information utilized, compared with the per-pixel spectral classification. Maximum likelihood classifier was used in both cases.

Workflow

Calculating fractal dimension requires a relatively large window to yield a stable measurement, but a large window may blur land use boundaries. This research used the panchromatic band to generate three textural layers (fractal dimension, lacunarity, and Moran's I). The textural layers were downsampled and stacked with the spectral green, red and near-infrared bands. The blue band was excluded because of atmospheric scattering effects. The composite six-band image therefore contains not only spectral characteristics but also texture measurements extracted with three advanced textural metrics. The composite image then went through a standard supervised classification based on pre-selected training sites, with the genetic algorithm incorporated into classifier training. ADS40 aerial photographs (courtesy of the LSU GIS Clearinghouse Cooperative) were used to check classification accuracy, and field work was also performed.
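As a rough illustration of the layer-stacking step, the sketch below block-averages hypothetical 1-meter texture layers to the 4-meter multispectral grid and stacks them with the green, red and near-infrared bands. The block-mean resampling and the random placeholder arrays are assumptions; the article does not state which resampling method was used.

    import numpy as np

    def block_mean(layer, factor=4):
        """Downsample a 1-m texture layer to the 4-m multispectral grid by
        averaging non-overlapping factor x factor blocks."""
        h = (layer.shape[0] // factor) * factor
        w = (layer.shape[1] // factor) * factor
        return layer[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    # Hypothetical inputs: three texture layers computed from the 1-m pan band,
    # plus the 4-m green, red and near-infrared bands (blue excluded, as in the text).
    fd, lac, mi = (np.random.rand(400, 400) for _ in range(3))
    green, red, nir = (np.random.randint(0, 2048, (100, 100)).astype(float) for _ in range(3))

    composite = np.stack(
        [green, red, nir, block_mean(fd), block_mean(lac), block_mean(mi)], axis=0)
    print(composite.shape)   # (6, 100, 100): the six-band spectral-textural image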

A slightly smaller area than that shown in Figure 6 was used in this research, so flooding had no effect on the classification, as is the case in most land use classification studies. The proposed genetic algorithm classifier was compared with the traditional per-pixel method. Textural layers were generated with a 65x65-meter moving window. A USGS Level II classification scheme was used, with only the land use types present in the area considered: residential, woodland, grassland, commercial, industrial and water. The classified maps are shown in Figure 8.

Figure 9 demonstrates the improvement of the new method over the traditional one. The test found that the traditional per-pixel maximum likelihood classification yielded only 68.5 percent accuracy, whereas accuracy increased to 79.7 percent when the genetic algorithm was used and to 86.6 percent when texture information was used. Using the genetic algorithm and the texture information together yielded an overall 89.3 percent classification accuracy.

In terms of computational efficiency, a test of 20 training scenarios found that, on average, less than 10 additional minutes were needed. Considering the large improvement in accuracy, this small amount of extra time is well justified. Although extraction of textural layers is computationally time-consuming, often taking several hours on a fast Windows-based computer, the computing time could be reduced significantly if parallel or distributed processing were deployed.
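As an example of how the texture extraction could be parallelized, the sketch below splits the pan band into horizontal strips and processes them with a process pool. Local variance stands in for the costlier fractal, lacunarity, and Moran's I calculations, and the strip partitioning is purely illustrative, not the study's distributed-processing design.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def local_variance_strip(args):
        """Texture for one horizontal strip of the pan band (local variance
        stands in here for the more expensive metrics)."""
        pan, row_start, row_end, half = args
        out = np.zeros((row_end - row_start, pan.shape[1]))
        for i in range(row_start, row_end):
            for j in range(half, pan.shape[1] - half):
                win = pan[i - half:i + half + 1, j - half:j + half + 1]
                out[i - row_start, j] = win.var()
        return row_start, out

    def parallel_texture(pan, win=65, workers=4):
        half = win // 2
        rows = np.linspace(half, pan.shape[0] - half, workers + 1, dtype=int)
        tasks = [(pan, rows[k], rows[k + 1], half) for k in range(workers)]
        layer = np.zeros(pan.shape)
        with ProcessPoolExecutor(max_workers=workers) as pool:
            for row_start, strip in pool.map(local_variance_strip, tasks):
                layer[row_start:row_start + strip.shape[0]] = strip
        return layer

    if __name__ == "__main__":
        pan = np.random.default_rng(3).integers(0, 2048, (512, 512)).astype(float)
        print(parallel_texture(pan, win=65).shape)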

Detecting floods or the land-water interface is an important remote sensing application. A preliminary experiment with selected sites in New Orleans was used to test the texture method with the maximum likelihood classifier, and it showed (Figure 10) that omission errors were significantly reduced in most flooded land cover types: residential, commercial, woodland, and completely flooded areas (parking lots, grasslands, etc.). The overall accuracy improved from 60.3 percent to 75.2 percent. Although this preliminary accuracy is still not high enough, it shows the great promise of applying texture analysis to disaster detection. By further combining LIDAR elevation data, other artificial intelligence classifiers, and an expert knowledge system, even higher accuracy could be achieved.

People identify ground features by visually interpreting both spectral information (image color) and textural information (shape, variation, compactness, etc.). Multiple spectral bands have long been used together in traditional classification to better separate different land use types; there is no reason multiple textural bands cannot also be used to increase accuracy.

This comprehensive spectral-spatial image information will help computers work more like human image specialists. Unlike the object-based land use/cover classification available in some commercial software packages, this research follows an ongoing academic trend toward texture-layer-aided approaches. When the texture-aided approach is combined with artificial intelligence algorithms, classification accuracy improves significantly, providing the fast, accurate, robust, automated detection of affected areas that is more critical than ever.


Note This research is partially supported by an NSF grant (Award No. BCS-0726512). Use of image data from GeoEye and the LSU GIS Clearinghouse is also gratefully acknowledged.


End Notes

  1. Myint, S.W., and N.S.-N. Lam, 2005. "Examining lacunarity approaches in comparison with fractal and spatial autocorrelation techniques for urban mapping." Photogrammetric Engineering & Remote Sensing, 71(8): 927-937.
  2. Mandelbrot, B.B., 1967. "How long is the coast of Britain? Statistical self-similarity and fractional dimension." Science, 156: 636-638.
  3. Lam, N. S.-N., D. A. Quattrochi, H.-L. Qiu, and W. Zhao, 1998. "Environmental assessment and monitoring with image characterization and modeling system using multiscale remote sensing data." Applied Geographic Studies, 2(2):77-93.
  4. Emerson, C.W., N.S.-N. Lam, and D.A. Quattrochi, 1999. "Multi-scale fractal analysis of image texture and pattern." Photogrammetric Engineering & Remote Sensing, 65(1):51-61.
  5. Mandelbrot, B.B., 1983. The Fractal Geometry of Nature. New York: W.H. Freeman.
  6. Plotnick, R.E., R.H. Gardner, and R.V. O'Neill, 1993. "Lacunarity indices as measures of landscape texture." Landscape Ecology, 8(3): 201-211.
  7. Cliff, A.D., and J.K. Ord, 1973. Spatial Autocorrelation. London: Pion Limited.
  8. Carr, J.R., and F.P. de Miranda, 1998. "The semivariogram in comparison to the co-occurrence matrix for classification of image texture." IEEE Transactions on Geoscience and Remote Sensing, 36(6): 1945-1952.
  9. Liu, X., and R.G. Lathrop, 2002. "Urban change detection based on an artificial neural network." International Journal of Remote Sensing, 23(12): 2513-2518.



 