Mapping Chicago area urban tree canopy using color infrared imagery
Abstract: Satellite imagery has been used to create highly accurate land cover maps, even effectively identifying different types of vegetation. Differences in pixel values in one or more colors in an image provide a way to create computer-generated land cover maps, but the large pixel size of many satellite images is not appropriate for urban areas, where details such as individual street trees are not recognizable. In Illinois, USA, color infrared (CIR) aerial photos with a two-meter pixel size are freely available for download. This study explores the possibility of developing a method for mapping tree canopy in an urban environment that can be applied to various images from this CIR collection. The possibility of applying the pixel values that best identify tree canopy in one image to several other images (rather than evaluating tree canopy characteristics in each image) was tested. The limits of this type of template for accurately mapping urban tree canopy, and the classification method that is most effective when applied to several different images, form the focus of this paper. One image was used for model development, and the most successful model tested on that image was applied to three adjacent images for further evaluation. The three bands in the color infrared image (near-infrared (NIR), red, and green) were tested for their potential to separate pixels representing tree canopy from those representing grass. Nine indices (combinations of pixel values from more than one color) were also evaluated. Using the normalized difference vegetation index (NDVI), which employs a combination of red and NIR values to detect biomass, a mask was created to exclude pixels that did not represent vegetation. General statistics for pixel values in areas identified as grass or canopy were then evaluated to determine the best ranges for assigning the remaining pixels to the correct vegetation type.
Three automated computer classifications (where the computer creates a map based on examples of different land cover in the image) were tested for comparison against these methods. Results from the classification of the first image showed that both the red and green color values produced a land cover classification with 82% accuracy. The automated computer classifications ranged from 79 to 80% in overall accuracy. Two of the vegetation indices, NIR/green and (NIR + red + green)/3, also resulted in 80% overall accuracy. When the model developed on the first image was extended to include three adjacent images, both the red and green spectral band values, and two of the vegetation indices, produced land cover classifications with greater than 76% overall accuracy. The model that produced the best tree canopy classification when applied to all four images defined tree canopy pixels as those where NDVI > 0.119 and (NIR + red + green)/3 <= 132. This tree canopy definition resulted in an overall accuracy of 81% for the expanded study area. The estimated error in tree canopy extent was +25.7%.
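The best-performing model above reduces to two simple per-pixel tests on the three CIR bands. As an illustration only, the rule can be sketched in Python with NumPy; the function name and the assumption of 8-bit digital numbers are mine, not the paper's, and the thresholds are the ones reported in the abstract (NDVI > 0.119, mean brightness <= 132).

```python
import numpy as np

def classify_tree_canopy(nir, red, green):
    """Sketch of the abstract's best canopy rule on CIR band arrays.

    A pixel is labeled tree canopy when:
      * NDVI = (NIR - red) / (NIR + red) > 0.119  (vegetation mask), and
      * (NIR + red + green) / 3 <= 132            (canopy is darker than grass).

    Assumes 8-bit digital numbers (0-255); returns a boolean array.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    green = green.astype(float)

    # Guard against division by zero where NIR + red == 0.
    denom = nir + red
    ndvi = np.where(denom == 0, 0.0,
                    (nir - red) / np.where(denom == 0, 1.0, denom))

    brightness = (nir + red + green) / 3.0
    return (ndvi > 0.119) & (brightness <= 132)

# Hypothetical pixels: dark canopy, bright grass, non-vegetation.
nir = np.array([120, 200, 80])
red = np.array([60, 100, 90])
green = np.array([80, 120, 100])
print(classify_tree_canopy(nir, red, green))  # → [ True False False]
```

In the toy example, the first pixel passes both tests (NDVI ≈ 0.33, brightness ≈ 87), the second is vegetated but too bright (brightness = 140), and the third has negative NDVI, so only the first is labeled canopy.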