Image Processing

Image Classification

The process of assigning pixels in a satellite image to predefined categories (land cover types) based on their spectral characteristics. Methods include supervised classification (using training data) and unsupervised classification (automated clustering).

Overview

Image classification in remote sensing assigns thematic labels — typically land cover or land use categories — to pixels or groups of pixels in satellite imagery. It transforms continuous spectral data into discrete categorical maps. Classification is one of the most fundamental tasks in Earth observation, underpinning applications from national land cover inventories to global deforestation alerts.

How It Works

Classification approaches divide into supervised and unsupervised methods. In supervised classification, the analyst provides labeled training samples and the algorithm learns the spectral characteristics of each class. Common algorithms include Maximum Likelihood, Support Vector Machines, and Random Forest — an ensemble of decision trees widely favored for its robustness and minimal tuning requirements. Unsupervised classification (K-Means, ISODATA) requires no training data, but the resulting clusters must be manually interpreted and assigned to thematic classes.
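The contrast between the two approaches can be sketched with scikit-learn. The spectral data below is synthetic and the class means are illustrative assumptions; a real workflow would read bands from a satellite scene and training polygons instead.

```python
# Sketch: supervised (Random Forest) vs. unsupervised (K-Means) classification
# of per-pixel spectral features. All values here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "image": 4 spectral bands per pixel, two spectrally distinct classes.
water = rng.normal(loc=[0.05, 0.04, 0.03, 0.02], scale=0.01, size=(200, 4))
veg = rng.normal(loc=[0.04, 0.08, 0.05, 0.40], scale=0.02, size=(200, 4))
pixels = np.vstack([water, veg])
labels = np.array([0] * 200 + [1] * 200)  # 0 = water, 1 = vegetation

# Supervised: learn spectral signatures from labeled training samples.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(pixels, labels)
pred = rf.predict(pixels)

# Unsupervised: cluster pixels with no labels; an analyst must then decide
# which thematic class each cluster represents.
km = KMeans(n_clusters=2, n_init=10, random_state=0)
clusters = km.fit_predict(pixels)

print("RF training accuracy:", (pred == labels).mean())
```

In practice the supervised model would be validated on held-out reference samples rather than its own training pixels.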

Deep learning has transformed classification since the mid-2010s. Architectures such as U-Net and DeepLabV3+ perform semantic segmentation, classifying every pixel while preserving spatial detail. Accuracy assessment uses confusion matrices to derive overall accuracy, producer's and user's accuracy, and the Kappa coefficient.
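The accuracy metrics derived from a confusion matrix can be computed directly. The 2×2 matrix below is invented for illustration (rows are reference classes, columns are predicted classes):

```python
# Sketch: overall, producer's, and user's accuracy plus Kappa from a
# confusion matrix. The counts are made up for illustration.
import numpy as np

cm = np.array([[50,  5],   # reference "forest": 50 correct, 5 omitted
               [ 3, 42]])  # reference "non-forest": 3 committed, 42 correct

overall = np.trace(cm) / cm.sum()           # correct / total
producers = np.diag(cm) / cm.sum(axis=1)    # per-class view of omission error
users = np.diag(cm) / cm.sum(axis=0)        # per-class view of commission error

# Kappa: agreement corrected for chance, using row/column marginal products.
expected = (cm.sum(axis=1) @ cm.sum(axis=0)) / cm.sum() ** 2
kappa = (overall - expected) / (1 - expected)

print(f"Overall accuracy: {overall:.3f}")   # 0.920
print(f"Producer's accuracy: {producers}")
print(f"User's accuracy: {users}")
print(f"Kappa: {kappa:.3f}")                # 0.839
```

Producer's accuracy answers "of the reference pixels in this class, how many were found?"; user's accuracy answers "of the pixels mapped as this class, how many are right?".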

Key Facts

  • Random Forest is the most widely used ML classifier for LULC mapping.
  • A confusion matrix cross-tabulates predicted vs. reference classes to quantify accuracy.
  • Object-Based Image Analysis (OBIA) classifies image segments rather than individual pixels, reducing salt-and-pepper noise.
  • Training data quality is often the single largest factor determining classification accuracy.
  • Multi-temporal classification dramatically improves crop type mapping by capturing phenological differences.
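The multi-temporal idea in the last point amounts to stacking features across acquisition dates so the classifier sees a phenological trajectory rather than a single snapshot. A minimal sketch with synthetic array shapes (the dimensions are illustrative assumptions):

```python
# Sketch: multi-temporal feature stacking for crop type mapping.
# Each acquisition date contributes its spectral bands to one long
# per-pixel feature vector. Arrays are random placeholders.
import numpy as np

n_dates, n_pixels, n_bands = 6, 1000, 4  # e.g. 6 scenes over a growing season
series = np.random.rand(n_dates, n_pixels, n_bands)

# Reorder to (pixels, dates, bands), then flatten dates x bands per pixel:
# each pixel becomes a 24-element feature vector capturing its phenology.
features = series.transpose(1, 0, 2).reshape(n_pixels, n_dates * n_bands)
print(features.shape)  # (1000, 24)
```

These stacked vectors feed directly into any per-pixel classifier, such as the Random Forest described above.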

Applications

National Land Cover Mapping

Programs like NLCD (US), CORINE (Europe), and GlobeLand30 use classification to produce standardized LULC maps.

Deforestation Monitoring

Systems like Global Forest Watch use classification on Landsat time series to detect forest loss at 30m resolution globally.

Agricultural Crop Type Mapping

Classifying satellite time series by crop type supports yield estimation and food security monitoring.

Urban Expansion Tracking

Classifying impervious surfaces from multi-temporal imagery reveals urban growth patterns.

Limitations & Considerations

Spectrally similar classes are difficult to separate with multispectral sensors alone. Mixed pixels cause systematic misclassification along boundaries. Models trained in one region often transfer poorly to others. Deep learning requires substantial training data and GPU compute. Class definitions are subjective and vary between classification systems.

History & Background

Maximum Likelihood Classification dominated from the late 1970s through the 1990s. Breiman's Random Forest algorithm (2001) was rapidly adopted. Object-Based Image Analysis emerged in the early 2000s with sub-meter commercial imagery. Deep learning arrived around 2015-2017 with CNNs adapted for semantic segmentation. Today, foundation models like Prithvi are emerging as general-purpose classifiers.

Analyze Image Classification data with LYRASENSE

Use our agentic notebook environment to work with satellite data and run analyses like Image Classification — no setup required.