Figure 1: Graphical representation of the 3D hyperspectral data cube and the acquisition parameters of scanning systems. Scanning systems measure only one slice through the data cube at any given instant: wavelength-scanning (tunable-filter) systems measure a slice at a fixed wavelength, while slit-scanning (pushbroom) systems measure a slice at a fixed position along one spatial dimension.
A conventional color image has three color values per pixel (red, green, and blue), but a hyperspectral image can have hundreds of values sampled across the electromagnetic spectrum. Because every material has a characteristic spectral signature, this information can be used to identify an object by analyzing its spectrum and comparing it to a database of known materials.
In hyperspectral imaging, this spectral data is combined with spatial information to create a three-dimensional hyperspectral data cube. Two dimensions, X and Y, describe the spatial position of a point within the area of interest, and the third dimension, Z, provides the spectral signature at that point.
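In software, such a cube is naturally represented as a 3D array: two spatial axes and one spectral axis. A minimal NumPy sketch, with dimensions and band count chosen purely for illustration:

```python
import numpy as np

# Illustrative cube: 100 x 100 spatial pixels, 200 spectral bands.
height, width, n_bands = 100, 100, 200
cube = np.zeros((height, width, n_bands))

# The full spectral signature at one spatial point (x, y) is a
# 1D slice along the Z (wavelength) axis.
x, y = 42, 17
spectrum = cube[y, x, :]     # shape: (n_bands,)

# A monochromatic image at one wavelength is a 2D slice at a fixed band.
band_image = cube[:, :, 50]  # shape: (height, width)
```

These two slicing operations correspond directly to the two scanning geometries shown in figure 1: a fixed-wavelength image versus a fixed-position spectrum.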
Over the years, a number of different techniques have been developed to acquire hyperspectral images. The two main methods currently commercially available are scanning and staring systems.
Scanning hyperspectral imagers scan a scene over time to build up the hyperspectral data cube. This is done either by wavelength scanning, which measures an image slice at a fixed wavelength, or by slit scanning, which measures a spectral slice at a fixed spatial position (figure 1). The advantage of these systems is that they can provide extremely high-resolution images. However, because multiple exposures are required to build the cube, motion artifacts are common, making these technologies unsuitable for high-speed applications.
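The two scanning strategies differ only in which axis of the cube is built up over time. A rough NumPy sketch, with placeholder zero-valued slices standing in for real sensor frames and dimensions chosen for illustration:

```python
import numpy as np

# Illustrative dimensions, not tied to any particular instrument.
height, width, n_bands = 64, 64, 32

# Wavelength scanning: each exposure is a full 2D image at one
# wavelength; the cube is assembled by stacking along the spectral axis.
band_images = [np.zeros((height, width)) for _ in range(n_bands)]
cube_wavelength = np.stack(band_images, axis=-1)  # (height, width, n_bands)

# Slit (pushbroom) scanning: each exposure is one spatial line with all
# bands; the cube is assembled by stacking along the scanned spatial axis.
line_slices = [np.zeros((width, n_bands)) for _ in range(height)]
cube_pushbroom = np.stack(line_slices, axis=0)    # (height, width, n_bands)
```

Either way, anything in the scene that moves between exposures ends up in inconsistent slices of the cube, which is the source of the motion artifacts described above.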
Staring hyperspectral imagers capture the spectral and spatial information of a scene simultaneously (figure 2). The information is collected in a single image on a commercial two-dimensional focal plane array. The advantage is that all motion artifacts are eliminated and images can be captured at video rates, limited only by the speed of the camera. The trade-off for this rapid, artifact-free capture is a reduction in either spatial or spectral resolution.
This massively parallel system collects the full three-dimensional hyperspectral data cube without scanning. Incident photons from a surveillance image are detected en masse by an optical processor. Manipulation of the data set occurs before any electronic detection or software processing, operating on the data at the speed of light; no computer algorithm can process faster.
Figure 2: The staring hyperspectral imager uses a parallel optical processor to measure the entire data cube simultaneously and distributes the entire cube onto a detector array in real time. The system combines spectral and spatial information in a single frame to create a 3D hyperspectral data cube, in which each point is described by its X and Y position and its wavelength along Z.