Super-Resolution Challenges in Hyperspectral Imagery
Fereidoun A. Mianji,
Humayun Karim Sulehria,
Mohammad Reza Kardan
In this study, a variety of recently proposed spatial
and spectral processing methods for hyperspectral imagery are reviewed
and several important aspects of super-resolution problems and challenges
are presented. The inherent variability of target and background spectra
in hyperspectral imagery, the high dimensionality of hyperspectral
data in image classification, the limitations in applying learning-based
methods and the question of an optimal resolution factor for an arbitrary
set of images are some of the main challenges in this field.
INTRODUCTION
In most electronic imaging applications, High Resolution (HR) is required.
An HR image offers more detail than a Low Resolution (LR) image due
to its higher pixel density, which is crucial in many applications.
Spatial resolution is the basis for expressing the resolution of monochrome
images, while for color images or material-analysis purposes, spectral
resolution must also be seriously taken into account.
Many applications involve the remote detection of objects such as plant
species on the ground or military vehicles. For such applications,
hyperspectral imagery is more suitable. Figure
1 shows how hyperspectral imagery sensors provide image data containing
both spatial and spectral information. The basic idea for hyperspectral
imaging stems from the fact that, for any given material, the amount of
radiation that is reflected, absorbed, or emitted, i.e., the radiance,
varies with wavelength. Hyperspectral imaging sensors measure the radiance
of the materials within each pixel area at a very large number of contiguous
spectral wavelength bands (Manolakis et al., 2003). The resulting
reflectance representation, termed the spectral signature, if sufficiently
characterized, can be used to identify specific materials in a scene.
Fig. 1: Four basic parts of a remote sensing system: the radiation
source, the atmospheric path, the imaged surface and the sensor
Different materials produce different electromagnetic
radiation spectra. The spectral information contained in a hyperspectral
image pixel can therefore indicate the various materials present in
the scene. Figure 2, for example, illustrates the different reflectance
spectra of naturally occurring materials such as soil, green vegetation
and dry vegetation.
Unfortunately, atmospheric scattering, secondary illumination, changing
viewing angles and physical limitations of imaging sensors such as dynamic
range, pixel size, artifacts and sensor noise degrade the quality of these
data. We refer again to the hyperspectral remote sensing system shown in Fig.
1 to describe this process. It has four basic parts:
the radiation source, the atmospheric path, the imaged surface and the
sensor. The primary source of illumination in a passive remote sensing
system is the sun. The solar energy is modified before reaching the sensor
due to the following factors:
- Intensity modifications and spectral changes during
propagation through the atmosphere.
- Interaction with the imaged surface materials, i.e., reflection, transmission
and/or absorption by these materials.
- Additional intensity modifications and spectral changes while passing
back through the atmosphere.
Finally, the energy reaches the sensor, where it is measured and converted
into digital form for further processing and exploitation.
Spatial resolution enhancement and spectral resolution enhancement are
basically different approaches for different applications, but the need
for both spatial and spectral resolution in many hyperspectral images
has attracted many researchers to develop new techniques for super-resolution
of hyperspectral imagery in both spatial and spectral regions. In this
research, some areas of research in Super-Resolution (SR) of hyperspectral
imagery are outlined and some of the effective methods in resolution enhancement
and the main challenges in this field are highlighted. It is also shown
that, although spatial resolution plays a secondary role in hyperspectral
imagery, it remains of considerable importance.
SPECTRAL RESPONSE OF END MEMBERS
A frequent assumption in hyperspectral remote sensing is that spectral
signatures result from linear combinations of end-member spectra (Penn,
2002). End-members are the pure material components of a scene, each
represented by a spectrum in N-dimensional space. Let E equal the number
of end-members in the spectral library, with e ranging from 1 to E. Each
spectrum in the library consists of N discrete wavelengths λ_n, where
n = 1 to N. Let S_e(λ_n) represent the spectral response of material e
at wavelength λ_n. Each spectrum in the library is then described by
the following vector:

S_e = (S_e(λ_1), S_e(λ_2), ..., S_e(λ_N))

An unknown spectrum u = (u_1, u_2, ..., u_N) is modeled as a linear
combination of the library end-members, with abundances given by the
estimation vector x = (x_1, x_2, ..., x_E), where 0 ≤ x_e ≤ 1 and

x_1 + x_2 + ... + x_E = 1

For a mixture described by u, the spectral response at λ_n,
S_u(λ_n), is then:

S_u(λ_n) = Σ_{e=1}^{E} x_e S_e(λ_n)
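As a minimal illustration of this linear mixing model, the following Python sketch mixes three made-up library spectra with known abundances and recovers them by ordinary least squares. The spectra and abundance values are invented for demonstration; a full unmixing code would also enforce the non-negativity and sum-to-one constraints explicitly rather than rely on clean, noise-free data.

```python
import numpy as np

# Hypothetical spectral library: E = 3 end-members sampled at N = 5 wavelengths.
# Columns of S are the end-member spectra S_e(lambda_n).
S = np.array([[0.9, 0.1, 0.4],
              [0.8, 0.2, 0.5],
              [0.6, 0.3, 0.3],
              [0.4, 0.5, 0.2],
              [0.2, 0.7, 0.1]])

# True abundances: non-negative and summing to 1.
x_true = np.array([0.5, 0.3, 0.2])

# Observed mixed spectrum under the linear mixing model: u = S @ x.
u = S @ x_true

# Least-squares estimate of the abundances (unconstrained here).
x_hat, *_ = np.linalg.lstsq(S, u, rcond=None)

print(np.round(x_hat, 3))  # -> [0.5 0.3 0.2]
```

With noisy real data the unconstrained estimate can leave the [0, 1] simplex, which is why practical unmixing methods solve a constrained least-squares problem instead.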
DATA CUBES AND MIXED PIXELS
Airborne hyperspectral imaging sensors produce a three dimensional (3D)
data structure (as a result of spatial and spectral sampling), referred
to as a data cube.
Figure 3 shows an example of such a data cube. If we
extract the values at the same spatial location across all spectral bands
and plot them as a function of wavelength, the result is the average spectrum
of all the materials in the corresponding ground resolution cell. In contrast,
the values of all pixels in the same spectral band, plotted in spatial
coordinates, result in a grayscale image depicting the spatial distribution
of the reflectance of the scene in the corresponding spectral wavelength
(Manolakis et al., 2003). A hyperspectral data cube is composed
of pure and mixed pixels, where a pure pixel contains a single surface
material and a mixed pixel contains multiple materials. Figure 4 shows
the mixed-pixel interference.
Fig. 3: Spectra for single pixels in hyperspectral images
Fig. 4: Pure pixels and mixed pixels. Smaller targets are detectable
by means of smaller pixel sizes
Figure 4 shows how, depending on the spatial resolution
of the sensor and the spatial distribution of surface materials within
each ground resolution cell, radiance from all materials within a ground
resolution cell is seen by the sensor as a single image pixel. Mixed-pixel
interference is one of the main obstacles in hyperspectral imagery.
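The two views of a data cube described above correspond to simple array slicing. The sketch below uses a small random numpy array as a stand-in for a real cube (which would have hundreds of bands):

```python
import numpy as np

# Toy data cube with layout (rows, cols, bands); real airborne cubes
# are organized the same way, just far larger.
rng = np.random.default_rng(0)
cube = rng.random((4, 4, 10))  # 4x4 pixels, 10 spectral bands

# Spectral view: all band values at one spatial location -> one spectrum.
spectrum = cube[2, 1, :]       # shape (10,)

# Spatial view: all pixel values in one band -> one grayscale image.
band_image = cube[:, :, 6]     # shape (4, 4)

print(spectrum.shape, band_image.shape)  # -> (10,) (4, 4)
```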
SPATIAL PROCESSING VS. SPECTRAL PROCESSING
All practical sensors have limited spatial and spectral resolution, which
results in finite-resolution recordings of the scene radiance. The field
of digital image processing refers to processing digital images by means
of a digital computer, here with the aim of improving the quality of LR images.
The most direct solution for increasing spatial resolution is to reduce the
pixel size (the basic unit of an image sensor). As the pixel size decreases,
however, the shot noise that degrades image quality increases. To avoid
the severe effects of shot noise, there is a limit on pixel-size
reduction; the optimal limiting pixel size is about 40 μm²
for a 0.35 μm CMOS process (Choi et al., 2004). Because
current image sensor technology has almost reached this level,
the best approach is to use image processing methods to obtain an HR image
from observed LR images. Basic methods for image enhancement include:
- Enhancement in the Spatial Domain (SD), using techniques
such as histogram processing, arithmetic/logic operations and spatial
filtering.
- Enhancement in the Frequency Domain (FD), using smoothing FD filters,
sharpening FD filters and homomorphic filtering.
Spatial methods have proven to be more flexible and efficient compared
to the frequency methods (Omrane and Palmer, 2003). SR algorithms attempt
to recover the high-resolution image from observations corrupted by the
limitations of the optical imaging system. This type of problem is an example of an inverse
problem, wherein the source of information (high-resolution image) is
estimated from the observed data (low-resolution image or images) (Farsiu
et al., 2004).
Table 1: Comparison of spatial processing and spectral processing
The spectral resolution is determined by the width Δλ of the
spectral bands used to measure the radiance at different wavelengths λ.
In Table 1 (adapted from Manolakis et al., 2003)
some of the most important properties of spatial and spectral image processing
are compared. Generally speaking, spectral processing techniques
have higher priority in remote sensing, although it should be noted
that higher spatial resolution is desirable even in hyperspectral imagery.
Any effort to measure the spectral properties of a material through the
atmosphere must consider the absorption and scattering of the atmosphere,
the subtle effects of illumination and the spectral response of the sensor.
The recovery of the reflectance spectrum of each pixel from the observed
radiance spectrum is facilitated with the use of sophisticated atmospheric
compensation codes. Now we discuss the main approaches for SR as applied
to hyperspectral imagery.
MAIN APPROACHES TO SUPER-RESOLUTION
The SR problem has received much interest within the Image Processing
(IP) community. Lu et al. (2004) suggested that the underlying
meaning of the SR problem refers to super-resolving an imaging system
by image sequence observation, instead of merely improving the image sequence
itself. An SR algorithm consists of two steps: image registration and
fusion of many LR images into an HR image. Many effective techniques have
been developed for the first step, which is also called motion estimation
(Chalidabhongse and Kuo, 1997; Li et al., 1994). The second step
is based on the fact that the HR image, after being appropriately shifted,
blurred and down-sampled to take into account the alignment and to model
the imaging process, should produce the LR images.
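The forward observation model underlying the second step (shift, blur, down-sample) can be simulated in a few lines. This is a deliberately minimal sketch: integer shifts via `np.roll` and a 2x2 box blur stand in for true sub-pixel motion and the sensor PSF.

```python
import numpy as np

def observe(hr, shift, factor):
    """Simulate one LR observation: shift, blur and down-sample an HR image.

    shift  : integer (dy, dx) displacement on the HR grid (a stand-in for
             the sub-pixel motion between frames)
    factor : down-sampling factor
    """
    shifted = np.roll(hr, shift, axis=(0, 1))
    # Simple 2x2 box blur as a stand-in for the sensor PSF.
    blurred = (shifted
               + np.roll(shifted, 1, axis=0)
               + np.roll(shifted, 1, axis=1)
               + np.roll(shifted, 1, axis=(0, 1))) / 4.0
    return blurred[::factor, ::factor]

rng = np.random.default_rng(1)
hr = rng.random((8, 8))

# Four LR frames with different shifts, as an SR algorithm assumes it receives.
frames = [observe(hr, (dy, dx), 2) for dy in (0, 1) for dx in (0, 1)]
print(len(frames), frames[0].shape)  # -> 4 (4, 4)
```

An SR reconstruction then searches for the HR image that, when passed through this model, best reproduces all observed LR frames.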
Learning-Based Method (LBM): A general SR method is to capture
multiple LR observations of the same scene through sub-pixel shifts in the image
sensor's motion. However, this method requires an accurate registration
process, which is a difficult and challenging task (Joshi et al., 2006).
In recent years, IP researchers have started to exploit different approaches
to overcome this difficulty, as discussed here. One technique is the LBM
for image SR, in which, a database of high-resolution training images
is used to create high-frequency details in the zoomed images (Baker and
Kanade, 2002; Capel and Zisserman, 2001; Freeman et al., 2002).
The main advantage of LBM is that it provides a natural way of obtaining
the required image characteristics. The main disadvantage of this method
is the long learning time it requires, which severely limits its applications.
Reconstruction-Based Algorithms (RBA): These algorithms have been
around for a few decades and produce HR images that minimize the difference
between observed LR images and images estimated from the HR image with
a camera model. There are some techniques and problems for RBA and these
are explained below.
SR variable-pixel linear reconstruction: The development and applications
of this SR method are described in (Merino and Nunez, 2007). The algorithm
works by combining different LR images in order to obtain a resultant
HR image. It has been shown to improve the spatial resolution of
satellite images of the Earth's surface, allowing recognition of objects
with sizes approaching the limiting spatial resolution of the LR images.
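In the idealized case (pure down-sampling, known half-pixel shifts, no blur or noise) combining shifted LR frames reduces to interleaving them onto the HR grid. The sketch below is a much-simplified shift-and-add illustration of that idea, not the Merino and Nunez algorithm itself:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Interleave sub-pixel-shifted LR frames onto an HR grid.

    Each LR pixel is placed at the HR position implied by its frame's
    shift; a crude stand-in for variable-pixel linear reconstruction.
    """
    h, w = frames[0].shape
    hr = np.zeros((h * factor, w * factor))
    for frame, (dy, dx) in zip(frames, shifts):
        hr[dy::factor, dx::factor] = frame
    return hr

rng = np.random.default_rng(2)
truth = rng.random((6, 6))
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
# Ideal LR frames: pure down-sampling after each sub-pixel shift (no blur).
frames = [truth[dy::2, dx::2] for dy, dx in shifts]

recovered = shift_and_add(frames, shifts, 2)
print(np.allclose(recovered, truth))  # -> True
```

Real data break this exactness: blur, noise and non-integer shifts are why reconstruction-based SR becomes an iterative estimation problem.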
Blur and noise: Blur in LR images has received considerable attention
in image reconstruction. In many practical situations, the blur is often
unknown and little information is available about the true image. Therefore,
the true image is identified directly from the corrupted image by using
partial or no information about the blurring process and the true image.
In overcoming the blur problem, blind SR techniques are used and it is
anticipated that research in integrating various blind SR algorithms will
continue in the future (Lu et al., 2004).
In addition, noise is often amplified, which induces severe ringing
or aliasing artifacts in the process of restoration (Pan, 2002). Some
new methods for the reconstruction of an HR image from a set of highly
undersampled and thus aliased images are presented in several articles (Marziliano
and Vetterli, 2000; Vandewalle et al., 2004; Vandewalle et al., 2005).
Computational complexity: RBA requires iterative calculations
and has large calculation costs because reconstruction-based SR is a large-scale
problem (Tanaka and Okutomi, 2007). The proposed methods (Alvarez et
al., 2004; Zhang et al., 2005) for solving the problem of computational
complexity are to introduce fast algorithms, such as:
- Reducing the number of observed pixel value estimations
from the high-resolution image
- Using an average of pixel values in a divided region
- Determining the pixels in the image that provide useful information
for calculating the HR image
- Parallel image reconstruction
Other performance measures: All the suggested techniques have
limited performance. SR performance depends on a complex relationship
between measurement SNR, the number of observed frames, sets of relative
motions between frames, image content and the imaging system's Point-Spread
Function (PSF) (Robinson and Milanfar, 2006). It has been shown that this
degradation occurs most severely along edges within images. Furthermore,
the question of an optimal resolution factor for an arbitrary set of images
is still intriguing (Farsiu et al., 2004).
Instrumental Schemes (IS): Instrumental limitations in hyperspectral
(HS) cameras make it difficult to perform HR scanning of microscopic samples,
as well as target and material identification in remotely sensed data. Appreciating
the significance of this problem, some IS and computational methods for
improving the spatial resolution of HS images were proposed (Munechika
et al., 1993; Robinson et al., 2000). As an example, the
increase in scanning resolution of microscopic samples can be achieved
by combining a high-precision stepping table which shifts the spatial
positions of the HS camera with a maximum entropy SR method (Buttingsrud
and Alsberg, 2006). The basis of this method is to combine multiple LR
HS images to construct a single HS image with a higher spatial resolution
at all wavelengths, such that the spectral profile in each pixel remains accurate.
The generated image quality is limited by the resolution and noise of
the stepping table and the original camera.
Fusion methods: Improving the resolution in HS images has a high
payoff, but applying SR techniques separately to each spectral band is
problematic for two main reasons. First, the number of spectral bands
can be in the hundreds, increasing the computational load excessively.
Second, considering the bands separately does not make use of the information
that is present across them. Furthermore, separate band SR does not make
use of the inherent low dimensionality of the spectral data, which can
effectively improve the robustness against noise. A proposed approach
is a model that enables representing the HS observations from different
wavelengths as weighted linear combinations of a small number of basis
image planes (Bachmann et al., 2005). Then, a method for applying
SR to HS images using this model is presented. The method fuses information
from multiple observations and spectral bands to improve spatial resolution
and reconstruct the spectrum of the observed scene as a combination of
a small number of spectral basis functions.
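The low-dimensional spectral model behind this fusion approach is easy to demonstrate: if every pixel spectrum is a combination of a few basis spectra, an SVD of the flattened cube recovers those few basis image planes. The toy cube below is constructed to be exactly rank 2; real HS data are only approximately low-rank.

```python
import numpy as np

# Synthetic HS cube that is exactly low-rank in its spectral dimension:
# every pixel spectrum is a combination of K = 2 basis spectra.
rng = np.random.default_rng(3)
K, bands, pixels = 2, 50, 100
basis = rng.random((K, bands))          # basis spectra
weights = rng.random((pixels, K))       # per-pixel combination weights
X = weights @ basis                     # (pixels, bands) flattened cube

# SVD exposes the small number of basis image planes the fusion model uses.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_k = (U[:, :K] * s[:K]) @ Vt[:K, :]    # rank-K reconstruction

print(np.allclose(X, X_k))  # -> True
```

Working in this K-dimensional subspace rather than on hundreds of bands is what makes the fused SR problem tractable and more robust to noise.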
Resolution in target and material identification in remotely sensed
data may be enhanced by the use of spectral information (Rhody, 2002).
The basis for this technique is the fact that HS instruments can gather
high-resolution spectral information, but suffer from low spatial resolution.
Conversely, monochrome or color images that have high spatial resolution
have low spectral resolution. The fusion of the two types of images has
been proposed to produce a data set with higher resolution in both
the spatial and spectral domains than can be obtained from either type
of image alone.
Curvelet transform: An improved method of image fusion is based
on the amélioration de la résolution spatiale par injection
de structures (ARSIS) concept using the curvelet transform (Choi et
al., 2005). Based on the fact that the curvelet transform represents
edges better than wavelets and regarding the importance of edges in image
representation, enhancing spatial resolution has been carried out by means
of enhancing the edges.
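The core ARSIS idea, injecting high-frequency structure from the sharp image into the blurrier band, can be sketched without curvelets; here a simple box-blur high-pass stands in for a curvelet scale, and the "pan" and "band" images are synthetic:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur (crude low-pass stand-in for a curvelet scale)."""
    out = np.zeros_like(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(4)
pan = rng.random((16, 16))        # high-resolution panchromatic image
band = box_blur(pan) + 0.1        # a smoother, offset "spectral band"

# ARSIS-style detail injection: add the pan image's high-frequency
# structure (pan minus its low-pass) into the spectral band.
details = pan - box_blur(pan)
fused = band + details

print(fused.shape)  # -> (16, 16)
```

The curvelet transform replaces this crude low-pass with a multiscale, direction-sensitive decomposition, which is why it preserves edges better during injection.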
MAP estimation method: Another approach is a maximum a posteriori
(MAP) estimation method for enhancing the spatial resolution of an HS
image using a higher resolution coincident panchromatic image (Eismann
and Hardie, 2004). This involves the use of a stochastic mixing model
of the underlying spectral scene content to optimize the estimated HS image.
SVM classification approach: The SVM classification approach in
fusion of HR and HS imagery is also an effective way to produce a data
set that has higher resolution in both the spatial and spectral domains
(Gualtieri and Chettri, 2000). Improving the classification accuracy using
spectrally weighted kernels is also investigated by means of assigning
weights to different bands according to the amount of useful information
they contain (Guo et al., 2005).
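The spectral weighting idea reduces to a per-band-weighted distance inside the kernel. The sketch below shows the effect with made-up spectra and weights; in the cited work the weights are estimated from the data rather than chosen by hand.

```python
import numpy as np

def weighted_rbf(a, b, w, gamma=1.0):
    """Gaussian (RBF) kernel with per-band weights w_i, as in spectrally
    weighted kernels: informative bands receive larger weights."""
    d2 = np.sum(w * (a - b) ** 2)
    return np.exp(-gamma * d2)

# Two toy pixel spectra that differ only in band 2.
a = np.array([0.2, 0.5, 0.9])
b = np.array([0.2, 0.5, 0.1])

uniform = np.ones(3)                  # plain RBF kernel
weighted = np.array([1.0, 1.0, 0.1])  # down-weight the differing band

# Down-weighting a band shrinks its contribution to the kernel distance,
# so the two spectra look more similar under the weighted kernel.
print(weighted_rbf(a, b, uniform) < weighted_rbf(a, b, weighted))  # -> True
```

Plugged into an SVM as a custom kernel, this lets the classifier emphasize bands that carry useful information and suppress noisy ones.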
Non-linear methods: Methods for exploiting the nonlinear structure
of HS imagery were developed and compared against the de facto standard
of linear mixing in a new algorithm (Bachmann et al., 2005). This
approach seeks a manifold coordinate system that preserves geodesic distances
in the high-dimensional HS data space. Algorithms for deriving manifold
coordinates, such as isometric mapping (ISOMAP), have been developed for
other applications. ISOMAP guarantees a globally optimal solution, but
is practical only for small datasets because of its computational and memory
requirements.
Other methods: Optical and structural properties of images, such
as the 3-D shape of an object, regional homogeneity, local variations in scene
reflectivity, etc., are also of importance in HS imaging. Some techniques
are proposed based on generalized interpolation and polarization for separation
of real components from reflected components in an overlapping panchromatic
image that is useful for many applications such as high quality TV camera
images (Ohnishi et al., 1996; Rajan and Chaudhuri, 2001; Schechner
et al., 1999). The underlying theory for the application of anomaly
detection to systems with inherently high dimensionality is outlined in
(Stein et al., 2002). It is demonstrated that the performance improves
with SNR and diminishes with increasing dimension.
Joint Endmember Determination (JEMD) is a technique used to combine
an HR panchromatic image with a low spatial resolution HS image to produce
a product that has the spectral properties of the HS image at a spatial
resolution approaching that of the panchromatic image (Winter and Winter, 2002).
CONCLUSION
Despite the considerable advances in SR of hyperspectral imagery, some
important challenges still lie ahead in developing an SR algorithm capable
of producing high-quality results on general image sequences. The spectral
basis of hyperspectral imagery makes it a powerful tool for material analysis
and target recognition, but the increasing demand for higher spatial resolution
in hyperspectral imagery makes it more challenging. Some of the main challenges
in this field can be categorized as follows:
- In spite of the emphasis of hyperspectral imagery on the
spectral information of the ground cells, much of the research has
focused on spatial resolution enhancement of hyperspectral imaging.
- The computational complexity of SR algorithms is one of the major challenges
restricting us from achieving higher standards in image processing.
- The long learning time required by learning-based methods is a
significant disadvantage that severely limits their application to
the SR problem. Learning-based methods are effective when applied to very
specific scenarios, such as faces or text.
- The inherent variability in target and background spectra is a severe
obstacle to developing effective detection algorithms for hyperspectral
imagery. The use of adaptive algorithms deals quite effectively with
the problem of unknown backgrounds; the lack of sufficient target
data, however, makes the development and estimation of target variability
models difficult.
- One of the main problems encountered in hyperspectral image classification
is the high dimensionality of hyperspectral data; for example, the AVIRIS
hyperspectral sensor has 224 spectral bands.
REFERENCES
Alvarez, L.D., J. Mateos, R. Molina and A.K. Katsaggelos, 2004. High-resolution images from compressed low-resolution video: Motion estimation and observable pixels. Int. J. Image. Syst. Technol., 14: 58-66.
Bachmann, C.M., T.L. Ainsworth and R.A. Fusina, 2005. Exploiting manifold geometry in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens., 43: 441-454.
Baker, S. and T. Kanade, 2002. Limits on super-resolution and how to break them. IEEE Trans. Pattern Anal. Mach. Intell., 24: 1167-1183.
Buttingsrud, B. and B.K. Alsberg, 2006. Super-resolution of hyperspectral images. Chemometrics Intell. Lab. Syst., 84: 62-68.
Capel, D. and A. Zisserman, 2001. Super-resolution from multiple views using learnt image models. Proceedings of the Computer Society Conference on Computer Vision and Pattern Recognition, December 8-14, 2001, IEEE Computer Society, pp: 627-634.
Chalidabhongse, J. and C.C.J. Kuo, 1997. Fast motion vector estimation using multiresolution-spatio-temporal correlation. IEEE Trans. Circuits Syst. VideoTechnol., 7: 477-488.
Choi, E.C., J.S. Choi and M.G. Kang, 2004. Super-resolution approach to overcome physical limitations of imaging sensors. An overview. Int. J. Image. Syst. Technol., 14: 36-46.
Choi, M., R.Y. Kim, M.R. Nam and H.O. Kim, 2005. Fusion of multispectral and panchromatic satellite images using the curvelet transform. IEEE Geosci. Remote Sens. Lett., 2: 136-140.
Eismann, M.T. and R.C. Hardie, 2004. Application of the stochastic mixing model to hyperspectral resolution enhancement. IEEE Trans. Geosci. Remote Sens., 42: 1924-1933.
Farsiu, S., D. Robinson, M. Elad and P. Milanfar, 2004. Advances and challenges in super-resolution. Int. J. Image. Syst. Technol., 14: 47-57.
Freeman, W.T., T.R. Jones and E.C. Pasztor, 2002. Example-based super-resolution. IEEE Comput. Graph. Appl., 22: 56-65.
Gualtieri, J.A. and S. Chettri, 2000. Support vector machines for classification of hyperspectral data. Proceeding of the International Geoscience and Remote Sensing Symposium, July 24-28, 2000, IEEE Xplore London, pp: 813-815.
Guo, B.F., S. Gunn, B. Damper and J. Nelson, 2005. Hyperspectral image fusion using spectrally weighted kernels. Proceedings of the 8th International Conference on Information Fusion, July 25-28, 2005, Philadelphia, PA., pp: 402-408.
Joshi, M.V., L. Bruzzone and S. Chaudhuri, 2006. A model-based approach to multiresolution fusion in remotely sensed images. IEEE Trans. Geosci. Remote Sens., 44: 2549-2562.
Li, R., B. Zeng and M.L. Liou, 1994. A new three-step search algorithm for block motion estimation. IEEE Trans. Circuits Syst. Video Technol., 4: 438-442.
Lu, Y., M. Inamura and M. Del Carmen-Valdes, 2004. Super-resolution of the undersampled and subpixel shifted image sequence by a neural network. Int. J. Image. Syst. Technol., 14: 8-15.
Manolakis, D., D. Marden and G.A. Shaw, 2003. Hyperspectral image processing for automatic target detection applications. Lincoln Lab. J., 14: 79-116.
Marziliano, P. and M. Vetterli, 2000. Reconstruction of irregularly sampled discrete-time bandlimited signals with unknown sampling locations. IEEE Trans. Signal Proc., 48: 3462-3471.
Merino, M.T. and J. Nunez, 2007. Super-resolution of remotely sensed images with variable-pixel linear reconstruction. IEEE Trans. Geosci. Remote Sens., 45: 1446-1457.
Munechika, C.K., J.S. Warnick, C. Salvaggio and J.R. Schott, 1993. Resolution enhancement of multispectral image data to improve classification accuracy. Photogram. Eng. Remote Sens., 59: 67-72.
Ohnishi, N., K. Kumaki, T. Yamamura and T. Tanaka, 1996. Separating real and virtual objects from their overlapping images. Proceedings of the 4th European Conference on Computer Vision Cambridge, UK., April 15-18, 1996, Springer, Berlin, Heidelberg, pp: 636-646.
Omrane, N. and P. Palmer, 2003. Super-resolution using the walsh functions-a new algorithm for image reconstruction. Proceedings of International Conference on Image Processing, ICIP 2003. September 14-17, 2003, IEEE., pp: 299-302.
Pan, M.C., 2002. A novel blind super-resolution technique based on the improved poisson maximum a posteriori algorithm. Int. J. Imag. Syst. Technol., 12: 239-246.
Penn, B.S., 2002. Using simulated annealing to obtain optimal linear end-member mixtures of hyperspectral data. Comput. Geosci., 28: 809-817.
Rajan, D. and S. Chaudhuri, 2001. Generalized interpolation and its application in super-resolution imaging. Image Vision Comput., 19: 957-969.
Rhody, H.E., 2002. Enhancing spatial resolution for exploitation in hyperspectral imagery. Proceedings of the 31st Applied Imagery Pattern Recognition Workshop, October 16-18, 2002, IEEE Xplore London, pp: 19-28.
Robinson, D. and P. Milanfar, 2006. Statistical performance analysis of super-resolution. IEEE Trans. Image Proc., 15: 1413-1428.
Robinson, G., H.N. Gross and J.R. Schott, 2000. Evaluation of two applications of spectral mixing models to image fusion. Remote Sens. Environ., 71: 272-281.
Schechner, Y.Y., J. Shamir and N. Kiryati, 1999. Polarization-based decorrelation of transparent layers: The inclination angle of an invisible surface. Proceedings of the 7th IEEE International Conference on Computer Vision. September 20-27, 1999, IEEE., pp: 814-819.
Stein, D.W.J., S.G. Beaven, L.E. Hoff, E.M. Winter, A.P. Schaum and A.D. Stocker, 2002. Anomaly detection from hyperspectral imagery. IEEE Signal Proc. Maga., 19: 58-69.
Tanaka, M. and M. Okutomi, 2007. A fast algorithm for reconstruction-based superresolution and evaluation of its accuracy. Syst. Comput. Jap., 38: 44-52.
Vandewalle, P., L. Sbaiz, J. Vandewalle and M. Vetterli, 2004. How to take advantage of aliasing in bandlimited signals. Proceedings of the Conference on Acoustics, Speech and Signal Processing, May 17-21, 2004, IEEE Xplore, London, pp: 948-951.
Vandewalle, P., L. Sbaiz, M. Vetterli and S. Susstrunk, 2005. Super-resolution from highly undersampled images. Proceedings of the International Conference on Image Processing, September 11-14, 2005, IEEE., pp: 889-892.
Winter, M.E. and E.M. Winter, 2002. Resolution enhancement of hyperspectral data. Proceeding of the Aerospace Conference, March 9-16, 2002, IEEE Xplore London, pp: 1523-1529.
Zhang, D., H.F. Li and M.H. Du, 2005. Fast MAP-based multiframe super-resolution image reconstruction. Image Vision Comput., 23: 671-679.