Array processing


Array processing is a wide area of research in the field of signal processing that extends from the simplest form of one-dimensional line arrays to two- and three-dimensional array geometries. An array structure can be defined as a set of spatially separated sensors, e.g. radio antenna and seismic arrays. The sensors used for a specific problem may vary widely, for example microphones, accelerometers and telescopes. However, many similarities exist, the most fundamental of which may be an assumption of wave propagation. Wave propagation means there is a systematic relationship between the signals received on spatially separated sensors. By creating a physical model of the wave propagation, or in machine learning applications a training data set, the relationships between the signals received on spatially separated sensors can be leveraged for many applications.
Some common problems solved with array processing techniques include estimating the number of sources, their directions of arrival, and their signal waveforms.
Array processing metrics are often assessed in noisy environments. The noise model may be one of spatially incoherent noise, or one with interfering signals following the same propagation physics. Estimation theory is an important and basic part of the signal processing field, and is used to deal with estimation problems in which the values of several parameters of the system must be estimated from measured/empirical data that has a random component. As the number of applications grows, estimating temporal and spatial parameters becomes more important. Array processing emerged in the last few decades as an active area centered on the ability to use and combine data from different sensors in order to deal with a specific estimation task. In addition to the information that can be extracted from the collected data, the framework takes advantage of prior knowledge about the geometry of the sensor array to perform the estimation task.
Array processing is used in radar, sonar, seismic exploration, anti-jamming and wireless communications. One of the main advantages of using array processing along with an array of sensors is a smaller footprint. The problems associated with array processing include estimating the number of sources, their directions of arrival, and their signal waveforms.
There are four assumptions in array processing. The first assumption is uniform propagation in all directions in an isotropic and non-dispersive medium. The second assumption is that, for far-field array processing, the radius of propagation is much greater than the size of the array, so plane-wave propagation can be assumed. The third assumption is that the noise and signal are zero-mean, white, and uncorrelated. Finally, the last assumption is that there is no coupling between sensors and that the calibration is perfect.

Applications

The ultimate goal of sensor array signal processing is to estimate the values of parameters by using available temporal and spatial information, collected through sampling a wavefield with a set of antennas that have a precise geometry description. The processing of the captured data and information is done under the assumption that the wavefield is generated by a finite number of signal sources, and contains information about signal parameters characterizing and describing the sources. There are many applications related to the above problem formulation, where the number of sources, their directions and locations should be specified. To motivate the reader, some of the most important applications related to array processing will be discussed.
The array processing concept was closely linked to radar and sonar systems, which represent the classical applications of array processing. The antenna array is used in these systems to determine the location of sources, cancel interference, and suppress ground clutter. Radar systems are used basically to detect objects by using radio waves; the range, altitude, speed and direction of objects can be determined. Radar systems started as military equipment and then entered the civilian world. In radar applications, different modes can be used; one of these modes is the active mode, in which the antenna-array-based system radiates pulses and listens for the returns. By using the returns, parameters such as velocity, range and the DOAs of targets of interest can be estimated. Using passive far-field listening arrays, only the DOAs can be estimated. Sonar systems use sound waves that propagate under the water to detect objects on or under the water surface. Two types of sonar systems can be defined: the active one and the passive one. In active sonar, the system emits pulses of sound and listens to the returns, which are used to estimate parameters. In passive sonar, the system is essentially listening for the sounds made by the target objects. It is very important to note the difference between the radar system, which uses radio waves, and the sonar system, which uses sound waves; the reason why sonar uses sound waves is that sound waves travel farther in the water than do radio and light waves. In passive sonar, the receiving array has the capability of detecting distant objects and their locations. Deformable arrays are usually used in sonar systems, where the antenna is typically towed under the water. In active sonar, the sonar system emits sound waves and then listens for and monitors any echoes. The reflected sound waves can be used to estimate parameters such as velocity, position and direction. Difficulties and limitations in sonar systems compared to radar systems arise from the fact that the propagation speed of sound waves under the water is slower than that of radio waves. Another source of limitation is the high propagation losses and scattering. Despite all these limitations and difficulties, sonar remains a reliable technique for range, distance, position and other parameter estimation in underwater applications.
NORSAR is an independent geo-scientific research facility that was founded in Norway in 1968. NORSAR has been working with array processing ever since to measure seismic activity around the globe. They are currently working on an International Monitoring System which will comprise 50 primary and 120 auxiliary seismic stations around the world. NORSAR has ongoing work to improve array processing to improve monitoring of seismic activity not only in Norway but around the globe.
Communication can be defined as the process of exchanging information between two or more parties. The last two decades have witnessed a rapid growth of wireless communication systems. This success is a result of advances in communication theory and in low-power-dissipation design processes. In general, communication can be carried out by technological means through either electrical signals or electromagnetic waves. Antenna arrays have emerged as a support technology to increase spectral usage efficiency and enhance the accuracy of wireless communication systems by utilizing the spatial dimension in addition to the classical time and frequency dimensions. Array processing and estimation techniques have long been used in wireless communication, and during the last decade these techniques were re-explored as ideal candidates for solving numerous problems in wireless communication. In wireless communication, problems that affect the quality and performance of the system may come from different sources. The multiuser (medium multiple access) and multipath (signal propagation over multiple scattering paths in wireless channels) communication model is one of the most widespread communication models in wireless communication.
In a multiuser communication environment, the presence of multiple users increases the possibility of inter-user interference that can adversely affect the quality and performance of the system. In mobile communication systems the multipath problem is one of the basic problems that base stations have to deal with. Base stations have been using spatial diversity to combat fading due to severe multipath. Base stations use an antenna array of several elements to achieve higher selectivity: the receiving array can be directed toward one user at a time, while avoiding the interference from other users.
Array processing techniques have received much attention from medical and industrial applications. In medicine, medical image processing was one of the basic fields to use array processing. Other medical applications that use array processing include disease treatment, tracking waveforms that carry information about the condition of internal organs (e.g. the heart), and localizing and analyzing brain activity by using bio-magnetic sensor arrays.
Speech enhancement and processing represents another field that has been affected by the new era of array processing. Most acoustic front-end systems have become fully automatic systems. However, the operational environment of these systems contains a mix of other acoustic sources; external noises as well as acoustic couplings of loudspeaker signals overwhelm and attenuate the desired speech signal. In addition to these external sources, the strength of the desired signal is reduced due to the relatively large distance between the speaker and the microphones. Array processing techniques have opened new opportunities in speech processing to attenuate noise and echo without degrading the quality of, or adversely affecting, the speech signal. In general, array processing techniques can be used in speech processing to reduce the required computing power and enhance the quality of the system. Representing the signal as a sum of sub-bands and adapting cancellation filters for the sub-band signals can reduce the demanded computation power and lead to a higher-performance system. Relying on multiple input channels also allows designing systems of higher quality compared to systems that use a single channel, and allows solving problems such as source localization, tracking and separation, which cannot be achieved with a single channel.
The astronomical environment contains a mix of external signals and noise that affect the quality of the desired signals. Most array processing applications in astronomy are related to image processing. The array is used to achieve a higher quality that is not achievable with a single channel. The high image quality facilitates quantitative analysis and comparison with images at other wavelengths. In general, astronomy arrays can be divided into two classes: the beamforming class and the correlation class. Beamforming is a signal processing technique that produces summed array beams from a direction of interest; used basically in directional signal transmission or reception, the basic idea is to combine the elements in a phased array such that some signals experience destructive interference and others experience constructive interference. Correlation arrays provide images over the entire single-element primary beam pattern, computed off-line from records of all the possible pairwise correlations between the antennas.
In addition to these applications, many others have been developed based on array processing techniques, such as acoustic beamforming for hearing aids, under-determined blind source separation using acoustic arrays, digital 3D/4D ultrasound imaging arrays, smart antennas, synthetic aperture radar, underwater acoustic imaging, and chemical sensor arrays.

General model and problem formulation

Consider a system that consists of an array of r arbitrary sensors with arbitrary locations and arbitrary directions, which receive signals generated by q narrowband sources of known center frequency ω and locations θ1, θ2, θ3, ..., θq. Since the signals are narrowband, the propagation delay across the array is much smaller than the reciprocal of the signal bandwidth, and it follows that by using a complex envelope representation the array output can be expressed as:

x_i(t) = Σ_{k=1}^{q} a_i(θk) s_k(t) + n_i(t),    i = 1, ..., r

where x_i(t) is the output of the i-th sensor, a_i(θk) is the response of the i-th sensor to the k-th source, s_k(t) is the complex envelope of the signal emitted by the k-th source, and n_i(t) is the noise at the i-th sensor. The same equation can also be expressed in the form of vectors:

x(t) = A(θ) s(t) + n(t)

where A(θ) = [a(θ1), ..., a(θq)] is the r × q matrix of steering vectors a(θk) = [a_1(θk), ..., a_r(θk)]^T. If we assume now that M snapshots are taken at time instants t1, t2, ..., tM, the data can be expressed as:

X = A(θ) S + N

where X and N are r × M matrices and S is q × M:

X = [x(t1), ..., x(tM)],    N = [n(t1), ..., n(tM)],    S = [s(t1), ..., s(tM)]
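As a concrete illustration, the following Python sketch simulates this data model for a uniform linear array (the geometry, element spacing, source angles, noise level and variable names here are assumptions chosen for the example, not part of the formulation above):

```python
import numpy as np

def ula_steering_matrix(angles_rad, r, spacing=0.5):
    """Steering matrix A (r x q) for a uniform linear array with element
    separation given in wavelengths (half-wavelength assumed by default)."""
    elements = np.arange(r).reshape(-1, 1)                        # r x 1
    return np.exp(-2j * np.pi * spacing * elements * np.sin(angles_rad.reshape(1, -1)))

r, q, M = 8, 2, 200                                    # sensors, sources, snapshots
rng = np.random.default_rng(0)
theta = np.deg2rad(np.array([-20.0, 35.0]))            # assumed true DOAs

A = ula_steering_matrix(theta, r)                                 # r x q
S = (rng.standard_normal((q, M)) + 1j * rng.standard_normal((q, M))) / np.sqrt(2)
N = 0.1 * (rng.standard_normal((r, M)) + 1j * rng.standard_normal((r, M))) / np.sqrt(2)
X = A @ S + N                                                     # r x M data matrix

R_hat = (X @ X.conj().T) / M                           # sample spatial covariance matrix
```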
Problem definition
“The target is to estimate the DOAs θ1, θ2, θ3, ..., θq of the sources from the M snapshots of the array, x(t1), ..., x(tM). In other words, we are interested in estimating the DOAs of emitter signals impinging on the receiving array, given a finite data set observed over t = 1, 2, ..., M. This will be done basically by using the second-order statistics of the data.”
In order to solve this problem, do we have to add conditions or assumptions on the operational environment and/or the model used? Since there are many parameters used to specify the system, such as the number of sources and the number of array elements, are there conditions that should be met first? Toward this goal we make the following assumptions:
  1. The number of signals is known and is smaller than the number of sensors, q < r.
  2. The set of any q steering vectors is linearly independent.
  3. Isotropic and non-dispersive medium – Uniform propagation in all directions.
  4. Zero mean white noise and signal, uncorrelated.
  5. Far-Field.
Throughout this survey, it will be assumed that the number of underlying signals, q, in the observed process is considered known. There are, however, good and consistent techniques for estimating this value even if it is not known.

Estimation techniques

In general, parameter estimation techniques can be classified into spectral-based and parametric-based methods. In the former, one forms some spectrum-like function of the parameter of interest; the locations of the highest peaks of this function are taken as the DOA estimates. Parametric techniques, on the other hand, require a simultaneous search for all parameters of interest. The basic advantage of the parametric approach compared to the spectral-based approach is accuracy, albeit at the expense of increased computational complexity.

Spectral–based solutions

Spectral based algorithmic solutions can be further classified into beamforming techniques and subspace-based techniques.

Beamforming technique

The first method used to specify and automatically localize the signal sources using antenna arrays was the beamforming technique. The idea behind beamforming is very simple: steer the array in one direction at a time and measure the output power. The steering locations where we have the maximum power yield the DOA estimates. The array response is steered by forming a linear combination of the sensor outputs.
Approach overview
Given the array output, the beamformer output for a weighting vector F is y(t) = F^H x(t), and the measured output power over the M snapshots is

P(F) = (1/M) Σ_{t=1}^{M} |y(t)|² = F^H R̂_x F

where R̂_x = (1/M) Σ_{t=1}^{M} x(t) x^H(t) is the sample covariance matrix. Different beamforming approaches correspond to different choices of the weighting vector F. The advantages of the beamforming technique are its simplicity and ease of use and understanding; its disadvantage is low resolution.
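A minimal sketch of the conventional (Bartlett) beamformer spectrum, reusing `R_hat`, `r` and the half-wavelength spacing assumed in the model sketch above (these names and choices are illustrative assumptions):

```python
import numpy as np

def bartlett_spectrum(R_hat, r, scan_rad, spacing=0.5):
    """Output power F^H R_hat F with the steering vector a(theta) as the weighting F."""
    powers = []
    for th in scan_rad:
        a = np.exp(-2j * np.pi * spacing * np.arange(r) * np.sin(th))   # steering vector
        powers.append(np.real(a.conj() @ R_hat @ a) / r**2)
    return np.array(powers)

scan = np.deg2rad(np.linspace(-90, 90, 361))
P = bartlett_spectrum(R_hat, r, scan)
# DOA estimates: the angles of the q largest peaks of P
```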

Subspace-based technique

Many spectral methods in the past have called upon the spectral decomposition of a covariance matrix to carry out the analysis. A very important breakthrough came about when the eigen-structure of the covariance matrix was explicitly invoked, and its intrinsic properties were directly used to provide a solution to an underlying estimation problem for a given observed process. A class of spatial spectral estimation techniques is based on the eigen-value decomposition of the spatial covariance matrix. The rationale behind this approach is that one wants to emphasize the choices for the steering vector a which correspond to signal directions. The method exploits the property that the directions of arrival determine the eigen structure of the matrix.
The tremendous interest in subspace-based methods is mainly due to the introduction of the MUSIC (MUltiple SIgnal Classification) algorithm. MUSIC was originally presented as a DOA estimator, and with its later developments it has been successfully brought back to the spectral analysis/system identification problem.
Approach overview
MUSIC spectrum approaches use a single realization of the stochastic process that is represented by the snapshots x(t), t = 1, 2, ..., M. MUSIC estimates are consistent and converge to the true source bearings as the number of snapshots grows to infinity. A basic drawback of the MUSIC approach is its sensitivity to model errors. A costly calibration procedure is required in MUSIC, and it is very sensitive to errors in that procedure. The cost of calibration increases as the number of parameters that define the array manifold increases.
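A minimal MUSIC sketch under the same assumed uniform-linear-array model (reusing `R_hat`, `r` and `q` from the earlier illustrative sketches); the pseudo-spectrum 1 / (a^H E_n E_n^H a) peaks at the source directions:

```python
import numpy as np

def music_spectrum(R_hat, r, q, scan_rad, spacing=0.5):
    """MUSIC pseudo-spectrum using the noise subspace E_n of the sample covariance."""
    eigvals, eigvecs = np.linalg.eigh(R_hat)          # eigenvalues in ascending order
    E_n = eigvecs[:, : r - q]                         # r - q smallest: noise subspace
    spectrum = []
    for th in scan_rad:
        a = np.exp(-2j * np.pi * spacing * np.arange(r) * np.sin(th))
        spectrum.append(1.0 / (np.linalg.norm(E_n.conj().T @ a) ** 2))
    return np.array(spectrum)

scan = np.deg2rad(np.linspace(-90, 90, 721))
P_music = music_spectrum(R_hat, r, q, scan)
# The q highest peaks of P_music give the DOA estimates
```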

Parametric–based solutions

While the spectral-based methods presented in the previous section are computationally attractive, they do not always yield sufficient accuracy. In particular, in cases with highly correlated signals the performance of spectral-based methods may be insufficient. An alternative is to exploit the underlying data model more fully, leading to so-called parametric array processing methods. The cost of using such methods to increase accuracy is that the algorithms typically require a multidimensional search to find the estimates. The most commonly used model-based approach in signal processing is the maximum likelihood (ML) technique. This method requires a statistical framework for the data-generation process. When applying the ML technique to the array processing problem, two main methods have been considered, depending on the assumed signal data model. In the stochastic ML, the signals are modeled as Gaussian random processes. In the deterministic ML, on the other hand, the signals are considered unknown, deterministic quantities that need to be estimated in conjunction with the directions of arrival.

Stochastic ML approach

The stochastic maximum likelihood method is obtained by modeling the signal waveforms as Gaussian random processes, under the assumption that the process x(t) is a stationary, zero-mean Gaussian process that is completely described by its second-order covariance matrix. This model is reasonable if the measurements are obtained by filtering wide-band signals through a narrow band-pass filter.
Approach overview
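The detailed derivation is not reproduced here; a commonly cited form of the stochastic ML criterion (a standard-literature sketch, with R̂ the sample covariance and P the signal covariance estimated jointly with θ and σ²) is:

```latex
% Stochastic ML: modeled covariance and negative log-likelihood (standard form)
R(\theta, P, \sigma^2) = A(\theta)\, P\, A^{H}(\theta) + \sigma^{2} I
\qquad
\hat{\theta}_{\mathrm{SML}} = \arg\min_{\theta, P, \sigma^2}
\left\{ \log \det R(\theta, P, \sigma^2)
      + \operatorname{tr}\!\left( R^{-1}(\theta, P, \sigma^2)\, \hat{R} \right) \right\}
```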

Deterministic ML approach

While the background and receiver noise in the assumed data model can be thought of as emanating from a large number of independent noise sources, the same is usually not the case for the emitter signals. It therefore appears natural to model the noise as a stationary Gaussian white random process whereas the signal waveforms are deterministic and unknown. According to the Deterministic ML the signals are considered as unknown, deterministic quantities that need to be estimated in conjunction with the direction of arrival. This is a natural model for digital communication applications where the signals are far from being normal random variables, and where estimation of the signal is of equal interest.
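A sketch of the resulting criterion in its commonly cited concentrated form (with R̂ the sample covariance; this form is taken from the standard literature rather than from a derivation given in this article):

```latex
% Deterministic ML: least-squares fit with the signal waveforms concentrated out
\hat{\theta}_{\mathrm{DML}} = \arg\min_{\theta}
\operatorname{tr}\!\left( P_{A}^{\perp}(\theta)\, \hat{R} \right),
\qquad
P_{A}^{\perp}(\theta) = I - A(\theta)\left(A^{H}(\theta)\, A(\theta)\right)^{-1} A^{H}(\theta)
```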

Correlation spectrometer

The problem of computing pairwise correlation as a function of frequency can be solved in two mathematically equivalent but distinct ways. By using the Discrete Fourier Transform, it is possible to analyze signals in the time domain as well as in the spectral domain. The first approach is "XF" correlation, because it first cross-correlates antennas using a time-domain "lag" convolution and then computes the spectrum for each resulting baseline. The second approach, "FX", takes advantage of the fact that convolution is equivalent to multiplication in the Fourier domain: it first computes the spectrum for each individual antenna and then multiplies all antenna pairs for each spectral channel. An FX correlator has an advantage over an XF correlator in that its transform stage has a computational complexity of O(N log N) rather than O(N²). Therefore, FX correlators are more efficient for larger arrays.
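A minimal FX-style sketch in Python (block length, antenna count and variable names are illustrative assumptions): each antenna stream is split into blocks, Fourier transformed, and then multiplied pairwise per spectral channel.

```python
import numpy as np

def fx_cross_spectra(voltages, n_chan=256):
    """FX correlation: FFT each antenna's stream ("F"), then multiply antenna
    pairs channel by channel and average over blocks ("X").
    voltages: (n_ant, n_samples) complex array -> (n_ant, n_ant, n_chan) spectra."""
    n_ant, n_samp = voltages.shape
    n_blocks = n_samp // n_chan
    blocks = voltages[:, : n_blocks * n_chan].reshape(n_ant, n_blocks, n_chan)
    spectra = np.fft.fft(blocks, axis=-1)                         # per-antenna FFT
    return np.einsum('aik,bik->abk', spectra, spectra.conj()) / n_blocks

rng = np.random.default_rng(1)
v = rng.standard_normal((2, 4096)) + 1j * rng.standard_normal((2, 4096))
S_pair = fx_cross_spectra(v)      # S_pair[0, 1] is the cross-spectrum of antennas 0 and 1
```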
Correlation spectrometers like the Michelson interferometer vary the time lag between signals to obtain the power spectrum of the input signals. The power spectrum S_x(ω) of a signal is related to its autocorrelation function by a Fourier transform:

S_x(ω) = ∫ r_x(τ) e^{−iωτ} dτ

where the autocorrelation function r_x(τ) for signal x as a function of the time delay τ is

r_x(τ) = ⟨ x(t) x*(t − τ) ⟩

Cross-correlation spectroscopy with spatial interferometry is possible by simply substituting a second voltage signal y(t − τ) for x(t − τ) in the equation above, producing the cross-correlation r_xy(τ) and the cross-spectrum S_xy(ω).
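A small numerical illustration of this power-spectrum/autocorrelation pair (a finite-length discrete estimate, not the ideal infinite-time average):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1024)

# Autocorrelation estimate r_x[tau] from the full linear correlation of x with itself
r_x = np.correlate(x, x, mode='full') / len(x)

# Power spectrum as the Fourier transform of the autocorrelation estimate
S_from_r = np.abs(np.fft.fft(r_x, 2 * len(x)))

# Direct periodogram |X(f)|^2 / N for comparison; the magnitudes match
S_direct = np.abs(np.fft.fft(x, 2 * len(x))) ** 2 / len(x)
```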

Example: spatial filtering

In radio astronomy, RF interference must be mitigated to detect and observe any meaningful objects and events in the night sky.

Projecting out the interferer

For an array of radio telescopes observing in the presence of an interfering source whose spatial signature a is not a known function of the direction of interference and varies with time, the signal covariance matrix takes the form:

R = R_v + σ_s² a a^H + σ_n² I

where R_v is the visibilities covariance matrix, σ_s² is the power of the interferer, σ_n² is the noise power, and (·)^H denotes the Hermitian transpose. One can construct a projection matrix

P_a^⊥ = I − a (a^H a)^{−1} a^H

which, when left and right multiplied by the signal covariance matrix, will reduce the interference term to zero.
So the modified signal covariance matrix becomes:

R̃ = P_a^⊥ R P_a^⊥ = P_a^⊥ R_v P_a^⊥ + σ_n² P_a^⊥

Since a is generally not known, P_a^⊥ can be constructed using the eigen-decomposition of R, in particular the matrix containing an orthonormal basis of the noise subspace, which is the orthogonal complement of a. The disadvantages of this approach include altering the visibilities covariance matrix and coloring the white noise term.
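A minimal sketch of this projection step in Python (the function name and the single-interferer assumption are illustrative):

```python
import numpy as np

def project_out_interferer(R):
    """Project a single dominant interferer out of a covariance matrix R.
    The interferer's spatial signature is approximated by the dominant eigenvector."""
    eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    u1 = eigvecs[:, -1:]                            # dominant eigenvector (signature estimate)
    P = np.eye(R.shape[0]) - u1 @ u1.conj().T       # projection onto its orthogonal complement
    return P @ R @ P                                # interference term is driven to zero
```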

Spatial whitening

This scheme attempts to make the interference-plus-noise term spectrally white. To do this, the signal covariance matrix is left and right multiplied with inverse square root factors of the interference-plus-noise term R_in = σ_s² a a^H + σ_n² I.
The calculation requires rigorous matrix manipulations, but results in an expression of the form:

R̃ = R_in^{−1/2} R R_in^{−1/2} = R_in^{−1/2} R_v R_in^{−1/2} + I
This approach requires much more computationally intensive matrix manipulations, and again the visibilities covariance matrix is altered.
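A sketch of the whitening step, assuming the interferer signature and the powers have already been estimated (the argument names are illustrative; a_hat is an r × 1 column vector):

```python
import numpy as np

def whiten_covariance(R, a_hat, sigma_s2, sigma_n2):
    """Left/right multiply R with the inverse square root of the
    interference-plus-noise covariance R_in = sigma_s2 * a a^H + sigma_n2 * I."""
    r = R.shape[0]
    R_in = sigma_s2 * (a_hat @ a_hat.conj().T) + sigma_n2 * np.eye(r)
    w, V = np.linalg.eigh(R_in)                           # R_in is Hermitian positive definite
    R_in_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T
    return R_in_inv_sqrt @ R @ R_in_inv_sqrt              # whitened covariance
```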

Subtraction of interference estimate

Since a is unknown, the best estimate of it is the dominant eigenvector u_1 of the eigen-decomposition of R, and likewise the best estimate of the interference power is σ̂_s² = λ_1 − σ_n², where λ_1 is the dominant eigenvalue of R. One can then subtract the interference term from the signal covariance matrix:

R̃ = R − σ̂_s² u_1 u_1^H
By right and left multiplying R:

R̃ = (I − α u_1 u_1^H) R (I − α u_1 u_1^H)

where the interference term is removed by selecting the appropriate α. This scheme requires an accurate estimate of the interference term, but does not alter the noise or source terms.
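A sketch of the rank-one subtraction using the dominant eigenpair (assuming the noise power sigma_n2 is known or separately estimated):

```python
import numpy as np

def subtract_interference(R, sigma_n2):
    """Subtract a rank-one estimate of the interference from the covariance matrix R."""
    eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    lam1 = eigvals[-1]                              # dominant eigenvalue
    u1 = eigvecs[:, -1:]                            # dominant eigenvector (signature estimate)
    sigma_s2_hat = lam1 - sigma_n2                  # estimated interference power
    return R - sigma_s2_hat * (u1 @ u1.conj().T)
```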

Summary

Array processing techniques represent a breakthrough in signal processing. Many applications and problems that are solvable using array processing techniques have been introduced, and within the next few years the number of applications that include some form of array signal processing will only increase. It is highly expected that the importance of array processing will grow as automation becomes more common in industrial environments and applications; further advances in digital signal processing and digital signal processing systems will also support the high computation requirements demanded by some of the estimation techniques.
In this article we emphasized the importance of array processing by listing the most important applications that include some form of array processing technique. We briefly described the different classifications of array processing, namely spectral- and parametric-based approaches. Some of the most important algorithms were covered, and the advantages and disadvantages of these algorithms were also explained and discussed.