Blind Source Separation

1. Complex Independent Component Analysis:
In recent years the problem of blind source separation, addressed through Independent Component Analysis (ICA), has been extended to signals defined in the complex domain. This extension is motivated by the need to work in the frequency domain, which is very common in signal processing, telecommunications and biomedical applications. One of the most critical points in complex ICA is matching the complex activation function used in the de-mixing network to the profile of the probability density function of the sources. Within this research theme, a flexible class of algorithms has been introduced that uses a pair of one-dimensional or two-dimensional functions, representing the real and imaginary parts of the activation function respectively, whose shapes can be adapted during the learning process [J2, B1, T2]. The complex nature of the signals requires ad-hoc optimization techniques, such as the use of Riemannian metrics based on the natural gradient, which were analyzed in [B2].
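As a minimal sketch of this family of approaches, the following example separates two complex mixtures with a natural-gradient update and a fixed "split" activation applied separately to the real and imaginary parts (the flexible, adaptable activations of [J2, B1, T2] are replaced here by a fixed tanh pair; the mixing matrix, source distributions and step size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two super-Gaussian complex sources (Laplacian real and imaginary parts).
s = rng.laplace(size=(2, n)) + 1j * rng.laplace(size=(2, n))
A = np.array([[1.0 + 0.5j, 0.6 - 0.2j],      # hypothetical mixing matrix
              [0.3 + 0.4j, 1.0 - 0.3j]])
x = A @ s

# Second-order whitening as a standard preprocessing step.
C = (x @ x.conj().T) / n
d, E = np.linalg.eigh(C)
V = E @ np.diag(d ** -0.5) @ E.conj().T
z = V @ x

# Natural-gradient ICA with a "split" activation acting separately
# on the real and imaginary parts of the estimated sources.
W = np.eye(2, dtype=complex)
mu = 0.1
for _ in range(300):
    y = W @ z
    phi = np.tanh(y.real) + 1j * np.tanh(y.imag)
    W = W + mu * (np.eye(2) - (phi @ y.conj().T) / n) @ W

# Global system: ideally a scaled permutation matrix.
G = W @ V @ A
```

After convergence, each row of the global matrix `G` is dominated by a single entry, i.e. each output recovers one source up to the usual scaling and permutation ambiguities of ICA.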

2. Complex Nonlinear Independent Component Analysis:
Unfortunately, linear mixing models are unrealistic in many applications where nonlinear distortions arise from non-ideal sensors. In particular, the Post-Nonlinear (PNL) model, a subset of nonlinear mixing models for which the existence and uniqueness of the solution are guaranteed, has been extended to the complex domain [J1, T2]. In this case too, the activation functions, as well as the functions compensating the nonlinear distortions, are implemented as flexible functions whose shape can be changed during the learning process.
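As a toy illustration of the PNL structure (not of the learning algorithms in [J1, T2], which adapt the compensating functions), the following sketch applies a known invertible distortion after a linear mixture and shows that the exact compensating function restores the linear model; real-valued signals and a fixed tanh distortion are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
s = rng.uniform(-1.0, 1.0, size=(2, n))   # bounded sources
A = np.array([[0.9, 0.4],
              [0.3, 0.8]])                # hypothetical mixing matrix

v = A @ s          # linear mixture (not observable)
x = np.tanh(v)     # PNL observation: component-wise sensor distortion

# Applying the exact inverse distortion recovers the linear mixture,
# reducing the problem to ordinary linear ICA.
g = np.arctanh(x)
```

In the PNL algorithms themselves the compensating function `g` is of course unknown and is estimated jointly with the de-mixing matrix, using flexible parametric functions.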

3. Separation of speech signals in reverberant environments:
The fundamental problem is the computational complexity involved in handling the very long impulse responses that describe real environments characterized by high reverberation. The deconvolution filter must be long enough to cover the tail of the typical room reverberation, yet a very long window fails to capture the nonstationarity of the signal and makes it difficult to estimate the statistical properties that are essential for good separation. This is due to the "summing effect" of time-frequency transformations, which produces samples with more Gaussian statistics. In this research, a particular de-mixing architecture is proposed that reduces the computational cost by adopting a block-processing procedure: an efficient implementation is obtained by partitioning the convolution into a sum of shorter convolutions, each involving a smaller number of terms, which addresses both the computational cost and the problem of capturing nonstationarities [B3, C8]. This greatly reduces the algorithm's latency, namely the long delay between input and output, making the approach suitable for real-time applications such as hands-free communication systems. A methodology based on estimating the time delays between the sensors and the different sources (DOA) was also proposed in order to solve the permutation ambiguity [B4]. A further improvement in convergence speed can be obtained by inserting into the algorithm a priori information given by the environmental geometry, described in terms of Common Acoustical Poles [B5].
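The partitioning idea can be illustrated with a short numerical sketch (block and signal lengths are arbitrary): a long convolution is rewritten as a sum of shorter, delayed convolutions, which together give exactly the full result, and in practice each short convolution can then be computed efficiently in the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)    # input signal
h = rng.standard_normal(256)     # long "room" impulse response
B = 64                           # partition (block) length

full = np.convolve(x, h)         # reference: one long convolution

# Partitioned convolution: each block of h contributes a short
# convolution, delayed by the block's starting index.
y = np.zeros(len(x) + len(h) - 1)
for k in range(0, len(h), B):
    part = np.convolve(x, h[k:k + B])
    y[k:k + len(part)] += part
```

The decomposition is exact, and because each partition can be processed as soon as its input block is available, the input-output delay drops from the full filter length to a single block.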



Acoustic Echo Cancellation

Acoustic echo cancellation is an important topic in teleconferencing. Particularly difficult is the stereophonic case, where the length of the filters used in the echo canceller must satisfy two conflicting conditions. To avoid this problem while keeping a reasonable filter length, an algorithm was proposed that exploits a priori information given by the environmental geometry, described in terms of Common Acoustical Poles [C7]. Even more critical are situations with nonlinear distortions due to low-cost amplifiers and loudspeakers: to deal with these distortions, a series of algorithms has been proposed, ranging from Wiener- or Hammerstein-type systems using flexible nonlinear functions [C9, C13] to Functional Links [C12, C14].
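As a baseline sketch of the adaptive cancellation setting, the following example uses a plain linear NLMS canceller on a simulated echo path (this is not the Hammerstein or Functional-Link schemes of [C9, C12-C14]; the echo path, step size and filter length are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20000
x = rng.standard_normal(N)                                 # far-end signal
h = rng.standard_normal(32) * np.exp(-0.2 * np.arange(32)) # echo path
d = np.convolve(x, h)[:N]                                  # echo at microphone

L = 32                     # adaptive filter length (matches the echo path)
w = np.zeros(L)
mu, eps = 0.5, 1e-6        # NLMS step size and regularization
e = np.zeros(N)            # residual echo
for t in range(L, N):
    u = x[t - L + 1:t + 1][::-1]          # most-recent-first tap vector
    e[t] = d[t] - w @ u                    # cancellation error
    w += mu * e[t] * u / (u @ u + eps)     # normalized LMS update
```

In the cited works, this purely linear filter is augmented with nonlinear blocks (flexible nonlinearities or functional-link expansions of the input) so that the canceller can also model the distortions introduced by low-cost amplifiers and loudspeakers.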


Sound Source Localization

The problem of localizing sound sources in reverberant environments has been tackled with different algorithms, but some ambiguity due to the effect of reverberation always remains. Particularly effective is the Linear Intersection (LI) technique, based on Time Delay Estimation. Unfortunately, the LI technique has a high computational cost in real environments. For this reason, a simplified LI technique was proposed in which only a subset of the intersections is considered [C11]. With this technique it was possible to obtain good performance at an acceptable computational cost.
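Since LI builds on Time Delay Estimation, a minimal sketch of a standard delay estimator may clarify the underlying step. The example below uses GCC-PHAT, a generic technique and not the LI method of [C11] itself; the signal, delay and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4096
s = rng.standard_normal(n)        # broadband source signal
delay = 12                        # true inter-microphone delay (samples)
x1 = s + 0.05 * rng.standard_normal(n)
x2 = np.concatenate([np.zeros(delay), s])[:n] + 0.05 * rng.standard_normal(n)

# GCC-PHAT: whiten the cross-spectrum, then locate the correlation peak.
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
G = X2 * np.conj(X1)
G /= np.abs(G) + 1e-12                            # phase transform
cc = np.fft.irfft(G, n)
cc = np.concatenate([cc[-n // 2:], cc[:n // 2]])  # center zero lag
est_delay = int(np.argmax(cc)) - n // 2
```

In an LI-style localizer, pairwise delay estimates of this kind from several microphone pairs are converted into bearing lines whose (approximate) intersections give the source position.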


Adaptive Filtering

Coming soon....