Blending Independent Components and
Principal Components Analysis
2.8 Gradient ascent
All of the above approaches require us to maximise (or
minimise) some function (the kurtosis, the log likelihood, the Kolmogorov
complexity etc.) with respect to different unmixing vectors (or unmixing
matrices, i.e. for several unmixing vectors simultaneously). Whilst
brute force could be applied to simple problems, it rapidly becomes
impractical as the number of signals increases. Instead, we typically use gradient
ascent, in which we head up the (possibly hyper-dimensional) surface formed
by plotting the value of the function for different unmixing vectors in the
direction of steepest ascent. The direction of steepest ascent can be found
from the first partial derivative of the function with respect to the different
components of the unmixing vector/matrix. Second-order methods can be used to
estimate how far to move along that gradient before re-evaluating the function
and its derivatives; see, e.g., Press et
al. (2007).
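As a concrete illustration, the loop below sketches gradient ascent on the kurtosis of a projection y = w.z, for a single unmixing vector w. The data (two Laplacian sources mixed by a fixed matrix), the step size, and the iteration count are all assumptions made for this example; the mixtures are whitened first so that w can simply be kept on the unit sphere, in which case maximising kurtosis reduces to maximising E[y^4], whose gradient with respect to w is 4 E[y^3 z].

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (an assumption for illustration): two independent Laplacian
# sources mixed by a fixed matrix A, as in the blind source separation setup.
n = 5000
s = rng.laplace(size=(2, n))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
x = A @ s

# Whiten the mixtures so that E[y^2] = 1 whenever |w| = 1.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ x

# Gradient ascent on E[y^4] for y = w.z, with w re-normalised to the
# unit sphere after each step.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.1                          # fixed step size (an assumption)
for _ in range(500):
    y = w @ z
    grad = 4.0 * (z @ y**3) / n    # steepest-ascent direction, 4 E[y^3 z]
    w = w + eta * grad
    w /= np.linalg.norm(w)         # re-impose the unit-norm constraint

y = w @ z
print("kurtosis of extracted signal:", np.mean(y**4) - 3.0)
```

Because the Laplacian sources are super-Gaussian (positive kurtosis), the maxima of kurtosis over unit vectors w sit at the individual source directions, so the extracted signal ends up proportional to one of the original sources. A practical method would replace the fixed step size with a line search or the second-order step sizes mentioned above.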