
Blending Independent Components and Principal Components Analysis


 

4.3          Time-varying volatility

 

4.3.1      Introduction

 

Time-varying volatility is an example of a phenomenon that cannot easily be modelled via an approach involving linear combination mixtures. Instead, it can be thought of as an example of a distributional mixture as referred to in Section 2.2. This is because we can characterise a world that exhibits time-varying volatility as one in which returns come from different distributions (the distributions differing in their variances) depending on when the return occurs. If a return occurs at a time when ‘volatility’ is high then, all other things being equal, we would expect it to be more spread out than when ‘volatility’ is low. Another name for time-varying volatility is heteroscedasticity.

 

It is widely accepted within the financial services industry and within relevant academic circles that markets exhibit time-varying volatility. Markets can, for possibly extended periods of time, appear quite ‘quiet’, e.g. without many large daily movements, but then move to a different regime in which, say, daily movements are more significant. However, volatility does not necessarily move in tandem across markets or even across parts of the same market.

 

We set out below some ideas for how the blending of PCA and ICA might be refined to cater for time-varying volatility and also some of the challenges that might arise in practice.

 

There are several possible ways of catering for time-varying volatility. One approach would be to assume that the market (or sub-components of it) might be in two or more relatively stable discrete ‘regimes’, the regimes being differentiated by some postulated underlying state variable that reveals itself through the level of volatility that a market is exhibiting. Probabilities of movement between the different regimes might then be built up, most probably incorporating some sort of autoregressive characteristics as in threshold autoregressive time series models.
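By way of illustration only, the following Python sketch simulates returns whose volatility follows a simple two-state Markov chain. The transition probabilities and regime volatilities are purely hypothetical, and the sketch omits the autoregressive refinements referred to above.

import numpy as np

# Two hidden regimes: 'quiet' (state 0) and 'noisy' (state 1).
# Transition probabilities and regime volatilities are illustrative assumptions only.
rng = np.random.default_rng(0)
transition = np.array([[0.98, 0.02],    # from quiet: stay quiet / switch to noisy
                       [0.05, 0.95]])   # from noisy: switch to quiet / stay noisy
regime_vol = np.array([0.01, 0.03])     # assumed daily volatility in each regime

state = 0
states, returns = [], []
for _ in range(1000):
    state = rng.choice(2, p=transition[state])          # move (or not) between regimes
    states.append(state)
    returns.append(rng.normal(0.0, regime_vol[state]))  # draw a return from that regime
states, returns = np.array(states), np.array(returns)
print(returns[states == 0].std(), returns[states == 1].std())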

 

A perhaps simpler approach is to assume that there is some underlying (and relatively slowly changing) continuous variable characterising the variability of the market or of a segment of it, which we might estimate at any particular point in time by applying moving average techniques to recent past observations. This sort of approach is in effect the one used in Kemp (2009) and Kemp (2010). It is also the one implicit in GARCH models and their variants. We also immediately recognise a link with the complexity pursuit variant of ICA described in Section 2.7, which likewise focused on moving averages as a sign of ‘predictability’ of a time-ordered series. It would be possible to use a moving average that applied equal weights to observations within a fixed-length window. However, as in Section 2.7, it might be preferable to focus on an exponentially damped moving average, potentially allowing flexibility in the decay factor involved.
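As a minimal sketch of this idea, the Python function below computes an exponentially damped moving average volatility estimate from a single return series. The decay factor of 0.94 and the crude initialisation using the first squared return are assumptions made for illustration, not values prescribed by the approach described above.

import numpy as np

def ewma_volatility(returns, decay=0.94):
    # Exponentially damped moving-average volatility estimate.
    # returns : 1-d array of past returns, oldest first
    # decay   : decay factor; values closer to 1 give longer memory
    returns = np.asarray(returns, dtype=float)
    var = np.empty_like(returns)
    var[0] = returns[0] ** 2                      # crude initialisation
    for t in range(1, len(returns)):
        var[t] = decay * var[t - 1] + (1.0 - decay) * returns[t] ** 2
    return np.sqrt(var)                           # EWMA standard deviation at each date

# Example: simulated returns shifting from a 'quiet' to a 'noisy' regime
rng = np.random.default_rng(0)
r = np.concatenate([rng.normal(0.0, 0.01, 250), rng.normal(0.0, 0.03, 250)])
vol = ewma_volatility(r)
print(vol[240], vol[499])                         # estimate before and after the shift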

 

We should then bear in mind that there are several possible ways in which we might define time-varying ‘variability’ (even in the context of blind source separation, where there is no obvious differentiator between individual output signals, here return series). We can think of any particular return series as possessing its own individual volatility which is somehow evolving through time. The average of these individual time-evolving volatilities, i.e. average volatility, might itself also be evolving through time, possibly in a more reliably predictable manner given the greater number of data points contributing to its calculation. However, we can also characterise the ensemble of return series as exhibiting a potentially time-varying cross-sectional volatility. Own/market average volatility and cross-sectional volatility in effect characterise different parts of the covariance matrix between different stocks. The former corresponds to the elements along the leading diagonal or their average (the ‘variance’ terms), whilst the latter corresponds to an average of the off-diagonal elements (the ‘covariance’ terms). When we talk about ‘average’ stock correlation, some of the same topics arise; see e.g. Measuring Average Stock Correlation.
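To make the covariance matrix interpretation concrete, the sketch below contrasts the average of the diagonal (‘variance’) terms and the average of the off-diagonal (‘covariance’) terms of a sample covariance matrix with the average cross-sectional standard deviation. The simulated data, with one common ‘market’ driver plus stock-specific noise, is an illustrative assumption only.

import numpy as np

def variability_components(returns):
    # returns : (T, N) array of returns, T time periods by N stocks
    cov = np.cov(returns, rowvar=False)                 # N x N sample covariance matrix
    n = cov.shape[0]
    avg_variance = np.trace(cov) / n                    # average of the 'variance' terms
    avg_covariance = (cov.sum() - np.trace(cov)) / (n * (n - 1))  # average 'covariance' term
    cross_sectional = returns.std(axis=1, ddof=1)       # cross-sectional std dev each period
    return avg_variance, avg_covariance, cross_sectional.mean()

rng = np.random.default_rng(1)
market = rng.normal(0.0, 0.01, (500, 1))                # common 'market' driver
specific = rng.normal(0.0, 0.02, (500, 50))             # stock-specific noise
print(variability_components(market + specific))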

 

4.3.2      Higher moments

 

If a partial parameterisation of the evolution of ‘variability’ through time can include elements bearing the hallmarks of the structure of a covariance matrix, then a more complete parameterisation might introduce further elements akin to the higher moment structure of a multidimensional probability distribution. However, such a model would probably be rather sophisticated and would lack parsimony, so it may be better to limit ourselves to models that incorporate one of three types of time-varying volatility adjustment, namely ones involving the following (a sketch of all three is given after the list):

 

(a)    An exponentially weighted moving average estimate of volatility for a given individual stock (measured, say, by the standard deviation of past returns for that stock in isolation, with greater weight given to more recent observations);

 

(b)   An exponentially weighted moving average estimate of volatility for the average of all stocks (calculated, say, in a manner similar to (a) but applied to the average return for the market as a whole); and

 

(c)    An exponentially weighted moving average estimate of cross-sectional volatility between stocks (measured, say, by calculating for each time period the cross-sectional standard deviation of returns across the stock universe and then determining a suitable exponentially weighted moving average through time of these standard deviations).
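A sketch of how adjustments (a) to (c) might be computed is set out below. The recursive EWMA helpers, the decay factor of 0.94 and the use of an equally weighted average as the ‘market’ return are illustrative assumptions rather than the only possible choices.

import numpy as np

def ewma_var(series, decay):
    # Recursive EWMA of squared values (a variance estimate); series is oldest first
    series = np.asarray(series, dtype=float)
    v = series[0] ** 2
    for x in series[1:]:
        v = decay * v + (1.0 - decay) * x ** 2
    return v

def ewma_mean(series, decay):
    # Recursive EWMA of the values themselves; series is oldest first
    series = np.asarray(series, dtype=float)
    m = series[0]
    for x in series[1:]:
        m = decay * m + (1.0 - decay) * x
    return m

def volatility_adjustments(returns, decay=0.94):
    # returns : (T, N) array of returns, T time periods by N stocks
    # (a) EWMA volatility of each individual stock's own returns
    stock_vols = np.sqrt([ewma_var(returns[:, j], decay) for j in range(returns.shape[1])])
    # (b) EWMA volatility of the (equally weighted) average return across all stocks
    market_vol = np.sqrt(ewma_var(returns.mean(axis=1), decay))
    # (c) EWMA through time of each period's cross-sectional standard deviation
    cross_sec_vol = ewma_mean(returns.std(axis=1, ddof=1), decay)
    return stock_vols, market_vol, cross_sec_vol

rng = np.random.default_rng(2)
r = rng.normal(0.0, 0.02, (500, 30))
a, b, c = volatility_adjustments(r)
print(a.mean(), b, c)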

 

4.3.3      Contemporaneous estimates of future volatility

 

It is also worth bearing in mind that contemporaneous estimates of future volatility may be available to us that are more reliable than exponentially weighted moving averages of past data. For example, we might be able to source market implied volatilities (and correlations) from options markets. The relevance of this sort of data to risk model design is discussed further in Kemp (2009), as is the more fundamental question of whether, when trying to measure risk, it is better to use ‘market implied’ probabilities of occurrence rather than, or in addition to, estimated ‘real world’ probabilities of occurrence. An introduction to how it is possible to calibrate probability distributions used for risk measurement purposes to market implied data is given here.

 

