
Quantitative Return Forecasting

6. Neural networks


 

6.1          Mathematicians first realised the fundamental limitations of traditional time series analysis two or three decades ago. This coincided with a period when computer scientists were particularly enthusiastic about the prospects of developing artificial intelligence. The combination led to the development of neural networks.

 

6.2          A neural network is a mathematical algorithm that takes a series of inputs and produces some output dependent on these inputs. The inputs cascade through a series of steps that are conceptually modelled on the apparent behaviour of neurons in the brain. Each step (‘neuron’) takes as its input signals one or more of the input feeds (and potentially one or more of the output signals generated by other steps), and generates an output signal that typically involves a non-linear function of the inputs (e.g. a logistic function). Usually some of the steps are intermediate, i.e. ‘hidden’ neurons whose outputs feed other neurons rather than forming part of the network’s final output.
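
To make the cascade concrete, here is a minimal sketch in Python of a small feed-forward network with one layer of hidden neurons and logistic activations. The layer sizes, weights and inputs below are arbitrary illustrations, not anything specified in the text.

    import numpy as np

    def logistic(x):
        # Logistic 'activation': the non-linear function applied within each neuron
        return 1.0 / (1.0 + np.exp(-x))

    def feed_forward(x, W1, b1, W2, b2):
        # Intermediate ('hidden') neurons: weighted sums of the input feeds,
        # passed through the non-linear function
        hidden = logistic(W1 @ x + b1)
        # The output neuron combines the hidden signals in the same way
        return logistic(W2 @ hidden + b2)

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)                                # three input feeds
    W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # four hidden neurons
    W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)  # one output signal
    print(feed_forward(x, W1, b1, W2, b2))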

 

6.3          Essentially any function of the input data can be replicated by a sufficiently complicated neural network, so it is not enough merely to devise a single neural network. One approach is to create many alternative candidate networks and then use an evolutionary or genetic algorithm to work out which is best suited to a particular problem. More usually, you define a much narrower class of neural networks that is suitably parameterised (perhaps even just one class, with a fixed number of neurons and predefined linkages between them, but with the non-linear functions within each neuron parameterised in a suitable fashion). You then train the network on historic data, adopting a training algorithm that you hope will home in on parameter values likely to work well when attempting to predict the future.
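
As a sketch of this ‘train on historic data’ step, the following Python fragment fits the simplest possible parameterised network (a single logistic neuron) to predict the sign of the next return from three lagged returns, using gradient descent. The data is synthetic and the architecture, loss and learning rate are arbitrary choices for illustration, not the text’s.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical 'historic data': lagged returns as inputs, the sign of
    # the next return (up = 1, down = 0) as the target to predict
    returns = rng.normal(0.0, 0.01, size=500)
    X = np.column_stack([returns[2:-1], returns[1:-2], returns[:-3]])
    y = (returns[3:] > 0).astype(float)

    def logistic(z):
        return 1.0 / (1.0 + np.exp(-z))

    # A narrowly parameterised 'class' of networks: fixed architecture,
    # with free parameters w, b chosen by the training algorithm
    w = np.zeros(3)
    b = 0.0
    lr = 0.5
    for _ in range(2000):
        p = logistic(X @ w + b)          # network output per observation
        grad_w = X.T @ (p - y) / len(y)  # gradient of the cross-entropy loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w                 # gradient-descent training step
        b -= lr * grad_b

    print("trained parameters:", w, b)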

 

6.4          There was an initial flurry of interest in neural networks within the financial community, but this interest seemed to subside over time. It is not that the brain doesn’t in some respects work in the way that neural networks postulate; rather, early computerised neural networks generally proved rather poor at the sorts of tasks they were being asked to perform in this space.

 

6.5          More recently, with the advent of ‘Big Data’ and more powerful computers, there has been a resurgence of interest in ‘machine learning’ and artificial intelligence. We can expect this to percolate into the financial community if some firms identify approaches that prove successful with investment orientated problems. However, there is no guarantee that this will be easy. As Ghahramani (2015) notes, machine learning inherently involves uncertainty: there is no certainty that investment orientated problems are easily amenable to such techniques. Possibly, though, this uncertainty can itself be modelled within the probabilistic framework to machine learning, with probabilistic approaches then used to work out which types of investment problem are most amenable to machine learning techniques. He writes:

 

The key idea behind the probabilistic framework to machine learning is that learning can be thought of as inferring plausible models to explain observed data. A machine can use such models to make predictions about future data, and take decisions that are rational given these predictions. Uncertainty plays a fundamental part in all of this. Observed data can be consistent with many models, and therefore which model is appropriate, given the data, is uncertain. Similarly, predictions about future data and the future consequences of actions are uncertain. Probability theory provides a framework for modelling uncertainty.
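
The following Python sketch illustrates the idea in the quote that several models can be consistent with the same data, with probability theory used to weigh them. The two candidate return models and all parameters are invented for illustration only.

    import numpy as np

    def gaussian_loglik(x, mu, sigma):
        # Log-likelihood of the data under a normal model N(mu, sigma^2)
        return np.sum(-0.5 * ((x - mu) / sigma) ** 2
                      - np.log(sigma * np.sqrt(2.0 * np.pi)))

    rng = np.random.default_rng(2)
    data = rng.normal(0.0005, 0.01, size=250)   # invented daily returns

    # Two candidate models, given equal prior probability:
    #   m1: no drift,    returns ~ N(0, 0.01^2)
    #   m2: small drift, returns ~ N(0.001, 0.01^2)
    log_like = np.array([
        gaussian_loglik(data, 0.0, 0.01),
        gaussian_loglik(data, 0.001, 0.01),
    ])

    # Posterior over models via Bayes' rule (the equal priors cancel);
    # subtracting the maximum keeps the exponentials numerically stable
    posterior = np.exp(log_like - log_like.max())
    posterior /= posterior.sum()
    print("P(m1 | data) =", posterior[0], " P(m2 | data) =", posterior[1])

Neither model is ‘the’ right one; the posterior simply quantifies how plausible each is given the observed data, which is the sense in which uncertainty is modelled rather than eliminated.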

 

