Time Series
Neural Network Models
When dealing with neural networks, the model-finding process is very similar to that for ARIMA models. The three phases
- model selection,
- parameter estimation, and
- performance checking
can also be distinguished, although the terminology is usually quite different. Moreover, the heuristics guiding the model selection process are not as detailed as for ARIMA models. This is partly due to the non-linear mapping produced by neural networks. A thorough analysis of the time series, involving at least a trend analysis, a check for seasonal patterns, and a check of the autocorrelation, is definitely advisable, because it reveals relevant properties of the data. When using a standard feed-forward neural network, the number of layers and units can be selected; in addition, various properties of the training algorithm may be altered. In general, the goal is to find small models, because they have fewer degrees of freedom and therefore require less training data to obtain reliable results, whereas large neural networks need huge amounts of data. The hidden layers should be small enough to allow generalization, and large enough to produce the required mapping.
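To make the point about degrees of freedom concrete, the following is a minimal sketch (not part of the original text) that counts the trainable parameters of a fully connected feed-forward network with one hidden layer; the layer sizes are purely illustrative.

    # Minimal sketch: counting trainable parameters (degrees of freedom) of a
    # one-hidden-layer feed-forward network. Layer sizes are hypothetical.
    def n_parameters(n_inputs, n_hidden, n_outputs):
        """Weights plus bias terms of a one-hidden-layer feed-forward network."""
        input_to_hidden = (n_inputs + 1) * n_hidden     # +1 for the bias unit
        hidden_to_output = (n_hidden + 1) * n_outputs
        return input_to_hidden + hidden_to_output

    # A small window network: 4 past values, 3 hidden units, 1 forecast
    print(n_parameters(4, 3, 1))    # 19 parameters
    # A larger network needs far more training data to be fitted reliably
    print(n_parameters(12, 20, 1))  # 281 parameters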
Window Networks:
Since standard neural networks have not been developed for handling sequences of inputs, either the input has to be pre-processed or the model has to be adapted to temporal tasks. Pre-processing is the easier of the two strategies: it turns a subsequence of time-series elements into a single input pattern. This can, for instance, be achieved by sliding a so-called "time window" over the sequence. The input then consists of the preceding sequence elements, and the neural network is trained to forecast the next sequence element. (Annotation: this resembles an AR model from the class of ARIMA models, where the window size corresponds to the order of the AR model.) Using a time window has the advantage that standard neural networks can be used; consequently, they can be simulated with standard neural network modeling tools, as sketched below.
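As an illustration of the time-window pre-processing, here is a minimal sketch in Python; the function name make_windows and the example series are assumptions made for this illustration, not part of the original text.

    import numpy as np

    def make_windows(series, window_size):
        """Slide a time window over a 1-D series.

        Returns an array of input windows and the corresponding targets,
        i.e. the element immediately following each window.
        """
        series = np.asarray(series, dtype=float)
        n_patterns = len(series) - window_size
        inputs = np.array([series[i:i + window_size] for i in range(n_patterns)])
        targets = series[window_size:]
        return inputs, targets

    # Example: windows of length 3 over a short series
    x = [1, 2, 3, 4, 5, 6]
    inputs, targets = make_windows(x, window_size=3)
    # inputs  -> [[1 2 3], [2 3 4], [3 4 5]]
    # targets -> [4 5 6]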
Time Delay Neural Networks (TDNN):
A large number of sophisticated neural network models tailored to processing sequences of inputs have been developed. They are all equipped with some kind of "memory" that keeps information over time; neural networks without such a memory "forget" each input after mapping it to the output. In time delay networks [1], the connections have time delays of different lengths. Such a time delay postpones the forwarding
of a unit's activation to another unit. When using multiple time delays,
these networks can be trained to deal with a sequence of past time series
elements. At each time step, a single sequence element is fed into the
input, but the network's forecast takes preceding sequence elements into
account. While several time delays at the input form a time window, another
set of time delays at the hidden layer level duplicates the effect. In
the network described in [1], the weights of delayed connections are not
trained separately. This keeps the number of degrees of freedom low. The
resulting time delay networks are powerful tools for handling sequential
input.
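The following sketch illustrates the idea of a delayed connection with shared weights. It is a simplified, FIR-style (finite impulse response) illustration rather than Wan's original algorithm; all names and values are assumed for the example.

    import numpy as np

    # One connection with time delays 0..D-1: the forwarded activation is a
    # weighted sum of the sender's current and delayed activations. The same
    # delay weights are reused at every time step, which keeps the number of
    # free parameters low.
    def fir_connection(activations, delay_weights):
        """Apply one delayed connection to a sequence of sender activations."""
        D = len(delay_weights)
        outputs = []
        for t in range(D - 1, len(activations)):
            taps = activations[t - D + 1:t + 1][::-1]   # newest activation first
            outputs.append(float(np.dot(delay_weights, taps)))
        return np.array(outputs)

    sender = np.array([0.1, 0.4, 0.3, 0.8, 0.6])
    weights = np.array([0.5, 0.3, 0.2])                 # weights for delays 0, 1, 2
    print(fir_connection(sender, weights))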
Recurrent Networks:
While window networks and time delay networks are non-recurrent nets, networks with feedback loops belong to the group of recurrent networks. In these networks, unit activations are not only delayed while being fed forward, but are also delayed and fed back to preceding layers. In this way, information can cycle in the network. At least in theory, this allows an unlimited number of past activations to be taken into account.
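A minimal sketch of such a feedback loop, in the style of a simple (Elman-type) recurrent network; the layer sizes and random weights are hypothetical and only meant to show how the hidden state carries information from one time step to the next.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden = 1, 3
    W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))        # input -> hidden
    W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))   # hidden -> hidden (feedback)
    w_out = rng.normal(scale=0.5, size=n_hidden)                # hidden -> output

    def forecast(series):
        """One forward pass over a sequence; returns the forecast after the last element."""
        h = np.zeros(n_hidden)                                  # hidden state ("memory")
        for x in series:
            h = np.tanh(W_in @ np.array([x]) + W_rec @ h)       # feedback loop
        return float(w_out @ h)

    print(forecast([0.1, 0.2, 0.3, 0.4]))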
[1] E.A. Wan, Temporal Backpropagation: An Efficient Algorithm for Finite Impulse Response Neural Networks, in: D.S. Touretzky (ed.), Connectionist Models, pages 131-137, Morgan Kaufmann Publishers, San Mateo, CA, 1990.