Types of Neural Networks

There are many types of neural networks, but most can be characterized as either feed-forward networks or recurrent networks.

In a feed-forward neural network, information travels in a single direction. When a neuron fires, its activation value is always passed forward to the next layer and never back to the neuron itself or to a previous layer. Some networks have only a few hidden layers, while others have many. Neural networks with more than one hidden layer are considered Deep Neural Networks (DNN).

Hidden Layers in a DNN
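
To make the forward pass described above concrete, here is a minimal sketch in plain NumPy of a feed-forward network with two hidden layers. The layer sizes, random weights, and input are arbitrary placeholders for illustration; the point is that each layer's activations are passed only to the next layer, never backward.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# A feed-forward pass through two hidden layers: activations only move
# "forward" -- each layer's output feeds the next layer and is never sent back.
rng = np.random.default_rng(0)
x = rng.normal(size=(4,))                        # example input with 4 features (arbitrary)

W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # hidden layer 1
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)    # hidden layer 2
W3, b3 = rng.normal(size=(1, 8)), np.zeros(1)    # output layer

h1 = relu(W1 @ x + b1)   # input -> hidden layer 1
h2 = relu(W2 @ h1 + b2)  # hidden layer 1 -> hidden layer 2
y  = W3 @ h2 + b3        # hidden layer 2 -> output

print(y)
```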

There are different types of feed-forward neural networks suited to different problems or types of data, including Multilayer Perceptrons (MLP) and Convolutional Neural Networks (CNN). An MLP is a network with multiple hidden layers in which each node in a layer is connected to every node in the next layer; the figure above shows the architecture of an MLP. A CNN is a network whose inputs are assumed to be images. This assumption allows the architecture to be modified to make processing images more efficient, for example by using convolutional layers that reuse the same small set of weights across positions in the image instead of connecting every pixel to every node.
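
To illustrate the difference, below is a minimal sketch of both architectures using PyTorch (the framework is an assumption here, chosen only because it is widely used). The layer sizes assume a 28×28 single-channel image and 10 output classes, which are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

# MLP: every node in one layer connects to every node in the next layer.
mlp = nn.Sequential(
    nn.Linear(784, 128),  # flattened input -> hidden layer 1 (fully connected)
    nn.ReLU(),
    nn.Linear(128, 64),   # hidden layer 1 -> hidden layer 2
    nn.ReLU(),
    nn.Linear(64, 10),    # hidden layer 2 -> output layer
)

# CNN: the input is assumed to be an image, so the early layers use
# convolutions that share the same filters across spatial positions.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1-channel image -> 16 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                   # classifier head for 28x28 inputs
)

flat_image = torch.randn(1, 784)     # flattened 28x28 image for the MLP
image = torch.randn(1, 1, 28, 28)    # 2-D image for the CNN
print(mlp(flat_image).shape, cnn(image).shape)
```

Note that the MLP must connect every one of the 784 input values to every node in its first hidden layer, while the CNN reuses a handful of small filters across the whole image, which is what makes image processing more efficient.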

In contrast, recurrent neural networks have connections that pass data toward the output layer as well as connections that pass data back to previous layers. These connections create loops, which act as a sort of memory of previous inputs. This “memory” makes recurrent neural networks useful for processing sequences or time series, because information about previous inputs can be used when processing later inputs. A recurrent neural network may look like:

Recurrent Neural Network
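
A minimal sketch of that loop, again in plain NumPy with arbitrary sizes: the hidden state h is fed back into the computation at every time step, so each output can depend on all of the inputs seen so far.

```python
import numpy as np

# A minimal recurrent step: the hidden state h is passed back into the network
# at every time step, acting as a memory of earlier inputs in the sequence.
rng = np.random.default_rng(0)
W_x = rng.normal(size=(16, 4))    # input  -> hidden weights (sizes arbitrary)
W_h = rng.normal(size=(16, 16))   # hidden -> hidden (the recurrent "loop")
W_y = rng.normal(size=(1, 16))    # hidden -> output

def rnn_forward(sequence):
    h = np.zeros(16)                       # initial memory is empty
    outputs = []
    for x_t in sequence:                   # process the sequence one step at a time
        h = np.tanh(W_x @ x_t + W_h @ h)   # new memory depends on the input AND the old memory
        outputs.append(W_y @ h)            # prediction at this time step
    return np.array(outputs)

sequence = rng.normal(size=(10, 4))        # 10 time steps, 4 features each
print(rnn_forward(sequence).shape)         # (10, 1)
```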

Because predictions in finance often involve time series data, a recurrent neural network can be an ideal architecture for making financial predictions.
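
As a rough illustration of that idea, the sketch below trains a small recurrent model (an LSTM in PyTorch) to predict the next value of a synthetic price series from a sliding window of previous values. The data, window length, and model sizes are made-up placeholders; real financial modeling requires far more care with features, scaling, and evaluation.

```python
import torch
import torch.nn as nn

# Sketch of a recurrent model for a price series: use the previous `window`
# prices to predict the next one. The prices here are synthetic placeholders.
class PriceRNN(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, time_steps, 1)
        out, _ = self.rnn(x)              # hidden states carry earlier prices forward
        return self.head(out[:, -1, :])   # predict from the last time step's state

window = 30
prices = torch.cumsum(torch.randn(500), dim=0)   # synthetic "price" series
x = torch.stack([prices[i:i + window] for i in range(400)]).unsqueeze(-1)
y = prices[window:window + 400].unsqueeze(-1)    # the next price after each window

model = PriceRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                        # a few illustrative training steps
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
```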