Recurrent Module

This module implements various approaches to recurrence.

Elman Simple Recurrent Network:

  • Source nodes are hidden
  • One level of copy nodes
  • Source Value is activation value
  • Source value replaces existing copy node value
  • Copy node activation is linear
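The Elman copy rule above can be sketched in a few lines (hypothetical function name; the module's actual internals differ): the hidden node's activation simply replaces the stored context value, and the context activation is linear, i.e. passed through unchanged.

```python
def elman_copy_update(copy_value, hidden_activation):
    """Elman context unit: the source activation replaces the old value.

    The copy (context) node's activation is linear, so the stored value
    is fed back unchanged on the next forward pass.
    """
    return hidden_activation  # previous copy_value is discarded


# The context value is always the previous hidden activation.
context = 0.0
for hidden_act in [0.3, 0.7, -0.2]:
    context = elman_copy_update(context, hidden_act)
```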

Jordan

  • Source nodes are output nodes
  • One level of copy nodes
  • Source Value is activation value
  • Existing copy node value is discounted and combined with new source value
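The Jordan copy rule differs from Elman's in one respect: instead of replacing the old value, it discounts it and adds the new source value. A sketch (hypothetical function name):

```python
def jordan_copy_update(copy_value, output_activation, existing_weight):
    """Jordan context unit: discount the old value, add the new source value.

    existing_weight is between 0.0 and 1.0, so past output activations
    decay geometrically rather than being discarded outright.
    """
    return existing_weight * copy_value + output_activation
```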

NARX Non-Linear AutoRegressive with eXogenous inputs

  • Using the Narendra and Parthasarathy variation
  • Source nodes can come from outputs, inputs, or both:
      Outputs -- multiple copies, or orders
      Inputs -- multiple copies
  • Order == Number of copies
  • Copy value can be discounted
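The order concept above amounts to a delay line: an order of n means n copy levels, i.e. the last n source values are retained, each optionally discounted on the way in. A self-contained sketch (hypothetical class name; not the module's actual implementation):

```python
from collections import deque


class DelayLine:
    """A chain of copy nodes: order == number of past values retained."""

    def __init__(self, order, incoming_weight=1.0):
        self.incoming_weight = incoming_weight
        self.values = deque([0.0] * order, maxlen=order)

    def push(self, source_value):
        # Newest value enters at position 0; the oldest falls off the end.
        self.values.appendleft(self.incoming_weight * source_value)

    def taps(self):
        """The values the rest of the network sees, newest first."""
        return list(self.values)


# Output order 2: the network sees the last two (discounted) output values.
line = DelayLine(order=2, incoming_weight=0.5)
line.push(1.0)
line.push(2.0)
```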

RecurrentConfig Class

This is the base class for recurrent modifications. It is not intended to be used directly.

def __init__(self):

This function initializes the configuration class.

def apply_config(self, neural_net):

This function modifies the neural net that is passed in, according to the parameters that have been set in this class. By delegating the actual work to _apply_config, subclassed versions of apply_config can take multiple passes with less code.
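This split follows the template-method pattern. A minimal sketch of the idea (all names other than apply_config/_apply_config are hypothetical, and a plain list stands in for the neural net):

```python
class RecurrentConfigSketch:
    """Base class: the public apply_config delegates to _apply_config."""

    def apply_config(self, neural_net):
        self._apply_config(neural_net)

    def _apply_config(self, neural_net):
        # Stand-in for the real wiring work on the neural net.
        neural_net.append("pass")


class NARXLikeSketch(RecurrentConfigSketch):
    """A subclass reuses the inherited worker to take multiple passes,
    e.g. once for output-derived copies and once for input-derived ones."""

    def apply_config(self, neural_net):
        self._apply_config(neural_net)  # output-node pass
        self._apply_config(neural_net)  # input-node pass


net = []  # stand-in for a neural net object
NARXLikeSketch().apply_config(net)
```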

def _apply_config(self, neural_net):

This function actually does the work.

def _fully_connect(lower_node, upper_nodes):

This function creates connections to each of the upper nodes.

This is a separate function from the one in layers, because using this version does not require ALL of the nodes on a layer to be used.
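A sketch of that connection behavior (hypothetical Node type and attribute names; the module's actual classes differ): one lower node gains a connection to each of the upper nodes, without requiring every node on the layer to participate.

```python
class Node:
    """Minimal stand-in node that records its incoming connections."""

    def __init__(self, name):
        self.name = name
        self.input_connections = []


def fully_connect(lower_node, upper_nodes):
    """Connect one lower node upward to each node in upper_nodes."""
    for upper in upper_nodes:
        # Each upper node records the lower node as an input source.
        upper.input_connections.append(lower_node)


copy_node = Node("copy")
hidden = [Node("h0"), Node("h1"), Node("h2")]
fully_connect(copy_node, hidden)
```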

def get_source_nodes(self, neural_net):

This function is a stub for getting the appropriate source nodes.

def get_upper_nodes(self, neural_net):

This function is a stub for getting the appropriate nodes to which the copy nodes will connect.

ElmanSimpleRecurrent Class

This class implements a process for converting a standard neural network into an Elman Simple Recurrent Network. The following is used to define such a configuration:

  • Source nodes are nodes in the hidden layer.
  • One level of copy nodes is used, in this situation referred to as context units.
  • The source value from the hidden node is the activation value, and the copy node (context) activation is linear; in other words, simply a copy of the activation.
  • The source value replaces any previous value.

In the case of multiple hidden layers, this class will take the lowest hidden layer.

The class defaults to context nodes being fully connected to nodes in the hidden layer.

def __init__(self):

This function initializes the weights and default connection type consistent with an Elman Network.

def get_source_nodes(self, neural_net):

This function returns the hidden nodes from layer 1.

JordanRecurrent Class

This class implements a process for converting a standard neural network into a Jordan-style recurrent network. The following is used to define such a configuration:

  • Source nodes are nodes in the output layer.
  • One level of copy nodes is used, in this situation referred to as context units.
  • The source value from the output node is the activation value and the copy node (context) activation is linear; in other words simply a copy of the activation.

  • The source value is added to the slightly discounted previous copy value. So, the existing weight is some value less than 1.0 and greater than zero.

  • In the case of multiple hidden layers, this class will take the lowest hidden layer.

  • The class defaults to context nodes being fully connected to nodes in the output layer.

def __init__(self, existing_weight):

Initialization in this class means passing the weight that will be multiplied by the existing value in the copy node.

def get_source_nodes(self, neural_net):

This function returns the output nodes.

NARXRecurrent Class

This class implements a process for converting a standard neural network into a NARX (Non-Linear AutoRegressive with eXogenous inputs) recurrent network.

It also contains some modifications suggested by Narendra and Parthasarathy (1990).

Source nodes can come from outputs and inputs. There can be multiple levels of copies (or order in this nomenclature) from either outputs or inputs.

The source value can be passed in at full weight, or the incoming weight can be adjusted lower.

This class applies changes to the neural network by first applying the configurations related to the output nodes and then to the input nodes.

def __init__(self, output_order, incoming_weight_from_output, input_order, incoming_weight_from_input):

This function takes:

  • the output order, or number of copy levels of output values
  • the weight to apply to the incoming values from output nodes
  • the input order, or number of copy levels of input values
  • the weight to apply to the incoming values from input nodes
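Assuming, as stated earlier, that order equals the number of copies per source node, the node count a NARX configuration adds can be sketched as (hypothetical function name):

```python
def total_copy_nodes(num_outputs, output_order, num_inputs, input_order):
    """Each source node gets one copy node per order level, so the
    configuration adds output_order copies of every output node and
    input_order copies of every input node."""
    return num_outputs * output_order + num_inputs * input_order


# 1 output node with order 3, plus 2 input nodes with order 2 -> 7 copy nodes.
added = total_copy_nodes(1, 3, 2, 2)
```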

def get_source_nodes(self, neural_net):

This function returns either the output nodes or input nodes depending upon self._node_type.

def apply_config(self, neural_net):

This function first applies any parameters related to the output nodes and then any related to the input nodes.