Standard deviation
Mean value
Output value
The neuron's bias
Previous bias delta
Total bias delta
Derivative of the state
Gated error
Projected error
Error responsibility
Connections this node gates
Node ID for NEAT
Incoming connections to this node
Index
Used for dropout. During training this is either 0 (the node is ignored) or 1 (the node is included); it is used to avoid overfitting.
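For illustration, a minimal standalone sketch of how such a mask typically enters the output computation; the names squash, state, and dropoutRate are illustrative assumptions, not this library's confirmed internals:

```ts
// Sketch only: a dropout mask zeroes a dropped node's output during training.
const squash = (x: number) => 1 / (1 + Math.exp(-x)); // logistic activation (assumption)
const state = 0.4;
const dropoutRate = 0.5;

const mask = Math.random() < dropoutRate ? 0 : 1; // 0 = ignored, 1 = included
const output = mask * squash(state); // a dropped node contributes nothing downstream
console.log(output);
```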
More options for applying noise
Standard deviation
Mean value
Outgoing connections from this node
Previous state
A self connection
State
The type of this node.
Activates the node.
When a neuron activates, it computes its state from all its input connections and 'squashes' it using its activation function, and returns the output (activation).
You can also provide the activation (a float between 0 and 1) as a parameter, which is useful for neurons in the input layer.
A neuron's output value
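A minimal usage sketch, assuming a Node class with the activate() behavior described above; the import path and constructor arguments are assumptions:

```ts
import { Node } from "./node"; // hypothetical import path

const input = new Node("input");
const hidden = new Node("hidden");
input.connect(hidden);

input.activate(0.7);              // input layer: provide the activation directly
const output = hidden.activate(); // hidden layer: computed from incoming connections
console.log(output);              // the squashed state
```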
Constant nodes can't gate a connection.
Clears this node's state information - i.e. resets node and its connections to "factory settings"
node.clear() is useful for predicting time series.
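For example, when training on several independent time series, the state built up during one sequence should not leak into the next; a hedged sketch, with the import path and sample data as assumptions:

```ts
import { Node } from "./node"; // hypothetical import path

const node = new Node("input");
const sequences: number[][] = [
  [0.1, 0.2, 0.3],
  [0.9, 0.8, 0.7],
];

for (const sequence of sequences) {
  node.clear(); // reset state and traces before each independent sequence
  for (const value of sequence) {
    node.activate(value);
  }
}
```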
Connects this node to the given node(s)
Node(s) to project connection(s) to
Initial weight of the connection(s)
If true, connect the nodes to each other in both directions
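A short sketch of connecting nodes as described above; the import path and constructor are assumptions, and the third argument is assumed to toggle the two-sided behavior:

```ts
import { Node } from "./node"; // hypothetical import path

const a = new Node();
const b = new Node();

a.connect(b, 0.5);       // one-directional connection with initial weight 0.5
a.connect(b, 0.5, true); // assumed flag: also connects b back to a
a.connect(a);            // connecting a node to itself forms a self connection
```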
Disconnects this node from the given node(s)
Node(s) to remove connection(s) to
Is this a hidden Node?
Is this an input Node?
Is this an output Node?
Checks if the given node(s) have outgoing connections to this node
Checks if the node(s) have outgoing connections into this node
Returns true if every given node has an outgoing connection into this node
Checks if this node has outgoing connection(s) into the given node(s)
Checks if this node has outgoing connection(s) into the node(s)
Returns true if this node has an outgoing connection into every given node
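A sketch of the two projection checks; the method names isProjectingTo and isProjectedBy are assumptions inferred from the descriptions above, and the import is hypothetical:

```ts
import { Node } from "./node"; // hypothetical import path

const a = new Node();
const b = new Node();
a.connect(b);

console.log(a.isProjectingTo(b)); // true: a has an outgoing connection into b
console.log(b.isProjectedBy(a));  // true: a projects into b
console.log(a.isProjectingTo([b, new Node()])); // false: no connection into every given node
```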
Activation mutations aren't allowed for a constant node.
Bias mutations aren't allowed for a constant node.
Backpropagate the error (a.k.a. learn).
After an activation, you can teach the node what should have been the correct output (a.k.a. train). This is done by backpropagating. Momentum adds a fraction of the previous weight update to the current one. When the gradient keeps pointing in the same direction, this will increase the size of the steps taken towards the minimum.
If you combine a high learning rate with a lot of momentum, you will rush past the minimum (of the error function) with huge steps. It is therefore often necessary to reduce the global learning rate µ when using a lot of momentum (m close to 1). A worked sketch follows the parameter descriptions below.
The target value (i.e. "the value the network SHOULD have given")
More options for propagation
Momentum adds a fraction of the previous weight update to the current one.
When set to false, weights won't update; when set back to true after being false, that propagation will also include the delta weights accumulated during the earlier "update: false" propagations.
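With momentum m and learning rate µ, the update applied to a weight is roughly Δw(t) = −µ · ∂E/∂w + m · Δw(t−1), so consecutive gradients pointing the same way compound into larger steps. A hedged training sketch; the option names follow the parameters described above, while the import path and the learning-rate option are assumptions:

```ts
import { Node } from "./node"; // hypothetical import path

const input = new Node("input");
const output = new Node("output");
input.connect(output);

input.activate(1);
output.activate();

// Teach the node that the correct output was 0.
output.propagate(0, { rate: 0.3, momentum: 0.9, update: true });

// Batch-style accumulation: gather delta weights first, apply them on the last call.
output.propagate(0, { rate: 0.3, momentum: 0.9, update: false });
output.propagate(0, { rate: 0.3, momentum: 0.9, update: true }); // also applies the accumulated deltas
```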
Constant nodes can't gate a connection.
Sets the activation type
The new activation type
Can't set the bias of a constant node.
Converts this node into a JSON object.
The JSON object representing this node
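A minimal serialization sketch; the exact shape of the returned object is not specified here, and the import path is an assumption:

```ts
import { Node } from "./node"; // hypothetical import path

const node = new Node();
const json = node.toJSON();
console.log(JSON.stringify(json)); // e.g. bias, type, ... (shape assumed)
```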