
Class ActivationNode

Activation node

Hierarchy

Index

Constructors

constructor

Properties

activation

activation: number

Output value

bias

bias: number

The neuron's bias

deltaBiasPrevious

deltaBiasPrevious: number

The previous bias delta, used with momentum

deltaBiasTotal

deltaBiasTotal: number

The accumulated bias delta, applied when weights update

derivativeState

derivativeState: number

The derivative of the node's state, used during backpropagation

errorGated

errorGated: number

The error responsibility from connections this node gates

errorProjected

errorProjected: number

The error responsibility from connections this node projects

errorResponsibility

errorResponsibility: number

The node's total error responsibility, used during backpropagation

gated

gated: Set<Connection>

Connections this node gates

id

id: number

Node ID for NEAT

incoming

incoming: Set<Connection>

Incoming connections to this node

index

index: number

The node's index within its network

mask

mask: number

Used for dropout. This is either 0 (ignored) or 1 (included) during training and is used to avoid overfit.
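As an illustrative sketch (not this class's internal code), the mask's role can be shown in a few lines: during training the mask is sampled per node, and the node's effective output is its activation multiplied by the mask, so a mask of 0 silences the node for that pass. The names `sampleMask`, `dropRate`, and `maskedOutput` are assumptions, not part of this API.

```typescript
// Sketch of how a dropout mask gates a node's output (illustrative only).

function sampleMask(dropRate: number): number {
  // During training the mask is 0 with probability `dropRate`, else 1.
  return Math.random() < dropRate ? 0 : 1;
}

function maskedOutput(activation: number, mask: number): number {
  // A mask of 0 drops the node; a mask of 1 passes its activation through.
  return activation * mask;
}
```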

outgoing

outgoing: Set<Connection>

Outgoing connections from this node

prevState

prevState: number

The node's state from the previous activation, used for self-connections

selfConnection

selfConnection: Connection

This node's connection to itself

squash

squash: ActivationType

The node's activation (squash) function

state

state: number

The node's current state: the weighted sum of its inputs plus its bias, before squashing

type

type: NodeType

The type of this node.

Methods

activate

  • activate(): number
  • Activates the node.

    When a neuron activates, it computes its state from all its input connections and 'squashes' it using its activation function, and returns the output (activation).

    You can also provide the activation (a float between 0 and 1) as a parameter, which is useful for neurons in the input layer.

    Returns number

    A neuron's output value
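
The computation described above can be sketched in a few self-contained lines. Here `logistic`, `IncomingConnection`, and `activateSketch` are assumed names standing in for the library's squash function and Connection sets; treat this as an illustration, not the class's actual implementation.

```typescript
// Illustrative sketch of activation: state = bias + Σ(weight · input),
// activation = squash(state).

function logistic(x: number): number {
  return 1 / (1 + Math.exp(-x));
}

interface IncomingConnection {
  weight: number;
  fromActivation: number; // activation of the source node
}

function activateSketch(incoming: IncomingConnection[], bias: number): number {
  // Sum the weighted inputs plus the bias to get the node's state...
  const state = incoming.reduce((sum, c) => sum + c.weight * c.fromActivation, bias);
  // ...then squash it to produce the activation (the output value).
  return logistic(state);
}
```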

addGate

  • addGate(): void

clear

  • clear(): void
  • Clears this node's state information, i.e. resets the node and its connections to "factory settings"

    node.clear() is useful for predicting time series.

    Returns void

connect

  • connect(target: Node, weight?: number, twoSided?: boolean): Connection
  • Connects this node to the given node(s)

    Parameters

    • target: Node

      Node(s) to project connection(s) to

    • Default value weight: number = 1

      Initial connection(s) weight

    • Default value twoSided: boolean = false

      If true, also connects the target node back to this node

    Returns Connection
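
A minimal sketch of what a two-sided connect could look like, using the `incoming`/`outgoing` Sets described above. `SketchNode` and `Conn` are stand-ins for the library's Node and Connection classes, not its real code.

```typescript
// Illustrative: mirrors the incoming/outgoing Sets documented above.

interface Conn {
  from: SketchNode;
  to: SketchNode;
  weight: number;
}

class SketchNode {
  incoming = new Set<Conn>();
  outgoing = new Set<Conn>();

  connect(target: SketchNode, weight = 1, twoSided = false): Conn {
    const conn: Conn = { from: this, to: target, weight };
    this.outgoing.add(conn);
    target.incoming.add(conn);
    if (twoSided) target.connect(this, weight); // back connection
    return conn;
  }
}
```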

disconnect

fromJSON

isHiddenNode

  • isHiddenNode(): boolean

isInputNode

  • isInputNode(): boolean

isOutputNode

  • isOutputNode(): boolean

isProjectedBy

  • isProjectedBy(node: Node): boolean
  • Checks if the given node(s) have outgoing connections to this node

    Parameters

    • node: Node

      Checks if node(s) have outgoing connections into this node

    Returns boolean

    Returns true if every given node has an outgoing connection into this node

isProjectingTo

  • isProjectingTo(node: Node): boolean
  • Checks if this node has an outgoing connection(s) into the given node(s)

    Parameters

    • node: Node

      Checks if this node has outgoing connection(s) into node(s)

    Returns boolean

    Returns true if this node has an outgoing connection into every given node

mutateActivation

  • mutateActivation(): void

mutateBias

  • mutateBias(): void

propagate

  • propagate(target: number, options: { momentum?: undefined | number; rate?: undefined | number; update?: undefined | false | true }): void
  • Backpropagate the error (a.k.a. learn).

    After an activation, you can teach the node what should have been the correct output (a.k.a. train). This is done by backpropagating. Momentum adds a fraction of the previous weight update to the current one. When the gradient keeps pointing in the same direction, this will increase the size of the steps taken towards the minimum.

    If you combine a high learning rate with a lot of momentum, you will rush past the minimum (of the error function) with huge steps. It is therefore often necessary to reduce the global learning rate µ when using a lot of momentum (m close to 1).

    Parameters

    • target: number

      The target value (i.e. "the value the network SHOULD have given")

    • options: { momentum?: undefined | number; rate?: undefined | number; update?: undefined | false | true }

      More options for propagation

      • Optional momentum?: undefined | number

        Momentum adds a fraction of the previous weight update to the current one.

      • Optional rate?: undefined | number
      • Optional update?: undefined | false | true

        When set to false, weights won't update; the deltas are accumulated instead, and the next propagation with update set to true applies them along with its own.

    Returns void
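
The momentum behaviour described above can be illustrated with a classical momentum update. This is a sketch under assumed names (`momentumStep`, `previousDelta`), not the class's internal code.

```typescript
// Classical momentum: the new step is the gradient step plus a fraction
// of the previous step.

function momentumStep(
  gradient: number,
  previousDelta: number,
  rate: number,
  momentum: number
): number {
  return rate * gradient + momentum * previousDelta;
}

// When gradients keep pointing the same way, successive steps grow,
// approaching the limit rate * gradient / (1 - momentum).
```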

removeGate

  • removeGate(): void

setActivationType

  • setActivationType(activation: ActivationType): Node

setBias

toJSON

Generated using TypeDoc