Constructs an inverse exponential rate policy.
baseRate: The rate at the first iteration.
gamma: Learning rate decay per iteration; 0 &lt; gamma &lt; 1. A large gamma can cause the network to converge too quickly and stop learning; a low gamma can cause the network to learn very slowly.
power: Decay rate per iteration; 0 &lt; power. A large power can cause the network to stop learning quickly; a low power can cause the network to learn very slowly.
Calculates the current training rate.
count: The current iteration count.
Returns the current training rate.
Generated using TypeDoc
Inverse Exponential Learning Rate
The learning rate will exponentially decrease.
The rate at a given iteration is calculated as:

rate = baseRate * Math.pow(1 + gamma * iteration, -power)
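The formula above can be sketched as a small class. This is a minimal illustration, not the library's actual implementation; the names `InverseRatePolicy` and `calc` are assumptions based on the documented constructor and method.

```typescript
// Hypothetical sketch of an inverse exponential learning rate policy.
class InverseRatePolicy {
  constructor(
    private baseRate: number, // the rate at the first iteration
    private gamma: number,    // learning rate decay per iteration, 0 < gamma < 1
    private power: number     // decay exponent, 0 < power
  ) {}

  // Calculates the current training rate for the given iteration count.
  calc(count: number): number {
    return this.baseRate * Math.pow(1 + this.gamma * count, -this.power);
  }
}

const policy = new InverseRatePolicy(0.1, 0.01, 0.75);
console.log(policy.calc(0));   // at iteration 0 the rate equals baseRate: 0.1
console.log(policy.calc(100)); // rate has decayed after 100 iterations
```

Note that at iteration 0 the factor `(1 + gamma * 0)^(-power)` is 1, so the policy returns exactly `baseRate`, matching the documented meaning of that parameter.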