Enum fann::ActivationFunc

pub enum ActivationFunc {
    Linear,
    Threshold,
    ThresholdSymmetric,
    Sigmoid,
    SigmoidStepwise,
    SigmoidSymmetric,
    SigmoidSymmetricStepwise,
    Gaussian,
    GaussianSymmetric,
    GaussianStepwise,
    Elliott,
    ElliottSymmetric,
    LinearPiece,
    LinearPieceSymmetric,
    SinSymmetric,
    CosSymmetric,
    Sin,
    Cos,
}

The activation functions used for the neurons during training. They can either be set for a group of neurons using set_activation_func_hidden and set_activation_func_output, or for a single neuron using set_activation_func.

Similarly, the steepness of an activation function is specified using set_activation_steepness_hidden, set_activation_steepness_output and set_activation_steepness.

In the descriptions of the functions, x is the input to the activation function, y is the output, s is the steepness and d is the derivative.

Variants

Linear

Linear activation function.

  • span: -inf < y < inf

  • y = x*s, d = 1*s

  • Can NOT be used in fixed point.

Threshold

Threshold activation function.

  • x < 0 -> y = 0, x >= 0 -> y = 1

  • Can NOT be used during training.

ThresholdSymmetric

Symmetric threshold activation function.

  • x < 0 -> y = -1, x >= 0 -> y = 1

  • Can NOT be used during training.

Sigmoid

Sigmoid activation function.

  • One of the most used activation functions.

  • span: 0 < y < 1

  • y = 1/(1 + exp(-2*s*x))

  • d = 2*s*y*(1 - y)
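The two formulas above can be checked directly in plain Rust. This is a sketch of the math only, not the library's implementation (which also offers a faster stepwise approximation); the function names are illustrative:

```rust
fn sigmoid(x: f64, s: f64) -> f64 {
    // y = 1/(1 + exp(-2*s*x))
    1.0 / (1.0 + (-2.0 * s * x).exp())
}

fn sigmoid_deriv(x: f64, s: f64) -> f64 {
    // d = 2*s*y*(1 - y)
    let y = sigmoid(x, s);
    2.0 * s * y * (1.0 - y)
}

fn main() {
    let (x, s) = (0.3, 0.5);
    // Check the closed-form derivative against a central difference.
    let h = 1e-6;
    let numeric = (sigmoid(x + h, s) - sigmoid(x - h, s)) / (2.0 * h);
    println!(
        "y = {:.6}, d = {:.6}, numeric d = {:.6}",
        sigmoid(x, s),
        sigmoid_deriv(x, s),
        numeric
    );
}
```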

SigmoidStepwise

Stepwise linear approximation to sigmoid.

  • Faster than sigmoid but a bit less precise.

SigmoidSymmetric

Symmetric sigmoid activation function, aka. tanh.

  • One of the most used activation functions.

  • span: -1 < y < 1

  • y = tanh(s*x) = 2/(1 + exp(-2*s*x)) - 1

  • d = s*(1-(y*y))
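The identity y = tanh(s*x) = 2/(1 + exp(-2*s*x)) - 1 stated above is easy to verify numerically; a minimal sketch (illustrative names, not the library's code):

```rust
fn sigmoid_symmetric(x: f64, s: f64) -> f64 {
    // y = 2/(1 + exp(-2*s*x)) - 1
    2.0 / (1.0 + (-2.0 * s * x).exp()) - 1.0
}

fn main() {
    let s = 0.5;
    for &x in &[-2.0, -0.5, 0.0, 0.5, 2.0] {
        let y = sigmoid_symmetric(x, s);
        // The exponential form agrees with tanh(s*x).
        assert!((y - (s * x).tanh()).abs() < 1e-12);
        // d = s*(1 - y*y)
        println!("x = {x:+.1}: y = {y:+.6}, d = {:.6}", s * (1.0 - y * y));
    }
}
```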

SigmoidSymmetricStepwise

Stepwise linear approximation to symmetric sigmoid.

  • Faster than symmetric sigmoid but a bit less precise.

Gaussian

Gaussian activation function.

  • 0 when x = -inf, 1 when x = 0 and 0 when x = inf

  • span: 0 < y < 1

  • y = exp(-x*s*x*s)

  • d = -2*x*s*y*s
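A sketch of the Gaussian formulas above in plain Rust (illustrative names; the derivative is checked against a central difference):

```rust
fn gaussian(x: f64, s: f64) -> f64 {
    // y = exp(-x*s*x*s)
    (-(x * s) * (x * s)).exp()
}

fn gaussian_deriv(x: f64, s: f64) -> f64 {
    // d = -2*x*s*y*s
    -2.0 * x * s * gaussian(x, s) * s
}

fn main() {
    // Peak of 1 at x = 0, falling towards 0 on both sides.
    let h = 1e-6;
    let numeric = (gaussian(0.7 + h, 1.0) - gaussian(0.7 - h, 1.0)) / (2.0 * h);
    println!(
        "y(0) = {}, closed-form d = {:.6}, numeric d = {:.6}",
        gaussian(0.0, 1.0),
        gaussian_deriv(0.7, 1.0),
        numeric
    );
}
```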

GaussianSymmetric

Symmetric gaussian activation function.

  • -1 when x = -inf, 1 when x = 0 and -1 when x = inf

  • span: -1 < y < 1

  • y = exp(-x*s*x*s)*2-1

  • d = -2*x*s*(y+1)*s

GaussianStepwise

Stepwise linear approximation to gaussian. Faster than gaussian but a bit less precise. NOT implemented yet.

Elliott

Fast (sigmoid-like) activation function defined by David Elliott.

  • span: 0 < y < 1

  • y = ((x*s) / 2) / (1 + |x*s|) + 0.5

  • d = s*1/(2*(1+|x*s|)*(1+|x*s|))

ElliottSymmetric

Fast (symmetric sigmoid-like) activation function defined by David Elliott.

  • span: -1 < y < 1

  • y = (x*s) / (1 + |x*s|)

  • d = s*1/((1+|x*s|)*(1+|x*s|))
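The Elliott function is "fast" because it avoids exp() entirely: one abs and one division per evaluation. A sketch comparing it to tanh (illustrative names, not the library's code):

```rust
fn elliott_symmetric(x: f64, s: f64) -> f64 {
    // y = (x*s) / (1 + |x*s|) -- no exp() call needed.
    (x * s) / (1.0 + (x * s).abs())
}

fn main() {
    // Same general shape as the symmetric sigmoid (tanh) but cheaper to
    // evaluate; it approaches its -1/+1 asymptotes more slowly.
    for &x in &[-4.0, -1.0, 0.0, 1.0, 4.0] {
        println!(
            "x = {x:+.1}: elliott = {:+.4}, tanh = {:+.4}",
            elliott_symmetric(x, 1.0),
            x.tanh()
        );
    }
}
```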

LinearPiece

Bounded linear activation function.

  • span: 0 <= y <= 1

  • y = x*s, d = 1*s

LinearPieceSymmetric

Symmetric bounded linear activation function.

  • span: -1 <= y <= 1

  • y = x*s, d = 1*s
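The stated spans imply the linear value is clamped to the bounds; the clamping behaviour below is inferred from "bounded" and the spans, not spelled out above, and the names are illustrative:

```rust
fn linear_piece(x: f64, s: f64) -> f64 {
    // y = x*s, clamped to the stated span 0 <= y <= 1 (assumption).
    (x * s).clamp(0.0, 1.0)
}

fn linear_piece_symmetric(x: f64, s: f64) -> f64 {
    // y = x*s, clamped to the stated span -1 <= y <= 1 (assumption).
    (x * s).clamp(-1.0, 1.0)
}

fn main() {
    for &x in &[-3.0, -0.5, 0.25, 0.5, 3.0] {
        println!(
            "x = {x:+.2}: piece = {:.2}, symmetric = {:+.2}",
            linear_piece(x, 1.0),
            linear_piece_symmetric(x, 1.0)
        );
    }
}
```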

SinSymmetric

Periodical sine activation function.

  • span: -1 <= y <= 1

  • y = sin(x*s)

  • d = s*cos(x*s)

CosSymmetric

Periodical cosine activation function.

  • span: -1 <= y <= 1

  • y = cos(x*s)

  • d = s*-sin(x*s)

Sin

Periodical sine activation function.

  • span: 0 <= y <= 1

  • y = sin(x*s)/2+0.5

  • d = s*cos(x*s)/2

Cos

Periodical cosine activation function.

  • span: 0 <= y <= 1

  • y = cos(x*s)/2+0.5

  • d = s*-sin(x*s)/2
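The non-symmetric Sin and Cos variants simply rescale the natural [-1, 1] range of sine and cosine into [0, 1] via the /2 + 0.5 shift. A minimal sketch (illustrative names):

```rust
fn sin_activation(x: f64, s: f64) -> f64 {
    // y = sin(x*s)/2 + 0.5, span 0 <= y <= 1
    (x * s).sin() / 2.0 + 0.5
}

fn cos_activation(x: f64, s: f64) -> f64 {
    // y = cos(x*s)/2 + 0.5, span 0 <= y <= 1
    (x * s).cos() / 2.0 + 0.5
}

fn main() {
    use std::f64::consts::{FRAC_PI_2, PI};
    // sin: 0.5 at x = 0, peaks at 1.0 at x = pi/2; cos peaks at x = 0.
    for &x in &[0.0, FRAC_PI_2, PI] {
        println!(
            "x = {x:.4}: sin = {:.4}, cos = {:.4}",
            sin_activation(x, 1.0),
            cos_activation(x, 1.0)
        );
    }
}
```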

Methods

impl ActivationFunc

fn from_fann_activationfunc_enum(af_enum: fann_activationfunc_enum) -> FannResult<ActivationFunc>

Create an ActivationFunc from a fann_sys::fann_activationfunc_enum.

fn to_fann_activationfunc_enum(&self) -> fann_activationfunc_enum

Return the fann_sys::fann_activationfunc_enum corresponding to this ActivationFunc.

Trait Implementations

Derived Implementations

impl PartialEq for ActivationFunc

fn eq(&self, __arg_0: &ActivationFunc) -> bool

fn ne(&self, __arg_0: &ActivationFunc) -> bool

impl Eq for ActivationFunc

impl Debug for ActivationFunc

fn fmt(&self, __arg_0: &mut Formatter) -> Result

impl Clone for ActivationFunc

fn clone(&self) -> ActivationFunc

fn clone_from(&mut self, source: &Self)

impl Copy for ActivationFunc