Activation Functions in TensorFlow

A perceptron is a simple algorithm which, given an input vector x of m values (x1, x2, …, xm), outputs either 1 (ON) or 0 (OFF), and we define its function as follows:

f(x) = 1 if ω·x + b > 0, and 0 otherwise

Here, ω is a vector of weights, ω·x is the dot product, and b is the bias. This equation resembles the equation of a straight line: if x lies above this line, the answer is positive; otherwise it is negative. Ideally, however, we want to pass in training data and let the computer adjust the weights and the bias so that the errors produced by this neuron are minimized. The learning process should be able to recognize small changes that progressively teach our neuron to classify the information as we want. In the following image we don’t have “small changes” but a big jump, and the neuron cannot learn this way, because ω and b will not converge to the optimal values that minimize the error.
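As a quick aside, the decision rule above is easy to write down in plain Python. The following is only a minimal sketch (the function name perceptron and the sample numbers are illustrative, not from the original post):

>>> def perceptron(x, weights, bias):
...     # output 1 (ON) if the weighted sum plus the bias is positive, else 0 (OFF)
...     total = sum(w * xi for w, xi in zip(weights, x)) + bias
...     return 1 if total > 0 else 0
...
>>> perceptron([1.0, 2.0], [0.5, -0.25], 0.1)
1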


The tangent (slope) of this function indicates how our neuron is learning; and, as we can deduce from it, the tangent at x = 0 is INFINITE. This is not possible in real scenarios, because in real life we learn step by step. In order to make our neuron learn, we need something that changes progressively from 0 to 1: a continuous (and differentiable) function.
When we start using neural networks, we use activation functions as an essential part of each neuron. The activation function is what will allow us to adjust the weights and the bias.

In TensorFlow, we can find the activation functions in the neural network (nn) library.

Activation Functions

Sigmoid

f(x) = 1 / (1 + e^(-x))

Mathematically, the function is continuous. As we can see, the sigmoid behaves similarly to the perceptron, but the changes are gradual and we can obtain output values other than 0 and 1.


>>> import tensorflow as tf
>>> sess = tf.Session()
>>> x = tf.lin_space(-3., 3., 24)
>>> print(sess.run(tf.nn.sigmoid(x)))
 [ 0.04742587 0.06070346 0.07739628 0.09819958 0.12384397 0.15503395
 0.1923546 0.23614843 0.28637746 0.34249979 0.40340331 0.46743745
 0.53256249 0.59659666 0.65750021 0.71362257 0.76385158 0.80764538
 0.84496599 0.87615603 0.90180045 0.92260367 0.9392966 0.95257413]

The sigmoid is the best-known activation function; however, it is not used that often in practice because of its tendency to zero out the backpropagation terms (vanishing gradients) during training.
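To see this vanishing effect numerically, here is a small sketch (reusing the sess and TensorFlow 1.x style from the snippet above) that evaluates the gradient of the sigmoid; for large |x| the derivative is nearly zero, which is exactly what shrinks the backpropagated signal:

>>> x = tf.constant([-6., 0., 6.])
>>> y = tf.nn.sigmoid(x)
>>> # d(sigmoid)/dx = sigmoid(x) * (1 - sigmoid(x)); roughly [0.0025, 0.25, 0.0025]
>>> print(sess.run(tf.gradients(y, x)[0]))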

ReLU (Rectified Linear Unit)

f(x) = max(0, x)

This function has become very popular because it produces very good experimental results. The main advantage of ReLU is that it accelerates the convergence of SGD (stochastic gradient descent, which indicates how fast our neuron is learning) compared to the sigmoid and tanh functions.

This strength is, at the same time, its main weakness, because this “learning speed” can push the neuron’s weights so far from the optimal values that the neuron never activates on any point again. For example, if the learning rate is too high, around half of the neurons can end up “dead”; if we set a proper value, our network will learn, but more slowly than we might expect.


>>> import tensorflow as tf
>>> sess = tf.Session()
>>> x = tf.lin_space(-3., 3., 24)
>>> print(sess.run(tf.nn.relu(x)))
 [ 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0.13043475
 0.39130425 0.652174 0.9130435 1.173913 1.43478251 1.69565201
 1.95652151 2.21739101 2.47826099 2.7391305 3. ]
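The “dead neuron” issue mentioned above comes from the fact that the ReLU gradient is exactly zero for negative inputs, so a unit stuck in that region stops receiving weight updates. A quick check, as a sketch reusing the same session as above:

>>> x = tf.constant([-2., -0.5, 0.5, 2.])
>>> # the gradient of relu is 0 for negative inputs and 1 for positive inputs
>>> print(sess.run(tf.gradients(tf.nn.relu(x), x)[0]))   # -> [0. 0. 1. 1.]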




ReLU6

f(x) = min(max(0, x), 6)

It seems this function was introduced in “Convolutional Deep Belief Networks on CIFAR-10” (page 2). Its main advantage, compared to the plain ReLU, is that it is computationally faster and does not suffer from vanishing (values infinitesimally close to zero) or exploding activations. As you may have guessed, it is used in Convolutional Neural Networks and Recurrent Neural Networks.


>>> import tensorflow as tf
>>> sess = tf.Session()
>>> x = tf.lin_space(-3., 9., 24)
>>> print(sess.run(tf.nn.relu6(x)))
 [ 0. 0. 0. 0. 0. 0.
 0.13043475 0.652174 1.173913 1.69565201 2.21739101 2.7391305
 3.2608695 3.78260851 4.30434799 4.826087 5.347826 5.86956501
 6. 6. 6. 6. 6. 6. ]
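To make the capping explicit, here is a short side-by-side sketch (same session as above, with illustrative input values) showing that tf.nn.relu6 clips everything above 6, while tf.nn.relu keeps growing:

>>> x = tf.constant([-3., 0., 3., 6., 9.])
>>> print(sess.run(tf.nn.relu(x)))    # -> [0. 0. 3. 6. 9.]
>>> print(sess.run(tf.nn.relu6(x)))   # -> [0. 0. 3. 6. 6.]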

Hyperbolic Tangent

f(x) = tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))

This function is very similar to the sigmoid, except that instead of having a range between 0 and 1, it has a range between -1 and 1. Sadly, it suffers from the same vanishing-gradient problem as the sigmoid.


>>> import tensorflow as tf
>>> sess = tf.Session()
>>> x = tf.lin_space(-5., 5., 24)
>>> print(sess.run(tf.nn.tanh(x)))
 [-0.99990922 -0.9997834 -0.99948329 -0.99876755 -0.99706209 -0.9930048
 -0.98339087 -0.96082354 -0.90900028 -0.79576468 -0.57313168 -0.21403044
 0.21402998 0.57313132 0.79576457 0.90900022 0.96082354 0.98339081
 0.9930048 0.99706209 0.99876755 0.99948329 0.9997834 0.99990922]
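The similarity with the sigmoid is more than visual: tanh is just a rescaled and shifted sigmoid, tanh(x) = 2·sigmoid(2x) - 1, which a quick numerical check (a sketch using the same session as above) confirms:

>>> x = tf.constant([-2., -1., 0., 1., 2.])
>>> # the difference between tanh(x) and 2*sigmoid(2x) - 1 is zero (up to float rounding)
>>> print(sess.run(tf.nn.tanh(x) - (2. * tf.nn.sigmoid(2. * x) - 1.)))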


These activation functions introduce nonlinearities into neural networks. Keep in mind that the range of the chosen function also constrains the output: if the range is between 0 and 1 (sigmoid), then the node can only output values between 0 and 1.
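The nonlinearity is also what makes stacking layers useful in the first place: without it, several layers collapse into a single linear transformation, because matrix multiplication is associative. A minimal sketch of that collapse (the weight values below are purely illustrative), reusing the same session:

>>> W1 = tf.constant([[1., 2.], [3., 4.]])
>>> W2 = tf.constant([[0., 1.], [1., 0.]])
>>> x = tf.constant([[1.], [-1.]])
>>> two_layers = tf.matmul(W2, tf.matmul(W1, x))   # two "layers" with no activation in between
>>> one_layer = tf.matmul(tf.matmul(W2, W1), x)    # ...are exactly one layer with weights W2·W1
>>> print(sess.run(two_layers - one_layer))        # -> zeros: no extra expressive power gained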

We have some other activation functions implemented in TensorFlow, like softsign, softplus, ELU, and cReLU, but most of them are not used as frequently, and the others are variations of the functions already explained. With the exception of dropout (which is not precisely an activation function, but it will be heavily used in backpropagation, and I will explain it later), we have covered everything we need for this topic in TensorFlow. See you next time!
