Part 5: Introduction to Gradient Descent and Newton’s Algorithms with TensorFlow

Free of Confines
Nov 3, 2018

So Far

This article (Part 5) continues the series on neural networks with TensorFlow (Part 1, Part 2, and Part 3). It can also be read as a standalone piece, since it deals with fundamental concepts of optimization in general.

In this article, you can expect to learn about

  • The concept of derivatives and why it matters for optimization
  • A simple optimization algorithm called gradient descent (a minimal sketch follows this list)
  • Another optimization algorithm, Newton’s method (sketched after the next paragraph)
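
As a preview, here is a minimal sketch of gradient descent on a function of one unknown. The function f(x) = (x − 3)², the starting point, and the learning rate are illustrative choices of mine, not taken from the notebook, and the sketch assumes TensorFlow 2.x eager execution:

```python
# Minimal gradient descent sketch (assumes TensorFlow 2.x; the function,
# starting point, and learning rate are illustrative choices).
import tensorflow as tf

x = tf.Variable(0.0)   # initial guess for the unknown
lr = 0.1               # learning rate (step size)

for step in range(50):
    with tf.GradientTape() as tape:
        f = (x - 3.0) ** 2          # f(x) = (x - 3)^2, minimum at x = 3
    grad = tape.gradient(f, x)      # df/dx at the current x
    x.assign_sub(lr * grad)         # update: x <- x - lr * df/dx

print(x.numpy())  # approaches 3.0
```

Each iteration takes a small step against the derivative, so the estimate creeps toward the minimum at x = 3.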

We will work through examples on simple functions coded with TensorFlow. For simplicity, the examples are picked to have only one unknown, although the concept of derivatives/gradients extends to any number of variables. Don’t be discouraged by the mathematical notation and equations; I have used them to add clarity to the material.
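
For comparison, here is an equally minimal sketch of Newton’s method on the same illustrative one-unknown function, using nested gradient tapes to obtain the second derivative (again assuming TensorFlow 2.x, not the notebook’s exact code):

```python
# Minimal Newton's method sketch: x <- x - f'(x) / f''(x).
# Assumes TensorFlow 2.x; f(x) = (x - 3)^2 is an illustrative choice.
import tensorflow as tf

x = tf.Variable(0.0)   # initial guess for the unknown

for step in range(5):
    with tf.GradientTape() as outer:
        with tf.GradientTape() as inner:
            f = (x - 3.0) ** 2
        df = inner.gradient(f, x)   # first derivative f'(x)
    d2f = outer.gradient(df, x)     # second derivative f''(x)
    x.assign_sub(df / d2f)          # Newton update

print(x.numpy())  # 3.0; a quadratic is solved in a single Newton step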

Working Along

To work along with the material, you can download an easy-to-use Jupyter Notebook from

https://github.com/FreeOfConfines/ExampleNNWithKerasAndTensorflow/blob/master/Review_of_Functions%2C_Derivatives%2C_and_Gradient_Descent_Algorithm.ipynb
