Deep learning is part of a broader family of machine learning methods based on learning data representations, "as opposed to task-specific algorithms."
In most programming languages, a computation is executed directly as it is written. If you type a = 3*4 + 2 in a Python console, you immediately get the result. Running mathematical computations like this in an IDE also lets you set breakpoints, pause execution, and inspect intermediate results. This is not possible in TensorFlow: what you actually do is specify the computations that will be done later.
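To make the contrast concrete, here is a minimal sketch using the TensorFlow 1.x graph API (the style this article describes); note that the TensorFlow expression produces a graph node, not a value:

```python
import tensorflow as tf

a = 3*4 + 2                # plain Python: evaluated immediately, a == 14
b = tf.constant(3) * tf.constant(4) + tf.constant(2)

print(a)   # 14
print(b)   # a Tensor object, not 14: nothing has been executed yet
```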
This is accomplished by creating a computational graph, which takes multidimensional arrays called "tensors" and performs computations on them. Each node in the graph denotes an operation. When creating the graph, you can explicitly specify where the computations should run, on the GPU or the CPU. By default, TensorFlow checks whether a GPU is available and uses it.
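As a sketch of what explicit device placement looks like in the TensorFlow 1.x API (the device string "/gpu:0" assumes a CUDA-capable GPU is visible on your machine):

```python
import tensorflow as tf

# Pin these operations to the first GPU; without the tf.device block,
# TensorFlow places them automatically (preferring a GPU if one exists).
with tf.device('/gpu:0'):
    x = tf.random_normal([1000, 1000])
    y = tf.random_normal([1000, 1000])
    product = tf.matmul(x, y)   # adds a node to the graph; nothing runs yet
```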
The graph is run in a Session, where you specify which operations to execute in the run function. Data from outside may also be supplied through placeholders in the graph, so you can run it multiple times with different inputs. Furthermore, intermediate results (such as model weights) can be incrementally updated in variables, which retain their values between runs.
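A minimal sketch tying these pieces together with the TensorFlow 1.x Session API; the scalar placeholder and the update rule here are illustrative choices, not from the original article:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[])   # filled in at run time
w = tf.Variable(1.0)                       # keeps its value between runs
update = tf.assign(w, w + x)               # incrementally update the variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for value in [2.0, 3.0]:
        # feed_dict supplies data to the placeholder for this particular run
        print(sess.run(update, feed_dict={x: value}))   # prints 3.0, then 6.0
```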
You can see that the GPU (a GTX 1080 in my case) is much faster than the CPU (an Intel i7). Back-propagation is almost exclusively used today when training neural networks, and it can be expressed as a series of matrix multiplications (the forward and backward passes). That is why using GPUs is so important for quickly training deep-learning models.
CPU & GPU
CPU time in green and GPU time in blue. The initial GPU delay at the first iteration is perhaps due to TensorFlow's one-time startup work, such as initializing the device.
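If you want to reproduce a comparison like this, here is a rough sketch (the helper function and matrix size are made up for illustration; it assumes a visible GPU, and absolute timings will depend on your hardware):

```python
import time
import tensorflow as tf

def time_matmul(device, n=2000, iters=10):
    # Build a matrix multiplication pinned to the given device and time it.
    with tf.device(device):
        x = tf.random_normal([n, n])
        op = tf.matmul(x, x)
    with tf.Session() as sess:
        for i in range(iters):
            start = time.time()
            sess.run(op)
            print('%s iteration %d: %.4f s' % (device, i, time.time() - start))

time_matmul('/cpu:0')
time_matmul('/gpu:0')   # expect the first GPU iteration to be slower (startup)
```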
If you want to know more about Machine Learning, Data Science, or Deep Learning, follow our publication Hacker Noon.
Thanks to everyone.