Some Deep Learning courses to keep in mind

By ridhigrg |Email | Feb 19, 2020 | 1227 Views

Deep Learning with Python and PyTorch
Learn how to use Python and its popular libraries such as NumPy and Pandas, as well as the PyTorch Deep Learning library. You'll then apply them to build Neural Networks and Deep Learning models.
IBM

About this course
The course will teach you how to develop Deep Learning models using PyTorch while providing the necessary deep learning background.

  • We'll start off with PyTorch's tensors and its Automatic Differentiation package. Then we'll cover different Deep Learning models in each section, beginning with fundamentals such as Linear Regression and logistic/softmax regression.
  • We'll then move on to Feedforward deep neural networks, the role of different activation functions, normalization and dropout layers.
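As a quick illustration of the first topic above (this is not course material, just a minimal sketch assuming PyTorch is installed), a tensor created with `requires_grad=True` lets the Automatic Differentiation package compute gradients for you:

```python
import torch  # assumes PyTorch is installed

# A tensor with requires_grad=True records operations for autodiff.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x  # y = x^2 + 3x
y.backward()        # the Automatic Differentiation package computes dy/dx
print(x.grad)       # dy/dx = 2x + 3 = 7 at x = 2
```

The same mechanism is what makes models like Linear Regression trainable: gradients of the loss with respect to each parameter come for free from `backward()`.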

In the final part of the course, we'll focus on Convolutional Neural Networks and Transfer Learning (pre-trained models). Several other Deep Learning methods will also be covered.

What you'll learn
  • Explain and apply knowledge of Deep Neural Networks and related machine learning methods;
  • Know how to use Python, and Python libraries such as Numpy and Pandas along with the PyTorch library for Deep Learning applications;
  • Build Deep Neural Networks using PyTorch.

Deep Learning with TensorFlow
Much of the world's data is unstructured. Think images, sound, and textual data. Learn how to apply Deep Learning with TensorFlow to this type of data to solve real-world problems.
IBM
About this course
Traditional neural networks rely on shallow nets, composed of one input layer, one hidden layer, and one output layer. Deep learning networks are distinguished from these ordinary neural networks by having more hidden layers, i.e. greater depth. These kinds of nets are capable of discovering hidden structures within unlabeled and unstructured data (i.e. images, sound, and text), which constitutes the vast majority of the data in the world.

TensorFlow is one of the best libraries to implement deep learning. TensorFlow is a software library for numerical computation of mathematical expressions, using data flow graphs. Nodes in the graph represent mathematical operations, while the edges represent the multidimensional data arrays (tensors) that flow between them. It was created by Google and tailored for Machine Learning. In fact, it is widely used to develop solutions with Deep Learning.
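To make the nodes-and-edges idea concrete, here is a minimal sketch (not from the course, and assuming TensorFlow 2.x) in which two tensors flow into a matrix-multiply operation:

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# Nodes are operations; the tensors below flow along the edges into matmul.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
c = tf.matmul(a, b)  # a matrix-multiplication node producing a new tensor

print(c.numpy())  # the result: [[1+0, 1+2], [3+0, 3+4]] = [[1, 3], [3, 7]]
```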

In this TensorFlow course, you will learn the basic concepts of TensorFlow: the main functions, operations, and the execution pipeline. Starting with a simple "Hello World" example, you will see throughout the course how TensorFlow can be used in curve fitting, regression, classification, and minimization of error functions.

This concept is then explored in the Deep Learning world. You will learn how to apply TensorFlow for backpropagation to tune the weights and biases while the Neural Networks are being trained. Finally, the course covers different types of Deep Architectures, such as Convolutional Networks, Recurrent Networks, and Autoencoders.
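As a rough sketch of what "backpropagation to tune the weights and biases" looks like in modern TensorFlow (assuming TensorFlow 2.x; the toy data point is made up for illustration), one gradient-descent step might be:

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# One gradient-descent step on a single weight and bias.
w = tf.Variable(0.0)
b = tf.Variable(0.0)
x, y_true = 2.0, 5.0  # toy data point: we want w*x + b to approach 5

with tf.GradientTape() as tape:
    y_pred = w * x + b
    loss = (y_pred - y_true) ** 2

dw, db = tape.gradient(loss, [w, b])  # backpropagate the loss
w.assign_sub(0.1 * dw)  # tune the weight: dw = 2*(0-5)*x = -20, so w -> 2.0
b.assign_sub(0.1 * db)  # tune the bias:  db = 2*(0-5)   = -10, so b -> 1.0
```

A real training loop repeats this over batches of data, but the mechanics are the same.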

What you'll learn
  • Explain foundational TensorFlow concepts such as the main functions, operations, and execution pipelines.
  • Describe how TensorFlow can be used in curve fitting, regression, classification, and minimization of error functions.
  • Understand different types of Deep Architectures, such as Convolutional Networks, Recurrent Networks, and Autoencoders.
  • Apply TensorFlow for backpropagation to tune the weights and biases while the Neural Networks are being trained.

Using GPUs to Scale and Speed-up Deep Learning
Training complex deep learning models with large datasets takes a long time. In this course, you will learn how to use accelerated GPU hardware to overcome the scalability problem in deep learning.
IBM
About this course
Training a complex deep learning model with a very large dataset can take hours, days, and occasionally weeks. So, what is the solution? Accelerated hardware.

You can use accelerated hardware such as Google's Tensor Processing Unit (TPU) or an Nvidia GPU to speed up your convolutional neural network computations in the Cloud. These chips are specifically designed to support the training of neural networks, as well as the use of trained networks (inference). Accelerated hardware has been shown to significantly reduce training time.
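In PyTorch terms (a minimal sketch, not from the course; the layer sizes are arbitrary), moving a model onto a GPU when one is available is a one-line device switch, with a CPU fallback so the same code runs anywhere:

```python
import torch  # assumes PyTorch; a CUDA-capable GPU is optional

# Fall back to CPU when no GPU is present, so the same code runs anywhere.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(10, 2).to(device)   # move parameters to the device
batch = torch.randn(32, 10, device=device)  # allocate inputs on the same device
out = model(batch)
print(out.shape)  # torch.Size([32, 2])
```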

But the problem is that your data might be sensitive and you may not feel comfortable uploading it to a public cloud, preferring to analyze it on-premise. In this case, you need to use an in-house system with GPU support. One solution is to use IBM's Power Systems with Nvidia GPU and PowerAI. The PowerAI platform supports popular machine learning libraries and dependencies including TensorFlow, Caffe, Torch, and Theano.

In this course, you'll understand what GPU-based accelerated hardware is and how it can benefit your deep learning scaling needs. You'll also deploy deep learning networks on GPU accelerated hardware for several problems, including the classification of images and videos.

What you'll learn
  • Explain what a GPU is, how it can speed up computation, and its advantages in comparison with CPUs.
  • Implement deep learning networks on GPUs.
  • Train and deploy deep learning networks for image and video classification as well as for object recognition.

Professional Certificate in Deep Learning
IBM
What you will learn
  • Fundamental concepts of Deep Learning, including various Neural Networks for supervised and unsupervised learning.
  • Use of popular Deep Learning libraries such as Keras, PyTorch, and TensorFlow, applied to industry problems.
  • Build, train, and deploy different types of Deep Architectures, including Convolutional Networks, Recurrent Networks, and Autoencoders.
  • Application of Deep Learning to real-world scenarios such as object recognition and Computer Vision, image and video processing, text analytics, Natural Language Processing, recommender systems, and other types of classifiers.
  • Master Deep Learning at scale with accelerated hardware and GPUs.

Source: HOB