Welcome to the course! These modules will teach you the fundamental building blocks and the theory necessary to be a responsible machine learning practitioner in your own community. Each module focuses on accessible examples designed to teach you about good practices and the powerful (yet surprisingly simple) algorithms we use to model data.
This project accompanies my Deep Learning with TensorFlow 2 and Keras training. It contains the exercises and their solutions, in the form of Jupyter notebooks.
WARNING: The TensorFlow 2.0 preview may contain bugs and may not behave exactly like the final 2.0 release. Hopefully, this code will run fine once TF 2 is out. This is extreme bleeding-edge stuff, people! :)
The focus of this course will be on understanding artificial neural networks and deep learning algorithmically (discussing the math behind these methods on a basic level) and implementing network models in code as well as applying these to real-world datasets. Some of the topics that will be covered include convolutional neural networks for image classification and object detection, recurrent neural networks for modeling text, and generative adversarial networks for generating new data.
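To make the convolutional-network topic concrete, here is a minimal, hypothetical NumPy sketch (not course code) of the 2-D convolution operation that image-classification networks are built from; the `conv2d_valid` helper and the edge-detector kernel are illustrative names chosen here, not part of the course materials:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel over the image ('valid' padding, no flip),
    summing elementwise products at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An image with a vertical step edge, and a kernel that responds
# where brightness changes from left to right.
image = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]])
kernel = np.array([[1., -1.]])
edges = conv2d_valid(image, kernel)
```

Real deep-learning libraries implement the same sliding-window idea far more efficiently (and learn the kernel weights during training), but the arithmetic is exactly this.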
This course will cover two areas of deep learning in which labeled data is not required: Deep Generative Models and Self-supervised Learning. Recent advances in generative models have made it possible to realistically model high-dimensional raw data such as natural images, audio waveforms, and text corpora. Strides in self-supervised learning have started to close the gap between supervised representation learning and unsupervised representation learning in terms of fine-tuning to unseen tasks. This course will cover the theoretical foundations of these topics as well as their newly enabled applications.
This class provides a practical introduction to deep learning, including theoretical motivations and how to implement it in practice. As part of the course, we will cover multilayer perceptrons, backpropagation, automatic differentiation, and stochastic gradient descent. We then introduce convolutional networks for image processing, starting from the simple LeNet and moving to more recent, highly accurate architectures such as ResNet. Next, we discuss sequence models and recurrent networks, such as LSTMs and GRUs, along with attention mechanisms. Throughout the course, we emphasize efficient implementation, optimization, and scalability, e.g., to multiple GPUs and to multiple machines. The goal of the course is to provide both a good understanding and a good ability to build modern nonparametric estimators.
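The interplay of backpropagation and stochastic gradient descent mentioned above can be sketched on the simplest possible model. This is a hypothetical illustration (not code from the course): fitting a line by computing gradients with the chain rule and taking one SGD step per example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noiseless data from y = 2x - 1.
X = rng.uniform(-1.0, 1.0, size=(100, 1))
y = 2.0 * X[:, 0] - 1.0

w, b = 0.0, 0.0   # parameters to learn
lr = 0.1          # learning rate

for epoch in range(200):
    for i in rng.permutation(len(X)):
        pred = w * X[i, 0] + b
        err = pred - y[i]           # dL/dpred for L = 0.5 * err**2
        w -= lr * err * X[i, 0]     # chain rule: dL/dw = err * x
        b -= lr * err               # chain rule: dL/db = err
```

After training, `w` and `b` should be close to the true values 2 and -1. In a real deep network the same chain-rule bookkeeping is handled by automatic differentiation, layer by layer, rather than by hand.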
The course introduces students to the design of algorithms that enable machines to learn from reinforcement. In contrast to supervised learning, where machines learn from examples that include the correct decision, and unsupervised learning, where machines discover patterns in the data, reinforcement learning allows machines to learn from partial, implicit, and delayed feedback. This is particularly useful in sequential decision-making tasks where a machine repeatedly interacts with the environment or users. Applications of reinforcement learning include robotic control, autonomous vehicles, game playing, conversational agents, assistive technologies, computational finance, operations research, etc.
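Learning from delayed feedback can be sketched with tabular Q-learning, a classic reinforcement-learning algorithm. The toy chain environment below is a hypothetical example constructed here for illustration (not from the course): reward arrives only at the final state, yet the temporal-difference update propagates that delayed signal back to earlier decisions:

```python
import numpy as np

# A tiny deterministic chain: states 0..4, actions 0 (left) / 1 (right).
# Reward 1 is given only on reaching terminal state 4 -- delayed feedback.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(0)

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == GOAL)

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit, sometimes explore
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # Temporal-difference update: nudge Q[s, a] toward the
        # observed reward plus the discounted value of the next state.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
```

After training, the greedy policy (`np.argmax(Q[s])`) chooses "right" in every state, even though only the final transition ever produced a reward.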
This course is taken almost verbatim from CS 224N Deep Learning for Natural Language Processing, Richard Socher's course at Stanford. We are following their course's formulation and selection of papers, with the permission of Socher.
This course examines the use of natural language processing as a set of methods for exploring and reasoning about text as data, focusing especially on the applied side of NLP: using existing NLP methods and libraries in Python in new and creative ways (rather than exploring the core algorithms underlying them).
This is an applied course; each class period will be divided between a short lecture and in-class lab work using Jupyter notebooks (roughly 50% each). Students will be programming extensively during class and will work in groups with other students and instructors. Students must prepare for each class and submit preparatory materials before class; attendance in class is required.
This is a collection of course material from various courses that I've taught on machine learning at UBC, including material from over 80 lectures covering a large number of topics related to machine learning. The notation is fairly consistent across the topics, which makes it easier to see relationships, and the topics are meant to be worked through in order (with the difficulty slowly increasing and concepts defined at their first occurrence). I'm putting this in one place in case people find it useful for educational purposes.