Neural Networks and Deep Learning: A Textbook 1st ed. 2018 Edition, Kindle Edition
by Charu C. Aggarwal
This book covers both classical and modern models in deep learning. The primary focus is on the theory and algorithms of deep learning, which are essential for understanding the design of neural architectures in different applications. Why do neural networks work? When do they work better than off-the-shelf machine-learning models? When is depth useful? Why is training neural networks so hard? What are the pitfalls? The book is also rich in discussing different applications in order to give the practitioner a flavor of how neural architectures are designed for different types of problems. Applications in many areas, such as recommender systems, machine translation, image captioning, image classification, reinforcement-learning-based gaming, and text analytics, are covered.
The book is written for graduate students, researchers, and practitioners. Numerous exercises are available along with a solution manual to aid in classroom teaching. Where possible, an application-centric view is highlighted in order to provide an understanding of the practical uses of each class of techniques.
Fundamentals of Neural Networks: Architectures, Algorithms, and Applications, 1e Paperback - 2004
Refocused, revised and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together. Ideas drawn from neural networks and machine learning are hybridized to perform improved learning tasks beyond the capability of either independently.
- Based on the latest version of MATLAB
- More than 30 graphs in color in the chapter "MATLAB Graphics"
- List of commands at the end of the chapter for quick recapitulation
- Appendices on graphic user interface and control system analysis using the LTI viewer
- Approximately 250 figures and screenshots
- Programming tips to highlight good programming practices
- More than 250 solved examples and approximately 200 end-of-chapter exercises
Neural Network Projects with Python: The ultimate guide to using Python to explore the true power of neural networks through six projects 1st Edition, Kindle Edition
by James Loy
- Discover neural network architectures (like CNN and LSTM) that are driving recent advancements in AI
- Build expert neural networks in Python using popular libraries such as Keras
- Includes projects such as object detection, face identification, sentiment analysis, and more
Neural networks are at the core of recent AI advances, providing some of the best solutions to many real-world problems, including image recognition, medical diagnosis, text analysis, and more. This book goes through basic neural network and deep learning concepts, as well as popular Python libraries for implementing them.
It contains practical demonstrations of neural networks in domains such as fare prediction, image classification, sentiment analysis, and more. For each project, the book provides a problem statement, the specific neural network architecture required to tackle it, the reasoning behind the algorithm used, and the associated Python code to implement the solution. In the process, you will gain hands-on experience with popular Python libraries such as Keras as you build and train your own neural networks from scratch.
By the end of this book, you will have mastered the different neural network architectures and created cutting-edge AI projects in Python that will immediately strengthen your machine learning portfolio.
The Math of Neural Networks Kindle Edition
by Michael Taylor
There are many reasons why neural networks fascinate us and have captivated headlines in recent years. They make web searches better, organize photos, and are even used in speech translation. Heck, they can even generate encryption. At the same time, they are also mysterious and mind-bending: how exactly do they accomplish these things? What goes on inside a neural network?
On a high level, a network learns much as we do, through trial and error. This is true regardless of whether the network is supervised, unsupervised, or semi-supervised. Once we dig a bit deeper, though, we discover that a handful of mathematical functions play a major role in that trial-and-error process. It also becomes clear that a grasp of the underlying mathematics helps clarify how a network learns.
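The trial-and-error idea can be made concrete with a minimal sketch (not taken from the book): a single weight is repeatedly nudged downhill on a squared-error loss using its derivative, which is exactly the kind of mathematical function the description alludes to. The toy problem, learning rate, and target value here are illustrative assumptions.

```python
# Trial-and-error learning in miniature: gradient descent on one weight.

def loss(w, x, y):
    # squared error of a one-weight "network": prediction = w * x
    return (w * x - y) ** 2

def grad(w, x, y):
    # derivative of the loss with respect to w
    return 2 * x * (w * x - y)

w = 0.0                       # initial guess: the first "trial"
x, y = 2.0, 6.0               # one training example; the ideal w is 3
for _ in range(100):
    w -= 0.1 * grad(w, x, y)  # correct the "error" a little each step

print(round(w, 3))  # converges toward 3.0
```

Each iteration is a trial (make a prediction), an error measurement (the loss), and a correction (the gradient step); a full network repeats this over many weights at once.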
In the following chapters, we will unpack the mathematics that drives a neural network. To do this, we will use a feedforward network as our model and follow input as it moves through the network.
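Following an input through a feedforward network, as the chapters propose to do, can be sketched in a few lines of plain Python (an assumed illustration, not the book's code): each layer takes a weighted sum of its inputs, adds a bias, and applies a sigmoid activation. The weights, biases, and layer sizes below are arbitrary toy values.

```python
# Forward pass through a tiny 2-input -> 2-hidden -> 1-output network.
import math

def sigmoid(z):
    # squashes any real number into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # one neuron per (weight row, bias): weighted sum, then activation
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.0]  # the input we follow through the network
hidden = layer(x, weights=[[0.1, 0.4], [-0.3, 0.2]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[0.7, -0.5]], biases=[0.2])
print(output)  # a single activation in (0, 1)
```

The same two operations, matrix multiplication and a nonlinearity, repeat at every layer; the mathematics the book unpacks is largely about how the errors in that final output flow backward to adjust the weights.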