A carefully-curated list of 5 free ebooks to help you better understand the various aspects of machine learning, as well as the skills necessary for a career in the field.
Note that, while there are numerous machine learning ebooks available for free online, including many which are very well-known, I have opted to move past these "regulars" and seek out lesser-known and more niche options for readers.
Interested in a career in machine learning? Don't know where to start? Well, there's always here, a collection of tutorials on pursuing machine learning in the Python ecosystem. If you are looking for something more, you could look here for an overview of MOOCs and freely-available university lectures.
Of course, nothing substitutes rigorous formal education, but let's say that isn't in the cards for whatever reason. Not all machine learning positions require a PhD; it really depends where on the machine learning spectrum one wants to fit in. Check out this motivating and inspirational post, the author of which went from little understanding of machine learning to actively and effectively utilizing techniques in their job within a year.
Looking to strike a balance between what you would learn in an introductory graduate school machine learning regimen and what you can get from online tutorials? As they have been for hundreds of years, books are a great place to turn. :) Of course, today we have instant access to freely-available digital books, which makes this a very attractive alternative. Have a look at the following free ebooks, all of which are appropriate for an introductory level of understanding, but which also cover a variety of different concepts and material.
Nils J. Nilsson of Stanford put these notes together in the mid-1990s. Before you turn up your nose at the thought of learning from something from the 90s, remember that foundation is foundation, regardless of when it was written about.
Sure, many important advancements have been made in machine learning since these notes were put together, as Nilsson himself says, but they cover much of what is still considered relevant elementary material in a straightforward and focused manner. There are no diversions related to advancements of the past few decades, which authors often want to cover tangentially even in introductory texts. There is, however, a lot of information about statistical learning, learning theory, classification, and a variety of algorithms to whet your appetite. At under 200 pages, the notes can be read rather quickly.
This machine learning book by Shai Shalev-Shwartz and Shai Ben-David is newer, longer, and more advanced than the previous offering, but it is also a logical next step. It delves deeper into more algorithms and their descriptions, and provides a bridge toward practicality as well. Its focus on theory should be a clue to newcomers of how important theory is to really understanding what powers machine learning algorithms. The Advanced Theory section covers some concepts which may be beyond the scope or desire of a newcomer, but the option exists to have a look.
This introductory text on Bayesian machine learning is one of the most well-known on the topic as far as I am aware, and happens to have a free online version available. An Amazon review from Arindam Banerjee of the University of Minnesota has this to say:
The book has wide coverage of probabilistic machine learning, including discrete graphical models, Markov decision processes, latent variable models, Gaussian process, stochastic and deterministic inference, among others. The material is excellent for advanced undergraduate or introductory graduate course in graphical models, or probabilistic machine learning. The exposition throughout the book uses numerous diagrams and examples, and the book comes with an extensive software toolbox...
It should be noted that the toolbox being referred to is implemented in MATLAB, which is no longer the default machine learning implementation language, at least not generally. The toolbox is not the book's only virtue, however.
This provides a great jumping off point for those interested in probabilistic machine learning.
This is the soon-to-be-released-in-print deep learning book by Goodfellow, Bengio and Courville, which has a freely-available final draft copy on its official website.
The following 2 excerpts are from the book's website, one providing an overview of its contents, the other putting almost everyone interested in reading the book at ease:
The Deep Learning textbook is a resource intended to help students and practitioners enter the field of machine learning in general and deep learning in particular. The online version of the book is now complete and will remain available online for free. The print version will be available for sale soon.
One of these target audiences is university students (undergraduate or graduate) learning about machine learning, including those who are beginning a career in deep learning and artificial intelligence research. The other target audience is software engineers who do not have a machine learning or statistics background, but want to rapidly acquire one and begin using deep learning in their product or platform.
You would be hard-pressed to find a better resource from which to learn all about deep learning.
Sutton and Barto's authoritative classic is getting a makeover. This is a link to the second draft, which is currently in progress (and freely-available while it is).
Reinforcement learning is of incredible research interest these days, and for good reason. Given its recent high-profile success as part of AlphaGo, its potential in self-driving cars and similar systems, and its marriage with deep learning, reinforcement learning, which will undoubtedly play a major role in any form of "General AI" (or anything resembling it), is not going anywhere. Indeed, these are all reasons that a second draft of this book is in the works.
You can get a sense of the importance of this book in the field of reinforcement learning given that it is referred to simply as "Sutton and Barto." This Amazon review from David Tan sums the book up nicely (and allays any fears related to "is it too complex for me to understand?"):
The book starts with examples and intuitive introduction and definition of reinforcement learning. It follows with 3 chapters on the 3 fundamental approaches to reinforcement learning: Dynamic programming, Monte Carlo and Temporal Difference methods. Subsequent chapters build on these methods to generalize to a whole spectrum of solutions and algorithms.
The book is very readable by average computer students. Possibly the only difficult one is chapter 8, which deals with some neural network concepts.
Do keep in mind that the review above refers to the first edition; it should generalize to the second, however.