Machine Learning is a subset of Artificial Intelligence that uses statistical methods to enable machines to improve with experience.
The most significant Machine Learning advances of 2017-18:
So much has happened in the world of AI and machine learning that it is hard to fit it all into a single answer. Here is my attempt. Don't expect too many details, but do expect a lot of links to follow up on.
If I have to pick my main highlight of the year, it has to be AlphaGo Zero (paper). Not only does this new approach improve on some of the most promising directions, e.g. deep reinforcement learning, but it also represents a paradigm shift in which such models can learn without human-generated data, relying purely on self-play. We have also learned very recently that the AlphaGo Zero approach generalizes to other games such as chess. You can read more about my interpretation of this advance in my Quora answer.
A recent meta-study found systematic mistakes in how metrics are reported in GAN-related research papers. Despite this, it is undeniable that GANs have continued to present impressive results, especially in their applications to the image space (e.g. Progressive GANs, conditional GANs in pix2pix, or CycleGANs).

NLP is another area that has seen very impressive advances due to Deep Learning this year, in particular translation. Salesforce presented an interesting non-autoregressive approach that translates entire sentences in parallel. Perhaps even more groundbreaking are the unsupervised approaches presented by Facebook and UPV.

Deep Learning is also having a huge impact in an area that hits close to home: recommender systems. However, a recent paper questioned some of these advances by showing that much simpler methods, such as kNN, remain competitive with Deep Learning. It is not a surprise that, as in the case of GAN research, the incredibly fast pace of AI research can also lead to some loss of scientific rigor. Let me also point out that while it is true that many or most AI advances are coming from the Deep Learning field, there is continuous innovation in many other directions in AI and ML.
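To give a sense of just how simple the kNN baselines in that recommender-systems comparison can be: here is a minimal item-based kNN sketch of my own (the toy ratings, the cosine similarity, and the similarity-weighted average are illustrative choices, not taken from the paper in question).

```python
import math

# Toy user-item rating matrix; missing entries mean "unrated".
ratings = {
    "alice": {"A": 5, "B": 3, "C": 4},
    "bob":   {"A": 4, "B": 2, "C": 5, "D": 2},
    "carol": {"B": 4, "C": 2, "D": 5},
    "dave":  {"A": 5, "C": 4, "D": 1},
}

def item_vector(item):
    # Ratings for `item` across all users (0 when unrated).
    return [ratings[u].get(item, 0) for u in ratings]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def predict(user, item, k=2):
    # Similarity-weighted average of the user's own ratings on the
    # k items most similar to the target item.
    sims = sorted(
        ((cosine(item_vector(item), item_vector(other)), r)
         for other, r in ratings[user].items() if other != item),
        reverse=True)[:k]
    num = sum(s * r for s, r in sims)
    den = sum(s for s, _ in sims)
    return num / den if den else 0.0

print(round(predict("alice", "D"), 2))
```

The whole method is a similarity lookup plus a weighted average, which is exactly why it makes such an uncomfortable baseline for far heavier neural approaches.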
Somewhat related to the issues mentioned above, many criticize this lack of rigor and of investment in the theoretical foundations of these methods. Just this week, Ali Rahimi described modern AI as 'alchemy' in his NIPS 2017 Test of Time talk, and Yann LeCun quickly responded, in a debate that is unlikely to be resolved any time soon. You might agree, though, that this year has seen many interesting efforts to advance the foundations of Deep Learning. For example, researchers are trying to understand how deep neural networks generalize. Tishby's Information Bottleneck theory was also debated at length this year as a plausible explanation for some of Deep Learning's properties. Hinton, whose career is being celebrated this year, also keeps questioning foundational issues such as the use of backpropagation. Renowned researchers such as Pedro Domingos took up that challenge and developed Deep Learning methods that use different optimization techniques. A final, and very recent, fundamental change proposed by Hinton is the use of capsules as an alternative to Convolutional Networks.
If we look at the engineering side of AI, the year started with PyTorch picking up steam and becoming a real challenger to TensorFlow, especially in research. TensorFlow quickly reacted by releasing support for dynamic networks in TensorFlow Fold. The 'AI war' between the big players has many other battles, though, with the most heated one happening around the cloud. All the main providers have stepped up and increased their AI support in the cloud. Amazon has presented major innovations in AWS, such as its recent announcement of SageMaker for building and deploying ML models, or the Gluon library, released together with Microsoft. It is also worth mentioning that smaller players are jumping in: Nvidia recently introduced its GPU Cloud, which promises to be another interesting option for training Deep Learning models. Despite all these battles, it is good to see that the industry can come together when necessary. The new ONNX standardization of neural network representations is an important and necessary step toward interoperability.
2017 has also seen continued debate around the social issues of AI. Elon Musk continues to fuel the idea that we are getting closer and closer to killer AIs, to many people's dismay. There has also been a lot of discussion about how AI will affect jobs in the next few years. Finally, much more focus has been put on the transparency and bias of AI algorithms.
Finally, for the past few months I have been working on AI for medicine and healthcare, and I am happy to see that the rate of innovation in less "traditional" domains like healthcare is quickly picking up. AI and ML have been applied to medicine for decades, starting with expert systems and Bayesian systems in the 60s and 70s. However, I often find myself citing papers that are only a few months old. Some of the recent innovations presented this year include the use of deep RL, GANs, or autoencoders to represent patient phenotypes.
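The autoencoder idea behind those phenotype-representation papers is simple at its core: compress each input into a low-dimensional code and train by minimizing reconstruction error, so the code is forced to capture the input's main structure. A toy sketch of my own (the synthetic 2D data and the tiny linear architecture are invented for illustration; real patient-phenotype models use deep nonlinear networks on clinical records):

```python
import random

# Synthetic 2D data lying near the line y = 2x, plus noise.
random.seed(0)
data = [(x, 2 * x + random.gauss(0, 0.1))
        for x in [i / 10 for i in range(-10, 11)]]

# Linear autoencoder: encode 2D -> 1D code -> decode back to 2D.
w_enc = [0.5, 0.5]
w_dec = [0.5, 0.5]
lr = 0.01

def loss():
    # Mean squared reconstruction error over the dataset.
    total = 0.0
    for x0, x1 in data:
        h = w_enc[0] * x0 + w_enc[1] * x1          # 1D code
        r0, r1 = w_dec[0] * h, w_dec[1] * h        # reconstruction
        total += (r0 - x0) ** 2 + (r1 - x1) ** 2
    return total / len(data)

initial = loss()
for _ in range(200):                # SGD over the whole dataset
    for x0, x1 in data:
        h = w_enc[0] * x0 + w_enc[1] * x1
        r0, r1 = w_dec[0] * h, w_dec[1] * h
        e0, e1 = r0 - x0, r1 - x1
        # Gradients of the squared reconstruction error.
        g_h = 2 * e0 * w_dec[0] + 2 * e1 * w_dec[1]
        w_dec[0] -= lr * 2 * e0 * h
        w_dec[1] -= lr * 2 * e1 * h
        w_enc[0] -= lr * g_h * x0
        w_enc[1] -= lr * g_h * x1

final = loss()
print(round(initial, 4), round(final, 4))
```

After training, the 1D code `h` summarizes each 2D point, analogous to how a phenotype model compresses a high-dimensional patient record into a compact representation.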
A lot of recent AI advances have also focused on precision medicine (highly personalized medical diagnosis and treatment) and genomics. For example, David Blei's latest paper addresses causality in neural network models, using Bayesian inference to predict whether an individual has a genetic predisposition to a disease. All the big players are investing in AI for healthcare. Google has several teams, including DeepMind Health, that have presented very interesting advances in AI for medicine, especially in automating medical imaging, as has Fei-Fei Li's group in its collaboration between Google and Stanford. Apple is also finding healthcare applications for the Apple Watch, and Amazon is "secretly" investing in healthcare. It is clear the space is ripe for innovation.