Every year we see the same endless product cycles for smartphones. While consumers like seeing phones with new designs, improved cameras and AR/VR capabilities, a silent revolution that has been unfolding since 1959 is now changing the world around us. Machine learning, a field evolved from the study of pattern recognition and computational learning theory in artificial intelligence, has leaped ahead in the last decade. Major improvements to data storage, processing power and accessibility of machine learning tools have served as catalysts of this silent revolution.
When I first looked into the field of machine learning back in 2005, it was a highly theoretical and expensive pursuit. A 120GB hard drive was considered fairly large and costly. The processing power of CPUs available back then meant that even basic training tasks took weeks. Implementing even a relatively simple machine learning algorithm required days of reading scientific papers, research and coding.
Gradual improvements in the following three key areas have finally culminated in a new landscape for machine learning. And this new landscape will be a backdrop to how we run our businesses from now on.
The first key to a successful machine learning project is an ability to collect, store and quickly access large volumes of data. More data means more edge cases and more nuanced and precise models. Based on our findings at AppLovin, data storage costs have gone down tremendously in the last decade, accelerating machine learning efforts.
The introduction of solid-state storage and falling RAM prices are reducing latencies and increasing storage capacities. These advances also mean that we can process more data faster than ever. A terabyte of raw stats may now be considered a fairly manageable amount for learning.
Data storage costs and read/write latencies are rarely emphasized during big consumer electronics announcements -- we tend to take those improvements for granted. However, without the ability to store and analyze petabytes of data, we would not be able to achieve the personalized experiences our users are looking for.
The second key to a successful machine learning (ML) project is an ability to process collected data. The introduction of general-purpose GPUs in 2006 and their continued evolution has unlocked processing power unmatched by regular CPUs. These specialized chips, originally created for 3-D graphics, are invaluable when it comes to scientific computing and machine learning.
Hardware innovation in processing power is accelerating. Apple incorporated its "neural engine" as a part of the new A11 processor to accelerate artificial intelligence software. In August 2017, Microsoft debuted Brainwave, a new system dedicated to "high-speed, low-latency serving of machine learning," meaning machine learning models are easily distributed and can perform better than with a CPU or GPU. And a few months back, Google entered the hardware game with its Pixel Visual Core co-processor, designed from scratch to deliver maximum performance at low power for ML applications.
Improvements to GPUs and co-processors are rarely covered and analyzed by the media, yet these new chips unlock the ability to execute tasks that were reserved for high-power servers on a smartphone. These chips make selfies more expressive, voice assistants smarter and AR/VR more realistic.
Storage would be useless and all the processing power would remain idle without the third key to a successful machine learning project: the software. Programs that execute machine learning algorithms are truly the magic behind modern innovations. The more people have access to these tools, the more fascinating applications we are bound to see.
Google's TensorFlow and TensorFlow Lite and Apple's Core ML, as well as work done by numerous other companies, have made ML and AI learning curves less steep, allowing more developers to leverage these technologies. Improvements in data storage and processing power allow engineers to test their models on their development boxes, which speeds up development times.
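To illustrate how gentle these learning curves have become, the sketch below trains and evaluates a complete model in roughly a dozen lines. It uses scikit-learn as a stand-in for the frameworks named above; the dataset and model choice are illustrative assumptions, not examples from the original article:

```python
# Minimal sketch of how accessible modern ML tooling is: a full
# train/evaluate loop in a few lines. The dataset (iris) and model
# (random forest) are illustrative choices, not from the article.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small built-in dataset and hold out a quarter for testing.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Train an off-the-shelf classifier -- no papers or custom math required.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out data.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Test accuracy: {accuracy:.2f}")
```

A task like this, which in 2005 might have meant days of reading and hand-rolled code, now runs in seconds on an ordinary development box.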
Many teams overlook these newly available tools, focusing instead on improvements to the frameworks they are familiar with. This could be a mistake, since more and more companies are finding benefits in leveraging ML techniques to improve user experience or internal business processes.
Storage, processing power and availability of software tools have improved a lot over the last decade. Quantitative changes have finally led to a major qualitative transformation: ML/AI algorithms that process unprecedented amounts of data on handheld devices are now available to a wide range of industries, and they do not require grand expenditures or highly specialized knowledge.
I believe machine learning will be the biggest technological revolution since the smartphone. ML is more than a one-off feature, and it's not just another clever idea or a hot company. It is bigger than any single industry or application. Today's advances in machine learning are the result of a gradual, unstoppable growth in the key areas of technology over the last half-century. This growth will change every piece of technology and will touch every business and individual in ways we can't yet imagine.