How Are Machine Learning Algorithms Different From Traditional Algorithms?
- ML algorithms do not depend on rules defined by human experts. Instead, they process data in raw form - for example, text, emails, documents, social media content, images, voice, and video.
- An ML system is truly a learning system if it is not programmed to perform a task, but is programmed to learn to perform the task.
- ML is also more prediction-oriented, whereas statistical modeling is generally interpretation-oriented. This is not a hard-and-fast distinction, especially as the disciplines converge, but in my experience most historical differences between the two schools of thought fall out of this distinction.
- In classical algorithms, statisticians place more emphasis on p-values and on a solid but comprehensible model.
- Most ML models are uninterpretable, and for this reason they are usually unsuitable when the purpose is to understand relationships or causality. They work well mostly where one only needs predictions.
- Traditional learning methodologies, such as training a model on historical training data and evaluating the resulting model against incoming data, are not feasible when the environment is constantly changing.
- Traditional ML approaches are in most cases too expensive in web-scale environments, and their results are too static to cope with dynamically changing service environments.
- Spending a lot of computational power on learning a very complex model of a highly dynamic network environment is not cost-effective.
- Gradually, "statistical modeling" will move towards "statistical learning", adopting its best parts and creating tools for better interpreting the models in the process, as Pekka Kohonen, assistant professor at the Karolinska Institutet, pointed out.
- One of the key differences is that classical approaches have a more rigorous mathematical foundation, while machine learning algorithms are more data-intensive.
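The "programmed to perform vs. programmed to learn" contrast above can be sketched in a few lines of plain Python. The rule-based filter below encodes expert knowledge directly as fixed rules, while the learned filter derives its keyword weights from labelled examples via a simple perceptron. The trigger words, training examples, and labels are all illustrative assumptions, not a real dataset or a specific library's API.

```python
def rule_based_spam(text):
    # Traditional approach: an expert hand-writes the decision rule.
    return any(word in text.lower() for word in ("winner", "free", "prize"))

def train_spam_filter(examples, epochs=20, lr=1.0):
    # ML approach: a perceptron over a bag-of-words learns its own
    # keyword weights from labelled data instead of hand-coded rules.
    weights = {}
    bias = 0.0
    for _ in range(epochs):
        for text, is_spam in examples:
            words = text.lower().split()
            score = bias + sum(weights.get(w, 0.0) for w in words)
            predicted = score > 0
            if predicted != is_spam:  # update weights only on mistakes
                delta = lr if is_spam else -lr
                bias += delta
                for w in words:
                    weights[w] = weights.get(w, 0.0) + delta
    return weights, bias

def predict(model, text):
    weights, bias = model
    return bias + sum(weights.get(w, 0.0) for w in text.lower().split()) > 0

# Tiny illustrative training set (hypothetical labelled messages).
examples = [
    ("claim your free prize now", True),
    ("you are a winner act fast", True),
    ("meeting notes attached", False),
    ("lunch at noon tomorrow", False),
]
model = train_spam_filter(examples)
print(predict(model, "free prize inside"))   # True  - learned from data
print(predict(model, "see you at lunch"))    # False
```

Note how the learned filter's behavior comes entirely from the training data: retraining on different examples changes the decision boundary with no code changes, whereas the rule-based filter must be re-edited by a human expert.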