Current-generation machine learning systems are still very far away from anything that could legitimately be called artificial intelligence. The systems we have right now are phenomenal at pattern recognition from lots of data (even reinforcement learning systems are mostly about memorizing and recognizing patterns that worked well during training). This is certainly a necessary step, but it is very far from an intelligent system. In analogy to human cognition, what we have now resembles the subconscious processes that allow split-second activation of your sympathetic nervous system when your peripheral vision detects a predator approaching, or a former significant other turning a corner. In other words: pattern-based, semiautomatic decisions that our brain does in hardware.
Traditional programming is not going away. There is a recurring claim that most traditional programming will be replaced by machine learning systems. Machine learning excels where you have to deal with the messiness of the real world, lots of data is available, and no reasonably fundamental model exists. If any of these conditions doesn't hold, you generally don't have to settle for a machine learning model; the traditional alternatives are superior. A fun way to think about it: humans are the most advanced natural general intelligence around, and yet we invented computers to do certain specialized tasks we were too slow at. Why should we expect artificial intelligence to be any different?
Machine learning will augment most traditional tasks. That being said, any time one of these traditional systems needs to interact with a human, there is an opportunity for machine-learning-based augmentation. For example, a programmer confronted with error messages could use a machine learning system that looks at those messages and suggests a course of action. Computers are a lot more patient than humans: you can't make a human watch millions of hours of programming sessions to remember the common resolutions to problems, but you can make a machine learning system do exactly that. In addition, machine learning systems can learn from massive amounts of data very rapidly and on a global scale (if any instance of the machine learning system has ever encountered a particular set of circumstances, it can almost instantly share that knowledge globally, without being limited to the speed of human communication).
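As a loose sketch of the idea (the "remembered" resolutions below are invented for illustration), suggesting a fix can be as simple as fuzzy-matching a fresh error message against resolutions collected from past sessions:

```python
import difflib

# Hypothetical memory of past error -> resolution pairs; in a real system
# this would be distilled from millions of observed programming sessions.
known_fixes = {
    "ModuleNotFoundError: No module named 'numpy'": "pip install numpy",
    "IndentationError: unexpected indent": "check for mixed tabs and spaces",
    "RecursionError: maximum recursion depth exceeded": "add a base case",
}

def suggest(error_msg, cutoff=0.6):
    # fuzzy-match the new message against previously seen ones
    match = difflib.get_close_matches(error_msg, known_fixes, n=1, cutoff=cutoff)
    return known_fixes[match[0]] if match else None

print(suggest("IndentationError: unexpected  indent"))  # small variations still match
```

A production system would of course learn the matching itself rather than rely on string similarity, but the lookup-and-share structure is the same.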
We'll get better at learning from small or noisy data. At the moment, machine learning systems mostly require large amounts of relatively clean, curated data. There are various promising approaches to relaxing this requirement in one direction or the other, or in combination (e.g. using a small, well-curated data set to get started, supplemented by a larger corpus of noisier data).
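One such combination, sketched very loosely here with a toy nearest-centroid "model" (the data and setup are invented for illustration), is self-training: start from the small curated set, pseudo-label the large uncurated pool, and refit on both:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_centroids(x, y):
    # nearest-centroid "model": one mean per class
    return np.array([x[y == c].mean() for c in (0, 1)])

def predict(centroids, x):
    return np.abs(x[:, None] - centroids[None, :]).argmin(axis=1)

# tiny curated set: class 0 near -2, class 1 near +2
x_small = np.array([-2.1, -1.9, 1.8, 2.2])
y_small = np.array([0, 0, 1, 1])

# large unlabeled pool drawn from the same two clusters
x_pool = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])

centroids = fit_centroids(x_small, y_small)   # step 1: curated data only
pseudo = predict(centroids, x_pool)           # step 2: pseudo-label the pool
centroids = fit_centroids(                    # step 3: refit on everything
    np.concatenate([x_small, x_pool]),
    np.concatenate([y_small, pseudo]))
```

The curated examples anchor the class identities; the noisy pool then sharpens the estimates far beyond what four labeled points could support.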
None of these approaches is expected to be perfect, but they should substantially reduce the amount of curated data required.
We'll see ML systems combined with traditional approaches. At the moment it is fairly common to see ML systems built entirely out of neural networks, trying to solve problems end to end. This often works OK, because you can recover many traditional signal processing techniques from these building blocks (e.g. Fourier transforms, edge detection, segmentation, etc.), but the learned versions of these transforms can be significantly more computationally expensive than the underlying classical approach.
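To make the "recovering classical primitives" point concrete: a fixed Sobel filter is a textbook edge detector, and the sliding-window cross-correlation below is exactly the operation a convolutional layer computes; a trained network just has to spend parameters and compute rediscovering weights like these.

```python
import numpy as np

# classic Sobel kernel for horizontal gradients (vertical edges)
sobel_x = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])

def conv2d_valid(img, kernel):
    # naive 'valid' cross-correlation, as a conv layer computes it
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.zeros((5, 6))
img[:, 3:] = 1.0          # vertical step edge between columns 2 and 3
resp = conv2d_valid(img, sobel_x)
print(resp[0])            # response peaks at the edge: [0. 4. 4. 0.]
```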
Nobody would be surprised to see these primitives making a bit of a comeback (as part of a neural network architecture). Similarly, people are excited about physics-based ML approaches, where you combine a neural network with knowledge of an underlying physical model (e.g. the differential equations governing a certain process) in order to outperform both pure ML approaches and approaches relying solely on the physical model (i.e. simulations).
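A minimal sketch of the physics-informed idea, under assumed toy conditions (a known decay rate k, and a cubic polynomial standing in for the neural network): fit noisy observations while penalizing violations of the governing ODE dy/dt = -k*y. Because both terms happen to be linear in the coefficients here, the combined objective reduces to one stacked least-squares problem:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 15)
k = 1.5                                          # decay rate from the physics
y_obs = 3.0 * np.exp(-k * t) + rng.normal(0, 0.1, t.size)

# cubic polynomial model y(t) = Phi @ c, and its time derivative
Phi = np.column_stack([np.ones_like(t), t, t**2, t**3])
dPhi = np.column_stack([np.zeros_like(t), np.ones_like(t), 2 * t, 3 * t**2])

def fit(lam):
    # stack the data-fit rows with the physics-residual rows (dy/dt + k*y = 0)
    A = np.vstack([Phi, np.sqrt(lam) * (dPhi + k * Phi)])
    b = np.concatenate([y_obs, np.zeros(t.size)])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

c_plain = fit(0.0)    # pure data fit
c_phys = fit(10.0)    # data fit plus physics penalty

def ode_residual(c):
    # how badly the fitted curve violates the differential equation
    return np.mean((dPhi @ c + k * (Phi @ c)) ** 2)
```

The physics term acts as a structured regularizer: the penalized fit is guaranteed to violate the ODE no more than the plain fit, which is what lets such hybrids get away with less data than a pure ML approach.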