Let's look at the learning part of machine learning, leaving the machine part aside. Where do machines learn from? From generative adversarial networks (GANs), from people, and often from each other. But trace it back far enough and even GANs learn from data supplied by humans. Here, following Tom Merritt, are the ways human error can create problems in machine learning.
The square peg bias: The first is the square peg bias, where you have no choice of dataset and end up using the wrong one. For example, you want to predict sportswear purchases for an online store, but the only data you have is what people bought at brick-and-mortar shops.
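One way to spot a square peg before training is to compare how categories are distributed in the data you have against a small sample from the domain you actually care about. This is a minimal sketch with made-up purchase counts; the category names and numbers are illustrative only.

```python
# Sketch: detect a dataset/target mismatch ("square peg" bias) by comparing
# category frequencies in the data you have (in-store purchases) against a
# sample from the domain you care about (online purchases). All data invented.
from collections import Counter

def distribution(purchases):
    """Normalize a list of purchased categories into frequencies."""
    counts = Counter(purchases)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def total_variation(p, q):
    """Total variation distance between two distributions (0 = identical, 1 = disjoint)."""
    cats = set(p) | set(q)
    return 0.5 * sum(abs(p.get(c, 0) - q.get(c, 0)) for c in cats)

in_store = ["shoes"] * 60 + ["jerseys"] * 30 + ["equipment"] * 10
online   = ["shoes"] * 30 + ["jerseys"] * 20 + ["equipment"] * 50

gap = total_variation(distribution(in_store), distribution(online))
# A large gap warns that a model trained on the in-store data
# may not transfer to the online store.
print(f"distribution gap: {gap:.2f}")
```

A gap near zero suggests the proxy dataset may be usable; a large one is a warning that you are forcing a square peg into a round hole.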
Sampling bias: The next is sampling bias, where the data chosen doesn't represent the environment the model will operate in. Ideally you select a large, representative subset of the data, and you have to watch for human biases in making that selection. In facial recognition, it can be as simple as forgetting to include nighttime images in the training set.
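A simple audit for this kind of gap is to compare the share of each capture condition in the training set against its expected share at deployment. This is a hedged sketch; the counts, the 50/50 deployment split, and the `coverage_gaps` helper are all assumptions for illustration.

```python
# Sketch: audit a training set for sampling bias by comparing the share of
# each capture condition against its assumed share in deployment.
def coverage_gaps(sample_counts, target_shares, tolerance=0.05):
    """Return conditions whose share in the sample falls short of the
    expected deployment share by more than `tolerance`."""
    total = sum(sample_counts.values())
    gaps = {}
    for condition, expected in target_shares.items():
        actual = sample_counts.get(condition, 0) / total
        if expected - actual > tolerance:
            gaps[condition] = (actual, expected)
    return gaps

# Faces collected for training, by lighting condition (illustrative numbers).
training = {"daytime": 950, "nighttime": 50}
# Assume roughly half of real-world captures happen after dark.
deployment = {"daytime": 0.5, "nighttime": 0.5}

# Nighttime is badly underrepresented, so the model will likely fail after dark.
print(coverage_gaps(training, deployment))
```

Running a check like this before training turns "we forgot nighttime images" from a production surprise into a one-line warning.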
Bias-variance trade-off: The third way bias creeps in is by overcorrecting for variance. If a model is too sensitive, small fluctuations in the data cause it to chase noise; but add too much bias to correct for that, and it misses real complexity. The goal is to balance the two.
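The trade-off can be sketched with k-nearest-neighbour regression, where k is the dial: k=1 chases every noisy point (high variance), k equal to the whole dataset averages everything into one flat prediction (high bias), and a moderate k balances the two. The signal, noise level, and k values here are all illustrative assumptions.

```python
# Sketch of the bias-variance trade-off using k-nearest-neighbour regression.
# k=1 memorizes noise (high variance); k=100 (all points) ignores the signal
# (high bias); a moderate k should achieve the lowest test error.
import random

random.seed(0)

def knn_predict(train, x, k):
    """Average the y-values of the k training points nearest to x."""
    nearest = sorted(train, key=lambda pt: abs(pt[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def mse(train, test, k):
    """Mean squared prediction error on a held-out test set."""
    return sum((knn_predict(train, x, k) - y) ** 2 for x, y in test) / len(test)

def sample(n):
    """Noisy observations of the true signal y = x^2."""
    return [(x, x * x + random.gauss(0, 0.5))
            for x in (random.uniform(-2, 2) for _ in range(n))]

train, test = sample(100), sample(200)
errors = {k: mse(train, test, k) for k in (1, 10, 100)}
for k, e in sorted(errors.items()):
    print(f"k={k:3d}  test MSE={e:.2f}")
```

The moderate setting should produce the lowest held-out error, with the two extremes illustrating too much variance and too much bias respectively.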
Measurement bias: This occurs when bias is built into the device you use to collect the data, like a scale that consistently overestimates weight. The data looks internally sound, so the error won't be caught by statistical correction alone. You can guard against it by collecting data with several measuring devices, or by calibrating against a known reference.
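The scale example can be simulated: averaging many readings from the same device shrinks random noise but never removes the built-in offset, whereas weighing a known reference mass exposes the offset so it can be subtracted. The offset size, noise level, and reference mass below are invented for illustration.

```python
# Sketch: a systematically mis-calibrated scale. Statistics on the biased
# readings alone cannot reveal the offset; an external reference mass can.
import random
import statistics

random.seed(1)

OFFSET = 0.8  # the scale consistently over-reads by 0.8 kg (unknown to the user)

def scale_reading(true_mass):
    """One reading: true mass plus the systematic offset plus random noise."""
    return true_mass + OFFSET + random.gauss(0, 0.05)

# Step 1: weigh a certified 10.0 kg reference many times to estimate the offset.
reference_mass = 10.0
estimated_offset = statistics.mean(
    scale_reading(reference_mass) for _ in range(200)
) - reference_mass

# Step 2: correct subsequent measurements with the estimated offset.
raw = statistics.mean(scale_reading(7.0) for _ in range(200))
corrected = raw - estimated_offset
print(f"raw mean: {raw:.2f} kg, corrected: {corrected:.2f} kg")
```

The raw average stays stuck near 7.8 kg no matter how many readings you take; only the external reference recovers the true 7.0 kg.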
Stereotype bias: Say a machine learning algorithm is trained to recognize people at a workplace. The training may sound purely mathematical, but social stereotypes can end up in the data without your involvement. If you want stronger machine learning, those stereotypes need to be corrected for.
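One concrete first step is to audit the training labels before the model ever sees them: check whether a positive outcome co-occurs unevenly with a sensitive attribute. This is a minimal sketch; the attribute, label, and counts are hypothetical.

```python
# Sketch: audit training labels for stereotype bias by checking whether a
# positive label (here "promoted") is distributed unevenly across a sensitive
# attribute (here "gender"). All records are invented for illustration.
def positive_rate_by_group(records, group_key, label_key):
    """Fraction of positive labels within each group."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if r[label_key] else 0)
    return {g: positives[g] / totals[g] for g in totals}

data = (
    [{"gender": "male", "promoted": True}] * 40
    + [{"gender": "male", "promoted": False}] * 60
    + [{"gender": "female", "promoted": True}] * 20
    + [{"gender": "female", "promoted": False}] * 80
)

rates = positive_rate_by_group(data, "gender", "promoted")
# A large gap between groups is a red flag worth investigating before training.
print(rates)
```

A model trained on labels like these will happily reproduce the gap, so the audit belongs before training, not after deployment.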