Bias
What does bias mean and what impact does it have on the use of AI? How can bias be reduced? Find out more here.
What does bias mean?
In AI, bias refers to a systematic distortion in a model's results. Bias can arise when the training data already contains prejudices or unequal distributions, or when the learning algorithm itself develops certain preferences. The result is distorted and potentially discriminatory output, for example when a face recognition model recognizes people with a certain skin color less reliably because they were underrepresented in the training data. Bias is undesirable because it impairs the accuracy and fairness of AI systems.
Types of bias and causes
AI bias can occur at different levels, for example in the training data or in the learning algorithm itself.
Machine learning models often silently absorb the biases inherent in their training data. A classic example is an applicant selection tool that discriminated against women because the historical recruitment data was male-dominated; the algorithm implicitly learned this bias.
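How such an imbalance enters a model can be made visible before training by checking group representation in the data set. The following is a minimal sketch with hypothetical, simplified applicant records; the field name "gender" and the data are illustrative assumptions, not from a real system:

```python
from collections import Counter

def representation_report(samples, group_key):
    """Report each group's share of a dataset to surface imbalances."""
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical training records for an applicant-selection model.
training_data = [
    {"gender": "male"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "female"},
]

print(representation_report(training_data, "gender"))
# → {'male': 0.8, 'female': 0.2}
```

A model trained on data skewed like this can implicitly learn the majority group's patterns, which is exactly the effect described above.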
Strategies for reduction
Unaddressed bias can reinforce unfair decisions and harm certain groups (e.g. in lending, personnel selection or the judiciary) [1]. In addition, users' trust in AI systems suffers if the systems are perceived as unfair.
Several approaches are used to reduce bias: careful data pre-processing (e.g. balancing out imbalances in the data set), fairness metrics to check the model results, and iterative tests with various user groups. Technical approaches such as adversarial debiasing are also used to minimize bias.
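One widely used fairness metric is the demographic parity difference: the gap between the selection rates of different groups, where 0 means the model treats the groups equally often. A minimal sketch, using hypothetical decisions and group labels:

```python
def selection_rate(decisions, groups, target_group):
    """Share of positive decisions (1) within one group."""
    in_group = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(in_group) / len(in_group)

def demographic_parity_difference(decisions, groups):
    """Largest gap in selection rates across groups; 0 means parity."""
    rates = [selection_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = selected) for two groups "a" and "b".
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_difference(decisions, groups))
# → 0.5  (group "a" selected at 0.75, group "b" at 0.25)
```

A large difference like this would flag the model for closer inspection; demographic parity is only one of several fairness definitions, and which metric is appropriate depends on the application.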
Finally, transparency is important: explainable AI can make clear why a model generates certain outputs, which helps to detect and eliminate hidden bias.
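A simple explainability technique is sensitivity analysis: perturb one input at a time and observe how much the output changes. The sketch below uses a purely hypothetical scoring function (the feature names, weights, and the idea that "zip_code" dominates are illustrative assumptions) to show how such an analysis can reveal that a model leans on a problematic proxy feature:

```python
def toy_credit_model(features):
    """Hypothetical scoring function standing in for a trained model.
    It (problematically) weights 'zip_code' more heavily than 'income'."""
    return 0.3 * features["income"] + 0.7 * features["zip_code"]

def sensitivity(model, features, name, delta=1.0):
    """Change in model output when one input is increased by delta."""
    perturbed = dict(features)
    perturbed[name] += delta
    return model(perturbed) - model(features)

applicant = {"income": 3.0, "zip_code": 7.0}
for name in applicant:
    print(name, round(sensitivity(toy_credit_model, applicant, name), 2))
# The high sensitivity to 'zip_code' exposes a potential hidden bias,
# since zip codes can act as a proxy for protected attributes.
```

Real explainable-AI tooling uses more robust methods than this one-feature perturbation, but the principle is the same: making a model's reliance on individual inputs visible is what allows hidden bias to be found and removed.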