Model Bias

Bias refers to systematic tendencies in an AI model's outputs that reflect imbalances or prejudices present in its training data.

Detailed Explanation:

  • Sources: Data imbalances, societal biases, or over-representation of certain perspectives in the training data.

  • Consequences: Outputs that favor or exclude certain groups, cultures, or viewpoints.

  • Mitigation: Careful curation of training data and the use of fairness-aware algorithms; one common data-level approach, sample reweighting, is sketched below.

Example: An AI trained predominantly on English data may perform poorly or produce biased responses in other languages.
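
The following is a minimal sketch of one mitigation step mentioned above: inspecting how groups are represented in the training data and computing inverse-frequency sample weights so under-represented groups count more during training. The language labels and counts are hypothetical, chosen only to illustrate the idea.

```python
# Minimal sketch of a data-curation step: inspect group frequencies and
# compute inverse-frequency sample weights. The languages and counts below
# are hypothetical, not taken from any real training corpus.
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Return a per-example weight that up-weights under-represented groups."""
    counts = Counter(group_labels)
    total = len(group_labels)
    num_groups = len(counts)
    # Weight each example by total / (num_groups * group_count) so that
    # every group contributes roughly equally to the weighted total.
    return [total / (num_groups * counts[g]) for g in group_labels]

if __name__ == "__main__":
    # Hypothetical per-example language labels in a small training sample.
    languages = ["en"] * 90 + ["es"] * 7 + ["sw"] * 3
    weights = inverse_frequency_weights(languages)
    print(Counter(languages))  # raw imbalance: English dominates
    # Under-represented languages receive larger weights.
    print({lang: round(w, 2) for lang, w in zip(languages, weights)})
```

Reweighting addresses only data imbalance; societal biases embedded in the content itself still require careful curation and evaluation of model outputs.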
