Challenging negative examples that are similar to the target but still incorrect, used during training to make the model learn more nuanced distinctions.
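
One common way to obtain such examples is to mine them by similarity: among all incorrect candidates, keep those whose embeddings are closest to the anchor. The sketch below illustrates this idea; the function names and the cosine-based ranking are illustrative assumptions, not taken from the source:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hard_negatives(anchor, candidates, k=2):
    """Return the k incorrect candidates most similar to the anchor.

    These are 'hard' negatives: they resemble the anchor closely
    (high cosine similarity) yet are known to be wrong matches,
    so training against them forces finer-grained distinctions.
    """
    ranked = sorted(candidates, key=lambda c: cosine(anchor, c), reverse=True)
    return ranked[:k]

anchor = [1.0, 0.0, 0.0]
negatives = [
    [0.9, 0.1, 0.0],   # very similar to the anchor -> hard negative
    [0.0, 1.0, 0.0],   # orthogonal to the anchor  -> easy negative
    [0.8, 0.0, 0.2],   # fairly similar            -> hard negative
]
hard = hard_negatives(anchor, negatives, k=2)
```

In this toy run, the two vectors that nearly point along the anchor are selected, while the orthogonal (easy) negative is discarded; in practice the same ranking would be applied over a large candidate pool, often refreshed as the model's embeddings change during training.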