The learning that takes place in artificial intelligence applications uses data that may be incomplete, imprecise, distorted, or mislabeled, thereby tainting the interpretations made by the AI algorithm. The goal of this project is to benchmark the impact of such label noise on at least four datasets, including a clinical dataset, and then to test and evaluate a method for reducing its distortionary impact.
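Such benchmarks typically start by injecting controlled label noise into an otherwise clean dataset. The sketch below shows one common setup, symmetric (uniform) label noise, where a chosen fraction of labels is flipped to a different class at random; the function name and parameters are illustrative, not part of this project's codebase.

```python
import random

def inject_label_noise(labels, noise_rate, num_classes, seed=0):
    """Flip a fraction `noise_rate` of labels to a different random class
    (symmetric label noise, a common benchmarking setup)."""
    rng = random.Random(seed)
    noisy = list(labels)
    n_flip = int(round(noise_rate * len(noisy)))
    for i in rng.sample(range(len(noisy)), n_flip):
        # Pick any class other than the current (clean) one.
        choices = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(choices)
    return noisy

clean = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
noisy = inject_label_noise(clean, noise_rate=0.3, num_classes=3)
print(sum(a != b for a, b in zip(clean, noisy)))  # → 3 labels flipped
```

Training the same model on the clean and noisy versions of each dataset, and comparing test accuracy across noise rates, then quantifies the distortionary impact that the proposed mitigation method is meant to reduce.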