Researchers from MIT and other institutions have found that machine-learning models designed to mimic human decision-making often make different, and sometimes harsher, judgments than humans because they are trained on the wrong type of data. The right data for training these models is normative data, labeled by humans who were explicitly asked whether items violate a given rule. Most models, however, are trained on descriptive data, in which humans simply identify factual features. This mismatch can lead a model to flag rule violations more often than human judges would, which can have serious real-world implications.
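The gap between the two label types can be illustrated with a minimal sketch. Everything below is invented for demonstration: the items, the label values, and the `violation_rate` helper are all hypothetical, showing only how the same items can be flagged at different rates depending on which question annotators were asked.

```python
# Hypothetical illustration of the descriptive-vs-normative labeling gap.
# All data here is invented; it is not from the study described above.

# Each item records a factual feature plus two human labels:
#  - descriptive_label: "is the feature present?" (factual question)
#  - normative_label:   "does this item violate the rule?" (judgment question)
# Annotators asked the factual question tend to flag more items than
# annotators asked to make a rule-violation judgment.
items = [
    {"feature_present": True,  "descriptive_label": 1, "normative_label": 1},
    {"feature_present": True,  "descriptive_label": 1, "normative_label": 0},
    {"feature_present": False, "descriptive_label": 0, "normative_label": 0},
    {"feature_present": True,  "descriptive_label": 1, "normative_label": 0},
]

def violation_rate(items, label_key):
    """Fraction of items flagged under the given label type."""
    return sum(item[label_key] for item in items) / len(items)

descriptive_rate = violation_rate(items, "descriptive_label")  # 0.75
normative_rate = violation_rate(items, "normative_label")      # 0.25

# A model fit to the descriptive labels inherits the higher flag rate,
# so it predicts "violation" more often than human normative judgment would.
print(descriptive_rate, normative_rate)
```

The point of the sketch is that the two label sets disagree even though they describe the same items, so a classifier trained on one cannot be expected to reproduce judgments made under the other.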