Mean machines? Machine learning models can overcome bias, after all

Written by Darcy Hodge, Editor


Researchers from the Massachusetts Institute of Technology (MIT) have recently collaborated with Harvard University (both MA, USA) and Fujitsu Ltd. (Tokyo, Japan) to assess how machine learning (ML) models can overcome dataset bias. The research has been published in Nature Machine Intelligence. Dataset bias occurs when an ML model misclassifies information based on the data it was initially trained on: if a model is trained on only one type of data, it may fail to interpret new, unfamiliar data correctly. The team prepared training data to alter the ML model's artificial neural network, which is said to mimic human information processing. Varied datasets...
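The mechanism described above can be illustrated with a toy sketch (this is not the paper's actual setup): a model trained on data where a spurious feature correlates perfectly with the label will lean on that cue and misclassify new data once the cue shifts.

```python
import numpy as np

# Hypothetical toy data: the "true" signal is feature 0, but in the
# training set feature 1 is a spurious cue that perfectly tracks the label.
train_X = np.array([
    [0.0, 0.0], [0.1, 0.0], [-0.1, 0.0],   # class 0: feature 1 always 0
    [1.0, 5.0], [0.9, 5.0], [1.1, 5.0],    # class 1: feature 1 always 5
])
train_y = np.array([0, 0, 0, 1, 1, 1])

# A minimal nearest-centroid classifier stands in for the neural network.
centroids = np.array([train_X[train_y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# On data resembling the training set, the model looks fine:
print(predict([0.0, 0.0]))  # 0
print(predict([1.0, 5.0]))  # 1

# But the large spurious feature dominates: a sample whose true signal
# says "class 0" is misclassified once the cue shifts.
print(predict([0.0, 5.0]))  # 1 -- biased prediction
```

Training on varied datasets, as the article suggests, is one way to break such spurious correlations so the model learns the genuine signal instead.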
