Marzyeh Ghassemi, 2022 principal investigator for AI and health at the MIT Jameel Clinic, co-authored a research article titled 'Judging facts, judging norms: Training machine learning models to judge humans requires a modified approach to labeling data,' published in Science Advances.
The study compared machine learning models trained on the same task, determining whether a dress code violation had occurred, using two different methods of data annotation: descriptive labels, which record whether the factual indicators listed in the dress code are present, and normative labels, which record past human judgments of whether a violation actually occurred. The researchers found that models trained on descriptive indicators register more violations. When models instead learn from human judgments, the flexibility people exercise in borderline cases becomes part of what the model learns; descriptive labeling leaves no room for that ambiguity, so every case matching the indicators is flagged.
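To make the distinction concrete, here is a minimal, hypothetical sketch, not the authors' code or data: two classifiers are trained on identical inputs, one with descriptive labels that mechanically flag every case matching the indicators, and one with normative labels in which simulated judges excuse most borderline cases. The feature names, thresholds, and leniency rate are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Invented stand-ins for dress-code indicators, e.g.
# [shorts_length, sleeve_coverage, logo_size], scaled to [0, 1].
X = rng.uniform(0, 1, size=(n, 3))

# Descriptive labels: a violation is recorded whenever an indicator
# crosses its threshold, with no room for leniency.
y_descriptive = ((X[:, 0] > 0.6) | (X[:, 1] < 0.3)).astype(int)

# Normative labels: simulated judges apply the same rule but excuse
# most borderline cases, so fewer examples are marked as violations.
borderline = ((X[:, 0] > 0.6) & (X[:, 0] < 0.75)) | (
    (X[:, 1] > 0.2) & (X[:, 1] < 0.3)
)
excused = borderline & (rng.uniform(0, 1, size=n) < 0.7)  # ~70% leniency
y_normative = (y_descriptive.astype(bool) & ~excused).astype(int)

model_descriptive = LogisticRegression().fit(X, y_descriptive)
model_normative = LogisticRegression().fit(X, y_normative)

# Score unseen cases: the descriptively trained model should flag
# more violations than the normatively trained one.
X_new = rng.uniform(0, 1, size=(n, 3))
print("descriptive model flags:", int(model_descriptive.predict(X_new).sum()))
print("normative model flags:  ", int(model_normative.predict(X_new).sum()))
```

Run on fresh inputs, the descriptively trained model should flag noticeably more violations than the normatively trained one, mirroring the direction of the gap the study reports.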
“This is an important warning for a field where datasets are often used without close examination of labeling practices, and [it] underscores the need for caution in automated decision systems—particularly in contexts where compliance with societal rules is essential,” says Ghassemi.
Decision-making algorithms are already present in our everyday lives. Algorithms determine what appears in our news and entertainment feeds and how tasks are assigned to gig workers. They also decide who gets hired for a job or admitted to a college from a pool of candidates, how bail is set, and which patients receive priority medical care. As these algorithms become ever more embedded in our daily lives, with significant consequences attached to their decisions, data labeling joins a mounting list of ethical questions around model training, bias, and the perpetuation of practices that contribute to inequality.