Start-up companies claim that new machine learning programmes similar to ChatGPT could complete doctors' paperwork for them. However, some experts are concerned that inherent bias and a tendency to fabricate facts could result in errors. Among them is Marzyeh Ghassemi, principal investigator at the MIT Jameel Clinic, MIT's hub for AI and healthcare, who worries that a rush to incorporate the latest AI technology into medicine could lead to biased outcomes that harm patients. "When you take state-of-the-art machine learning methods and systems and then evaluate them on different patient groups, they do not perform equally," Ghassemi said in an interview with NPR. "That's because these systems are trained on vast amounts of data generated by humans. And whether that data is from the internet or a medical study, it contains all the human biases that already exist in our society."