Developments in AI-enabled healthcare hold significant implications for the future of cancer care, but the promise of advanced early detection and response raises ethical and practical questions ahead of the widespread deployment of these technologies. Regina Barzilay, AI faculty lead at the MIT Jameel Clinic, responds to concerns within the healthcare community and the broader public about diagnostic efficacy and biased algorithms.
Barzilay explains that while clinicians should still rely on their own judgement, AI can recognise subtle patterns and variances in images better than humans, and AI-powered machines can provide detailed information about an individual's future cancer risk, which she adds could open up different pathways for care.
With regard to training algorithms equitably, Barzilay suggests one solution: building AI systems that raise an alert when an image or data point falls outside the scope of their training data, flagging possible inaccuracies and guarding against bias introduced by unrepresentative data.
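To make the idea concrete, the sketch below shows one simple way such an out-of-scope alert could work, assuming image features have already been extracted as numeric vectors. The Mahalanobis-distance check, the threshold, and all names here are illustrative assumptions, not a description of the Jameel Clinic's actual systems.

```python
# Illustrative sketch only: flag inputs whose feature vectors lie far from
# the training distribution. Thresholds and names are hypothetical.
import numpy as np

def fit_reference(train_features: np.ndarray):
    """Estimate the mean and (regularised) inverse covariance of training features."""
    mean = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])  # regularise for numerical stability
    return mean, np.linalg.inv(cov)

def mahalanobis_distance(x: np.ndarray, mean: np.ndarray, inv_cov: np.ndarray) -> float:
    """Distance of a single feature vector from the training distribution."""
    diff = x - mean
    return float(np.sqrt(diff @ inv_cov @ diff))

def is_out_of_scope(x: np.ndarray, mean, inv_cov, threshold: float = 6.0) -> bool:
    """Flag inputs that fall outside the scope of the training data."""
    return mahalanobis_distance(x, mean, inv_cov) > threshold

# Usage: fit on features from training images, then check new cases.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 16))      # stand-in for image embeddings
mean, inv_cov = fit_reference(train)
new_case = rng.normal(loc=4.0, size=16)  # a shifted, unfamiliar input
if is_out_of_scope(new_case, mean, inv_cov):
    print("Warning: input is outside the model's training distribution.")
```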