New paper out in BMJ Health & Care Informatics.
In collaboration with Prof Roy W Dudley, we published an opinion piece that reflects on an understudied aspect of AI for health: proprietary algorithms trained on big data resources that, somewhat counterintuitively, lack diversity and yield performance bias, with potentially negative consequences for patients and health practitioners. The article's primary author is Dr Jeremy Moreau, a graduate of the lab.
From the paper: “Whether IBM’s Watson, Google’s DeepMind or Tencent’s WeDoctor, the last few years have been characterised by unprecedented levels of research interest and new investments in artificial intelligence (AI) and digital healthcare technology. The number of publications on applications of AI and machine learning to medical diagnosis has increased dramatically since around 2015. Correspondingly, venture capital-backed digital health and AI startups worth over US$1 billion now number in the dozens. Yet, this influx of new investment has not been without controversy. Google’s recent partnership with national health group Ascension, which gave the company access to the clinical data of around 50 million patients, has been the target of significant media and congressional scrutiny. Likewise, pharmaceutical giant GlaxoSmithKline’s (GSK) US$300 million investment in direct-to-consumer genetic testing provider 23andMe has raised similar concerns. Under the terms of their 4–5 year agreement, GSK gained access to 23andMe’s genetic data and became its exclusive collaborator for drug target discovery programmes. While much of the coverage of these partnerships has focused on issues of privacy and consent, we argue that another key consideration lies in the risks associated with exclusive or privileged access to databases of patient information and the development of proprietary diagnostic algorithms.”