TECHNOLOGY

AI in Science: A Slow and Steady Journey

Francis Crick Institute, London, UK
Thu Jan 09 2025
This year, worldwide IT spending is projected to reach a whopping $5.74 trillion, with a significant chunk dedicated to generative AI (Gen AI). While the technology can speed up research, it also comes with risks that James Fleming, CIO at the Francis Crick Institute, is keenly aware of.

Fleming believes that using AI in science isn't as simple as throwing a model at a problem and hoping for the best. Scientific AI has to prove hypotheses accurately, especially in real-world applications like medical devices. He describes AI in science as a double-edged sword: on one hand, it can accelerate research; on the other, any conclusions it produces must be indisputably correct. That bar matters most in fields like medicine, where a tool claiming to predict cancer evolution needs solid proof. The challenge is that many popular AI models are black boxes, which makes their decisions hard to explain.

To tackle this, the Crick takes an iterative approach. Instead of rushing in, it moves in small, careful steps. The institute started five years ago with microscopy, using AI to analyze dense images and turn them into usable data. In a Parkinson's research project, researchers used AI to classify diseased cells, but they didn't stop there: they worked backward, figuring out why the model made its decisions and feeding that insight into further experimentation.

The key is to build trust and confidence in AI models gradually, testing and refining them in stages, layer by layer. Samra Turajlic's Cancer Dynamics Laboratory, for instance, uses AI to predict how kidney cancer will evolve, which involves training multiple models and cross-referencing them against vast genomic databases. Each step is meticulously checked to ensure the final model is reliable.

This slow and steady process isn't just about AI; it's about understanding and trust, and about making sure the conclusions drawn from AI are accurate and reliable. It's a reminder that in the age of AI, rushing isn't always the best strategy. Sometimes going slow is the way to go fast in the long run.
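
The microscopy step, turning dense images into usable data, is essentially segmentation plus per-object measurement. The article doesn't say which models the Crick uses, so here is a minimal Python sketch with classical thresholding standing in for a learned segmenter; cells_to_table and every threshold in it are invented for illustration.

    import numpy as np
    from scipy import ndimage

    def cells_to_table(image, threshold=0.6):
        """Turn a microscopy frame into per-cell rows: threshold the
        image, label connected components, and measure each one. A crude
        stand-in for the learned segmentation the article alludes to."""
        mask = image > threshold
        labels, n_cells = ndimage.label(mask)
        rows = []
        for cell_id in range(1, n_cells + 1):
            pixels = image[labels == cell_id]
            rows.append({"cell": cell_id,
                         "area_px": int(pixels.size),
                         "mean_intensity": float(pixels.mean())})
        return rows

    # Toy frame; a real pipeline would load actual image data.
    rng = np.random.default_rng(1)
    frame = rng.random((128, 128))
    for row in cells_to_table(frame)[:3]:
        print(row)

The point of reducing images to a table first is that every downstream claim can be checked against concrete numbers rather than a pixel blob.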
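
The "working backward" step on the Parkinson's classifier maps onto standard model-interpretation techniques. As one hedged example (not necessarily the Crick's method), occlusion sensitivity masks one patch of the input at a time and watches how the classifier's score moves, highlighting the regions that drive a "diseased" call. The toy_classifier here is a hypothetical stand-in for a trained model.

    import numpy as np

    def occlusion_saliency(model, image, patch=8, baseline=0.0):
        """Slide a blanked-out patch across the image and record how much
        the model's score drops; big drops mark influential regions."""
        base_score = model(image)
        saliency = np.zeros(image.shape)
        h, w = image.shape
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                occluded = image.copy()
                occluded[y:y + patch, x:x + patch] = baseline
                saliency[y:y + patch, x:x + patch] = base_score - model(occluded)
        return saliency

    # Invented classifier: scores a cell by the mean intensity of its
    # central region, so the saliency map should light up the center.
    def toy_classifier(img):
        cy, cx = img.shape[0] // 2, img.shape[1] // 2
        return float(img[cy - 8:cy + 8, cx - 8:cx + 8].mean())

    rng = np.random.default_rng(0)
    cell_image = rng.random((64, 64))
    heatmap = occlusion_saliency(toy_classifier, cell_image)
    print("most influential patch:", np.unravel_index(heatmap.argmax(), heatmap.shape))

A map like this is what lets researchers turn "the model said diseased" into a testable hypothesis about which cellular features actually matter.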
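
The article gives no detail on how the Cancer Dynamics Laboratory cross-references its models against genomic databases, but the general gating pattern, accepting an automated call only when the ensemble agrees and an external reference backs it up, can be sketched as follows. All function names, thresholds, and variant IDs below are invented.

    import statistics

    def cross_checked_call(models, features, known_variants, variant_id,
                           agreement_threshold=0.8):
        """Accept a 'will progress' prediction only if (1) the ensemble
        largely agrees and (2) the variant appears in the reference
        database; otherwise route the case to human review."""
        scores = [m(features) for m in models]
        call = statistics.mean(scores) > 0.5
        agreement = sum((s > 0.5) == call for s in scores) / len(scores)
        if agreement < agreement_threshold:
            return "review: ensemble disagrees"
        if variant_id not in known_variants:
            return "review: variant not in reference database"
        return "progressing" if call else "stable"

    # Toy ensemble over a two-feature input; real models would be trained.
    models = [lambda f: f[0], lambda f: 0.5 * f[0] + 0.5 * f[1], lambda f: f[1]]
    known_variants = {"VHL_p.R167Q", "SETD2_p.R1625C"}
    print(cross_checked_call(models, [0.9, 0.8], known_variants, "VHL_p.R167Q"))

Routing disagreements to human review rather than averaging them away is one concrete way to implement the layer-by-layer checking the article describes.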

questions

    If AI could hallucinate, would it see pink elephants in the lab?
    Can you explain the concept of 'hallucinations' in AI models and why it is problematic for scientific research?
    How does the iterative approach help in building trust with AI models?

actions