SCIENCE

Big Brain AI in Medical Research: What Do Clinicians Really Think?

Harvard Medical School, Boston, USA
Wed Jan 01 2025
Recent breakthroughs in Natural Language Processing and Artificial Intelligence (AI) have made Large Language Models (LLMs), such as GPT, popular among academic researchers. These models assist with tasks like literature reviews and manuscript drafting, but they also pose ethical challenges, such as generating questionable scientific information.

A recent study surveyed how researchers around the world view LLMs. It polled 226 physicians and medical professionals from 59 countries who took part in Harvard Medical School's Global Clinical Scholars' Research Training program between 2020 and 2024. Most respondents work in academia and have published multiple articles indexed on PubMed. Notably, researchers who were aware of LLMs tended to have more published work. Of those who were aware, about 19% had used LLMs in their papers, mainly to correct grammar and formatting; however, most did not disclose this use in their publications.

Looking ahead, about half of the respondents expected LLMs to have a positive impact, particularly on grammar, formatting, writing, and literature reviews. They also believed journals should permit AI use, provided regulations are in place to prevent misuse. The study shows that researchers recognize the potential of LLMs, but clear guidelines are needed to ensure ethical use.

questions

    In what ways can we maintain the integrity of academic research while leveraging the benefits of LLMs?
    What checks and balances should be in place to prevent the misuse of LLMs in academic research?
    If LLMs can help with grammatical errors, will they also start correcting our speech in real time?

actions