Researchers evaluated the potential of different large language models (LLMs) to streamline qualitative research through an artificial intelligence (AI) analysis of nurse interviews. The findings suggest that a hybrid approach can accelerate qualitative research while maintaining rigor.

This study is part of a larger collaboration between the University of Minnesota and the University of California, San Francisco to develop solutions for safer home recovery for emergency abdominal surgery patients.

Led by Dr. Genevieve Melton-Meaux, Dr. Jenna Marquard, and Dr. Elizabeth Wick, the project aims to improve what is currently a vulnerable recovery period during which patients often receive fragmented care.

To achieve this goal, the research team interviewed nurses at both sites to identify barriers and facilitators to recovery, both in the hospital and after discharge. They then used a rapid qualitative analysis framework to code and synthesize themes from the results. The interview data were processed through one manual workflow and three different LLM-assisted workflows, each of which produced a codebook.
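The article does not describe the study's actual prompts, models, or tooling. Purely as an illustration of the general pattern, the sketch below shows what one step of an LLM-assisted coding workflow might look like: excerpts are assigned codes from a codebook and the results are tallied. A trivial keyword matcher (a hypothetical `assign_codes` helper) stands in for the model call, and the codebook themes are invented for the example.

```python
# Illustrative sketch only: the study's real pipeline is not described
# in this article. A keyword matcher stands in for the LLM call that
# would assign qualitative codes to interview excerpts.

# Hypothetical codebook; themes and keywords are invented for illustration.
CODEBOOK = {
    "discharge_education": ["discharge instructions", "teach-back", "education"],
    "care_coordination": ["handoff", "follow-up", "coordination"],
    "symptom_management": ["pain", "nausea", "wound"],
}

def assign_codes(excerpt: str, codebook: dict[str, list[str]]) -> list[str]:
    """Return the codes whose keywords appear in the excerpt.
    In a real workflow, an LLM prompt would replace this matching."""
    text = excerpt.lower()
    return [code for code, kws in codebook.items()
            if any(kw in text for kw in kws)]

def synthesize(coded: list[list[str]]) -> dict[str, int]:
    """Tally code frequencies across excerpts -- a crude stand-in for
    synthesizing coded excerpts into a summary codebook."""
    counts: dict[str, int] = {}
    for codes in coded:
        for c in codes:
            counts[c] = counts.get(c, 0) + 1
    return counts

excerpts = [
    "Patients leave without understanding their discharge instructions.",
    "Pain control after surgery was the biggest barrier at home.",
]
coded = [assign_codes(e, CODEBOOK) for e in excerpts]
print(synthesize(coded))  # → {'discharge_education': 1, 'symptom_management': 1}
```

In a real human-in-the-loop setup, a researcher would review and correct the model-assigned codes before synthesis, which is the kind of oversight the study's hybrid approach implies.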

Workflows that incorporated LLM tools took significantly less time than fully manual analysis. In a blinded survey, the nurses who were interviewed largely preferred the LLM-generated codebooks, particularly one based on human notes taken during the interviews. The findings suggest that a human-in-the-loop approach to LLM-assisted qualitative analysis can balance efficiency with rigor.

A paper on the study was recently published in the International Journal of Qualitative Methods.