Artificial intelligence models, including OpenAI’s ChatGPT and Google’s Gemini, are generating fictitious journal references, prompting alarm from the International Committee of the Red Cross (ICRC). In a recent statement, the ICRC warned that popular AI tools are leading users to request records and references for publications that do not exist, such as the invented “Journal of International Relief” and “International Humanitarian Digital Repository.” The resulting confusion has created significant challenges for students, researchers, and archivists alike.

The fallout from these AI-generated citations has been particularly troublesome for institutions like the Library of Virginia. Sarah Falls, the library’s chief of researcher engagement, reported that approximately 15 percent of the reference inquiries the library receives are generated by ChatGPT, many of them containing made-up citations for both published works and unique primary source documents. This, she noted, complicates verification for library staff, who now face the difficulty of demonstrating that a cited record never existed in the first place. “For our staff, it is much harder to prove that a unique record doesn’t exist,” Falls remarked.

This is not a new problem; AI models are known to fabricate citations regularly. In response, the ICRC has urged researchers to consult actual online catalogs and vetted scholarly publications to identify real studies rather than relying on potentially fabricated AI outputs. The Library of Virginia is adapting as well, encouraging researchers to verify their sources before submitting queries, particularly when AI-generated information may be involved. Falls emphasized the library’s need to limit the time staff spend on verification, a shift that signals how academic and research institutions more broadly may handle inquiries in the age of AI.
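
One way researchers can pre-screen an AI-supplied citation before sending it to a library or archive is to query a public bibliographic index such as Crossref. The sketch below is a minimal illustration in Python, assuming the third-party requests library; the helper name find_citation is hypothetical, and an empty result only suggests, rather than proves, that a reference is fabricated.

```python
import requests

# Crossref's public REST API for published works.
CROSSREF_API = "https://api.crossref.org/works"

def find_citation(title, rows=3):
    """Search Crossref for works whose bibliographic data matches `title`.

    Returns a list of (title, DOI) pairs. An empty list is a strong hint
    that the citation may be fabricated, though absence from one index
    does not by itself prove a work never existed.
    """
    params = {"query.bibliographic": title, "rows": rows}
    resp = requests.get(CROSSREF_API, params=params, timeout=10)
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [(item.get("title", ["(untitled)"])[0], item.get("DOI", ""))
            for item in items]

if __name__ == "__main__":
    # Check the fictitious title cited in the ICRC's example.
    for title, doi in find_citation("Journal of International Relief"):
        print(f"{title} -> https://doi.org/{doi}")
```

A check like this only covers published, DOI-registered literature; unique primary source documents of the kind Falls describes appear in no such index, which is precisely why fabricated archival citations are so much harder for library staff to disprove.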