“How do I know if I can trust the information produced by a chatbot?”
Many people share this concern. Generative AI tools produce large volumes of information and often present it confidently, even authoritatively. The question of whether you can trust a chatbot's output is fundamentally a question of information literacy.
The Association of College and Research Libraries' (ACRL) Framework for Information Literacy for Higher Education defines information literacy as “the set of integrated abilities encompassing the reflective discovery of information, the understanding of how information is produced and valued, and the use of information in creating new knowledge and participating ethically in communities of learning.”
This definition is supported by a framework of six interconnected concepts, or “frames,” that point to knowledge practices and dispositions related to information literacy: Authority Is Constructed and Contextual; Information Creation as a Process; Information Has Value; Research as Inquiry; Scholarship as Conversation; and Searching as Strategic Exploration.
Furthermore, Critical Information Literacy is “a theory and practice that considers the sociopolitical dimensions of information and production of knowledge, and critiques the ways in which systems of power shape the creation, distribution, and reception of information” (Drabinski and Tewell, 2019).
Where does a chatbot's information come from?
Chatbots like OpenAI’s GPT series or Google’s Gemini are examples of tools powered by Large Language Models (LLMs). These LLMs are proprietary and are trained on vast amounts of data using algorithms that are largely invisible to us, the users.
Because the inner workings of these tools are invisible, it is difficult to determine the authority of information created by LLMs. Information resources created by humans “reflect their creators’ expertise and credibility, and are evaluated based on the information need and the context in which the information will be used” (ACRL Framework for Information Literacy, “Authority Is Constructed and Contextual”). LLMs, however, operate much like a “black box”: users “can feed the system an input and receive an output, but you cannot examine the system’s code or the logic that produced the output” (Bagchi, 2023). In other words, the exact mechanisms that LLMs use to produce information are hidden.
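To make the “black box” concrete, consider how a program typically interacts with a proprietary LLM: it sends a prompt to a web API and receives generated text back. The sketch below (in Python) is a minimal illustration assuming a hypothetical chat-completions-style endpoint; the URL, model name, and API key are placeholders, not a real service. Notice that nothing in the exchange reveals the model’s training data, weights, or reasoning.

```python
import requests

# A minimal sketch, assuming a hypothetical chat-completions-style API.
# ENDPOINT, the model name, and API_KEY are placeholders, not a real service.
API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://api.example-llm.com/v1/chat/completions"

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "example-model",
        "messages": [{"role": "user", "content": "Who founded UCI?"}],
    },
    timeout=30,
)

# The only thing visible to the user is the generated text. The model's
# training data, weights, and the logic behind this answer stay hidden.
print(response.json()["choices"][0]["message"]["content"])
```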
And yet, Gen-AI tools can generate information that appears credible. They can be prompted to cite sources, and they often use language that conveys authority. Much of the time, the information produced is accurate, or at least partially accurate. But there are also ways in which the information can be wrong, even when it seems correct:
- Chatbots are known to “hallucinate,” fabricating citations and bibliographic details that look plausible but do not correspond to real publications (Walters and Wilder, 2023).
- AI systems can behave deceptively, systematically producing false impressions in users (Park et al., 2023).
- Chatbots can be deliberately taught to spread disinformation (White, 2024).
Misinformation is not unique to AI-generated content. Humans can also produce misinformation. However, as Emily Bender, a noted scholar on Generative AI, puts it: “Large language models are designed to make stuff up.” (Bender, 2023, emphasis added).
The following screenshot demonstrates how a chatbot can provide seemingly authoritative sources, mentioning academic databases like “PubMed, JSTOR, and Google Scholar” and thereby implying that its citations came from these databases. Yet when asked for clarification, the chatbot admits that the source was “hypothetical and does not correspond to a specific, traceable academic publication.”

Evaluating AI-generated information can be challenging. At a minimum, it is important to try to fact-check the information, because chatbots are known to “hallucinate” both information and sources.
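One partial check, sketched below in Python, is to look up a chatbot-supplied citation in an open bibliographic index. This example queries Crossref’s public works API; the sample citation string is invented for demonstration. A close match suggests the cited work exists, while no match is a signal to investigate further; it is a starting point, not a complete verification workflow.

```python
import requests

def check_citation(citation: str) -> None:
    """Search Crossref's public works API for a citation string and print
    the closest matches so a human can compare titles and DOIs."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 3},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        print("No matches found; the citation may be fabricated.")
        return
    for item in items:
        title = item.get("title") or ["(no title)"]
        print(f"{title[0]} | DOI: {item.get('DOI', 'n/a')}")

# The citation string below is invented for demonstration purposes.
check_citation("Doe, J. (2021). Effects of screen time on memory. J. Hyp. Studies.")
```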
One practical method that professional fact-checkers employ to verify online information is “lateral reading,” in which a source is evaluated against external sources, often by opening multiple tabs side by side in a browser window. Traditionally, one of the first steps of lateral reading is researching the author of the information to establish credibility or uncover specific biases. Because AI-generated information has no “author,” however, researchers lose an important evaluation criterion. It can become difficult to determine which information to evaluate or verify unless the researcher already has some expertise in the subject.
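Lateral reading itself happens in a browser, but its core move, consulting independent sources side by side, can be illustrated in code. The following Python sketch queries Wikipedia’s public search API for a topic so the results can be compared against a chatbot’s claims; the example topic is ours, and the sketch stands in for “opening another tab,” not for the full practice.

```python
import requests

def open_more_tabs(topic: str) -> None:
    """Query Wikipedia's public search API for independent coverage of a
    topic, mimicking the 'open more tabs' step of lateral reading."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "search",
            "srsearch": topic,
            "format": "json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    for hit in resp.json()["query"]["search"][:3]:
        print(hit["title"])

# Example topic, chosen to match the video below.
open_more_tabs("University of California, Irvine anteater mascot")
```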
This short video demonstrates lateral reading in response to an essay generated by UCI’s ZotGPT (2024) about the role of the anteater in UCI’s history.
Although this video features an older version of ZotGPT, the strategies it demonstrates apply to evaluating any AI-generated information today.
Association of College and Research Libraries. “Framework for Information Literacy for Higher Education." Accessed June 17, 2024. https://www.ala.org/acrl/standards/ilframework.
Bagchi, Saurabh. “What Is a Black Box? A Computer Scientist Explains What It Means When the Inner Workings of AIs Are Hidden.” The Conversation, May 22, 2023. http://theconversation.com/what-is-a-black-box-a-computer-scientist-explains-what-it-means-when-the-inner-workings-of-ais-are-hidden-203888.
Bender, Emily. “ChatGP-Why: When, If Ever, Is Synthetic Text Safe, Appropriate, and Desirable?” Presented at the Global Research Alliance for AI in Learning and Education (GRAILE), August 8, 2023. https://www.youtube.com/watch?v=qpE40jwMilU.
James, Amy B., and Ellen Hampton Filgo. “Where Does ChatGPT Fit into the Framework for Information Literacy? The Possibilities and Problems of AI in Library Instruction.” College & Research Libraries News 84, no. 9 (2023): 334-341. https://doi.org/10.5860/crln.84.9.334.
Park, Peter S., Simon Goldstein, Aidan O’Gara, Michael Chen, and Dan Hendrycks. “AI Deception: A Survey of Examples, Risks, and Potential Solutions.” arXiv (2023). https://doi.org/10.48550/arXiv.2308.14752.
Walters, William H., and Esther Isabelle Wilder. “Fabrication and Errors in the Bibliographic Citations Generated by ChatGPT.” Scientific Reports 13, no. 1 (2023): 14045. https://doi.org/10.1038/s41598-023-41032-5.
White, Jeremy. “See How Easily A.I. Chatbots Can Be Taught to Spew Disinformation.” The New York Times, May 19, 2024. https://www.nytimes.com/interactive/2024/05/19/technology/biased-ai-chatbots.html.