“How do I know if I can trust the information produced by an AI tool?”
Many people share this concern. Generative AI tools produce large volumes of information and often present it confidently or authoritatively. Whether you can trust that information is fundamentally a question of information literacy.
The Association of College and Research Libraries' (ACRL) Framework for Information Literacy for Higher Education defines information literacy as “the set of integrated abilities encompassing the reflective discovery of information, the understanding of how information is produced and valued, and the use of information in creating new knowledge and participating ethically in communities of learning.”
This definition is supported by a framework of six interconnected concepts, or frames, that point to knowledge practices and dispositions related to information literacy: Authority Is Constructed and Contextual; Information Creation as a Process; Information Has Value; Research as Inquiry; Scholarship as Conversation; and Searching as Strategic Exploration.
Furthermore, Critical Information Literacy is “a theory and practice that considers the sociopolitical dimensions of information and production of knowledge, and critiques the ways in which systems of power shape the creation, distribution, and reception of information” (Drabinski and Tewell, 2019).
Information produced by Gen-AI tools differs from information produced by human authors in important ways. With human-created information, we understand that “information creation is a process” in which a human has made choices about how to (or how not to) research, create, revise, and communicate ideas. Human-created information leaves a trail that can be fact-checked, or examined for biases, motivations, and context.
Information created by Gen-AI tools, on the other hand, is the immediate output of predictive algorithms responding to a specific prompt. The output can change based on how a prompt is phrased, even when the underlying question is the same. Often the information produced is accurate, or at least partially accurate.
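To make this concrete, here is a minimal sketch (not part of this guide's sources, and not ZotGPT itself) that sends two phrasings of the same underlying question to a general-purpose LLM API and prints both answers. The OpenAI Python client, the model name, and the example prompts are illustrative assumptions; any hosted LLM would show similar variation.

```python
from openai import OpenAI

# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name below is illustrative.
client = OpenAI()

prompts = [
    "When did UC Irvine adopt the anteater as its mascot?",
    # The second phrasing embeds an assumption (a specific decade) that the
    # model may simply accept rather than question.
    "Explain why UC Irvine chose the anteater as its mascot in the 1970s.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,       # sampling also adds run-to-run variation
    )
    print(prompt)
    print(response.choices[0].message.content)
    print()
```

Running this twice, or swapping in another phrasing, will typically produce answers that differ in detail and confidence, which is exactly why the output needs to be evaluated rather than taken at face value.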
However, there are many ways in which the information produced by a Gen-AI tool can be wrong even when it seems correct: it may fabricate facts or citations, reproduce bias or outdated claims from its training data, or repeat misinformation introduced through its training or its prompts.
Where does the information from a Gen-AI tool come from?
Gen-AI tools such as OpenAI’s GPT series or Google’s Gemini are built on Large Language Models (LLMs). These LLMs are proprietary and are trained on vast amounts of data using algorithms that are often invisible to us, the users.
Because the inner workings of these tools are invisible, it is difficult to determine the authority of information created by LLMs. Information resources created by humans “reflect their creators’ expertise and credibility, and are evaluated based on the information need and the context in which the information will be used” (ACRL Framework for Information Literacy, “Authority Is Constructed and Contextual”). LLMs, however, operate much like a “black box”: you “can feed the system an input and receive an output, but you cannot examine the system’s code or the logic that produced the output” (Bagchi, 2023). In other words, the exact mechanisms LLMs use to produce information are hidden.
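The “black box” relationship can also be pictured in code. In the hypothetical sketch below, the hosted model is, from the user's side, nothing more than a function from prompt text to output text: you can probe it with inputs and compare outputs, but there is nothing to inspect about its training data, parameters, or reasoning. The client library and model name are assumptions, not a description of any specific tool.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

def black_box(prompt: str) -> str:
    """Send a prompt to a hosted model and return only its text output.

    This function is the entire interface available to the user; the model's
    training data, weights, and decision logic remain hidden on the
    provider's servers.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# All we can do is observe input/output pairs and verify them elsewhere.
print(black_box("Who was UC Irvine's founding chancellor?"))
```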
How do I evaluate information from a Gen-AI tool?
Lateral reading is a strategy used by professional fact-checkers: rather than staying on the page you are evaluating, you open new tabs and windows in your browser to verify the information against outside sources. This is a useful strategy for evaluating any kind of online information.
Evaluating information created by a Gen-AI tool calls for several additional steps. LLMs may draw on factual and false source material indiscriminately, and because they respond to specific prompts, the prompts themselves may introduce misinformation or unexamined assumptions.
Evaluating ZotGPT using Lateral Reading
This short video demonstrates lateral reading in response to a prompt asking UCI’s ZotGPT about the role of the anteater in UCI’s history.
Association of College and Research Libraries. “Framework for Information Literacy for Higher Education.” Accessed June 17, 2024. https://www.ala.org/acrl/standards/ilframework.
Bagchi, Saurabh. “What Is a Black Box? A Computer Scientist Explains What It Means When the Inner Workings of AIs Are Hidden.” The Conversation, May 22, 2023. http://theconversation.com/what-is-a-black-box-a-computer-scientist-explains-what-it-means-when-the-inner-workings-of-ais-are-hidden-203888.
James, Amy B., and Ellen Hampton Filgo. “Where Does ChatGPT Fit into the Framework for Information Literacy? The Possibilities and Problems of AI in Library Instruction.” College & Research Libraries News 84, no. 9 (2023): 334–341. https://doi.org/10.5860/crln.84.9.334.
Park, Peter S., Simon Goldstein, Aidan O’Gara, Michael Chen, and Dan Hendrycks. “AI Deception: A Survey of Examples, Risks, and Potential Solutions.” arXiv (2023). https://doi.org/10.48550/arXiv.2308.14752.
Walters, William H., and Esther Isabelle Wilder. “Fabrication and Errors in the Bibliographic Citations Generated by ChatGPT.” Scientific Reports 13, no. 1 (2023): 14045. https://doi.org/10.1038/s41598-023-41032-5.
White, Jeremy. “See How Easily A.I. Chatbots Can Be Taught to Spew Disinformation.” The New York Times, May 19, 2024. https://www.nytimes.com/interactive/2024/05/19/technology/biased-ai-chatbots.html.