There are numerous ways to evaluate information and determine whether it is the most relevant or best information for a specific need. No single approach works in every situation or for every information need. That said, these two popular acronyms can help you remember what to do when evaluating information.
Learn more about information valuation and evaluation in this tutorial designed for ENGR 190W.
The CRAAP test prioritizes reading "vertically" within a source as a means of evaluation. Reading vertically means evaluating a source using information found within the source itself.
The CRAAP test is particularly helpful when thinking about questions like "is this work relevant to my information need?" or "what is the purpose or audience for this information?" In addition to Currency, Relevance, Authority, Accuracy, and Purpose, some also add "Ease of Use," which reflects the ability to easily cite, download, copy, export, or print the content.
CURRENCY: The timeliness of the information
RELEVANCE: The importance of the information for your needs
AUTHORITY: The source of the information
ACCURACY: The reliability, truthfulness, and correctness of the content
PURPOSE: The reason the information exists
The SIFT method prioritizes reading “laterally” to evaluate a source. Reading laterally means opening up external sources to evaluate the information at hand.
The SIFT method is used by professional fact-checkers and is particularly helpful in validating the authority or the accuracy of information. It also asks you to “stop” and conduct a metacognitive check-in first to see if you have existing beliefs or biases that may cause you to want to trust or discredit the information.
STOP. Check your emotions. How does this source make you feel? Make sure you are aware of your own biases.
INVESTIGATE THE SOURCE. Open up more tabs or windows. Look up the source in Wikipedia, or use other fact-checking sites like Politifact or Snopes. Check out the author. See if you can figure out how the source has been funded.
FIND BETTER COVERAGE. Open up more tabs or windows. What are other sources saying about this same topic? How does this source fit in with other conversations about this topic?
TRACE CLAIMS BACK TO THE ORIGINAL CONTEXT. Things get misquoted, falsified, or taken out of context all the time. If you find a quote, claim, or data point that is cited in your source, go "upstream" and look it up in the original context. If there are links, open them. If there is an image, try to figure out where it's from. If you can't trace things back, that may influence how much you want to trust the source.
Information created by Gen-AI tools is the immediate output of predictive algorithms responding to a specific prompt. The information output can change based on how a prompt is phrased, even if the idea behind the prompt is the same. Many times, the information produced is accurate, or at least partially accurate.
However, there are many ways in which the information produced by a Gen-AI tool can be wrong, even if it seems correct.
Lateral reading is a strategy used by professional fact checkers where you open up multiple new tabs and windows on your browser to verify information using outside sources. This is a useful strategy for evaluating any kind of online information.
When evaluating information created by a Gen-AI tool, it is important to take several additional steps. This is because LLMs may be drawing from factual and false source materials indiscriminately, and also because LLMs are responding to specific prompts, and those prompts may also contain misinformation or assumptions.
The Large Language Models (LLMs) powering Gen-AI tools are trained on large data sets that contain biases that we (the end users) are not able to evaluate. The seminal research paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" demonstrates how large data sets "overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalized populations" (Bender et al., 2021, p. 610). In other words, LLMs are trained on data sets that likely reproduce historical biases and may include overt hate speech or misinformation.
Stolen Intellectual Property: Training sets for LLMs contain intellectual property from creators who did not consent to their work being used.
Exploitative Labor Practices: Some companies have outsourced the training of AI models to workers in the Global South, who label disturbing, toxic content for less than $2 per hour.
Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices.
Learn more about predatory publishing in this 5-minute tutorial designed for ENGR 190W.
Off-campus? Please use the Software VPN and choose the group UCIFull to access licensed content.
Software VPN is not available for guests, so they may not have access to some content when connecting from off-campus.