
Generative AI and Information Literacy



Critical information literacy and bias

Critical Information Literacy is "a theory and practice that considers the sociopolitical dimensions of information and production of knowledge, and critiques the ways in which systems of power shape the creation, distribution, and reception of information" (Drabinski and Tewell, 2019).

Information, in other words, is never "unbiased." Human creators of information have biases based on their own lived experiences and perspectives. Human receivers of information also have biases (e.g., "confirmation bias," when the information you read reinforces your existing beliefs, or "cognitive dissonance," when information does not align with your beliefs). Biases are also embedded in the ways that information is distributed: for example, who decides what kinds of information get published or archived, or how search engines rank the pages displayed in a results list.

Bias, DEI, and technology

Multiple studies have documented how technologies perpetuate systemic biases and inequalities in our societies. In her pioneering book Algorithms of Oppression: How Search Engines Reinforce Racism (2018), digital media scholar Safiya Noble analyzed Google search results from 2009 to 2015 to demonstrate that search engines are not neutral but reinforce racist and sexist biases.

The Large Language Models (LLMs) powering Gen-AI tools are trained on large data sets that contain biases that we (the end users) are not able to evaluate. The seminal research paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" demonstrates how large data sets "overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalized populations" (Bender et al., 2021, p. 610). In other words, LLMs are trained on data that likely reproduce historical biases or may include overt hate speech or misinformation.

When used in Gen-AI chatbots, these LLMs can produce or amplify sexist, ableist, racist, or other harmful ideologies when responding to user queries (Bender et al., 2021, p. 617). Omiye et al.'s "Large Language Models Propagate Race-Based Medicine" (2023) demonstrates how the integration of LLMs into healthcare systems can further discriminate against persons of color in medicine. When used in workforce recruitment and resume screening, LLMs can perpetuate gender, age, and disability biases (Glazko et al., 2024).

Selected readings

On Critical Information Literacy and Critical AI Literacies

Drabinski, Emily, and Eamon Tewell. “Critical Information Literacy.” In The International Encyclopedia of Media Literacy, edited by Renee Hobbs and Paul Mihailidis, 1st ed., 1–4. Wiley, 2019. https://doi.org/10.1002/9781118978238.ieml0042.

Gupta, Anuj, Yasser Atef, Anna Mills, and Maha Bali. “Assistant, Parrot, or Colonizing Loudspeaker? ChatGPT Metaphors for Developing Critical AI Literacies.” Open Praxis 16, no. 1 (2024): 37–53. https://doi.org/10.55982/openpraxis.16.1.631.

On Bias in Technology and LLMs

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), March 3–10, 2021, Virtual Event, Canada. https://doi.org/10.1145/3442188.3445922.

Browne, Grace. “AI Is Steeped in Big Tech’s ‘Digital Colonialism.’” Wired, May 25, 2023. https://www.wired.com/story/abeba-birhane-ai-datasets/.

Buolamwini, Joy. Unmasking AI: A Story of Hope and Justice in a World of Machines. New York: Random House, 2023.

Glazko, Kate, Yusuf Mohammed, Ben Kosa, Venkatesh Potluri, and Jennifer Mankoff. “Identifying and Improving Disability Bias in GPT-Based Resume Screening.” In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24), June 03–06, 2024, Rio de Janeiro, Brazil. https://doi.org/10.1145/3630106.3658933.

“How Artificial Intelligence Bias Affects Women and People of Color.” UCB-UMT, December 8, 2021. https://ischoolonline.berkeley.edu/blog/artificial-intelligence-bias/.

Lizarraga, Lori. “How Does a Computer Discriminate?” NPR Code Switch, November 8, 2023. https://www.npr.org/2023/11/08/1197954253/how-ai-and-race-interact.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018.

Omiye, Jesutofunmi A., Jenna C. Lester, Simon Spichak, Veronica Rotemberg, and Roxana Daneshjou. “Large Language Models Propagate Race-Based Medicine.” NPJ Digital Medicine 6, no. 1 (2023): 1–4. https://doi.org/10.1038/s41746-023-00939-z.

Questions to consider

  • What is your reaction to learning about systemic bias and how it influences and impacts technology? Does it change the way you approach or use Gen-AI tools?
  • How might our own (often unconscious) biases affect how we prompt a chatbot? How would that affect the output?