Generative AI and Information Literacy



Generative AI and the Environment

Many people are unaware of the extent of Generative AI’s energy consumption. The data centers that house Gen-AI servers consume enormous amounts of water to cool those servers, and the energy demands of Gen-AI are driving tech companies to seek out new energy sources, placing additional strain on an environment already stressed by climate change.

As companies continue to compete for market share in Gen-AI technologies, and as Gen-AI tools become more customized and more media-rich, the energy demands only increase: an analysis by MIT Technology Review estimates that by 2028, AI-specific power consumption will “rise to between 165 and 326 terawatt-hours per year…That could generate the same emissions as driving over 300 billion miles—over 1,600 round trips to the sun from Earth” (O’Donnell and Crownhart, 2025).
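As a quick sanity check on the mileage comparison in that quote, the arithmetic can be sketched as follows (assuming an average Earth–Sun distance of about 93 million miles; the figure is an illustration, not part of the original analysis):

```python
# Rough check of the "over 1,600 round trips to the sun" comparison.
# Assumption: average Earth-Sun distance of ~93 million miles (1 AU).
earth_sun_miles = 93_000_000
round_trip_miles = 2 * earth_sun_miles        # one round trip to the sun

driven_miles = 300_000_000_000                # "over 300 billion miles"
round_trips = driven_miles / round_trip_miles

print(f"{round_trips:.0f} round trips")       # roughly 1,600, matching the quote
```

The result (about 1,613 round trips) is consistent with the article’s “over 1,600” figure.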

Even studies arguing that Gen-AI tools offer benefits because they may use less energy than human labor for the same amount of work still concede that these technologies have substantial environmental impacts, and that Gen-AI tools are unlikely to simply “replace human labor” entirely.

Generative AI and Intellectual Property

Recently, a $1.5 billion class-action settlement between Anthropic, maker of the Claude chatbot, and book authors confirmed that LLMs can be trained on copyrighted works without permission. An initial ruling in that case found that Anthropic had downloaded 7 million books that it knew had been pirated. It is not clear whether other LLMs are also trained on pirated works, since training datasets are not made public to users. The case against Anthropic is just one of many ongoing class-action suits that authors have brought against Gen-AI companies.

Gen-AI-authored content has raised broader questions about intellectual property and copyright frameworks. Existing models assume human authorship, but as people incorporate more AI-generated content and tools into their work, the lines of authorship blur. These questions of authorship and training go beyond text: AI-generated images and media present additional challenges to existing legal frameworks for intellectual property.

Generative AI and Research / Education

Increasingly, researchers have been investigating the impact of Gen-AI tools on research and education. 

Within academic publishing and research, researchers increasingly use Gen-AI tools such as Perplexity and Research Rabbit to find and organize large amounts of information. At the same time, a growing number of publications are suspected of including LLM-generated text without any statement disclosing the use of these tools. There is also concern that peer reviewers are using LLMs without disclosure, diluting the overall rigor of scientific publishing.

Within education, the evidence is still mixed. Systematic reviews report both positive and negative effects of student chatbot use on engagement and on various measures of student learning. Other studies suggest that Gen-AI tools enable novel teaching approaches and enhanced collaborative learning, but emphasize the need for human oversight.

While Gen-AI tools are frequently credited with improving efficiency, a growing body of research asks whether this efficiency comes with measurable cognitive costs. In June 2025, a team of MIT researchers released a widely discussed preprint comparing the brain activity of students who wrote essays using chatbots, search engines, or their “brain only.” It found “significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity” (Kosmyna et al., 2025).

Selected readings

Environment and Generative AI

Crawford, Kate. "Generative AI's environmental costs are soaring -- and mostly secret." Nature, February 20, 2024. https://www.nature.com/articles/d41586-024-00478-x.

Luccioni, Sasha, et al. “The Environmental Impacts of AI – Policy Primer.” Hugging Face Blog, 2024. https://doi.org/10.57967/hf/3004.

O’Donnell, James, and Casey Crownhart. “We Did the Math on AI’s Energy Footprint. Here’s the Story You Haven’t Heard.” MIT Technology Review, May 20, 2025. Accessed June 11, 2025. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/.

Ren, Shaolei, Bill Tomlinson, Rebecca W. Black, and Andrew W. Torrance. "Reconciling the contrasting narratives on the environmental impact of large language models." Scientific Reports 14, no. 1 (2024): 26310. https://www.nature.com/articles/s41598-024-76682-6.

Intellectual Property and Generative AI

Appel, Gil, Juliana Neelbauer, and David A. Schweidel. “Generative AI Has an Intellectual Property Problem.” Harvard Business Review, April 7, 2023. https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem.

Metz, Cade. “Anthropic Agrees to Pay $1.5 Billion to Settle Lawsuit With Book Authors.” The New York Times, September 5, 2025. https://www.nytimes.com/2025/09/05/technology/anthropic-settlement-copyright-ai.html.

Schmelzer, Ron. “What Is The Future Of Intellectual Property In A Generative AI World?” Forbes, July 18, 2024. https://www.forbes.com/sites/ronschmelzer/2024/07/18/what-is-the-future-of-intellectual-property-in-a-generative-ai-world/.

“Understanding the AI Class Action Lawsuits.” The Authors Guild, May 6, 2025. https://authorsguild.org/news/ai-class-action-lawsuits/.

Research / Education and Generative AI

Deng, Ruiqi, Maoli Jiang, Xinlu Yu, Yuyan Lu, and Shasha Liu. “Does ChatGPT Enhance Student Learning? A Systematic Review and Meta-Analysis of Experimental Studies.” Computers & Education 227 (April 2025): 105224. https://doi.org/10.1016/j.compedu.2024.105224.

“How Much Research Is Being Written by Large Language Models?” Stanford University Human-Centered Artificial Intelligence, May 13, 2024. https://hai.stanford.edu/news/how-much-research-being-written-large-language-models.

Kosmyna, Nataliya, Eugene Hauptmann, Ye Tong Yuan, et al. “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task.” arXiv:2506.08872. Preprint, arXiv, June 10, 2025. https://doi.org/10.48550/arXiv.2506.08872.

Liang, Weixin, Yaohui Zhang, Zhengxuan Wu, et al. “Mapping the Increasing Use of LLMs in Scientific Papers.” arXiv:2404.01268. Preprint, arXiv, April 1, 2024. https://doi.org/10.48550/arXiv.2404.01268.

Lo, Chung Kwan, Khe Foon Hew, and Morris Siu-yung Jong. “The Influence of ChatGPT on Student Engagement: A Systematic Review and Future Research Agenda.” Computers & Education 219 (October 2024): 105100. https://doi.org/10.1016/j.compedu.2024.105100.

Peláez-Sánchez, Iris Cristina, Davis Velarde-Camaqui, and Leonardo David Glasserman-Morales. “The Impact of Large Language Models on Higher Education: Exploring the Connection between AI and Education 4.0.” Frontiers in Education 9 (June 2024). https://doi.org/10.3389/feduc.2024.1392091.

Stadler, Matthias, Maria Bannert, and Michael Sailer. “Cognitive Ease at a Cost: LLMs Reduce Mental Effort but Compromise Depth in Student Scientific Inquiry.” Computers in Human Behavior 160 (November 2024): 108386. https://doi.org/10.1016/j.chb.2024.108386.

Questions to consider

  • Does learning about the environmental impacts of Generative AI affect the way you will use it in the future? Why or why not?
  • What surprised you most from this section on challenges with Generative AI? How do you think these challenges might impact you? 
  • What has been your experience with using Generative AI in learning? In what ways do you think it helps you learn? In what ways do you think it prevents you from gaining skills or cognitive processes?