AI in Research

This guide offers advice on AI-powered tools and functionality created for or used in academic research.

Digital Scholarship Services

DSS fosters the use of digital content and transformative technology in scholarship and academic activities. We provide consultative and technical support for a wide range of tools and platforms. We work with the campus community to publish, promote, and preserve the digital products of research through consultation, teaching, and systems administration. Our areas of expertise include data curation, research data management, computational research, digital humanities, and scholarly communication.

Note

Use of AI is fraught with complications involving accuracy, bias, academic integrity, and intellectual property, and it may not be appropriate in all academic settings. This guide is intended primarily for academic researchers who want to use AI tools in their research.

Students are strongly advised to consult with their instructor before using AI-generated content in their research or coursework. For more information on generative AI, see the Generative AI and Information Literacy guide.

Ethical Dilemmas of AI

Ethical issues related to artificial intelligence are a complex and evolving field of concern. As AI technology continues to advance, it raises various ethical dilemmas and challenges. Here are some of the key ethical issues associated with AI:

  • Bias and Fairness: AI systems can inherit and even amplify biases present in their training data. This can result in unfair or discriminatory outcomes, particularly in hiring, lending, and law enforcement applications. Addressing bias and ensuring fairness in AI algorithms is a critical ethical concern.
  • Privacy: AI systems often require access to large amounts of data, including sensitive personal information. The ethical challenge lies in collecting, using, and protecting this data to prevent privacy violations.
  • Transparency and Accountability: Many AI algorithms, particularly deep learning models, are often considered “black boxes” because they are difficult to understand or interpret. Ensuring transparency and accountability in AI decision-making is crucial for user trust and ethical use of AI.
  • Autonomy and Control: As AI systems become more autonomous, there are growing concerns about the potential loss of human control. This is especially relevant in applications like autonomous vehicles and military drones, where AI systems make critical decisions.
  • Job Displacement: Automation through AI can lead to job displacement and economic inequality. Ensuring a just transition for workers and addressing the societal impact of automation is an ethical issue.
  • Security and Misuse: AI can be used for malicious purposes, such as cyberattacks, deepfake creation, and surveillance. Ensuring the security of AI systems and preventing their misuse is an ongoing challenge.
  • Accountability and Liability: Determining who is responsible when an AI system makes a mistake or causes harm can be difficult. Establishing clear lines of accountability and liability is essential for addressing AI-related issues.
  • Ethical AI in Healthcare: The use of AI in healthcare, such as diagnostic tools and treatment recommendations, raises ethical concerns related to patient privacy, data security, and the potential for AI to replace human expertise.
  • AI in Criminal Justice: The use of AI for predictive policing, risk assessment, and sentencing decisions can perpetuate biases and raise questions about due process and fairness.
  • Environmental Impact: The computational resources required to train and run AI models can have a significant environmental impact. Ethical considerations include minimizing AI’s carbon footprint and promoting sustainable AI development.
  • AI in Warfare: The development and use of autonomous weapons raise ethical concerns about the potential for AI to make life-and-death decisions in armed conflicts.
  • Bias in Content Recommendation: AI-driven content recommendation systems can reinforce existing biases and filter bubbles, influencing people’s views and opinions.
  • AI in Education: The use of AI in education, such as automated grading and personalized learning, raises concerns about data privacy, the quality of education, and the role of human educators.

From: https://annenberg.usc.edu/research/center-public-relations/usc-annenberg-relevance-report/ethical-dilemmas-ai