<aside>
This document reflects the opinions of me, Natalie Dowling, as an individual. It is not intended to serve as a policy for AI use in any class. It does not represent the views of the University of Chicago or the MA Program in the Social Sciences.
This was originally drafted in 2023. As the capacities and limitations of LLMs evolve, our collective understanding and my personal feelings evolve as well. These guidelines cannot reflect all the tangled and ever-changing facts, opinions, and ethics of the topic.
These are the guidelines I set for myself and the reasoning behind them. They are not intended to be prescriptive or universally applicable. Instead, I hope they prompt you to reflect on your own AI use, so that you can determine for yourself how to use (or not use) AI in a way you believe is both ethical and productive.
</aside>
Large language models and other generative AI tools like ChatGPT, Bard, Copilot, and Gemini have become ubiquitous in and outside of academia. Whatever your views and principles, there’s no going back. I do not believe there is any one “correct” way students should (or shouldn’t) use AI. It will depend on many factors like the content and structure of the class, the level of instruction, the course’s learning objectives, and the pedagogical styles of individual instructors (among other things).
Although what is appropriate and acceptable use of AI will be somewhat contextually dependent, two issues apply across the board: academic integrity and the broader ethics of AI use.
This document will offer some general guidelines for using AI ethically and constructively. Not all guidelines will be useful in all contexts, but this may be a useful starting point to reflect on your own philosophy of AI use. It may also help you understand the motivations behind AI policies that can vary significantly from class to class.
Academic integrity has become inextricably linked with artificial intelligence in the classroom, but they are not equivalent. AI tools are not inherently unethical, nor are they inherently ethical. They can be used to cheat, but they can also be used to learn. They can be used to misrepresent, but they can also be used to clarify. They can be used to plagiarize, but they can also be used to create.
Students in my courses are expected to follow UChicago’s Academic Honesty & Plagiarism policy. To add clarity to this general policy, I use Oxford University’s explanation of plagiarism:
> Plagiarism is presenting work or ideas from another source as your own, with or without consent of the original author, by incorporating it into your work without full acknowledgement. All published and unpublished material, whether in manuscript, printed or electronic form, is covered under this definition, as is the use of material generated wholly or in part through use of artificial intelligence (save when use of artificial intelligence (AI) for assessment has received prior authorisation, e.g. as a reasonable adjustment for a student’s disability). Plagiarism can also include re-using your own work without citation.
The relationship between AI and plagiarism is complicated, to put it mildly. ChatGPT doesn’t care if you plagiarize it. Bard’s career isn’t going to suffer if you benefit from its work without giving due credit. DALL·E doesn’t have artistic integrity. It doesn’t matter to AI, but it should matter to you.
The work you produce as a student is a representation of you. It not only demonstrates the skills you’ve gained in your education; it is often the only “you” that some professors, employers, and peers will know before making important judgments and decisions that directly affect you. If you, for example, apply to a PhD program with a writing sample and personal statement that someone or something else wrote, you will be expected to produce comparable intellectual contributions on the spot in interviews and throughout your career.
This kind of misrepresentation cuts both ways. AI seems extraordinarily intelligent and capable, but it is not nearly as smart and capable as you are when it comes to telling the difference between reality and fantasy. You may be prone to human error, less-than-perfect writing, or other academic struggles, but AI is prone to hallucinations. It asserts completely inaccurate things with extreme confidence. When you claim AI’s work as your own, you are taking ownership over any inaccurate, confusing, bizarre, offensive, or otherwise problematic material it has created.
I believe AI can be used ethically, but I caution you to give serious thought to whether and how you choose to use it, apart from whether and how you are permitted to do so. The ethical complications of AI extend well beyond plagiarism and misrepresentation concerns; any use of large language models will have global consequences. It is your responsibility to weigh the risks and benefits with regard to your personal use, within the bounds of what is explicitly allowed.
LLMs are actively harming the environment and exploiting workers, in both cases disproportionately in low-income and already-exploited regions. These models perpetuate racist and classist societal biases and misrepresent non-Western cultures. We as users are complicit, whether or not we hold those biases ourselves.