
Features

Can generative AI and academic integrity co-exist?

The line on generative AI in academia is blurred. Should the use of AI be discouraged, or is compromise possible?

No matter how it's done, passing off someone else's intellectual property as your own is plagiarism. However, the rule has become less clear with generative artificial intelligence (AI).

In November 2022, OpenAI released ChatGPT, a chatbot built on a large language model. Anyone can give ChatGPT a prompt that specifies the topic, main points, length, or degree of formality, and the AI will do the writing for them. If the response isn't exactly what's needed, they can request adjustments or fine-tune their original prompt until the desired content and quality are achieved. While it's much faster than researching and writing an essay on your own, the work produced is not yours.

Since the creation of ChatGPT, plagiarism has become a larger concern. Because generative AI is easily accessible and its use is often hard to detect, opinions differ on whether tools such as ChatGPT should be banned from academia or embraced as learning aids. So, the question is: can generative AI such as ChatGPT co-exist with academic integrity? Or is compromise the only option, now that ChatGPT is already so widely used?

"I'm really excited about the potential and the ways in which we can harness this tool to do amazing things," vice-provost (learning initiatives) and AI taskforce chair says

The University of Alberta's Office of the Provost and vice-president (academic) has an AI taskforce chaired by Mündel, the vice-provost (learning initiatives). The taskforce, made up of U of A experts, aims to "help us think through what the arrival of tools like ChatGPT and tools that work in the visual realm mean for our learning environment," Mündel said.

The taskforce is making recommendations on dealing with generative AI in the U of A鈥檚 learning environment. As well, it provided specific guidance for courses in winter 2023. The recommendations are based on consultations with students and U of A stakeholders.

The topic of academic integrity and generative AI has come up, Mündel said, but the conversation sometimes focuses on students using AI to cheat.

"I don't think students are setting out to [cheat]. This is a new tool. Wikipedia was a new tool. We need to think about how to incorporate these new tools into our learning environments," Mündel said.

"But I would say [academic integrity is] actually not the biggest part of the conversation. It's really about, how do we prepare our students for life after the U of A? How do we incorporate generative AI into all of those different realms of our lives afterwards?"

For students seeking clarity on the use of generative AI such as ChatGPT in class, Mündel said that asking questions is key.

"If it's not clear from the syllabus, course outline, or an assignment sheet, have a conversation with the professor or teaching assistant. Instructors are incorporating [AI]. That also signals to students really clearly, 'here's how I want you to engage with generative AI in this course, in this assignment.'"

Academic integrity and generative AI can co-exist, according to Mündel. He added that through generative AI tools such as Grammarly and suggestions on Google Apps, the two are already co-existing. However, when it comes to a tool like ChatGPT, other factors should be considered.

"I think it really depends on how one is using it. Are you using that tool for idea generation? Are you using that tool to think through a bunch of data? Or are you using it to write your assignment?" Mündel said.

He added that another aspect students should consider is the security of generative AI. Mündel said that everyone should be careful about the information they input into such tools.

"You've written this amazing essay and you've put it into generative AI to give you some feedback. How is that using your intellectual property, your work, in the development of that model?"

The U of A's Code of Student Behaviour states that "no student shall represent another's substantial editorial or compositional assistance on an assignment as the student's own work." Mündel said that this still applies to generative AI such as ChatGPT, in the case that a student is "misrepresenting the assistance of ChatGPT."

"If you read the code, it doesn't explicitly say generative AI. We will make sure it does, so that there's no question for anybody," Mündel said. Ultimately, Mündel is excited about the opportunities that generative AI tools can bring.

"There is this sky-is-falling narrative out there about generative AI. I'm really excited about the potential and the ways in which we can harness this tool to do amazing things."

"I think generative AI can be used in a way that sparks students' curiosity," philosophy PhD candidate says

Yoldas, an assistant lecturer and PhD candidate in the philosophy department, will be teaching a course on ethics and AI in winter 2024. According to Yoldas, it's important to consider the ethics of using generative AI.

"Ethics is important because living with other people and other sentient beings requires our moral consideration. For us to live well and treat other living beings and non-living beings [well], that requires our moral consideration," Yoldas said. "What is the right thing to do in this situation? What would be wrong?"

The course will cover ethical challenges caused by AI technologies such as generative AI. Yoldas said that there are two dangerous assumptions being made about the use of AI in academia: the assumption that students will cheat when given the chance and the assumption that instructors want to control students. 

"[Cheating] shouldn't be the starting point of conversations about honestly using generative AI. The goal of instructors and higher education institutions is not, I think, to control students on what to do or what they don't," Yoldas said.

However, according to Yoldas, using generative AI ethically in an academic setting requires transparency and collaboration from both the instructor and the student. For unethical uses such as plagiarism, she thinks the definition should be reassessed.

"At this point at least, [AI] is not somebody, but it is an entity. It's a technological entity and artifact. So we should look at reassessing our understanding of plagiarism," Yoldas said. She added that there are additional challenges with generative AI such as ChatGPT, including a lack of regulations.

"We are asking students not to plagiarize, but we don't have any regulations in place for these AI systems to ethically use the data that's available to them. I do feel like there should be more initiatives about regulating generative AI systems' use of all the data all over the internet," Yoldas said.

She added that the issue of using generative AI ethically in academia is "an issue of lifelong learning," related to "the student's ability to ask and inquire about things that they are curious about."

"I think generative AI can be used in a way that sparks students' curiosity, that makes students think about which generative AI tools they can use in specific disciplines. And also how they can use it later on in the workplace, and in their life in general."

"There's lots of people claiming right now that we're two or three or five years away from AI being human level intelligence, and that it will take over. That's absolute garbage nonsense," U of A assistant professor says

Guzdial is an assistant professor in the U of A's department of computing science. He researches computational creativity, or the intersection between machine learning and creativity.

Guzdial believes that AI can be integrated into academia. In certain instances, it makes sense to use machine learning tools like ChatGPT. However, he said it's complicated and there are "pluses and minuses."

He mentioned that in computing science, there are AI classes where students are expected to use AI for assignments. He also mentioned students using AI as an ideation tool, a starting point for their assignments. However, he doesn't recommend that students use AI tools directly.

"There's issues around copyright. There's [concerns that AI] will make up fake things. There's issues around it just not being particularly good at certain things," Guzdial said. "In those cases [where] people will attempt to use it rather than learning themselves, which is obviously not ideal."

So far, Guzdial has seen a big impact from AI on academia, particularly when it comes to professors who are afraid of students plagiarizing. He also mentioned people are exploiting that fear by saying they have tools to detect AI-generated text and work. Guzdial said there are a lot of "false positives" from these detection tools, as even small edits can impact detection.

However, Guzdial still believes that academic integrity and AI can coexist. He said that this is an issue of good versus bad pedagogy and creating assignments that aren't "cheatable or hackable."

"If … a professor or instructor is saying, 'write me a five-page essay on blank.' I think that's a terrible idea, in terms of how you actually teach someone something. That's the kind of thing that you could use ChatGPT or any of these other large language models to produce content for."

For a series of videos posted by the U of A's official Instagram account called Fact or Cap, Guzdial quizzed students on their knowledge of AI.

Guzdial said that these videos serve to improve AI literacy and educate students and the general public. He added that it's important for people to understand what the real risks are amidst the "apocalyptic nonsense that people are throwing around."

Right now, there is a lot of fear surrounding the future of AI, which is creating lots of misconceptions. 

"There's lots of people claiming right now that we're two or three or five years away from AI [having] human level intelligence, and that it will take over. That's absolute garbage nonsense. There's no evidence that we're going to get there. It's just ridiculous," Guzdial said.

On the other hand, there are people who underestimate the risks of AI, believing it isn't as capable as widely assumed, Guzdial said.

Guzdial considers AI to have mostly positive potential for the world and academia, but thinks that "we won't get there for free."

Guzdial is worried about AI鈥檚 impact on the workforce. He believes that AI will begin replacing jobs held by humans, but humans will still have to go in and check over the work done by AI. The result would be underpaid and undervalued work done by people. 

"I am worried about people misusing artificial intelligence or misusing the fear of artificial intelligence. If we leave it alone, it is certainly a big problem," Guzdial said.

"But I'm not worried, for example, about artificial intelligence becoming sapient or sentient and destroying all humans."

Vice-president (academic) cautions students to "safeguard themselves" when considering using AI in assignments by setting clear expectations with professors

Pedro Almeida, vice-president (academic) (VPA) of the U of A Students' Union (UASU), sits on the AI taskforce with Mündel, representing the student view.

Since he joined the taskforce, it has largely focused on creating a list of recommendations centred on transparency and communication. Almeida said that professors will have different expectations on whether or not AI is allowed in their courses, but they need to disclose that information to students ahead of time. To Almeida, it's vital that professors set expectations so students know exactly when it's appropriate to use AI on assignments.

He added that it's important to "[understand] the context of AI and [bring] it back to the UASU" so they can properly advise students on how to approach AI in their academics. Providing students with potential courses of action is a priority for the UASU, he said.

Almeida believes that it's "too early to tell" what the impacts of AI will be, as they will come in the next few years. However, he has seen a narrative of cautioning students regarding the use of AI.

"The main thing that I've seen as VPA is just this push towards students to be careful as they approach the use of AI and ensure they communicate well with instructors to avoid any issues of academic integrity. Especially as the new academic integrity policy comes into place."

When it comes to students using AI in their assignments, Almeida said it's course-specific. Some instructors are okay with it, while others are not, so it's up to students to clarify what's allowed and what isn't.

As AI develops and becomes more integrated into academia, policies that apply to all courses should be considered, Almeida said. But, because the U of A offers such a wide variety of courses, some decisions should be left up to professors. In the end, however, it's "everyone's responsibility" to ensure students are prepared to engage with AI ethically and responsibly, Almeida said.

Almeida likened the use of AI in academia to when calculators were first introduced. 

"At first, there was a lot of fear around that. But as time went on, instructors were better able to adapt, students were better able to adapt. Still, in some courses, you may not always be allowed to use certain calculators or a calculator, but the expectation is in the communication."

Currently, the taskforce is drafting an academic integrity policy, which Almeida has been providing input for based on student experiences. When it comes to academic integrity, Almeida said their primary objective is providing as much clarity as possible to students, since it isn't possible to lay out the punishments for each offence. If they did that, the university would be removing the nuance from each situation, Almeida said.

Above all else, Almeida cautions students to "safeguard themselves" ahead of using AI in their assignments. If a student is considering using AI technology, he suggests they ask their professor over email, so they have the response in writing. That way, they are entirely protected in case they are accused of cheating.

"On our end, we've done our best so that the academic integrity policy protects students better. We also want to make sure that students don't have to go through unnecessary stress, if they can help it. The best way to do that is through communication in writing ahead of time."

For many, the direction of AI is unclear, especially in an academic context. There is no clear path to follow when it comes to AI integration, and there are many valid concerns. However, it's hard to plan for something when it hasn't fully happened yet.

"A lot of the answers will unveil themselves as both processes develop. As the policy is in place and we see what happens and we fine-tune it over time, and as AI develops. I think that will unfold and it will make clear what the future looks like," Almeida said.

For now, all students and professors can do is prepare and communicate with each other. Until we have a clear understanding of what the future of AI in academia looks like, there's no point in fearing the worst.

Lily Polenchuk

Lily Polenchuk is the 2024-25 Editor-in-Chief of The Gateway. She previously served as the 2023-24 Managing Editor, 2023-24 and 2022-23 News Editor, and 2022-23 Staff Reporter. She is in her third year of a double-major (honours) in English and political science.

Katie Teeling

Katie Teeling was the 2023-24 Editor-in-Chief and the 2022-23 Opinion Editor at The Gateway. She's in her fifth year, studying anthropology and history. She is obsessed with all things horror, Adam Driver, and Lord of the Rings. When she isn't crying in Tory about human evolution, Katie can be found drinking iced capps and reading romance novels.
