“ChatGPT Is Really Helpful”

Universities are continuing to navigate the challenges and opportunities posed by artificial-intelligence (AI) tools such as ChatGPT. While many are wary of these tools’ power and capacity to enable student cheating, others point out that AI has legitimate research uses. Students (and faculty), however, need to be taught how to use it well. There are times in the research process when consulting AI is appropriate and times when it is not. Students need to understand the difference. This is what three experts at UNC-Chapel Hill are working to accomplish through the creation of online information modules.

An effort of Carolina AI Literacy, the modules were created by Daniel Anderson, director of the Carolina Digital Humanities and of the Digital Innovation Lab; Dayna Durbin, undergraduate teaching and learning librarian; and Amanda Henley, head of Digital Research Services. The project was funded through one of six seed grants awarded by UNC-Chapel Hill’s School of Data Science and Society. The grant is designed to encourage interdisciplinary research in data science.

The Martin Center spoke with Anderson and Durbin to discuss their work on these modules in greater detail. The following transcript has been edited for clarity and length.

Boosting Students’ Confidence in Using AI Tools

Martin Center: AI is a powerful tool, but it can be used poorly or incorrectly. What are you hoping students will get out of these tutorials?

Dayna Durbin: One of the things that we were hearing from students was that they were intrigued by generative AI tools. But they weren’t quite sure if they were staying within the bounds of the academic honor code and whether they might be accused of cheating or plagiarism if they used some of these tools. We wanted to boost their confidence in using AI tools and [for them to] feel like they had the skills to make those calls. So we created three modules to start. The first one focuses on how to prompt AI, or how to craft the prompts that you put into these AI tools. The module that I worked on focuses on fact-checking and bias, making sure that you’re avoiding misinformation when you use these tools, because they can “hallucinate,” or create information that’s not factually correct. [We wanted to make] sure that students understand that piece.

And then the third module focuses on avoiding plagiarism. Many of the same skills that you would use with other types of tools and resources also apply when using these generative AI tools. We just wanted to help students better understand how the tools work and how to use them to support the thinking and writing that they do on their own, rather than to replace it.

Daniel Anderson: Building on what Dayna said, we did want students to have a sense of being in charge of the decisions they make with AI, getting enough background understanding and enough practice with AI so that they don’t feel like they’re just consumers in this AI space but can produce their own knowledge by making smart choices.

Using Background Knowledge to Guide AI

Martin Center: Yes, one of the modules addresses the importance of students’ background knowledge on a given topic and emphasizes active thinking and how directly it relates to an AI’s effectiveness. Can you give us some examples of how students can use their knowledge of the subject matter to guide the AI to give them more specific and useful information?

Daniel Anderson: One of the things that we did was in the first-year writing course that I taught. The course often focuses on different genres, and one genre is the literature review, where you take a research topic and summarize different perspectives and different bits and pieces of information. So we were experimenting with ChatGPT and came up with a sample topic: noise pollution from server farms. If you’re in a rural community and there’s a server farm, it turns out it creates a whole bunch of noise pollution. And this is what Dayna and my other librarian colleagues might call a nice narrow topic. It’s not just something like “happiness”; it’s a very focused topic.

So we asked ChatGPT to give us a literature review on this topic, and it came up with this wonderful set of sources. There was an article about noise pollution in rural North Carolina from the Journal of Audio Aesthetics, or something like that. And we thought, “Wonderful, this saved us all this labor.” But as soon as we went to the library and started to look for it, it turned out that not only was there not an article with that name, but there wasn’t even a Journal of Audio Aesthetics. That had all been made up. What the students were able to do at that moment was decide that it made more sense for them to go straight to the library databases and use what they knew about the topics they were interested in: finding articles, surveying them, doing traditional research, rather than untangling what was legitimate in the AI output and what was not.

Then, in other instances, if they found a legitimate source and realized that parts of it were relevant but some weren’t, they could ask AI to summarize that piece for them quickly. That turned out to be an appropriate use of the technology. What they ended up doing was almost intuitively saying, “This isn’t going to be that helpful for me in one instance,” but in another instance recognizing, “This is how I want to use it.” And that’s the kind of literacy that we’re hoping for, a kind of situational awareness that students are able to develop.

Misinformation and Scholarly Sources

Martin Center: I would like you to elaborate on the problem of misinformation a little bit more. If a student wanted to find a reliable scholarly source, should they go the traditional route of a database like Google Scholar and only use AI to then summarize it?

Dayna Durbin: That’s kind of what we have been recommending in the library. We’re finding in particular that the free tools, like the earlier versions of ChatGPT, work on a prediction model, so if you ask them a question and they can’t find the information, the tool will just create some information out of thin air. And it can sound very convincing. As Dan mentioned, [AI] can come up with article titles and journal titles that sound very realistic. So what I’ve been coaching students to do is use the AI tools to help them fill in their background information, or even to narrow in on a research topic. Maybe they’re interested in something very broad. They can talk with a tool like ChatGPT and say, “Here’s my wider research interest, can you give me some narrower topics that I might investigate?” and that can help them focus on what they want to research.

Another great use that I’ve coached students on is coming up with keywords or search terms that they can use in a tool like Google Scholar or a library database. Sometimes just coming up with the right words to find the articles that you want can be a difficult process, especially if you’re new to a research topic and you haven’t quite gotten a handle on the language that experts use to describe it. ChatGPT is really helpful for that. Those are some ways that I’ve used it and coached students to use it: [Use AI in] the beginning stages of the research process, and then take those keywords and search terms to Google Scholar or another library database and use them to find legitimate sources that actually exist, rather than ones the AI tool has cobbled together based on its training data.

Daniel Anderson: I think that makes a lot of sense. Dayna’s describing a research and composition ecosystem, where there are library databases, there’s Wikipedia, there’s AI that can help you generate keywords. There’s a whole bunch of different options that you can use to explore. It’s useful to think about the stage that you’re in [and] what’s going to be most useful at any given moment, and then, in terms of misinformation, to know which stage is going to be more or less prone to providing helpful guidance for you. [We are] establishing a baseline for students, which is this “trust but verify” mode.

You might be asking [the AI] to summarize some of the major battles of the Civil War, to provide some background information before you dive into your specific topic. [The AI is] going to be pretty good at that; it has reasonably accurate historical information. You can save yourself some time by doing that. Then double-check: if there’s some battle that doesn’t ring true, you can track that down. It’s this kind of ecosystem model of “How does intellectual work happen?” and “When can you fit in some helpful tools?” It can save you some labor if you have all these Civil War battles [to learn about]. Then you say, “I’d also like to know the date and the location, please add those to the list.” It’ll do that for you very quickly. And you realize, “I want to be able to sort that, can you please format that [in] a spreadsheet?” It makes perfect sense to do that, rather than you copy[ing] and past[ing] every one of those into a spreadsheet. It’s knowing that that’s a little labor-saving move at a certain moment but not appropriate at a different one.

Proper Citation of AI Sources

Martin Center: AI is such a new tool available to students. How do you properly cite an AI source?

Dayna Durbin: That’s a great question. As you said, these are very new tools. I’ve only been using ChatGPT [for] a year or so. They change so rapidly that it’s hard to keep up with the influx of tools that are coming out. It’s interesting advising students on how to cite their sources or how to cite their use of AI tools. What we are recommending at a campus level is [to] double-check
