AI Goes Unregulated at UWM Over a Year After ChatGPT’s Release

The university recently created an AI task force to discuss how it can address student needs – but currently, there is no school-wide policy.

ChatGPT has faced much controversy for its ability to generate written work, and the artificial intelligence tool quickly found a home in the classroom. But whether ChatGPT and other forms of AI should be allowed to remain there has been hotly debated since the platform's launch.

UW-Milwaukee is one university where the debate continues. It does not currently have a school-wide policy on the use of generative AI in the classroom. Instead, UWM offers suggestions for professors managing its use, such as making expectations clear or teaching students how to cite work properly. With ChatGPT’s rising prevalence come calls for a universal policy to manage its use consistently across the institution.

ChatGPT, the most well-known form of AI in education, is a large language model trained on vast amounts of data, giving it a breadth of information but questionable reliability. Its intelligence is based on prediction rather than thought.

“What it actually does is, given a sequence of words, it predicts what the next word could be,” said Dr. Kaushal Chari, dean of UWM’s Lubar College of Business. “ChatGPT can generate new content from existing content. It is based on the large language model, and the large language model basically is trained on a sequence of words.”
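
For readers curious what that next-word prediction looks like mechanically, here is a minimal, hypothetical sketch. It assumes Python with the open-source Hugging Face transformers library and uses the small GPT-2 model purely as a stand-in for the far larger models behind ChatGPT; the prompt is invented for the example.

```python
# Toy illustration of "given a sequence of words, predict the next word."
# GPT-2 stands in here for ChatGPT's much larger model (an assumption for illustration only).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# The model is shown a sequence of words...
prompt = "The university does not currently have a policy on"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary

# ...and ranks the candidates for whichever word comes next.
next_word_scores = logits[0, -1]
top = torch.topk(next_word_scores, k=5)

for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(token_id)), float(score))
```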

Chari was appointed to Governor Tony Evers’ task force on workforce and artificial intelligence in October 2023. The dean serves on a subcommittee that advises the governor with informed predictions about AI and its future impact, particularly in education.

“We have to draw a fine line. As long as we use ChatGPT to generate new ideas, I’m fine with that; that is a legit use of the tool,” said Chari. “And then, of course the students would have to do some additional work to use those ideas and build upon those ideas using their own creative thought process … the fine line is that they can use ChatGPT to generate ideas to work on their assignments.” 

Though viewpoints vary on where the line should be drawn, virtually all agree that a policy to manage AI’s use should come from the department or the university. That policy could take the form of one universal policy or more specific policies for each area of study.

“We need to provide guidance on how we expect our students to use ChatGPT,” said Sarah Riforgiate, director of UWM’s Center for Excellence in Teaching and Learning. “Because you’re working with very different disciplines across various schools and colleges, different students have different needs … one blanket policy is not going to get at, ‘What are students’ needs?’” 

UWM recently created an AI task force to address these needs and to integrate AI effectively into the classroom. The task force has since formed a series of workgroups, each with a different area of focus across campus.

“How do we establish a safe set of AI capabilities that the campus can broadly leverage?” said UWM chief information officer Scott Genung, who co-chairs the AI task force. “And how do we prepare students for that workforce?”

Most advocates of AI say the strongest case for a policy is ensuring students understand the technology’s value while also learning its limitations and consequences. Students must become familiar with the evolving field of AI to keep adapting to the modern world.

“I see that there will be greater adoption of AI in all spheres of life. From banks and companies to educational institutions adopting AI for the delivery of their services,” said Chari. “They can also be an aide, a personal aide to people, as an AI assistant, helping persons. I have seen some demos of products like a personal assistant based on AI that could really help somebody, empower somebody to do things. So, I see more and more advanced AI technologies appearing on the horizon in the next few years. We have to be prepared.” 

Others hope for a policy that allows access, but with limits.

“The university should have a policy against it,” said Eric Lohman, teaching faculty in the journalism, advertising and media studies department at UWM. He later clarified that the policy should prohibit unrestricted use – there are uses for the technology but there should also be limits. “[I hope we] make it something where ChatGPT wouldn’t be that attractive to students.”


Despite the lack of regulation, AI is already showing up in daily work, and some instructors are finding ways to integrate it into the classroom.

“I have seen applications of generative AI in creating new text for poems or essays; from textual description, it can create images,” said Chari. “I think it is a great tool for doing a lot of creative work.” 

Instructors see ChatGPT’s best uses as idea generation and information gathering – helping students with the early stages of an assignment or project.

“It gives us a starting place to move forward and further nuance our ideas,” said Riforgiate. “I encourage my students to use it for idea generation.”

Because ChatGPT can aid learning when used properly, many professors believe it should be allowed only in the right context: to generate ideas or provide a framework for creative work, not the finished product.

“I help my students see AI and how it can be used … ChatGPT and AI more broadly can help them, so I want them to use them,” said Sunwall, a learning and technology consultant in the Center for Excellence in Teaching and Learning. “I’ve encouraged them to use it.”

But ChatGPT can also fabricate quotes and present false information. Because it is a large language model, it does not distinguish truth from falsehood, and it sometimes invents quotations or incorrect facts to support the arguments it presents.

Instructors and students have experienced this firsthand.

“I’ve seen [instructors] using ChatGPT to pull up information on a topic, and then talking directly with their students about where this information comes from and trying to locate quotes’ sources,” said Riforgiate. “[They] realize ChatGPT has completely fabricated the quotations.” 

Not only does ChatGPT often fabricate quotations, it also can provide information that is simply untrue.

“Seventy to 80% of the text it produces tends to be problematic,” said Andrew Larsen, senior teaching faculty member in UWM’s history department. “The current ChatGPT has less than minimal value.” 

Larsen said that many professors he has spoken with advocate forbidding its use in the classroom. His position may be a minority view among professors, but there is broad agreement that the generative AI program must not be used too liberally. And even if ChatGPT presents reliable information, Larsen said, its writing ability is questionable even in the most optimistic view.

Beyond questions of its reliability and writing ability, plagiarism is another reason many believe AI should be regulated at UWM.

“I think that something like AI technology could be useful as a pedagogical tool,” said Lohman. “It just so happens that right now it’s mostly being used for cheating purposes.”

“If they just use the output and use that as deliverable, there is a problem then – that should not be allowed,” Chari said. “The fine line is that they can use ChatGPT to generate ideas and get some ideas to work on their assignments … if students just turn in the output from ChatGPT and submit that as an assignment, then that would be an honor code violation.”

Because ChatGPT is trained on the creative work of previous writers and thinkers, some see its output as plagiarizing the work of those individuals.

“If you go to AI and you say, ‘I want you to write an essay for me,’ that’s clearly not your work – that is the work of lots of different people that have been amalgamated together by AI,” said Sunwall.


But a year and a half after ChatGPT’s public release, amid the burgeoning yet controversial field of AI, UWM has no policy on it. While broader guidelines on AI are in the works, a specific policy on ChatGPT is not.

“It doesn’t matter where Scott and I fall on that [ChatGPT],” said Dr. Purushottam Papatla, co-chair of the AI task force, alongside Genung. “It’s something for the university to find out where it should fall … At this point, I don’t think there’s a clear-cut policy anywhere in any university. And that’s the reason why we launched the task force – one of the goals is to come up with those policies.”

Genung offered further explanation for the university’s current lack of a policy.

“There are certainly other universities that are further along in their journey than we are,” said Genung. “Our purpose [of the task force] is to try and capture in both the things that we see here, the things we see elsewhere, but perhaps also the things that we’d like to see here and use that as a way to identify some key things that we’ll bring back to the task force … That will be a starting point. And then the task force will disband and that governance will be stood up to manage going forward.”

Genung did not give a timeline for the guidelines that he and Papatla hope to implement.

And front and center in the debate over AI’s place in education is the importance of ethics and of prioritizing the human experience.

“We need to teach students the ethics and the fundamental ways of thinking and doing in order to make the tools work most properly,” said Riforgiate. “I think it actually makes education much more valuable.” 

Sunwall pointed out the increasing need for discernment in an age of artificial intelligence.

“One of Wisconsin’s mottos is sift and winnow,” said Sunwall. “We need to sift and winnow the information that we get, because not all of it is going to be good. We’ve got to separate that chaff from the wheat … but I think there’s a danger, not about cheating but more about scrubbing ourselves out of the work. If you read things like the founding documents of the United States, AI would certainly change some of the run-on sentences or phrasing. But that’s the poetry – that’s why we read it or come back to it.”

Instructors and policymakers alike are weighing these ethical questions, and Papatla and Genung emphasized the need for strong ethics amid changing times.

“We want to make sure that whatever we do is ethical and advances the well-being of the university community,” said Papatla.

“What we believe and what many other institutions believe is that these technologies are a means to enhance the work that we do,” Genung said. “What we’re trying to do is make sure that the way that we’re exploring these are opportunities for how different groups across campus can leverage these technologies to extend their reach and their creativity and outcomes … And if we can accomplish that, then that’s really the work that we’re pursuing.”