Kansas State hosted its third annual AI symposium in Hale Library Oct. 14-16. The event focused on expanding participants’ knowledge of recent changes and adaptations in artificial intelligence through panels, workshops, discussions and debates, according to the official website of the K-State AI symposium.
Associate professor Cydney Alexis said participating in these events is crucial for faculty and students.
“People need to have a base level of understanding of what AI is, AI literacy or fluency as people are calling it, and also understand things like risks, benefits and dangers [of using AI],” Alexis said.
As the use of AI continues to grow, knowing how to use the technology safely and ethically is a necessary skill, Alexis said.
“I want people to have basic AI literacy or fluency; I want people to know how to use the tools and to be able to make strong decisions about how they want to use them,” she said.
Having a strong sense of AI literacy and ethics means knowing not only how to use AI, but also when to use it. Alexis said uploading documents one does not own is an example of unethical use.
“I don’t take … copyrighted PDFs and put them into an LLM [large language model] because they are not my property,” Alexis said. “I think anyone who’s using the tool should have the right to be able to think through the questions of privacy or access.”
After attending the symposium, Madeline Harp, junior in computer science and math, said understanding how generative AI processes the uploaded information is important.
“I don’t think people talk about how generative AIs, like ChatGPT, for example, learn through user habits and uploads enough, at least not our generation,” Harp said. “I think that even if it doesn’t change how you use it [AI], you should still know that your information is no longer only yours.”
Generative AI has quickly made its way into the workforce and is viewed as a valuable skill set, according to the World Economic Forum. While the state of Kansas has not enacted an AI policy for its universities, other institutions, such as Northeastern University, welcome AI with open arms.
The University of Florida has partnered with NVIDIA to provide students with firsthand access and experience with AI at all levels.
“It’s an AI campus,” Alexis said. “They got a large supercomputer from NVIDIA, they are going to integrate AI at every level and every discipline. They are the leaders — one of the global leaders in AI — and students who attend UF get training in AI in every discipline. They’re very aware of it.”
Students who lack access to AI lessons and tools may be at a disadvantage down the road, Felipe Hicks wrote in “Disadvantages of Artificial Intelligence in Education.” Employers may be more likely to select the resume of a candidate with a range of AI knowledge and skills over one without.
“Yeah, it [AI] sets them up,” Alexis said. “But you know, we just don’t know what that teetering line is or what inequalities it will produce.”
Alexis said AI has not increased the amount of cheating, but has instead changed how cheating takes place.
Denise Pope and Victor Lee, researchers in the Stanford University study “What do AI chatbots really mean for students and cheating,” found that the number of students who cheat has remained the same or slightly decreased since the previous survey.
“For years, long before ChatGPT hit the scene, some 60 to 70 percent of students have reported engaging in at least one ‘cheating’ behavior during the previous month,” Pope said. “That percentage has stayed about the same or even decreased slightly in our 2023 surveys.”
The data also suggests students have shifted how they cheat to work alongside AI tools like ChatGPT and Claude.
“Many said they thought it should be acceptable for starter purposes, like explaining a new concept or generating ideas for a paper,” Lee said. “But the vast majority said that using a chatbot to write an entire paper should never be allowed.”