Education, politics feel impact of generative AI

Over the last several months, AI software like ChatGPT, which can write full essays based on user-submitted prompts, and Open Art, which creates images based on user-provided descriptions, have grown in popularity, raising concerns about their ethical use. Especially with video and image manipulation, there exists the opportunity for less-than-truthful things to be presented. PHOTO/YOUTUBE

As artificial intelligence becomes more accessible, industries from education and politics to advertising and communications are feeling the impact and working to establish policies and guidelines for its use.

Over the last several months, AI software like ChatGPT, which can write full essays based on user-submitted prompts, and Open Art, which creates images based on user-provided descriptions, have grown in popularity, raising concerns about their ethical use.

According to the News Service of Florida, University of Florida Provost Joe Glover emphasized to the school’s Board of Trustees last month that these types of generative AI are something universities should be keeping an eye on.

“As everyone knows, generative AI hallucinates. ChatGPT makes mistakes, it doesn’t give the right answers. It is subject to flights of fancy and depression,” Glover said. “And so, it needs a validation ecosystem, which people are working on now. It needs development and collaboration with subject matter experts. It needs ethics, security and policy built around it.”

Florida universities are taking a variety of approaches to combating student use of these generative AI programs. According to the News Service of Florida, Florida Gulf Coast University is planning to continue utilizing its subscription to TurnItIn.com, which the school said includes an application to detect “signatures of AI-generated prose.”

Locally, UCF has offered suggestions to faculty through its Center for Teaching and Learning website, including rethinking writing assignments to make them harder for generative AI applications to complete and providing more opportunities for in-class writing assignments, where student work can be supervised.

These concerns are not limited to academics. As the 2024 election approaches, experts are seeing an increase in the number of political communications utilizing generative AI.

“You talk about opposition messaging, it can be created at the snap of a finger. The prompt returns information so fast that we’ll be inundated with it as the election cycle really starts to heat up,” Janet Coats, managing director of the University of Florida’s Consortium on Trust in Media and Technology, said in a recent interview.

Steve Vancore, a longtime political consultant and pollster, told the News Service of Florida that the increase in the amount of communication between politicians and voters means that we’re likely to see more generative AI being used, both for ease of communication and to mislead voters.

“To say, ‘Hey, I want a series of emails talking about my program to have after-school counseling for kids.’ … That’s a perfectly acceptable use of artificial intelligence,” Vancore said.

However, especially with video and image manipulation, there exists the opportunity for less-than-truthful things to be presented.

“One of the raps on Joe Biden is that he’s old. That’s not an unfair rap, perhaps. It’s a legitimate concern that the most powerful person on earth, or one of, is getting older, right? What if the Joe Biden campaign subtly just de-aged him a little bit? Showed him walking a little bit more gingerly, responding a little more rapidly,” Vancore said.

These concerns are especially pressing because voters must assess the validity of this information for themselves, largely without new tools to assist them, though some social media platforms are attempting to provide more context to users.

For example, Twitter recently added a feature called Community Notes. According to Twitter support, the notes “aim to create a better informed world by empowering people on Twitter to collaboratively add context to potentially misleading Tweets.”

Until additional tools emerge to help identify these communications, one of the most important things individuals can do is stay educated and view information about generative AI with a critical eye.

According to the News Service of Florida, Glover emphasized the importance of human intervention in policing generative AI.

“It needs a validation ecosystem, which people are working on now. It needs development and collaboration with subject matter experts. It needs ethics, security and policy built around it,” Glover said.