There is no doubt that artificial intelligence poses dangers to personal security, employment and classroom learning. Nevertheless, many professionals in business and education maintain that tools such as chatbots have the potential to become among the most useful resources for problem solving, customer service and the mundane tasks most workers would rather hand off. As AI rapidly improves at imitating human intelligence, the line between what is useful and what is ethical has become increasingly blurred.
“[AI chatbots] can communicate with people in a way that’s almost indistinguishable between AI and a human,” Drake University Associate Professor Chris Snider said. “But, you know, hopefully we’ll create some new jobs down the road for humans because of these tools.”
Companies that run chatbots, such as Botpress, Intercom and OpenAI, have free rein over the information users give them through their interactions. People who use technology like ChatGPT might think their conversations are confidential, but if they were to read the fine print, they would find that is not the case.
“If you’re entering information into [AI chatbots] right now as a free user, they have the right to use and learn from that information,” Snider said. “If you pay for these models, a lot of the time, you can turn that off so that they do not retain your data. But I think we need to be aware of the information you put into these tools, especially if you’re using them for free.”
Currently, one of the biggest concerns about AI is the possibility of chatbots taking over jobs, most notably in customer service, research and potentially even education. Some people fear that AI companies will overstep their bounds and that society will slowly transform into a place run entirely by computers. While some argue government intervention makes that impossible, others believe a society run by AI would be a beneficial development.
“Technology is changing the way we do things, especially with jobs, but our educational institutions are almost concrete,” Daniel Lamb, FHN history and psychology teacher, said. “It’s almost like we refuse to adapt in certain directions. As a teacher who would be sacrificing my own job, if an AI is teaching class in a way that better uses our resources and kids learn better that way, then I’m all for it. But I do think, at some point, missing out on social interaction with a human will lead to consequences we may not see immediately.”
Chatbots can also be damaging on a more social level. The prospect of AI acting as a real person to comfort someone in a dark place is as old as AI development itself. According to Lamb, AI friends, boyfriends or girlfriends can be damaging to someone’s psyche.
“When you need something or somebody to communicate with, and for some reason you can’t find a friend to relate to, you create an artificial one,” Lamb said. “In really formative years, if you miss out on social interaction, it could really permanently alter you.”