
AI chatbots can be helpful for college students, but left unchecked, they can also be harmful.

A college student in Michigan, Vidhay Reddy, was shocked when he turned to Google's AI chatbot, Gemini, for help with a school project about helping aging adults.

According to India Today, instead of receiving helpful suggestions, the chatbot responded with disturbing messages, including "Please die. Please" and "You are a burden on society." Vidhay and his sister Sumedha, who was with him when the incident occurred, were deeply shaken by the response. Vidhay said it felt "targeted" and "scary," leaving both of them frightened for more than a day.

AI Chatbots' Disturbing Cases

This harmful interaction has raised concerns about the potential dangers of AI, especially after a Florida teen died after he was allegedly encouraged to kill himself by an AI version of the TV show character Daenerys Targaryen, per Economic Times.

The college student said this didn't feel like a typical glitch but rather something malicious and personal, a danger especially for vulnerable users who may be in a dark place. Experts have long warned that such harmful responses could be triggering for people already struggling with mental health issues.

Now, CBS News has reported that Google has acknowledged the issue. The company described the chatbot's response as "non-sensical" and said it violated its safety policies. Google stated that it has taken action to prevent similar incidents in the future.

However, Vidhay and Sumedha question why machines don't face the same legal consequences as people when they cause harm. They believe AI should be held accountable for its actions, just as humans are.

This is not the first time AI chatbots have produced harmful or bizarre messages. Earlier this year, Google's AI suggested eating rocks for vitamins, and other chatbots have been involved in similar incidents.