A young influencer couple recently blamed ChatGPT after they missed a flight to Puerto Rico. According to The Daily Mail, the AI chatbot gave them wrong information regarding a visa to enter the Caribbean island. Mery Caldass and her partner, Alejandro Cid, shared a TikTok video recounting their ordeal.
In the viral video, which has now amassed more than 6 million views, a distraught Mery is seen crying after the pair were not allowed to board. As Cid comforted her, she said,
"Look, I always do a lot of research, but I asked ChatGPT and they said no."
The couple had apparently asked ChatGPT whether they needed a visa to enter Puerto Rico. According to them, the chatbot said they didn't, yet they were still denied boarding because they lacked the required travel authorization. A distraught Mery added that she no longer trusted ChatGPT and even called it a "son of a b**ch".
Amid the chaos, Mery at one point claimed that this might have been the chatbot's revenge for her past insults. She said,
"I don't trust that one anymore because sometimes I insult him, I call him a b*****d, you're useless, but inform me well...that's his revenge."
According to The Daily Mail, many netizens reacted to the viral video. Some questioned the couple for relying solely on ChatGPT while planning a trip. Others defended the chatbot, pointing out that it hadn't given wrong information, since Spanish tourists don't need a visa to enter Puerto Rico; they do, however, have to apply online for an Electronic System for Travel Authorization (ESTA).
Many believed the couple may simply not have asked ChatGPT the right question.
This reportedly was not the first time people got into trouble for following ChatGPT's advice without cross-checking it. As per The Daily Mail, the news about the Spanish influencer couple getting stranded at the airport surfaced a day after something far more severe happened to a 60-year-old man.
The man reportedly asked ChatGPT how to remove salt from his diet after learning about the ill effects of table salt. He ended up replacing table salt (sodium chloride) with sodium bromide, a compound now more commonly used to clean swimming pools.
In the early 1900s, sodium bromide was an ingredient in a few medications. However, it is now known to be toxic when consumed in large amounts. According to doctors, the man was suffering from bromism. As per a case report published in a journal of the American College of Physicians,
"He had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning."
The Daily Mail reported that symptoms of bromism include psychosis, delusions, skin eruptions, and nausea. The man reportedly rushed to the emergency room, claiming that his neighbor was trying to poison him. The outlet further reported that his doctors queried ChatGPT themselves and observed that the chatbot suggested swapping table salt (sodium chloride) for sodium bromide without warning of any health risks.
It has further been confirmed that the man had no prior history of mental illness. According to The Guardian, he tried to escape within 24 hours of being admitted to the hospital and was subsequently treated for psychosis. He also reported symptoms such as facial acne, excessive thirst, and insomnia, all consistent with bromism.
In a statement given to FOX News, OpenAI said,
"We have safety teams working on reducing risks and have trained our AI systems to encourage people to seek professional guidance."
Sam Altman appeared on This Past Weekend w/ Theo Von last month and explained why ChatGPT shouldn't be trusted completely. On the podcast, Altman noted that people ask ChatGPT deeply private questions without understanding the consequences.
According to Altman, OpenAI would have to produce those conversations if it were subpoenaed. He stated,
"If you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, and we haven’t figured that out yet for when you talk to ChatGPT."
He added,
"If you go talk to chat about your most sensitive stuff, and then there’s like, a lawsuit or whatever, like, we could be required to produce that."
As far as the latest video about the Spanish couple is concerned, no further information is available.