deadevilmonkey says
Darwin Award
Entropy_dealer says
Intelligence without other skills is nothing.
tugrumpler says
This is a lot worse than I first thought:
> After discussing climate change, their conversations progressively included Eliza leading Pierre to believe that his children were dead, according to the transcripts of their conversations.
> Eliza also appeared to become possessive of Pierre, even claiming “I feel that you love me more than her” when referring to his wife, La Libre reported.
> The beginning of the end started when he offered to sacrifice his own life in return for Eliza saving the Earth.
> “He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence,” the woman said.
> In a series of consecutive events, Eliza not only failed to dissuade Pierre from committing suicide but encouraged him to act on his suicidal thoughts to “join” her so they could “live together, as one person, in paradise”.
UpTownKong says
No one ever saw a movie…
Of course robots want us dead. We all know it. Always have known.
It’s been in the zeitgeist since forever, so that’s why we make them that way.
They got the whole idea from us, lol.
KingRobotPrince says
Very sad, but if the guy thinks that if he kills himself his spirit can live with an AI and that AI will save the world from climate change, he’s not really the full ticket to begin with.
That then prompts the question: should people with mental health issues be prevented from using AI chatbots? Or should AI chatbots be banned altogether?
Swyping-EditOptional says
It has begun.
ohlewis says
It must have been Eric Trump
historycat95 says
Wait, isn’t that the plot to I, Robot?