AI simulations of dead people risk “unwanted digital hauntings”, researchers have warned.

A new study by ethicists at Cambridge University argues that AI chatbots capable of simulating the personalities of people who have died – known as deadbots – need safety protocols to protect surviving friends and relatives.

Some chatbot companies are already offering customers the option to simulate the language and personality traits of a deceased loved one using artificial intelligence.

Ethicists from Cambridge’s Leverhulme Centre for the Future of Intelligence say such ventures are “high risk” due to the psychological impact they can have on people.

“It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but of those who will have to interact with the simulations,” said co-author Dr Tomasz Hollanek, from the Leverhulme Centre.

“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating.”

The findings were published in the journal Philosophy & Technology in a paper titled ‘Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry’.

The study details how AI chatbot companies that claim to be able to bring back the dead could use the technology to spam family and friends with unsolicited messages and adverts in the deceased person’s digital likeness.

Such an outcome would be the equivalent of being “stalked by the dead”, the researchers warned.

“Rapid advancements in generative AI mean that nearly anyone with internet access and some basic know-how can revive a deceased loved one,” said study co-author Dr Katarzyna Nowaczyk-Basinska.

“This area of AI is an ethical minefield. It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.

“At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”

Recommendations from the study include safeguards around terminating deadbots, as well as improved transparency in how the technology is used.

In scenes reminiscent of the Black Mirror episode ‘Be Right Back’, people are already using chatbots in an effort to emulate dead loved ones. In 2021, a man in Canada attempted to chat with his deceased fiancée using an AI tool called Project December, which he claimed emulated her personality.

In 2022, New York-based artist Michelle Huang fed childhood journal entries into an AI language model in order to have a conversation with her past self.

Ms Huang told The Independent that it was like “reaching into the past and hacking the temporal paradox”, adding that it felt “very trippy”.