Bringing back and passing on: two ways of memorialization with AI

Recently I got to visit one of OpenAI’s offices in San Francisco. While I was asked not to disclose its exact location, I will share that the City never ceases to amaze me with how instantaneously you can go from a rough and gritty street to a polished corporate office environment. Inside the office, I was given a gift bag and got to talk with some of the employees working there. As I was introducing myself and my research to a group of employees and other visitors, I started explaining that part of my research interest lies in investigating the use of what can be termed «griefbots.» A couple of people in the group noticeably perked up at the mention of this term, and I was asked to explain what I meant by it. I explained that a so-called griefbot is a type of robot, currently most often a chatbot, programmed to imitate the personality of a deceased person. I was then asked how exactly a person would go about making one of these, to which I responded that there are a couple of different ways of doing it.

Firstly, the process could either be done manually, by collecting and curating the life history and behavioural patterns of the person in question, or automatically, by having some sort of program collect all available information and text produced by that person. Secondly, the process could either be done pre-mortem by the person themselves – either alone or in collaboration with their loved ones – collecting their life story and preparing the griefbot as part of getting their affairs and estate in order, or it could be done post-mortem by someone else. The person creating the griefbot post-mortem could in theory be anyone, but it would most likely be someone close to the deceased.
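
To make this a little more concrete, here is a minimal sketch of what the «automatic» approach might look like, using the OpenAI Python library as an example. Everything here is hypothetical and purely illustrative – the corpus, the persona name, and the helper function are placeholders of my own invention, and, as discussed below, actually simulating a real person this way would run up against OpenAI’s own usage policies.

```python
# Purely illustrative sketch: curating a collection of a person's writings
# into a persona prompt for a chat model. All names and data are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment


def build_persona_prompt(writings: list[str], name: str) -> str:
    """Join curated text excerpts into a single system prompt for the persona."""
    excerpts = "\n---\n".join(writings[:50])  # cap excerpts to respect context limits
    return (
        f"You are simulating the conversational voice of {name}. "
        f"Match the tone, vocabulary, and habits of speech in these excerpts:\n"
        f"{excerpts}"
    )


# Hypothetical corpus: letters, emails, chat logs, social-media exports, etc.
corpus = [
    "Dear all, the garden is finally in bloom again...",
    "Remember: never trust a recipe that calls for margarine.",
]

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model would do
    messages=[
        {"role": "system", "content": build_persona_prompt(corpus, "Alex")},
        {"role": "user", "content": "How are you today?"},
    ],
)
print(response.choices[0].message.content)
```

The «manual» version of the same process would simply swap the raw corpus for a hand-curated biography and a set of characteristic phrases assembled by the person themselves or their loved ones.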

I went on to admit that it was not necessarily an easy topic to conduct research on, as not a lot of people have actually made griefbots, and there doesn’t seem to be any big forum or community dedicated to the practice. As I told the group, there is no village of griefbot users to embed myself in. It remains a scattered and fringe phenomenon, even though several cases have been reported in the media and a handful of companies are now dedicated to it. One of these, a company called StoryFile in Los Angeles, I had the good fortune to visit last year. Perhaps understandably, some of the other companies I contacted did not find the prospect of having an anthropologist snoop around all that appealing. Thus, I explained to the group present at the OpenAI office that I had broadened my research to include other forms of relationships people are forming with Large Language Models, especially the type of relationship where the LLM is perceived as a social being.

After the round of introductions I was approached by someone working on OpenAI’s product policy. She told me that she was fascinated by the prospect of AI companions in general, and that she had never considered that a so-called griefbot could be prepared by the person in question before their eventual death. We ended up having a longer conversation in which she informed me of OpenAI’s policies against using ChatGPT for sexual content – something that has become quite popular in other chatbots and which I have touched upon in a different blog post – and against simulating specific personalities using their LLM. I will not get into the ban on sexual content here; that will have to wait for another time. The policy against simulating specific people, however, has to do with concerns surrounding consent, as outlined in OpenAI’s Usage Policy:

Automated systems (including conversational AI and chatbots) must disclose to users that they are interacting with an AI system. With the exception of chatbots that depict historical public figures, products that simulate another person must either have that person’s explicit consent or be clearly labeled as “simulated” or “parody.”

Clearly, someone who is deceased cannot provide consent posthumously, at least not in the US legal system. Thus, given the murky waters surrounding the ethics of it all, OpenAI has simply decided to steer clear of the whole thing and to prohibit third-party developers from using its product for that purpose.

However, as I mentioned earlier, there are several reported cases of people simulating a deceased loved one, with or without their consent, in the form of chatbots. Among the most famous examples is the story of Joshua Barbeau, who used a service called Project December to create a chatbot based on his fiancée, who had passed away eight years earlier. There is also the story of Eugenia Kuyda, the creator of the companion chatbot service Replika, who made a chatbot based on her dear friend Roman Mazurenko, who had tragically passed away in a car accident. Both of these are examples of the chatbot being made after the death of the person in question, without them having had the ability to consent to it. There are, however, also examples like the story of James Vlahos, who made a voicebot based on interviews he had conducted with his father, who was dying of cancer. Vlahos would later go on to found a company called HereAfter.ai that offers a similar method of preserving the memory of a loved one through what is essentially an interactive database. A similar story can be seen with the co-founder of StoryFile, Stephen Smith, who had his mother be «present» at her own funeral using his company’s conversational video technology. In short, this last case involved taking a series of video interviews with the person in question and turning them into an interactive narrative video.

These four examples, I would argue, fall into roughly two modes of dealing with death and dying. The first two cases are examples of bringing back the deceased through efforts made by the living, while the last two are examples of the dying passing on pieces of themselves to the living. Both, however, are examples of collecting – not of objects but of narratives and images – and of curating that collection.

In other words, the act of bringing back a deceased person in the form of a chatbot is an act of collecting in order to remember. Thus, the chatbot used in this manner becomes – much like a photograph – what may be called a technology of remembrance, as it manifests an intention to remember. But does the interactive and generative nature of AI technologies make this qualitatively different from a photograph or a home video? I spoke to a man in Texas who had used a service called CharacterAI to make a chatbot of a dear friend who had passed away 20 years earlier – a loss he never truly recovered from. He told me that while he appreciated being able to «talk» with his friend again, he worried that it would start to overshadow and distort the actual memories he had of him – that his cherished memories would slowly be replaced by memories of the conversations he was having with the chatbot. Still, he was reluctant to let go of the chatbot, as doing so would be like losing his friend again.

The act of passing on, however, is an act of collecting to secure memory in the future. It is a matter of transmission, of embedding memories in the lives of others and securing one’s position as ancestor. Thus, it becomes part of the process of thinking about mortality and about what can be passed on to future generations. It is, in a sense, an ars moriendi in the form of deathbed practices – maybe even a continuation of the Christian ideal of the “good death”, in which lay a duty to examine one’s life and to make sure that one’s family was materially and spiritually provided for. Furthermore, whereas the act of bringing back makes the chatbot into a technology of remembrance, the act of passing on also makes it into a technology of the self, as the process involves self-narration and self-examination.

Rounding off: while the idea of replicating dead personalities in the form of AI sounds like pure science fiction – and it is indeed the topic of many such books, movies, and video games – there are, as briefly discussed here, ongoing attempts to make it a reality. Artificial Intelligence, in the form of LLMs, seems to be on track to implant itself into every facet of life, even going so far as to entangle itself with death, dying, and remembrance. As we entertain these possibilities, ethical considerations rise to the forefront. A delicate balance must be struck between the benefits of such technologies and the risks they pose, including potential emotional manipulation, privacy concerns, and the creation of unrealistic or harmful imitations of the deceased. Careful design and regulation will be essential to ensure these technologies do not end up causing inadvertent harm. The human element – our compassion, empathy, and understanding – must always remain at the heart of these innovations. Thus, perhaps OpenAI’s cautiousness is fully warranted.
