Was it fair of OpenAI to penalize the developer for using GPT-3 to 'resurrect' the dead?

Machine-learning systems are invading every aspect of our daily lives, challenging our moral and social values as well as the laws that govern them. Virtual assistants put people's privacy at risk, news recommenders shape how we perceive the world, risk-prediction systems advise social workers on which children to shield from abuse, and data-driven hiring tools rank your chances of finding employment. Yet for many, the ethics of machine learning remain hazy.

I came across Joshua Barbeau's story while looking for articles on the topic for the young engineers taking the Ethics and Information and Communications Technology course at UCLouvain in Belgium. Joshua is a 33-year-old man who used a website called Project December to create a conversational robot, or chatbot, that would simulate conversations with his late fiancée, Jessica.


Chatbots that imitate the voices of the deceased

This kind of chatbot, known as a deadbot, allowed Barbeau to exchange text messages with an artificial "Jessica." Despite the ethically contentious nature of the case, I rarely found materials that went beyond the bare facts and analyzed it through an explicitly normative lens: why would developing a deadbot be ethically desirable or repugnant?

Let's put things in perspective before we address these questions: Jason Rohrer, a games developer, created Project December so that users could pay to customize a chatbot with the personality they wanted. The project was built on the GPT-3 API, a text-generating language model developed by the artificial intelligence research firm OpenAI. Because the company's policies expressly forbid the use of GPT-3 for sexual, amorous, self-harm, or bullying purposes, Barbeau's case opened a rift between Rohrer and OpenAI.
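To make the mechanics concrete: a persona of this kind is typically created not by retraining the model, but by conditioning it with a prompt that describes the character and seeds a few lines of example dialogue. The sketch below is purely illustrative, not Rohrer's actual code; it assumes the legacy openai Python SDK (pre-1.0), which exposed GPT-3 through the Completion endpoint, and the persona text is invented for the example.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: legacy (pre-1.0) openai SDK with the GPT-3 Completion endpoint

# A hypothetical persona: the "personality" is just conditioning text,
# optionally seeded with sample exchanges, prepended to every request.
PERSONA = (
    "The following is a conversation with a chatbot named Jane. "
    "Jane is warm, witty, and fond of old movies.\n"
    "Human: Hi Jane, how are you?\n"
    "Jane: Delighted you asked! I just rewatched Casablanca for the hundredth time.\n"
)

def chat(user_message: str, history: str = "") -> str:
    """Append the user's message to the persona prompt and ask GPT-3 to continue as 'Jane'."""
    prompt = PERSONA + history + f"Human: {user_message}\nJane:"
    response = openai.Completion.create(
        engine="davinci",       # the original GPT-3 base model
        prompt=prompt,
        max_tokens=100,
        temperature=0.9,        # higher temperature -> more varied, less predictable replies
        stop=["Human:", "\n"],  # stop before the model writes the human's next turn
    )
    return response.choices[0].text.strip()

print(chat("Do you remember our trip to the coast?"))
```

Note what even this toy sketch makes plain: nothing in the pipeline verifies whose personality the conditioning text describes, or whether that person ever consented, which is exactly where the ethical questions below begin.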

Rohrer eventually shut down the GPT-3 version of Project December, calling OpenAI's position hyper-moralistic and asserting that people like Barbeau were "consenting adults."

Even if we all have an intuitive sense of whether it is right or wrong to create a machine-learning deadbot, spelling out its ramifications is far from simple. This is why it is crucial to address the ethical questions this case raises one at a time.


Is Barbeau's consent enough to develop Jessica's deadbot?

Barbeau's agreement to the development of a deadbot imitating Jessica seems insufficient given that she was a real (albeit deceased) person. People are not just objects that others can use however they please, even after they pass away. This is why in our societies it is wrong to disrespect or desecrate the memory of the deceased. In other words, even though death does not always mean that a person stops existing in a morally significant way, we still owe the dead certain moral duties.

Whether we should uphold the fundamental rights of the deceased (e.g., privacy and personal data) is also up for discussion. Creating a deadbot that accurately replicates someone's personality requires great amounts of personal information, such as social network data (see what Microsoft or Eternime propose), which has been shown to reveal highly sensitive traits.

If we agree that it is unethical to use people's data without their consent while they are alive, why should it become ethical to do so after they die? In that sense, it seems reasonable to request the consent of the person whose personality is mirrored, in this case Jessica, before developing such a deadbot.


When the person being imitated gives the green light

The second question, then: would Jessica's consent be enough to make the creation of her deadbot morally acceptable? What if it degraded the value of her memory?

The limits of consent are, indeed, a contested issue. Consider, as an illustration, the "Rotenburg Cannibal," who was sentenced to life imprisonment even though his victim had consented to being eaten. In this vein, it has been argued that it is unethical to consent to actions that may be harmful to us, whether concretely (such as selling one's own vital organs) or abstractly (such as alienating one's own rights).

I won't fully analyze the extremely complex topic of how something might specifically be harmful to the dead. It is important to remember, however, that even though the dead cannot be hurt or offended in the same ways that the living can, this does not imply that bad deeds or unethical behavior cannot be committed against them. Disrespect for the deceased can cause harm to their honor, reputation, or dignity (such as through posthumous smear campaigns), and it can also cause harm to the deceased's loved ones. Additionally, acting inhumanely toward the deceased makes society as a whole more unjust and disrespectful of human dignity.

Given the malleability and unpredictability of machine-learning systems, there is a risk that any consent given by the person being mimicked (while alive) will amount to little more than a blank check covering whatever paths the system later takes.

In light of everything stated above, it would stand to reason that if the development or use of the deadbot deviates from the terms of the imitated person's consent, those terms should be deemed invalid. Furthermore, even their consent should not be enough to justify it if it blatantly and purposefully violates their dignity.


Who assumes liability?

A third question is whether artificial intelligence systems should aspire to mimic any kind of human behavior at all (irrespective, here, of whether this is possible).

This has long been a source of concern in the field of AI, and it is directly related to the conflict between Rohrer and OpenAI. Should we build artificial systems capable of, say, acting in human-like ways or making political decisions? There seems to be something about these abilities that distinguishes humans from other animals and from machines. It is therefore worth noting that putting AI to techno-solutionist ends, such as replacing loved ones, may devalue what makes us distinctive as human beings.

The fourth ethical question is who bears responsibility for the outcomes of a deadbot, especially when those outcomes are harmful.

Imagine that Jessica's deadbot autonomously learned to behave in a way that demeaned her memory or irreversibly damaged Barbeau's mental health. Who would be to blame? AI experts answer this slippery question through two main approaches: first, responsibility falls on those involved in the design and development of the system, since they inevitably shape it according to their particular interests and worldviews; second, machine-learning systems are context-dependent, so the moral responsibility for their outputs should be distributed among all the agents who interact with them.

I place myself closer to the first position. In this case, because there was explicit co-creation of the deadbot involving OpenAI, Jason Rohrer, and Joshua Barbeau, I consider it logical to analyze each party's level of responsibility.

First, it would be hard to hold OpenAI responsible after they explicitly forbade using their system for sexual, amorous, self-harm, or bullying purposes.

Rohrer should be held to a high standard of moral responsibility, as he: (a) explicitly designed the system that made it possible to create the deadbot; (b) did so without considering precautions to prevent potential negative outcomes; (c) was aware that it did not adhere to OpenAI's rules; and (d) benefited from it.

Furthermore, since it was Barbeau who customized the deadbot to reflect Jessica's characteristics, it seems fair to hold him jointly responsible in the event that it degraded her memory.


Moral, under certain conditions

So, going back to the original, general question of whether it is moral to create a machine-learning deadbot, we could say yes under the following conditions:

Both the person being mimicked and the one customizing and interacting with the deadbot have given their free consent to a description, as detailed as possible, of the system's design, development, and uses;

Developments and uses that do not stick to what the imitated person consented to, or that go against their dignity, are forbidden;

Those involved in its development and those who profit from it take responsibility for its potential negative outcomes, both proactively (working to prevent them from happening) and retrospectively (accounting for events that have already occurred).


This situation serves as an example of why machine learning ethics are important. It also exemplifies why it is crucial to start a public discussion that can better inform citizens and aid in the development of legislative measures to make AI systems more transparent, socially just, and in compliance with fundamental rights.

Source: thenextweb.com
