The uproar caused by Blake Lemoine, a Google engineer who believes that one of the company's most sophisticated chat programs, the Language Model for Dialogue Applications (LaMDA), is sapient, has had a curious element: experts have largely dismissed the claim, and they are correct to do so.
Reading the edited transcript Lemoine released, it was clear that LaMDA was pulling from any number of websites to generate its text; its interpretation of a Zen koan could have come from anywhere. There were only a few magic tricks papering over the cracks. Yet judging by the responses to the transcript on social media, it is easy to see how a person could be fooled. The risk here is not that the artificial intelligence is sentient, but that we are well poised to create machines that mimic humans so convincingly that we cannot help but respond to them as human, and that this response will be exploited.
We are already able to empathize with the nonhuman: consider how we treat our pets, or how we relate to characters in video games. Now imagine an artificial intelligence that acted as a therapist. Would you be willing to confide in it, even knowing it was not human? And the company that programmed the therapy bot would be very interested in that data.
It gets more frightening still. Lilly Ryan, a systems engineer and historian, warns that the traces you leave online, traces that reveal how you think, are vulnerable to exploitation in the near future. Imagine a world where a company owned your digital ghost after you died. There would be a ready market for the ghosts of famous people, and such ghosts could be used to extract yet more data from the living, because they would appear to be a trusted loved one. It gives a whole new meaning to the term "necropolitics": the afterlife can be real, and it can be owned by an internet company.
It is entirely plausible that companies will market the very realism and humanness of their products. None of this requires an artificial intelligence that is actually sapient. The murkier question is how we treat our technology, and what happens when people act as if it were sapient.
In "Making Kin With the Machines," a group of academics drawing on Indigenous philosophies examines the relationship we have with our machines, and asks whether we are modeling, or play-acting, something truly awful with them. In her section of the work, Suzanne Kite argues that the boundaries of who counts as a "being" should not be defined by what something is made of.
The ethical dilemma of artificial intelligence cuts both ways: if we treat our machines like friends, companies can prey on us through them, but it is equally perilous to treat them as worthless things. An exploitative approach to our tech may simply reinforce an exploitative approach to one another. If a virtual assistant presents as human, it deserves a measure of respect.
Kite's ideal is a reciprocal, humble relationship between yourself and your environment. She describes a view in which stones are ancestors: stones speak through and to humans, stones see and know, and, most important, stones want to help. The agency of stones connects directly to the question of artificial intelligence, since AI is formed from materials of the earth. This is a way of tying something seen as artificial back to the natural world.
The upshot of such a perspective is hard to predict, but as Liz Henry suggests, we could accept our relationships to all the things in the world around us as worthy of emotional labor and attention, acknowledging that those around us have their own lives, perspectives, needs, emotions, goals, and place in the world.
The need to make kin of our machines must be weighed against the many ways this can, and will, be weaponized against us in the next phase of surveillance capitalism. One need not be an eloquent scholar defending the rights and dignity of a being like Mr. Data to confront these questions. There can be a robot uprising without sapient artificial intelligence, and we can be a part of it.