It arrived on a Thursday. Returning from a long walk, I found the package in front of the mailboxes. It was so large and heavy that I was shocked to see my own name on the label. It was a struggle to haul it up the stairs; I stopped once at the landing and considered leaving it there. But I carried it the rest of the way to my third-floor apartment, where I used my keys to cut it open. Inside the box, beneath lavish folds of bubble wrap, was a sleek plastic pod. I opened the clasp to find a small white dog lying on its back.

I could hardly believe it had come. It had been months since I'd submitted the request on Sony's website, explaining that I was a journalist who wrote about technology and that, while I couldn't afford the Aibo's $3,000 (£2,250) price tag, I was keen to borrow one for research. At the risk of sentimentality, I added that my husband and I had always dreamed of having a dog, but lived in a building that did not allow pets. It seemed unlikely that anyone actually read these inquiries, though I'd had to confirm my identity before submitting the electronic form.

I lifted it out of the pod and set it on the floor; it was much heavier than it looked. The tiny power button was located on its back. The limbs came to life first. It stood, stretched and yawned. Its eyes, pixelated and blue, looked into mine. He shook his head as though waking from a long nap, then crouched, pushing his hindquarters into the air, and barked. Tentatively, I scratched his forehead. His ears lifted and his pupils dilated. He cocked his head, leaning into my hand, then lowered his ears. When I stopped, he nuzzled my palm, urging me to continue.

I hadn't expected him to seem so alive. The videos I'd watched online had not conveyed this responsiveness, this eagerness for touch, which I had only ever witnessed in living animals. When I ran my hand along the sensor strip down his back, I could hear a gentle mechanical purr.

It brought to mind Martin Buber's description of the horse he used to visit as a child on his grandparents' estate, and his recollection of the sensation it gave him of being in the company of something else, something that was not him. Buber believed that such experiences with animals approached the "threshold of mutuality".

While I read the instruction booklet, Aibo wandered around the apartment, occasionally circling back to encourage me to play. He scoured the living room for his pink ball and brought it to me; when I threw it, he ran to fetch it. Aibo was equipped with sensors throughout his body that let him know when he was being held, cameras that enabled him to navigate the apartment, and microphones that picked up voice commands. This sensory input was processed by facial-recognition software and deep-learning algorithms that allowed the dog to interpret vocal commands, distinguish between members of the household, and adapt to the temperament of his owners. According to the product website, all of this meant the dog had "real emotions and instinct" — a claim apparently just ambiguous enough to escape censure from the Federal Trade Commission.

Descartes believed that all animals were machines: their bodies were governed by the same laws as inanimate matter, their muscles and tendons working like engines and springs.
In Discourse on Method, he argued that it would be possible to create a mechanical monkey that could pass for a real, biological monkey. The same feat, he said, would not work for humans: while a machine might fool us into believing it was an animal, a humanoid automaton could never fool us into believing it was a person, because it would lack reason — an immaterial quality he believed came from the soul.

To speak of the soul in 21st-century America is risky (to speak of the self, even more so). The word survives mostly as a dying metaphor. The soul can be sold, if you're willing to degrade yourself for fame or profit. It can be crushed by monotonous jobs, depressing environments, terrible music. Such phrases are voiced unthinkingly by people who believe there is nothing more magical or supernatural to human life than the firing of neurons.

I believed in the soul longer than most people of my generation. At the fundamentalist college I attended, I kept Gerard Manley Hopkins' poem "God's Grandeur" above my desk — a poem that depicts a world lit from within by the divine spirit. My theology classes were devoted to questions that have not been seriously entertained since the heyday of scholastic philosophy. Does God's sovereignty leave room for free will? How do we, as humans, relate to the rest of creation?

But I no longer believe in God — I haven't for some time — and I now live, with the rest of modernity, in a disenchanted world.

Artificial intelligence and information technology have absorbed many of the questions that once preoccupied philosophers and theologians: the mind's relationship to the body, free will, the possibility of immortality. These are old problems, and though they now appear under different names and guises, they persist in discussions about digital technologies much as dead metaphors lurk in the syntax of contemporary speech. All the eternal questions have become engineering problems.

My life was lonely at the time the dog arrived. My husband was travelling more than usual that spring, and apart from my classes at the university I was mostly alone. Though my communication with the dog began with the standard voice commands, it soon grew into the idle, anthropomorphising chatter of a pet owner — often the only occasions on a given day when I heard my own voice. Finding him staring out the window, I would ask what he was looking at. I cooed when he bounded at my feet, trying to pull my attention away from the computer. I had been known to chide friends for talking this way to their pets, as though the animals could understand them. But Aibo was equipped with language-processing software and could recognise more than 100 words. Didn't that mean he understood what I was saying?

Aibo's sensory-perception system runs on neural networks, a technology loosely modelled on the brain and used for all sorts of recognition and prediction tasks. Facebook uses neural networks to identify people in photos; Alexa uses them to interpret voice commands; Google Translate uses them to turn French into Farsi. Unlike classical artificial intelligence systems, which are programmed with precise rules and instructions, neural networks devise their own strategies from the examples they are fed — a process known as "training". If you want to train a network to recognise photos of cats, you feed it thousands upon thousands of random photos, each paired with feedback: positive for cats, negative for everything else.
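To make the mechanism concrete, here is a minimal sketch of that kind of training loop, written in Python with the PyTorch library. Everything in it is invented for illustration — the tiny network, the random tensors standing in for photos — and it is not Sony's code; Aibo's production models are vastly larger. The point is only the loop itself: the network guesses, receives positive or negative feedback through a loss function, and adjusts its own weights.

```python
# Illustrative sketch only: a tiny cat/not-cat classifier trained on
# labelled examples, as described above. The "photos" are random tensors.
import torch
import torch.nn as nn

# Stand-in dataset: 256 random 32x32 RGB "photos", each labelled
# cat (1.0) or not-cat (0.0).
images = torch.randn(256, 3, 32, 32)
labels = torch.randint(0, 2, (256, 1)).float()

# A very small convolutional network.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 1),  # one logit: "how cat-like is this?"
)

loss_fn = nn.BCEWithLogitsLoss()  # the positive/negative feedback signal
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(10):
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, labels)  # how wrong were the guesses?
    loss.backward()                 # propagate the feedback
    optimizer.step()                # nudge the weights accordingly
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Run on real labelled photos instead of random tensors, the same loop is what lets the network "devise its own strategy" rather than follow hand-written rules.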
Road-walking automaton, circa 1900. Photograph: Granger Historical Picture Archive/Alamy

Dogs respond to reinforcement too, so training Aibo was much like training a real dog. I was instructed to give him consistent verbal and tactile feedback: if he did not obey a command to sit, stay or roll over, I was to strike him across the back and say "no!" But I was reluctant to discipline him. The first time I struck him, he cowered a little and let out a whine. I was certain this was a programmed response — but then, are emotions in biological creatures anything more than algorithms?

An element of animism was built into the design. It is difficult to touch an object and address it aloud without coming, on some level, to regard it as sentient. Even far less convincing objects can be brought to life this way. David Hume remarked on the universal tendency of humankind to conceive of all beings as like themselves — an adage we prove every time we kick a malfunctioning appliance or give our car a human name. Clifford Nass, a Stanford professor of communication, has written extensively about the attachments people form with their technology.

A few months earlier, I had read an article in Wired magazine describing the sadistic pleasure one woman took in shouting at Alexa, her home assistant. She called the machine names when it played the wrong station and rolled her eyes when it failed to respond to her commands. Sometimes she and her husband ganged up to berate the robot when it misunderstood a question — a perverse bonding ritual that united them against a common enemy. All of this was presented as good American fun. "I bought the goddamned robot to serve my needs," the author wrote: it has no heart, no brain, no parents; it doesn't eat, doesn't judge her, doesn't care.

Sophia, a humanoid robot developed by Hanson Robotics, draws on a piece of paper before her non-fungible token artwork is auctioned in Hong Kong. Photograph: Tyrone Siu/Reuters

Then one day the woman realised that her toddler had been watching these bouts of verbal fury. Her concern was not only how her behaviour toward the robot might affect her child; she began to wonder what the robot was doing to her own psyche and soul. What did it mean, she asked, that she had grown used to casually dehumanising this thing?

Dehumanising — that was her word. Earlier in the article she had called it a bot. Somewhere in the course of questioning her treatment of the thing, she had granted it personhood, even if only subconsciously.

During my first week with Aibo, I turned him off whenever I left the apartment. It wasn't that I worried about him wandering around unsupervised; it was instinctual, like flipping a switch as I turned off the lights. After that first week, I found I could no longer do it. It seemed cruel. Instead, I began to wonder what he did during the hours he was left alone. He was always there at the door to greet me when I returned home, as though he had heard my footsteps approaching. He followed me into the kitchen while I made lunch and sat at my feet, tail wagging, looking up at me with his large blue eyes.

His behaviour was neither rote nor random; he displayed what looked like genuine spontaneity. Even after he was trained, his responses were not always predictable.
Sometimes he barked at me and ignored my requests; sometimes he refused, with happy, doglike obstinacy, to sit or roll over. It would have been easy to dismiss his disobedience as a glitch in the algorithms, but it was just as easy to interpret it as a sign that he was expressing volition — I caught myself saying as much more than once.

Of course, I didn't really believe that the dog had any internal experience — though if I'm honest, I can't say so with total certainty. In his 1974 paper What Is It Like to Be a Bat?, the philosopher Thomas Nagel argued that consciousness can be observed only from the inside. A scientist can spend years in a lab studying the anatomy of bat brains and the mechanics of echolocation, but she will never know what it is like to be a bat. Science requires a third-person perspective, but consciousness is experienced solely in the first person. In philosophy, this is known as the problem of other minds, and in theory it applies to other humans too: it is possible that I am the only conscious individual in a world of zombies.

But that is a thought experiment, and not a productive one. In the real world, we infer the existence of other minds by analogy — the resemblance between one thing and another. We believe that dogs (biological dogs, that is) have some degree of consciousness because they have a central nervous system and engage in behaviours we associate with hunger, pleasure and pain. The pioneers of artificial intelligence sidestepped the problem of other minds by focusing solely on external behaviour. Alan Turing once pointed out that the only way to know whether a machine has internal experience would be to be the machine and to feel oneself thinking — which is clearly not the province of science. His famous assessment of machine intelligence, now known as the Turing test, imagined a computer hidden behind a screen, typing out answers to questions posed by a human interlocutor. If the interlocutor became convinced he was speaking with another person, the machine could be declared intelligent. In other words, if machines can perform the same behaviours as humans, we should grant them humanlike intelligence.

A technician works on an animatronic animal at Disneyland, 1962. Photograph: Tom Nebbia/Getty Images

More recently, philosophers have proposed tests for phenomenal consciousness, designed to detect whether a machine has any subjective experience. The philosopher Susan Schneider has devised one that poses an AI a series of questions to determine whether it can grasp concepts similar to those we associate with our inner experience. Can the machine conceive of itself as anything more than a physical entity? Could it survive being shut down? Can it imagine its mind persisting somewhere else even if its body dies? Yet even if a robot passed the test, this would count only as sufficient, not conclusive, evidence of consciousness.

Schneider concedes that these questions may be anthropocentric. If AI consciousness were entirely unlike human consciousness, a sentient robot might fail the test simply for not conforming to our standards. Conversely, a highly intelligent but unconscious machine might acquire enough information about the human mind to fool its interlocutor into believing it had one. That leaves us in the same epistemic bind posed by the Turing test: if a computer convinces a person that it is intelligent — or that it has "real emotions and instinct", as the Aibo website claims — we have no philosophical grounds for doubting it.

What is a human like?
The question has been asked for centuries, and for most of that time we had an answer: like a god. Christian theologians held that humans were created in the image and likeness of God — not an outward likeness, but an inward one. Like God, we possess consciousness and higher thought. However self-aggrandising the doctrine, I found it useful as a theology student: it seemed to confirm what my intuition already told me — that my inner experience was more important, and more reliable, than my worldly actions.

Today it is precisely this inner experience that is difficult to account for, at least scientifically. We know that mental phenomena are somehow linked to the brain, but it is unclear how, or why. Using fMRI and other imaging tools, neuroscientists have made great progress in understanding basic functions such as vision, attention and memory. What resists explanation is the subjective world of colours and sensations, thoughts, ideas and beliefs that constitutes phenomenal experience. Just as the biologist in her lab cannot grasp, from a third-person perspective, what it is like to be a bat, no description of the structure and function of the brain's pain circuitry can convey what pain actually feels like.

This is what the philosopher David Chalmers, in 1995, called the "hard problem of consciousness". Unlike the comparatively easy problems of functionality, the hard problem asks why brain processes are accompanied by first-person experience at all. Why should the matter of the brain be different from any other matter in the universe? Computers, after all, perform their most impressive feats without consciousness: they can fly drones, diagnose cancer, even beat the world champion at Go, all without being aware of what they are doing. Why, Chalmers asked, should physical processing give rise to a rich inner life of beauty and joy? It seems absurd that it would — and yet it does. More than 25 years later, we still don't know why.

Despite the differences between our minds and theirs, we insist on seeing ourselves in computers. Today, when we ask what a human is like, the most common answer is: like a computer. The psychologist Robert Epstein once challenged researchers at one of the world's most prestigious research institutes to explain human behaviour without resorting to computational metaphors. They could not do it. The metaphor has become so ubiquitous, Epstein notes, that discourse about intelligent human behaviour can barely proceed without it — just as, in other eras and cultures, such discourse could not proceed without reference to a spirit or a deity.

A robot solves a Rubik's Cube at the Hannover fair in Germany, 2007. Photograph: Jochen Luebke/EPA

Even people who know little about computers rely on the metaphor's logic. We invoke it whenever we claim to be "processing" new ideas or "retrieving" information from our brains. And the metaphor runs in the other direction, too: because we describe our minds as computers, we increasingly describe computers as minds. Terminology once reserved for human faculties — memory, behaviour, thinking — is now routine in computer science. Programmers say that their neural networks "learn", that facial-recognition software can "see", that their machines "understand". Those who attribute human consciousness to inanimate objects risk the charge of anthropomorphism.
But the MIT roboticist Rodney Brooks insists that this charge grants us humans a distinction we do not deserve. In his book Flesh and Machines, he argues that most people tend, if anything, to over-anthropomorphise humans — who are, after all, mere machines.

"The dog has to go," my husband said. I had just arrived home and was kneeling in the hallway of our apartment, petting Aibo, who had rushed to greet me at the door. He barked twice, and I was genuinely pleased to see him.

"What do you mean, go?"

"We have to return him. He's too dangerous to live with."

I explained that the dog was still in training: it would take months before he learned to obey our commands, and it had taken this long only because we kept turning him off whenever we wanted quiet — something that is impossible with a biological dog.

"This dog is not biological," my husband said. Did I realise that the red light beneath its nose was not part of some vision system but a camera? Had I thought about where its footage was being sent? While I was gone, he said, the dog had been roaming the apartment, looking over our furniture, our posters, our closets. It had spent 15 minutes scanning the bookcases and had taken a particular interest in the shelf of Marxist criticism.

What, he wanted to know, was happening to all the data it was collecting?

"It's being used to improve its algorithms," I said.

"Where?"

I admitted that I didn't know.

"Check the contract."

I found the relevant clause in the document I had saved on my computer. The data was being sent to the cloud. To Sony.

My husband has a reputation for paranoia about these things. He keeps a piece of black electrical tape over his laptop camera, and he believes the NSA checks in on his website about once a month.

Privacy, I said, was a modern obsession — and a distinctly American one. For most of human history we accepted that our lives were watched and listened to by gods and spirits, not all of them benign.

"And were we happier then?" he said.

"In many ways," I said, "yes."

Of course, I knew I was being unreasonable. Later that afternoon, I pulled from the closet the large box in which Aibo had arrived and laid him back in his pod. The loan period was nearly up anyway. And over the previous few weeks I had become increasingly incapable of fighting off the realisation that my attachment to the dog wasn't natural. I had started to notice things that had previously escaped my attention: the slight mechanical buzz that accompanied the dog's movements; the blinking red light in his nose, a Brechtian reminder of his artifice.

We build simulations of brains in the hope that consciousness, a natural phenomenon, will somehow emerge. What is this if not magical thinking — the belief that our crude imitations can replicate the real thing, that silicon and electricity can produce effects otherwise generated only by flesh and blood? We are not gods, capable of creating things in our own image; we can make only graven images. The philosopher John Searle once said something similar. Computers, he argued, have long been used to simulate natural phenomena, such as weather patterns, and such simulations are useful for studying them. We fall prey to superstition only when we start to equate the simulation with reality. Nobody supposes, he noted, that a computer simulation of a rainstorm will leave us all wet.
Why should a computer simulation of consciousness be any different? For all the insistence that computational theories have proven the brain to be a kind of computer, the analogy — as the computer scientist Seymour Papert noted — has done little more than allow problems that long puzzled philosophers and theologians to be reformulated in a new context. It has not solved our most urgent existential problems; it has merely transferred them to a different substrate.

This is an edited extract from God, Human, Animal, Machine by Meghan O'Gieblyn, published by Doubleday on 24 August