Artificial intelligences that wake up and become conscious are a staple of science fiction, and whether this is actually possible is a matter of great debate. We don't know how to make conscious machines, and, given current measurement techniques, we wouldn't even know if we had created one. At the same time, the existence of conscious machines would have dramatic ethical consequences.

We can't see consciousness in computers or the software that runs on them, just as we can't see consciousness in insects. But this is not necessarily an unsolvable problem. We can't see X-rays either, yet we detect them routinely, because we have instruments built on a theory of electromagnetism that we trust. Analogously, if we had a good theory of consciousness, we might be able to determine whether something is conscious based on how it works and what it is made of.

Unfortunately, there is no consensus theory of consciousness. According to a recent survey of consciousness scholars, the most popular theory is global workspace, which holds that conscious thoughts in humans are those broadly distributed to other, unconscious brain processes. But even the top three theories of consciousness disagree about whether, or under what conditions, a computer could be conscious. The lack of consensus is a serious problem, because every proposed measure of consciousness in machines or nonhuman animals depends on one theory or another; there is no theory-independent way to test for consciousness.


If we respect the uncertainty we see across experts in the field, the rational conclusion is that we are very much in the dark about whether computers could ever be conscious. Depending on the theory, computers may never be conscious, they might be conscious someday, or some might already be.

Meanwhile, most people working in artificial intelligence are not trying to make conscious machines or software. The field is trying to make useful tools, and it is far from clear that consciousness would help with any cognitive task we want computers to do.

Like the science of consciousness, the field of ethics is rife with uncertainty and lacks consensus on many fundamental issues. But one common thought is that consciousness has something to do with ethics: most scholars agree that the ability to experience pleasant or unpleasant conscious states is a key feature making an entity worthy of moral consideration. That is why it's wrong to kick a dog but not a chair: the dog can suffer. If we create computers that can experience positive and negative conscious states, what ethical obligations would we then have to them? We would have to treat such a computer as a being with moral status, not merely as a tool.

We make machines to do work we can't do, but also work we don't want to do. To the extent that these machines had conscious minds like ours, they would deserve similar ethical consideration. Of course, just because an AI is conscious doesn't mean it would have the same preferences we do or dislike the same activities. But whatever its preferences were, they would need to be taken into account when putting that AI to work. Forcing a conscious machine to do work it is miserable doing would be ethically problematic. This much seems obvious, but there are deeper problems.

Artificial intelligence can be thought of at three levels. There is the hardware: the computer on which the software runs. There is the code installed on that hardware. And every time that code is executed, we have an instance of the code running. To which level might we have ethical obligations? Plausibly, the hardware and code levels are irrelevant: the conscious agent, if there is one, is the instance of the code running. If someone has a computer running a conscious software instance, would we be ethically obligated to keep it running forever?
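To make the code-versus-instance distinction concrete, here is a minimal sketch in Python. Everything in it, including the Agent class and its behavior, is a hypothetical illustration rather than anything drawn from AI practice or the consciousness literature:

```python
# Minimal sketch of the three levels (all names are hypothetical).
# Hardware: the machine this script runs on.
# Code: the Agent class definition below, sitting inert until executed.
# Instance: each Agent object created while the script runs.

class Agent:
    """Stands in for some hypothetical AI program."""

    def __init__(self, name):
        self.name = name
        self.memories = []  # each running instance accumulates its own state

    def experience(self, event):
        self.memories.append(event)

# One body of code, two distinct running instances:
a = Agent("instance-1")
b = Agent("instance-2")
a.experience("processed an input")

print(a.memories)  # ['processed an input']
print(b.memories)  # [] -- same code, different instance, different history
```

If anything in this sketch could be conscious, it would presumably be the running objects a and b, each with its own accumulating history, not the class definition on disk and not the chip executing it.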


Consider further that creating any software usually involves debugging: running instances of it over and over, fixing problems and trying to make it work. What if we were ethically obligated to keep every conscious instance running, even during the development process? This might be hard to avoid: computer modeling is a valuable way to explore and test theories in psychology, so ethically dabbling in conscious software would quickly become a large computational and energy burden.

If we can help it, we probably shouldn't create conscious machines.

Now I'm going to turn that on its head. If machines can have positive conscious experiences, then they can be considered to have some level of welfare, and running such machines can be said to produce welfare. In fact, machines might eventually produce welfare, such as happiness or pleasure, more efficiently than biological beings do: for a given amount of resources, one might be able to generate more happiness or pleasure in an artificial system than in any living creature.


Suppose, for example, that some future technology allowed us to build a small computer that was happier than a human being but required only as much energy as a light bulb. In that case, according to some ethical positions, humanity's best course of action would be to create as much machine welfare as possible, whether in animals, humans or computers. Future humans might even set the goal of turning all the matter in the universe into machines that produce welfare 10,000 times more efficiently than any living creature can. And this strange future might be the one with the most happiness.

The views expressed by the author or authors are not necessarily those of Scientific American.
