Why giving AI ‘human ethics’ is probably a terrible idea

If you want artificial intelligence to have ethics, you have to teach it. That's the case a pair of researchers from the International Institute of Information Technology in Bangalore, India make in a recent pre-print paper.

The paper describes a methodology called "elastic identity," by which the researchers say machines could gain a greater sense of agency while simultaneously learning to avoid "collateral damage."

In short, the researchers suggest we could teach AI to be more ethically aligned with humans by allowing it to learn when it is appropriate to optimize for itself and when it is necessary to optimize for the good of a community.

Per the paper:

> We focus on a specific characteristic of our sense of self that may hold the key for the innate sense of responsibility and ethics in humans: an elastic sense of self that extends over what we call the identity set.
>
> Our sense of self is not limited to the boundaries of our physical being, and can encompass other objects and concepts from our environment. This forms the basis for social identity that builds a sense of belongingness and loyalty towards something other than one's physical being.

In theory, an artificial agent could understand ethical nuance if it were able to model both altruistic and selfish behavior.
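The paper doesn't come with an implementation, but the core idea lends itself to a minimal sketch. In the hypothetical Python example below, an `elastic_utility` function scores each candidate action using weights over an identity set, and the same payoffs flip from favoring selfish behavior to favoring the group as the weights stretch beyond the self. The function name, payoff numbers, and weight values are all illustrative assumptions, not the authors' model.

```python
import numpy as np

def elastic_utility(rewards, identity_weights):
    """Score a payoff vector from the viewpoint of an agent whose
    sense of self spans an identity set.

    rewards[i]          -- payoff agent i would receive under some action
    identity_weights[i] -- how strongly the agent identifies with agent i
                           (index 0 is the agent itself; weights sum to 1)
    """
    return float(np.dot(rewards, identity_weights))

# Payoffs to a three-agent community under two candidate actions by agent 0:
defect    = np.array([3.0, 0.0, 0.0])   # maximize for self
cooperate = np.array([2.0, 2.0, 2.0])   # maximize for the community

selfish = np.array([1.0, 0.0, 0.0])     # identity set = {self} only
elastic = np.array([0.5, 0.25, 0.25])   # identity stretched over the group

for name, w in [("selfish", selfish), ("elastic", elastic)]:
    pick = max((defect, cooperate), key=lambda r: elastic_utility(r, w))
    print(name, "agent prefers:", "defect" if pick is defect else "cooperate")
# selfish agent prefers: defect    (3.0 beats 2.0)
# elastic agent prefers: cooperate (2.0 beats 1.5 once the group counts)
```

In a framing like this, the single weight vector does all the work: "elasticity" is simply how much of the agent's sense of self is spread beyond index 0.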

The problem is, there's no ground truth for ethics. Humans have been trying to figure out the right way for everyone to conduct themselves in a civilized society for thousands of years, and the lack of utopian nations in modern society shows how far we still have to go.

When it comes to measuring the elasticity of an artificial intelligence model's sense of self, the question may be more one of philosophy than of science.

According to the researchers:

> This raises interesting questions about the evolutionary stability of a system of agents. Can a small group of non-empathetic agents, who don't identify with others, successfully "invade" a system of empathetic agents? And is there an optimal level of empathy that makes such a system stable?
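That last question has a familiar shape in evolutionary game theory, and a toy simulation shows why it's hard. The sketch below is my own assumption-laden framing, not the paper's model: empathetic agents are treated as cooperators in a one-shot prisoner's dilemma, selfish agents as defectors, and replicator dynamics track how the population shares shift after a small selfish "invasion."

```python
# Toy replicator-dynamics look at the invasion question above. The payoff
# constants and the cooperate/defect framing are illustrative assumptions.

T, R, P, S = 5.0, 3.0, 1.0, 0.0   # temptation, mutual reward, punishment, sucker's payoff

def step(x, dt=0.01):
    """One replicator update for x, the population share of empathetic agents."""
    f_emp = x * R + (1 - x) * S           # expected payoff of an empathetic agent
    f_sel = x * T + (1 - x) * P           # expected payoff of a selfish agent
    f_avg = x * f_emp + (1 - x) * f_sel   # population-average payoff
    return x + dt * x * (f_emp - f_avg)   # shares grow with relative fitness

x = 0.99   # a 1% cluster of non-empathetic agents "invades" an empathetic population
for _ in range(2000):
    x = step(x)

print(f"empathetic share after invasion: {x:.3f}")   # -> roughly 0.000
# Without some extra mechanism (repeated play, reputation, punishment), the
# selfish cluster takes over -- which is why the stability question is open.
```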

But do we really want machines that can learn ethics the way we did? Our socio-ethical point of view has been forged in the fires of countless wars. We broke a lot of eggs to make this omelet.

And we clearly still have a lot of work to do ourselves. Teaching a machine our ethics and then training it to evolve them on its own could be a recipe for disaster.

On the other hand, modeling human ethics with artificial agents could lead to a better understanding of our own ethics and civilization. And, so far at least, humans have dealt with moral uncertainty far better than machines have.

The research is worth a read. You can check it out on arXiv.