Western cultures devote plenty of statues, portraits and buildings to remembering distant ancestors. Looking in the other direction, though, it all becomes vague and unimaginable beyond a putative great-grandchild.

We look at the Egyptian pyramids and marvel that they are 5,000 years old, yet 5,000 years in the future reads like science fiction, which is akin to fantasy. But assuming no global catastrophe, humanity will still be around in 5,000 years, and should still be around 500,000 years after that. If we play our cards right, we could be here in 5m or even 500m years.

According to the moral philosopher William MacAskill, all those zeros should command our attention. He believes the deep future is something we need to address now: how long we last as a species, and what quality of life we achieve, may depend heavily on the decisions we make and the actions we take today.

That million-year view is the thesis of his new book, What We Owe the Future. The Dutch historian and writer Rutger Bregman calls the book a monumental event, while the US neuroscientist Sam Harris says that no living philosopher has had a greater impact on his ethics.

A young man with a disarmingly informal manner, MacAskill is speaking by phone to promote the book.

He co-founded the Centre for Effective Altruism, which brings data-driven analysis to the business of charity in order to make donations more effective. He is also president of 80,000 Hours, a non-profit group, and co-founded, with Toby Ord, Giving What We Can, an organisation whose members pledge to give at least 10% of their earnings to charity.

It was in the course of this work, he says, that he began thinking about issues that affect not just the present generation but the long-term future.

MacAskill was raised in a middle-class family and privately educated. He was always, he says, inclined to kindness: as a teenager, moved by the AIDS crisis, he decided to give away half of his money, and he did voluntary work with a disabled scout group. But it wasn't until he got to Cambridge, where he studied philosophy, that his moral outlook took on a more intellectual form. Reading Peter Singer's essay Famine, Affluence, and Morality propelled him into a lifetime of practical ethical commitment.

MacAskill, in other words, does not just talk about his ideas; he follows up on them. And although he wouldn't describe himself as a human chauvinist – he believes strongly in reducing the suffering of animals – his chief concern is the best interests of the human race. Broadly, his argument is that the more humans there are living happy lives, the better.

According to the US Population Reference Bureau, about 120 billion humans have been born so far. If our population continued at its current size and we lasted as long as the typical mammalian species – around a million years – there would be some 80 trillion people yet to come.
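That figure can be checked on the back of an envelope. The sketch below uses rough inputs of my own choosing (population, lifespan, species lifetime), not MacAskill's exact parameters, and lands in the same tens-of-trillions range:

```python
# Rough sanity check of the "80 trillion people yet to come" estimate.
# All inputs are illustrative assumptions, not MacAskill's exact figures.
POPULATION = 8e9            # current world population
LIFESPAN_YEARS = 80         # rough average human lifespan
SPECIES_LIFETIME = 1e6      # a typical mammalian species lasts ~1m years
YEARS_ELAPSED = 3e5         # Homo sapiens is roughly 300,000 years old

births_per_year = POPULATION / LIFESPAN_YEARS       # ~100m births a year
years_remaining = SPECIES_LIFETIME - YEARS_ELAPSED  # ~700,000 years left
future_people = births_per_year * years_remaining

print(f"{future_people:.0e}")  # ~7e+13: tens of trillions, in line with 80tn
```

Nudging the assumptions (a longer lifespan, a longer species run) moves the answer around, but any plausible choice gives a number in the tens of trillions.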

The moral argument is that the needs of those descendants should figure prominently in our deliberations. MacAskill acknowledges that the climate crisis is already upon us and that we need to decarbonise, but his book is not focused on that problem. Instead he uses the climate crisis as a proof of concept for longtermism: everyone alive today is contributing to a problem whose effects will persist for hundreds of thousands of years.

Climate change is one of the world's most important problems and there are large social movements dedicated to fixing it.

The climate debate also provides a model of how to deal with uncertainty.

He argues that humanity could soon enter a phase in which values, good or bad, will become ‘locked-in’

Climate sceptics, he writes, often point to our uncertainty as a reason for not taking action. But the uncertainty around climate change is not symmetric: the worst-case outcomes are far worse than the best-case outcomes are good.

To deal with the uncertainty inherent in long-term thinking, MacAskill believes we can use a method of probability assessment called expected value theory, which weighs the value of each possible outcome by the likelihood of the scenario that produces it. Applied carefully, he says, it can help guide us through the complicated contingencies ahead.

In a medium-low-emissions scenario, the best guess is 2.5C of warming by the end of the century. But this, he writes, is not certain: there is perhaps a one-in-10 chance that warming stays below 2C, and a comparable chance that it exceeds 3.5C. Staying below 2C would be good, but more than 3.5C would be disproportionately bad – so the uncertainty gives us more reason to worry, not less.
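The asymmetry can be made concrete with a toy expected-value calculation. The probabilities below echo the scenario above, but the damage scores are invented purely for illustration; the point is that a bad tail drags the expectation below the central estimate:

```python
# Toy expected-value calculation over warming outcomes.
# Probabilities follow the scenario in the text; the damage scores are
# invented for illustration (more negative = worse).
scenarios = [
    ("below 2C",    0.10,  -1.0),   # best case: modest damages
    ("around 2.5C", 0.80,  -3.0),   # central estimate
    ("above 3.5C",  0.10, -10.0),   # bad tail: disproportionate damages
]

expected_damage = sum(p * d for _, p, d in scenarios)
print(round(expected_damage, 2))  # -3.5: worse than the central -3.0
```

Because the downside scenario is much further from the centre than the upside, the expectation is worse than the best guess – which is exactly MacAskill's point about asymmetric uncertainty.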

With the threat of an engineered pandemic increasing, he believes there are concrete steps that can be taken to reduce the chance of an outbreak.

One partial solution he is excited about is far-ultraviolet C (far-UVC) radiation. Most ultraviolet light sterilises surfaces but also harms humans; far-UVC is a type of UVC that appears safe for humans while retaining its sterilising properties.

At the moment a far-UVC lightbulb costs about $1,000. If the price came down to $10, or even $1, he suggests, installation could be written into building codes. He runs through the scenario with a kind of scientifically grounded optimism.

The threat of artificial intelligence is less tractable. He believes that we are currently in a phase of history in which our values are plastic, but that we could soon enter a phase in which values are locked-in.


Illustration by Lehel Kovács.

If the Nazis had won the second world war, held on to power for a few hundred years and then reached the point of developing artificial general intelligence (AGI), he suggests, their values could have guided and controlled the future indefinitely.

The point of the analogy is not simply that the Nazis were bad, but what AGI would make possible: a machine that can learn and act on any intellectual task a human can do. From that point, the power to control ideas and social development becomes almost unlimited.

Which is why MacAskill thinks the time to address that possibility is now. Most scientists working in the field think AGI will happen, though they can't be certain; a majority believe it will be possible within the next 50 years, and MacAskill himself puts the chance of AGI within 50 years at around 10%. But what, realistically, can be done to restrain research across the entire world?

He acknowledges that it's difficult – nowhere near as clear-cut as pandemic prevention. But some things can be done. One idea is to slow down certain areas of research: many of the gains of artificial intelligence can be had without going all the way to AGI. Do we really need systems that engage in long-term planning? Do we need a single system that can do many different things?

Other steps, he says, need to be taken immediately. The field of AI interpretability research requires far more support: "We don't really know what's going on under the hood, because we have these black boxes with input data, this enormously complex model, and then these outputs. There are enormous challenges, but I can see a path forward."

MacAskill begins his book with a metaphor of a risky expedition. We don't know what threats await us, but we can scout out the landscape ahead of us, ensure the expedition is well resourced and well coordinated, and guard against those threats we are aware of.

Notably, MacAskill challenges some ideas held dear by many who worry about the future, especially those who approach it from an environmental perspective: the arguments against economic growth, against consumption and against bringing more children into the world.

MacAskill takes issue with all of them, though he concedes that the kind of growth we have seen in the past century or so cannot continue indefinitely.

If 2% annual growth continued for another 10,000 years, we would need to generate 10m tn times our current world output for every atom within reach. This, he writes, just doesn't seem possible – though of course we can't be certain.
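The arithmetic behind that claim can be verified. The sketch below assumes (my assumption, for illustration) roughly 1e70 atoms in the affectable universe:

```python
import math

# Sanity check: 2% annual growth compounded for 10,000 years,
# divided across an assumed ~1e70 atoms within reach.
GROWTH_RATE = 0.02
YEARS = 10_000
LOG10_ATOMS = 70  # ~1e70 atoms in the affectable universe (assumption)

log10_growth = YEARS * math.log10(1 + GROWTH_RATE)  # growth factor, in powers of 10
log10_per_atom = log10_growth - LOG10_ATOMS

print(round(log10_growth))    # 86: output would grow ~1e86-fold
print(round(log10_per_atom))  # 16: ~1e16 = 10m tn times current output per atom
```

Working in powers of 10 avoids overflow: 1.02 compounded 10,000 times is about 10^86, and spread over 10^70 atoms that is 10^16 – ten million trillion – times today's entire output per atom, matching the figure in the text.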

Nor does he think now is the time to slow growth, because we are not yet at a technological stage where that is possible without calamitous effects. He points to where we were 100 years ago: had growth stopped then, our options would have been a return to agricultural life or burning fossil fuels into a climate catastrophe.

Technological development and economic growth, he argues, are necessary to see off the threats of climate crisis, bioterrorism and much else. And a global halt to growth is in any case unrealistic: even if 192 of the world's 193 countries signed up to stop growing, it would be pointless.

If the one remaining country continued to grow, compound growth means that before too long it would constitute the entire world economy.

He argues that donating to causes that deal with the problems created by consumption is more effective than cutting back on consumption.

Instead, he wants us to be more fine-grained: technologies have both positive and negative effects, and we can push harder on the ones whose balance is better.

At its heart, the book is about human values, and to understand how they change we need to look at history. One example MacAskill returns to repeatedly is slavery and its abolition. Slavery was accepted in most cultures at various times.

For most of history, few compelling arguments were made against it. The campaign against the Atlantic slave trade changed that: the contradictions between professed universalism and the owning and mistreating of fellow humans became ever more difficult to reconcile.

MacAskill notes that there was no economic imperative to end slavery – sugar plantations were not mechanised until long after abolition – so the part played by those who made the moral case should be acknowledged. It seems obvious to us now that slavery was indefensible, yet powerful forces defended it. Moral progress, MacAskill says, is not inexorable; it is only once it has been achieved that it comes to look that way.

One argument against calling abolition a moral step forward rejects humanism, post-Enlightenment values and the whole liberal discourse as merely the soft power of the west. MacAskill has little time for such relativist arguments.

He thinks it is a mistake to link liberalism with colonialism, which he regards as one of the worst things ever to happen in history. As for the relativist idea that slave-owning societies and extreme patriarchal societies are simply "their way of being" and that we shouldn't tell them they're wrong – that, he says, is not true.

None of this means the west has nothing to learn from other cultures. Nor does MacAskill exempt himself: comparing how well off most people in developed countries are with the rest of the world, he places himself in the top 5% of global wealth, even though he gives away all of his post-tax earnings above a fixed allowance. If he had children, he would allow himself an extra £5,000 for each one, assuming the financial burden were shared with another parent.

"I think people in rich countries could be doing a lot of good and giving a lot more than they currently are," he says.

And there are wisdoms to be learned from Indigenous peoples.

The oral constitution of the Iroquois Confederacy advocated concern for generations yet to come, and the same appears true of many Indigenous philosophies in Africa.

He thinks there are cultural reasons for this. In hunter-gatherer societies, technology changed very slowly, so something learned from your ancestors 1,000 years earlier could usefully be passed on to your descendants 1,000 years hence.

In societies undergoing rapid change, we don't feel connected to the future because we don't know what it will be.

He suggests that we have developed the conceptual tools to navigate our way through the unknown and should use them. It is possible to use expected value theory to hedge against uncertainties.

When regimes have spoken of the long-term future, it has often been to set an epic stage that bolsters their claim to govern. The first Qin emperor spoke of an empire lasting 10,000 generations, and the Nazis made similar claims for the Reich. In the event, the Qin empire outlasted the Nazi regime by just three years – 15 to their 12.

MacAskill, by contrast, argues for humility in the face of such expanses of time – though humility should not mean a lack of focus. We cannot simply assume that growth will run at 2% a year for the next 100 years; we should also take into account the possibility of a catastrophe that wipes out half the population.

We don't know what will happen, but we need to prepare for different outcomes. We owe it, MacAskill says, to ourselves – and to the billions yet to come.

  • What We Owe the Future by William MacAskill is available now. To order a copy go to guardianbookshop.com. Delivery charges may apply.