Dinis Guarda interviews Nick Bostrom, philosopher, author, and researcher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, in the latest episode of his YouTube podcast. The two discuss the challenges posed by accelerating technological advancement and a utopian vision of the future. The podcast is powered by Businessabc.net and citiesabc.com.

Known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test, Nick Bostrom is a researcher and the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), Superintelligence: Paths, Dangers, Strategies (2014), which became a New York Times bestseller and sparked a global conversation about the future of AI, and Deep Utopia: Life and Meaning in a Solved World (2024).

Speaking about the pace of technological evolution, Nick tells Dinis:

We are really moving into potentially very powerful agent systems that will be able to do the full range of reasoning and planning and acting, but much faster and better than humans.

An expert on superintelligence and Artificial Intelligence (AI), Nick says:

AI has lately become universally recognized as an important thing that is happening, but that was not always the case. There have been big shifts in our intellectual culture over the past two decades.

Superintelligence is the last invention that humans will ever need to make, because future inventions will be done more efficiently by machine brains that can think faster and better than humans. So AI is ultimately all of technology fast-forwarded, once you have superintelligence doing the research. It is really much more profound. It's not like mobile internet or blockchain or one of these other things people get excited about every few years; it's more akin to the emergence of Homo sapiens in the first place, or the emergence of life on Earth.

The three challenges in the era of superintelligence and AI

Nick elaborated on existential risks, those threats capable of annihilating humanity or drastically altering the course of civilisation:

I think in particular artificial intelligence and synthetic biology are two places where some of the largest existential risks will exist over the coming decades. There are also other ways of categorizing the risks. If you count all the risks that somehow arise from human conflict and place all of those in one bucket, that might be the biggest bucket, because a lot of risks that might manifest as a specific problem using some other technology have as their root cause the failure of humanity to coordinate at the global level.

During the interview, Nick also highlighted the major challenges that emerging technologies like AI and superintelligence pose:

The first is AI alignment. The problem of scalable alignment is finding methods whereby we can ensure that arbitrarily cognitively competent systems will do what we intend for them to do when we create them: aligning them with human values or intentions, or otherwise having safeguards that ensure they don't produce harmful consequences. It is still, I think, an unsolved problem. There's a kind of race going on between capability-increasing research and safety-increasing research, and how that race turns out might be a critical factor in shaping what the future contains for us humans.

There's also the governance problem. Assuming we figure out technically how to align AI systems, there is then the further question of how to make sure that we don't use this powerful technology to wage war or oppress one another, and to make sure that everybody gets a share of the benefits, etc. So that's a whole big bucket of challenges on its own.

I think there's a third big bucket: how we can make sure that we don't harm the digital minds that we will be creating, some of which might have moral status.

These are at least three big, formidable challenges as we look to these more transformative forms of AI, along with all the more present applications that people should also be thinking about, with privacy and discrimination and copyright, etc.

Deep Utopia: A futuristic view

Diving deep into his vision of utopia, Nick describes the future as a “philosophical particle accelerator in which extreme conditions are created that allow us to study the elementary constituents of our values.”

Explaining this, he said:

You can read it two ways. First, if things go well, we will eventually end up in this condition, or reach some point where some set of people will need to make decisions about which particular trajectory they want to go down. I think there are realistic prospects of impending transformation, whether it's a couple of years or a couple of decades away, but possibly within the lifespan of a lot of people existing today.

You could also read it completely aside from any particular assumption about what the actual future will hold, just as a philosophical thought experiment, where, by considering human values in this extreme context, you might get a better understanding of exactly what those human values are. You can then project that back on our current existence and maybe understand better what we really value about our current condition, and what might give meaning in our current lives, by projecting our values through this experimental apparatus and seeing how they decompose into their constituents when you smash them into one another in this philosophical particle accelerator. So it can, I think, serve both of those functions.

Speaking about a world entering the phase of technological maturity, Nick says:

At technological maturity, you would enter a condition where the point and purpose of a lot of daily manual activities would be removed. That's the sense in which we would start to inhabit a post-instrumental condition: a condition where, with some exceptions perhaps, but to a first approximation, we don't need to do anything for instrumental reasons. Then you’re alluding to an even more radical conception, which I call plastic utopia, where you realize it's not just human efforts that become obsolete in this technologically mature condition, but the human itself becomes malleable, in that you could use these advanced technologies to shape your own psychology, your own cognition, your own attention, your own emotions, your own body in whichever way you want. A lot of the constraints, the instrumental necessities, the fixed constants of human nature that currently define our existence and structure our lives would be removed at technological maturity.