Anissimov: Safeguarding Humanity Against Extinction Risk

March 26, 2008 / by Venessa Posavec
Category: Technology   Year: 2008   Rating: 10

It would be great to think that the future will be better than the present, and that every emerging technology will be created to do the most good. But the future holds no guarantees, and we’d be irresponsible and falsely idealistic to cheerlead every new development without weighing its accompanying risks.

To help us with that task, we spoke with Michael Anissimov, a futurist blogger over at Accelerating Future and the Lifeboat Foundation’s Fundraising Director for North America. He writes extensively on existential risk (or extinction risk), which he defines as “a risk so severe it threatens to wipe out the human race or permanently curtail our potential.” The biggest potential threats come from nanotechnology, biotechnology, and AI/robotics.

Anissimov explained the mission of the Lifeboat Foundation, and gave us his views about how new technologies might impact us in the upcoming years if we don’t plan ahead. Though he’s generally optimistic, he forced us to put down our Future pom-poms for a minute, and really consider the risks that accompany powerful technology.


“The Lifeboat Foundation is a non-profit organization that looks at serious risks to humanity’s future and attempts to address them through its programs. It currently has nine active programs, focusing on preventing risks associated with AI, asteroids, bioweapons, and nanoweapons. There are programs to boost sousveillance (watching the watchers), Internet security, and scientific freedom. For backup plans, there are programs devoted to developing the technical knowledge for self-sustaining bunkers and space habitats.”

“To me, preventing extinction risk is the foremost moral imperative of our time. In less than a decade, humanity will likely develop weapons even more deadly than nukes – synthetic life, and eventually, nanorobots and self-improving AI. Even if we consider the likelihood of human extinction in the next century to be small, say 1%, it still merits attention due to the incredibly high stakes involved – if mankind goes extinct here on Earth, we’ll never be able to colonize the galaxy and fill it with sentient beings living worthwhile lives. This moral calculus makes lowering extinction risk a cause like no other.”
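To make the arithmetic behind that “moral calculus” concrete, here is a minimal expected-value sketch. Both numbers are illustrative assumptions (the 1% echoes the quote; the count of potential future lives is purely hypothetical), not figures from the interview.

```python
# Illustrative expected-value arithmetic behind the extinction-risk argument.
# Both figures are assumptions for illustration, not numbers from the interview.
p_extinction = 0.01              # "say 1%" chance of human extinction this century
potential_future_lives = 10**16  # hypothetical future lives if humanity colonizes the galaxy

expected_loss = p_extinction * potential_future_lives
print(f"Expected future lives lost: {expected_loss:.1e}")  # ~1.0e+14
```

Even with a probability that small, the expected loss dwarfs the stakes of nearly any other cause, which is the point of the argument.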

He warned that many futurists have a “happy ending bias” and tend to pepper the future with “rosy scenarios” to keep clients and the geek community blindly optimistic. Though many people dismiss the risk of human extinction as a far-fetched joke, he stressed that it’s a dangerous possibility that deserves to be taken seriously.

“The biggest dangers facing humanity are either self-replicating or self-amplifying. In the self-replicating category, we have synthetic life (coming to a lab near you in 2009), genetically modified pathogens (like an enhanced version of the 1918 Spanish flu virus), and self-replicating robotics, especially nanobots (not here yet, but likely to arrive by 2020). In the self-amplifying category, there is superintelligence in general, which can be broken down into intelligence-enhanced humans, brain-computer interfaces, and artificial intelligence. If a superintelligence got it into its head that it didn’t care about humanity or some subset of humanity, that group would have a very hard time indeed.”

Anissimov told us he doesn’t expect the truly disruptive changes to happen before 2020 or so, but also admitted that his views might be considered conservative by most futurist standards. Focusing on the near term, he shared his thoughts on how the next 10 years might shake out.

2008

Synthetic Life: The first synthetic organism, Mycoplasma laboratorium, will likely be created. This will be a historic moment.

(risk: “If a destructive synthetic microbe is released into the biosphere, who knows how much damage it could do. Synthetic biology could exploit pathogenic strategies that natural biology has very poor innate defenses against, having no evolutionary experience against these invaders.”)

Brain-Computer Interface: The first commercial brain-computer interface for gaming, Emotiv EPOC, will be available to the public, starting at $300 USD. The EPOC will bring the experience of using a brain-computer interface (BCI) to the common gamer, which could lead to a fundamental shift in attitudes towards BCI across society.

Space Tourism on the Rise: Space tourism will become more popular, and in 2009, the world’s first commercial spaceport, Spaceport America, will open.

2013

Increased Transparency: The world will become increasingly transparent: hobbyists will install live streaming cameras in public places, and it will be essentially impossible to do anything about it. Face recognition software will automatically tag every image of you and upload it to open websites.

Molecular Manufacturing & AI: The continuation of Moore’s law will mean that computers in 2013 will be about ten times more powerful than today, which will allow better molecular dynamics simulations and more lifelike virtual agents. In many ways, I think the world of 2013 will be similar to the world of today, except for being more networked and transparent.

2018

Computing Power & AI: Computers will be roughly 100 times more powerful than those of today, and there will be hundreds of supercomputers that exceed the computing power of the human brain. The time will be ripe to create general AI. Using virtual worlds as a learning environment, and skipping expensive and clumsy robotics, programmers will craft increasingly intelligent software, informed by cognitive science on one hand and information theory on the other. If general AI is successfully created, it could quickly lead to a hard takeoff Singularity.
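As a rough check on the multipliers quoted above and in the 2013 section, here is a minimal sketch of the Moore’s-law arithmetic behind “about ten times” by 2013 and “roughly 100 times” by 2018, assuming an 18-month doubling time (the doubling period is our assumption, not a figure from the interview).

```python
# Rough Moore's-law projection behind the "10x by 2013, 100x by 2018" figures.
# The 18-month doubling time is an assumption for illustration.
DOUBLING_TIME_YEARS = 1.5

def relative_power(years_from_2008: float) -> float:
    """Computing power relative to 2008, assuming steady exponential growth."""
    return 2 ** (years_from_2008 / DOUBLING_TIME_YEARS)

print(f"2013: ~{relative_power(5):.0f}x 2008 levels")   # ~10x
print(f"2018: ~{relative_power(10):.0f}x 2008 levels")  # ~102x, i.e. roughly 100x
```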

Wearable Computers: We will have wearables that can tell what we’re going to say before we say it (this already exists, but the vocabulary is only 150 words), project images directly onto our retina, allow us to navigate menus using just the power of our brain, and replace the functions of cell phones, mp3 players, GPS devices—you name it. These will be elegantly integrated into our clothing rather than being used as external devices.

Personalized Manufacturing: Personalized manufacturing will start to be a big deal in 10 years, whether molecular manufacturing is developed or not. If MM is developed, we will be building superproducts out of diamond. Otherwise, we will synthesize gadgets using simple plastics and electronics components. This will be a boon to the Third World, which has trouble getting ahold of centrally manufactured products.

(MM risk: “If unscrupulous governments gain control of unrestricted nanofactories, they could manufacture millions of smart missiles, tanks, UAVs, even aircraft carriers, for extremely low cost. This would radically destabilize international relations. If it turns out that nanoweapons (offense) overpower nanodefenses (defense), then there will be a powerful first-strike incentive. Because MM will automate vast sectors of military manufacturing, it has the potential to kickstart a new arms race on an unprecedented scale.”)

War?: The most disruptive event of all would be World War III. This is another one of those things that many futurists ignore because it isn’t useful for pandering to the audience’s technophilia and optimism. If WWIII breaks out, it could set our civilization back decades, if not centuries. Aside from a world war, we should watch out for an apocalyptic event unleashed by synthetic life or microscopic self-replicating machines.

To read the full interview transcript, click here

Comment Thread (1 Response)

  1. To me, preventing extinction risk is the foremost moral imperative of our time.

    I’ve been following Michael’s blog for a while and generally find his writings on existential risk to be the most thought-provoking and valuable. I tend to agree with his analysis that potential catastrophe often gets skipped over in discussions of the future. Over the years I too have encountered many futurists who consciously avoid the “dark side” of futurism, whether out of a benevolent disposition, a desire not to spread powerful negative visions of the future, or concern that it would hurt their brand or revenue stream.

    I remember chewing over the implications of the Law of Accelerating Returns as a young lad trying to figure out what I wanted to do with my life and repeatedly simulating the myriad doomsday scenarios that come packaged with all the magical stuff. Like Anissimov, I too arrived at the realization that working actively to prevent existential risk is a natural imperative for anyone with a strong baseline of empathy whose brain tends to go to those places. This tendency led me first to more literature online, then to some Acceleration Studies Foundation (ASF) future salons out in LA, then to volunteering for the ASF, and finally here to MemeBox.

    First and foremost, the goal here is to create a forum for all styles of future-thinkers and to generally open people’s eyes to the notion that as things get faster we need to get better at looking ahead using new tools like social media. IMO, addressing the notion of existential risk is an essential part of such an equation and so I’ll continue to point people in the direction of Michael’s work and to this awesome interview/primer wherever appropriate. It’s a perspective a large sub-set of future-interested folks will find very useful, I’m sure.

    Michael’s blog, Accelerating Future, is probably the best thing going short of the terrorism futures markets the U.S. govt was forced to publicly denounce back in 2003. (Call me crazy, but my bet is they still exist somewhere in the bowels. :) So go read it often.

    Posted by: Alvis Brigis   March 25, 2008