Interview: Michael Anissimov (3/24/08)

March 25 2008 / by memebox

This interview was conducted by Venessa Posavec

V: What do you do and how is that related to the future?

MA: I am a blogger, fundraising director for the Lifeboat Foundation (LF), a director of the World Transhumanist Association (WTA) and a science/tech writer. All of these are related to futurism – my blog discusses futurist issues, the LF looks at future risks, and the WTA represents the futurist philosophy of transhumanism. As a science/tech writer, I do some writing about the latest technologies and materials, like carbon nanofoam or hypersonic flight, but equally enjoy writing about the frontiers of the sciences like paleontology, astronomy, and biology. Not everything I do relates to futurism, but much of it does.

V: What is the Lifeboat Foundation?

MA: The Lifeboat Foundation is a non-profit organization that looks at serious risks to humanity’s future and attempts to address them through its programs. These risks include threats from nanotechnology, biotechnology, and AI/robotics. The Lifeboat Foundation is one of very few organizations aware of the next-generation risks and doing something to ameliorate them, including educating the public about the risks and encouraging dialogue about risk between scientists in different fields. Risk assessment is a profoundly interdisciplinary field.

V: What is ‘existential risk’?

MA: Existential risk, or, more simply, extinction risk, is a risk so severe it threatens to wipe out the human race or permanently curtail our potential. The term was originally defined in a 2001 paper by philosopher Nick Bostrom. To me, preventing extinction risk is the foremost moral imperative of our time. In less than a decade, humanity will likely develop weapons even more deadly than nukes – synthetic life, and eventually, nanorobots and self-improving AI. Even if we consider the likelihood of human extinction in the next century to be small, say 1%, it still merits attention due to the incredibly high stakes involved – if mankind goes extinct here on Earth, we’ll never be able to colonize the galaxy and fill it with sentient beings living worthwhile lives. This moral calculus makes lowering extinction risk a cause like no other.
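
To make this calculus concrete, here is a minimal sketch of the expected-value argument in Python. The 1% probability comes from the answer above; the loss magnitudes are purely illustrative placeholders, not figures from the interview.

```python
# Toy expected-value comparison: even a "small" extinction risk dominates
# when the stakes are astronomically large. All magnitudes below are
# illustrative assumptions, not numbers from the interview.

p_extinction = 0.01          # 1% chance of extinction this century
loss_recoverable = 1e8       # lives lost in a catastrophic but survivable disaster
loss_extinction = 1e16       # stand-in for all future lives a colonized galaxy could hold

expected_recoverable = 1.0 * loss_recoverable       # assume the disaster happens for certain
expected_extinction = p_extinction * loss_extinction

print(f"Expected loss, certain recoverable disaster: {expected_recoverable:.1e}")
print(f"Expected loss, 1% extinction risk:           {expected_extinction:.1e}")
# Even at only 1% probability, the extinction term is six orders of magnitude larger.
```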

V: Why is it important to consider futures in which the human species goes extinct?

MA: It is important to consider futures in which the human species goes extinct because the only way to prevent these futures is to understand them and actively avoid them. There are a number of human biases working against the acknowledgement of extinction risks, making the situation even more dangerous and worthy of attention. In futurism, I think there is a “happy ending” bias, where futurists like to embrace the good and ignore the bad. This is partially because most of futurism is about making the people writing your checks happy, and for many futurists, these are corporations. Other futurists focus on rosy scenarios because that’s what they believe their geeky audiences want to hear. The geek community as a whole is guilty of joking about futures in which the human species goes extinct, rather than approaching the issue with the maturity it deserves.

V: What are the biggest dangers facing humanity?

MA: The biggest dangers facing humanity are either self-replicating or self-amplifying. In the self-replicating category, we have synthetic life (coming to a lab near you in 2009), genetically modified pathogens (like an enhanced version of the 1918 Spanish flu virus), and self-replicating robotics, especially nanobots (not here yet, but likely to arrive by 2020). In the self-amplifying category, there is superintelligence in general, which can be broken down into intelligence-enhanced humans, brain-computer interfaces, and artificial intelligence. If a superintelligence got it into its head that it didn’t care about humanity or some subset of humanity, that group would have a very hard time indeed.

V: What are some programs the foundation is developing to prevent/protect us from existential events?

MA: The Lifeboat Foundation currently has nine active programs, focusing on preventing risks associated with AI, asteroids, bioweapons, and nanoweapons. There are programs to boost sousveillance (watching the watchers), Internet security, and scientific freedom. For backup plans, there are programs devoted to developing the technical knowledge for self-sustaining bunkers and space habitats. Many of these programs have drawn contributions from the foremost minds in each respective field. For instance, our asteroid shield program was formulated with help from NASA staff, and our nano-shield program was largely written by Robert Freitas, one of the foremost experts on molecular manufacturing. Right now, many of the programs are just ideas, but we are pushing for elements of them to be adopted by influential individuals and organizations. Crafting safeguards against extinction risks begins with thinking about them in great detail, then picking out and implementing the most cost-effective strategies.

V: What, in your words, is a futurist?

MA: A futurist is someone who thinks about the big picture of the future, makes predictions, and encourages actions in the present informed by considering possible futures. Ultimately, no one knows the future, but this isn’t an excuse to ignore it. Futurists are constantly monitoring scientific, technological, social, and cultural changes, and thinking about the way that these are influencing the development of human civilization in the near and long term. Futurists are responsible for supplying a vision that informs our actions in the present.

V: We’re awaiting the birth of synthetic lifeforms. What are the potential pros and cons? Who is or will be responsible for monitoring/regulating the progress of those developments?

MA: The pros are huge – pools of microbes that pump out ready-to-use biofuels, life-saving medicines, and bulk biomaterials using nothing but the Sun and agricultural chemicals. It could lead to another Industrial Revolution. The cons are equally huge. If a destructive synthetic microbe is released into the biosphere, who knows how much damage it could do. Synthetic biology could exploit pathogenic strategies that natural biology has very poor innate defenses against, having no evolutionary experience against these invaders.

Craig Venter is quick to point out that his current experiments in synthetic life only went forward after review by an ethical panel. But I have to ask – who is this panel? What are their motivations? If they work for the same company that might be the first to take advantage of the tremendous profit potential of synthetic life, can they really be considered unbiased? Not really. We need to set up independent review panels, restrict the types of synthetic organisms that can be built, and enforce quarantines around synthetic organisms unless there is a broad consensus in favor of releasing a given organism.

V: Is technology natural? What is the relationship of technology to humans?

MA: Technology is not “natural”, but “natural” should not be taken as a synonym for “good”. There are good natural things, like romantic love, and bad natural things, like AIDS. There are good artificial things, like indoor plumbing, and bad artificial things, like nuclear weapons. It really depends. People should evaluate each item on its own merits, not on whether it is “natural” or not. Our world is already thoroughly unnatural. Even the fruits and vegetables we eat are deeply shaped by artificial selection. The future need not be a progressive encroachment of the artificial on the natural; many artificial technologies may be highly desirable, and may even help protect the beauty in nature.

V: Another topic we’re seeing more and more coverage of is advances in 3D printers/fabbers, and more specifically, the possibility of molecular manufacturing (MM). When can we expect such technology to come into existence? What’s the potential economic impact of printing nano-materials at home? What risks do you associate with MM?

MA: Molecular manufacturing will likely be developed sometime between 2010 and 2030. Our limited success with nanorobotics and mechanosynthesis so far strongly suggests that MM is feasible and on its way; it’s mostly a question of “when”, not “if”. The economic impact of widely available MM factories would be huge. Our economy could completely change overnight. The demand for MM feedstocks (hydrocarbons) would go through the roof, while the demand for centrally manufactured products would all but vanish. If you can manufacture practically anything you want for low cost in the privacy of your own home, why pay extra for centralized manufacturing? After MM is developed, material scarcity could become a thing of the past in under a decade. The mass piracy of music, books, and DVDs will extend to products in general. Many traditional business models will collapse.

The risks from MM are numerous. If unscrupulous governments gain control of unrestricted nanofactories, they could manufacture millions of smart missiles, tanks, UAVs, even aircraft carriers, for extremely low cost. This would radically destabilize international relations. If it turns out that nanoweapons (offense) overpower nanodefenses (defense), then there will be a powerful first-strike incentive. Instead of Iran worrying about whether it will be attacked by the United States and Israel, it will simply attack first, and likely end up on top because of its quick action. Because MM will automate vast sectors of military manufacturing, it has the potential to kickstart a new arms race on an unprecedented scale. Arms control professionals have an obligation to look more closely into MM, and some already have, but more work is necessary. Many futurists are absolutely clueless about MM.

V: Which do you think will come first – productive nanotechnology or AI? How might the two be symbiotic?

MA: I think that productive nanotechnology will come first, but that’s just a guess. I expect both technologies to arrive in the 2010 to 2030 window, 2040 at the latest. Of course, I could be completely wrong.

In the long run (if we survive), AI and productive nanotechnology are certain to be symbiotic, but not necessarily in the short run. Because nanotechnology could provide us with obscenely fast computers, over a million times more powerful than today’s fastest supercomputer, it will make strong AI easier. Meanwhile, the challenge of making that AI friendly will remain just as hard, so the likelihood of unfriendly AI being created will rise. This argument is summarized by Eliezer Yudkowsky in Creating Friendly AI.

I believe that AI can be intrinsically safer than nanotechnology. AI can help us deal with the risks of nanotechnology, but nanotechnology exacerbates the risk of AI. Some people are paranoid about AI, because they believe it represents something alien to humanity, but if AI is designed with human preferences closely in mind, then we have nothing to fear. The problem is that designing an AI that way is a formidable technical challenge, and deserves all the resources we can muster.

V: Do you have an opinion about who will be the first to develop an AI? (Google, Novamente, Adaptive AI, private/public company, government, etc) When?

MA: As stated in my answer to the previous question, I expect human-level AI between 2010 and 2030, with the probability concentrated in the later portion of that range and 2040 as a rough upper bound. Part of the reason why I say this is that, even if AI programmers fail to reverse-engineer the human brain using the abstract approach, brain-scanning machinery will reach such a level of resolution by then that it will be possible to simply emulate a human brain in a computing substrate, creating AI by default. This argument is summarized by Ray Kurzweil in The Singularity is Near.

I believe that AI will be created by an effort specifically focused on creating human-level AI. Although Google pays lip service to human-level AI, there is little to no evidence that anyone at Google is working seriously on the problem. It’s a fact of life that substantial investment may be required before a general AI program bears fruit, but once it does, it could be more world-changing than any invention that came before it. Because companies usually require a return on investment in 3-5 years, and AI may be a 10-30 year project, it seems more likely that a non-profit or collaborative academic effort will create AI first. Governments are another possibility, because they tend to take a longer-term perspective and their resources are immense.

V: How will humans keep their AI on a leash? What are your thoughts about Omohundro’s related theories?

MA: “Keeping an AI on a leash” is a profoundly bad way of looking at the challenge – in the language of Creating Friendly AI, this kind of thinking is called the “adversarial attitude” – looking at AI as an enemy to be overcome, rather than an ally to be collaborated with. Because we will create AI from scratch, it will have no other motivations than those we give it, unless we program it such that acquiring new motivations is a possibility. If we give an AI beneficial motivations, it will not spontaneously reprogram itself to have malevolent motivations.

Stephen Omohundro has done a lot of good work in recent years to popularize the idea that AI could be harmful even if given goals that seem initially harmless. He has encouraged dialogue on the issue and pointed to its importance. However, Omohundro has offered less in the way of a specific plan for programming friendly AI. For that, we turn to Eliezer Yudkowsky’s Coherent Extrapolated Volition idea. In my opinion, this is the best strategy yet for ensuring that AI is beneficial to humanity. I hope that anyone working on higher AI is aware of this theory.

V: Is the development of AI and AGI inevitable? Where do you expect opposition to stem from?

MA: Barring global catastrophe, I do think the development of AGI is inevitable. The potential benefits are simply too great for it to be passed up, and there is no philosophical or technical reason why AGI should be impossible or superlatively difficult to achieve. There could be some opposition to AGI before it is created, but I think it will be minimal, as most of AGI’s would-be critics will not take the possibility seriously enough to oppose it formally. I worry more about widespread opposition to biotechnology and intelligence enhancement.

V: What’s your definition of the Singularity? When do you think that such an event might occur?

MA: The Singularity is the technological creation of smarter-than-human intelligence. Not asymptotic technological progress. (Unless it follows from smarter-than-human intelligence.) Not replacement of the human race. Not the end of history. Not necessarily creation of AGI (the Singularity could come in the form of an enhanced human). The Singularity could be a soft takeoff, where an intelligence-enhanced human slowly comes up with new methods of intelligence enhancement, or a hard takeoff, where a nanotechnology-capable AGI rapidly starts building itself new hardware and becomes the most powerful entity on the planet overnight.

The Singularity will occur whenever there is a major breakthrough in intelligence enhancement, brain-computer interfaces, or artificial intelligence. I think this is likely to happen around 2030, but it could happen tomorrow. We’ve already enhanced intelligence in mice, and it’s only a matter of time until we do it in humans. For me to qualify an event as the “Singularity”, the intelligence created would have to be smarter than the smartest human that has ever lived. If it’s not blatantly obvious, it probably doesn’t qualify as a Singularity. We will know it when we see it. If the creation of smarter-than-human intelligence isn’t accompanied by conspicuous displays of that intelligence, it probably isn’t very genuine.

V: Please list some powerful new technologies or disruptive events that you expect to see by Dec 31, 2008.

MA: The first synthetic organism, Mycoplasma laboratorium, will likely be created. This will be a historic moment. The first commercial brain-computer interface for gaming, Emotiv EPOC™, will be available to the public, starting at $300 USD. The EPOC will bring the experience of using a brain-computer interface (BCI) to the common gamer, which could lead to a fundamental shift in attitudes towards BCI across society. Due to the efforts of companies like Nanosolar, the cost of solar panels will drop and efficiency will increase. Nuclear power will start to experience a comeback in the United States, the UK, India, and Russia. China’s economy will continue to grow at a fevered pace, edging it closer to superpower status on the international scene. The cost of gene sequencing will continue to drop rapidly, with more and more people signing up to take a peek at the information content of their own genes. Space tourism will become more popular, and in 2009, the world’s first commercial spaceport, Spaceport America, will open.

V: 5 years: Please list some powerful new technologies or disruptive events that you expect to see by 2013.

MA: In 2013, the Internet will be an even more intimate part of how we live our lives. The world will become increasingly transparent, with hobbyists installing live streaming cameras in public places, and it being essentially impossible to do anything about it. Face recognition software will automatically tag every image of you and upload it to open websites. People will complain at first, but eventually learn to live with it. Transparency will give everyone an incentive to be nicer to each other. If you get drunk and cuss someone out over the weekend, all your co-workers will be giggling at you on Monday. Maybe people will consider their actions more carefully.

Significant progress towards molecular manufacturing and AI could be made by 2013. The continuation of Moore’s law will mean that computers in 2013 will be about ten times more powerful than today’s, which will allow better molecular dynamics simulations and more lifelike virtual agents. In many ways, I think the world of 2013 will be similar to the world of today, except for being more networked and transparent. I don’t expect abrupt changes until 2020 or so. By most futurists’ standards, my predictions for the next ten years are relatively conservative, but my predictions for ten years and beyond would be considered radical.
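
For reference, the “ten times more powerful” figure falls out of simple Moore’s-law arithmetic, sketched below in Python. The 18-month doubling period is the usual rule of thumb, assumed here rather than stated in the interview; the same calculation yields the roughly 100x figure cited for 2018 in the next answer.

```python
# Back-of-the-envelope Moore's-law projection: performance multiple after
# a given number of years, assuming a doubling every 18 months (a common
# rule of thumb, assumed here rather than stated in the interview).

DOUBLING_PERIOD_YEARS = 1.5

def speedup(years: float) -> float:
    """Projected performance multiple after `years` of exponential doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

print(f"5 years out (2013):  ~{speedup(5):.0f}x")   # 2^(5/1.5)  is roughly 10x
print(f"10 years out (2018): ~{speedup(10):.0f}x")  # 2^(10/1.5) is roughly 100x
```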

V: 10 years: Please list some powerful new technologies or disruptive events that you expect to see by 2018.

MA: In 2018, computers will be roughly 100 times more powerful than those of today, and there will be hundreds of supercomputers that exceed the computing power of the human brain. The time will be ripe to create general AI. Using virtual worlds as a learning environment, and skipping expensive and clumsy robotics, programmers will craft increasingly intelligent software, informed by cognitive science on one hand and information theory on the other. If general AI is successfully created, it could quickly lead to a hard takeoff Singularity.

By 2018 we will have wearables that can tell what we’re going to say before we say it (this already exists, but the vocabulary is only 150 words), project images directly onto our retina, allow us to navigate menus using just the power of our brain, and replace the functions of cell phones, mp3 players, GPS devices—you name it. These will be elegantly integrated into our clothing rather than being used as external devices.

Personalized manufacturing will start to be a big deal in 10 years, whether molecular manufacturing is developed or not. If MM is developed, we will be building superproducts out of diamond. Otherwise, we will synthesize gadgets using simple plastics and electronics components. This will be a boon to the Third World, which has trouble getting ahold of centrally manufactured products.

The most disruptive event of all would be World War III. This is another one of those things that many futurists ignore because it isn’t useful for pandering to the audience’s technophilia and optimism. If WWIII breaks out, it could set our civilization back decades, if not centuries. Aside from a World War, we should watch out for an apocalyptic event unleashed by synthetic life or microscopic self-replicating machines.

V: General: What makes you optimistic and pessimistic about the future?

MA: What makes me optimistic about the future is that mankind seems to have a lot of momentum in a positive direction. We are maturing not just scientifically and technologically, but socially, politically, morally, and culturally. Barring a major disaster, I think we can count on things to keep getting better, eventually radically better than today.

What makes me pessimistic is the lack of seriousness that futurists, geeks, and intellectuals are showing towards the possibility of catastrophic, planet-wide disasters unleashed by biotechnology, nanotechnology, and AI. I agree with Bill Joy that the risk is substantial, but I disagree with his approach – we should be advocating selective technological development, not relinquishment. If the people with money and power ignore the risks and plow full speed ahead, the consequences could be catastrophic. This is especially true of AI and robotics, which some people seem to regard as a joke, but which may be among the most dangerous of technological risks.
