February 28 2008 / by Venessa Posavec
Category: Technology Year: Beyond Rating: 10
If anyone’s cut out to build intelligent machines, it’s Steve Omohundro, president of the AI company Self-Aware Systems. He’s worn the hats of scientist, university professor, software architect, and author, giving him a solid intellectual foundation. But it’s all tempered by a spiritual core: he embraces practices that encourage him to journey inward for guidance, creativity, and transformation, and has participated in numerous workshops where he plays the role of teacher and life coach. If an AI is running the show one day, I for one can only hope that kind of compassion and humanity is built in!
A few weeks ago I had the pleasure of interviewing Steve (full audio transcript here). He began our phone chat with an explanation of what artificial intelligence is, and of the consequences of a self-improving AI:
Omohundro: It’s a discipline where we try and understand the fundamental nature of human intelligence and build machines which can solve the same kinds of problems that people can. The particular approach to artificial intelligence that my company is taking is to try and build systems that understand their own behavior and watch themselves as they work and solve problems; notice what things are working well and which things aren’t working well, and then change themselves, improve themselves, so that they work better.
Sounds good, right? We’ll only have to build version 1.0, and the program will take it from there.
Omohundro: When a human programmer just writes a program, he understands what he wants it to do, and sometimes there are bugs, but basically the system behaves the way you expect it to. When you have a system that can change itself, it basically writes its own program. You may understand the first version of it, but unless you’ve done a lot of analysis, it may change itself into something that you no longer understand. So these systems are quite a bit more unpredictable than the kinds of software we’ve been used to. It’s very powerful, but there are also potential dangers.
Despite such dangers, there appears to be no stopping the development of AI in the near term. Steve addressed the argument that perhaps we should try to slow, or halt outright, the development of these risky technologies:
Omohundro: That might make perfect sense if we could actually be sure we could do that. The problem is that if a country, say, the United States, decides to stop developing this kind of technology, it just means that the future we end up with is going to be determined by some other country – and that may be North Korea, Iran, or a country with values very different from our own. And so I think there’s really no way to stop it. I think the best path is to understand it very carefully, to be very clear about our values and what we want our future society to look like, and then we can guide this technology to help us to develop that future.
So let’s just assume intelligent machines are inevitable. What kind of impact is that going to have on the economy?
Omohundro: Well, I think we’re in for a big shift, because essentially every aspect of the economy can be improved by having more intelligence there, by making decisions more effectively. One of the consequences of artificial intelligence will be robotics that can actually behave much more flexibly than the robots we have today. And on the good side, that means a lot of manual labor which people don’t particularly like doing can be replaced by robots. On the potentially negative side, a lot of jobs that people have today will be much more cheaply accomplished by robotic systems, and so it’s going to be a big dislocation in the economy of the world. Huge potential benefits, way greater productivity, meaning that there’s a lot more potential wealth for the entire world, but exactly how we distribute that, and how the social structure adapts to this new technology is one of the big questions we’re facing right now.
Should we start bracing ourselves for this wild new future? When is it going to play out?
Omohundro: Well, it’s challenging to try and put precise dates on it. Ray Kurzweil argues, looking at technological trends in how fast computers are getting cheaper and faster, that somewhere around 2030 is the point at which a brute-force approach to AI should work. Some of the people at the Singularity Summit claimed that they expected their systems to be fully intelligent within the next four years. So I think there’s a lot of uncertainty and we’ll have to see how it plays out, but it could be soon. In our thinking about what the future looks like, we certainly have to account for the possibility that it could be in the next few decades.
Sounds like we have a few more years of calm before the storm. What do you think?