Is the Singularity a Red Herring Built on Compelling, Yet Faulty Logic?

October 06, 2008 / by Alvis Brigis
Category: Social Issues   Year: Beyond   Rating: 5 Hot

Built on a faulty definition of intelligence, the Singularity meme is an informal fallacy with limited utility that constricts our view of the future if we rely on it too heavily. As we continue to refine our collective model of a rapidly accelerating future dominated by convergence, we should look to more comprehensive scientific models to take its place.

Let me start off by saying that Ray Kurzweil’s The Age of Spiritual Machines is one of the most important books I have ever read. It ably makes the case for accelerating change and a resulting Singularity, so I highly recommend it to those interested in exploring the possible futures ahead of us.

Similarly, two more must-reads for future-interested persons: Vernor Vinge’s 1993 paper The Coming Technological Singularity, which argues that the appearance of superhuman intelligence could mark the end of the human era and create unimaginable conditions, and I. J. Good’s statement on ultra-intelligence.

Each definition contains valuable nuggets about how the future may unfold. Yet I have come to believe all three are fundamentally flawed due to their reliance on the vague term “intelligence.”

Intelligence Remains Undefined: There is no objective, comprehensive, scientifically valid description of the term. Though it’s easy to believe we understand what intelligence is and how it works, we humans have not yet achieved consensus on an overarching definition or on its constituent properties. There are many theories, but an objective law has yet to emerge.

According to an APA report titled Intelligence: Knowns and Unknowns, “when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen somewhat different definitions.”

The Wikipedia definition reflects this vagueness:

Intelligence (also called intellect) is an umbrella term used to describe a property of the mind that encompasses many related abilities, such as the capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn. There are several ways to define intelligence. In some cases, intelligence may include traits such as creativity, personality, character, knowledge, or wisdom. However, most psychologists prefer not to include these traits in the definition of intelligence.

At the same time, the bulk of the AI theorists working to create Strong AI/AGI that matches or exceeds human intelligence are either 1) applying a very narrow definition of intelligence that equates one human brain or personality to a discrete unit of intelligence, or 2) building logical or neural processes step-by-step and refraining from venturing a concrete definition.

Definitions of the Singularity Rely on Vague Definitions of Intelligence that Don’t Hold Up: Singularity proponents and detractors alike go about making their arguments without questioning the underlying assumption that human intelligence is composed of discrete units. By and large, they either overtly or tacitly equate intelligence to the functions of an individual brain or system. This is not surprising considering how the brain likes to simplify subject and object so that we can go about living our lives. But that fundamental assumption appears to be wrong, and at the very least is far from verifiable.

Recent research by cognitive historian James Flynn, who incidentally discovered the fascinating Flynn Effect, suggests that intelligence may well be non-static and cannot effectively be defined without placing the subject in environmental context.

Similarly, general observations about social cognition, wisdom of the crowds, an emerging global brain and global body, and machine augmented intelligence reinforce the argument that intelligence is an elusive network property that is very difficult to quantify, much less effectively incorporate into theories about our future.

A trend toward pervasive, system-wide intelligence growth appears to be emerging. If in fact this proves to be the rule and it replaces the commonly held belief that neuron clusters (brains) alone = intelligence, then a Singularity will only be possible if intelligence that we create, facilitate, or discover suddenly overtakes the total body of intelligence inherent in the entire system. Thus, the likelihood of a Singularity as currently defined is tremendously diminished.
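
To make the stakes of that comparison concrete, here is a minimal toy sketch in Python. It is purely illustrative: the growth rates and starting values are invented rather than empirical, and it reduces “intelligence” to a single number, which is exactly the simplification this post argues against. It shows only that the overtaking claim depends entirely on the parameters you assume.

# Toy model: does a discrete engineered intelligence ever overtake
# total system-wide intelligence? All numbers are invented for illustration.

def first_crossover(years=100, ai=1.0, ai_rate=0.50,
                    system=1_000_000.0, system_rate=0.05):
    """Return the first year the AI curve exceeds the system curve, else None."""
    for year in range(1, years + 1):
        ai *= 1 + ai_rate          # the engineered intelligence compounds quickly...
        system *= 1 + system_rate  # ...but the networked system around it also compounds
        if ai > system:
            return year
    return None

print(first_crossover())                  # crossover around year 39 under these assumptions
print(first_crossover(system_rate=0.50))  # if the system compounds just as fast: None

If intelligence is a discrete unit racing a slow-moving background, the crossover arrives quickly; if it is a network property that compounds along with everything built into it, the crossover never arrives. The definition does all the work.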

And even if it does come down to a single human brain or small network, I find Future Blogger Will’s argument very compelling: “As our ability to understand the technological processes that could lead to a singularity increase[s], the point in time regarded as being [The Singularity] onset must be pushed further off into the future.” Depending on the extent to which we co-evolve with our technology, this postponement could go on for quite a while.
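
Will’s receding-point argument can be sketched in the same toy fashion (again, every number here is invented for illustration): if the predicted onset always sits some margin beyond our current forecasting horizon, and that horizon lengthens as our understanding grows, then the predicted onset recedes indefinitely.

# Toy illustration of the receding-onset argument: the forecasting horizon
# lengthens as understanding grows, and the predicted onset sits beyond it.
# All parameters are invented for illustration.

def predicted_onset(year, base_horizon=20, horizon_growth=0.8, margin=10):
    """Estimated onset year = now + current forecasting horizon + a fixed margin."""
    horizon = base_horizon + horizon_growth * (year - 2008)
    return year + horizon + margin

for year in (2008, 2020, 2040, 2080):
    print(year, "->", round(predicted_onset(year)))
# 2008 -> 2038, 2020 -> 2060, 2040 -> 2096, 2080 -> 2168:
# the predicted onset pulls further away the longer we watch.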

But whichever way you slice it, until we learn to accurately measure pockets of intelligence and determine their relationship to the broader system it will remain impossible to empirically define a Singularity. This realization greatly reduces the usefulness of the term in making future projections, especially considering how it tends to steal the spotlight from multi-variable future(s) in favor of a more singular, pardon the pun, vision.

So where then does that leave us?

Objective Topsight: Convergent Accelerating Change, Systems Theory, Information, Knowledge, Intelligence and Related Sciences: Though the Singularity may look more and more like the ultimate red herring, this doesn’t mean that things aren’t changing at a steadily accelerating rate. Therefore it’s incumbent upon us to quickly get better at identifying the myriad possible futures ahead of us by developing the skills, tools and knowledge base to do so.

We would do well to place greater emphasis on advancing comprehensive Evo Devo systems research, utilizing social media to more quickly generate wisdom, incorporating holistic thinking into the quest to generate AI, continuing to expand our definition of intelligence/life/humanity, pushing forward related sciences, and ultimately developing an overarching Info/Knowledge/Intel theory that jibes with the rest of our knowledge base.

As the Singularity meme continues to heat up and the world reacts accordingly (check out Kevin Keck’s piece on the topic), it’s important that we frame the debate logically and scientifically. We’re presented with a great opportunity to advance the puck by fostering productive dialogue, developing new theories and amplifying our coping abilities – a goal that is being advanced by important forward-looking organizations like The Singularity Institute for Artificial Intelligence and The Acceleration Studies Foundation, both of which support powerful communities of rational thinkers.

Conclusion: If objective topsight is the ultimate destination, then it’s likely the inherently subjective notion of a Singularity will become less useful over the coming years. Still, it’s a scenario set that’s diffusing rapidly and can quickly open up minds to the reality of multi-faceted accelerating change.

Comment Thread (11 Responses)

  1. I happen to agree in general with your view. I think that the sweeping changes which will result from nanoscale and biosciences will also alter our perspective. It’s very hard to predict what will happen as things speed up, so even though we know we’re going somewhere, it’s hard to say where.

    AI will be interesting. It is (finally) starting to advance as fast as other fields, but we could just end up with Data from Star Trek (a recent study showed that emotion is key to intelligence, so intelligence is probably more subtle than we imagine). Still, the next few decades will be very interesting.

    Posted by: CptSunbeam   October 06, 2008

  2. I’ve read The Singularity is Near but not The Age of Spiritual Machines. If anyone’s read both, can you say if Kurzweil’s general Singularity/acceleration ideas evolved much between the two books, or are the differences mainly in the specific descriptions and predictions of technology?

    Posted by: gremlinn   October 06, 2008

  3. At this point, I have an “I’ll just wait and see” attitude. While I see accelerating change, I am more interested in nanotech and biotech than strong AI.

    Posted by: Covus   October 06, 2008

  4. @ CptSunbeam – Interesting about the emotion discovery. Could you provide a link? This jibes with what a Silicon Valley AI company called Syntience is working on via their artificial intuition approach.

    @ gremlinn – I read some of The Singularity is Near and skimmed most of it. For the most part I found it to be a more comprehensive analysis of the same argument. This was the general consensus among the futurists I hang with. It is, however, a great reference book with tons of data goodness.

    @ Covus and all – I agree that nano and biotech will be critical drivers of accelerating change and will probably be more interesting in the near-term than AGI. That being said, I think we’ll see big advances in social intelligence, individual intelligence, and smart software. I expect these all to converge and derive power from their peer trends, ultimately pushing topsight and total system intelligence.

    Posted by: Alvis Brigis   October 06, 2008

  5. @ CptSunbeam – It is interesting that a study concluded that the key to intelligence is emotion. It occurs to me that context dictates emotion. If we can create AI that creates, maintains, dismantles and then recreates context, then emotion will just be a by-product or a measurement of a particular context expressed.

    One workable aspect of using the Singularity as a concept to predict from is that it allows critical thinking to take place from the future, or a declared possible future. This is different from making a prediction from where we are now or where we have been in the past. In other words, whether or not the Singularity is a valid description of the future, it is a place to stand, a context for predicting technological change.

    Posted by: Peltaire   October 07, 2008

  6. I keep arriving late at these soirees and then tripping over myself when I do finally show up. :)

    I feel I should point out that I’m nobody’s idea of an expert when it comes to topics of this nature. I’m really just a guy who makes a habit of reading a lot, and much of that at least slightly above my grade level, so to speak. It’s flattering to be quoted and all, but I sincerely hope that’s because I happened to offer an unusually elegant turn of phrase in regard to this subject and not my imagined expertise.

    And with the disclaimer out of the way, let’s speculate even more, shall we?

    Whether we are talking about Vinge’s Technological Singularity or Kurzweil’s more general concept, I think we should establish that what’s really being discussed isn’t some particular event or metric as such, but an intellectual exercise for re-categorising our conceptual processes regarding human development.

    Just as history wasn’t actually a succession of neatly segregated events, growth into the future won’t be a linear progression either. Events might be most easily considered to lead one from another, but even a casual study of current research efforts around the world will reveal an extensive degree of overlap with often inexplicable-seeming gaps (most commonly attributed to the limited supply of money available – quite true, but also beside the point). The reality is that, by and large, people tend to pursue what they believe themselves most likely to be successful at pursuing. That being the case, research is often as much a result of individual ego as it is anything else, I expect.

    So, not only is development not linear, it isn’t especially logical either, apparently. As well, widespread acceptance of at least one other factor can be attributed to the singularity concept: that of synchronicity of development.

    As can be seen, none of these ideas is especially new or original, except in their application to future human development. Those of us trying to apply their insights may be guilty of over-expectation, however, both in our search for greater meaning and in our attempts at measurement of progress.

    Phil Bowermaster at The Speculist website once asked how we would know when we had created an AI. I only slightly tongue-in-cheek commented that I felt certain it would tell us when it was well and truly ready for us to know. Not to be dismissive of your concerns, but I think the questions raised about intelligence in this comment stream might well fall into a similar category; we’ll know intelligence when we run into it, I expect. Beyond that, how do you measure the infinite? Since potential has to be accounted a contributing factor of intelligence, it would seem an effective impossibility to achieve more than a momentary valuation of an open-ended process, wouldn’t you agree?

    Similarly, the concept of a singularity entails a notion of impenetrability. There is a point in the development process beyond which our present degree of knowledge can no longer extrapolate further possibility. As our knowledge grows, of course this point must recede further into the process, but that doesn’t invalidate the concept, I suggest, any more than our heretofore inability to catalog all of the ramifications of Einstein’s little mathematical formula invalidates it.

    I will resist the Clint Eastwood movie cliche. :)

    Vinge’s postulate and Kurzweil’s speculations thereon leave us with a mechanism by which we are better able to imagine our course into the future, but do so by stipulating that we will only ever be able to do so up to some variable limit. Does any of that sound oddly quantum-theory-ish to anyone else? Can our attempts to measure our progress cause some fluctuation in that progress? Whether or not that be true, does our inability to measure the ultimate of our potential invalidate that potential? I think not, and suggest that we postpone any conclusion until some intelligence appears with which to discuss it further. :)

    Posted by: Will   October 07, 2008

  7. Having re-read this, I have so impressed myself that I have copied it entire and reposted it to my own blog page.

    And yes, I am easily entertained. Sadly, “easily” is not a synonym for “cheaply”.

    Posted by: Will   October 07, 2008

  8. @ Will – I’ve been quoting your earlier piece because I really like the way you define singularities as ever receding points. That way of framing it struck a chord and so I like to give credit where credit is due. :)

    On to your latest points:

    First, I absolutely concur with your analysis that serious change is typically punctuated, and that this can be attributed to the interplay of development and evolution in a more or less chaotic environment.

    re: We’ll know intelligence when we run into it, I expect.

    Yes, lacking a more effective framework for measuring intelligence, that would be the one clear-cut way to realize we’ve encountered a superior intelligence.

    Beyond that, how do you measure the infinite? Since potential has to be accounted a contributing factor of intelligence, it would seem an effective impossibility to achieve more than a momentary valuation of an open-ended process, wouldn’t you agree?

    Very nicely put. This gets at our desire to quantify and create a unifying theory for everything. But this, as you allude, is impossible until we effectively “close” off the entirety of the system (multiverse) and define every speck of everything. (Did somebody say Computational End of Days? :) Still, though all judgments/measurements are totally context dependent, I do think we can achieve better (from a subjective viewpoint, obv) than just a momentary valuation of intelligence. If we frame intelligence (and info, knowledge and communication) as inherently context-dependent, and plot our attempts to measure it over time, we can then develop a model that plays nicely with the uncertain universe around us. That is all that we can do. It may be proven 100% wrong in the end, but it sure seems like that’s the way we gradually learn and learn to expand our control over perceived environment (COPE :). As agents of the system, we are collectively compelled to advance this goal. Developing more effective frameworks is essential. Hence my beef with the Singularity framework.

    The Singularity meme is very useful in certain contexts, but limiting in others. I think it’s more useful to talk about a receding future perception cone that may or may not be less punctuated (lava lamp bubbles that undulate) and develop theories and laws for that, than to continue lumping that into a loaded Singularity framework. Yes, it serves a visualization purpose, but it also detracts from the scientific process, imho.

    There is a point in the development process beyond which our present degree of knowledge can no longer extrapolate further possibility. As our knowledge grows, of course this point must recede further into the process, but that doesn’t invalidate the concept, I suggest, any more than our heretofore inability to catalog all of the ramifications of Einstein’s little mathematical formula invalidates it.

    Yes, that’s true. But to properly define what can or can’t be perceived or guesstimated we must step out of the realm of subjective single-brain perspective and comprehensively quantify our future-sensing mind. Is this mind limited to individuals? Can one brain assume different forecasting modes? Can certain people see significantly further than others? Do subconscious processes play a role? Is it limited to humans? Is it fractured into multiple pockets of intelligence? Is there a global brain? Might biology, DNA, or even other forms of matter be part of this intelligence?

    The Singularity has become synonymous with highly subjective single-brain-see-the-future perspective, when new research increasingly points out that the system is networked and that intelligence manifests in unexpected ways. So why must we continue to frame things according to this term?

    Your logic and view of the system are compelling, but I see no reason why you need to bend it to fit the Singularity term. Rather, I think we need to discover the new language that much more elegantly jibes with complex systems analysis.

    Does that make sense?

    (And I def want to read more of your thoughts on measuring the infinite. :)

    Posted by: Alvis Brigis   October 07, 2008

  9. @ Alvis:

    Thank you, and feel free to continue quoting me. :)

    (And I def want to read more of your thoughts on measuring the infinite. :)

    Man, that’s gonna be a lot of work!

    :)

    It’s also probably going to be necessary work, if only to keep track of the unintended influences that work to advance human development, whether technological or otherwise, I think.

    As to the rest, I suggest that some simplified framework that admits complex structural fluctuations will prove the most workable metric for continuing the study of human development to cope with the anticipated increasing rate of future progress. The temptation to refine interim understanding must always come at the expense of further understanding, as such efforts always extend retrogressively to more fully achieve refinement. It should be kept in mind that the more elegant a structure becomes, the less robust it generally proves to be in an unanticipated circumstance, and what better definition is there for future developments than that?

    In any case, such conversations serve to improve our overall understanding so I look forward to future episodes with yourself and the others who post and comment here.

    My best to you all ‘til then.

    Posted by: Will   October 07, 2008

  10. @ Alvis – You wrote “Your logic and view of the system are compelling, but I see no reason why you need to bend it to fit the Singularity term. Rather, I think we need to discover the new language that much more elegantly jibes with complex systems analysis.”

    I want to be rigorous and make the distinction between discovering language and creating language. The language that will allow us to elegantly jibe with the world of systems analysis will be created. Furthermore, we shouldn’t forget that we made up all of the models we use to discuss the future. The singularity was invented as a particular way to view the future.

    What will always limit us in this area is the human tendency to relate to a seemingly valid model or approach to an inquiry as truth, as opposed to just a particular way of looking at something. Especially in the conversation of the Singularity, there is a level of significance brought to the content, which renders the conversation either true or false, right or wrong, etc. Another way to look at these conversations is to simply ask, “Does this work or does it not work?” and “What is missing that, if present, would make it work?”

    Posted by: Peltaire   October 07, 2008

  11. @ Peltaire – I agree re: the Singularity. It has become so loaded that it is synonymous with a certain truth/scenario, and effectively discourages the ongoing exploration of other futures. The question now becomes, “Do we modify it, or scrap it and start from scratch?”

    re: I want to be rigorous and make the distinction between discovering language and creating language.

    My rigorous/technical response is that discovery is an essential part of creation. Humans are not the first to create – we imitate, mix and compare to existing models that we created by doing the same. Thus, our language naturally evolves to emulate the aspects of reality that we learn to comprehend.

    Posted by: Alvis Brigis   October 10, 2008
