The Complexity Brake

Normally I disagree with Paul Allen almost on principle, but I think he has the complexity brake pretty much right:

The foregoing points at a basic issue with how quickly a scientifically adequate account of human intelligence can be developed. We call this issue the complexity brake. As we go deeper and deeper in our understanding of natural systems, we typically find that we require more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways. Understanding the detailed mechanisms of human cognition is a task that is subject to this complexity brake. Just think about what is required to thoroughly understand the human brain at a micro level. The complexity of the brain is simply awesome. Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors. The closer we look at the brain, the greater the degree of neural variation we find. Understanding the neural structure of the human brain is getting harder as we learn more. Put another way, the more we learn, the more we realize there is to know, and the more we have to go back and revise our earlier understandings.

via Paul Allen: The Singularity Isn't Near – Technology Review.


Comments

  1. Tristan says:

    The biggest issue that I have with Paul’s analysis is that at some point he makes the jump from talking about the singularity to talking about “scientific understanding of human cognition.” It’s true that one way we could get to advanced AI is by modeling the human brain, but that’s certainly not the only way. And it should be obvious that it’s not the only way, because the singularity is a prediction about intelligence (that artificial intelligence will eventually be greater than human intelligence) and our brains do a lot more than just intelligence.

    If we use a relatively simple model of intelligence (see http://consciousthoughts.net/intelligence.php for an example) then we don’t have to rely on understanding how humans use intelligence as a tool, but can just focus on building the hardware and designing the algorithms – two areas which will yield results from a slow and steady application of effort and small incremental improvements.

  2. I think he's got some of the science slightly wrong, particularly this: "Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements." It's more like a computer with a sound card, a graphics card, a network card, a floating-point unit, etc., as well as a CPU. But he wouldn't say that having many specialized components makes a real computer somehow not a computer. The Singularity preachers are wrong because they massively underestimate the time-frame needed to understand it all, but sometimes debunkers go too far in making that point.