
> Allen writes that "the Law of Accelerating Returns (LOAR)... is not a physical law." I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk.

Oh that's a terrible point! Thermodynamics laws are nothing like predictions about the future. I would have thought linguistic sleight of hand like this was beneath Kurzweil.

> Allen's statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome.

The design of the human brain is not entirely contained in the genome!

As soon as we mapped the human genome we were faced with a paradox: how come the complexity difference between us and mice, for example, is NOT proportional to the difference in our genomes?

Here's an article from 2002, "Just 2.5% of DNA turns mice into men": http://www.newscientist.com/article/dn2352-just-25-of-dna-tu...

In other words, if you look at just how the genomes are different, then humans and mice ought to be a lot more similar than we are.

We have since come to find out just what a huge role the feedback-interactions of DNA and its products, like proteins and all kinds of RNA, play in the development of life.

This staggeringly complex feedback mechanism is why, despite the mapping of the human genome, medical progress still remains excruciatingly slow. Much, much faster than before! But not nearly as fast as we had hoped when the human genome was first mapped.

> Note that epigenetic information (such as the peptides controlling gene expression) do not appreciably add to the amount of information in the genome.

This is true in that they don't add much to the genome. But it is profoundly wrong in that they do add hugely to the actual resulting phenotype.

Kurzweil continues in this same vein for a while. I don't know if he has just never bothered to look into the latest research or if his understandably strong desire not to die has resulted in a huge confirmation bias.

When Kurzweil talks about the general trend of scientific progress I tend to agree with him. But neither Paul Allen nor anyone else disagrees with the notion that we will reach the singularity at some point in the future.

The argument is about the timing. And timing the future is like timing the stock market: something I don't care to try to do.

But when Kurzweil attempts to convince the reader that the singularity is near by using specific examples, that's when I start to disagree with him. Because once he starts being specific, it becomes easy for me to see where he is wrong: factually, objectively wrong.



>Oh that's a terrible point! Thermodynamics laws are nothing like predictions about the future. I would have thought linguistic sleight of hand like this was beneath Kurzweil.

Could you elaborate on this? I'm not a huge Kurzweil fan, but as far as I can tell he's saying something reasonable here - that when he talks about LOAR, he's describing a phenomenon rather than a physical process, and that this is an accepted usage of the word "law". I don't think he's playing semantic tricks so much as responding to a semantic complaint.


Our understanding of thermodynamics is very thorough. It allows us to make a plethora of predictions, all of which are falsifiable and have been thoroughly tested over the years.

This is what makes our theories about thermodynamics real scientific theories.

Predictions about the future, no matter how simple or how firmly based on long-running past trends, are only falsifiable in exactly one way: wait until the predicted date passes.

I think a very, very informal use of the term "law" could cover both. But what irks me as a science-minded person is that Kurzweil is attempting to equate the informal meaning of "law", a generic description of a phenomenon, with the scientific "law", an actual testable, falsifiable theory with predictive power.


No, Kurzweil was not equating the "Law of Accelerating Returns" to a physical law of the universe. Instead, he was comparing it, albeit clumsily, to laws that govern aggregate behavior, like the second law of thermodynamics. There are a lot of things that we colloquially call "laws" that clearly don't have the same footing as laws in physics, "Moore's Law" being one of them.


I thought it was unreasonable for him to choose to compare to the second law of thermo, which is lawful in a much stronger way (very precisely stated, understood in a quantitative way at microfoundations...) than the pattern in accelerating returns that he's pointing out. There are dozens or hundreds of comparisons he could have made; he didn't need to pick one which is so central and stable that thinking one has found a way around it has become a classic sign of being a crank. It would be more reasonable to choose to compare to some other pattern that is generally understood to be important in economics --- e.g., returns to specialization or returns to capital investment.


It seems to me that Kurzweil is on rather strong grounds when he argues in effect that 25Mbytes is a safe conservative upper bound on the information needed to specify a human infant brain. The relevant information content of the epigenetic stuff is unlikely to be tens of megabytes, and extremely unlikely to be hundreds of megabytes. Otherwise, it's hard to see how we could've overlooked such a high proportion of non-DNA design information being passed around in all the work being done on genetics. It's also hard to see how so much extra information would stay stable against mutation-ish pressures unless its copy error rate was much lower than DNA, and hard to see how we'd've overlooked all the machinery that would accomplish that.
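
(A back-of-envelope version of that bound, in a runnable sketch; every figure below is an assumption chosen for illustration, not a measurement:)

    # Rough sketch of the ~25 MB argument; all figures are assumptions.
    base_pairs = 3.2e9                # approximate human genome size
    raw_bytes = base_pairs * 2 / 8    # 2 bits per base pair -> ~800 MB
    compressed = raw_bytes / 16       # assume ~16:1 lossless compression
                                      # (the genome is massively repetitive)
    brain_bytes = compressed * 0.5    # assume at most half specifies the brain
    print(raw_bytes / 1e6, brain_bytes / 1e6)   # -> 800.0, 25.0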

Moreover, I think 25M bytes is probably a very conservative upper bound, so that the relevant incompressible complexity of what computer scientists need to design for general AI is likely no more than 1M bytes. A lot of actual brain stuff is likely to be description of the physical layer that silicon engineers won't care about, because they do the physical layer in a completely different way (silicon and masks and resists and hardest of hard UV and two low digital voltages, not wet tendrils groping toward each other in the dark and washing each other with neurotransmitters). A significant amount of actual brain stuff is likely to be application layer stuff that we don't need (e.g., the Bearnaise sauce effect, and fear of heights and snakes) and optimizations that we don't strictly need (all sorts of shortcuts for visual processing and language grammar and so forth, when more general-purpose mechanisms would still suffice to pass a Turing test). A lot of brain stuff is likely to be stuff in common with a fish, much of which we already know how to implement from scratch. And all brain stuff seems pretty likely to be encoded rather inefficiently: lots of twisty little protein substructures and nucleic acid binding sites are unlikely to be nearly as concise as the kind of mathematical or programming language notation that describes what's going on.

When Kurzweil writes "do not appreciably add" I understand him to be willing to stand by roughly the quantitative information-theoretic claims I made at first (25Mbytes, tens of Mbytes, hundreds of Mbytes). When you write "profoundly wrong ... add hugely" I am unable to tell what you are claiming. How many incompressible bits of design information are you talking about? Perhaps you believe that natural selection pounded out and mitosis reliably propagates 200M incompressible bytes of brain design information? Or 1G bytes? As above, I think that is probably false. Or perhaps I should read "hugely" as "vitally" and understand that you merely mean that the epigenetic information might be less than a million incompressible bytes, but still, if you corrupt it badly you have a dead or hopelessly moronic infant. If that's what you mean, I think you are factually correct, but I also don't think that that fact contradicts Kurzweil's argument.


I fully agree with your estimates, and would just like to point out more precisely where people (especially biologists) tend to misinterpret this argument.

The strawman argument that people tend to hate is this: the brain is encoded in 25M of DNA, so it would only take 25M to build a physical brain. To be clear: we're not arguing that.

Then they go on about how complex the process of creating a physical brain from a string of DNA is, how there's so much information that we'd need about the chemical reactions, that because the building-up is so complicated we couldn't do it with computers 100 years from now even if Moore's law held up, etc. And I agree with all of that, but it's not what that 25M figure refers to.

What we're saying when we give that number is that an algorithm that does more or less what the brain can do can be coded in less than 25M. It won't implement its physical structure exactly, but some algorithm that comes in under the 25M limit in almost any suitably strong programming language is all but guaranteed to qualify as "intelligence". Whether we can find it or not is another matter; all we're saying is that it's there (and I'd go further, and say that many such algorithms exist in the <25M algorithm-space, because if they weren't relatively easy to find, evolution never would have figured them out).

That the particular genotype->phenotype->algorithm encoding that creates the brain's algorithm is hideously complex doesn't change the information-theoretic content; it loosely corresponds to inserting a massively complex general-purpose compiler in front of a Turing-complete language, which doesn't change the compressibility of the code one bit. Unless the compiler is specifically built to compress a certain type of algorithm very well, the compressed information density will not change significantly for any program of sufficient complexity (this is provable mathematically if you properly define the various conditions). In fact, there's a very good chance that the genotype->phenotype->algorithm mapping that results in human intelligence uses a less efficient coding of the algorithm than we could achieve via a modern expressive programming language, because the brain's physical implementation severely limits the expressivity of algorithms that can be baked into it.
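
(The "provable mathematically" part is essentially the invariance theorem of Kolmogorov complexity: for any two universal description languages U and V there is a constant c_{U,V}, depending only on the pair and not on the input, such that

    K_U(x) <= K_V(x) + c_{U,V}    for every string x

so putting a fixed compiler, however complex, in front of a Turing-complete language shifts description lengths by at most a constant.)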


> It seems to me that Kurzweil is on rather strong grounds when he argues in effect that 25Mbytes is a safe conservative upper bound on the information needed to specify a human infant brain.

This is the best I could do: http://www.sciencedaily.com/releases/2005/01/050111115721.ht...

I can't find the actual scientific papers. Anyway, from the article above:

The lack of correlation between genome size and an organism’s complexity raised a question – how do complexity and diversity arise in higher life forms?

Or to rephrase that, why is there no correlation between source code size and application complexity? Why are mice with 24.99Mb of DNA so much less complex than humans with 25Mb of DNA? Why is there not a linear relationship between the complexity of the animal and the complexity of its DNA? Well...:

RNA editing involves the process by which cells use their genetic code to manufacture proteins. More specifically, says Maas, RNA editing “describes the posttranscriptional alteration of gene sequences by mechanisms including the deletion, insertion and modification of nucleotides.”

RNA editing, says Maas, can “increase exponentially the number of gene products generated from a single gene.”

An exponential increase in the number of gene products from a single gene, that says.
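
(A toy illustration of that exponential claim, with made-up numbers: if one transcript has k sites that can each independently be edited or left alone, editing can yield up to 2^k distinct products.)

    # Hypothetical: k independently editable sites on a single transcript
    # allow up to 2**k distinct gene products.
    for k in (1, 10, 20, 30):
        print(k, 2**k)
    # Thirty such sites already permit over a billion variants of one gene.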

The paper I can't find describes how this process, as it takes place in the brain, is an almost exact match for the complexity difference between mice and humans.

And yes, posttranscriptional alteration is much more fragile than good old double helix DNA. And no, evolution doesn't care.

> ...hard to see how we'd've overlooked all the machinery that would accomplish that.

We didn't overlook it for long: shortly after the human genome project raised the question, we spotted it. See above.

> the relevant incompressible complexity of what computer scientists need to design for general AI is likely no more than 1M bytes.

What do you base this statement on?

> How many incompressible bits of design information are you talking about?

Scientists have already discovered that posttranscriptional alteration (a part of epigenetic information) adds huge complexity. How much more? I am absolutely not comfortable guessing at numbers of Mbytes because I know how little I know.

And what about the actual cell machinery? As the egg is being formed inside the mother, how much complexity does the way its machinery works add to what will happen after the egg is fertilized? Again, I dare not guess.


You're saying a lot there, so rather than create a wall of text in response I'd like to boil it down a bit - assume N=25Mb, give or take an order of magnitude:

Are you making the claim that the N bits of DNA involved in coding the brain can encode more than 2^N neural algorithms?

Or do you think that the particular set of 2^N (assuming no redundancy, which is generous...) neural algorithms that N bits of DNA can encode are more likely to result in intelligence than a random sampling of algorithms of equivalent Kolmogorov complexity?

Or are you claiming that epigenetic factors are able to reliably transmit significantly more than N bits of mission-critical data across the generations, and that epigenetic evolution is likely to thank for devising the human intelligence algorithm rather than evolution of DNA?

Edit: looking over your post, I suspect that part of the misunderstanding is over the word "complexity". You seem to be focusing on the complexity of the products; these estimates focus on the complexity of the spec. In humans the difference is muddled because the spec goes through such ridiculously complicated machinery to become the product, but when it comes to designing algorithms, that complicated machinery might as well be a random shuffle for all it matters to the algorithm's proper functioning, so the Kolmogorov complexity that it adds is effectively zero.
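
(A minimal sketch of that spec-vs-product distinction, with a made-up "genome" and a made-up "development" function: the product below is a megabyte of intricate-looking bytes that no off-the-shelf compressor will shrink, yet its Kolmogorov complexity is bounded by the tiny spec plus the fixed machinery that expands it.)

    import random

    spec = b"layers=6;wiring=recurrent"   # tiny hypothetical "genome", 25 bytes

    def machinery(spec):
        # Fixed, deterministic "development": arbitrarily complicated,
        # but it adds no new information beyond the spec that seeds it.
        rng = random.Random(spec)
        return bytes(rng.randrange(256) for _ in range(10**6))

    product = machinery(spec)
    print(len(spec), len(product))        # 25 bytes in, 1,000,000 bytes out
    # K(product) <= len(spec) + len(machinery) + O(1): the spec, not the
    # product, carries the design information.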


> Are you making the claim that the N bits of DNA involved in coding the brain can encode more than 2^N neural algorithms?

That's exactly what the article above explains. Did you read it?

> Or do you think that the particular set of 2^N (assuming no redundancy, which is generous...) neural algorithms that N bits of DNA can encode are more likely to result in intelligence than a random sampling of algorithms of equivalent Kolmogorov complexity?

I am not sure I understand the question. Are you asking if I believe the brain is a large but mostly simply designed neural network? If that is the question, then no.

> Or are you claiming that epigenetic factors are able to reliably transmit significantly more than N bits of mission-critical data across the generations

I am claiming that to get a human you must "host" the human genome in a pre-existing human. Sticking it in a mouse will not result in a human. What does that imply?

> that epigenetic evolution is likely to thank for devising the human intelligence algorithm rather than evolution of DNA?

I don't see two kinds of evolution there. It's all just human evolution, genome and all. After all, it's not like human DNA is out there evolving in something else besides humans.

> In humans the difference is muddled because the spec goes through such ridiculously complicated machinery to become the product

Yes!

> but when it comes to designing algorithms, that complicated machinery might as well be a random shuffle for all it matters to the algorithm's proper functioning

What implies that? How do you go from "yes, a hugely complex compiler is necessary" to "no, we can just randomly shuffle the code and it'll be just as good"?

How many bits does it take to describe the string "aaaaaaa"? Not many. How many bits to describe the human genome to a scientist? I'll just gzip it and email it and we're done, awesome!
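
(That gzip intuition, in runnable form; zlib stands in for gzip here:)

    import os, zlib
    print(len(zlib.compress(b"a" * 1000)))        # ~a dozen bytes: pure repetition
    print(len(zlib.compress(os.urandom(1000))))   # ~1000 bytes: nothing to exploit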

How many bits to describe a human brain, or how to turn that genome into a human brain? Well, let's see: it's a complex self-modifying process; the human brain expands the number of sequence products exponentially, and interestingly the mouse brain does not do this.

In mice the complexity difference between their brain and their genome is linear. In humans it is not.

In mice the Kolmogorov complexity of their brain is equal to the Kolmogorov complexity of their genome + some linear factor.

In humans it's the Kolmogorov complexity of our genome + a lot more.

How much is "a lot more"? No idea.

Is all of this inherited? Yes, partly through the genome, partly through the fact that that genome must be planted in a pre-existing human. Again, if you swap it out with a mouse genome, humans won't be giving birth to healthy mice and mice won't be producing humans.

You can move a simple sequence across species, like a glowing protein from jellyfish to rabbits, for example. You cannot move whole genomes in higher-order life forms.

I think the disagreement between early- and late-singularity people often comes down to whether the human brain is mostly a large but simple mass of neurons or not.

I think computer scientists are often in the "it's just a large neural network" camp. Brain scientists are in the "it's much more complicated than that" camp. As a computer scientist and software engineer who's worked in biotech for many years, I agree with the brain scientists.


I've already replied to some of this, but re: your mice vs. humans example, my view is that the fundamental algorithmic innovation that makes humans so intelligent was already present in mice, and in almost all critters in the "bigger than a bug" families. Fundamentally we do process information in the same way as mice; it's merely a matter of turning up some of the intensity knobs (or, more likely, adding a few more well-tuned layers to the network that already exists) to let humans take intelligent thought to new realms of utility.



