I fully agree with your estimates, and would just like to point out more precisely where people (especially biologists) tend to misinterpret this argument.
The strawman argument that people tend to hate is this: the brain is encoded in 25M of DNA, so it would only take 25M to build a physical brain. To be clear: we're not arguing that.
Then they go on about how complex the process of creating a physical brain from a string of DNA is, how much information we'd need about the chemical reactions involved, how the building-up process is so complicated that we couldn't simulate it with computers 100 years from now even if Moore's law held up, and so on. I agree with all of that, but it's not what that 25M figure refers to.
What we're saying when we give that number is that an algorithm that does more or less what the brain does can be coded in less than 25M. It won't replicate the brain's physical structure exactly, but some algorithm that comes in under the 25M limit, in almost any suitably expressive programming language, is all but guaranteed to qualify as "intelligence". Whether we can find it is another matter; all we're saying is that it's there (and I'd go further and say that many such algorithms exist in the <25M algorithm-space, because if they weren't relatively easy to find, evolution never would have stumbled on one).
That the particular genotype->phenotype->algorithm encoding that produces the brain's algorithm is hideously complex doesn't change the information-theoretic content. It loosely corresponds to inserting a massively complex general-purpose compiler in front of a Turing-complete language, which doesn't change the compressibility of the code one bit: unless the compiler is specifically built to compress a certain type of algorithm very well, the compressed information density will not change significantly for any program of sufficient complexity (this is provable mathematically if you define the various conditions properly). In fact, there's a very good chance that the genotype->phenotype->algorithm mapping that results in human intelligence uses a less efficient encoding of the algorithm than we could achieve with a modern expressive programming language, because the brain's physical implementation severely limits the expressivity of the algorithms that can be baked into it.
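To make the compiler point concrete, here is a toy sketch (my own illustration, not from the original argument; the `COMPILER` constant is a stand-in for the genotype->phenotype machinery). The idea is the invariance property from Kolmogorov complexity: prefixing every description with a fixed, general-purpose translator changes the shortest description of any program by at most a constant, namely the size of the translator itself, so for large programs the information content is essentially unchanged:

```python
# A fixed, general-purpose "compiler" standing in for the (hideously
# complex, but constant-size) genotype->phenotype->algorithm machinery.
COMPILER = b"<imagine the full developmental machinery encoded here>"

def direct_description(program: bytes) -> bytes:
    """Describe the program directly in the base language."""
    return program

def compiled_description(program: bytes) -> bytes:
    """Describe the same program as a (compiler, source) pair."""
    return COMPILER + program

p_small = b"algorithm A"
p_large = b"a much longer algorithm " * 1000

# The overhead of routing through the compiler is the same fixed
# constant for every program, regardless of the program's size:
overhead_small = len(compiled_description(p_small)) - len(direct_description(p_small))
overhead_large = len(compiled_description(p_large)) - len(direct_description(p_large))
assert overhead_small == overhead_large == len(COMPILER)
```

So however complex the compiler is, it contributes a bounded additive term; it can't inflate (or deflate) the intrinsic size of every algorithm expressed through it.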