My take is that mathematics provides rigorously defined, leak-proof primitives and operations. Most of the difficulty in porting a piece of math to a program lies in plugging the leaks in the abstractions provided by efficient machines. For example, infinite-precision real arithmetic (no overflows, no precision loss from floating-point rounding, etc.) is assumed in mathematics, but is devilishly hard to get right (and fast) on fixed-width machines. Obtaining a passphrase from the user in order to hash it is assumed in the algorithm for the hash function, but in reality doing so without compromising the rest of your system can be much harder than coding the hash function correctly.
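A tiny illustration of that floating-point leak, in Python (names here are just for illustration):

```python
# Mathematically, 1/10 + 2/10 == 3/10 exactly. In binary floating
# point, none of these values is representable exactly, and the
# rounding errors don't cancel, so the identity fails.
from fractions import Fraction

print(0.1 + 0.2 == 0.3)  # False: 0.1 + 0.2 is 0.30000000000000004
print(0.1 + 0.2)

# Exact rational arithmetic restores the "mathematical" behavior,
# at a real cost in speed and memory.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```

Exact types like Fraction (or arbitrary-precision decimals) plug the leak, but they are exactly the "slow" path that fixed-width hardware arithmetic was designed to avoid.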
Things like side-channel (e.g., timing) attacks add an orthogonal dimension of complexity to secure computing that simply doesn't exist (or is rightly elided) in the related math.
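As a concrete sketch of the timing dimension (the function names are mine, not from any particular library):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Mathematically correct equality test, but it returns as soon as
    # the first byte differs. An attacker who can measure response
    # times can therefore recover a secret (e.g., a MAC) one byte at
    # a time, even though the *logic* is flawless.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the
    # first mismatch occurs, so its timing leaks (almost) nothing
    # about the contents.
    return hmac.compare_digest(a, b)
```

Both functions compute the same mathematical predicate; only one of them is safe to expose to an adversary with a stopwatch, and nothing in the math hints at the difference.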
Proofs operate only in the domain of the pure and infinite. The glue logic that interfaces that with the real world is what makes it tricky to write secure code, especially when an error in any part of any program can compromise an entire system. (Better hardware-enforced isolation between pieces of code is possible today, and indeed used, but isn't pervasive yet because performance is still king, IMO.)