Note that this is the version from before Randall decided it was a good idea to write his own assembler, with a syntax completely different from anything else out there, and to rewrite his book around it. That's why I recommend this version and not the later ones.
MASM, MS-DOS... 32/64 bit? The hardest thing about learning x86 Assembly is finding a manual that deals mostly with the CPUs and OSes we use these days.
Since the x86 family spans more than three decades and is almost completely backwards-compatible, it makes sense to start at the beginning. The basic concepts don't change, and it's easier to see the rationale for a lot of things that would otherwise seem odd once you know the history behind them.
The latest PCs can still run DOS (often needed for BIOS updates), so the programs will still assemble and run. There is indeed a 640K limit, but even the 64K limit of a basic single-segment .COM file will seem like a huge amount to work with if you're just starting out with Asm; a "Hello World" program is around two dozen bytes. (Learning Asm really changes your perspective on things like efficiency - I've taught it to programmers who have only ever used high-level languages, and they are often surprised at the vast differences in scale. Ditto when showing them some nice 4k/64k productions from the demoscene.)
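To make the "two dozen bytes" claim concrete, here's a sketch of a minimal "Hello World" .COM program in NASM syntax (the filename and build command are just examples; it uses the standard DOS int 21h services):

```nasm
; hello.asm -- assemble with "nasm -f bin hello.asm -o hello.com",
; then run under DOS or DOSBox
org 0x100            ; .COM programs are loaded at offset 100h

start:
    mov dx, msg      ; DS:DX -> '$'-terminated string
    mov ah, 0x09     ; DOS function 09h: print string
    int 0x21
    mov ax, 0x4C00   ; DOS function 4Ch: exit, return code 0
    int 0x21

msg: db 'Hello, World!$'
```

That's 12 bytes of code plus the 14-byte string -- 26 bytes total, with no headers or linking step at all.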
Then once you learn the basics, it's not so hard to go up to 32, then 64 bits, and Windows/Linux environments.
No. While one can debate the pricing models of books in general, AFAIK the cost of producing an ebook is about equivalent to that of a regular book (they both have layout costs, and actually printing and even distributing books is so ridiculously cheap that it's absurd -- you'd think there would be a big difference, but there isn't). So a better question might be: is $50 normal for a high-quality, highly technical, low-volume, narrow-audience book? I'm afraid the answer (for book buyers, not publishers/authors) is: yes, quite.
I think you're too quick to dismiss the value of older texts. I first "got" assembly language from reading the infamous 6502.txt, which was probably somewhere between 10 and 20 years old at that point, and written about a totally obsolete (but still in widespread use to this day, believe it or not) 8-bit CPU. It didn't matter: learning about instructions and opcodes and memory maps and addressing modes was universally applicable.
Likewise, when I wanted to learn x86 assembly somewhat recently, the texts I learned the most from were written in the Windows 95 era, when the transition from 16 to 32 bit and from DOS to Windows was underway. It didn't hamper me at all, because while microarchitectures have changed considerably (and thus most optimization tips would be useless), the instruction set architecture has not been "broken," only added to. If you really want to "get" x86-64, you need to know how it evolved from 32 bit x86, and 16 bit x86 before that.
I think the older texts are actually superior for this purpose, because newer ones are missing a lot of historical context. A 64 bit-focused guide may tell you that accessing 16 bit values can be bad for performance, because any instruction on a 16 bit value in memory needs an extra prefix byte, and leave it at that. Maybe it won't even bother telling you that, and will work only at the assembly language level, since that's all that matters for debugging compiler-produced code. An older guide, from when DOS was still relevant, gives much more context: The x86 architecture was originally 16 bit, and a bit in the most common instruction opcodes indicated whether to operate on an 8 bit or 16 bit value. When expanding the architecture to 32 bits, the designers realized that 16 bit values would be needed much less often than 8 or 32 bit ones, and redefined that bit to select between 8 and 32 bit data sizes. A data size prefix byte was added to the architecture to give 32 bit code the option of operating on 16 bit values at the expense of an extra instruction byte (and, conversely, to give 16 bit code the option of using 32 bit data). That information is a lot more likely to stick in your head when you know the reason for it, instead of writing it off as just another quirk of this crusty, baroque architecture.
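You can see the size bit and the prefix byte directly in the encodings. Here is the same register-to-register move at each operand size, as assembled in a 32-bit code segment (NASM syntax; the bytes in the comments are the standard Intel encodings):

```nasm
mov eax, ebx      ; 89 D8     -- size bit set: 32-bit operands
mov ax,  bx       ; 66 89 D8  -- same opcode, 66h prefix overrides to 16-bit
mov al,  bl       ; 88 D8     -- size bit clear: 8-bit operands
```

In a 16-bit code segment the meanings flip: 89 D8 moves BX to AX, and the 66h prefix selects the 32-bit registers instead.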
Another reason to prefer older texts is that assembly language was relevant to a much wider range of programmers back then, when processors were slower, compilers were less advanced, and games and demos were built around tight assembly routines. As a result, more has been written about DOS and Win32 assembly than will probably ever be written about x86-64.
That said, don't waste too much time on BCD, segments, and near/far pointers ;)
EDIT: Here are some random resources I found valuable:
The Art of Picking Intel Registers: http://www.swansontec.com/sregisters.html (explains the original intended purpose of the various x86 registers. While you can ignore them and use most registers pretty much interchangeably in 32 and 64 bit x86, there are shorter encodings and special operations that can only be done with certain registers)
Agner Fog's software optimization resources: http://www.agner.org/optimize/ (extremely detailed manuals on modern x86 microarchitectures and optimizing code in C++ and assembly for them)
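On the first resource's point about shorter encodings and register-specific operations, a few concrete cases (NASM syntax, 32-bit code; the bytes in the comments are the standard Intel encodings):

```nasm
add eax, 1000     ; 05 E8 03 00 00    -- short accumulator-only form of ADD
add ecx, 1000     ; 81 C1 E8 03 00 00 -- any other register needs the
                  ;                      generic form with a ModRM byte
xchg eax, edx     ; 92                -- one-byte XCHG exists only with EAX
mul ebx           ; F7 E3             -- MUL implicitly uses EAX and EDX
```

So while the registers are mostly interchangeable, code that keeps hot values in the "intended" registers can come out smaller.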
It's a great resource; I just wish there was a version targeting amd64/Linux -- it's a much more pleasant target to write asm for than 32-bit x86 (not to mention that if you're targeting code running under "regular" OSes, it's more and more the relevant target).
I read this whole book while working a boring job as a student, and I have it all printed out on A4 paper in my desk.
I've always had a great interest in reverse engineering, so I thought learning assembly from the ground up would be a good start. However, I actually learned more about general concepts of how computers work at the lowest level than I did about actual assembly programming; probably because I lost focus the further I went into the book.
I've never really continued my journey into reverse engineering in great depth, but I'm curious how relevant this book is to the problems that are solved with assembly language today.
One problem that's solved with assembly today is optimizing code in C or C++. In order to do that, you'd certainly need to study documentation newer than this book, because you need to know the performance characteristics of (old) instructions on current CPUs and you need to know about new instructions, like SIMD instructions. Without that knowledge, you'd be lost, because you're trying to beat the compiler, which is already pretty good.
This would be a good foundation, though. Not everything has changed.
Thanks for posting this version! Does anyone know where one can find the resources listed in the book (standard libraries & source code)? The link ftp.cs.ucr.edu seems to be long-dead.
Also does anyone know if there is a solution manual available anywhere for this edition? Thank you!
This is a great find; looking forward to learning this stuff the right way.