
> when you return to the old systems after decades, and re-code things.

One of the things that fascinates me is the architectural trade-offs made to deal with the differences in memory and storage.

I grew up with a C64 and Amiga, and have in particular been delving into old Amiga software. And while the Amiga was a "big" computer compared to the C64 or Oric (which I only remember from the regular "doorstop" insults from C64 users...), you see the difference all over the place.

E.g. a symbolic disassembler that, instead of assuming, as we'd likely do now, that it could read everything into memory and build all kinds of structures to help disassemble things, would do two passes: one to identify code sections and one to attach labels and output the result.
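The two-pass approach can be sketched in a few lines. This is only an illustration of the idea, not real 680x0 decoding: the toy "ISA" below (JMP/LOAD/RET opcodes) is entirely made up. Pass one walks the code collecting branch targets; pass two re-walks it and emits a label wherever a collected target falls, so no intermediate structures beyond the target set are kept in memory.

```python
# Toy encoding (hypothetical, NOT real 680x0 opcodes):
#   0x01 addr -> JMP addr, 0x02 n -> LOAD n, 0x00 -> RET
code = [0x02, 7, 0x01, 6, 0x02, 9, 0x00]

def decode(pc):
    """Return (mnemonic, operand, is_branch, next_pc) for the toy ISA."""
    op = code[pc]
    if op == 0x01:
        return ("JMP", code[pc + 1], True, pc + 2)
    if op == 0x02:
        return ("LOAD", code[pc + 1], False, pc + 2)
    return ("RET", None, False, pc + 1)

# Pass 1: find every address that is the target of a branch.
targets = set()
pc = 0
while pc < len(code):
    _, operand, is_branch, pc = decode(pc)[1:4][0:1][0], decode(pc)[1], decode(pc)[2], decode(pc)[3]
    if is_branch:
        targets.add(operand)

# Pass 2: emit the listing, attaching a label where a target address falls.
lines = []
pc = 0
while pc < len(code):
    mnem, operand, is_branch, next_pc = decode(pc)
    label = f"L{pc}:" if pc in targets else "    "
    arg = f" L{operand}" if is_branch else (f" #{operand}" if operand is not None else "")
    lines.append(f"{label} {mnem}{arg}")
    pc = next_pc

print("\n".join(lines))
```

The point is that neither pass needs the whole program's worth of bookkeeping at once, only the (small) set of label addresses, which is what made the style workable on a machine with a few hundred KB of RAM.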

Or how cut and paste from the shell on the Amiga was structured so that writing the data to the clipboard happened in a separate task (thread/process) from the one you cut and pasted from, because your clipboard could be located on a floppy or other slow device. Most people would have it assigned to a directory in a RAM disk, but for people with only 256KB or 512KB RAM who wanted to be able to cut and paste "a lot" of data, it might very well be assigned somewhere weird, and even a hard disk might be extremely slow, and so there was otherwise a risk of locking up the UI.
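The same structure is easy to sketch with modern primitives. This is a minimal illustration of the pattern, not the actual AmigaOS clipboard.device API: the "UI task" hands the data to a queue and returns immediately, while a dedicated worker task does the (possibly very slow) device write.

```python
# Sketch: clipboard writes happen in a separate task so a slow
# clipboard volume (floppy, slow hard disk) can't block the UI.
import threading
import queue
import time

clip_queue = queue.Queue()
written = []  # stands in for the clipboard volume

def clipboard_writer():
    """Worker task: drains the queue and writes to the slow device."""
    while True:
        data = clip_queue.get()
        if data is None:          # shutdown sentinel
            break
        time.sleep(0.05)          # stand-in for a slow floppy/disk write
        written.append(data)      # ...the actual device write goes here...
        clip_queue.task_done()

writer = threading.Thread(target=clipboard_writer)
writer.start()

def cut(data):
    """Called from the UI task: returns at once, the write happens elsewhere."""
    clip_queue.put(data)

cut("a lot of text copied from the shell")
clip_queue.join()                 # demo only; the real UI would keep running
clip_queue.put(None)
writer.join()
```

The design choice is the same either way: the latency of the storage backing the clipboard is unknown at the call site, so the write must not run on the thread that services user input.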

I come away from processes like that thinking about how wasteful a lot of modern development is. Of course most of the time it doesn't matter. But when it does, a lot of modern developers have simply never been exposed to styles of development that would help them conserve resources.

(And for a lot of development it does matter greatly. E.g. it annoys me greatly that my Android phone, with a CPU several hundred times faster than my Amiga (the comparison being particularly interesting because the Amiga at the time was in competition with the Acorn Archimedes range, running one of the earliest ARM CPUs at around the 8MHz mark, and it was a pretty even match), a screen resolution and bits per pixel not more than about 10-15 times higher, and read/write speeds even to the slow flash of current phones still ten times higher than the 20MB hard disk I had back then, is still substantially more sluggish even for basic user interface updates when nothing much is running...)



Fully with you, except I LOL'ed about one thing: an assembler that only makes TWO passes? Ha hah! ;)

Yeah, I'm learning new tricks and have a newfound appreciation for those wireheads who always seemed hell-bent on optimizing the crap out of code that I had already deployed .. definitely, returning to the 8bit mindset has made me a much, much better programmer. Whereas 'good enough' works most of the time, 95%, I've developed another sense, maybe born from operating at the MHz level, for when things could be 'just a little bit tighter', and I think that all came from a return to the Oric-1 and 6502 assembly ..

And yeah, totally with you on the Android phone being slow point of view. It infuriates me to no end to return to an Android project after 2 or 3 years and realize "oh shit, I have to be responsible for ALL of this crappy pile of code, just to get something up on the screen". It's one of the reasons I moved on from pure 100% native development to developing with things like MOAI (which I love) .. one set of code that runs on iOS/Android/Mac/Win/Linux/Chrome/&etc. is far better than having to have a full repository for each platform, different languages, different text files, all for the same purpose: putting a button up on the screen, or whatever.

A return to the 8bit scene can give even the most proud, arrogant developer a reality adjustment to just how devolved we have become .. I yearn for a mobile platform that ships with its own compiler, or multi-pass assembler, at the very least .. ;)



