imtringued's comments | Hacker News

Even if you land a job in your field, you will find that academia is behind industry in some respects and decades ahead of industry adoption in others, and both of these mean you won't make much use of the skills you learned at university.

Teaching how computer hardware works is pretty smart. There is no need to do it in depth though.

Writing assembly is probably completely irrelevant. You should still know how programming language concepts map to basic operations though. Simple things like struct field offsets, calling conventions, function calls, dynamic linking, etc.


> Writing assembly is probably completely irrelevant.

ffmpeg disagrees.

More broadly, though, it’s a logical step if you want to go from “here’s how PN junctions work” to “let’s run code on a microprocessor.” There was a game up here yesterday about building a GPU, in the same vein of nand2tetris, Turing Complete, etc. I find those quite fun, and if you wanted to do something like Ben Eater’s 8-bit computer, it would probably make sense to continue with assembly before going into C, and then a higher-level language.


I agree. In the beginning, I let the AI do all of the work and merely verified that it did what I wanted, but then I started running into token limits. For the first two weeks I was honestly just waiting for the limit to refresh. Having put in so little effort myself, writing code without the agent felt like a waste of time.

By week three the overall structure of the code base was done, but the actual implementation was lacking. Whenever I ran out of tokens I just started programming by hand again. As you keep doing this, the code base becomes ever more familiar to you, until you reach the point where you tear down the AI scaffolding where it is lacking and keep it where it makes no difference.


Anthropic is still getting weekly memory-leak reports, with memory leaking at a rate of 61 GB/h, and all of them are being closed automatically as duplicates.

I personally haven't tried Claude Code because I can't install it on my PC. I'm starting to get the impression that they banned non-Claude products from using their subscription because their own products are of such poor quality that everyone is fleeing them.


I don't know what Ubuntu is doing with the RAM, but on my work laptop running Ubuntu I'm constantly swapping with all 16 GB of RAM filled.

At home I have a desktop running Arch plus GNOME with 32 GB of RAM, and I'm at 7 GB on a normal day and below 16 GB at all times unless I run an LLM.


That reminds me of the restaurant using its liquidity to prepay its dry aged steak supplier.

https://commoncog.com/cash-flow-games/#3-pre-payments-in-the...

Liquidity is expensive. Selling carriers one at a time is like a retail business that is expected to hold stock. If you don't build up an inventory to sell from and only sell one unit at a time, you have to mark up the price to cover the cost of the factory while it sits idle.


There is a comment by sophisticles that fundamentally misunderstands the cost of an endian swap.

It costs nothing other than having separate instructions for the different endian types.

The reason is that at the transistor level a byte swap takes exactly zero transistors, since all you are changing is the order in which the wires are connected.

Forcing software to deal with the pain of big-endian support in exchange for saving a nonexistent hardware cost is such a bad trade that it's on the same level of stupidity as not applying a clear coat to a car and then expecting the owner to wax it frequently to prevent the inevitable rust.


It doesn't take zero transistors; at the very least you need a multiplexer to choose between the two encodings. But such a mux is less than 10 transistors per bit, a rounding error for any modern CPU.

Not just that. If you store a lower-precision type, e.g. a 2-byte integer, inside a larger type, e.g. an 8-byte integer, you can read the 2-byte integer back by reading just two bytes at the same address. Extending or shrinking data types this way leads to a very natural implementation of arbitrary-precision arithmetic. To get the same capability with big endian, your pointer has to point at the end of the number. If you then wanted byte arrays, like strings, to behave consistently, you would have to swap the order in which you allocate data, starting from the end of the array and always decrementing your index from the array pointer. The same would apply to structs: you would point at the end of the struct and subtract the field offsets.

Overall this seems like a pretty weird choice on a planet where the vast majority of text is written from left to right and only numbers are written right to left. Especially since endianness only affects byte order, not bit order, as you said.


The primary benefit touted for big-endian is "When I do a memory dump, the data looks right."

But if you really believe the left side is bigger, why do you put the smaller memory address on the left side of your dump?


This is probably one of the worst analogies you could have brought up in this context.

The business model of an ISP involves fixed capital investment in infrastructure, constant opex, and very little variable cost.

The marginal cost of sending a gigabyte is basically zero. The limited resource here is bandwidth and ISPs split their tiers based on bandwidth.

The problem is that some users may saturate the local bandwidth that is shared with other users, and more bandwidth requires more investment in infrastructure. So bandwidth consumed doesn't produce costs for the ISP either; it is the maximum bandwidth capacity that costs money.

Hence, oversubscription is a viable business as long as neighbors aren't impacted by power users.

This doesn't apply to LLMs. Tokens have the same economics as steel: there is high capex to get started, but the real killer is the variable cost per unit.

You can't sell steel on an oversubscribed subscription model. It's nonsensical.

If the subscription is more expensive than buying what you need, nobody is going to pay for the subscription unless they consume all of it.

Hence the subscription must contain a subsidy to make it competitive.

However, the people who consume the full subscription are still there and each token they request adds up on your electricity bill.

Ergo, the subscription must be more expensive than the API, but with a smart billing limit that removes the cognitive burden of using your service with pay as you go billing.


You're mixing up the words engineer and professional.

A professional can still be a mere subordinate who just follows orders.

I don't know why it's so popular to conflate the words engineer and developer, to the point where simonw dropped the most important word, "software", and started calling AI-assisted software development "agentic engineering", which is the most absurd oxymoron you can come up with.

The person prompting for code delegates the majority of decision making to the AI. This is the antithesis of engineering. Hence the operator cannot be the "engineer"; at best the AI can be the "engineer", if it is smart enough.

The word engineering implies a task with trade-offs, guarantees, and expectations about the finished product. The vast majority of software isn't important enough for anyone to know its specifications, or even what features it should have, ahead of time. You throw something at the wall and see what sticks. "Agentic engineering" just accelerates the process of throwing things onto the market.

Then there is the fact that "engineering" has become a euphemism for software and nothing else. Anything physical is excluded from the start.

Finally "agentic engineering" implies that you're engineering the agent, but you're not doing that either. You're just a user who set up a sandbox and is letting the AI loose.


Engineers are only one type of professional: doctors, lawyers and accountants are also professionals who have obligations to their profession before their obligation to their employer.

The title 'software developer' is correct. We are not engineers and we are not professionals. Pretending otherwise is a grasp for unearned status.


I believe software developers don't have any kind of paperwork to be considered professionals. Professionalism is a kind of attitude to begin with and can be tied to your conscience and moral compass.

Any paperwork certifying this is just a label and external anchor. In essence, it starts from within.

