
This text lacks information about why it is being sunset.


Indeed. It's weird they write so much without addressing the elephant in the room.

So let's discuss it...

From the start I thought that the TechEmpower Benchmarks were testing all the metrics the JVM is good at, and none that the JVM is bad at (mainly: memory usage, start-up time, container size). I got the idea back then that they were a JVM shop (could not confirm this on their current website).

Lately the JVM contenders are no longer at the top. And the benchmark contains many contenders with highly optimized implementations that do not reflect real-life use.


Maintaining something like this is probably a little bit stressful.

We all know some of us take our language and framework choices as seriously as religion. I wouldn't be surprised if there was a lawsuit involved.


A third that will never have a wife.


That's not how it works. A similar fraction of women are apparently looking for just such an arrangement.

Dunno how we fix this. I suspect it can't be fixed.


"Group A wants X. Group B wants X. That needs fixing." In the name of freedom of choice, I imagine?


You say the same about neo-nazis?


Take a deep breath, then let me know if you still want your false equivalence to be treated as a reply.


Well, to be fair, strict patriarchal values have been the backbone of far-right movements.


The second paragraph disagrees with you.


And if it were true, Darwin would've already taken care of the problem.

Fact is, it's not true. The survey, whose authors work in a field where a 50% replication rate justifies breaking out the champagne, clearly didn't represent tens of millions of white and Latinx women in religious communities.

These women are almost literally bred to submit... and I say 'almost' only to contrast the recent past with the not-too-distant future. Who do you think put Trump over the top?


> We would immediately build better telescopes to track it precisely, refine its trajectory models, and begin developing propulsion systems capable of interception

That's not what would happen. We wouldn't mobilize. We'd fragment. Within days, the prediction would be declared partisan. One bloc would call it settled science; another would call it statistical hysteria. Billionaires would quietly commission private shelters while publicly funding studies questioning whether the asteroid even qualified as "large." News panels would debate whether the projected impact zone was being unfairly politicized. Conspiracy channels would insist the asteroid was fabricated to justify global governance. Others would insist the real asteroid was being hidden. Amateur analysts would flood the internet with homemade trajectory charts proving the professionals wrong. Death threats would arrive in astronomers' inboxes faster than research grants.


The film "Don't Look Up" is very similar to what you describe.


With a statically compiled language it is usually culled through dead-code elimination (DCE), and with static linking you don’t ship entire libraries.


The technology to cull code can work for dynamic languages too, even though it does get difficult sometimes (Google Closure Compiler[1] does dead code elimination for JS, for example). It's just that most dynamic language users don't make the attempt (and you end up with tools like Dependabot giving you thousands of false positives due to the deep dependency tree).

[1] https://github.com/google/closure-compiler


The question is how many decades each user of your software would have to use it in order to offset, through the optimisation it provides, the energy consumption you burned through with LLMs.


When global supply chains are disrupted again, energy and/or compute costs will skyrocket, meaning your org may be forced to defer hardware upgrades and LLMs may no longer be cost effective (as over-leveraged AI companies attempt to recover their investment with less hardware than they'd planned.) Once this happens, it may be too late to draw on LLMs to quickly refactor your code.

If your business requirements are stable and you have a good test suite, you're living in a golden age for leveraging your current access to LLMs to reduce your future operational costs.


In the past week I had 4 different tasks that would have run my M4 at full tilt for a week each; with just a few prompts they were optimized down to 20 minutes. So it's more like an hour to pay off, not decades. An average Claude invocation is ~0.3 Wh, and an M4 uses 40-60 W, so 24 × 7 × 40 Wh ≫ 0.3 Wh × 10 prompts.
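To make that inequality concrete, here is the commenter's arithmetic spelled out (the 50 W draw, 0.3 Wh per prompt, and 10 prompts are the comment's own rough estimates, not measured values):

```go
package main

import "fmt"

func main() {
	const laptopWatts = 50.0 // assumed M4 draw at full tilt (comment cites 40-60 W)
	const weekHours = 7 * 24 // the original week-long run
	const promptWh = 0.3     // assumed energy per Claude invocation
	const prompts = 10.0     // "just a few prompts"

	laptopWh := laptopWatts * weekHours // energy of the unoptimized week
	llmWh := promptWh * prompts         // energy spent prompting the LLM
	fmt.Printf("laptop week: %.0f Wh, prompts: %.0f Wh, ratio ~%.0fx\n",
		laptopWh, llmWh, laptopWh/llmWh)
}
```

With these numbers the week-long run costs roughly 8,400 Wh against ~3 Wh of prompting, a factor of a few thousand, which is why the payoff is measured in hours rather than decades.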


Would it be that many? I asked an AI to do a rough calculation, and it spat out this:

Making 50 SOTA AI requests per day ≈ running a 10W LED bulb for about 2.5 hours per day

Given I usually have 2-3 lights on all day in the house, that's like 1,500 LLM requests per day (which is quite a bit more than I actually make).

So even a month's worth of requests for building some software doesn't sound like that much. Having a beefy local traditional build server compiling or running tests for 4 hours a day would be like ~7,600 requests/day.
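A sketch of how those request counts fall out of the quoted figure. The 0.5 Wh/request is derived from "50 requests ≈ a 10 W LED for 2.5 h"; the ~950 W build-server draw is back-derived here from the 7,600 figure and is an assumption, not a sourced number:

```go
package main

import "fmt"

func main() {
	const whPerRequest = 25.0 / 50.0 // 0.5 Wh: (10 W × 2.5 h) spread over 50 requests

	lightsWh := 3 * 10.0 * 24 // three 10 W LEDs on all day
	serverWh := 950.0 * 4     // assumed ~950 W build server running 4 h

	fmt.Printf("lights ≈ %.0f requests/day\n", lightsWh/whPerRequest)
	fmt.Printf("server ≈ %.0f requests/day\n", serverWh/whPerRequest)
}
```

That gives ~1,440 requests/day for the lights (the comment rounds to 1,500) and 7,600 for the build server.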


> Making 50 SOTA AI requests per day ≈ running a 10W LED bulb for about 2.5 hours per day

This seems remarkably far from what we know. I mean, just to run the data centre aircon will be an order of magnitude greater than that.


Air conditioning for a whole data center services a whole data center, not one machine running a task for 1 min


Yes... But the machines in those data centres don't get there without the companies who put them there. You get no tasks for no minutes, without the infrastructure, and so the infrastructure does actually have to be part of the environmental impact survey.


Is that true? Because that's indeed FAR less than I thought. That would definitely make me worry a lot less about energy consumption (not that I would go and consume more but not feeling guilty I guess).


An H100 uses about 1000 W including networking gear and can generate 80-150 t/s for a 70B model like Llama.

So back of the napkin, for a decently sized 1000-token response you're talking about 1000 W × (8 s / 3600 s/h) ≈ 2.2 Wh, which even in California is about $0.001 of electricity.
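Checking that napkin math (the 125 t/s is the midpoint of the quoted 80-150 t/s range, and $0.30/kWh is an assumed California retail rate):

```go
package main

import "fmt"

func main() {
	const watts = 1000.0       // H100 plus networking gear, per the comment
	const tokens = 1000.0      // a decently sized response
	const tokensPerSec = 125.0 // midpoint of the quoted 80-150 t/s
	const usdPerKWh = 0.30     // assumed California retail electricity rate

	seconds := tokens / tokensPerSec // ~8 s of generation
	wh := watts * seconds / 3600     // energy in watt-hours
	fmt.Printf("%.1f Wh, ~$%.4f\n", wh, wh/1000*usdPerKWh)
}
```

So ~2.2 Wh and well under a tenth of a cent per response, consistent with the comment's $0.001 figure.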


With batched parallel requests this scales down further. Even a MacBook M3 on battery power can do inference quickly and efficiently. Large scale training is the power hog.


I’m not really worried about energy consumption. We have more energy falling out of the sky than we could ever need. I’m much more interested in saving human time so we can focus on bigger problems, like using that free energy instead of killing ourselves extracting and burning limited resources.


Especially considering that suddenly everyone and their mother creates their own software with LLMs instead of using the almost-perfect-but-slightly-non-ideal software others wrote before.


Our railways don't need sabotage - trains fail to run anyway.


Yeah, but what about the electricity sabotage in Berlin, the drones over airports, etc.?


Drones also harassed Danish airports IIRC.


True, but I was talking about cut fiber-optic cables along train tracks.


So why do people still design declarative languages?


OP is not being very precise (and in a way that I don't think is helpful). There is nothing imperative in an if expression. Declarative languages can be Turing complete. Declarative languages are a subset of programming languages.


Wishful thinking? Maybe they are tired of all this and want to make something good again, and so the cycle continues.


If you can mostly stick to the declarative way, it's still a benefit. No Turing-complete language completely prevents you from writing "bad" code. "You are not completely prevented from doing things that are hard to understand" is a bad argument. "You are encouraged to do things that are hard to understand" is a good one (looking at you, Perl).


> So why do people still design declarative languages?

Cost.

If money were no object, you would only hire people who can troubleshoot the entire stack, from React and SQL all the way down to machine code and using an oscilloscope to test network and power cabling.

Or put another way, it would be nice for the employer if your data analyst who knows SQL also knew C and how to compile Postgres from scratch, so they could fully debug why their query doesn’t do what they expect. But that’s a more expensive luxury.

Good software has declarative and imperative parts. It’s an eternal tradeoff whether you want the convenience of those parts being in the same codebase, which makes it easier to troubleshoot more of the stack, but that leads to hacks that break the separation. So sometimes you want a firm boundary, so people don’t do workarounds, and because then you can hire cheaper people who only need to know SQL or React or CSS or whatever, instead of all of them.


It’s the cycle of newcomers to <field> looking at the existing solutions and declaring “this shit is too complicated, why did these morons design it this way? Check out my DSL that does everything and is super simple!”

Then time passes, edge cases start cropping up and hacks are bolted on to accommodate them. Eventually everything struggles under the weight of not having loops, conditionals, etc. and those are added.

After some time, the cycle begins anew.


> Octals must start with zero and then o/O literals.

No, the o/O is optional (hence in square brackets), only the leading zero is required. All of these are valid octal literals in Go:

0600 (zero six zero zero)

0_600 (zero underscore six zero zero)

0o600 (zero lower-case-letter-o six zero zero)

0O600 (zero upper-case-letter-o six zero zero)
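A quick playground-style check that all four spellings parse and denote the same value (requires Go 1.13+, which introduced the 0o/0O prefix and underscore separators):

```go
package main

import "fmt"

func main() {
	// All four forms match the spec production
	//   octal_lit = "0" [ "o" | "O" ] [ "_" ] octal_digits .
	// and all denote decimal 384.
	fmt.Println(0600)  // leading zero only
	fmt.Println(0_600) // leading zero plus underscore separator
	fmt.Println(0o600) // 0o prefix
	fmt.Println(0O600) // 0O prefix
}
```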


My bad! I was wrong; I added a playground demonstration of the parsing behavior above.


You're looking at the wrong production. They are octal literals:

    octal_lit      = "0" [ "o" | "O" ] [ "_" ] octal_digits .


Thanks! I never considered that a 21st-century language designed for "power of two bits per word" hardware would keep that feature from the 1970s, so I never looked at that production.

Are there other modern languages that still have that?


The values for x and y shouldn't come from your brain, though (with the exception of 0). They should come from previous index operations like s.indexOf(...) or s.search(regex), etc.


Indeed. Or s.length, whatever that represents.
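The thread's examples are JavaScript (s.indexOf, s.search, s.length); the same idea sketched in Go, with strings.Index playing the role of s.indexOf — slice bounds come from searches, not hand-counted offsets:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	s := "key=value;rest" // hypothetical input, just for illustration
	i := strings.Index(s, "=")
	j := strings.Index(s, ";")
	// Both bounds were found by searching, so the slice stays correct
	// even if the key or value changes length.
	if i >= 0 && j > i {
		fmt.Println(s[i+1 : j]) // the text between '=' and ';'
	}
}
```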

