Would be good to not depend on the US that much any longer, since they have proven to be such an unreliable "partner". Even in a non-Trump future one cannot rely upon some future election not resulting in some similar disaster. Better to pull out, before some hothead gets weird ideas about that gold.
Maybe the fact that US soldiers and military bases exist inside Germany's borders is slightly more important than where the gold is. First regain your sovereignty, I'd say.
I am guessing that these bases are one of the last things to go. Would be a major diplomatic incident. But then again Trump creates those for breakfast, so who knows when we finally have had enough.
I don't get that "Strait" discussion. Where does the Strait begin and end? If the US Navy somehow "opens" the Strait, what stops Iran from attacking every ship moving toward the Strait? Where does the "protection zone" start and end?
Much further than that. At least 200nm using drone ISR to cue Shaheds, 500nm with satellite ISR. (With a 90kg warhead.) There are also many fishing vessels in the region, originating from a number of countries (e.g. Oman, Iran, Pakistan), which can report sightings of VLCCs.
Once you have sighted the ship, it is an undergrad project to implement target classification and recognition using off-the-shelf algorithms. It doesn't need a fast GPU because naval engagements are very slow; a cheap mobile phone can do it.
How many innocent fishermen are you willing to murder? And of course, the famine in Balochistan that would follow. Maybe not a great idea if you want an uprising of the Balochi against the Persians.
Oman is a regional ally, but they would not stand idly by while their citizens are killed.
Agency:
"Social Security initially denied Borges’s allegations and said the data referenced in his complaint is stored in a secure environment walled-off from the internet."
Ah, walled off from the internet, so no one can get there and copy the data to a flash drive. Move on, move on!
If I recall, that was exactly what happened early on in DOGE's tenure. Senior personnel were explicitly directed to grant admin access to DOGE personnel, and auditing/logging were disabled. This was widely reported at the time. I don't remember whether there were threats of termination, but it would not surprise me.
The "fun" thing was when some agencies started then seeing access attempts from Russian IPs sometimes as soon as 15 minutes after this happened, using credentials that were valid and created by/for DOGE people...
Honest question. Why isn't stuff like this a bigger deal? Why isn't anyone being held accountable for what is undeniably a national security incident?
I can understand why the administration would try to bury it. But I wouldn't have heard of most of the shitty stuff DOGE employees have done were it not for HN. Why isn't this getting more media coverage?
Right? And many of the DOGE people who were outed were shown/known/had convictions for being involved in cybercrime gangs and such. I get it, in a controlled manner, for some cybersecurity jobs, but even at face value, that had nothing to do with what DOGE was doing.
Unfortunately it seems quite believable. This is the same outfit that fired a bunch of people responsible for overseeing the US Nuclear Arsenal. [0] The combination of arrogance and stupidity was breathtaking.
Contemporaneous reporting was that DOGE people demanded root-level access across multiple systems (disallowed by federal policy, so political appointees had to demand the access) and without background checks or onboarding, after which they extracted protected data and shoved it in some S3 buckets. Just blew a hole right through the entire federal data protection model; you can't plan for "the President orders everyone to ignore all privacy and security controls" as a threat model.
It was absolutely a secure environment prior to DOGE laying waste to all the layers of security in place. Presumably those safeguards are now back in place post-DOGE razing.
While it's hard to overestimate the clownishness of this administration, I'd want to see the original wording of this denial before concluding that they said something that stupid, versus the author of this article paraphrasing it in a stupid manner. I'm not sure if this is what they're referring to, but the only response from the SSA that I found with a brief search doesn't say anything so foolish: https://dailycaller.com/2025/09/02/social-security-administr...
Nothing nerve-wracking like that, but come on. Claiming "the information could not have been stolen because of the security practices" while "evidence has been published online, is now available to anyone and therefore it is dangerous" is a clown situation. It doesn't matter how it happened; it happened. Trying to dispute the method is clown camp.
The agency's statement says that PII is secure but that the complaint included internal emails and documents with info about the agency's systems and employees. That's not contradictory.
I suspect the whistleblower is correct, but I don't think it's proven to the point where we can confidently state that "it happened." SSA isn't trying to dispute the method, they're trying to dispute the fundamental claim.
Hard disagree. How can it be “walled off” from the internet if it’s not connected? Despite the jokes, cutting access on its own is not the same as air gapping or a firewall. As soon as it’s plugged in there are zero controls.
One thing I love about Go: no piling on fancy latest-hype features until the language collapses or every upgrade becomes a nightmare, just adding useful stuff and getting out of the way.
I know, I recently upgraded and skipped several releases without any issues with some large codebases.
The compatibility guarantee is a massive win. It's so exciting to have a boring language to build on that doesn't change much but just gradually gets better.
Really? My experience is that of C, C++, Go, Python, and Rust, Go BY FAR breaks code most often. (except the Python 2->3 change)
Sure, most of that is not the compiler or standard library, but dependencies. But I'm not talking random opensource library (I can't blame the core for that), but things like protobuf breaking EVERY TIME. Or x/net, x/crypto, or whatever.
But also yes, from random dependencies. It seems that language-culturally, Go authors are fine with breaking changes. Whereas I don't see that with people making Rust crates. And multiple times I've dug out C++ projects that I have not touched in 25 years, and they just work.
The stdlib has been very very stable since the first release - I still use some code from Go 1.0 days which has not evolved much.
The x/ packages are more unstable yes, that's why they're outside stdlib, though I haven't personally noticed any breakage and have never been bitten by this. What breakage did you see?
I think protobuf is notorious for breaking (but more from user changes). I don't use it I'm afraid so have no opinion on that, though it has gone through some major revisions so perhaps that's what you mean?
I don't tend to use much third party code apart from the standard library and some x libraries (most libraries are internal to the org), I'm sure if you do have a lot of external dependencies you might have a different experience.
Well, for C++ the backwards compatibility is even better. Unless you're using `gets()` or `auto_ptr`, old C++ code either just continues to compile perfectly, or was always broken.
Sure, the Go standard library is in some sense bigger, so it's nice of them to not break that. But short of a Python2->3 or Perl5->6 migration, isn't that just table stakes for a language?
The only good thing about Go is that its standard library has enough coverage to do a reasonable number of things. The only good thing. But any time you need to step outside of that, it starts a bit-rotting timer that ticks very quickly.
> though [protobuf] has gone through some major revisions so perhaps that's what you mean?
No, it seems it's broken way more often than that, requiring manual changes.
> But any time you need to step outside of that, it starts a bit-rotting timer that ticks very quickly.
This is not my experience with my own or third party code. I can't remember any regressions I experienced caused by code changes to the large stdlib at all in the last decade, and perhaps one caused by changes to a third party library (sendgrid, who changed their API with breaking changes, not really a Go problem).
A 'bit-rotting timer' isn't very specific or convincing, do you have examples in mind?
> I can't remember any regressions I experienced caused by code changes to the large stdlib at all in the last decade
I agree. But I'm saying it's a very low bar, since that's true for every language. At the risk of repeating myself, I do acknowledge that Go in some senses has a bigger standard library. It's still just table stakes to not break the stdlib.
> A 'bit-rotting timer' isn't very specific or convincing, do you have examples in mind?
I don't want to dox myself by digging up examples. But it seems that maybe half the time dependabot or something encourages me to bump versions on a project that's otherwise "done", I have to spend time adjusting to non-backwards-compatible changes.
This is not my experience at all in other languages. And you would expect it to be MORE common in languages where third party code is needed for many things that Go stdlib has built in, not less.
I've made and maintained opensource code continuously since years started with "19", and aside from Java Applets, everything else just continues to work.
> sendgrid, who changed their API with breaking changes, not really a Go problem
To repeat: "It seems that language-culturally, Go authors are fine with breaking changes".
To repeat: "It seems that language-culturally, Go authors are fine with breaking changes". I just chose x as examples of near-stdlib, as opposed to appearing to complain about some library made by some random person with skill issues or who had a reasonable opinion that since almost nobody uses the library, it's OK to break compat. Protobuf is another. (not to mention the GCP libraries, that both break and move URLs, and/or get deprecated for a rewrite every Friday)
The standard library not breaking is table stakes for a language, so I find it hard to give credit to Go specifically for table stakes.
And it's not like the Go standard library isn't a bit messy, as any library would be in order to maintain compatibility. E.g. net.Dialer has Timeout (and Deadline), but it also has DialContext, introduced later.
If the Go standard library had managed to maintain table stakes compatibility without collecting cruft, that'd be more impressive. But as those are contradictory requirements in practice, we shouldn't expect that of any language.
I initially loved Zed because it was so much snappier than VSCode/Cursor, but running several Zed instances alongside Claude made my Ryzen/32GB machine unusable, because Zed seems to be such a memory hog. Not using it anymore.
(Win11)
Most software that the companies I worked for put into production was not verified. There were spotty code reviews, mostly focusing on "I would have done it differently", a limited number of unit tests with low coverage, and Heisenberg E2E tests, often turned off because, well, Heisenberg. Plus sometimes overworked testers acting as a bottleneck.
There is hope that with AI we get to better tested, better written, better verified software.
> There is hope that with AI we get to better tested, better written, better verified software.
And that is one thing we certainly won't get.
This tech, in a different world, could be empowering common people and take some weight from their shoulders. But in this world its purpose is quite the opposite.
I thought a Studio would be my local LLM machine for 2026, but this is $2000+ for the 126gb option - not for me. I assumed $6000 for that Studio machine, but it now looks more like $8000.