Codex is pretty good at Rust with x86 and ARM intrinsics too; it replaced a bunch of hand-written C/assembly code I was using. I'll try Qwen and Kimi on this kind of task too.
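Not my actual code, but a minimal sketch of the pattern this kind of Rust looks like: runtime feature detection, a `#[target_feature]` function using `std::arch` intrinsics, and a portable scalar fallback (function names here are my own):

```rust
// Sums a slice of f32s, using SSE2 on x86_64 when available and
// falling back to plain scalar code everywhere else.
fn sum_f32(xs: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("sse2") {
            // Safety: we just verified SSE2 is available at runtime.
            return unsafe { sum_f32_sse2(xs) };
        }
    }
    xs.iter().sum()
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse2")]
unsafe fn sum_f32_sse2(xs: &[f32]) -> f32 {
    use std::arch::x86_64::*;
    let chunks = xs.chunks_exact(4);
    let rem = chunks.remainder(); // leftover elements handled scalar
    let mut acc = _mm_setzero_ps();
    for c in chunks {
        // Unaligned load of 4 lanes, accumulated vertically.
        acc = _mm_add_ps(acc, _mm_loadu_ps(c.as_ptr()));
    }
    // Spill the 4 lanes and reduce horizontally.
    let mut lanes = [0f32; 4];
    _mm_storeu_ps(lanes.as_mut_ptr(), acc);
    lanes.iter().sum::<f32>() + rem.iter().sum::<f32>()
}
```

The nice property is that the `cfg`/feature-detection split keeps the code portable to ARM (where the scalar path runs, or a NEON variant can be added) without a separate C toolchain.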
(GP here) It's true that we need to be cautious about continuing to exercise our thinking muscles, undoubtedly. Of course you can do that without using legacy techniques, but to each their own.
Agree (redundant, not obsolete), but there are better tools for the job in terms of the production value gained relative to the time and mental energy spent mastering them. You can certainly think less as tech becomes more powerful in a domain; I wouldn't advise that either.
A YubiKey can require touch, and Secretive for the Apple Secure Enclave can require touch with fingerprint ID. Some people disable these; it depends on your exact use case.
Yes, but what's to stop a malicious actor from intercepting a signature request and substituting its own contents for the legitimate one? Yes, you would find out when your push was rejected, but that would be a bit late.
I am writing an S3 server, and I just checked: I have detailed tests for content-length-range. I found that Ceph was the only open source implementation with decent tests, and I ported those as the first stage of my implementation, although I have since added a lot more. Notionally rustfs say they use the Ceph test suite, but I'm not sure how often or how completely; they certainly had big conformance gaps.
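For context (my own illustrative sketch, not code from any of the servers mentioned): `content-length-range` is the S3 POST-policy condition written as `["content-length-range", min, max]`, which bounds the uploaded object's size inclusively. The check itself is simple; the conformance work is in parsing and enforcing it consistently:

```rust
/// The S3 POST-policy condition ["content-length-range", min, max]:
/// the uploaded object's size must fall within [min, max], inclusive.
/// (Illustrative type; the field and method names are my own.)
struct ContentLengthRange {
    min: u64,
    max: u64,
}

impl ContentLengthRange {
    /// Returns true if an upload of `content_length` bytes satisfies
    /// the policy condition.
    fn allows(&self, content_length: u64) -> bool {
        self.min <= content_length && content_length <= self.max
    }
}
```

Edge cases around the inclusive bounds (exactly `min` bytes, exactly `max` bytes, zero-byte uploads with `min = 0`) are exactly the sort of thing a good conformance suite pins down.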
Jujutsu is not "VC funded". But some of the developers, including me, work at East River Source Control (I worked on Jujutsu before that, too). The majority of the code in the project doesn't come from us -- or Google, for that matter. We don't allow people to approve patches when the author is from the same company, anyway.
If I remember correctly, jj is one guy who works at Google. Which presents a separate worry, which is that one day, when jj gets popular enough, Google will consume it, make it shit, change the name of it every six months and then shut it down.
That hasn't really been the case for a while imo: Martin works at Google and is paid to work on jj (there are also other Google employees who contribute, not sure whether they're paid to). jj is in use (wide use? No idea) alongside Google's internal tool (piper) with which it can interact (and with which it has some features in common) because jj has a pluggable backend architecture.
While I hate to engage in speculation, tell spooky stories, or screech at people about the evil CLA you have to sign in order to contribute, my personal opinion is that if Google were ever to start throwing their weight around, the project would be forked in short order and development would continue as normal – it has momentum, plenty of non-Google contributors, and a community. It's also not a product per se, though as we're about to find out, you can certainly build products on top of it – that probably makes it less likely for its current home to suddenly become proprietorial about it.
Good points. I had a horrible vision of a git -> GitHub -> Microsoft -> GitHub-on-Azure style pipeline but yeah, I think there's enough good people involved around jj that your vision is probably more likely. Also, hi Steph!
jj is not "one guy who works at Google" and the vast majority of submitted code comes from non-Google developers. Even if Google were to stop developing jj (they won't) the project would be healthy and strong.
There are some legal annoyances around e.g. the CLA, which is a result of jj originally being a Google side project. Hopefully we'll move past that in due time. But realistically it's a much larger project at this point and has grown up a lot; it's not Martin's side project anymore.
Software security heavily favours the attacker (e.g. it's much easier to find a single vulnerability than to patch every vulnerability). Thus, even with better tools and ample time to reach a steady state, we would expect software to remain insecure.
If we think in the context of LLMs, why is it easier to find a single vulnerability than to patch every vulnerability? If the defender and the attacker are using the same LLM, the defender will run "find a critical vulnerability in my software" until it comes up empty and then the attacker will find nothing.
Defenders are favored here too, especially for closed-source applications where the defender's LLM has access to all the source code while the attacker's LLM doesn't.
This is only true if your approach is security through correctness. This never works in practice. Try security through compartmentalization. Qubes OS provides it reasonably well.
That generally makes sense to me, but I wonder if it's different when the attacker and defender are using the same tool (Mythos in this case)
Maybe you just spend more on tokens by some factor than the attackers do combined, and end up mostly okay. Put another way, if there's 20 vulnerabilities that Mythos is capable of finding, maybe it's reasonable to find all of them?
"Most security tooling has historically benefitted defenders more than attackers. When the first software fuzzers were deployed at large scale, there were concerns they might enable attackers to identify vulnerabilities at an increased rate. And they did. But modern fuzzers like AFL are now a critical component of the security ecosystem: projects like OSS-Fuzz dedicate significant resources to help secure key open source software.
We believe the same will hold true here too—eventually. Once the security landscape has reached a new equilibrium, we believe that powerful language models will benefit defenders more than attackers, increasing the overall security of the software ecosystem. The advantage will belong to the side that can get the most out of these tools. In the short term, this could be attackers, if frontier labs aren’t careful about how they release these models. In the long term, we expect it will be defenders who will more efficiently direct resources and use these models to fix bugs before new code ever ships."
Thunderbolt is its own protocol, electrically incompatible with PCIe. Its purpose is to encapsulate data traffic from multiple other protocols (PCIe, DisplayPort, USB) and multiplex that over the same wires. It cannot function exactly like an external PCIe port because it's solving a bigger, more complicated problem.