The untrusted-header problem could be fixed by having the reverse proxy embed all the trusted information in a single dedicated header; then it only has to make sure that that one header is stripped from incoming requests. Unfortunately, there isn't (yet) a standard for that.
Or you could use something like HAProxy's PROXY protocol (although that may not carry all the information you want, and it doesn't work with multiplexing).
Edit: actually, the "Forwarded" header (RFC 7239) more or less fills that niche, although you may want extensions for things like the client certificate.
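To illustrate the point about stripping: a rough sketch of what a trusted proxy has to do with that one header. The function name here is made up for illustration; the `for=`/`proto=` parameters are the ones RFC 7239 actually defines.

```python
def build_forwarded(headers: dict[str, str], client_ip: str, proto: str) -> dict[str, str]:
    """Return headers safe to pass to the backend (illustrative sketch)."""
    # Drop any client-supplied "Forwarded" header -- it is untrusted
    # and could otherwise spoof the client address.
    safe = {k: v for k, v in headers.items() if k.lower() != "forwarded"}
    # Re-add the header using only values the proxy itself observed.
    safe["Forwarded"] = f'for="{client_ip}";proto={proto}'
    return safe

hdrs = build_forwarded(
    {"Forwarded": 'for="6.6.6.6"', "Host": "example.com"},
    client_ip="203.0.113.7", proto="https",
)
print(hdrs["Forwarded"])  # for="203.0.113.7";proto=https
```

The key property is that the backend never sees a "Forwarded" value the proxy didn't construct itself, which is exactly the "make sure that one header is stripped" requirement above.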
FastCGI has "parameters", and HTTP headers are passed as special parameters starting with "HTTP_" (mimicking CGI's environment variables). Any parameter not starting with "HTTP_" can be trusted, because only the web server (i.e. the FastCGI client) can set it.
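The name-mangling convention is trivial, which is what makes the trust boundary easy to see. A minimal sketch (the function name is made up; the mangling rule itself is the standard CGI one):

```python
def header_to_param(name: str) -> str:
    # CGI/FastCGI convention: uppercase, '-' becomes '_', prefix "HTTP_".
    return "HTTP_" + name.upper().replace("-", "_")

# Client-supplied headers always land under the HTTP_ prefix:
print(header_to_param("X-Forwarded-For"))  # HTTP_X_FORWARDED_FOR

# So a bare parameter like REMOTE_ADDR can only have been set by the
# web server itself. (One classic caveat: CGI defines CONTENT_TYPE and
# CONTENT_LENGTH without the prefix even though they come from the
# client, so "no prefix means trusted" has those two exceptions.)
```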
> I feel like MS went out of its way to make a point that GitHub and NPM would be independent orgs that no longer had to worry about making keep-the-lights-on money
A lot of companies say that when they acquire another. It may even hold true for a few years, and may even be the genuine intention of the people who made the acquisition, but it usually doesn't last.
Kamala Harris bragged about enforcing this law against parents in California. That's the only reason I know it's an actual law that gets enforced, because I had never heard of laws like this before, and I grew up in US public schools in the South.
I agree with the general idea, but I would like this header to be more fine-grained than a binary "adult or not". For example, it could distinguish content that is age-appropriate for teenagers and older from content that is suitable for all ages.
I'm not sure if you're aware, but in American English, "con artist" is another term for a scammer: someone who does "cons", short for "confidence tricks" (or "confidence schemes"), where you gain someone's confidence in order to take advantage of them in some way, usually through financial fraud.
¯\_(ツ)_/¯ I'm me. 's not likely to change is it? I could color inside the lines and hope it buys me a nice life, but if I was that kind of person I would never have tackled this insane of a project
> Those screenshots and videos are taking up space SOMEWHERE,
Sure, but there is a big difference between the data being stored once (modulo backups) on a central server and every developer needing to download all the resources for every issue, plus the entire wiki, just to work on the code at all. It works fine for SQLite because they only have a handful of developers, so it's not a big deal for each of them to have their own local copy of everything. But having to download gigabytes of issue and wiki data in order to make a pull request (however that would work in Fossil) or otherwise contribute is a significant barrier to entry.
Not without having a degraded git experience like shallow clones, or using hacks like LFS or Xet, and then you're back at the initial problem of depending on "something else besides your repo".
Part of the problem is that Fossil is very opinionated. It's great if your development flow is similar to that of the SQLite team, but it is very difficult to make it work for other workflows. In particular, Fossil is designed for use by small teams and isn't really designed for large organizations. This is even explicitly mentioned in the "Fossil versus Git" page (https://fossil-scm.org/home/doc/trunk/www/fossil-v-git.wiki).
That's fine as long as the company can decide it doesn't like those terms and refuse to do business. But in this case the government threatened to classify Anthropic as a "supply chain threat" if it didn't agree to the government's terms, and then carried out that threat.