The vagaries of the dual licensing discourage a lot of teams working on commercial projects from kicking the tires on CodeQL, and generally hinder adoption for private projects as well: are there any plans to change the licensing in the future?
The Jenkins vitriol is also puzzling to me: I think the security model, reliability, and backup/restore story have gotten seismically better in the decade since people wrote it off.
Respectfully, neither of these docs strikes me as really sufficient to debug live, running systems in the critical path for paying users. The first seems to be about the inner development loop and local debugging; the second is again how to attach gdb to debug something in a controlled environment.
Crash reporting, telemetry, useful queuing/saturation measures, or a Rosetta Stone of “we look at X today in system- and app-level telemetry; in the <unikernel system> world we look at Y (or don’t need X for reason Z)” would be more in the spirit of parity.
Systems are often somewhat “hands off” in more change-control-sensitive environments too. These guides presume full access, line-of-sight connectivity, and an expert operator, which are three unsafe assumptions in larger production systems IMO.
You can expose unikernel application metrics in the Prometheus text exposition format at an HTTP `/metrics` endpoint and collect them with Prometheus or any other collector that can scrape Prometheus-compatible targets. Alternatively, you can push metrics from the unikernel to a centralized metrics database for further investigation. Both pull- and push-based metrics collection are supported by popular metrics client libraries such as https://github.com/VictoriaMetrics/metrics .
You can emit logs from the unikernel app and send them to a centralized log database via the syslog protocol (or any other protocol) for further analysis. See, for example, how to set up collecting logs via the syslog protocol in VictoriaLogs - https://docs.victoriametrics.com/victorialogs/data-ingestion...
You can expose various debug endpoints over HTTP from the unikernel application for debugging assistance. For example, if the application is written in Go, it is recommended to expose endpoints for collecting CPU, memory, and goroutine profiles from the running application.
I was able to set up SigNoz in on the order of five minutes to view traces in my Dagger builds locally, just by exporting the right env vars; it was nice to not have to run and orchestrate three+ tools together.
I don't recall either CORBA or SOAP ever seeing enough penetration to look "eternal" as mainstream tech goes (obviously, and especially with SOAP, there's still plenty of enterprise use). Unlike XML and JSON.
I hear you, but I am not aware of anyone who tried XMLHttpRequest.send('<s:Envelope xmlns:s...') or its '<methodCall>' friend from the browser. I think that's why they cited "of the web" and not "of RPC frameworks".
Eternal or not, right now JSON is used everywhere, which means the performance gains of a more optimized format would be significant. Just because we don’t know if JSON will be around in 10 years doesn’t mean we should settle for burning extra compute on it.
MongoDB | Site Reliability Engineer | Full-time | Remote (US-based)
MongoDB Atlas is a multi-cloud developer data platform that simplifies how developers work with data.
We’re hiring a Site Reliability Engineer to join our Developer Infrastructure team. Our team supports the broader SRE organization by:
- Building and maintaining container images, OS packages, and infrastructure tooling
- Managing internal Terraform workflows
- Handling cloud resource provisioning across AWS, GCP, and Azure
This is a full-time, remote role for candidates based in the US. Prior familiarity with Bazel, Go, Terraform, Argo, and GitHub Actions would be helpful.
If you’re interested, apply at https://www.mongodb.com/careers/jobs/6168913 (ignore the JD and mention your interest in the "SRE DevInfra" team in your cover letter/note, and the talent partner will route it to the team).
Kaiser in Oakland is without exaggeration the best medical care I’ve ever experienced. Aligning incentives between the care provider and the insurer, vertically integrating care, and putting it all on a walkable campus (even with a pharmacy!!) made for such an efficient and pleasant process.
I was never healthier. The other Kaisers in Oregon aren’t geographically collocated so there’s less of an effect and they’re far away from me so I don’t use them anymore, sadly