Hacker News | godisdad's comments

You can’t post the GTA “here we go again” gif on HN or I would


The vagaries of the dual licensing discourage a lot of teams working on commercial projects from kicking the tires on CodeQL, and generally hinder adoption for private projects as well: are there any plans to change the licensing in the future?


The Jenkins vitriol is also puzzling to me. I think the security model, reliability, and backup/restore story have gotten seismically better in the decade since people wrote it off.


Respectfully, neither of these docs strikes me as really sufficient for debugging live running systems in the critical path for paying users. The first seems related to the inner development loop and local development; the second is again about how to attach gdb to debug something in a controlled environment.

Crash reporting, telemetry, useful queuing/saturation measures, or a Rosetta Stone of “we look at X today in system- and app-level telemetry; in the <unikernel system> world we look at Y (or don’t need X for reason Z)” would be more in the spirit of parity.

Systems are often somewhat “hands off” in more change-control-sensitive environments too. These guides presume full access, line-of-sight connectivity, and an expert operator, which are three unsafe assumptions in larger production systems IMO.


You can expose Unikernel application metrics in Prometheus text exposition format at a `/metrics` HTTP endpoint and collect them with Prometheus or any other collector that can scrape Prometheus-compatible targets. Alternatively, you can push metrics from the Unikernel to a centralized metrics database for further investigation. Both pull-based and push-based metrics collection are supported by popular metrics client libraries such as https://github.com/VictoriaMetrics/metrics .

You can emit logs from the Unikernel app and send them to a centralized log database via the syslog protocol (or any other protocol) for further analysis. See, for example, how to set up collecting logs via the syslog protocol with VictoriaLogs - https://docs.victoriametrics.com/victorialogs/data-ingestion...

You can expose various debug endpoints over HTTP from the Unikernel application for debugging assistance. For example, if the application is written in Go, it is recommended to expose endpoints for collecting CPU, memory, and goroutine profiles from the running application.


Looking forward to the AI-enabled subscription version


I was able to set up SigNoz in about five minutes to view traces from my Dagger builds locally, just by exporting the right env vars — it was nice to not have to run and orchestrate three+ tools together


> As JSON is such an important part of the web nowadays, it deserves to be treated with more care.

There is a case to be made here, but CORBA, SOAP, and XML-RPC likely looked similarly sticky and eternal in the past


I don't recall either CORBA or SOAP ever seeing enough penetration to look "eternal" as mainstream tech goes (obviously, and especially with SOAP, there's still plenty of enterprise use). Unlike XML and JSON.


They surely were, for anyone doing enterprise during the 2000's.

We had no plans to change to something else.


I recall a lot of talk about CORBA in early 00s, but I don't think I've actually ever seen it used anywhere outside of Gnome.

By late 00s, even the talk was more along the lines of it being legacy tech.


Several Nokia Networks products were based on CORBA, running on HP-UX, in a mix of C++ and Perl.

Eventually migrated to Java EE, also taking advantage of CORBA compatibility.


Q3 is still in C++. Huawei also still supports CORBA.


I hear you, but I am not aware of anyone that tried XMLHttpRequest.send('<s:Envelope xmlns:s...') or its '<methodCall>' friend from the browser. I think that's why they cited "of the web" and not "of RPC frameworks"


No, because that was the server's job on the endpoint.


Eternal or not, right now JSON is used everywhere, which means the performance gains of a more optimized parser would be significant. Just because we don’t know if JSON will be around in 10 years doesn’t mean we should settle for burning extra compute on it.


MongoDB Atlas is a multi-cloud developer data platform that simplifies how developers work with data.

We’re hiring a Site Reliability Engineer to join our Developer Infrastructure team. Our team supports the broader SRE organization by:

- Building and maintaining container images, OS packages, and infrastructure tooling

- Creating self-service Terraform workflows

- Handling cloud resource provisioning across AWS, GCP, and Azure

This is a full-time, remote role for candidates based in the four continental US timezones. Prior familiarity with Bazel, Go, Terraform, Argo workflows and GitHub Actions would be helpful.

If you’re interested, apply at https://www.mongodb.com/careers/jobs/6711510


MongoDB | Site Reliability Engineer | Full-time | Remote (US-based)

MongoDB Atlas is a multi-cloud developer data platform that simplifies how developers work with data.

We’re hiring a Site Reliability Engineer to join our Developer Infrastructure team. Our team supports the broader SRE organization by:

- Building and maintaining container images, OS packages, and infrastructure tooling

- Managing internal Terraform workflows

- Handling cloud resource provisioning across AWS, GCP, and Azure

This is a full-time, remote role for candidates based in the US. Prior familiarity with Bazel, Go, Terraform, Argo, and GitHub Actions would be helpful.

If you’re interested, apply at https://www.mongodb.com/careers/jobs/6168913 (ignore JD and mention your interest in the "SRE DevInfra" team in your cover letter/note and the talent partner will route it to the team)

There are other roles in the same organization listed here if you search for 'New York': https://www.mongodb.com/company/careers/teams/engineering


Hi! Thanks for posting this. I just applied for this position through the MongoDB careers website. Any chance we can connect on LI? Thanks.


Kaiser in Oakland is without exaggeration the best medical care I’ve ever experienced. Aligning incentives between the care provider and the insurer, vertically integrating care and putting it all on a walkable campus (even with a pharmacy!!) was such an efficient and pleasant process.

I was never healthier. The other Kaisers in Oregon aren’t geographically co-located, so there’s less of an effect, and they’re far away from me so I don’t use them anymore, sadly.

