I've chatted a bit with the author, but not actually tried the language. It looks very interesting, and a clear improvement. I'm not particularly quiet about not liking Go[1].
I do think there may be a limit to how far it can be improved, though. For example, typed nil means that a variable of an interface type (say, coming from pure Go code) should really enter Lisette as Option<Option<http.Handler>>. Sure, one can match on Some(Some(h)) to avoid two unwrapping steps, but it's still a bit awkward. (Note: this double-Option is not a thing in Lisette, at least as of now.)
Lisette also doesn't remove the need to call defer (as opposed to RAII) in the very awkward way Go requires — e.g. de facto requiring a double close on any file opened for writing.
TypeScript helps write JavaScript, but that's because until WASM there was no other language that could actually run in the browser. So even TypeScript would be a harder sell today, now that WASM is an option. Basically, why try to make Go more like Rust when Rust is right there? Fair enough, the author may be aiming for somewhere in between. And then there's the issue of existing codebases; not everything is greenfield.
So this seems best suited for existing Go codebases, or when one (for some reason) wants to use the Go runtime (which sure, it's at least nicer than the Java runtime), but with a better language. And it does look like a better language.
So I guess what's not obvious to me (and I mentioned this to the author) is what's the quick start guide to having the next file be in Lisette and not Go. I don't think this is a flaw, but just a matter of filling in some blanks.
> Basically, why try to make Go more like Rust when Rust is right there?
Go gives you access to a compute- and memory-efficient concurrent GC that has few or no equivalents elsewhere. It's a great platform for problem domains where GC is truly essential (fiddling with spaghetti-like reference graphs), even though you're giving up the enormous C-FFI ecosystem (unless you use Cgo, which is not really Go in a sense) due to the incompatibilities introduced by Go's weird user-mode stackful fibers approach.
> Basically, why try to make Go more like Rust when Rust is right there?
The average developer moves a lot faster in a GC language. I recently tried building a chatbot in both Rust and Python, and even with some Rust experience I was much faster in Python.
No doubt a chatbot would be built faster in a less strict language. It wasn't until I started working on larger Python codebases (written by good programmers) that I went "oh no, now I see how this is not an appropriate language".
Similar to how even smaller problems are better suited for just writing a bash script.
When you can hold the whole program in your head, you don't need the guardrails that prevent problems. Similar to how it's easy to keep track of object ownership with pointers in a small and simple C program. But there's some size beyond which you can no longer say "there are no dangling pointers in this C program" (and it's probably smaller than the size at which Python becomes a problem).
My experience writing TUIs has been much better in Rust than in Go. Though to be fair, the Go TUI libraries may have improved a lot by now, since my Go TUI experience predates my playing with Rust's ratatui.
I've also found that traversing a third-party codebase in Python is extremely frustrating and requires lots of manual work (with PyCharm) whereas with Rust, it's just 'Go to definition/implementation' every time from the IDE (RustRover). The strong typing is a huge plus when trying to understand code you didn't write (and I'm not talking LLM-generated).
Only in the old "move fast and break things" sense. RAII augmented with modern borrow checking is not really any syntactically heavier than GC, and the underlying semantics of memory allocations and lifecycles is something that you need to be aware of for good design. There are some exceptions (problems that must be modeled with general reference graphs, where the "lifecycle" becomes indeterminate and GC is thus essential) but they'll be quite clear anyway.
> Only in the old "move fast and break things" sense
No, definitely not only in that sense. GC is a boon to productivity no matter how you slice it, for projects of all sizes.
I think the idea that this is not the case perhaps stems from the fact that Rust specifically has a better type system than Java specifically, so that becomes the default comparison. But not every GC language is Java. They don't all have lax type systems where you have to tiptoe around nulls. Many are quite strict and are definitely not "move fast and break things" types of languages.
> Go was not satisfied with one billion dollar mistake, so they decided to have two flavors of NULL
Thanks for raising this kind of thing in such a comprehensible way.
Now what I don't understand is that TypeScript, even if it was something to make JavaScript more bearable, didn't fix this! TS is even worse in this regard. And yet no one seems to care in the NodeJS ecosystem.
<selfPromotion>That's why I created my own Option type package in NPM in case it's useful for anyone: https://www.npmjs.com/package/fp-sdk </selfPromotion>
TypeScript tried to accurately model (and expose to language services) the actual behavior of JS with regards to null/undefined. In its early days, TypeScript got a lot of reflexive grief for attempting to make JS not JS. Had the TS team attempted to pave over null/undefined rather than modeling it with the best fidelity they could at the time, I think these criticisms would have been more on the mark.
ReasonML / Melange / Rescript take a holistic approach to this: the issue with stapling an option or result type onto TypeScript is that your colleagues and LLMs won't use it (ask me how I know).
Your readme would really benefit from code snippets illustrating the library. The context it currently contains is valuable but it’s more what I’d expect at the bottom of the readme as something more like historical context for why you wrote it.
Yup, in my TODO list (I've only recently published this package). For now you can just check the tests, or a SO answer I wrote a while ago (before I published the idea as an npm package): https://stackoverflow.com/a/78937127/544947
Golang does have a lot of weird flaws/gotchas, but as a language target for a compiler (transpiler) it's actually pretty great!
Syntax is simple and small without too many weird/confusing features, it's cross platform, has a great runtime and GC out of the box, "errors as values" so you can build whatever kind of error mechanism you want on top, green threading, speedy AOT compiler. Footguns that apply when writing Go don't apply so much when just using it as a compile target.
I've been writing a tiny toy functional language targeting Go and it's been really fun.
Go's defer is generally good, but it interacts weirdly with error handling (a huge wart in Go's language design) and has surprising scoping rules (function-scoped instead of block-scoped).
Does Go actually have an async story? I know that question risks starting a semantic debate, so let me be more specific.
Go allows creating lightweight threads to the point where it's a good pattern to just spin off goroutines left and right to your heart's content. That's more of a concurrency primitive than async. Sure, you combine it with a channel, and you've created an async future.
The explicit passing of contexts is interesting. I initially thought it would be awkward, but it works well in practice. Except, of course, when you need to call a blocking API that doesn't take a context.
And in environments where you can run a multitasking runtime, that's pretty cool. Rust's async is more ambitious, but has its drawbacks.
Go's concurrency story (I wouldn't call it an async story) is way more yolo, as is the rest of the Go language. And in my experience that Go yolo tends to blow up in more hilarious ways once the system is complex enough.
To be fair, Go’s async story only works because there’s a prologue compiled into every single function that says “before I execute this function, should another goroutine run instead?” and you pay that cost on every function call. (Granted, that prologue is also used for other features like GC checks and stack size guards, but the point still stands.) Languages that aspire to having zero-cost abstractions can’t make that kind of decision, and so you get function coloring.
I'm not sure this is 100% correct. I haven't researched it, but why would they perform such a check at runtime if it 1) is material and 2) can be done at compile time? However, even if they do, Go is only trying to be medium fast/efficient, in the same realm as its garbage-collected peers (Java and C#).
If you want to look at Rust peer languages though, I do think the direction the Zig team is heading with 0.16 looks like a good direction to me.
> why would they perform such a check at runtime if it 1) is material and 2) can be done at compile time
It can’t be done at compile time because it’s a scheduler. Goroutines are scheduled in userland, they map M:N to “real” threads, so something has to be able to say “this thread needs to switch to a different goroutine”.
There are two ways of doing this:
- Signal-based preemption: Set an alarm (which requires a syscall) that will interrupt the thread after a timeout, transferring control to the goroutine scheduler
- Insert a check to see if a re-schedule needs to happen, in certain choice parts of the compiled code (ie. At function call entry points.)
Golang used to do only the second one (and you can go back to this behavior with GODEBUG=asyncpreemptoff=1); it's why there was a well-known issue where, if a goroutine entered an infinite loop and never called any functions, other goroutines would be starved. They fixed that in Go 1.14 by adding the signal-based preemption above, but it's layered on top of the second approach.
Granted, the prologue needs to happen anyway, because Go needs to check on every function call whether the stack needs to grow. So there's basically a "hook" installed in this prologue — a single branch saying "if the scheduler needs to switch, jump there now." It works roughly like an atomic bool the scheduler writes to when it needs to re-schedule a goroutine: setting it causes the function to jump into the scheduler.
Go has done a lot of work to make all of this fast, and you’re right that it only aspires to be a “medium-fast” language, and things like mandatory GC make these sort of prologues round to zero in the scheme of things. But it’s something other languages are fully within their rights to avoid, is my point (and it sounds like you agree.)
It sounds like you know about this / have researched it. Are you saying that any Go function, even func add(x, y int) int { return x + y }, is going to have such overhead in all situations? Why wouldn't Go just inline this when it can? It seems like such an obvious optimization.
If Go chooses to inline a function, then it doesn't need to add the prologue to the inlined code, no. The prologue applies to all functions that remain after inlining is done.
There’s also functions that can be marked as “nosplit” that skip the prologue as well.
But otherwise, it has to be in every function, because you might be one byte away from the top of Go's (small) stack when you call that simple add function, and if the prologue isn't run the stack will overflow. Go has tiny stacks by default that grow on demand, with this prologue serving as the "do I need to split/grow the stack?" check — so every function needs it. The scheduler hook is just a single branch within that prologue, so it adds little on top of a prologue you're paying for anyway.
[1] https://blog.habets.se/2025/07/Go-is-still-not-good.html