I suppose what’s impressive is that (with the author’s help) it did ultimately get the port to work, in spite of all the caveats described by the author that make Claude sound like a really bad programmer. The code is likely terrible, and the 3.5x speedup is probably way low compared to what it could be, but I guess these days we’re supposed to be impressed by quantity rather than quality.
We must have a different definition of arbitrary. OP ran 2.3 million tests comparing random battles against the original implementation? Which is probably what you or I would do if we were given this task without an LLM.
Well, I cloned the repo and cannot generate this battle test by following the instructions. It appears a required file called dex.js is not present, among other things, plus other suspicious oddities for what appears on the surface to be a well-organized project.
I'm very suspicious of such projects, so take this for what you will, but I don't have time to debug some toy project. If it was presented as complete but the instructions don't work, that's a red flag for the increasingly AI-slop internet to me.
I'm saying I think they may have used one simple trick called lying.
Lego blocks are how I like to think about software components... They may not be the perfect shape you need, but you can iterate fast. In fact, my favorite software development model is just to iterate on your Lego blocks until the app you need is some trivial combination of your blocks.
Ok, maybe someone here can clear this up for me. My understanding of B+trees is that they are good for implementing indexes on disk because the fanout reduces disk seeks... what I don't understand is in-memory B+trees... which most of the implementations I find are. What are the advantages of an in-memory B+tree?
You use either container when you want a sorted associative map type, which I have not found many use cases for in my work. I might have a handful of them versus many instances of vectors and unsorted associative maps, i.e. absl::flat_hash_map.
Reverse-mode differentiation? No, it can't be that natural, since it took until 1970 to be proposed. But it is also, in a sense, basic (which you could also guess, since it was introduced in an MSc thesis).
Most of us who are somewhat into the tech behind AI know that it's all based on simple matrix math... and anyone can do that... So "inevitabilism" is how we sound, because we see that if OpenAI doesn't do it, someone else will. Even if all the countries in the world agree to ban AI, it's not based on something with actual scarcity (like purified uranium, or gold), so someone somewhere will keep moving this tech forward...
> Even if all the countries in the world agree to ban AI, it's not based on something with actual scarcity (like purified uranium, or gold), so someone somewhere will keep moving this tech forward...
However, this is the crux of the matter! At issue is whether or not one believes people (individually and/or socially) have the ability to make large decisions about what should or should not be acceptable. Worse -- a culture with _assumed_ inevitability concerning some trend might well bring forth that trend _merely by the assumed inevitability and nothing else_.
It is obvious that the scales required to make LLM-style AI effective require extremely large capital investments and infrastructure, and that at the same time there is potentially a lot of money to be made. Both of those aspects -- to me -- point to a lot of "assumed inevitability," in particular when you look at who is making the most boisterous statements and for what reasons.
Integrating my time series database (https://github.com/dicroce/nanots) as the underlying storage engine in my video surveillance system, and the performance is glorious. Next up, I'm trying to decide between a mobile app or AI... and if AI, whether to run it locally or in the cloud.
Holy shit, is this the squatting man? (Strangely similar stick-figure cave drawings dating to the same timeframe all over the world, apparently reproduced in high-energy plasma experiments.)