Hacker News | dicroce's comments

If you know about the distribution of keys you can do even better by factoring that knowledge into where you split.
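A minimal sketch of what that could look like, assuming you can sample keys up front (the function name and the quantile parameter are illustrative, not from any particular system): instead of splitting at the midpoint of the key space, split at an empirical quantile of observed keys, which balances the two sides by count even when the distribution is skewed.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: pick a range-partition split key from a sample of
// observed keys at a given quantile. At quantile 0.5 this is the median,
// so both partitions get roughly equal numbers of keys regardless of skew.
int split_key(std::vector<int> sample, double quantile) {
    assert(!sample.empty() && quantile > 0.0 && quantile < 1.0);
    std::size_t k =
        static_cast<std::size_t>(quantile * (sample.size() - 1));
    // nth_element partially sorts: sample[k] ends up as the k-th smallest.
    std::nth_element(sample.begin(), sample.begin() + k, sample.end());
    return sample[k];
}
```

With a heavily skewed sample like {1, 1, 1, 1, 1, 100}, the midpoint of the key range would put almost everything on one side, while the median split keeps the partitions balanced.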

This is actually pretty incredible. Cannot really argue against the productivity in this case.


One possible argument against the productivity is if the migration introduced too many bugs to be usable.

In which case the code produced has zero value, resulting in a wasted month.


I suppose what’s impressive is that (with the author’s help) it did ultimately get the port to work, in spite of all the caveats described by the author that make Claude sound like a really bad programmer. The code is likely terrible, and the 3.5x speedup way low compared to what it could be, but I guess these days we’re supposed to be impressed by quantity rather than quality.


It's not. The project does not work or actually implement anything. It just compiles and passes some arbitrary tests the author wrote.


We must have a different definition of arbitrary. OP ran 2.3 million tests comparing random battles against the original implementation? Which is probably what you or I would do if we were given this task without an LLM.


Well, I cloned the repo and cannot generate this battle test by following the instructions. A required file called dex.js appears to be missing, among other suspiciously wrong things, for what looks on the surface like a well-organized project.

I'm very suspicious of such projects, so take this for what you will, but I don't have time to debug some toy project. If it was presented as complete but the instructions don't work, that's a red flag on the increasingly AI-slop internet to me. I'm saying I think they may have used one simple trick called lying.


Lego blocks are how I like to think about software components... They may not be the perfect shape you need but you can iterate fast. In fact my favorite software development model is just to iterate on your lego blocks until the app you need is some trivial combination of your blocks.


Ok, maybe someone here can clear this up for me. My understanding of B+trees is that they are good for implementing indexes on disk because the fanout reduces disk seeks... What I don't understand is in-memory B+trees, which most of the implementations I find are. What are the advantages of an in-memory B+tree?


https://github.com/abseil/abseil-cpp/blob/master/absl/contai... mentions that b-tree maps hold multiple values per node, which makes them more cache-friendly than the red-black trees used in std::map.

You use either container when you want a sorted associative map type, which I have not found many use cases for in my work. I might have a handful of them versus many instances of vectors and unsorted associative maps, i.e. absl::flat_hash_map.
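A toy illustration of the layout argument (not abseil's actual node structure): a b-tree node packs many keys contiguously, so one cache-line fetch serves several comparisons, whereas a red-black tree chases a fresh heap pointer for every single comparison.

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Toy b-tree node sketch. The fanout of 15 is illustrative: enough keys
// to fill roughly one or two cache lines, so the linear scan below stays
// entirely within memory that a single fetch already brought in.
struct BTreeNode {
    static constexpr std::size_t kFanout = 15;
    std::array<int, kFanout> keys{};
    std::size_t count = 0;
    // child pointers omitted in this sketch

    // Index of the first key >= `key`; all comparisons hit contiguous memory.
    std::size_t lower_bound(int key) const {
        std::size_t i = 0;
        while (i < count && keys[i] < key) ++i;
        return i;
    }
};
```

The same lookup in a red-black tree does one comparison per node, and each node is a separate allocation, so a tree of depth 20 can mean up to 20 cache misses versus a handful for the b-tree.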


Memory also has a seek penalty. It's called a cache miss penalty. It might be easier to think of them in general as penalties for nonlocality.
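You can make the nonlocality penalty visible with a sketch like this: sum the same elements once in sequential index order and once in a shuffled order. The helper below is illustrative; wrap each call in std::chrono timers on an array much larger than your last-level cache and the shuffled order is dramatically slower, even though both compute the identical result.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sum v's elements in the order given by `order`. With a sequential order
// the hardware prefetcher hides memory latency; with a shuffled order
// nearly every access is a cache miss once v outgrows the caches.
long long sum_in_order(const std::vector<long long>& v,
                       const std::vector<std::size_t>& order) {
    long long s = 0;
    for (std::size_t i : order) s += v[i];
    return s;
}
```

Same bytes, same arithmetic, very different wall-clock time — which is exactly the "seek penalty" framing.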


Exactly my thoughts.


They should have used HLS. It's still pulling, and the client controls the downshifts if required...
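For what it's worth, the client-side downshift logic HLS enables can be sketched roughly like this (the function name and the 20% headroom factor are illustrative, not from the spec): the master playlist advertises the variant bitrates, and the player just picks the best variant its measured throughput can sustain, falling back to the lowest rung as a floor.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative ABR pick: choose the highest advertised variant bitrate
// that fits within measured throughput minus some safety headroom.
int64_t pick_variant(const std::vector<int64_t>& bitrates_sorted_asc,
                     int64_t measured_bps) {
    assert(!bitrates_sorted_asc.empty());
    int64_t budget = measured_bps * 8 / 10;      // leave 20% headroom
    int64_t best = bitrates_sorted_asc.front();  // lowest rung as the floor
    for (int64_t b : bitrates_sorted_asc)
        if (b <= budget) best = b;
    return best;
}
```

Because the client pulls plain segments over HTTP and runs this decision itself, the server never has to track per-client state to handle congestion.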


Isn't it just kinda a natural thing once you have the chain rule?


Reverse-mode differentiation? No, it can't be that natural, since it took until 1970 to be proposed. But it's also basic in a sense (which you could also guess, since it was introduced in an MSc thesis).
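For the curious, the Wengert-list idea at the heart of reverse mode fits in a few lines. This is a hand-rolled toy sketch (add and mul only, not any library's API): record each primitive op's local partial derivatives on a tape during the forward pass, then sweep the tape backwards, applying the chain rule once per op.

```cpp
#include <cassert>
#include <vector>

// Toy reverse-mode autodiff tape. Each node stores the local partials of
// its value with respect to its (up to two) inputs, recorded forward;
// grad() then propagates adjoints backwards through the tape.
struct Tape {
    struct Node { double d_a, d_b; int a, b; };  // local partials + input ids
    std::vector<Node> nodes;

    int var()           { nodes.push_back({0, 0, -1, -1}); return (int)nodes.size() - 1; }
    int add(int a, int b) { nodes.push_back({1, 1, a, b}); return (int)nodes.size() - 1; }
    int mul(int a, int b, double va, double vb) {
        // d(a*b)/da = vb, d(a*b)/db = va
        nodes.push_back({vb, va, a, b}); return (int)nodes.size() - 1;
    }
    // Backward sweep: adjoint of the output is 1; chain rule per node.
    std::vector<double> grad(int out) {
        std::vector<double> adj(nodes.size(), 0.0);
        adj[out] = 1.0;
        for (int i = out; i >= 0; --i) {
            if (nodes[i].a >= 0) adj[nodes[i].a] += adj[i] * nodes[i].d_a;
            if (nodes[i].b >= 0) adj[nodes[i].b] += adj[i] * nodes[i].d_b;
        }
        return adj;
    }
};
```

For f(x, y) = x*y + x at x = 3, y = 4, the backward sweep yields df/dx = y + 1 = 5 and df/dy = x = 3 in a single pass, regardless of how many inputs f has — which is the whole point for gradient descent.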


yes


Most of us who are somewhat into the tech behind AI know that it's all based on simple matrix math... and anyone can do that... So "inevitabilism" is how we sound, because we see that if OpenAI doesn't do it, someone else will. Even if all the countries in the world agreed to ban AI, it's not based on something with actual scarcity (like purified uranium, or gold), so someone somewhere will keep moving this tech forward...
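That "simple matrix math" really is just multiply-accumulates. A naive sketch, for illustration — a transformer layer is, at its core, very many of these:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// Naive O(n^3) matrix multiply: C = A * B. The i-p-j loop order keeps the
// innermost accesses to B and C sequential, which is friendlier to cache.
Mat matmul(const Mat& A, const Mat& B) {
    std::size_t n = A.size(), k = B.size(), m = B[0].size();
    Mat C(n, std::vector<double>(m, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t p = 0; p < k; ++p)
            for (std::size_t j = 0; j < m; ++j)
                C[i][j] += A[i][p] * B[p][j];
    return C;
}
```

There's no secret ingredient here to embargo — the hard-to-get parts are data, capital, and compute, not the math.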


> Even if all the countries in the world agreed to ban AI, it's not based on something with actual scarcity (like purified uranium, or gold), so someone somewhere will keep moving this tech forward...

However, this is the crux of the matter! At issue is whether or not one believes people (individually and/or socially) have the ability to make large decisions about what should or should not be acceptable. Worse -- a culture with _assumed_ inevitability concerning some trend might well bring forth that trend _merely by the assumed inevitability and nothing else_.

It is obvious that the scales required to make LLM-style AI effective require extremely large capital investments and infrastructure, and that at the same time there is potentially a lot of money to be made. Both of those aspects -- to me -- point to a lot of "assumed inevitability," in particular when you look at who is making the most boisterous statements and for what reasons.


Integrating my time series database (https://github.com/dicroce/nanots) as the underlying storage engine in my video surveillance system, and the performance is glorious. Next up, I'm trying to decide between a mobile app or AI... and if AI, whether local or in the cloud?


Holy shit, is this the squatting man? (Strangely similar stick-figure cave drawings dating to the same timeframe all over the world, apparently reproduced in high-energy plasma experiments.)

https://medium.com/@rajkumarrr/history-mystery-the-squatting...

