I think you need the talk itself to understand what they meant.
I, for one, haven't heard the rumor that forEach is faster than a for loop, for example, or much of the other stuff.
There are also a lot of things that are true in some browsers but not others. If you want the fastest possible performance you probably need to optimize per engine (but that's overkill for most JavaScript applications).
The point of this test is to show that by creating your own custom forEach-lite or other native-lite implementations you can get better performance than the native methods.
Often you don't need to support all of the functionality and edge cases of the native method; dropping them lets you gain performance or custom behavior and save code, because you no longer need an ES5-compliant fallback.
Many libs, like Underscore.js, fork for native methods but could reduce code/gain speed if they settled for simpler methods.
Most devs don't care about supporting sparse arrays or running ToUint32 on the `length` value, and this lets us optimize for the common case.
Also, by implementing a custom method you can gain functionality like exiting early by explicitly returning `false` or method chaining.
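As a hedged sketch of what such a forEach-lite might look like (the name `each` and its exact semantics are illustrative, not taken from the slides):

```javascript
// Illustrative forEach-lite: it skips the spec's sparse-array membership
// test and ToUint32 coercion, and adds two extras native forEach lacks:
// early exit when the callback returns false, and chaining by
// returning the array.
function each(array, callback, thisArg) {
  for (var i = 0, len = array.length; i < len; i++) {
    if (callback.call(thisArg, array[i], i, array) === false) {
      break; // early exit, like `break` in a plain for loop
    }
  }
  return array; // returning the array enables chaining
}

// Usage: collect values until one reaches 2, then stop.
var seen = [];
each([1, 2, 3, 4], function (v) {
  seen.push(v);
  return v < 2; // returning false exits the loop
});
// seen is [1, 2]
```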
Well, `forEach` being faster than a for loop is more of a newbie misconception I think. It's not an unreasonable assumption, after all. In fact, looking at the `forEach` algorithm[1], it is not even immediately obvious why it would be so much slower than a for loop.
It seems that the killer is the membership test[2] at line 44. What I don't really understand is why `i in O`, when O is an array, would be so slow, and furthermore, given that it is that slow, why `forEach` uses it instead of a simple bounds check.
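A small illustration of what that membership test buys you (a sketch of standard ES5 semantics, not code from the linked source):

```javascript
// The `i in O` check is what makes native forEach skip holes in
// sparse arrays; a plain for loop with a simple bounds check visits them.
var sparse = [1, , 3]; // index 1 is a hole, not merely undefined

var nativeVisits = [];
sparse.forEach(function (v, i) { nativeVisits.push(i); });
// nativeVisits is [0, 2] -- the hole at index 1 is skipped

var loopVisits = [];
for (var i = 0; i < sparse.length; i++) { loopVisits.push(i); }
// loopVisits is [0, 1, 2] -- the bounds check visits everything
```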
Strongly agree. To me the majority of slides read like one big strawman; I didn't realise the myths were even myths. Or, in a couple of cases, what they attribute to performance concerns are things I'd attribute to good programming style for other reasons.
My eval tests said the same. I'm not sure whether that's indicative of a host-specific performance problem or a problem with eval itself.
Perhaps the problem I have is with this statement:
> "Eval is evil", or in other words, Eval is too slow and quirky to be considered useful.
That's misleading. While the performance of eval is perhaps getting better, eval is still evil for a number of reasons that aren't related to performance at all.
Eval breaks both the JIT[1] and garbage collection[2] (unless used very carefully). It also affects the Google Closure Compiler[3]. What is even better: it won't ever be fixed, at least not in terms of performance.
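One scope-related reason it can't really be fixed: a direct eval call can read the enclosing scope, so the engine must keep every local variable alive and can't freely optimize the function. A minimal sketch of the direct vs. indirect eval distinction (standard ECMAScript semantics, not code from the linked references):

```javascript
"use strict";

// Direct eval sees the caller's scope, so `secret` must stay reachable
// and the engine cannot optimize this function aggressively.
function direct() {
  var secret = 42;
  return eval("secret"); // reads the local variable
}

// Indirect eval runs in the global scope: locals stay invisible to it,
// so the surrounding function remains optimizable.
var indirectEval = eval;
function indirect() {
  var secret = 42;
  try {
    return indirectEval("secret"); // ReferenceError: not in scope here
  } catch (e) {
    return "not visible";
  }
}
```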
The same is true of `undefined`. You should use `void 0` rather than `undefined`, because `undefined` is just a normal variable that happens to be undefined unless somebody defines it, while `void 0` always evaluates to undefined.
There is absolutely no need to use `void 0`; you can simply use `typeof` or create a local `undefined` variable. Many libs do this in their IIFE: `(function(undefined) { /* your code */ }());`. The point was that there isn't a real-world performance concern to justify using one over the other.
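A quick sketch of those two alternatives (the names are illustrative):

```javascript
// 1. typeof never throws, even for undeclared identifiers, so it is a
//    safe test regardless of whether the global `undefined` was clobbered:
function isUndefined(value) {
  return typeof value === "undefined";
}

// 2. The IIFE trick: an omitted parameter is a local binding that is
//    guaranteed to be undefined and that outside code cannot overwrite.
var iifeResult = (function (undefined) {
  return typeof undefined === "undefined" && 1 !== undefined;
}());
// iifeResult is true
```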
> Some of the tests they link prove the opposite of what they say in the slides.
The content can't be seen without signing in and it would appear that I've forgotten the password for my fake-details account, so I can't look at the code specifically.
But I can say that it's not unexpected for one person's results to diverge significantly when someone else tries the same test. A lot of code rearrangements intended to speed up JavaScript execution are optimised for a particular browser (or, if people are being more thorough, a particular subset of browsers). What is optimal for Firefox's engine is not necessarily optimal for Chrome's, and performance metrics can vary significantly between versions of the same browser as the optimisations inside the interpreter/compiler change over time. Heck, even exactly the same version of Firefox may behave differently performance-wise on Linux than on Windows.
You can never say that a given "trick" is an optimisation in general, because there are many variables. Unless, of course, what you are actually doing is optimising your logic rather than rearranging it (e.g. changing from something akin to a linear search to something with logarithmic growth), but that isn't optimising JavaScript; it's optimising the algorithm you've implemented in the language.
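To illustrate the distinction, a generic sketch (not tied to any browser): the win below comes from the algorithm's growth pattern, not from rearranging code for a particular engine.

```javascript
// O(n): scan every element until we find the target.
function linearSearch(sorted, target) {
  for (var i = 0; i < sorted.length; i++) {
    if (sorted[i] === target) return i;
  }
  return -1;
}

// O(log n): halve the search range each step (the array must be sorted).
function binarySearch(sorted, target) {
  var lo = 0, hi = sorted.length - 1;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1;
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}
// Both return the index of `target`, but binarySearch does far fewer
// comparisons on large arrays -- an algorithmic win in any engine.
```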
> Also, saying that eval is not evil is evil.
There are very specific circumstances in which eval() can be used with relative safety [i.e. when the only injection vector is direct code manipulation, and all client-side JavaScript is susceptible to that with or without eval()]. Other than that I agree, and every case that springs to mind where eval() is safe can be implemented some other way (at least in modern browsers).
While you're entirely correct that results can differ between browsers, many of the links are to jsPerf, which compensates for test-run variance and shows a comparative graph of all results; some of those graphs demonstrate the opposite of what the linking slides say.
For example, they link to http://jsperf.com/undefined-void0-perf to show that using void 0 instead of undefined is not a performance win. The data shows there's no browser in which void 0 is slower than undefined, and several in which it is faster, by as much as 50-60%.
In my real-world case a change in loop mattered a lot for IE6 users (60% of our user base at the time, sadly). It was half a second that we couldn't get by other means.
You shouldn't simply look at the pretty bar charts to draw conclusions. Results comparing millions of ops/sec against slightly more millions of ops/sec aren't likely to be a performance issue in real-world use.
Eval, with-statements, arguments.callee, the Function constructor, and the like aren't "evil". They are just tools for you to use wisely or abuse.
Repeating what I said on Reddit: the RegExp vs. indexOf test seems a little misleading. Generally, .indexOf will be faster, but they seem to have performed a very specific test that requires the indexOf version to do some string concatenation before actually searching, which IMO invalidates the usefulness of the test.
The test isn't broken within the scope of its use case. You are totally missing the point, which is that RegExps are no longer a guaranteed loss when it comes to performance and that in many situations, including that jsPerf test, they can outperform the indexOf equivalent.
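For context, a hedged reconstruction of the kind of case at issue (illustrative code, not the actual jsPerf test): checking for a whitespace-delimited token, where the indexOf version must concatenate padding before it can search.

```javascript
var classes = "btn btn-primary active";

// indexOf needs the string and the token padded with spaces so that
// a partial token like "act" does not falsely match inside "active".
function hasTokenIndexOf(str, token) {
  return (" " + str + " ").indexOf(" " + token + " ") !== -1;
}

// The RegExp version tests the string directly, with no concatenation.
var activeRe = /(^|\s)active(\s|$)/;
function hasActiveRegExp(str) {
  return activeRe.test(str);
}
// hasTokenIndexOf(classes, "active") and hasActiveRegExp(classes) are
// both true; neither matches the bare substring "act".
```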