As someone who works on a JS engine, I can say that a lot of the speedup in this library comes from its failure to handle holes correctly. A hole check is surprisingly expensive, and although there's still room in most engines to optimise it, those checks remain fairly time consuming :-/
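To make the hole issue concrete, here's a small illustration (not this library's code) of why a spec-compliant builtin must do a per-element hole check, and what a naive reimplementation that skips the check gets wrong:

```javascript
// A "holey" array: index 1 was never assigned, so a spec-compliant
// Array.prototype.map must do a HasProperty (hole) check per element
// and skip the callback for missing indices.
const holey = [1, , 3];          // hole at index 1
const dense = [1, 2, 3];         // packed, no holes

// map skips holes: the callback runs only for indices 0 and 2.
let calls = 0;
holey.map(x => { calls++; return x * 2; });
console.log(calls);              // 2

// A naive loop that ignores holes visits every index, reading
// `undefined` for the hole instead of skipping it.
let naiveCalls = 0;
for (let i = 0; i < holey.length; i++) { naiveCalls++; }
console.log(naiveCalls);         // 3
```

The observable difference (2 vs 3 callback invocations) is exactly what the hole check pays for.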
You can version the inner loops of those builtins, either manually or automatically in the optimizing compiler, depending on the denseness / representation of the array's backing store. Something along these lines: http://gist.io/7050013. A trace compiler could actually give you such versioning for free.
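A minimal user-level sketch of that versioning idea, assuming a hypothetical `sum` builtin: pick a hole-check-free loop when the array is known packed, and fall back to the spec-compliant holey loop otherwise. A real engine would key this on the backing store's internal "elements kind" rather than scanning, as below:

```javascript
// User-level stand-in for the engine's internal representation check.
// An engine tracks packedness as a bit on the backing store instead.
function isPacked(arr) {
  for (let i = 0; i < arr.length; i++) {
    if (!(i in arr)) return false;
  }
  return true;
}

// Dense version: no per-element hole check in the hot loop.
function fastSum(arr) {
  let s = 0;
  for (let i = 0; i < arr.length; i++) s += arr[i];
  return s;
}

// Holey version: spec-compliant, skips missing indices.
function slowSum(arr) {
  let s = 0;
  for (let i = 0; i < arr.length; i++) {
    if (i in arr) s += arr[i];
  }
  return s;
}

// The "versioned" builtin dispatches once, up front.
function sum(arr) {
  return isPacked(arr) ? fastSum(arr) : slowSum(arr);
}

console.log(sum([1, 2, 3]));   // 6, via the dense path
console.log(sum([1, , 3]));    // 4, via the holey path
```

The point is that the hole check is hoisted out of the loop into a single dispatch, which is what the representation-based versioning buys you.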
So I always put this into the not-done-yet category for JS VM related work, and it is a very interesting problem to tackle.
This seems like a potential win for JS performance in real-world applications: an optimization hint indicating whether an array is "overwhelmingly full of holes" or mostly dense, so that more optimized versions of the functions can be used.
But then you need to check whether to update the flag that says the array is full of holes, which is itself an extra cost. It's hard to know what's an overall win when you add cost in one place to save it elsewhere.
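For a sense of where that flag maintenance would have to happen, here are ordinary operations that turn a packed array holey; every such store site is a place the engine would need to check and possibly flip the flag:

```javascript
// Each of these is a point where a "packed" flag would need updating.
const a = [1, 2, 3];       // starts packed
delete a[1];               // now holey: index 1 no longer exists
console.log(1 in a);       // false

const b = [1, 2, 3];
b[10] = 4;                 // storing past the end creates holes 3..9
console.log(5 in b);       // false
console.log(b.length);     // 11
```

Note the transitions are one-way in practice (engines generally never promote a holey array back to packed), which keeps the bookkeeping cheap but means one stray `delete` pessimizes the array forever.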