Based on that analysis, JavaScript "scrambling" is the preferable method: it was 100%(1) effective and has no usability implications.
1: The analysis runs 1.5 years until July 2008 -- one must assume that crawlers have become more sophisticated since then. That is, building the DOM, executing any JavaScript and then searching all visible text isn't that difficult, and it is less so now than in 2007.
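For concreteness, a minimal sketch of the "scrambling" idea (the parts `'eoj'`/`'elpmaxe'` and the address `joe@example.com` are made up for illustration): the address never appears verbatim in the page source, so a scraper reading raw HTML finds nothing, while any client that actually executes the script can rebuild it.

```javascript
// Each part of the address is stored reversed in the markup; the script
// reverses the pieces again and joins them only at runtime.
function descramble(user, host, tld) {
  const unreverse = s => s.split('').reverse().join('');
  return unreverse(user) + '@' + unreverse(host) + '.' + tld;
}

// In a real page you would inject the result into the DOM, e.g.:
//   document.getElementById('contact').textContent =
//       descramble('eoj', 'elpmaxe', 'com');
```

A harvester would have to run the script in a full DOM environment to recover `joe@example.com`, which is exactly the effort the footnote above argues is becoming cheaper.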
It's still far from trivial if you're doing this on millions of pages, especially as you'll have to sandbox the JS in some way, which may or may not subtly break things in other ways. I suspect the effort isn't worth it.
Email harvesters probably don't want to run JavaScript because then they would be open to traps (like infinite loops or other CPU-consuming scripts) that could be targeted at them.
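One hedged sketch of such a trap: rather than a literal infinite loop (which a harvester can kill with a simple timeout), a page script can expose an endless generator of fake addresses, wasting the bot's time and poisoning its list. All names here are illustrative, and `.invalid` is a reserved, never-deliverable TLD.

```javascript
// Yields an unbounded stream of plausible-looking but undeliverable
// addresses. A human visitor never triggers this; a harvester that
// executes scripts and greedily collects addresses never finishes.
function* fakeAddresses() {
  for (let i = 0; ; i++) {
    yield `user${i}@decoy${i % 97}.invalid`;
  }
}
```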
There are ways to trick bots. 4chan used to have a second field named 'email' in their submission form that was hidden, with the value set to "DO NOT PUT ANYTHING HERE" (or something similar), and spammers would blindly fill both email fields (unless someone was specifically targeting 4chan). I'll bet there are plenty of ways to get an email crawler to click on some link that a normal person would not.
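The honeypot trick above can be sketched server-side like this (field names are hypothetical, not 4chan's actual markup): the form has a real field plus a decoy named `email` that CSS hides from humans, so anything typed into the decoy flags the submission as a bot.

```javascript
// Humans never see the hidden decoy field, so they leave it blank.
// Bots that blindly fill every field reveal themselves.
function isLikelySpam(formData) {
  // Any non-empty value in the decoy 'email' field flags the submission.
  return Boolean(formData['email']);
}
```

The check is deliberately dumb: its strength comes from the decoy being invisible to humans, not from the logic itself, which is why it fails against anyone specifically targeting the site.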
Such traps can be placed on decoy pages that users and well-behaved robots are unlikely to visit, or on pages that legitimate users hit only rarely -- say, when clicking a link to send a single legitimate email -- but that email harvesters hit in bulk.
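One way to cash in on such a decoy page (the path and log format are assumptions for illustration): a page like `/mail-trap` is disallowed in robots.txt and linked invisibly, so well-behaved crawlers and humans never request it, and any client that does can be flagged straight from the access log.

```javascript
// Scan an access log and collect the IPs that requested the trap path.
// Only harvesters ignoring robots.txt and following hidden links land there.
function flagHarvesters(accessLog, trapPath) {
  const flagged = new Set();
  for (const { ip, path } of accessLog) {
    if (path === trapPath) flagged.add(ip);
  }
  return [...flagged];
}
```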