Byte-for-byte de-duping of search results is perfect and fairly cheap. Fuzzy de-duping is more expensive and imperfect. Users get really annoyed when a single query returns several results that look like near copies of the same page.
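To make the cost difference concrete, here's a minimal sketch of both approaches: exact de-duping by hashing the raw bytes, and a simple fuzzy comparison using word shingles and Jaccard similarity. The function names and the choice of shingling are illustrative assumptions, not a description of any particular engine's pipeline.

```python
import hashlib

def exact_dedupe(pages):
    """Keep one page per identical byte sequence (cheap, perfect)."""
    seen = set()
    unique = []
    for url, body in pages:
        digest = hashlib.sha256(body).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append((url, body))
    return unique

def shingles(text, k=4):
    """k-word shingles for fuzzy comparison (more expensive, imperfect)."""
    words = text.split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Similarity in [0, 1]; values near 1.0 suggest near-duplicate pages."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0
```

Exact de-duping is a single hash and set lookup per page; the fuzzy path has to tokenize and compare, and even then a threshold on `jaccard` will misclassify some pairs in both directions.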
Tons of pages are modified in minor ways by JavaScript, and only a very small percentage are modified in ways where JavaScript analysis actually improves search results. So, if JavaScript analysis isn't deterministic, it has a small negative effect on search results for many pages, which offsets the positive effect it has on a small number of pages.
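The failure mode is easy to demonstrate: if a rendering pass injects anything that varies between renders (a timestamp, an ad ID), two renders of the same page hash differently, and byte-for-byte de-duping no longer collapses them. The `render` function below is a hypothetical stand-in for that pass; a counter simulates the nondeterminism.

```python
import hashlib
from itertools import count

_render_id = count()

def render(page_html):
    # Hypothetical stand-in for a JS rendering pass that injects a
    # value differing on every render (e.g., a timestamp or ad ID);
    # a counter simulates that nondeterminism here.
    return page_html.replace(b"{{now}}", str(next(_render_id)).encode())

page = b"<p>Same article, rendered at {{now}}</p>"
a, b = render(page), render(page)
# A user sees "the same page" twice, but exact de-duping sees two
# distinct documents because the bytes differ:
assert hashlib.sha256(a).digest() != hashlib.sha256(b).digest()
```

That's the mechanism behind the trade-off: nondeterministic rendering quietly defeats the cheap, perfect de-duping on pages where JavaScript analysis wasn't needed in the first place.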