For those thinking about using this, it bears mentioning that there are a few caveats to script type=module that make it difficult to use with a lot of modern workflows, namely:
1. No way to dynamically import a module (the article explains this).
2. All imports must be relative or absolute urls, so either ./foo.js or /foo.js or http://example.com/foo.js
This means you can't npm install lodash and import the string "lodash".
3. All imports must include the file extension (.js). So even if you set up a server route for /lodash, it would break as soon as one of lodash's own imports omitted .js (or imported another package). And that assumes lodash is written with import/export; it isn't, so it wouldn't work at all.
4. Until http2 comes around, this is probably not something you want to ship to production (unless you have a small app with only a handful of scripts being imported), so bundlers are here to stay for a while.
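A rough sketch of the specifier rule behind points 2 and 3, written as a plain function. The function name and examples are mine, not from any spec; this is just to make the rule concrete:

```javascript
// A module specifier is only accepted if it's a relative path,
// an absolute path, or a full URL. Bare names like "lodash" are rejected.
function isValidModuleSpecifier(specifier) {
  return (
    specifier.startsWith('./') ||       // relative to the importing module
    specifier.startsWith('../') ||      // relative, up a directory
    specifier.startsWith('/') ||        // absolute path on the same origin
    /^https?:\/\//.test(specifier)      // full URL
  );
}

isValidModuleSpecifier('./foo.js');                  // true
isValidModuleSpecifier('http://example.com/foo.js'); // true
isValidModuleSpecifier('lodash');                    // false: bare names don't resolve
```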
The article doesn't mention 2 and 3, but just as with 1, the not-yet-implemented module loader spec would also help address them, by providing plugin mechanisms to support name mappings such as automatic file extensions and mapping canonical module names like 'lodash' to file paths.
Just to clarify, mapping names to paths isn't enough to load an NPM-based project. With NPM you can have multiple semver-incompatible versions of the same package, which is actually very common.
For example, you might depend on "foo" and "bar". "foo" also depends on "bar", but it needs version 1 and you need version 2. In this case, setting a single path for "bar" won't work: the path is different for each parent.
The loader spec provides a hook "resolve" that allows you to work around this, just wanted to point out that configuration alone wouldn't be enough.
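To make the per-parent problem concrete, here's a hypothetical sketch of what such a resolve hook could do. The map layout, file paths, and hook signature are all illustrative (the whatwg/loader spec was still in flux), but the point is that resolution depends on who is importing:

```javascript
// Two semver-incompatible copies of "bar" exist on disk; which one a
// bare "bar" specifier resolves to depends on the importing module.
const versionMap = {
  '/js/app.js': {
    bar: '/node_modules/bar@2/index.js'
  },
  '/node_modules/foo/index.js': {
    bar: '/node_modules/foo/node_modules/bar@1/index.js'
  }
};

function resolve(specifier, parentPath) {
  const byParent = versionMap[parentPath];
  const mapped = byParent && byParent[specifier];
  return mapped || specifier; // relative/absolute specifiers fall through
}

resolve('bar', '/js/app.js');                 // the bar@2 copy
resolve('bar', '/node_modules/foo/index.js'); // the bar@1 copy
```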
It's also worth noting that the Loader spec doesn't have a real urgency to it. What I mean is that likely browsers will implement script type=module first, give it some time to brew, and then pull in things from whatwg/loader over time. In the meantime we'll want solutions that do allow us to use script type=module and still have modern workflows we are used to.
Until http/2 comes around? I've been using it for a couple of months on all our client sites and the standard was finalized last year. It's now available in all current, major browsers.
Please have a look at the data before writing such a statement. In CloudFlare's latest article about it [1] (Feb 2016), they concluded that only 52.93% of their users are connecting over HTTP/2.
Like any Internet Explorer on any non-Windows 10 systems?
If 47% of all CloudFlare users are not connecting via HTTP/2 (even though it is supported server side), I believe it means that those users do not have client side support (or are behind a proxy, etc. which doesn't allow it).
In the end, anyone thinking about making a website that would be much slower for non-HTTP/2 users is, right now, talking about around 47% of their users.
I said "all current, major browsers" in my original post. IE is not Microsoft's current browser. In addition, not all web sites are serving via http/2 and that web sites aren't using http/2 is no fault of the browser or the user.
Honestly I'm perfectly happy with Typescript and browserify. At this point I'm not sure I need support for JS in the browser to advance unless it allows my transpiler to produce more optimized code.
The reason IE used to be behind was that at one point Microsoft decided to only publish features which had been standardized, which sounds nice, but the rest of the web wants to be on the bleeding edge with new features.
I don't think anybody really believed that. Maybe some manager. Everyone I knew on the IE team at the time wanted to be on the bleeding edge, but they were stuck catching up on years of no work on IE. As far as one could tell, the "Recommendations only" line was something made up by marketing to justify how far behind they were and what they were implementing as a result. (It's also laughable: for a specification to become a REC at the W3C, post-2001, one practically needs two implementations. In effect it's saying "we're always behind two others!")
I can't remember where I read it, if on HN or elsewhere, but that was what I heard, it sounded plausible enough. I guess it was likely another excuse. At the very least they're aiming towards the bleeding 'Edge'.
It originally comes from the IE Blog in, I think, the IE7 release cycle (my memory of the date could be wrong; it's certainly a while ago!). At the time almost everything there was pretty censored and overly-dressed-up.
When the time between releases is measured in years, you'd better be sure that new features are somewhat stable before you ship, especially when customers take a long time to upgrade because of the support you offer them. We always have to make a judgement about when things are ready enough, but it is much easier now than it was back then. Sometimes we guess wrong, but with faster release cycles it is more possible to fix things, so we don't have to be _quite_ so conservative. We can also ship things behind flags now (which is how we are experimenting with modules), and we can include things in Insider builds so you can try them without waiting for a full release.
To be fair, if I'm not mistaken, those comments dated from a time when most of your efforts were put into fixing ancient bugs and implementing CSS 2.1 and the like (which, uh, was years away from being a REC), rather than anything particularly volatile (well, ignoring things like display: run-in, which was ultimately dropped at CR). From everything I heard and the order that was chosen, it ultimately seemed like the sensible route to catching up: the ordering aimed at getting an increasing baseline working, and spec stability was just one of many metrics used. I was definitely told that the line was mostly trying to spin things positively, though. Certainly there were plenty of things that went unimplemented as "unstable" even though they were all but done, such as the majority of Selectors Level 3, of which 8/9 only supported a small subset. (Heck, were you even on the IE team that far back?)
Perhaps the "bleeding edge" is a touch too extreme for what ultimately was wanted, but certainly something much closer to other browsers than where IE had been before.
So what I still don't understand is how es6 module imports work with requests. Even reading through that blog post, it's not obvious to me. Does the browser make an HTTP request for the file on the server based on the relative location? So if I have type="module", app.js lives at example.com/js/app.js, and it has an import statement for ./foo/bar.js, does the browser request example.com/js/foo/bar.js? I imagine all these requests would add up and block if so. With HTTP2 maybe it would be OK to do it this way, but I see myself creating single JavaScript builds for the foreseeable future, making this sort of module support unlikely to help with anything other than development (which still is nice).
Note that since ES6 imports are statically defined, the browser can find and begin to download dependent files before even parsing the actual JS. They could even start downloading dependent files in parallel before the first file is fully downloaded.
But bundling will still probably be faster and will probably continue to be a mainstay of production web apps, just like minification already is.
>Note that since ES6 imports are statically defined, the browser can find and begin to download dependent files before even parsing the actual JS.
I think you meant executing the actual JS. Finding the imports requires parsing the file, and imports can be anywhere in the file as long as they're at the top level.
It depends. For HTML all browsers do a form of "preparsing" where they do simplistic string searches for things that look like URLs and start speculatively fetching based on that. You could imagine a similar pass for JS modules.
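A naive version of such a pass for JS modules might look like the sketch below: scan the source text for static import specifiers without executing anything. Real engines tokenize properly; this regex is purely illustrative and misses plenty of cases (comments, strings, `export ... from`, etc.):

```javascript
// Collect the specifier strings of static imports found in source text.
function scanImports(source) {
  const re = /import\s+(?:[\w*{}\s,]+\s+from\s+)?['"]([^'"]+)['"]/g;
  const specifiers = [];
  let match;
  while ((match = re.exec(source)) !== null) {
    specifiers.push(match[1]);
  }
  return specifiers;
}

scanImports(`
  import foo from './foo.js';
  import { a, b } from '../lib/ab.js';
  import './side-effect.js';
`);
// ['./foo.js', '../lib/ab.js', './side-effect.js']
```

A browser doing this could kick off fetches for all three files in parallel, before the importing file has even finished downloading.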
Firefox doesn't do simplistic string searches, for what it's worth. It actually tokenizes the HTML and kicks off loads from tokenization.
If someone then misuses document.write the token stream has to be thrown out and things have to be retokenized, but that's rare in practice. The tradeoff is better accuracy of the preloader and not having to go through the text twice unless someone is misusing document.write.
You still have to fetch a module before you know its dependencies. So if A depends on B which depends on C, you must fetch those sequentially. With a deep dependency graph this will be a problem.
I don't see a practical difference between bundling and server push to the developer. You are still using tooling that traces your dependency graph and produces... something. In the case of a bundler it produces a concatenated script and in the case of http push it produces a list of files that should be sent with a particular route. To the developer, why should the latter be preferred over the former?
Three reasons off the top of my head:
- Smaller cacheable objects -- every change you make doesn't invalidate the whole "bundle".
- If there are many permutations of the optimal bundles: different browsers, different pages use different scripts, etc.
- The browser does not need to wait for the "full bundle" to download to execute your app/site -- it can start once the first necessary assets are loaded.
In one project I worked on, we had 3 bundles:
- early load (inside the head tag)
- deferred load secondary (after body tag, for the other site pages)
- deferred load primary (after the body tag, for the 'hot' pages of the site)
With an optimal HTTP/2 setup, we wouldn't need to make these never-fully-optimal bundles.
From a developer's side, at least, HTTP/2 plus caching (with or without server push) is preferable in that the browser handles everything (and knows what it needs), versus bundling, which is a build process of some sort.
From a developer perspective, producing only the dependency graph is still going to be faster and need less overall IO than building a bundle. You can get a sense of it today with jspm dep-cache versus jspm bundle times.
Yes, it makes separate requests, like if you made separate script tags (just using module semantics instead). With HTTP2 it will be ok, but you will still need something that understands your dependency graph so your server knows which modules to include with the request.
I guess service workers might solve a lot of issues here. You could do weird stuff like basically serving a tar file for all assets and use a service worker to handle individual requests.
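A sketch of that idea, with a plain object standing in for the tar file and the Service Worker part left as (untested) comments. The paths and bundle format here are made up for illustration:

```javascript
// One "bundle" shipped to the client; individual module requests
// are answered from it instead of hitting the network.
const bundle = {
  '/js/app.js': "import './foo/bar.js';",
  '/js/foo/bar.js': "export const x = 1;"
};

// Pure lookup so the logic is testable outside a worker.
function lookup(bundle, pathname) {
  return Object.prototype.hasOwnProperty.call(bundle, pathname)
    ? bundle[pathname]
    : null; // null means: fall through to the network
}

// Inside the Service Worker, something along these lines:
// self.addEventListener('fetch', event => {
//   const body = lookup(bundle, new URL(event.request.url).pathname);
//   if (body !== null) {
//     event.respondWith(new Response(body, {
//       headers: { 'Content-Type': 'application/javascript' }
//     }));
//   }
// });
```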
Also, the not yet implemented module loader spec would include a format similar to the AMD bundle format where modules are wrapped in calls to `System.register`.
The Loader spec does include the concept of a module registry and a low-level API for it. From what I can tell skimming the specs, the high-level System.register construct has been in and out of the spec, but it looks like it could be implemented on top of the low-level API regardless of whether it makes it into the final spec.
> On the other hand, I think if you started packing everything into a tar file, that could as well be a js file.
Modules give you a certain degree of namespacing, which is nice. Compiling down to a tar to be served to a Service Worker is undoubtedly easier than the mangling needed to remove the modules.
Short of browser useragent sniffing (yuck...) is there any way to make use of ES6 features in a browser agnostic app? Seems like the only way to work is with a transpiler and series of polyfills.
Depending on the ES6 features and your supported browsers, you don't need anything.
If you want to use all the ES6 features, no single modern browser implements them all (Webkit is still lacking on modules) and so you will need to use a collection of polyfills.
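For syntax features specifically, one alternative to UA sniffing is to feature-detect by attempting to compile a snippet: new Function throws a SyntaxError on engines that can't parse the construct. A minimal sketch (the snippets are just examples):

```javascript
// Returns true if the current engine can parse the given snippet.
// new Function only compiles the code; it never runs it here.
function supportsSyntax(snippet) {
  try {
    new Function(snippet);
    return true;
  } catch (e) {
    return false;
  }
}

supportsSyntax('class A {}');        // true on any ES6-capable engine
supportsSyntax('const [a, b] = x;'); // destructuring support
```

This doesn't help with `<script type=module>` itself, of course, but module detection has its own trick: browsers without module support simply ignore type="module" scripts, so you can pair them with a nomodule-style fallback script.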
Who'd have thought that the first vendor to build in support for ES6 modules was going to be Microsoft?
Of course it will be at least 10 years before people can safely require Edge, and another 5 years before the version of Edge in use is current enough to support modules.
Remember: IE11 will be supported with Windows 8, 8.1 and 10 until at least 2020 and the enterprise version of Windows 10 never gets feature updates and thus never an updated Edge.
Enterprise LTSB (as opposed to plain Enterprise) is the least of your problems. It's a niche Windows release, not meant for end users but for systems like kiosks, ATMs, and medical devices: things that shouldn't break and don't need new operating system features.
I doubt many LTSB browsers browse the internet (beyond perhaps having a website open in kiosk mode disallowing entering other websites), and even if they do, not with IE.
The catch is there's the JS Module syntax, controlled by TC39, which WebKit may or may not support (not sure whether its a part of ES6 or not), and then there's the browser module loading standard, which is controlled by WHATWG.
How does this comment contribute to the discussion in any meaningful way? Microsoft's browser is actually quite advanced when it comes to new JavaScript features. The biggest problem for developers with Edge is that it requires Windows, a slow internet based testing service, or a VM to test. But really if it works in Chrome, Safari and Firefox it generally works in Edge as well.
Well, I suppose it's just a reminder for those who (fortunately) didn't have to "enjoy" the Internet Explorer years a decade ago, where 99% of the internet was designed for a bad, proprietary and buggy browser which got there not for its merits, but for the nastiest market dominance tricks. I'm afraid it would be happening again should we all give Microsoft the chance.
And those who claim that IE got dominance "not for its merits" clearly never used Netscape. IE6 lasted so long that it stagnated, but when they came out, IE5 and IE6 were well ahead of the competition.
I'll tell you how. Posts like this, claiming they are cutting-edge, will enable them to once again take control of browsers, and once they are at the top, they will start playing their monopoly game. And the cycle will start again.
The entire ecosystem has changed. Microsoft really isn't in a position to "embrace and extend" and they probably never will be again.
And they're taking more radical steps towards ongoing transparency and fairness than most other companies I could name, including things like their compiler & runtime licensing, pledges to never sue over API use, and directly supporting their cloud service competitors.
I'm not sure what value your grudge gives to you, but let me offer you a new target for hate: The only people given a free license to lock down the ecosystem from hardware all the way out to setting prices and fees in the software market is Apple, I guess. THEY won't betray us!
> Posts like this claiming they are cutting-edge will enable to once again take control of browsers and once they are at the top, they will start playing their monopoly game.
Microsoft employs smart people. It has become clear that long-term, that approach simply doesn't work. There are legal consequences to being an abusive monopoly, and on top of that we've seen that the software ecosystem routes around the bottleneck pretty rapidly.
> 1. No way to dynamically import a module (the article explains this).
> 2. All imports must be relative or absolute urls, so either ./foo.js or /foo.js or http://example.com/foo.js. This means you can't npm install lodash and import the string "lodash".
> 3. All imports must include the file extension (.js). So even if you set up a server route for /lodash, it would break as soon as one of lodash's own imports omitted .js (or imported another package). And that assumes lodash is written with import/export; it isn't, so it wouldn't work at all.
> 4. Until http2 comes around, this is probably not something you want to ship to production (unless you have a small app with only a handful of scripts being imported), so bundlers are here to stay for a while.
It might be possible to work around 1, 2, and 3 using Service Workers, I wrote about this a while ago: https://matthewphillips.info/posts/loading-app-with-script-m...