To be fair, it feels like the DNS service has been the most reliable part of our Azure infra. Never really had issues with it, whether with traffic or API calls.
More seriously, keeping a local cache of external npm packages, and a local artifact storage for internal npm packages looks like a wise thing to have done long ago. Might be cheaper in the long run.
Ironically, both Nandu and Verdaccio are implemented in TypeScript and install via npm.
(Same logic obviously applies to Python packages, Docker images, etc.)
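For anyone curious what the Verdaccio route looks like, here's a minimal sketch of a config that caches the public registry and keeps an internal scope local (the @internal scope and storage path are made-up examples, not anything from this thread):

```yaml
# config.yaml for Verdaccio (illustrative sketch, not a hardened setup)
storage: ./storage            # cached tarballs and locally published packages live here

uplinks:
  npmjs:
    url: https://registry.npmjs.org/   # upstream registry to proxy and cache

packages:
  '@internal/*':              # made-up internal scope: published locally, never proxied upstream
    access: $all
    publish: $authenticated
  '**':
    access: $all
    proxy: npmjs              # everything else is fetched once from npmjs, then served from cache
```

Projects then point at it by setting registry=http://localhost:4873/ in their .npmrc (4873 being Verdaccio's default port), so an upstream outage only hurts for packages nobody has pulled yet.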
At my former job we had a private registry that mirrored npm's, with an approval gate for packages devs requested, and it always pinned versions.
I took that for granted back then and just assumed it was standard enterprise policy
Multiple previous jobs had this too (a local Packagist is one thing, Artifactory is another) but my current job got rid of theirs. Seemed a little short-sighted given the risks, but I don't make the decisions.
> a local artifact storage for internal npm packages looks like a wise thing to have done long ago
Deno already does this invisibly by default.
All packages are stored in the global cache.
No need to store multiple versions of the same dependencies across projects.
As far as the code in your projects is concerned, there is no such thing as a global cache: just import your dependencies like normal and Deno maps them to the global cache.
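For what it's worth, a tiny sketch of what that looks like from a project's point of view (the packages imported here are arbitrary examples, not anything from this thread):

```ts
// main.ts -- project code never references a cache path.
// Deno resolves these specifiers against its global cache (DENO_DIR),
// so each package version is downloaded and stored once, shared by every project.
import chalk from "npm:chalk@5";                  // an npm package via an npm: specifier
import { assertEquals } from "jsr:@std/assert@1"; // a JSR package

assertEquals(typeof chalk.green("hello"), "string");
console.log(chalk.green("served from the shared global cache"));
```

The first deno run main.ts fetches whatever is missing into the global cache; later runs, and other projects importing the same versions, reuse it.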
Caching npm was easier when you could pull from the CouchDB replication API. Afaik that's gone and now you just have to send a bazillion HTTP requests instead.
Does IPFS support content eviction now? If not, that could go wrong really fast. You get a compromised package out there and then, I think, literally every node needs to unpin it or it remains.
Presumably, however you mark a version as latest would also be how you mark one as compromised. IPFS files are immutable and keyed by hash. But this seems like overengineering.
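To make that concrete, here's a hypothetical sketch (names and hashes invented) of the split: package contents stay immutable and keyed by hash, while "latest" and "compromised" are mutable markers in whatever index layer sits on top:

```ts
// Hypothetical index layer over immutable, hash-addressed package content.
type Cid = string; // an IPFS-style content hash

interface PackageIndexEntry {
  versions: Record<string, Cid>; // "1.3.1" -> immutable content address
  latest: string;                // mutable pointer, updated on publish
  compromised: Set<string>;      // mutable denylist clients consult before installing
}

const leftPad: PackageIndexEntry = {
  versions: { "1.3.0": "bafy-hash-130", "1.3.1": "bafy-hash-131" },
  latest: "1.3.1",
  compromised: new Set(),
};

// Flagging a bad release is an index update, not an eviction from every node:
leftPad.compromised.add("1.3.1");
leftPad.latest = "1.3.0";
```

The content itself can stay pinned forever; clients just stop resolving to it.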
libc is still working just fine, as is the Linux kernel. Mayhaps having 2000 dependencies on 3000 packages from 4000 unvetted sources was a mistake after all?
GitLab is right there, and overall it provides a better product than GitHub, if nothing else on these two points:
* You can actually have an organisational structure (folders/namespaces), and projects can be moved around with automatic redirects. Access controls and variables are also inherited between namespaces.
* GitLab CI is organised in a way that makes supply chain attacks less of a risk. GitHub Actions takes the npm/JS approach, where every step is an action, one you usually need to get from a third party, with shoddy versioning, tons of transitive dependencies, etc. In GitLab CI you can have templates, but you don't have to use an external template for every bit. It's shell scripting on top of containers, so you can have custom container images with your stuff, or custom scripts, or templates that bundle it all (rough sketch below).
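To sketch the pattern (image name, registry URL, and script path are invented for illustration):

```yaml
# .gitlab-ci.yml -- each step is shell in a container image you control,
# rather than a third-party action pulled in per step.
include:
  - local: ci/templates/node.yml                   # optional: a template from your own repo, not a marketplace

build:
  stage: build
  image: registry.example.com/ci/node:20           # hypothetical internal image you maintain
  script:
    - npm ci --registry=https://npm.example.com    # hypothetical internal registry/mirror
    - npm test
    - ./ci/package.sh                              # hypothetical custom script kept in-repo
```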
I’ve personally been deeply unappreciative of GitHub’s changes over the last few years that automatically collapse diffs for “large files” until you click to open them - a threshold that seems to keep shrinking. Maybe like 3 screenfuls of content is the limit now per file. It’s crazy.
Yeah, agreed it's not great for that. I'm not real happy with GitHub's worsening UX either, but it'll at least show the _names_ of all the files in the PR.
With GitLab, when you hit the rate limit, any file "past" that limit doesn't even show that it exists in the MR. It just looks like the MR is missing a bunch of stuff, with no workaround available. :( :( :(
SSO, access tokens, and secrets are all bound at the Organization level - if you work across multiple Organizations you have to log in to each separately... You also cannot have nested Organizations.
I mean more like a full GitHub competitor. GitLab exists, but more competition is generally better for the consumer, and it looks like GitHub's lead is starting to falter with all these incidents.