Hacker News

Just today I was looking into IPFS vs DAT, does anybody have any insights about the similarities/differences other than the ones listed here [1]?

From far away, DAT looks smaller and better documented (perhaps less ambitious, too?) Apparently the best IPFS overview is the 2015 paper [2] which looks pretty daunting and does not seem to cover any practical considerations.

1: https://docs.datproject.org/docs/faq#how-is-dat-different-th...

2: https://github.com/ipfs/papers/blob/master/ipfs-cap2pfs/ipfs...



I consider dat:// to be the better protocol, in part because of what you mentioned. Other advantages are that it doesn't duplicate data on disk (IPFS makes a copy of all the data it shares) and that it keeps a versioned history of all changes. That way app owners can't publish malicious versions while preventing people from using the non-malicious ones.

Essentially, dat:// behaves like BitTorrent but the torrent data can change.

The only downside for both protocols I can think of is that the integration story outside the browser and CLI tools is very poor (there is no FFI/C lib I can bind my Rust app to).


> (there is no FFI/C lib I can bind my Rust app to)

Language choices in both have set off my "this is doomed to obscurity" alarm bells for precisely that reason. You don't write the reference implementation for a new Internet protocol—especially the core library for it, and especially a very complex one—in a language that can't easily be included in most other languages. So, probably C.

Dat in particular seems great but ain't no way I'm relying on a large JS project for anything I don't absolutely have to, on my own time, especially if it deals with my files.


With regards to IPFS (I don't have much experience with Dat), I have a hard time understanding how choosing Go, as well as JavaScript, to implement their protocol dooms it to obscurity. A lot of distributed/decentralized applications and platforms are being written in Go. This is the first time I've heard this argument.

Not to mention you can call into Go funcs over shared libs from C.

Rust is definitely a great language, but not using it is a completely sane design choice. C has been the source of a number of memory-management security issues that type-safe languages solve.

Not to mention, there are various parts of the IPFS/LibP2P stack that are being written in Rust by other teams.


There is a BitTorrent extension for updatable torrents:

https://www.bittorrent.org/beps/bep_0046.html


Yeah, but there isn't any client that supports it. Or, more importantly, any easy-to-use libraries that would let me use it in my projects.


Dat-rs exists but seems to be on pause.


Datrs dev here. I implemented all of the Hypercore feed protocol last year. The next step was to add the networking layer, which was blocked on async IO in Rust. So that's what I've been working on this year (the Runtime project).

Datrs development should be unblocked again soon, starting by moving the fs layer over to async IO. And then tackling the network layer.


Awesome to hear that! Thank you so much for your work!


DAT is run by influential hobbyists of independent means, with occasional funding from non-profit organisations.

IPFS is essentially run by a very well funded (~$300M) private company.

For this reason alone I think DAT is the more likely to succeed. It seems hard to reconcile the longevity of a truly distributed protocol with the need of a private company to retain control.


> IPFS is essentially run by a very well funded (~$300M) private company.

That is news to me - I've thought they were a scrappy startup. The issues mentioned in the post like glitches in the docs are excusable for a project that relies on volunteers who prefer writing code to polishing docs, but if you have $300M in funding then wtf? Just, you know, hire good project management and docs people.


My understanding was that most of the team focus has shifted over to the filecoin project in the last year or two, and they aren't dedicating as many resources to ipfs. That said, I agree that this is pretty deplorable.


Several of the key members have shifted focus but there's still a core IPFS team. But we're still spread quite thin.

We're still paying down years of documentation debt but there has been quite a bit of progress:

* Expanded https://docs.ipfs.io/ with a concepts section.
* Tutorials at https://proto.school/#/
* A ton of work on libp2p specs (https://github.com/libp2p/specs/commits/master), along with a full-time documentation writer.


My bet is both are likely to fail. NIH syndrome is strong in both cases.

For efficient file transfer between peers, the BitTorrent protocol has multiple independent implementations that work right now. They should build on top of that. Instead, both DAT and IPFS try to implement their own protocols with dubious additional features. IPFS even relies on traditional DNS. What are they thinking?


> NIH syndrome is strong in both cases.

Seriously. IPFS decided the standard URL format wasn't good enough so invented something worse, which was pretty funny/sad. I never saw reasons for it that made any sense.

See:

https://github.com/ipfs/ipfs/issues/227

[EDIT] here's the original, deeply "LOLWUT?" justification for it, quoted in this comment on another issue. Other justifications were given but this was the motivation. Oh man. Wow.

https://github.com/ipfs/go-ipfs/issues/1678#issuecomment-139...

[EDIT] farther down, same author as the quoted text in the above comment: "wish unix:// wasn't taken by unix sockets." Oh FFS, guys. Hahaha.


IPFS also needlessly rolled their own TLS replacement:

https://github.com/ipfs/specs/issues/29

> I would add that if TLS can be used without all the X.509 nonsense, and with our own choice of pubkeys, including using different signing public keys (not the DHE keys) in the same conn, then we can consider breaking our "not TLS pls" stance.

TLS fit their requirements all along, they just… decided to reinvent it instead of reading about how to use it.


Their scheme as described in #227 would make sense, if they were designing IPFS as a service for Plan 9.


IPFS doesn't rely on DNS. Most Dat deployments do, however.


This strikes me as a little disingenuous. Neither IPFS nor DAT is inherently dependent on DNS. If you're referring to the HTTP gateways that lots of people use with both IPFS and DAT to host static sites, arguably that's because browser support for UDP and other useful p2p tools is still experimental! But people still find both IPFS and DAT useful for hosting static files. (libdweb is really exciting, by the way: https://github.com/mozilla/libdweb)

From what I can tell, most "serious deployments" of DAT and IPFS are made by people directly using the loose underlying collections of libraries that implement each of them. These people often end up putting together application specific transport and discovery layers that work for their specific application.


Except that, from the article, the only way to really use IPFS for pretty names is DNSLink, not IPNS, because IPNS is unusably slow.

So IPFS might not theoretically rely on DNS, but it seems that it does practically rely on DNS if you actually want to use it.
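Concretely, DNSLink works by publishing a DNS TXT record under the `_dnslink` subdomain; gateways and resolvers look it up and follow the IPFS path it names. A hypothetical zone-file entry (the domain and the CID placeholder are made up):

```
; Resolvers query TXT for _dnslink.<domain> and follow the dnslink= path.
_dnslink.example.com.  300  IN  TXT  "dnslink=/ipfs/<cid-of-site-root>"
```

So every name lookup leans on the DNS hierarchy, which is exactly the practical dependency being described.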


If you want to use it with DNS names, yes.

I have a hunch that IPNS is just broken in its implementation (manifesting as "unusably slow"), but I haven't yet had the spare time to investigate this theory.


afaik several of the Dat folks worked on Bittorrent implementations beforehand so it's possible they were thinking...something?


There are a bunch of differences but the important ones (imho) are:

Dat doesn't use immutable addressing (a Dat address stays the same when its content changes), while IPFS does.

Dat at the lowest layers is stream-oriented, allowing stream-oriented services and applications that are near-real-time. IPFS is static blob/object oriented.

IPFS has a better-developed "discovery" network at present (if you use Dat today you are typically on your own island, whereas with IPFS you're part of "the" IPFS network). This is being worked on, however.


Some decent answers in this stack overflow thread: https://stackoverflow.com/questions/44859200/what-are-the-di... and a good response from IPFS creator jbenet here: https://github.com/ipfs/faq/issues/119#issuecomment-21827839...

I think it's possible to view Dat and IPFS as two different layers of a stack that can interoperate and each solve useful problems at their layer. For example, Dat has more UX focus and high-level abstractions making generic app development smooth and easy (an area IPFS is weaker - though https://medium.com/textileio has been working to make this much better), while IPFS has the benefit of global name-spacing and content-addressing primitives that enable deduplication across identical datasets and validate the content is what you asked for (used by tools like Qri to do dedup within a data commons: https://qri.io/faq/). I've seen demos of projects using both together, each for their unique strengths - but there's still a ways to go to make interop easy.

If you were looking for nice IPFS overviews, I'd recommend:

- https://hackernoon.com/understanding-ipfs-in-depth-1-5-a-beg...
- https://medium.com/textileio/whats-really-happening-when-you...
- https://docs.ipfs.io/introduction/overview/ (see the concept guides for easier-to-parse explainers on CIDs, Pinning, etc.)


Don't forget Swarm. https://swarm.ethereum.org/

So Swarm vs DAT vs IPFS.


If it has a blockchain in it, it immediately crosses the "too heavy" line for me.


Swarm would probably cross that line even without any of the blockchain stuff. It's more akin to Freenet than BitTorrent, and would probably function better as a decentralized backend for most people. Expecting an average person to run a Swarm node is probably unrealistic in anything resembling its current form, but allowing resilient decentralized sites accessible from any of a number of gateways should be doable when/if they actually get around to implementing the insured data storage that is the key component of the entire idea.


You're right about that, but I do wonder if this will be perfect for making distributed darknet marketplaces and torrent trackers a la what.cd.


No. It doesn't depend on blockchain except for the ENS resolution.


There's also Arweave, which is targeted on hosting web sites in a decentralized way.

- https://www.arweave.org/ - https://github.com/ArweaveTeam/arweave


I've got a comment comparing IPFS to Dat up in the Dat thread that's up now: https://news.ycombinator.com/item?id=20162881

tl;dr: I feel that IPFS is the smaller, more tightly scoped project that fits better into the existing web ecosystem. Dat has its own browser and versioning and other stuff bundled in, while IPFS works with normal browsers (https://news.ycombinator.com/item?id=20162972) in a way aligned with the web's graceful degradation principle.



