Reticulum is incredibly versatile and has an entire ecosystem of tools under development. NomadNet is just one of the messengers. There is Sideband, a mobile app client (https://github.com/markqvist/Sideband), and Reticulum MeshChat, a browser-based client developed by Liam Cottle: https://github.com/liamcottle/reticulum-meshchat.
Reticulum can work over anything that has a throughput greater than 5 bits a second (yes, bits) and an MTU of 500 bytes. Not only can it work over hundreds of different carriers (LoRa, BLE, packet radio, overlay networks like Tor and I2P), but each of these carriers can be a part of the same network.
I threw together a quick proof of concept of it working over HF radio. I set up two nodes about 144 km (90 miles) apart. Both were ICOM IC-7300s with a Raspberry Pi 5 driving the software modem that would take packets from Reticulum and send them over the air. https://www.youtube.com/watch?v=blwNVumLujc
Node 1 was out in the field while Node 2 was back at my house. Node 2 had two interfaces set up: one for the HF modem and another connected to the TCP testnet. This meant that Node 1 could access any peer over on the TCP testnet.
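For anyone curious what that two-interface setup looks like in practice, it's just two entries in the Reticulum config file. A rough sketch (the interface names are illustrative, the serial port and speed depend on your modem, and you should check the manual for the current testnet host/port):

```ini
[interfaces]
  # Bridge to the public TCP testnet
  [[TCP Testnet]]
    type = TCPClientInterface
    enabled = yes
    target_host = amsterdam.connect.reticulum.network
    target_port = 4965

  # KISS modem driving the HF rig
  [[HF Modem]]
    type = KISSInterface
    enabled = yes
    port = /dev/ttyUSB0
    speed = 115200
```

With transport enabled on Node 2, traffic from the HF side gets forwarded to peers reachable over the TCP side and back.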
Would you be able to transmit under the noise floor and integrate on the other side? I imagine it would still break the law, but you wouldn't be likely to get caught?
Physics aside, I think the FCC is very, very slowly considering removing the "no encryption" aspect of ham radio. The arguments against encryption seem weak, and revolve around "if we don't know what people are saying, we won't know if they're trying to use it for commercial purposes," which is simply not true. Oh well, here's to hoping the FCC will move into this century!
Under the noise floor at what distance? If you don't want to be a bright beacon at a mile, then you're not going to have a lot of signal at a hundred miles.
In Part 95, Section D, encryption is not mentioned.
But the permitted emission types are limited; digital emissions are not allowed (only AM and SSB are specified).
In theory, you could have a transmission header / trailer in Esperanto that would route the message, and then the message traffic. But that sounds like a hill that wouldn't be worth the climb.
Indeed, only plain language voice communications are allowed.
§ 95.931 Permissible CBRS uses.
The operator of a CBRS station may use that station to transmit two-way plain language voice communications to other CBRS stations and to other stations that are authorized to transmit on CBRS frequencies.
Is there maybe an explanation of how the network works that isn't a video? Is chapter 4 of the manual the best explanation? I admit I'm spoiled by the great explanations provided by academic projects, and I don't know where to look for, say, how it defends against traffic-analysis attacks that statistically deanonymize speakers, or whether that's outside its threat model.
If it were a mechanical device or a graphics rendering algorithm I would think a video would be better, but it's a peer-to-peer networking protocol, and the video just looks like distracting eye candy.
To me that chapter ("Understanding Reticulum") was a very pleasant read a few weeks ago, and inspiring too. I would love to get some HN expert opinions on it, especially about routing, so I posted it separately:
It's meant to just be a quick explanation of some of the very basic concepts. But if you want to understand the network stack in-depth the manual is the best resource: https://reticulum.network/manual/index.html
Is there a more formal description of the algorithm? I'd be interested in understanding not the implementation, but how the algo has been constructed, to understand its strengths and weaknesses.
> Once an announce has reached a node in the network, any other node in direct contact with that node will be able to reach the destination the announce originated from, simply by sending a packet addressed to that destination. Any node with knowledge of the announce will be able to direct the packet towards the destination by looking up the next node with the shortest amount of hops to the destination.
I like the way you’re thinking but it doesn’t necessarily work like that in practice.
Why?
Because of how announce queues work, each interface has its own queue, and announces are limited very specifically to 2% of a channel’s bandwidth.
This means that announces are much more likely to be transferred over the faster medium first, resulting in paths that are on average the most reasonable balance between speed and distance.
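To make the mechanism above concrete, here is a toy sketch (my own illustration, not Reticulum's actual code) of how a transport node might build its path table from incoming announces: the first or shortest-hop announce for a destination wins, so announces that arrive sooner over faster media tend to set the route.

```python
# Toy path table: destination -> (hop count, neighbor the announce came from).
# Packets for a destination are forwarded to the recorded neighbor.
path_table = {}

def process_announce(destination, hops, received_from):
    """Record a path if it is the first seen, or has fewer hops than the known one."""
    known = path_table.get(destination)
    if known is None or hops < known[0]:
        path_table[destination] = (hops, received_from)

def route_packet(destination):
    """Return the next-hop neighbor for a destination, or None if no path is known."""
    entry = path_table.get(destination)
    return entry[1] if entry else None

# An announce arriving first over a fast link claims the table entry;
# a later, longer path for the same destination does not displace it.
process_announce("dest_a", hops=3, received_from="fast_neighbor")
process_announce("dest_a", hops=5, received_from="slow_neighbor")
```

The real protocol layers announce queuing and bandwidth caps on top of this, which is what biases the "first to arrive" race toward fast links.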
If that doesn’t make sense at first, I get it. I find trying to visualize how it works really helps. Reticulum is conceptually so different from anything else out there that it takes a while to understand.
That does make a lot of sense, at least in terms of latency: I imagine that when a one-hop announce comes back from outer space, the faster multi-hop path across town will have already been established.
For bandwidth, however, I don't see it yet. If all relevant nodes are idle at the time an announce comes in (so the 2% limit doesn't come into effect), a low-bandwith route might be established before one with a much higher bandwidth, no? (Prioritising latency over bandwidth can be the right thing to do, of course, depending on what the network is used for. But it might not.)
Sure, in some cases when the network has fewer announces to deal with, the 2% limit has less effect. The best way Reticulum can deal with this is with a network made up of more specialized node types connecting the lower-speed links (http://reticulum.network/manual/interfaces.html#interfaces-m...)
However, when the network (and each node) is receiving enough announces to saturate that 2% chunk of most smaller links, those interfaces prioritize announces with fewer hops, and there is a queue, meaning it takes that interface longer to transport announces coming from further away. This makes it much more likely that nodes will form routes over a faster but longer path.
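A toy model of that per-interface announce queue (again my own illustration, not Reticulum's implementation): only a 2% slice of the channel carries announces, and the queue releases fewer-hop announces first, so on a saturated slow link the distant announces wait.

```python
import heapq

class AnnounceQueue:
    """Per-interface announce queue: 2% of channel bandwidth, fewest hops first."""
    def __init__(self, bitrate_bps, announce_cap=0.02):
        self.budget_bps = bitrate_bps * announce_cap
        self.queue = []  # min-heap ordered by hop count

    def enqueue(self, hops, announce_bits, destination):
        heapq.heappush(self.queue, (hops, announce_bits, destination))

    def drain(self, seconds):
        """Send queued announces until this window's announce budget is spent."""
        budget = self.budget_bps * seconds
        sent = []
        while self.queue and self.queue[0][1] <= budget:
            hops, bits, destination = heapq.heappop(self.queue)
            budget -= bits
            sent.append(destination)
        return sent

q = AnnounceQueue(bitrate_bps=1200)  # e.g. a slow packet-radio link
q.enqueue(hops=7, announce_bits=1200, destination="far_away")
q.enqueue(hops=1, announce_bits=1200, destination="nearby")
# With only 24 bits/s of announce budget, nothing fits in one second;
# given a longer window, the 1-hop announce goes out before the 7-hop one.
```

Meanwhile the same announces cross the fast backbone almost immediately, which is why routes tend to converge on fast-but-long paths.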
> Sure, in some cases, when the network has fewer announces to deal with, the 2% limit has less effect, the best way Reticulum can deal with this is with a network made out of more specialized node types connecting the lower speed links (http://reticulum.network/manual/interfaces.html#interfaces-m...)
I see. So it seems that optimizing a complex network can be a little more hands-on, not completely automatic (which would be a bit too much to ask, thinking about it). I guess my hypothetical path via outer space would have a "boundary" mode node somewhere on the way, although I am still fuzzy on how exactly this would affect things.
And your point about the announce queue with saturated 2% bandwidth is clear.
Thanks so far! This makes me want to read the rest of the manual and possibly start tinkering with Reticulum myself at some point.
How does Reticulum solve the fundamental issue of mesh networks: either you have to have a central controlling authority for addressing, or an adversary can just flood your network?
I can't comment on Reticulum, but I think there are solutions to that problem re: content-addressing rather than node-addressing.
If you replicate a piece of data based on whether you or your (explicitly trusted) peers are interested in it, then the only way to flood the network is to convince all of the users to become interested in it, which is likely difficult enough to discourage misbehavior.
Sigh. The problem is not the content. It's the addressing.
For the content-based addressing to work, you need to flood the network with the reachability information ("how can I get to this block?"), and it's trivially easy to just generate enough of it to overwhelm the routing nodes.
But that's request/response, because you're requesting blocks.
I'm saying that when two nodes pass each other in the street or whatever they think "hey, a peer is in range" so they see if:
- The peer carries any topics they're interested in
- That topic's synchronization algorithm indicates that any actual synchronization is needed in this case
So some content may get shared between the devices, but nobody requested it. The event source was the arrival of the peer. You might also recheck periodically, supposing they stick around for a while.
If you subscribe to topics with dumb synchronization algorithms that just copy everything they see and don't check a web of trust, then yeah you could still get floods of data that saturate whatever resources you've allocated to that topic, but that would quickly become a useless topic, so just unsubscribe and pick smarter topics next time.
In summary, push the routing problem to the application layer.
Different data is going to have different needs. Map tiles need to stay near the parts of the planet that they describe. Information about how to mitigate a harm needs to be near the audience that might be harmed. Manuals need to be in the hands of people who are interested in the objects that they describe. Menu and open hours data needs to be near the restaurants with those menus and hours, or to nodes whose owners like that kind of food...
The goal is to converge on a particular distribution of data across nodes, not to get any one piece of data to any one particular node. No requests, and no handling data that you haven't opted into handling by subscribing to that topic.
I suppose you might conceive of the synchronization process as a series of requests, but they don't propagate anywhere. For each datum you either already have it, you decide to take it, or you decide to leave it.
> But that's request/response, because you're requesting blocks.
It doesn't matter.
> - The peer carries any topics they're interested in
How do you implement it? Do you have a human manually looking over each message and marking the interesting ones? Heck, it's likely that everything is encrypted anyway!
> So some content may get shared between the devices, but nobody requested it. The event source was the arrival of the peer. You might also recheck periodically, supposing they stick around for a while.
So you get 10^12 anonymous peers arriving at your node. How do you filter them?
The node operator subscribes their node to topics, so yeah that part is manual, and that's how the humans express their interest. You might have to update it every now and then if one topic is killing your battery or your hard drive or you just don't care anymore. The data which that topic governs is handled by the synchronization function which runs periodically without supervision.
Whether some or all of the data is encrypted is an application level detail. At the network layer you just call the function provided by the app/topic and it tells you whether to accept or ignore a datum, and which datum to delete if the storage quota for that topic is full. Lots of things don't need encryption (e.g. map tiles). Or if there's some kind of trust structure in place, the node may have keys necessary for doing the decryption--it's not like you're carrying data for apps besides those you're participating in.
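The interface described above can be sketched in a few lines. Everything here (`Topic`, `offer`, `sync`) is a hypothetical illustration of the idea, not any real library's API: the node layer is generic, and each subscribed topic supplies the accept predicate and the eviction choice.

```python
class Topic:
    """A subscription: the app supplies the accept rule and a storage quota."""
    def __init__(self, name, accept, quota):
        self.name = name
        self.accept = accept   # app-supplied: should we keep this datum?
        self.quota = quota     # max number of stored items
        self.store = []

    def offer(self, datum):
        if not self.accept(datum):
            return False
        if len(self.store) >= self.quota:
            self.store.pop(0)  # the app could pick a smarter eviction policy
        self.store.append(datum)
        return True

def sync(local_topics, peer_inventory):
    """When a peer comes into range, pull data only for topics we share."""
    taken = []
    for name, data in peer_inventory.items():
        topic = local_topics.get(name)
        if topic is None:
            continue           # not subscribed: never handle this data
        for datum in data:
            if topic.offer(datum):
                taken.append(datum)
    return taken

maps = Topic("map-tiles", accept=lambda d: d.startswith("tile:"), quota=100)
taken = sync({"map-tiles": maps}, {"map-tiles": ["tile:1", "spam"], "other": ["x"]})
```

Data for unsubscribed topics is never touched, and junk within a topic is bounded by that topic's quota, which is the "limit the blast radius" property.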
As for the peer filtering thing, peer-discovery and synchronization would happen in time slots. So if it happens for 30 seconds each time and this app in particular gets 30% of that, then I guess you spend 10 seconds trying to see if any of those 10^12 peers are worth synchronizing with. Maybe you're checking to see if they're n hops away on your web of trust for that topic or something like that, it would depend on the case. But I don't see how this case has anything to do with mesh networking: all forms of communication are susceptible to DOS of some kind or another. The best we can do is limit the blast radius and maybe raise it to the user so they can handle the adversary out of band.
Not sure about Reticulum, but Yggdrasil uses a hybrid of a spanning tree and a DHT: it guesses routes from the DHT and uses key weight to fail over to the spanning tree.
If you've picked mesh networking, then you care about partition tolerance. But blockchains prioritize consistency. So I think using blockchains on mesh networks puts you in a disadvantaged situation re: the CAP theorem. There's got to be a way which better aligns the application layer with the constraints of the physical layer.
So it stops being a blockchain if the criteria for adding a block is based on something else? Or do you intend to update your definition to incorporate other consensus mechanisms as they emerge?
Seems to me that a more useful definition would abstract out the consensus model such that a blockchain is essentially a merkle-linked-list together with some function for determining which of two candidate next-blocks will be the actual one, but without getting too specific for what that function is... just because there's so much potential for variation there.
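That abstraction is small enough to sketch directly. This is my own minimal illustration of the definition proposed above, not any particular chain's code: a hash-linked list plus a pluggable fork-choice function, where the consensus rule is a parameter rather than baked in.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic hash of a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev, payload, weight):
    return {"prev": block_hash(prev) if prev else None,
            "payload": payload, "weight": weight}

def extend(chain, candidate_a, candidate_b, choose):
    """Append whichever valid candidate the fork-choice function prefers."""
    tip_hash = block_hash(chain[-1])
    valid = [b for b in (candidate_a, candidate_b) if b["prev"] == tip_hash]
    if not valid:
        raise ValueError("no candidate extends the current tip")
    chain.append(max(valid, key=choose))
    return chain

genesis = {"prev": None, "payload": "genesis", "weight": 0}
chain = [genesis]
# "weight" could stand for accumulated work, stake, votes, approvals...
a = make_block(genesis, "tx-set-a", weight=3)
b = make_block(genesis, "tx-set-b", weight=5)
extend(chain, a, b, choose=lambda blk: blk["weight"])
```

Swapping `choose` swaps the consensus model while the merkle-linked structure stays the same.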
> just because there's so much potential for variation there.
There really isn't. Either you expend some resource to make it expensive to attack or you stake some resource so you have something to lose to prove you're not a bad actor. I've never seen anything more creative than this.
Or you participate in some activity which provides a service, like filecoin. Or you get somebody's approval of the block's contents, like in a permissioned blockchain.
I think we'll also see something where you do key-signing parties all at the same time as a way of providing sybil resistance. Or proof of having voted on whatever it is (the outcome of the vote would then go into the block).
Stake and work are just the easy ones to implement because they don't even try to be simultaneously useful.
Personally I'm not very excited about blockchains because I think global consistency is overkill for pretty much everything, and is in many cases harmful. But it's hard to take anyone seriously who classifies a technology as by-definition-useless. Its current forms are weak enough to defeat at face value, no need for propaganda.
As for the future ones... maybe they'll be useful, we'll see.
Hm, thanks for the argument, I guess I'm turning it into semantics; filecoin and "permissioned block chains" sound more like reputation / web of trust systems. So to me it only becomes blockchain when there's some consensus mechanism not based on people making choices (because then we're just back to having a DB with auth...)
It's a little different than a DB with auth. They have to make those choices in the clear, and they have to follow whatever rules are enforced by the protocol while they do it.
Unlike a DB with auth, if the process is trustworthy then the people don't have to be. Can such a thing exist? I dunno, but there's no evidence yet that it can't, and golly it would be cool if it did.
"Permissioned blockchains" are basically isomorphic to git repositories. They certainly are useful, but they aren't "blockchains" in the regular sense.
Well... git repositories with only one branch. But yeah, I'd argue that such things are blockchains in every sense that matters. Or should we go ask the permissioned blockchains to start calling themselves permissioned merkle linked lists?
PoW doesn't have to be useless work. It can be useful work, so that mining actually creates value tied to off-chain economic systems. Then you have something that actually has value, and you can then use PoS to give validators both a correct incentive alignment and a way to get paid for their work. Hybrid value-backed PoW for token creation with PoS for validation creates a really good system for digital assets.
Sadly, extant systems for this are few because generating real value is actually challenging.
It can be in theory, but in practice there are no tasks that fit the definition. All attempts to use something like protein folding ended up in failure.
Digital content platform, for example, is a perfect task to generate outside value. Data storage is another example with many chains in that category thriving. Database is another, gpu compute/render, lots of useful tasks. You are basically saying that computational resources cannot be used to do valuable work, which is a premise that I think is questionable at best.
You don’t see a lot of hype around these projects because they aren’t intended to be speculative- teams in this space try to avoid wild speculation driven value cycles, because it is damaging to their economic models.
> Digital content platform, for example, is a perfect task to generate outside value.
Yep. Storing CSAM against your will is great. Maybe interspersed with pirated movies. To add insult to injury, it's also not reliable enough for archival purposes for important data.
So far, I haven't seen any actually useful proof-of-work schemes that truly benefit the society.
There are lots of other approaches - IOTA DAG, HashGraph, Ripple Consensus Process etc.
I am not a fan of blockchains, though. They are overkill for most uses. But here is an example of a non-blockchain system that doesn’t even require global consensus:
It's not really going to work. Without a centralizing consensus, any such scheme is vulnerable to being drowned in forks.
A malicious notary network can simply flood the ledger with conflicting views. So clients will have to somehow find a set of notaries that is the "best".
Proof-of-stake means that there's effectively a vote on the set of "reliable" agents, and proof-of-work works because the malicious notaries can't outrace everyone else.
Sorry, but you're not exactly an expert on this. There is a huge body of literature that says otherwise, and reference implementations.
You don't NEED "a centralizing consensus". IOTA did have one, called a "governor". And now they also did away with it.
I had a discussion about this exact issue with David Schwartz, Ripple's CTO (back then their chief cryptographer), in 2018, when I was also connecting with Leslie Lamport and others in the industry to discuss why and how global consensus was even needed.
Yes. How do you find a notary that is not malicious?
A malicious subnetwork of notaries can flood it with bogus transactions. To prevent that, you have to make sure that transactions can't happen without a significant expenditure of real-world resources.
That's not true. Notaries shouldn't be able to flood a network at all. Each participant in the network is supposed to stop accepting messages from a malicious participant. It's one strike and you're out.
That's what Proof of Corruption is about, in our technology, for instance.
Every participant has to sign their claims. If a participant signs two contradictory claims, this Proof of Corruption can be gossipped and the participant is excluded.
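The "one strike and you're out" rule above is easy to sketch. This is a toy of my own, not the actual technology referenced: two valid signatures over contradictory claims are themselves the Proof of Corruption, and the signer is excluded. A real system would use asymmetric signatures that anyone can verify; HMAC stands in here purely for illustration.

```python
import hashlib
import hmac

def sign(key, claim_id, value):
    """Stand-in for a real digital signature over a claim."""
    msg = f"{claim_id}:{value}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

class Ledger:
    def __init__(self):
        self.claims = {}    # (signer, claim_id) -> (value, signature)
        self.banned = set()

    def submit(self, signer, key, claim_id, value):
        if signer in self.banned:
            return "rejected"
        sig = sign(key, claim_id, value)
        seen = self.claims.get((signer, claim_id))
        if seen and seen[0] != value:
            # Two signatures over contradictory claims: gossip this pair
            # as a Proof of Corruption and exclude the signer.
            self.banned.add(signer)
            return "proof-of-corruption"
        self.claims[(signer, claim_id)] = (value, sig)
        return "accepted"

ledger = Ledger()
key = b"alice-secret"
ledger.submit("alice", key, "balance@t1", "100")
```

Re-affirming the same claim is fine; signing a contradiction is permanent grounds for exclusion, with the contradictory pair as portable evidence.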
In distributed systems, 99% of the time you have finality, but 1% of the time you may have a conflict due to race conditions or corrupt nodes, etc. Blockchains take the unfortunate brute-force approach of gathering all the conflicts and ambiguities into one or another consistent chain of transactions, and then "duking it out" with a lot of expensive "stake" or "work" or whatever. But it doesn't have to be that way. The end-users are ultimately the ones to either endorse a transaction or not; there is no reason to have the network be the source of truth for the remaining 1%, and there are hugely diminishing returns from all that waste of electricity. So even the double-spend problem can be solved without blockchains.
But even without this, in other decentralized architectures such as the PTN (Permissionless Timestamping Network) I linked you to above (https://intercoin.app/technology.pdf) there is no blockchain, no consensus, just nodes talking to each other and data structures accruing in eventually-consistent ways. And the nodes can just as easily stop listening to you and forwarding your messages.
Similarly in the SAFE network. Even the routing is done in a way that the routing info is deleted after one hop, so you can't DDOS the network the way you can a regular IP / BGP network or even a regular DHT (such as Bittorrent's Mainline DHT). Because the nodes will just refuse to pass on your message. Every node expends only the resources it is prepared to, and nothing more. This idea of "flooding" or "DDOS"ing is more of a legacy idea due to the federated systems we have today, like email and DNS (where the whole world can spam a person's email, and you play cat-and-mouse).
Again, blockchain is a tiny part of this space of decentralized networks. You can have CRDTs syncing, or you can have append-only logs such as Hypercore (now called Holepunch / Pears) or you can have Freenet (the new one, I interviewed the founder a couple years ago when it was still called Locutus: https://www.youtube.com/watch?v=yBtyNIqZios) you can have Secure Scuttlebutt, or Nostr etc. etc. etc.
> The end-users are ultimately the ones to either endorse a transaction or not, there is no reason to have the network be the source of truth for the remaining 1%
> And the nodes can just as easily stop listening to you and forwarding your messages.
You're describing a willingness to incorporate a respect for user consent/participation into the design of your protocol. I think that most people who are enthusiastic about blockchains are not willing to do that.
It's not a technology thing, it's a power thing. They don't want coordination of this kind to be compatible with per-transaction user consent, because if it is, then any system which preserves that consent will be more legitimate than their thing which doesn't, and that's bad for their investors/investments.
Conversely, it's hard to find investors if you're building something that leaves users with enough freedom to insulate themselves against a particular remote influence (such as the investor), which is why the blockchain people have a bit of a head start here.
Blockchain's early applications have been derailed by greed and stupid applications like meme coins and rug-pulls. Instead of leveraging the power of smart contract factories using the Factory Pattern, teams started releasing one-off contracts, and others gambling with them. This led to zero-sum games at best, and negative-sum at worst (for contracts with bugs or rug-pulls). Compare something like UniSwap (which is a factory) to a random contract based on "SafeMoon" or "EverRise". The UniSwap pools all have the same trusted, audited, battle-tested code. It's the kind of stuff I am a fan of.
Going further, many COMPLETELY NON-BLOCKCHAIN enterprises, like FTX and Celsius, ruined the good name of Web3 even further, which is around Smart Contracts. Think of it like the Web2 tech bros and VCs trying to make a buck around Web3. YC-funded companies like OpenSea have been better actors, but many wallets etc. ended up using them and gateways like Infura as the source of truth, almost obviating the need for Web3.
Having said all that... I've been discussing this more on the applications and UX level. Users always have the power to switch away to a fork of something, and migrate away from an ecosystem. But a good ecosystem can "earn" its lock-in by giving them stuff they want. That's what happened with all the centralized Web2 platforms undergoing "enshittification" due to the profit motive.
So yeah, capitalism and greed have derailed a lot of Web3, but in a different way than Web2. It's why I built https://intercoin.org/applications . Look at https://intercoin.org/deck.pdf for what Web3 COULD be like. We call it Web5 to get away from the morass which is Web3.
Now, back in 2018 when I started it, I was planning to build a post-blockchain network, and I still do. It's going to be called Intercloud (a portmanteau of two words, like "blockchain"). But besides blockchain, there are HashGraph, IOTA DAG and others.
So yes, blockchains have a head start mostly due to the profit motive. Same with centralized Web2 social networks. You don't see HN being against those very much, but they are vehemently against Web3. I consider Web2 to have been completely ruined by megalomaniacal tech bros and VCs out for profit, and society at large has been harmed by it. Here is the diagnosis and the solution:
(Over the last decade, I have spent the majority of my own resources, without VC, building an alternative open system. I've been sort of marvelling at how actively some HN users are against it, and often knee-jerk trying to downvote it because it contains words like "decentralization" and it can interoperate with Web3... but I am confident that once it goes mainstream, suddenly some people will look back on all my posts and finally get it).
I'm afraid you're redefining what a blockchain is. A blockchain is a distributed ledger. Distributed ledgers are an application of distributed consensus, which is a truly interesting field within computer science.
You’re being downvoted because you’re making sweeping unsupported assertions, likely based on an ideological opposition to blockchain.
I am guessing it has to do with anger at FTX and other negligent participants in the wider “crypto” ecosystem that has very little to do with blockchains. But that is a “non sequitur” (Latin for “it does not follow”).
If I am wrong in my assumptions and you have an actual argument supporting your assertions about “all blockchains” being crap, please do elaborate on the substance.
TL;DR it only flood routes packets that tell the network the location of a node; each node stores the “next hop” to get to every other node on the network. Everything else is routed along a single path determined by those “next hops.”
Here is a quick primer on Reticulum that explains some of the basic concepts: https://www.youtube.com/watch?v=q8ltLt5SK6A