How much does immutability/persistence cost in terms of performance and memory? Are there benchmarks against std::vector? E.g., are we talking about a 10x or 100x slowdown?
Performance is one of the goals, so I am benchmarking everything. I run benchmarks on Travis, locally on my machine, and sometimes on a Raspberry Pi 3. You can check some of the reports here: https://public.sinusoid.es/misc/immer/reports/ (this is a bit hard to interpret without further comment).
So, the slowdown depends on the operation.
At the moment, comparing `immer::vector` to `std::vector`, for example, for reading: using internal iteration (e.g. `for_each`) you get about 1.5X, using iterators something like 2.5X, using `[]` sequentially something like 6X, and something like 1.1X in random order.
Updating a single `int` element is about 100X. (This factor decreases with the size of the updated element, though!) But then, for example, `push_back` is only 1.1X std::list's, and it does not rely on amortization, making it better for latency. Considering that you get access times similar to std::vector plus persistence, it is a good deal. Also, with `flex_vector` you get logarithmic slicing and concatenation, making these operations faster than for std::vector (for big-enough vectors). I am also working now on _batch updates_ via transients, so if you update 10 elements in a row, you don't pay 10 times 100X: basically, you only pay the 100X once, which is very much OK for most use cases (think of it as a ~1 microsecond tax per mouse click ;)
There are lots of trade-offs one can make (in general, one can make updates a bit faster by making access a bit slower). It also depends a lot on the memory-management strategy, which can be customized as well.
It all depends on the scenario. Consider this situation: a sending object has an internal collection of items and wants to send it to a receiver. With a mutable collection the sender sends a shallow copy of the collection, because the receiver can't be trusted not to modify the original (which is part of the internal state of the sender).
With an immutable collection the sender can send the original, no copy required. The immutable collection can be shared (lock free) by any number of users.
In this case you have both reduced time and memory use.
So while immutable collections are obviously "slower" for general (mutable) scenarios, the win comes when you need less locking, less copying, and so on.
It is interesting how in one half of computing we are lamenting the issues misplaced trust can cause, while in the other half we are saying you can trust parts of the code not to modify the objects you pass them.
That is, there is nothing that guarantees foreign code doesn't modify your "immutable" data. You can start building checks into the system to make sure it was not modified, but you will eventually get to the point where you are basically locking on your data. Or just sending a copy of just enough of the data for the other end to work with. (Which, if you are at all distributed, can make some sense anyway.)
I get it, in that if you stick to the contract, then immutability leads to this. So, in a very real sense, it helps an individual (or small team) stick to the convention of "initialize, then use" for data structures.
None of this is to say that immutable data cannot be really useful. It can be, awesomely so. I just get a little worried when the benefits are touted as absolute.
To me, the more interesting aspects of immutable structures are the ones that let you run something in an append-only way, or in a way that lets you effectively recreate a local history of an object from the data. (Neither of these things is "new", btw. In a simplistic sense, this is rediscovering that assoc lists can be useful.)
Yeah, thanks for your comment. Indeed, the idea is that by paying a tax on updates, you get a much bigger improvement in overall system performance, and also better scalability.