It's funny how people think Reactive is new: we've been around in different forms since 2007, with a big release back in 2009 (so five years ago).
As the primary author of RxJS, and a contributor to all the other flavors, I would disagree that it's just about slinging around events. It's more than that: we have to deal with failure, load, etc.
Basically, Rx was a huge step back from FRP by getting rid of behaviors and focusing just on events. So yes, you get to do things with those streams, but none of the fancy continuous binding computations that could be done in FRP (and, to a lesser extent, React).
The classic example is C = A + B, where A and B change continuously over time, causing C to change continuously to match their current values. Change propagation then becomes a core part of push-based FRP (pull-based variants just resample), whereas in Rx you have to manage the change streams for A and B manually.
C = A + B in Rx is just a combination of two observables. One way of doing that is with "combineLatest":
A.combineLatest(B, (a, b) => a + b)
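To make the semantics concrete, here is a minimal hand-rolled sketch of what combineLatest does for C = A + B (illustrative only, not the real RxJS implementation; the names `combineLatestSum`, `nextA`, and `nextB` are made up for this example): whenever either source emits, re-emit the sum of the latest values from both.

```javascript
// Illustrative sketch of combineLatest semantics, not the RxJS API.
// C = A + B re-evaluates whenever either A or B pushes a new value.
function combineLatestSum(onNext) {
  let a, b; // latest values seen from each source
  const emit = () => {
    // Only emit once both sources have produced at least one value.
    if (a !== undefined && b !== undefined) onNext(a + b);
  };
  return {
    nextA(v) { a = v; emit(); },
    nextB(v) { b = v; emit(); }
  };
}

const sums = [];
const c = combineLatestSum(v => sums.push(v));
c.nextA(1);        // B has no value yet, nothing emitted
c.nextB(2);        // C = 1 + 2
c.nextA(10);       // C = 10 + 2
console.log(sums); // [3, 12]
```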
Of course, you may want to cache the last emitted value of "a + b", or a default value in case no items have been emitted yet, which would be a hot multicast observable, roughly:
A.combineLatest(B, (a, b) => a + b)
  .multicast(new Rx.BehaviorSubject(defaultValue))
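What multicasting through a BehaviorSubject buys you is that late subscribers immediately receive the last (or default) value. A minimal sketch of that caching behavior, in plain JavaScript rather than the real RxJS API (`behaviorSubject` here is a made-up helper):

```javascript
// Sketch of BehaviorSubject-style caching: remember the latest value
// and replay it synchronously to anyone who subscribes later.
function behaviorSubject(initial) {
  let last = initial;
  const observers = [];
  return {
    next(v)      { last = v; observers.forEach(o => o(v)); },
    subscribe(o) { o(last); observers.push(o); } // replay cached value
  };
}

const subject = behaviorSubject(0);   // 0 is the default value
subject.next(5);                      // emitted before anyone subscribes
const seen = [];
subject.subscribe(v => seen.push(v)); // late subscriber still sees 5
subject.next(7);
console.log(seen); // [5, 7]
```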
And the underlying framework can optimize for this use-case, with shortcuts and whatnot. In Scala, nothing would prevent you from having a sexy macro, akin to Scala-Async.
All that matters is that the underlying abstractions are the Observable (the producer, characterized by its subscribe method), the Observer (the consumer, which is really a single function split in three), and the communication protocol between them. I don't see that as a regression, I see it as the foundation - in the end, no matter what you make of behaviors, it's still a producer/consumer pattern.
Actually, my problem with the original Rx implementation is very different. I'm also working on a redesigned Rx implementation for Scala, with back-pressure baked into the protocol by design [1]. This is because when events pass asynchronous boundaries, you can have streams producing more data than consumers can process, and (compared with other abstractions for processing streams of data, like Iteratees) the consumer doesn't signal demand back to the producer. This issue becomes even more relevant when events are sent over the network.
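The core of that demand-signaling idea can be sketched in a few lines, shown here in plain JavaScript for illustration (the `request(n)` shape is in the spirit of back-pressured stream protocols; the names `producer` and `request` are illustrative, not any particular library's API): the producer never emits more items than the consumer has asked for.

```javascript
// Sketch of consumer-driven demand: the producer only emits when the
// consumer has signaled how many items it can handle.
function producer(onNext) {
  let i = 0;
  return {
    request(n) { // consumer asks for n more items
      for (let k = 0; k < n; k++) onNext(i++);
    }
  };
}

const received = [];
const sub = producer(v => received.push(v));
sub.request(2);        // consumer can only handle 2 right now
console.log(received); // [0, 1]
sub.request(1);        // ready for one more
console.log(received); // [0, 1, 2]
```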
Right. You do it yourself in Rx, including the topological sort needed for glitch-free re-evaluation and whatever other features you need. Is there a meta-protocol to dissect event streams and reason about their sources? How does switching work? If you read an FRP paper, say Leo's Flapjax one (which is still only kind of FRP), you see all the complexity needed to support those features; no wonder Rx didn't bother.
There were definite tradeoffs to make while we were designing Rx, and that was certainly one of them. Since we don't want to have to check whether two streams are actually the same stream, instead of saying:
stream1.combine(stream1.skip(1), ...)
we encourage using zip, since it is the same stream, to avoid glitches and keep our overall memory footprint low:
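The point of zip here is that it pairs the n-th element of each input, so pairing a stream with its own skip(1) yields each (previous, current) pair exactly once, with no intermediate glitch states. A minimal sketch using plain arrays to stand in for streams (illustrative, not the RxJS API):

```javascript
// zip(stream1, stream1.skip(1)) pairs consecutive elements:
// each pair is (previous, current), emitted exactly once.
const stream1 = [1, 2, 3, 4];
const skipped = stream1.slice(1); // stands in for stream1.skip(1)

// zip aligns by index: n-th of skipped with n-th of stream1.
const pairs = skipped.map((curr, i) => [stream1[i], curr]);
console.log(pairs); // [[1,2],[2,3],[3,4]]
```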
Ultimately, these signals do not propagate all the way back up to the publisher, as we're dealing with multicast streams in which we don't want other subscribers to pay a penalty for one slow subscriber.