Good question. You'd need some accurate-enough data source telling you about failed writes. Which eventually comes back around to needing a consistent database and indications of client disconnects.
With a huge amount of data (as I've heard analytics involves), could you take a sampling approach where you log every nth transaction and only check those against the DB?
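A minimal sketch of that sampling idea, assuming you can look up a stored record by transaction id (`db_lookup` and the sample rate here are hypothetical placeholders, not anyone's real API):

```python
SAMPLE_RATE = 1000  # assumed: spot-check 1 in every 1000 writes

def maybe_verify(txn_id, payload, db_lookup):
    """Spot-check a sampled subset of writes against the database.

    db_lookup(txn_id) is assumed to return the stored record, or None
    if the write was lost. Returns None when the write isn't sampled.
    """
    if txn_id % SAMPLE_RATE != 0:
        return None  # not in the sample; skip the expensive check
    stored = db_lookup(txn_id)
    return stored == payload  # True if the write survived intact
```

The catch, as the parent points out, is that the check itself is only as trustworthy as the data source you verify against.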
Sure. Data that people don't care about enough to be worried about losing, for example time series data from an unimportant remote sensor. Should this data be recorded at all? Maybe not, but if it should be, then a best-effort recording may be fine. It may even be all that's possible.
I wouldn’t go as far as to say an “unimportant” remote sensor... but I think you’re correct in spirit.
I could think of an instance where you’d like to log data, but the occasional datapoint being missing wouldn’t be terrible. Maybe something like a temperature monitor — you’d like to have a record of the temperature by the minute, but if a few records dropped out, you’d be able to guess the missing values from context. Something like the data monitoring equivalent of UDP vs TCP.
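To make the temperature example concrete, here's one way the "guess from context" step could look: a linear interpolation over gaps in per-minute readings. This is just an illustrative sketch, not anyone's production code:

```python
def fill_gaps(readings):
    """readings: list of (minute, temp) pairs, sorted, with possible
    gaps in the minute sequence. Returns a list with missing minutes
    filled in by linear interpolation between their neighbours."""
    filled = []
    for (m0, t0), (m1, t1) in zip(readings, readings[1:]):
        filled.append((m0, t0))
        for m in range(m0 + 1, m1):
            # estimate the dropped value from the surrounding records
            t = t0 + (t1 - t0) * (m - m0) / (m1 - m0)
            filled.append((m, t))
    filled.append(readings[-1])
    return filled
```

For slowly changing physical quantities like room temperature, the interpolated values are usually close enough that losing the occasional write costs nothing.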
Even more elementary than the sibling comments: this also happens in gaming all the time. You are recording live results, say in Fifa, but if you unplug your device, your results are gone, since they were in memory only. The game simply cannot afford to write to disk on every event; the write is "non-guaranteed" in the true sense of the word, but it is fast.
You then "checkpoint" when the game is over.
You might object that this is not a "non-guaranteed" write, because the write did in fact occur; I simply want to allude to the concept of a "non-secured" write, in that it vanishes without an fsync.
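The record-in-memory-then-checkpoint pattern can be sketched in a few lines. This is an assumed shape of the idea, not how any actual game engine does it; the names are made up:

```python
import json
import os

live_results = []  # in-memory only: lost if the process dies mid-game

def record(event):
    live_results.append(event)  # fast, no disk I/O on the hot path

def checkpoint(path):
    """At 'game over', persist the results and fsync so they survive a crash."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(live_results, f)
        f.flush()
        os.fsync(f.fileno())  # the "secured" write, as opposed to the in-memory ones
    os.replace(tmp, path)  # atomic rename: readers never see a half-written file
```

Everything between checkpoints is a non-secured write in exactly the sense above: it happened, but nothing promises it will still be there after a power cut.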
People rightfully joked about MySQL back when its default engine wasn't ACID-compliant.
Same for MongoDB. A database that loses data when properly used is a joke.
Yes, there are use cases out there for fast non-guaranteed writes. No, 99% of companies don’t have them.