Global Ping Statistics - https://wondernetwork.com/pings
We have ~240 servers worldwide; every hour we get them all to ping each other and record the results.
We've been generating them for years; they're a pain to store, and we've made $0 with them. But I really like the data we're getting. We recently moved a lot of the legacy data into S3 to take the load off our own backup & restore process ( https://wonderproxy.com/blog/moving-ping-data-to-s3/ )
Well, we went with the datastore we knew: MySQL. On the upside, we've got full granularity forever. On the downside, we were backing up the full dataset every night, and the sheer volume of data was slowing pages down (even on indexed queries).
Now that we've moved the data older than two weeks over to S3 and query it with Athena, our site is faster and we're not treating our backup infrastructure quite as poorly.
The biggest ping time I see is just under 4 seconds. In milliseconds, that translates into a 7-digit fixed-point value if you treat the first 4 digits as the integer part and the last 3 as the fraction. The caveat is that you must store "42.32" as "0042.320"; someone more advanced may be able to suggest a better system. The maximum 22-bit value is 4194303 (2^22 - 1), which is a tad small. 23 bits gets you to 8388607 - and I suspect you'd consider an 8388 millisecond ping time a bug. :D
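A minimal Go sketch of that fixed-point idea, in case it helps; the names and the rounding choice are mine, not anything WonderNetwork actually runs:

    package main

    import "fmt"

    // 23-bit fixed-point ping time: milliseconds * 1000, i.e. 4 integer
    // digits and 3 fractional digits ("42.32" becomes 42320).
    const maxPing23 = 1<<23 - 1 // 8388607, ~8388.607 ms

    // encodePing packs a millisecond ping time into a 23-bit value.
    func encodePing(ms float64) (uint32, error) {
        v := uint32(ms*1000 + 0.5) // round to the nearest microsecond
        if v > maxPing23 {
            return 0, fmt.Errorf("ping %.3fms overflows 23 bits", ms)
        }
        return v, nil
    }

    // decodePing reverses the encoding.
    func decodePing(v uint32) float64 { return float64(v) / 1000 }

    func main() {
        v, _ := encodePing(42.32)
        fmt.Println(v, decodePing(v)) // 42320 42.32
    }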
64-bit time is a fad just because it's easier to do multiples of 8 than to bitpack. However, if you use 33-bit time, you can count up to 2^33 = 8589934592 seconds, which gets you to the year 2242.
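If you want to sanity-check that rollover, a throwaway snippet (assuming an unsigned 33-bit seconds-since-epoch counter):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // An unsigned 33-bit seconds counter wraps at 2^33 = 8589934592.
        fmt.Println(time.Unix(1<<33, 0).UTC()) // 2242-03-16 12:56:32 +0000 UTC
    }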
I see you have 250 servers. Using a single byte will only get you up to 255. Ouch. But using two bytes gives you space for 65536 servers you'll never use. Wat do?
Well, if you're okay with calculating the avg and mdev in realtime, that's (23*2)+33 bits (min+max+date), which works out to 79 bits. So you could prefix _9_ bits for the server ID, which gives you 512 servers.
So that's 9+23+23+33 = 88 bits per record - exactly 11 bytes.
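Here's roughly what packing and unpacking that 88-bit record could look like in Go; the struct and the MSB-first bit order are choices I'm making up for illustration, and the bit-by-bit loop favors clarity over speed:

    package main

    import "fmt"

    // Layout: 9-bit server ID | 23-bit min | 23-bit max | 33-bit timestamp
    // = 88 bits, i.e. exactly 11 bytes. min/max are the fixed-point
    // millisecond values from the earlier snippet.
    type record struct {
        serverID uint16 // 9 bits, 0..511
        min, max uint32 // 23 bits each
        unixSec  uint64 // 33 bits, seconds since the epoch
    }

    // pack writes the four fields MSB-first into 11 bytes.
    func pack(r record) [11]byte {
        var out [11]byte
        pos := 0 // bit cursor
        put := func(v uint64, width int) {
            for i := width - 1; i >= 0; i-- {
                if v>>uint(i)&1 == 1 {
                    out[pos/8] |= 1 << uint(7-pos%8)
                }
                pos++
            }
        }
        put(uint64(r.serverID), 9)
        put(uint64(r.min), 23)
        put(uint64(r.max), 23)
        put(r.unixSec, 33)
        return out
    }

    // unpack reverses pack.
    func unpack(b [11]byte) record {
        pos := 0
        get := func(width int) uint64 {
            var v uint64
            for i := 0; i < width; i++ {
                v = v<<1 | uint64(b[pos/8]>>uint(7-pos%8)&1)
                pos++
            }
            return v
        }
        return record{
            serverID: uint16(get(9)),
            min:      uint32(get(23)),
            max:      uint32(get(23)),
            unixSec:  get(33),
        }
    }

    func main() {
        r := record{serverID: 250, min: 42320, max: 97001, unixSec: 1500000000}
        fmt.Println(unpack(pack(r)) == r) // true
    }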
At 88 bits per record, one hourly record per server for a year is 250 * 8760 * 88 = 192720000 bits, or about 23MB per year.
This is not a particularly fancy approach, and is likely inefficient in many ways. But it's definitely doable, both for long-term (full-resolution/granularity) archival and realtime querying. You could write a superfast server in Go that accepted simple queries and handled the on-disk format. You could expose the Go server over the Web directly (Go is pretty concurrent, but each goroutine needs ~8K of stack, which adds up if you have e.g. tens of thousands of connections...) or speak a simple/low-level protocol to it from your existing Web framework.
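For flavor, a bare-bones version of that server; pings.bin, the /pings?server=N endpoint, and the scan-the-whole-file query are placeholders I'm inventing (a real one would want an index), and the bit offsets match the 9/23/23/33 layout above:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "strconv"
    )

    // field pulls a big-endian bit field of the given width out of an
    // 11-byte record, starting at bit offset off.
    func field(b [11]byte, off, width int) uint64 {
        var v uint64
        for i := off; i < off+width; i++ {
            v = v<<1 | uint64(b[i/8]>>uint(7-i%8)&1)
        }
        return v
    }

    func main() {
        http.HandleFunc("/pings", func(w http.ResponseWriter, req *http.Request) {
            id, err := strconv.ParseUint(req.URL.Query().Get("server"), 10, 16)
            if err != nil {
                http.Error(w, "bad server id", http.StatusBadRequest)
                return
            }
            f, err := os.Open("pings.bin")
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            defer f.Close()
            var rec [11]byte
            for {
                if _, err := io.ReadFull(f, rec[:]); err != nil {
                    return // EOF (or a truncated trailing record)
                }
                if field(rec, 0, 9) != id { // server ID: bits 0-8
                    continue
                }
                fmt.Fprintf(w, "t=%d min=%.3fms max=%.3fms\n",
                    field(rec, 55, 33),               // timestamp: bits 55-87
                    float64(field(rec, 9, 23))/1000,  // min: bits 9-31
                    float64(field(rec, 32, 23))/1000) // max: bits 32-54
            }
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }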