This repository has been archived by the owner on Jul 2, 2021. It is now read-only.

Snapshot storage on NFS dedicated area #68

Open
ghost opened this issue Mar 20, 2017 · 4 comments

Comments


ghost commented Mar 20, 2017

DAQAggregator has been running almost without interruption since the beginning of this month. Whenever there is a matching L0 static flashlist row, one new snapshot is produced every three seconds, on average. Each snapshot file is slightly less than 300KB.

For cdaq this means that DAQAggregator consumes almost 8GB of space per day. Given that 147GB is currently left, there will be a space shortage in less than three weeks, unless we ask for an extension or delete redundant data (dev or prod-2016).


ghost commented Mar 28, 2017

After some preliminary tests, it turns out that if a smile file is further compressed into a zip, it consumes around ten times less space. In practical terms, the DAQAggregator for cdaq could then write out around a GB per day (and the minidaqs would add another GB or so).
The simplest solution would be to zip every serialized snapshot individually and unzip it before deserialization. By contrast, retroactively zipping/unzipping collections of snapshots (e.g. per day or per hour) would cause disproportionately large delays when a single snapshot is requested, for example in the go-back-in-time use case of DAQView, and it would probably not save more space.
This approach would require adding a zipper and an unzipper unit to the serializer and deserializer, respectively. Before going for it, we should also check whether the realtime applications can afford the latency introduced by the extra steps.
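As a rough sketch of what the zipper/unzipper units could look like, here is an in-memory round trip using Java's built-in `java.util.zip` (the class and method names are hypothetical, not taken from the DAQAggregator code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

/** Hypothetical helper: wraps a serialized smile snapshot in a single-entry zip. */
public class SnapshotZipper {

    /** Compresses the serialized snapshot bytes into a zip archive with one entry. */
    public static byte[] zip(byte[] smileBytes, String entryName) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            zos.putNextEntry(new ZipEntry(entryName));
            zos.write(smileBytes);
            zos.closeEntry();
        }
        return bos.toByteArray();
    }

    /** Restores the original smile bytes from a single-entry zip archive. */
    public static byte[] unzip(byte[] zipBytes) throws IOException {
        try (ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            if (zis.getNextEntry() == null) {
                throw new IOException("empty zip archive");
            }
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = zis.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
            return bos.toByteArray();
        }
    }
}
```

Doing the compression entirely in memory, as above, also avoids the intermediate file I/O, which should keep the added latency small.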


mommsen commented Mar 28, 2017

How about the following scheme: the snapshot is written uncompressed, i.e. you don't add latency for the live view. After some time, e.g. 24 hours or a week, a cron job zips the snapshots. The deserializer would need to look for both kinds of files: if no unzipped snapshot is available, it tries to find the corresponding zipped one and decompresses it. This way you don't introduce latency for newer snapshots, which are the most likely to be requested, while saving space in long-term storage.
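The two-step lookup described above could be sketched as follows, assuming `.smile` and `.zip` suffixes on a shared base name (the class name and layout are illustrative assumptions, not the actual DAQAggregator API):

```java
import java.nio.file.Files;
import java.nio.file.Path;

/** Hypothetical resolver: prefer the uncompressed snapshot, fall back to the zipped one. */
public class SnapshotLocator {

    /** Returns the path to use for a snapshot base name, or null if neither file exists. */
    public static Path resolve(Path dir, String baseName) {
        Path smile = dir.resolve(baseName + ".smile");
        if (Files.exists(smile)) {
            return smile;       // recent snapshot, still uncompressed: no extra latency
        }
        Path zipped = dir.resolve(baseName + ".zip");
        if (Files.exists(zipped)) {
            return zipped;      // older snapshot, already compacted by the cron job
        }
        return null;
    }
}
```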


ghost commented Mar 28, 2017

Yes, this sounds like a nice hybrid scheme, and it keeps compression at the individual-snapshot level, which is the important part for DAQView replays.
Both kinds of files could actually be stored in the existing time-based directory structure, and the deserializer would just need to inspect the filename extension and apply whichever procedure is applicable (deserialize, or unzip then deserialize). The lookup could probably stay unchanged. The only drawback I see is maintaining the extra cron script, but the advantages clearly outweigh it.


ghost commented Apr 3, 2017

If we only implement the simple solution of zipping/unzipping every snapshot individually, there will not be a significant delay.

I have tested the timings on my PC for one hour of data from last Saturday (~1000 files), taken during an ongoing run with around 3/4 of the partitions in and running. Based on this I assume there was a large variety of values within each snapshot, which is usually the case during normal runs, so the task's difficulty was realistic enough.

Overall, the time to read a smile file, zip it, write it, read the zipped file, unzip it and write it again as smile (4 I/Os, 1 compression, 1 decompression) was estimated at less than 20ms. This should be fine for real-time monitoring. A further micro-optimized implementation could possibly save a few more milliseconds by pipelining smile to zip on the fly, without doing all the I/Os that were done during the test.

There was not much deviation in time, because there was not much deviation in snapshot sizes either. Snapshots in .smile were around 369kB, while their .zip counterparts were 57kB (so zipping saves ~85% in space).

The snapshot directories need not be changed at all; they could simply start containing zip files once the implementation goes into production. For backwards compatibility, the deserializer should always check whether a file is actually a zip before applying the unzip function.
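One robust way to do that check, instead of trusting the file extension, is to look at the zip magic number: every zip archive begins with the local-file-header signature `PK\003\004`. A minimal sketch (the class name is an assumption for illustration):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

/** Sketch of the backwards-compatibility check: detect a zip by its magic number. */
public class ZipDetector {

    // Every zip archive starts with the local file header signature "PK\003\004".
    private static final byte[] ZIP_MAGIC = {0x50, 0x4B, 0x03, 0x04};

    public static boolean isZip(Path file) throws IOException {
        byte[] head;
        try (InputStream in = Files.newInputStream(file)) {
            head = in.readNBytes(4);
        }
        if (head.length < 4) {
            return false;                       // too short to be a zip archive
        }
        for (int i = 0; i < 4; i++) {
            if (head[i] != ZIP_MAGIC[i]) {
                return false;                   // plain smile file (or anything else)
            }
        }
        return true;
    }
}
```

Old, uncompressed snapshots would fail the check and go straight to deserialization, so no migration of existing files is needed.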

The utility libraries needed to implement this already come with Java, so no external library is required.
