Seq Scalability and Capabilities #1294
Hi Mahmoud, thanks for your message.
No, unfortunately; each Seq instance needs exclusive access to its storage volume.
There are terabyte-scale deployments of Seq out there, but they have generally been tuned carefully over time by controlling ingestion, retention policies, signals/indexes, and so on, and doing this successfully can require substantial compute and storage resources. I wouldn't rush to recommend deploying Seq if your load will immediately hit these kinds of numbers, though we anticipate that scale-out will make this easier at some point in the future.

Even a terabyte is a very large amount of log data to be collecting from most systems. Seq helps you avoid collecting huge amounts of noisy data through fine-grained ingestion filters, dynamic ingestion level control, and fine-grained retention policies. If you can optimize your application's logging, and Seq's handling of the data over time, it's often possible to get away with storing a lot less. Worth giving some thought to!
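As a rough illustration of keeping noisy data out of Seq in the first place, here is a minimal Python sketch that drops low-value events client-side before posting the rest to Seq's CLEF ingestion endpoint. The server URL, API key, and level threshold are placeholder assumptions for illustration; the `/api/events/raw` endpoint, `application/vnd.serilog.clef` content type, and `X-Seq-ApiKey` header follow Seq's HTTP ingestion API, but check them against your Seq version.

```python
# Sketch: filter noisy events in the application before they reach Seq, so
# retention and indexing work on a smaller, higher-value data set.
# Assumes a Seq server on http://localhost:5341 and a hypothetical API key.

import json
import urllib.request
from datetime import datetime, timezone

SEQ_URL = "http://localhost:5341/api/events/raw"  # default Seq ingestion port
SEQ_API_KEY = "abc123"                            # hypothetical API key

# Only ship Warning and above; Debug/Information noise is dropped client-side.
SHIPPED_LEVELS = {"Warning", "Error", "Fatal"}


def ship_events(events):
    """POST the kept events to Seq as compact CLEF (newline-delimited JSON)."""
    # CLEF events without an "@l" property default to Information.
    kept = [e for e in events if e.get("@l", "Information") in SHIPPED_LEVELS]
    if not kept:
        return
    body = "\n".join(json.dumps(e) for e in kept).encode("utf-8")
    request = urllib.request.Request(
        SEQ_URL,
        data=body,
        headers={
            "Content-Type": "application/vnd.serilog.clef",
            "X-Seq-ApiKey": SEQ_API_KEY,
        },
    )
    urllib.request.urlopen(request)


# Example: the Debug event is dropped, the Error event is shipped.
ship_events([
    {"@t": datetime.now(timezone.utc).isoformat(), "@l": "Debug",
     "@mt": "Cache refreshed in {Elapsed} ms", "Elapsed": 12},
    {"@t": datetime.now(timezone.utc).isoformat(), "@l": "Error",
     "@mt": "Payment {PaymentId} failed", "PaymentId": "p-42"},
])
```

The same effect can be had server-side with Seq's per-API-key ingestion filters and minimum levels, which also lets you tighten or relax the flow dynamically without redeploying the application.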
Ingestion is rarely the bottleneck; query performance is the bigger concern when large data volumes are being ingested. The specifics depend a lot on the shape of the log data, the size of requests, and so on. We ship a tool called

If you're keen to discuss your scenario in more detail, please feel free to drop us a line via

Hope this helps,