Progressively load large number of keys #471
Redis Commander often takes a long time to load a large hash, and it can run out of memory and be killed when the hash is large enough. I don't think this is a bug, but it makes very little sense for Redis Commander to replicate the entire Redis database in memory.

As an enhancement, could Redis Commander implement progressive loading of large hashes or of a large number of keys? That is, if the hash is large, load only the first N entries, then load more when the front end scrolls to the bottom or the user clicks to load more (or to load all); a sketch of this approach follows below. This would allow redis-commander to be configured with a reasonable memory limit.
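A hedged sketch of what such progressive loading could look like on the server side, using cursor-based iteration with HSCAN. It assumes the ioredis client; the function name `loadHashPage` and the page size of 100 are illustrative, not anything redis-commander actually exposes.

```typescript
// Sketch only: cursor-based paging over a large hash via HSCAN,
// assuming the ioredis client. Not redis-commander's actual code.
import Redis from "ioredis";

const redis = new Redis();

// HSCAN returns [nextCursor, flatFieldValueArray]. COUNT is a hint to
// the server, so a page may contain more or fewer than `count` entries.
async function loadHashPage(key: string, cursor = "0", count = 100) {
  const [nextCursor, flat] = await redis.hscan(key, cursor, "COUNT", count);
  const entries: [string, string][] = [];
  for (let i = 0; i < flat.length; i += 2) {
    entries.push([flat[i], flat[i + 1]]);
  }
  // A returned cursor of "0" means the whole hash has been iterated.
  return { entries, nextCursor, done: nextCursor === "0" };
}
```

The front end would hold on to `nextCursor` between requests and call this again when the user scrolls to the bottom or clicks "load more", so memory stays bounded at roughly one page regardless of hash size.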
You are right - such a feature would be helpful.
Not sure, but I might have a similar situation with a series of SET/SADD commands. I exported a branch (200 items, some with large JSON objects to be inserted). It's been running for ~10 minutes so far, which seems excessive given that a single SET or SADD happens almost instantaneously. Oh... I see "413 Payload too large" in the console. I'll see if I can cut this down and do some more testing.
@JESii HTTP 413 is a different error - you need to set the corresponding config key.
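For context on the 413 (the exact redis-commander config key isn't named above, so this stays generic): a minimal sketch of how a JSON body-size limit typically works in an Express-based app, where the "10mb" value is purely illustrative.

```typescript
// Sketch only: how an Express-style JSON body limit produces HTTP 413.
// The "10mb" value is illustrative; express.json() defaults to "100kb".
import express from "express";

const app = express();

// Requests whose body exceeds the limit are rejected with
// 413 "Payload Too Large" before reaching any route handler.
app.use(express.json({ limit: "10mb" }));

app.listen(8081);
```

In redis-commander's case the limit lives behind a config key rather than inline code, but the effect is the same.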
Thanks… I’ll give that a shot!