Dir clean up on delete #371
Closing, this will be part of #90.
I do have a lot of old directories lying around on my Liquid Metal host, and I need to clean these up manually, like this: […]. It would be nice if that were not the case.
At large scale it can be problematic without a locking mechanism, or we can live with that. I think the simplest solution, if we care about the FS, […]. We can't just add all parent directories to the delete, and it will not delete the directory […]. And that leads back to my original proposal to write a background goroutine or […].
👍 thanks for recapping @yitsushi, i knew there was "stuff" we had to consider for this 😁
I think a background routine which periodically locks the filesystem against creates, recursively removes anything empty, then unlocks would be fine for now.
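A minimal sketch of that idea, not flintlock's actual code: a background goroutine that periodically takes a lock (assumed to be shared with the create path), walks the VM state dir seen elsewhere in this thread (/var/lib/flintlock/vm), removes any directory that is now empty, deepest first, then releases the lock. The mutex name, sweep interval, and root path are assumptions for illustration.

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"sort"
	"sync"
	"time"
)

const vmRoot = "/var/lib/flintlock/vm" // assumed layout: <root>/<namespace>/<name>/<uid>

var fsMu sync.Mutex // hypothetical lock, assumed to also be held by the create path

// sweepEmptyDirs removes empty directories under root, deepest first, so that
// a namespace dir emptied by removing its name dirs is also removed.
func sweepEmptyDirs(root string) error {
	var dirs []string
	err := filepath.WalkDir(root, func(path string, d os.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() && path != root {
			dirs = append(dirs, path)
		}
		return nil
	})
	if err != nil {
		return err
	}
	// Longer paths are deeper, so children are visited before their parents.
	sort.Slice(dirs, func(i, j int) bool { return len(dirs[i]) > len(dirs[j]) })
	for _, dir := range dirs {
		entries, err := os.ReadDir(dir)
		if err != nil || len(entries) > 0 {
			continue // not empty (or unreadable): leave it alone
		}
		if err := os.Remove(dir); err != nil {
			log.Printf("could not remove %s: %v", dir, err)
		}
	}
	return nil
}

func main() {
	ticker := time.NewTicker(10 * time.Minute) // sweep interval is arbitrary here
	defer ticker.Stop()
	for range ticker.C {
		fsMu.Lock() // block creates while we sweep
		if err := sweepEmptyDirs(vmRoot); err != nil {
			log.Printf("sweep failed: %v", err)
		}
		fsMu.Unlock()
	}
}
```

Holding the lock for the whole sweep is the simple option discussed above; it trades a brief pause in creates for not having to reason about a create racing with the removal of its parent dir.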
that check is weird tbh. doesn't cost anything to create over an existing one |
I'm creating and deleting clusters over and over again in my demo environment, and I need to clean up from time to time:
$ tree -d 1 /var/lib/flintlock/vm/default/
This issue is stale because it has been open 60 days with no activity.
Still valid. |
This issue was closed because it has been stalled for 365 days with no activity. |
I don't know whether this was intentional, but whereas deletes previously cleared the dir structure back up to the namespace, now we leave the name dir as well.
Close if this is meant to be (it does look weird with dead capmvm names lying around)
Before delete:
After delete:
Ideally we would remove:
- back up to the namespace dir, if all identically named mvms in that namespace are removed, and
- the namespace dir itself, if all mvms in that namespace are removed
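A hedged sketch of that delete-time cleanup, assuming a <root>/<namespace>/<name>/<uid> layout under /var/lib/flintlock/vm: after removing the mvm's own directory, walk back up and remove each parent (the name dir, then the namespace dir) only while it is empty. The function name, example paths, and uid below are hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// removeEmptyParents deletes empty parent directories of dir, stopping at root.
// If another mvm still uses a dir, it is left in place and the walk stops.
func removeEmptyParents(dir, root string) error {
	for dir != root && dir != "/" && dir != "." {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return err
		}
		if len(entries) > 0 {
			return nil // something else still lives here; keep the dir
		}
		if err := os.Remove(dir); err != nil {
			return err
		}
		dir = filepath.Dir(dir)
	}
	return nil
}

func main() {
	root := "/var/lib/flintlock/vm"
	// Hypothetical mvm directory: <root>/<namespace>/<name>/<uid>
	uidDir := filepath.Join(root, "default", "mycluster-control-plane", "01FXYZ")

	// Delete the mvm's own dir, then prune the name and namespace dirs if empty.
	if err := os.RemoveAll(uidDir); err != nil {
		fmt.Println(err)
		return
	}
	if err := removeEmptyParents(filepath.Dir(uidDir), root); err != nil {
		fmt.Println(err)
	}
}
```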