total data loss :) again #81
Me too, today: all my data was lost after running the command. @rafipiccolo did you find a way to figure out this issue? Is there any way to bind the volume on the remote server so it will persist forever? Could a read-only volume fix the issue? I'm just mounting existing data into the container, so I don't have any newly generated data that needs to be stored on my remote server.
Sadly, no. I'm still considering using another storage driver, but it will take time to study.
Same problem here. I'm not able to provide any logs, sorry.
I would be happy to help, but I don't know how to reproduce this problem.
Thanks, it would be nice to have some insights / solutions :) The source code is only ~300 lines, so from reading it through, with my nonexistent Go knowledge, I guess the destructive part is the remove function. Is this code ever needed? Line 114 in 1e0cd2f
Maybe there is a problem in the Docker codebase, such that it tries to delete every file before deleting the volume?
What's the rationale for calling os.RemoveAll at https://github.com/vieux/docker-volume-sshfs/blob/v1.4/main.go#L128?
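For what it's worth, here's a minimal, self-contained Go sketch of the suspected failure mode (the path and program are illustrative, not the plugin's actual code): if the mountpoint is still an active sshfs mount when the recursive delete runs, every unlink is forwarded through FUSE to the remote host.

// Illustrative sketch of the suspected failure mode; not the plugin's code.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Suppose this directory is still an active sshfs (FUSE) mount.
	mountpoint := "/mnt/volumes/myvol" // illustrative path

	// os.RemoveAll walks the tree and issues unlink/rmdir for every entry.
	// On a live sshfs mount those calls are forwarded to the remote server,
	// so the remote data is deleted, not just the local directory.
	if err := os.RemoveAll(mountpoint); err != nil {
		fmt.Println("remove failed:", err)
	}
}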
I just tested this out and did not lose any data. The setup:
And here's the test (essentially; I had to redact/simplify for obvious reasons):

# Install the plugin with debug mode enabled
$ docker plugin install vieux/sshfs DEBUG=1
# Create a volume with the new driver
$ docker volume create \
-d vieux/sshfs \
-o [email protected]:/ \
-o password='<SNIP>' \
sftptestaccount_volume
# Verify that the volume can be mounted, listed, has the correct structure, etc.
$ docker run --rm -v sftptestaccount_volume:/data testing-image tree -L 2 /data
/data/
├── testdir1
│ └── file1
└── testdir2
└── file2
# Remove the volume. Separately, verify that contents were not deleted (FTP server-side).
$ docker volume rm sftptestaccount_volume
# Re-create the volume, re-run, and verify contents are still visible
$ docker volume create \
-d vieux/sshfs \
-o [email protected]:/ \
-o password='<SNIP>' \
sftptestaccount_volume
$ docker run --rm -v sftptestaccount_volume:/data testing-image tree -L 2 /data
/data/
├── testdir1
│ └── file1
└── testdir2
└── file2
# Finally, delete the test volume for good…
$ docker volume rm sftptestaccount_volume

The debug logs don't show anything worth mentioning. The full call chain from os.RemoveAll is long and has quite a few breakpoints, notably on what is essentially Remove(rootpath). My guess (just a guess at this point) is that if you traced all the calls through fuse-sshfs, you would see the removals being forwarded to the remote server. The question still stands, though: is that call actually necessary? Why not just unmount and not worry about deleting data? (Or, perhaps, make that configurable on the volume.)
@rafipiccolo this implements part of Docker's Volume plugin protocol (docs), so it can't be omitted. It can be modified, however, e.g. in the way I described above (making data deletion optional / configurable per volume); a rough sketch follows.
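For illustration, a minimal sketch of what such an opt-in could look like. Everything here is hypothetical: the delete_on_remove option name, the driver and volume types, and the plain string-keyed Remove signature are stand-ins, not the plugin's real API.

// Hypothetical sketch: make data deletion on "docker volume rm" an explicit,
// per-volume opt-in. Names and types are illustrative.
package main

import (
	"fmt"
	"os"
)

type sshfsVolume struct {
	Mountpoint string
	Options    map[string]string // options given at "docker volume create -o ..."
}

type driver struct {
	volumes map[string]*sshfsVolume
}

// Remove forgets the volume; it only destroys data if the user opted in
// with the (hypothetical) delete_on_remove option at create time.
func (d *driver) Remove(name string) error {
	v, ok := d.volumes[name]
	if !ok {
		return fmt.Errorf("volume %s not found", name)
	}
	if v.Options["delete_on_remove"] == "true" {
		if err := os.RemoveAll(v.Mountpoint); err != nil {
			return err
		}
	}
	delete(d.volumes, name)
	return nil
}

func main() {
	d := &driver{volumes: map[string]*sshfsVolume{
		"example": {Mountpoint: "/tmp/example", Options: map[string]string{}},
	}}
	// Without the opt-in, Remove leaves the data untouched.
	fmt.Println(d.Remove("example"))
}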
Some more discussion and investigation on a similar issue in
FWIW, it looks like Amazon's ECS volume driver unmounts before removing, which seems like the right semantics here and wouldn't do any harm; a sketch of that ordering follows. The volume plugin protocol docs for Remove say to “delete the specified volume from disk”, but I take that to mean cleaning up any resources the volume is using on the host, not unconditionally deleting data on external systems. (I could be wrong.)
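A sketch of that unmount-before-remove ordering (the path is illustrative; this assumes fusermount is available, as it is on typical FUSE setups):

// Sketch: detach the FUSE mount first, then delete only the empty local dir.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	mountpoint := "/mnt/volumes/myvol" // illustrative path

	// "fusermount -u" is the standard way to detach a FUSE mount as non-root.
	if out, err := exec.Command("fusermount", "-u", mountpoint).CombinedOutput(); err != nil {
		log.Fatalf("unmount failed: %v: %s", err, out)
	}

	// The mountpoint is now a plain, empty local directory; removing it
	// cannot reach anything on the remote server.
	if err := os.Remove(mountpoint); err != nil {
		log.Fatalf("rmdir failed: %v", err)
	}
}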
I have the same problem. |
I guess I can't use this module in production.
I can't understand what caused that; it must be unstable in some situations.
This morning:
docker volume rm xxx
caused all data to be deleted on the remote server...
What I did, exactly, was:
I removed another volume later without data loss,
so I'm not sure this command is what made the mess.
But I have no data left.