"file not found" on temporary files with host /tmp usage #83
Comments
Just found this issue: https://gitlab.com/qemu-project/qemu/-/issues/103
In light of this revelation of sadness, here are perhaps a few other options aside from revert:
With release 0.14, vmtest has become unusable for us: with the switch to using the host's /tmp/, handling of temporary files broke due to deficiencies in the 9P file system [0]. Given that vmtest-action@master now uses this very version, our CI is broken. To work around the issue until a permanent fix is found, pin vmtest-action to a usable SHA-1.

[0] danobi/vmtest#83

Signed-off-by: Daniel Müller <[email protected]>
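For context on what actually breaks: the commit messages below describe 9P's missing support for the "open-unlink-fstat idiom". The following is a minimal Rust sketch of that pattern (the path is just a placeholder, not anything vmtest itself uses); on a 9P-backed /tmp it is reportedly the continued use of the handle after the unlink that fails:

```rust
use std::fs::{self, File};
use std::io::{Read, Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    // Create and open a temporary file (a fixed, hypothetical path keeps the sketch simple).
    let path = "/tmp/vmtest-example";
    let mut file = File::options()
        .read(true)
        .write(true)
        .create(true)
        .open(path)?;
    file.write_all(b"scratch data")?;

    // Unlink the file while keeping the descriptor open ...
    fs::remove_file(path)?;

    // ... and keep using it through the open handle. On a local file system
    // this works fine; over 9P this kind of access is what reportedly breaks.
    let meta = file.metadata()?; // effectively an fstat(2) on the open handle
    println!("size after unlink: {} bytes", meta.len());

    let mut contents = String::new();
    file.seek(SeekFrom::Start(0))?;
    file.read_to_string(&mut contents)?;
    println!("read back: {contents}");
    Ok(())
}
```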
Did you give the qemu patches a try? We could also try to switch to virtiofs. I believe that's the successor to this 9pfs use case.
I tried rebasing the Qemu patches, but put it on hold for now, as it's a major effort. Haven't decided if I will spend the time on them, given that it's not even clear it would solve the issue. The original set was based off of 2.6.50 or something like that (>8 years old at this point). Ah, interesting, yeah.
Switch over to using virtiofsd for sharing file system data with the host.

virtiofs is a file system designed for the needs of virtual machines and virtualization environments. That is in contrast to 9P fs, which we currently use for sharing data with the host, but which is first and foremost a network file system. 9P is problematic if for no other reason than that it lacks proper support for the "open-unlink-fstat idiom", in which files are unlinked and later referenced via file descriptor (see danobi#83). virtiofs does not have this problem.

This change replaces usage of 9P with that of virtiofs. In order to work, virtiofs needs a user space server: virtiofsd. Contrary to what is the case for 9P, it is not currently integrated into Qemu itself, so we have to manage it separately (and require the user to install it).

I benchmarked both the current master and this version with a bare-bones custom kernel:

Benchmark 1: target/release/vmtest -k bzImage-9p 'echo test'
  Time (mean ± σ):     1.316 s ±  0.087 s    [User: 0.462 s, System: 1.104 s]
  Range (min … max):   1.232 s …  1.463 s    10 runs

Benchmark 1: target/release/vmtest -k bzImage-virtiofsd 'echo test'
  Time (mean ± σ):     1.244 s ±  0.011 s    [User: 0.307 s, System: 0.358 s]
  Range (min … max):   1.227 s …  1.260 s    10 runs

So it seems there is a ~0.07 s speed-up on average (and significantly less system time being used). This is great, but I suspect a more pronounced speed advantage will be visible when working with large files, where virtiofs is said to significantly outperform 9P (typically >2x from what I understand, but I have not done any benchmarks of that nature).

A few other notes:
- we solely rely on guest-level read-only mounts to enforce read-only state. The virtiofsd recommended way is to use read-only bind mounts [0], but doing so would require root.
- we are not using DAX, because it is still incomplete and apparently requires building Qemu (?) from source. In any event, it should not change anything functionally and would be solely a performance improvement.
- interestingly, there may be the option of just consuming the virtiofsd crate as a library and not requiring any shelling out. That would be *much* nicer, but the current APIs make this somewhat cumbersome. I'd think we'd pretty much have to reimplement their entire main() functionality [1]. I consider this way out of scope for this first version.

I have adjusted the configs, but because I don't have Docker handy I can't really create those kernels. CI seems incapable of producing the artifacts without doing a full-blown release dance. No idea what "empty" is about, really. I suspect the test failures we see are because it lacks support?

Some additional resources worth keeping around:
- https://virtio-fs.gitlab.io/howto-boot.html
- https://virtio-fs.gitlab.io/howto-qemu.html

[0] https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md?ref_type=heads#faq
[1] https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/src/main.rs?ref_type=heads#L1242

Closes: danobi#16
Closes: danobi#83

Signed-off-by: Daniel Müller <[email protected]>
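Since this variant manages virtiofsd as a separate process, here is a rough Rust sketch of the moving parts, pieced together from the virtio-fs howto pages linked in the message above. The socket path, shared directory, tag, and memory size are placeholders, and the exact flags vmtest ends up passing may differ:

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Placeholder values; the real invocation in vmtest may differ.
    let socket = "/tmp/vmtest-virtiofsd.sock";
    let shared_dir = "/path/to/host/dir";

    // Start the user space server that QEMU's vhost-user-fs device talks to.
    // Depending on the environment, additional sandbox/privilege options may be needed.
    let mut virtiofsd = Command::new("virtiofsd")
        .args(["--socket-path", socket, "--shared-dir", shared_dir, "--cache", "auto"])
        .spawn()?;

    // Wire QEMU up to it: a vhost-user chardev plus a vhost-user-fs-pci device.
    // virtiofs requires guest RAM to be backed by a shareable memory object.
    let chardev = format!("socket,id=char0,path={socket}");
    let mut qemu = Command::new("qemu-system-x86_64")
        .args(["-m", "1G"])
        .args(["-object", "memory-backend-memfd,id=mem,size=1G,share=on"])
        .args(["-numa", "node,memdev=mem"])
        .arg("-chardev")
        .arg(&chardev)
        .args(["-device", "vhost-user-fs-pci,chardev=char0,tag=vmtest"])
        // ... kernel, rootfs, serial console, etc. omitted ...
        .spawn()?;

    // Inside the guest the share would then be mounted with:
    //   mount -t virtiofs vmtest /mnt/vmtest
    qemu.wait()?;
    virtiofsd.wait()?;
    Ok(())
}
```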
FWIW, I ran the same tests that were causing trouble in #88 and there were no issues.
Switch over to using virtiofsd for sharing file system data with the host.

virtiofs is a file system designed for the needs of virtual machines and virtualization environments. That is in contrast to 9P fs, which we currently use for sharing data with the host, but which is first and foremost a network file system. 9P is problematic if for no other reason than that it lacks proper support for the "open-unlink-fstat idiom", in which files are unlinked and later referenced via file descriptor (see danobi#83). virtiofs does not have this problem.

This change replaces usage of 9P with that of virtiofs. In order to work, virtiofs needs a user space server. The current state-of-the-art implementation (virtiofsd) is implemented in Rust, and so we interface directly with the library. Most of this code is extracted straight from virtiofsd, as it's a lot of boilerplate. An alternative approach is to install the binary via distribution packages or from crates.io, but availability (and discovery) can be a bit of a challenge. Note that this now means that both libcap-ng as well as libseccomp need to be installed.

I benchmarked both the current master and this version with a bare-bones custom kernel:

Benchmark 1: target/release/vmtest -k bzImage-9p 'echo test'
  Time (mean ± σ):     1.316 s ±  0.087 s    [User: 0.462 s, System: 1.104 s]
  Range (min … max):   1.232 s …  1.463 s    10 runs

Benchmark 1: target/release/vmtest -k bzImage-virtiofsd 'echo test'
  Time (mean ± σ):     1.244 s ±  0.011 s    [User: 0.307 s, System: 0.358 s]
  Range (min … max):   1.227 s …  1.260 s    10 runs

So it seems there is a slight speed-up on average (and significantly less system time being used). This is great, but I suspect a more pronounced speed advantage will be visible when working with large files, where virtiofs is said to significantly outperform 9P (typically >2x from what I understand, but I have not done any benchmarks of that nature).

A few other notes:
- we solely rely on guest-level read-only mounts to enforce read-only state. The virtiofsd recommended way is to use read-only bind mounts [0], but doing so would require root.
- we are not using DAX, because it is still incomplete and apparently requires building Qemu (?) from source. In any event, it should not change anything functionally and would be solely a performance improvement. Given that we are not regressing in terms of performance, this is strictly future work.

I have adjusted the configs, but because I don't have Docker handy I can't really create those kernels. CI seems incapable of producing the artifacts without doing a full-blown release dance. No idea what "empty" is about, really. I suspect the test failures we see are because it lacks support?

Some additional resources worth keeping around:
- https://virtio-fs.gitlab.io/howto-boot.html
- https://virtio-fs.gitlab.io/howto-qemu.html

[0] https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md?ref_type=heads#faq

Closes: danobi#16
Closes: danobi#83

Signed-off-by: Daniel Müller <[email protected]>
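Regarding the guest-level read-only mounts mentioned in the notes above, here is a minimal sketch of what that amounts to inside the guest, assuming the libc crate is available; the tag and mount point are placeholders and this is not the code vmtest actually uses:

```rust
use std::ffi::CString;
use std::io;

/// Mount a virtiofs share read-only inside the guest.
/// `tag` must match the `tag=` given to the vhost-user-fs device on the host side.
fn mount_virtiofs_ro(tag: &str, target: &str) -> io::Result<()> {
    let source = CString::new(tag).unwrap();
    let target_c = CString::new(target).unwrap();
    let fstype = CString::new("virtiofs").unwrap();

    // Equivalent to: mount -t virtiofs -o ro <tag> <target>
    let rc = unsafe {
        libc::mount(
            source.as_ptr(),
            target_c.as_ptr(),
            fstype.as_ptr(),
            libc::MS_RDONLY,
            std::ptr::null(),
        )
    };
    if rc != 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // Placeholder tag and mount point; mounting requires CAP_SYS_ADMIN in the guest.
    mount_virtiofs_ro("vmtest-shared", "/mnt/vmtest")?;
    Ok(())
}
```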
Switch over to using virtiofsd for sharing file system data with the host. virtiofs is a file system designed for the needs of virtual machines and environments. That is in contrast to 9P fs, which we currently use for sharing data with the host, which is first and foremost a network file system. 9P is problematic if for no other reason that it lacks proper support for usage of the "open-unlink-fstat idiom", in which files are unlinked and later referenced via file descriptor (see danobi#83). virtiofs does not have this problem. This change replaces usage of 9P with that of virtiofs. In order to work, virtiofs needs a user space server. The current state-of-the-art implementation (virtiofsd) is implemented in Rust and so we interface directly with the library. Most of this code is extracted straight from virtiofsd, as it's a lot of boilerplate. An alternative approach is to install the binary via distribution packages or from crates.io, but availability (and discovery) can be a bit of a challenge. I benchmarked both the current master as well as this version with a bare-bones custom kernel: Benchmark 1: target/release/vmtest -k bzImage-9p 'echo test' Time (mean ± σ): 1.316 s ± 0.087 s [User: 0.462 s, System: 1.104 s] Range (min … max): 1.232 s … 1.463 s 10 runs Benchmark 1: target/release/vmtest -k bzImage-virtiofsd 'echo test' Time (mean ± σ): 1.244 s ± 0.011 s [User: 0.307 s, System: 0.358 s] Range (min … max): 1.227 s … 1.260 s 10 runs So it seems there is a slight speed up, on average (and significantly less system time being used). This is great, but I suspect a more pronounced speed advantage will be visible when working with large files, in which virtiofs is said to significantly outperform 9P (typically >2x from what I understand, but I have not done any benchmarks of that nature). A few other notes: - we solely rely on guest level read-only mounts to enforce read-only state. The virtiofsd recommended way is to use read-only bind mounts [0], but doing so would require root. - we are not using DAX, because it still is still incomplete and apparently requires building Qemu (?) from source. In any event, it should not change anything functionally and be solely a performance improvement. Given that we are not regressing in terms of performance, this is strictly future work. I have adjusted the configs, but because I don't have Docker handy I can't really create those kernel. CI seems incapable of producing the artifacts without doing a fully-blown release dance. No idea what empty is about, really. I suspect the test failures we see are because it lacks support? Some additional resources worth keeping around: - https://virtio-fs.gitlab.io/howto-boot.html - https://virtio-fs.gitlab.io/howto-qemu.html [0] https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md?ref_type=heads#faq Closes: danobi#16 Closes: danobi#83 Signed-off-by: Daniel Müller <[email protected]>
Switch over to using virtiofsd for sharing file system data with the host. virtiofs is a file system designed for the needs of virtual machines and environments. That is in contrast to 9P fs, which we currently use for sharing data with the host, which is first and foremost a network file system. 9P is problematic if for no other reason that it lacks proper support for usage of the "open-unlink-fstat idiom", in which files are unlinked and later referenced via file descriptor (see danobi#83). virtiofs does not have this problem. This change replaces usage of 9P with that of virtiofs. In order to work, virtiofs needs a user space server. The current state-of-the-art implementation (virtiofsd) is implemented in Rust and so we interface directly with the library. Most of this code is extracted straight from virtiofsd, as it's a lot of boilerplate. An alternative approach is to install the binary via distribution packages or from crates.io, but availability (and discovery) can be a bit of a challenge. I benchmarked both the current master as well as this version with a bare-bones custom kernel: Benchmark 1: target/release/vmtest -k bzImage-9p 'echo test' Time (mean ± σ): 1.316 s ± 0.087 s [User: 0.462 s, System: 1.104 s] Range (min … max): 1.232 s … 1.463 s 10 runs Benchmark 1: target/release/vmtest -k bzImage-virtiofsd 'echo test' Time (mean ± σ): 1.244 s ± 0.011 s [User: 0.307 s, System: 0.358 s] Range (min … max): 1.227 s … 1.260 s 10 runs So it seems there is a slight speed up, on average (and significantly less system time being used). This is great, but I suspect a more pronounced speed advantage will be visible when working with large files, in which virtiofs is said to significantly outperform 9P (typically >2x from what I understand, but I have not done any benchmarks of that nature). A few other notes: - we solely rely on guest level read-only mounts to enforce read-only state. The virtiofsd recommended way is to use read-only bind mounts [0], but doing so would require root. - we are not using DAX, because it still is still incomplete and apparently requires building Qemu (?) from source. In any event, it should not change anything functionally and be solely a performance improvement. Given that we are not regressing in terms of performance, this is strictly future work. I have adjusted the configs, but because I don't have Docker handy I can't really create those kernel. CI seems incapable of producing the artifacts without doing a fully-blown release dance. No idea what empty is about, really. I suspect the test failures we see are because it lacks support? Some additional resources worth keeping around: - https://virtio-fs.gitlab.io/howto-boot.html - https://virtio-fs.gitlab.io/howto-qemu.html [0] https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md?ref_type=heads#faq Closes: danobi#16 Closes: danobi#83 Signed-off-by: Daniel Müller <[email protected]>
Switch over to using virtiofsd for sharing file system data with the host. virtiofs is a file system designed for the needs of virtual machines and environments. That is in contrast to 9P fs, which we currently use for sharing data with the host, which is first and foremost a network file system. 9P is problematic if for no other reason that it lacks proper support for usage of the "open-unlink-fstat idiom", in which files are unlinked and later referenced via file descriptor (see danobi#83). virtiofs does not have this problem. This change replaces usage of 9P with that of virtiofs. In order to work, virtiofs needs a user space server. The current state-of-the-art implementation (virtiofsd) is implemented in Rust and so we interface directly with the library. Most of this code is extracted straight from virtiofsd, as it's a lot of boilerplate. An alternative approach is to install the binary via distribution packages or from crates.io, but availability (and discovery) can be a bit of a challenge. I benchmarked both the current master as well as this version with a bare-bones custom kernel: Benchmark 1: target/release/vmtest -k bzImage-9p 'echo test' Time (mean ± σ): 1.316 s ± 0.087 s [User: 0.462 s, System: 1.104 s] Range (min … max): 1.232 s … 1.463 s 10 runs Benchmark 1: target/release/vmtest -k bzImage-virtiofsd 'echo test' Time (mean ± σ): 1.244 s ± 0.011 s [User: 0.307 s, System: 0.358 s] Range (min … max): 1.227 s … 1.260 s 10 runs So it seems there is a slight speed up, on average (and significantly less system time being used). This is great, but I suspect a more pronounced speed advantage will be visible when working with large files, in which virtiofs is said to significantly outperform 9P (typically >2x from what I understand, but I have not done any benchmarks of that nature). A few other notes: - we solely rely on guest level read-only mounts to enforce read-only state. The virtiofsd recommended way is to use read-only bind mounts [0], but doing so would require root. - we are not using DAX, because it still is still incomplete and apparently requires building Qemu (?) from source. In any event, it should not change anything functionally and be solely a performance improvement. Given that we are not regressing in terms of performance, this is strictly future work. I have adjusted the configs, but because I don't have Docker handy I can't really create those kernel. CI seems incapable of producing the artifacts without doing a fully-blown release dance. No idea what empty is about, really. I suspect the test failures we see are because it lacks support? Some additional resources worth keeping around: - https://virtio-fs.gitlab.io/howto-boot.html - https://virtio-fs.gitlab.io/howto-qemu.html [0] https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md?ref_type=heads#faq Closes: danobi#16 Closes: danobi#83 Signed-off-by: Daniel Müller <[email protected]>
Switch over to using virtiofsd for sharing file system data with the host. virtiofs is a file system designed for the needs of virtual machines and environments. That is in contrast to 9P fs, which we currently use for sharing data with the host, which is first and foremost a network file system. 9P is problematic if for no other reason that it lacks proper support for usage of the "open-unlink-fstat idiom", in which files are unlinked and later referenced via file descriptor (see danobi#83). virtiofs does not have this problem. This change replaces usage of 9P with that of virtiofs. In order to work, virtiofs needs a user space server. The current state-of-the-art implementation (virtiofsd) is implemented in Rust and so we interface directly with the library. Most of this code is extracted straight from virtiofsd, as it's a lot of boilerplate. An alternative approach is to install the binary via distribution packages or from crates.io, but availability (and discovery) can be a bit of a challenge. I benchmarked both the current master as well as this version with a bare-bones custom kernel: Benchmark 1: target/release/vmtest -k bzImage-9p 'echo test' Time (mean ± σ): 1.316 s ± 0.087 s [User: 0.462 s, System: 1.104 s] Range (min … max): 1.232 s … 1.463 s 10 runs Benchmark 1: target/release/vmtest -k bzImage-virtiofsd 'echo test' Time (mean ± σ): 1.244 s ± 0.011 s [User: 0.307 s, System: 0.358 s] Range (min … max): 1.227 s … 1.260 s 10 runs So it seems there is a slight speed up, on average (and significantly less system time being used). This is great, but I suspect a more pronounced speed advantage will be visible when working with large files, in which virtiofs is said to significantly outperform 9P (typically >2x from what I understand, but I have not done any benchmarks of that nature). A few other notes: - we solely rely on guest level read-only mounts to enforce read-only state. The virtiofsd recommended way is to use read-only bind mounts [0], but doing so would require root. - we are not using DAX, because it still is still incomplete and apparently requires building Qemu (?) from source. In any event, it should not change anything functionally and be solely a performance improvement. Given that we are not regressing in terms of performance, this is strictly future work. I have adjusted the configs, but because I don't have Docker handy I can't really create those kernel. CI seems incapable of producing the artifacts without doing a fully-blown release dance. No idea what empty is about, really. I suspect the test failures we see are because it lacks support? Some additional resources worth keeping around: - https://virtio-fs.gitlab.io/howto-boot.html - https://virtio-fs.gitlab.io/howto-qemu.html [0] https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md?ref_type=heads#faq Closes: danobi#16 Closes: danobi#83 Signed-off-by: Daniel Müller <[email protected]>
Switch over to using virtiofsd for sharing file system data with the host. virtiofs is a file system designed for the needs of virtual machines and environments. That is in contrast to 9P fs, which we currently use for sharing data with the host, which is first and foremost a network file system. 9P is problematic if for no other reason that it lacks proper support for usage of the "open-unlink-fstat idiom", in which files are unlinked and later referenced via file descriptor (see danobi#83). virtiofs does not have this problem. This change replaces usage of 9P with that of virtiofs. In order to work, virtiofs needs a user space server. The current state-of-the-art implementation (virtiofsd) is implemented in Rust and so we interface directly with the library. Most of this code is extracted straight from virtiofsd, as it's a lot of boilerplate. An alternative approach is to install the binary via distribution packages or from crates.io, but availability (and discovery) can be a bit of a challenge. I benchmarked both the current master as well as this version with a bare-bones custom kernel: Benchmark 1: target/release/vmtest -k bzImage-9p 'echo test' Time (mean ± σ): 1.316 s ± 0.087 s [User: 0.462 s, System: 1.104 s] Range (min … max): 1.232 s … 1.463 s 10 runs Benchmark 1: target/release/vmtest -k bzImage-virtiofsd 'echo test' Time (mean ± σ): 1.244 s ± 0.011 s [User: 0.307 s, System: 0.358 s] Range (min … max): 1.227 s … 1.260 s 10 runs So it seems there is a slight speed up, on average (and significantly less system time being used). This is great, but I suspect a more pronounced speed advantage will be visible when working with large files, in which virtiofs is said to significantly outperform 9P (typically >2x from what I understand, but I have not done any benchmarks of that nature). A few other notes: - we solely rely on guest level read-only mounts to enforce read-only state. The virtiofsd recommended way is to use read-only bind mounts [0], but doing so would require root. - we are not using DAX, because it still is still incomplete and apparently requires building Qemu (?) from source. In any event, it should not change anything functionally and be solely a performance improvement. Given that we are not regressing in terms of performance, this is strictly future work. Some additional resources worth keeping around: - https://virtio-fs.gitlab.io/howto-boot.html - https://virtio-fs.gitlab.io/howto-qemu.html [0] https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md?ref_type=heads#faq Closes: danobi#16 Closes: danobi#83 Signed-off-by: Daniel Müller <[email protected]>
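To illustrate the failure mode, here is a minimal sketch of the open-unlink-fstat idiom referenced above (the path and program are hypothetical, purely for illustration): a process keeps using a file through its descriptor after unlinking it, which works on most file systems but can fail over 9P.

```rust
use std::fs::{self, File};

fn main() -> std::io::Result<()> {
    // Hypothetical path, used only to demonstrate the idiom.
    let path = "/tmp/vmtest-scratch";

    // Open the file, then unlink it while keeping the descriptor open.
    let file = File::create(path)?;
    fs::remove_file(path)?;

    // The open descriptor should remain usable; `metadata()` performs an
    // fstat(2) on the fd. On most file systems this succeeds, but over a
    // 9P mount it can fail with ENOENT ("file not found").
    let meta = file.metadata()?;
    println!("still accessible, {} bytes", meta.len());
    Ok(())
}
```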
After 0a63dce I am seeing test failures when working with temporary files (everything works if I back the commit out). For example:
when run with something like:
fails with:
I tried that or similar things on three different systems, leading me to believe this isn't exactly specific to my setup. I suspect this is some sort of limitation of 9P fs, but don't know much about it. The error disappears when using a named temporary file.
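For reference, and not as the original reproducer, here is a sketch of the pattern involved: the tempfile crate's `tempfile()` unlinks the file immediately after creation (the idiom 9P struggles with), whereas `NamedTempFile` keeps the path around until it is dropped, which matches the observation that named temporary files work.

```rust
// Sketch only; assumes the tempfile crate (tempfile = "3") is available.
use std::io::{Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    // tempfile() creates and immediately unlinks the file, so every
    // subsequent operation goes through the file descriptor alone --
    // the pattern that breaks when /tmp is backed by 9P.
    let mut anon = tempfile::tempfile()?;
    anon.write_all(b"hello")?;
    anon.seek(SeekFrom::Start(0))?;

    // NamedTempFile keeps a path on disk until it is dropped, which is
    // why switching to it makes the error disappear.
    let named = tempfile::NamedTempFile::new()?;
    println!("named temp file at {}", named.path().display());
    Ok(())
}
```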
Any ideas? Temporary files seem pretty important for testing. I can work around it by setting
TMPDIR=/var/run/
inside the VM, but it's not great. And there are still other issues of a similar but potentially slightly different nature that this does not resolve. So perhaps we should go back to using a dedicated tmpfs
after all? :-(