amdgpu proprietary driver #33
Hey there,

That's exactly right, yeah. To change the behaviour, you're always more than welcome to add the specific lines back into the Dockerfile and then push it to Docker Hub yourself; you could automate that as well. There are several reasons that led me to make this change:
- Originally I was planning on making it possible to selectively install PM versions as well; I have a prototype of this working on my local Gitea instance (for lolMiner too). But it's very dangerous in the event that the PM developer's website or Bitcoin Talk account gets hacked, or they somehow go rogue. The PM developers also went through a period where they changed between three different download hosts in a single week (Mega -> GitHub -> phoenixminer.info), so I can't rely on them to stay consistent.

Just mentioning this in case it's reinstalling the drivers on every launch: if that's happening, there's either something wrong with the current build of this container or something wrong with the environment you're running it in.

To note, I no longer use this container personally; I just maintain it because I wrote it and it seems to have a few users that rely on it.
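For anyone who does want to restore the driver-install lines and publish their own image, a minimal sketch of the build-and-push automation suggested above might look like the following. The image name `youruser/phoenixminer` and the `DOCKER` override are illustrative placeholders, not part of this project; a real run requires Docker and a Docker Hub login.

```shell
#!/bin/sh
# Sketch: rebuild the image (with the driver-install lines restored in the
# Dockerfile) and push it to Docker Hub. IMAGE/TAG are placeholders, and
# DOCKER is overridable so the script can be dry-run without Docker.
set -eu

DOCKER="${DOCKER:-docker}"
IMAGE="${IMAGE:-youruser/phoenixminer}"
TAG="${TAG:-latest}"

build_and_push() {
    # Build from the Dockerfile in the current directory, then push.
    "$DOCKER" build -t "$IMAGE:$TAG" .
    "$DOCKER" push "$IMAGE:$TAG"
}

# Uncomment to run for real (requires Docker and a Docker Hub login):
# build_and_push
```

A cron job or CI pipeline could call `build_and_push` on a schedule to keep the image fresh.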
Wow, lots of great info in your response; I wasn't expecting to get all of that. I was looking into packaging folding@home into a container that supports AMD GPUs and recalled that this project might be an example of how that could be done. It looks like there's already an F@H container that might support my setup via ROCm, but I haven't had a chance to test it yet. I'm glad to hear that the choice to install the proprietary AMD driver was driven mostly by PM being finicky and not by the GPUs having unique requirements themselves. It definitely sounds like PM presents some serious packaging challenges when it comes to ensuring proper support for various AMD GPUs. I do have to wonder whether folding@home is more forgiving in this department.
I looked through the build script and noticed that you were installing an NVIDIA driver inside the container as well. I was a little confused by this, since for containers that use NVIDIA devices you typically don't need the driver installed inside the container itself. What is the purpose of installing the NVIDIA driver package inside the container? I wasn't aware that would actually work to give a container access to an NVIDIA GPU.
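For context, the usual NVIDIA pattern is indeed to leave the kernel driver out of the image entirely: the NVIDIA Container Toolkit bind-mounts the host's driver libraries into the container at run time. A rough sketch (the base-image tag is only illustrative):

```dockerfile
# Illustrative only: an NVIDIA-capable image normally ships CUDA user-space
# libraries but NO kernel driver; the host driver is injected at run time
# by the NVIDIA Container Toolkit.
FROM nvidia/cuda:12.2.0-base-ubuntu22.04

# ... application setup here, with no driver packages installed ...

# Typical invocation, with the host driver passed through:
#   docker run --rm --gpus all <image> nvidia-smi
```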
This is perhaps not actually an issue, but a question about the design choices made when packaging PhoenixMiner in a container.

Why is the AMD proprietary driver installed by the `start.sh` script, as opposed to being installed during the container build process (i.e., in the Dockerfile)? In the past, the Dockerfile installed the driver and it was built into the container image. Very curious what led to the change here. It looks like it was commit b633540 where the Dockerfile stopped installing the AMD proprietary driver.
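One practical consequence of installing at launch time is the risk of repeating the work on every restart. A runtime installer can be made idempotent with a marker file; here is a minimal sketch of how a `start.sh` might do that. The marker path and the `INSTALL_CMD` hook are assumptions for illustration, not this container's actual logic.

```shell
#!/bin/sh
# Sketch of an idempotent runtime driver install, as a start.sh might do it.
# MARKER and INSTALL_CMD are illustrative placeholders.
set -eu

MARKER="${MARKER:-/opt/.amdgpu-installed}"

install_driver_once() {
    if [ -f "$MARKER" ]; then
        # A previous launch already installed the driver; do nothing.
        echo "driver already installed, skipping"
        return 0
    fi
    # Placeholder for the real install step (e.g. AMD's amdgpu-install
    # script); overridable so the sketch can be tested without a GPU.
    ${INSTALL_CMD:-true}
    touch "$MARKER"
    echo "driver installed"
}
```

With this shape, only the first launch pays the install cost; subsequent launches see the marker and skip straight to starting the miner.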