Production server migration finished temporarily (2025-01-05) #1241
Update:
We will keep trying for the next week or so, but since it is the holiday season, it's possible that no one will shut down any existing VM.
Experimented with a Docker NFS volume (instead of a plain NFS mount). On the data host, add the following:

Save and exit, then run

Modify

With this setting,
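The actual snippets from this experiment were not preserved above, but a Docker NFS volume generally looks like the following. This is a minimal sketch only: the server address, export path, and volume name below are placeholders, not values from this deployment.

```shell
# Sketch, assuming an NFS v4 server already exporting /srv/rodan-data.
# Create a named volume backed by NFS using Docker's built-in "local"
# driver with NFS mount options:
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.0.10,rw,nfsvers=4 \
  --opt device=:/srv/rodan-data \
  rodan_data

# Any container can then mount it like an ordinary named volume:
docker run --rm -v rodan_data:/data alpine ls /data
```

The advantage over a host-level NFS mount is that Docker performs the mount itself when a container using the volume starts, so nothing needs to be added to the host's fstab.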
It's not the first time this has come up. Currently, there are two (easiest) ways of sharing data using NFS. I have tried option (1), and until
This is moved to a dedicated issue: #1243.
(Referenced snippet: line 11 in 245db99.)
Currently we first run this on the NFS host:

and then verify using:

And then we replace the original command section in

We might need to rebuild all the images to remove the line, but we also need to make sure the permission status... Still thinking about the solution.
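The commands run on the NFS host were elided above; a typical sequence for publishing and verifying an NFS export looks like the sketch below. The export path and client subnet are hypothetical placeholders.

```shell
# Sketch, assuming the NFS server package is already installed.
# Publish the shared directory in /etc/exports:
echo "/srv/rodan-data 192.168.0.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports

# Re-export the table without restarting the NFS server:
sudo exportfs -ra

# Verify the export is visible:
showmount -e localhost
```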
I have finished setting up the new production server with three instances.

This new distributed system responds much faster, and all containers are given far more resources, especially celery. We have figured out a way to place things across different instances according to our needs, without many limits now.

The NFS data sharing is now handled completely by Docker Swarm, which simplifies the configuration steps and is more secure, because the data-sharing mount exists only while the Docker stack is deployed and running properly.
Some problems:
- Slow startup of rodan-main, celery, etc., when initializing with a lot of data. We figured out a way to solve this temporarily, but still need to think about a long-term solution (see "slow start up script in rodan-main with a lot of data" #1243).
- Also, I just discovered that if we delete a workflow, the related resources can also be deleted.

It is recommended to delete finished/failed workflow runs once they are no longer needed, as our storage disk is 81% full at this moment! It will be very hard to expand this hardware.
Please send me Slack messages or emails when something is not working as before!
Original post in 2024-12:
... and hopefully we can complete it before the next semester.
Please make sure you download everything important! Ideally, we will have all the data in the new server, but just in case!
Staging might or might not be updated but user data will not be affected regardless.