The cluster has been quite clogged in early 2025, and @Karl-Svard in prodbioinfo notified me about a balsamic cadd_annotate_somaticINDEL_research job that was hardly using any threads or memory on the node (according to ganglia), yet requested 36 cores and booked up the whole node. This rule does not appear to need anywhere near that amount of resources, and the cluster could be freed up a bit if we lowered it.
I then looked at other similar rules and saw a few bcftools commands that were also run on the whole node. This should not be necessary at all, since the VCFs involved are rarely even on the scale of 1 GB.
On top of this, the benchmark files specified in these rules share the same name and will overwrite each other, meaning that we cannot track the benchmarks of these rules.
Suggested approach
Lower the threads to 4 for the majority of the rules, and give the benchmark files unique names. Then test that the cases can pass, and inspect the benchmark files to see whether these jobs were at any point at risk of crashing, to help determine whether 4 is a reasonable thread count.
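A minimal sketch of what such a rule change could look like. This is not the actual rule from the BALSAMIC repo; the input/output paths, wildcard, and shell command are hypothetical placeholders to illustrate the two fixes (lowered `threads` and a rule-specific benchmark path):

```
rule cadd_annotate_somaticINDEL_research:
    input:
        vcf="analysis/{case_name}/vcf/somatic.indel.vcf.gz",  # hypothetical path
    output:
        vcf="analysis/{case_name}/vcf/somatic.indel.cadd.vcf.gz",  # hypothetical path
    benchmark:
        # unique, rule-specific benchmark file so concurrent rules no longer
        # overwrite each other's benchmarks
        "benchmarks/cadd_annotate_somaticINDEL_research_{case_name}.benchmark.tsv"
    threads: 4  # lowered from 36; the job barely used any cores on ganglia
    shell:
        # placeholder command; the point is passing {threads} through instead
        # of implicitly claiming the whole node
        "annotate_indels.sh --cores {threads} -i {input.vcf} -o {output.vcf}"
```

The benchmark TSVs that Snakemake writes (wall time, max RSS, CPU load) can then be compared across cases to judge whether 4 threads leaves enough headroom.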
Deviation
No response
Risk assessment
Needed
Not needed
Risk assessment link
No response
System requirements assessed
Yes, I have reviewed the system requirements
Requirements affected
No response
Can be closed when
No response
Blockers
No response