Replies: 9 comments
-
Maybe related to #764? That turned out to be a Julia bug, which I reported in JuliaLang/julia#56759 and which is now fixed. Now we just wait for 1.11.3, or you can force PySR to use Julia 1.10 (see the PySR issue for how).
-
Thank you very much for the quick answer! I tried using Julia 1.10(.7), but unfortunately it did not solve the issue.
-
What is your
-
Thanks for your answer and your support! I really cannot find the cause, and the whole issue seems quite weird to me. It also occurs with PySR version 0.19.4, which is what I tried today. The increase in run time keeps growing with every additional run and does not converge, as far as I tried. I only came across this issue now because I was not running many iterations before, and the runs were not as extensive (e.g. smaller populations), so the effect was far smaller. My loss function is defined as follows. It depends on the particular input data, and the options at the beginning are there to identify the different y-variables, which all need differently shaped loss functions:
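The selection at the top of my loss definition is, schematically, something like the following sketch. The y-variable names and the Julia loss strings here are simplified placeholders, not my actual definitions (PySR accepts an elementwise loss as a Julia code string):

```python
# Sketch: pick a differently shaped elementwise loss (as a Julia code string,
# which is the form PySR accepts) depending on which y-variable is fitted.
# Variable names and loss bodies are hypothetical placeholders.
LOSSES = {
    "y_temperature": "loss(prediction, target) = (prediction - target)^2",
    "y_pressure": "loss(prediction, target) = abs(prediction - target)",
}

def select_loss(y_name: str) -> str:
    # Look up the Julia loss snippet for the requested target variable.
    return LOSSES[y_name]
```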
-
Very weird. How are you letting each run quit? Do they run until the end, or do you quit them early? Do you see this with bumper set to False?
-
Also, if you're up for getting your hands dirty, you could try building Julia at this commit: JuliaLang/julia#56801, and then forcing PySR to use that Julia. I'm curious whether turning off bumper helps, too.
-
Thanks for your answers! During the sleep, the python.exe of the process showed no activity at all in terms of memory or CPU, and there was also no other process that seemed suspicious to me. I checked that with ntop and Process Explorer, as I am working on a Windows machine. Regarding your second answer: I tried to run Julia at the state of that commit, but unfortunately I did not manage. How can I pass the SHA of the commit to the require_julia function, or is there another way? What I do for now is to "outsource" the for-loops: a .bat file calls a Python script that runs PySR several times, and I save the parameters of each regression as a .pkl file and read it back in the next iteration to continue where I ended. This works without increasing runtime for many iterations.
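Schematically, the checkpointing part of that workaround looks like this. The file name and the stored fields are illustrative placeholders; in my case the saved object holds whatever parameters the next regression needs to resume:

```python
import os
import pickle

STATE_FILE = "regression_state.pkl"  # hypothetical checkpoint file name

def load_state(default):
    # Read the previous invocation's parameters if the script ran before.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, "rb") as f:
            return pickle.load(f)
    return default

def save_state(state):
    # Persist the parameters so the next invocation can continue.
    with open(STATE_FILE, "wb") as f:
        pickle.dump(state, f)

state = load_state({"iteration": 0})
state["iteration"] += 1   # ... run one PySR regression here ...
save_state(state)
```

Each invocation of the script is a fresh process, so any per-process slowdown is reset between regressions.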
-
I am running some iterations now on a server with many cores and multiple CPUs, and when setting parallelism to "multiprocessing" the behavior no longer occurs. I really have no idea what might cause the issue when running on a single machine. If you could specify how to force PySR to use a certain Julia commit, I would still like to try whether that solves the problem.
-
Here's how to do it by installing the nightly version of Julia. First, install Juliaup:

```shell
curl -fsSL https://install.julialang.org | sh
# on Windows: winget install julia -s msstore
```

Then, install the "nightly" channel:

```shell
juliaup add nightly
```

This is the most recent development version of Julia. Then, when you start Python, set the `PYTHON_JULIAPKG_EXE` environment variable to the Julia binary that this command prints:

```shell
julia +nightly --startup-file=no -e 'println(Base.julia_cmd())'
# For me, I get:
# /Users/mcranmer/.julia/juliaup/julia-nightly/bin/julia -C native -J/Users/mcranmer/.julia/juliaup/julia-nightly/lib/julia/sys.dylib -g1 --startup-file=no
```

Set this when you start Python:

```python
import os
os.environ["PYTHON_JULIAPKG_EXE"] = "/Users/mcranmer/.julia/juliaup/julia-nightly/bin/julia"

# Then:
import pysr
```

And it will automatically install under this specific Julia version!
-
Hello,
I am using PySR for my master's thesis and I came across some strange behavior that I have not found any comments about on this GitHub page yet.
When I run several PySR runs one after another in a single script (for example in a for-loop), the runs tend to take longer and longer, even though their settings are identical, no warm start is enabled, and I am creating a new PySRRegressor instance for each run.
An example function that causes the behavior is given below, together with the recorded increase in runtime and RAM usage. The RAM usage was my first guess, but it does not increase over the searches (see the record). The clock frequency of the CPU also stays constant.
In the PySR output it is noticeable that the number of evaluations per second is decreasing.
The increase in runtime becomes larger when running a more extensive regression, for example with many populations.
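Schematically, the timing loop looks like the following sketch, with a placeholder standing in for the actual PySRRegressor fit (which is where the slowdown shows up; the real fit takes minutes per run):

```python
import time

def run_regression():
    # Placeholder for: PySRRegressor(...).fit(X, y), with a fresh
    # regressor instance and identical settings on every run.
    return sum(i * i for i in range(100_000))

timings = []
for i in range(3):
    start = time.perf_counter()
    run_regression()
    timings.append(time.perf_counter() - start)
    print(f"run {i}: {timings[-1]:.3f} s")
```

With the real fit in place, the recorded timings grow from run to run even though each iteration is configured identically.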
I really do not know what might cause this issue, so I would be very grateful for your support!
Thank you very much in advance.
Best regards