No execution time comparison available for PRs #43166
assign core, reconstruction
New categories assigned: core,reconstruction @Dr15Jones, @jfernan2, @makortel, @mandrenguyen, @smuzaffar you have been requested to review this Pull request/Issue and eventually sign? Thanks
A new Issue was created by @mandrenguyen Matthew Nguyen. @rappoccio, @antoniovilela, @sextonkennedy, @makortel, @smuzaffar, @Dr15Jones can you please review it and eventually sign/assign? Thanks. cms-bot commands are listed here
The problem is that the cmsRun process itself gets a segfault while being profiled by Igprof. The same segfault might happen when being profiled with Vtune. |
In case IgProf+cmsRun combination crashes, is any information on the job timings saved that can be used for comparison? |
Usually the FastTimerService job completes and the average per module is contained in the raw json file if the resources piechart is not readable. |
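For reference, a minimal sketch of how that JSON summary is typically switched on in a cmsRun configuration; the parameter names follow the HLTrigger/Timer FastTimerService, while the process name and output file name here are only illustrative:

```python
# Minimal sketch: enable the FastTimerService JSON summary in a cmsRun config.
# Parameter names as in HLTrigger/Timer; process name and file name are examples.
import FWCore.ParameterSet.Config as cms

process = cms.Process("RECO")
process.load("HLTrigger.Timer.FastTimerService_cfi")
process.FastTimerService.writeJSONSummary = cms.untracked.bool(True)
process.FastTimerService.jsonFileName = cms.untracked.string("step3_resources.json")
```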
The IgprofService dumps the profile after the first, middle and next to last event. The first one might not have enough data to be meaningful. |
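For context, those dump points are steered by the IgProfService configuration; a rough sketch along the lines of the usual profiling customisation (treat the exact parameter names and values as assumptions):

```python
# Rough sketch of an IgProfService configuration controlling when profile dumps
# are written; event numbers and the output pattern are illustrative assumptions.
import FWCore.ParameterSet.Config as cms

process = cms.Process("RECO")
process.IgProfService = cms.Service("IgProfService",
    reportFirstEvent        = cms.untracked.int32(1),
    reportEventInterval     = cms.untracked.int32(50),
    # %I is replaced by the event count at the time of the dump
    reportToFileAtPostEvent = cms.untracked.string("| gzip -c > IgProf.%I.gz"),
)
```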
@mandrenguyen Can you point me to a PR so I can look at the logs?
The crashes under profilers are quite likely caused by the memory corruption inside TensorFlow (when run through IgProf or VTune) that has been investigated in #42444.
The FastTimerService should suffice. Still, it seems not to be active in RelVals.
For my education: is this replacement documented somewhere?
@mmusich it is expected that VTune gives the same problem as igprof, so the replacement has not been done. |
I see, that's bad news. I gather the same holds true for user checks when developing (regardless of the time profiling in PRs).
Is the most burning problem that there is no timing information (entire job, per module) or that the real IgProf/VTune profile (with function-level information) is missing (because of crash)? |
For me (personally) at least, having the function-level information would be really helpful.
IMHO the crash of IgProf/VTune is a problem even though there is timing info from the FastTimer module, but the real issue is not having a comparison of baseline time performance vs baseline+PR, which forces us to detect total increases a posteriori in the profiles when a pre-release is built, and then figure out which PR(s) were responsible. Perhaps a comparison script based on the FastTimer output could be useful, even if not optimal; do you think this is possible @gartung?
Yes, it would be possible. In fact there is already a script that merges two FastTimer output files.
You can try this script https://raw.githubusercontent.com/gartung/circles/master/scripts/diff.py |
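For anyone who wants to see the idea without the script, here is a minimal sketch of the same kind of comparison; it assumes the FastTimerService resources JSON contains a "modules" list with "type", "label" and "time_real" fields, which may differ from the actual schema used by diff.py:

```python
# Minimal sketch of comparing two FastTimerService resources JSON files.
# Assumes a "modules" list with "type", "label" and "time_real" entries;
# the real schema used by diff.py may differ.
import json
import sys

def load_modules(path):
    with open(path) as f:
        data = json.load(f)
    return {(m["type"], m["label"]): m["time_real"] for m in data["modules"]}

def main(baseline_path, pr_path):
    base = load_modules(baseline_path)
    pr = load_modules(pr_path)
    for key in sorted(set(base) | set(pr)):
        b, p = base.get(key, 0.0), pr.get(key, 0.0)
        print(f"{key[0]}/{key[1]}: baseline={b:.1f} ms, PR={p:.1f} ms, diff={p - b:+.1f} ms")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```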
If you add
Thanks, but the real need is to have the comparison in the PR, to see the changes introduced, the same way we had with IgProf. At this point "enable profiling" only runs the FastTimer in the PR FW, but gives no comparison of time, which is what allows one to decide.
I am working on a pull request to cms-bot that will add the diff.py script and run it when "enable profiling" is requested.
This script has been added to the pull request profiling and produces an HTML table of all of the modules in the resources file and their differences.
Do you mean something along the lines of the baseline being run first, and that leading to the input file being cached in memory? If we are to go to that level of precision, I'd suggest to
In the table, I'd suggest to also add units to both time and memory, and in the cells present first the baseline and then the PR value (but keep the difference as "PR - baseline", as we do in the maxmemory table). |
I would not seek precision, but something which allows us to tell if there is a real change or not. It seems to me that with 10 events we are left with statistical fluctuations, which give more than a 3-4% difference in about 90% of the modules being compared (orange everywhere).
Moreover, fluctuations seem to make some modules increase and others decrease, so perhaps a global total value of time per event as a summary makes more sense if we cannot get rid of these ups and downs.
Would a sum over module labels per module type give a better indication? The reco-event-loop shows the time in each module type's produce method. |
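A rough sketch of that aggregation, under the same schema assumption as the comparison sketch above:

```python
# Rough sketch: sum the real time over all module labels of each module type,
# again assuming a "modules" list with "type", "label" and "time_real" fields.
import json
from collections import defaultdict

def time_per_module_type(path):
    totals = defaultdict(float)
    with open(path) as f:
        for m in json.load(f)["modules"]:
            totals[m["type"]] += m["time_real"]
    return dict(totals)
```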
I think so, at least as a first result, to see at a glance whether timing has increased or not; the module-by-module info is still necessary to spot culprits. However, I still believe 10-event jobs have a very large uncertainty.
The variance in module timing might be caused by more than one Jenkins job running on vocms11 at the same time. I can restrict the baseline and PR profiling jobs so that only one at a time can run on vocms11. |
I determined that the IB profiling was being run on cmsprofile-11 and the PR profiling was being run on vocms011. The former is a multicore VM, the latter is a bare-metal machine. This could also account for the differences.
Is the "IB profiling" the same as "PR test baseline profiling"? I'm asking because for DQM/reco comparison purposes the "IB" and "PR test baseline" are different things. |
There is no profiling done for baseline. The comparison is with the corresponding IB. |
vocms011 is also used to monitor RECO profiling on new releases, so it is used centrally from time to time. |
For example, modern CPUs adjust their operating frequency based on the load of the machine. Other processes may also interfere with e.g. disk, memory, and/or network usage. |
Yes, I see that, but that would shift all the modules in one direction, not in a random way, am I right? |
Another effect is that the OS can 'steal' a CPU from a process to use for something else temporarily. The heavier the load on the machine, the more likely this is to happen. If it happens while a module is running, it makes the time between the start and stop of the module longer than it would have been without the interruption.
For a comparison of the IB timing and memory FastTimerService numbers you can also look at |
Thanks @gartung |
I tried using the step2.root produced by the IB profiling build with the IB+PR profiling in step3. I also tried to account for timing differences from running on two different VMs by dividing the time per module by the total time. The percentage difference is calculated from these fractions of the total time. I still see large percentage differences. |
I am also working on summing metrics across module labels for each module type. |
Running the IB and IB+PR profiling on the same VM still results in differences.
With the number of events increased to 100, the variance between two runs of step 3 on the same VM is smaller.
@gartung sorry, but the last version of the comparison seems to have reversed signs in the total RECO summary at the beginning of the report, see: "RECO real time difference is (PR-baseline) = -2007 ms", while the PR seems to be slower (915525 ms). Thanks
The denominator was left out because the percentage diff of fractional time was thought to be less useful. |
I meant in the legend, right after the definition of the colors. We are quoting time-fraction diff percentages, so there must be a denominator.
That would be total time. I will add it to the next iteration of the script. |
so it's 100% * ((PR/PR total time) - (Base/Base total time)) |
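As a tiny worked example of that formula (all numbers invented for illustration):

```python
# Worked example of the fractional-time difference; all numbers are invented.
base_module, base_total = 10.0, 1000.0   # module and total time in the baseline (ms)
pr_module, pr_total = 12.0, 1100.0       # module and total time in the PR (ms)

diff = 100.0 * (pr_module / pr_total - base_module / base_total)
print(f"{diff:+.2f}")  # +0.09: the module's share of the job time grew slightly
```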
following #47106 (comment) (Jan 16, 2025)
Isn't the printout buggy in the E.g. the topmost line |
Yes, this script is a work in progress. |
cms-sw/cms-bot#2414 |
For a few months now we have not been able to see the CPU impact of a given pull request, which used to be possible with the "enable profiling" option in the Jenkins tests. This is a bit problematic for integrating new features, as we won't easily be able to keep track of changes in performance until a pre-release is built.
The issue seems to come from igprof, which apparently can no longer really be supported.
One suggestion from @gartung is to try to move to VTune.