Merge remainder monitoring PRs to oss fuzz #4563
Merged
Conversation
#4516)

### Motivation

#4458 implemented an error rate for utasks, considering only exceptions. In #4499, outcomes were split between success, failure, and maybe_retry conditions. There we learned that the volume of retryable outcomes is negligible, so it makes sense to count them as failures.

Listing out all the success conditions under _MetricRecorder is not desirable; however, we are consciously taking on this technical debt so we can deliver #4271. A refactor of uworker main will be performed later, so we can split the success and failure conditions, both of which are currently mixed in uworker_output.ErrorType.

Reference for tech debt acknowledgement: #4517
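For illustration only, here is a minimal Python sketch of folding retryable outcomes into failures when computing an error rate. The names `Outcome`, `record_outcome`, and `error_rate` are hypothetical and are not the actual _MetricRecorder code:

```python
# Hypothetical sketch (not the actual ClusterFuzz code): counting
# maybe_retry outcomes as failures when recording the utask error rate.
import enum


class Outcome(enum.Enum):
  SUCCESS = 'success'
  FAILURE = 'failure'
  MAYBE_RETRY = 'maybe_retry'


def record_outcome(outcome: Outcome, counter: dict) -> None:
  """Increments error-rate counters, folding retries into failures.

  Since the volume of retryable outcomes is negligible, they are counted
  as failures rather than tracked separately.
  """
  if outcome is Outcome.SUCCESS:
    counter['success'] = counter.get('success', 0) + 1
  else:
    # FAILURE and MAYBE_RETRY both count toward the error rate.
    counter['failure'] = counter.get('failure', 0) + 1


def error_rate(counter: dict) -> float:
  """Returns failures as a fraction of all recorded outcomes."""
  total = counter.get('success', 0) + counter.get('failure', 0)
  return counter.get('failure', 0) / total if total else 0.0
```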
…stuck in analyze (#4547)

### Motivation

We currently have no way to tell whether analyze task was successfully executed. The TESTCASE_UPLOAD_TRIAGE_DURATION metric from #4364 would only track duration for tasks that did finish.

An analyze_pending field is added to the Testcase entity in Datastore. It is set to False by default, to True for manually uploaded testcases, and back to False once analyze task postprocess runs. This PR also increments the UNTRIAGED_TESTCASE_AGE metric from #4381 with a status label, so we can tell at which step a testcase is stuck and alert if analyze is taking longer to finish than expected. The alert itself could be, for instance, P50 age of untriaged testcases (status=analyze_pending) > 3h.

This also retroactively addresses comments from #4481:
* Fixes the docstring for emit_testcase_triage_duration_metric
* Removes assertions
* Renames TESTCASE_UPLOAD_TRIAGE_DURATION to TESTCASE_TRIAGE_DURATION, since it now also accounts for fuzzer-generated testcases
* Uses a boolean "from_fuzzer" field, instead of an "origin" string, in TESTCASE_TRIAGE_DURATION
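For illustration only, a minimal Python sketch of the analyze_pending lifecycle described above and of the status label it enables. The `Testcase` stand-in, `analyze_postprocess`, `untriaged_age_hours`, and the `from_fuzzer` parameter are hypothetical names, not the actual Datastore model or metric code:

```python
# Hypothetical sketch (not the actual datastore/monitoring code): the
# analyze_pending lifecycle and how an age metric with a status label
# could be derived from it.
import datetime


class Testcase:
  """Minimal stand-in for the Testcase entity."""

  def __init__(self, from_fuzzer: bool):
    self.created = datetime.datetime.utcnow()
    # False by default; True only for manually uploaded testcases.
    self.analyze_pending = not from_fuzzer


def analyze_postprocess(testcase: Testcase) -> None:
  # Once analyze task postprocess runs, the testcase is no longer pending.
  testcase.analyze_pending = False


def untriaged_age_hours(testcase: Testcase) -> tuple[float, str]:
  """Returns (age in hours, status label) for the untriaged-age metric."""
  age = datetime.datetime.utcnow() - testcase.created
  status = 'analyze_pending' if testcase.analyze_pending else 'triaged'
  return age.total_seconds() / 3600.0, status
```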
### Motivation

As a final step for the monitoring project, this PR creates a dashboard for overall system health. The reasoning behind it is to have:
* Business-level metrics (fuzzing hours, generated testcases, issues filed, issues closed, testcases with pending processing)
* Testcase metrics (untriaged testcase age and count)
* SQS metrics (queue size and published messages, per topic)
* Datastore/GCS metrics (number of requests, error rate, and latencies)
* Utask-level metrics (duration, number of executions, error rate, latency)

These are sufficient to apply the [RED methodology](https://grafana.com/blog/2018/08/02/the-red-method-how-to-instrument-your-services/) (rate, errors, and duration), provide high-level metrics we can alert on, and aid in troubleshooting outages with a well-defined methodology.

There were two options to commit this to version control: terraform, or butler definitions. The former was chosen, since it is the preferred long-term solution and is also simpler to implement, as it supports copy-pasting the JSON definition from GCP.

### Attention points

This should be automatically imported from main.tf, so it should be sufficient to just place the .tf file in the same folder and have butler deploy handle the terraform apply step.

### How to review

Head over to go/cf-chrome-health-beta, internally. The actual dashboard definition is not expected to be reviewed; it is just a dump of what was manually created in GCP.

Part of #4271
This brings back #4497. The issue was that, in the PromQL definitions, the '{' and '}' characters were lacking a double escape (\\\\{ and \\\\}). Also, the only parameter to the terraform object should be dashboard_json.

Testing: terraform validate

![image](https://github.com/user-attachments/assets/f67434d0-c8e4-4bc0-be38-27188c98ea49)

Part of #4271
jonathanmetzman approved these changes on Dec 27, 2024
lgtm
This merges #4512, #4516, #4547, #4497, #4560, and #4561.