(BAOBAB) Clean up nodejs api #179
Merged
Conversation
martinkersner approved these changes on Feb 15, 2024:
LGTM!
ghost deleted the fix/clean-up-legacy-api branch on February 16, 2024 04:01
jay-kim-kor pushed a commit that referenced this pull request on Apr 12, 2024, with the following message:
* vrf image update
* vrf image update
* remove name
* remove name
* feat: Remove prometheus
* feat: Remove prometheus
* feat: Update tag to deploy new fetcher
* feat: Update tag to deploy new fetcher
* feat: Update tag for VRF
* feat: Update tag for VRF
* feat: Deploy new core image
* feat: Deploy new core tag
* feat: Deploy core image
* feat: Update version of RR, VRF
* feat: Update vrf-worker
* feat: Update vrf
* feat: Add default namespace
* fix: Remove secret value
* feat: Add default namespace
* fix: modify wrong server url
* feat: Add bisontry-balance-checker secret
* feat: Update data feed
* Fix wrong port number (#13)
* fix: Make vrf to use fixed port number
* fixup! fix: Make vrf to use fixed port number
* fix: Make `request-response` to use fixed port number
* feat: Add health-check port to aggreagtor service
* feat: Define `DATA_FEED_REPORTER_CONCURRENCY` env var
* feat: Define DATA_FEED_REPORTER_CONCURRENCY env var
* Deploy fetcher to add reducer (Baobab) (#17)
* deploy for new fetcher reducer
* fix: update appVersion
* fix: update chart version
* v0.4.0 (#18)
* (baobab) Pass data feed reporter concurrency to reporter (#23)
* fix: Rollback DATA_FEED_REPORTER_CONCURRENCY in vrf 🤦
* feat: Set 20 workers in DATA_FEED_REPORTER_CONCURRENCY
* chore: Bump up aggregator chart version to 0.3.14
* Update tag for DATA_FEED_REPORTER_CONCURRENCY (#24)
* Env vars must be strings! (#25)
* Pass `CONCURRENCY` env var to Fetcher (#26)
* feat: Create cronjob for execute PoR every minute (#29)
* fix: Change `POR_AGGREGATOR_HASH` to use updated address (#30)
* (CYPRESS) Deploy PoR (#31)
* feat: Create cronjob for execute PoR every minute
* fix: Change `POR_AGGREGATOR_HASH` to use cypress por address
* Increase fetcher concurrency from 20 to 30 (#28)
* Decrease fetcher concurrency from 30 to 20 (#32)
* Split deployment (#33)
* fix: Remove conflicts
* feat: Split tags for aggregator
* feat: Update tag for `request-response`
* feat: Split tags for vrf
* fix: Modify tag value to use changed it
* fix: Revert wrong merged file
* fix: Correcting wrong service name
* Use Klaytn's RPC (#36)
* Use allthatnode rpc for aggregator (#37)
* Revert back to our node for aggregator (#38)
* Revert "Use allthatnode rpc for aggregator (#37)" This reverts commit 2035b53.
* Revert "Use Klaytn's RPC (#36)" This reverts commit 2a8572c.
* fix: Remove empty space
* feat: update image tag for por (#39)
* feat: update image tag for por (#40)
* feat: add env vars for slack webhook (#41)
* feat: Add env vars for por (#42)
* feat: add env vars for slack webhook
* feat: Add env vars for por
* feat: Add env var for por (#43) Co-authored-by: jay-bisonai <[email protected]>
* feat: update for faster query and proxy location (#44)
* fix: fix migration file (#45)
* feat:update fetcher with new proxy settings (#46)
* feat: update por image tag (#48)
* feat: update por image tag (#49)
* fix: rollback item not found behavior (#50)
* (cypress) Pass data feed reporter concurrency to reporter (#22)
* fix: Rollback DATA_FEED_REPORTER_CONCURRENCY in vrf 🤦
* feat: Set 20 workers in `DATA_FEED_REPORTER_CONCURRENCY`
* chore: Bump up aggregator chart version to 0.4.1
* feat: Let's not touch the `version`
* feat: update query, add new proxy property (#51)
* Add label for make PoR to measurable (#52)
* feat: Add label
* fix: Correcting typo
* feat: Add label to metadata of cronjob
* feat: Add label for make PoR to measureable (#53)
* feat: update api (#54)
* feat: node 20 implementation to listener (#55)
* fix: Fix synctax error (#57)
* fix: update core Dockerimage file (#58)
* fix: update prisma version to 5.5.2 (#60)
* fix: updates for node20 and prisma update (#61)
* feat: implement redis store for aggregates (#62)
* add missing environment variable (#63)
* fix: implement env
* fix: remove double app port
* fix: add secret in api-secret yaml file (#64)
* fix: fix typo (#65)
* fix: fix api route (#67)
* feat: reference new api call for aggregates (#68)
* feat: deploy fetcher (#69)
* feat: implement util workspace for reducer (#70)
* feat: add util workspace reference in core (#71)
* Boabab fetcher timeout (#73)
* fix: Modify health check port (#72)
* Use KF's public JSON-RPC for POR (#74)
* Revert "Use KF's public JSON-RPC for POR (#74)" (#75) This reverts commit 5ce9117.
* feat: deploy por updates (#76)
* feat: update into newest (#77)
* feat: update fetcher including queue obliterate and timeout (#78)
* fix: use previous version to fix obliterate issue (#79)
* feat: implement queue optimization based on roundId (#81)
* [CYPRESS] Deploy `orakl-backend` (#82)
* feat: Create secret
* feat: Create chart
* feat: Add port forward 80 to 3001
* fix: Remove wrong value
* fix: Correcting typo in docker registry
* feat: Add secret for pull from docker registry
* feat: Update tag
* rollback (#83)
* feat: implement obliterate & roundId optimization (#84)
* fix: rollback roundId (#85)
* fix: rollback (#90)
* (CYPRESS) Allow orakl-backend from anywhere (#87)
* feat: Add orakl-backend-fe app to ArgoCD
* feat: Set service to create firewall rule automatically
* feat: implement queue obliterate on start (#91)
* feat: queue obliterate for aggregator l,w,and r (#92)
* feat: implement queue obliterate on fetcher start (#93)
* (CYPRESS) Update orakl-be hotfix (#94)
* (CYPRESS) Allow HTTPS for `orakl-backend` (#97)
* (BAOBAB) Update Bisontry services (#98)
* feat: Add bisontry-daily-report
* feat: Change docker registry to use GCP
* feat: Update bisonty services
* feat: Update secret to match the changed env
* fix: Correcting `imagePullSecrets` of bisontry
* (BAOBAB) Fix bisontry-balance-checker is not working (#100)
* feat: Update tag of balance-checker
* (CYPRESS) Update Bisontry services in Orakl (#101)
* feat: Update tag bisontry services
* feat: Add new property for bisontry-health-checker
* feat: Add bisontry-daily-report
* (CYPRESS) Enable SSL for orakl-backend (#103)
* fix: Remove source range is not working
* fix: Remove https part from loadbalancer
* feat: Add ingress for ssl lb front of lb of orakl-backend
* (CYPRESS) Fix problem when sync in ArgoCD (#104)
* fix: Change service type to NodePort
* feat: Add exact path for ingress of `orakl-backend` (#106)
* (CYPRESS) Open TLS for `orakl-backend` (#108)
* fix: Do not use ingress
* feat: Open TLS and assign external IP address
* feat: Add load tls
* feat: Update image tag
* (Baobab) Deploy new grafana version (#110)
* feat: deploy new grafana version
* feat: update appVersion, add environment variable
* fix: rollback wrong pr (#111)
* (BAOBAB) Deploy `orakl-grafana` (#112)
* feat: Add `grafana-as-code` service
* feat: Add app to ArgoCD
* feat: Add environment variable for anonymous authentication
* Setup vault for grafana as code (#113)
* fix: add vault secret keys
* feat: add secret-store settings
* feat: grafana as code for cypress (#114)
* fix: fix typo (#115)
* (BAOBAB) Update image tag of pods that can't started. (#116)
* feat: Update tag of CLI
* feat: Update LRW tag of RR
* feat: Update LWR tag of VRF
* feat: Update image tag of PoR
* (BAOBAB) Open new Grafana to inside of K8s (#118)
* feat: Add port configuration for Grafana deployment and create service
* fix: Fix selector syntax in service.yaml
* (BAOBAB) Add type: LoadBalancer to the service of new Grafana (#121)
* feat: Add LoadBalancer type to service.yaml
* (CYPRESS) Open new Grafana to inside of K8s (#120)
* feat: Add port configuration for Grafana container
* feat: Add LoadBalancer type to service.yaml
* feat: delegated fee fallback (#126)
* feat: delegated fee fallback (#127)
* feat: update business dashboard (#122)
* fix: update with successful last build (#128)
* feat: datetime formatting with iso format (#129)
* feat: updated datetime format for worker and por (#130)
* feat: update rr worker (#131)
* (CYPRESS) RR image update (#132)
* feat: update rr image version
* fix: match Baobab image version
* (CYPRESS) VRF image update (#133)
* fix: update vrf image version
* fix: match with Baobab
* feat: update cli image version (#134)
* feat: update delegator image version (#135)
* (BAOBAB) Cleanup Grafana (#137)
* feat: Pull image from GAR
* test: Deploy to test pull from GAR
* feat: Enable auto sync in grafana
* test: Test ArgoCD's auto sync
* feat: Set ArgoCD's sync period to 10 second
* feat: Add `imagePullSecrets`
* fix: Fixing syntax error from `imagePullSecrets`
* fix: Correcting indent and `-`
* fix: Remove blank before `-`
* chore: Remove old Grafana
* chore: Revert `targetRevision` of new Grafana
* (CYPRESS) Cleanup Grafana (#138)
* feat: Pull new Grafana image from GAR
* feat: Enable ArgoCD's auto sync of new Grafana
* fix: Correcting wrong path of `imagePullSecrets`
* feat: update grafana monitor db url (#143)
* feat: update grafana monitor db url (#144)
* feat: update datetime formatting in POR (#142)
* feat: update rr worker datetime formatting (#141)
* feat: update datetime formatting (#140)
* fix: update db connection url (#145)
* fix: update url (#146)
* (BAOBAB) goapi baobab deployment (#139)
* feat: helm chart for goapi baobab deployment
* feat: add deployment.yaml
* fix: updates based on feedbacks
* fix: update based on feedback
* fix: remove start command for goapi (#147)
* fix: update command for goapi (#148)
* fix: update docker image (#149)
* fix: update baobab goapi version (#150)
* (BAOBAB) Add labels to deployment.yaml and selector to service.yaml (#151)
* (BAOBAB) Fix goapi's sync error in ArgoCD (#152)
* fix: Add more labels
* feat: Update deployment labels to include "-goapi" suffix
* feat: Update deployment labels in deployment.yaml
* feat: Update deployment labels in deployment.yaml
* fix: update image (#153)
* fix: goapi logging (#154)
* fix: use pgxPool instead of pgxConn (#155)
* (BAOBAB) The health-check URL of api change to goapi (#156)
* (BAOBAB) Fix baobab-health-check error (#157)
* (CYPRESS) Deploy `orakl-db-backup` (#158)
* feat: Add `orakl-db-backup` to chart
* feat: Add secret-store for `orakl-db-backup`
* feat: Add ArgoCD app for `orakl-db-backup`
* chore: Delete service.yaml file is unnecessary
* (CYPRESS) Fix `orakl-db-backup` is not started (#160)
* feat: Update appVersion and image tag
* fix: Fix issue is `orakl-db-backup` can't start
* feat: print out more information (#161)
* fix: update package version (#164)
* feat: reporter warning on invalid keys (#162)
* feat: improve goapi logging (#163)
* (CYPRESS) Prepare cypress goapi deployment (#159)
* feat: prepare cypress goapi deployment
* fix: update version
* (CYPRESS) Fix goapi deployment (#165)
* fix: Commented out resource limits and requests in values.yaml (#166)
* fix: update url (#168)
* fix: reset resource limit (#167)
* fix: Update appVersion and image tag to normal image (#169)
* (BAOBAB) Api implement empty row handling (#170)
* feat: implement empty row handling
* fix: update default behavior
* (CYPRESS) Api implement empty row handling (#171)
* fix: update empty row behavior
* fix: update default behavior
* fix: aggregator hash fix validation (#172)
* fix: remove unnecessary validation (#173)
* fix: cli compatibility updates (#174)
* fix: update compatibility for go api (#175)
* (baobab) Update CI Helm Lint workflow to include pull request events (#176)
* feat: Add actions workflow to linting helm charts
* feat: Update Helm setup and linting process
* feat: Update helm chart change detection logic
* fix: Fix can't detect changed charts
* feat: Update ci.helm-lint.yaml to set output variable for changed charts
* feat: Stop workflow when lint returned ERROR
* fix: Update Helm lint command to check for "failed" instead of "[ERROR]".
* fix: Fix workflow can't find HEAD~1 git commit
* feat: Make workflow be failed when Lint returned error
* feat: Store failed services to output of workflow
* refactor: Remove unnecessary code in ci.helm-lint.yaml
* feat: Add failed services output to GitHub Actions workflow
* fix: Fix linting script to display failed services
* feat: Update CI Helm Lint workflow to include pull request events
* fix: Remove unnecessary branches from CI workflow
* (cypress) Update CI Helm Lint workflow to include pull request events (#177)
* feat: Add ci script from PR #176 (baobab)
* fix: Remove unnecessary branches
* (BAOBAB) Clean up nodejs api (#179)
* fix: (BAOBAB) remove legacy api
* fix: remove manifest
* fix: lint fix
* feat: image version with published to new ecr
* feat: Update `health-checker` to apply svc URL changes --------- Co-authored-by: jay-bisonai <[email protected]>
* (CYPRESS) Clean up nodejs api (#178)
* fix: clean up legacy api
* fix: rename folder
* fix: remove goapi manifest, use api manifest
* fix: lint fix
* feat: image version published to new ecr
* fix: Update name in templates to use name from `values.yaml`
* feat: Update `health-checker` to apply api URL changed --------- Co-authored-by: jay-bisonai <[email protected]>
* (BAOBAB) Hotfix sync error api (#180) fix: Modify name in templates to use name from values.yaml
* feat: update grafana as code (#187)
* feat: update grafana as code (#188)
* (Baobab) Hotfix: Update listener, it's use old one (#189)
* chore: update appVersion and image tag of listener to latest versions
* (BAOBAB) Reduce fetcher logs (#190)
* fix: reduce baobab fetcher logs
* fix: update based on linter
* (Baobab) Update whole VRF, (listener and reporter) of RR (#195)
* chore: Update VRF
* chore: Update RR l and r --------- Co-authored-by: jay-bisonai <[email protected]>
* (Cypress) Update VRF (#196)
* chore: Update VRF Co-authored-by: jay-bisonai <[email protected]>
* (BAOBAB) Reduce listener logs (#192)
* fix: level down listener logs
* fix: update log levels
* fix: update log level into "info"
* fix: update log levels --------- Co-authored-by: Jay(김재훈) <[email protected]>
* (CYPRESS) Reduce listener logs (#194)
* fix: level down listener logs
* fix: update log level of vrf
* (CYPRESS) Reduce fetcher logs (#191)
* fix: reduce cypress fetcher logs
* fix: fix linter error
* (Cypress) Fix reduce log is not applied properly (#198)
* chore: Change log level to `info` in VRF
* chore: Modify RR env to use value from `values.yaml` file
* fix: Add `global` to before of `.env`
* fix: Add `global` to front of `.env` --------- Co-authored-by: jay-bisonai <[email protected]>
* fix: update connection addr (#199)
* feat: update dex fetcher (#200)
* fix: includes dex feed for eth usdt (#201)
* (BAOBAB) Updates for go delegator (#183)
* feat: updates for go delegator
* fix: fix linter error
* feat: set image version
* fix: update filenames and var names
* refactor: Remove unused part of code
* feat: Add service account for get secret --------- Co-authored-by: jay-bisonai <[email protected]>
* (Baobab) Hotfix error (#202)
* fix: Revert deletion `serviceAccount` field from values.yaml to fix error
* fix: Correct typo --------- Co-authored-by: jay-bisonai <[email protected]>
* (Baobab) Hotfix godelegator (#204)
* fix: Revert deletion `serviceAccount` field from values.yaml to fix error
* fix: Correct typo
* fix: Correcting typo --------- Co-authored-by: jay-bisonai <[email protected]>
* (Baobab) Hotfix godelegator (#205)
* fix: Revert deletion `serviceAccount` field from values.yaml to fix error
* fix: Correct typo
* fix: Correcting typo
* feat: Enable SA in deployment
* fix: Remove unnecessary if statement
* fix: Correcting SA name in deployment --------- Co-authored-by: jay-bisonai <[email protected]>
* feat: remove busd from grafana (#206)
* feat: removes busd from cypress grafana (#207)
* feat: add checks if string empty (#208)
* (Baobab) Remove BUSD-USDT from bisontry-event-checker (#209)
* feat: Update tag version Co-authored-by: jay-bisonai <[email protected]>
* (Cypress) Remove BUSD-USDT from bisontry-event-checker (#210)
* feat: Update tag version Co-authored-by: jay-bisonai <[email protected]>
* feat: update godelegator with trimming (#211)
* (CYPRESS) godelegator deployment (#214)
* feat: cypress go delegator helm chart
* fix: add missing argocd manifest
* fix: update revision
* chore: Remove TODO comment
* feat: Add `port` and `loadBalancerIP` to `values.yaml`
* feat: Use port from values in deployment --------- Co-authored-by: jay-bisonai <[email protected]>
* feat: remove provider url (#216)
* fix: update gsm path (#217)
* chore: Remove assigning IP address manually (#218) Co-authored-by: jay-bisonai <[email protected]>
* fix: uncomment resource limit (#219)
* fix: update cypress db connection url (#220)
* fix: update healthcheck delegator (#221)
* fix: update image version to check godelegator (#222)
* (Baobab) Automate adding ArgoCD app (#212)
* chore: Add name and namespace to `values.yaml`
* test: Remove parent field
* feat: Move name and namespace to child of global field
* feat: Remove `orakl-` prefix
* feat: Add `global.name` and `global.namespace` value to every chart
* feat: Separate redis to multiple directory
* feat: Delete unused charts
* feat: Add charts to ArgoCD app automate
* feat: Add redis-fetcher
* fix: Add name and namespace because redis charts not contain
* feat: Delete ArgoCD app manifests, it's unused
* feat: Modify targetBranch to use `gcp-baobab-prod` --------- Co-authored-by: jay-bisonai <[email protected]>
* feat: update hash (#227)
* feat: deploy v0.0.1.20240314.0433.ce446c4 (#230) Co-authored-by: orakl-bot <[email protected]>
* feat: deploy v0.0.1.20240314.0433.ce446c4 (#231) Co-authored-by: orakl-bot <[email protected]>
* deploy baobab-godelegator-v0.0.1.20240313.0958.82d191d (#228)
* feat: deploy v0.0.1.20240313.0958.82d191d
* Update Chart.yaml --------- Co-authored-by: orakl-bot <[email protected]>
* deploy cypress-godelegator-v0.0.1.20240313.0958.82d191d (#229)
* feat: deploy v0.0.1.20240313.0958.82d191d
* Update Chart.yaml --------- Co-authored-by: orakl-bot <[email protected]>
* feat: Add bisontry-messenger (#240) Co-authored-by: jay-bisonai <[email protected]>
* feat: deploy v0.0.1.20240315.0618.c907cbb (#239) Co-authored-by: orakl-bot <[email protected]>
* feat: deploy v0.6.1.20240315.0618.c907cbb (#236) Co-authored-by: orakl-bot <[email protected]>
* feat: deploy v0.0.1.20240315.0618.c907cbb (#238) Co-authored-by: orakl-bot <[email protected]>
* feat: deploy v0.6.1.20240315.0618.c907cbb (#237) Co-authored-by: orakl-bot <[email protected]>
* (Baobab) Deploy `bisontry-messenger` (#241)
* fix: Remove env part
* feat: Add bisontry-secrets
* feat: Revert remove env part
* fix: Add missing key `password`
* feat: Add external IP address --------- Co-authored-by: jay-bisonai <[email protected]>
* feat: Add `orakl-db-backup` job to cronjob list (#243) Co-authored-by: jay-bisonai <[email protected]>
* (Cypress) Fix orakl-db-backup job (#244)
* chore: Delete orakl-db-backup app
* feat: Add `cronjobs` to ArgoCD
* fix: Fix orakl-db-backup job --------- Co-authored-by: jay-bisonai <[email protected]>
* (Cypress) Fix orakl-db-backup (#245)
* fix: Remove sudo command
* feat: Remove job when it's successful
* fix: Correct indent
* feat: Execute whole command into single item
* chore: Add escape char
* chore: Remove escape char
* chore: Add escape char again
* fix: Remove escape char and pass argument directly
* chore: test
* test
* test
* test
* test
* test
* test
* feat: Update dependencies
* feat: Add drop table --------- Co-authored-by: jay-bisonai <[email protected]>
* fix: Modify indent correctly (#246) Co-authored-by: jay-bisonai <[email protected]>
* (Cypress) Add delegator DB backup (#247)
* fix: Remove unnecessary char `$`
* feat: Add delegator backup --------- Co-authored-by: jay-bisonai <[email protected]>
* Remove bisontry-daily-report (#251)
* chore: Remove daily-report directory
* chore: Remove daily-report app from ArgoCD --------- Co-authored-by: jay-bisonai <[email protected]>
* (BAOBAB) Remove godelegator (#254)
* feat: remove go delegator from helm chart
* fix: use valid app version
* fix: don't use vault for gsm settings
* (CYPRESS) Remove godelegator (#253)
* fix: clear old delegator
* fix: use valid app version
* feat: Update bisontry-health-checker (#257) Co-authored-by: jay-bisonai <[email protected]>
* (BAOBAB) Boot API (#255)
* feat: boot-api helm chart for baobab
* fix: fix wrong formatting
* feat: update image tag
* fix: update based on feedbacks
* (BAOBAB) Orakl Node (#256)
* feat: orakl-node helm chart
* feat: update app version
* feat: update version with cli commands
* feat: update image with go task updates
* feat: use 'test' setting for chain
* fix: update redis port env variable into string (#258)
* chore: Reduce cpu/memory limits of reporters (#259) Co-authored-by: jay-bisonai <[email protected]>
* fix: update delegator url for orakl node (#261)
* fix: Correcting typo (#260) Co-authored-by: jay-bisonai <[email protected]>
* feat: log level debug (#262)
* chore: Remove `bisontry-daily-report` (#252) Co-authored-by: jay-bisonai <[email protected]>
* feat: update version with new features (#273)
* (Baobab) update submission proxy contract address (#274)
* feat: update submission proxy contract address
* feat: update to use most recent version
* feat: update default gas limit (#275)
* chore: Remove `admin` (#271) Co-authored-by: jay-bisonai <[email protected]>
* chore: Remove `orakl-admin` (#272) Co-authored-by: jay-bisonai <[email protected]>
* feat: update version (#276)
* feat: Remove tag command in values.yaml
* feat: Remove pvc for log in aggregator
* feat: Separate values.yaml file in aggregator
* feat: Separate network by values.<network>.yaml in each services
* feat: Add example to generate all packages
* feat: Update helm packages
* feat: Lint by using separated values file
* fix: Fix lint error
* feat: deploy v0.0.1.20240411.0711.125d763 (#283) Co-authored-by: orakl-bot <[email protected]>
* feat: add fetcher secrets (#285)
* feat: Set ArgoCD's target branch to `main`
* Update appVersion to "monitor.v1.0.0.20240229.0613.748eb76"
* feat: Install husky for git hook
* feat: Automate to update appVersion in Chart.yaml
* test: Change text
* Update appVersion to "v0.5.1.20240223.0352.eec1c2d"
* chore: Remove unnecessary part

---------

Co-authored-by: jo <[email protected]>
Co-authored-by: jay-bisonai <[email protected]>
Co-authored-by: Martin Kersner <[email protected]>
Co-authored-by: Nick <[email protected]>
Co-authored-by: Bayram Guvanjov <[email protected]>
Co-authored-by: orakl-bot <[email protected]>
jay-kim-kor pushed a commit that referenced this pull request on Apr 17, 2024, with the following message:
* feat: change delegator tag
* feat: update tag version
* fix: fix aggregator tag image typo
* feat: update tag image
* feat: Add NODE_ENV to secret store
* feat: Add DATABASE_URL secretKey
* feat: Update monitor tag
* feat: Update monitor tag
* fix: Remove vault token from code for security
* feat: Deploy api
* feat: Deploy delegator
* feat: Deploy cli
* feat: Update RR chart
* fixup! feat: Update RR chart
* feat: Deploy RR
* fix: Revert wrong deleted code
* feat: Update VRF chart
* feat: Deploy VRF
* feat: Update Data Feed chart
* feat: Deploy Data Feed
* feat: Deploy fetcher
* feat: Deploy redis
* feat: Deploy updated fetcher
* feat: Deploy monitor
* feat: Update admin chart
* feat: Deploy admin
* feat: Deploy LB for Grafana
* feat: Deploy bisontry-health-check
* feat: Deploy bisontry-balance-checker
* feat: Update bisontry helm charts
* feat: Deploy event-checker
* fix: Fix wrong name
* fix: Fix wrong directory name
* fix: Fix wrong secret name
* deploy 20230901 version of core (vrf)
* helm update
* feat: Update prometheus chart
* fixup! feat: Update prometheus chart
* feat: Update monitor chart
* feat: Deploy prometheus
* feat: update vrf chart
* feat: Update health-checker tag
* feat: Update health-checker image
* vrf image update
* remove name
* feat: Remove prometheus
* feat: Update tag to deploy new fetcher
* feat: Update tag for VRF
* feat: Deploy new core image
* feat: Deploy new core tag
* feat: Update version of RR, VRF
* feat: Update vrf-worker
* feat: Add default namespace
* Fix wrong port number (#13)
* fix: Make vrf to use fixed port number
* fixup! fix: Make vrf to use fixed port number
* fix: Make `request-response` to use fixed port number
* feat: Add health-check port to aggreagtor service
* feat: Define `DATA_FEED_REPORTER_CONCURRENCY` env var
* Deploy fetcher to add reducer (Baobab) (#17)
* deploy for new fetcher reducer
* fix: update appVersion
* fix: update chart version
* (baobab) Pass data feed reporter concurrency to reporter (#23)
* fix: Rollback DATA_FEED_REPORTER_CONCURRENCY in vrf 🤦
* feat: Set 20 workers in DATA_FEED_REPORTER_CONCURRENCY
* chore: Bump up aggregator chart version to 0.3.14
* Update tag for DATA_FEED_REPORTER_CONCURRENCY (#24)
* Env vars must be strings! (#25)
* Pass `CONCURRENCY` env var to Fetcher (#26)
* feat: Create cronjob for execute PoR every minute (#29)
* fix: Change `POR_AGGREGATOR_HASH` to use updated address (#30)
* Increase fetcher concurrency from 20 to 30 (#28)
* Decrease fetcher concurrency from 30 to 20 (#32)
* Split deployment (#33)
* fix: Remove conflicts
* feat: Split tags for aggregator
* feat: Update tag for `request-response`
* feat: Split tags for vrf
* fix: Modify tag value to use changed it
* fix: Revert wrong merged file
* fix: Correcting wrong service name
* feat: update image tag for por (#39)
* feat: add env vars for slack webhook (#41)
* feat: Add env vars for por (#42)
* feat: add env vars for slack webhook
* feat: Add env vars for por
* feat: update for faster query and proxy location (#44)
* fix: fix migration file (#45)
* feat:update fetcher with new proxy settings (#46)
* feat: update por image tag (#48)
* fix: rollback item not found behavior (#50)
* Add label for make PoR to measurable (#52)
* feat: Add label
* fix: Correcting typo
* feat: Add label to metadata of cronjob
* feat: update api (#54)
* feat: node 20 implementation to listener (#55)
* fix: Fix synctax error (#57)
* fix: update core Dockerimage file (#58)
* fix: update prisma version to 5.5.2 (#60)
* fix: updates for node20 and prisma update (#61)
* feat: implement redis store for aggregates (#62)
* add missing environment variable (#63)
* fix: implement env
* fix: remove double app port
* fix: add secret in api-secret yaml file (#64)
* fix: fix typo (#65)
* fix: fix api route (#67)
* feat: reference new api call for aggregates (#68)
* feat: deploy fetcher (#69)
* feat: implement util workspace for reducer (#70)
* feat: add util workspace reference in core (#71)
* Boabab fetcher timeout (#73)
* fix: Modify health check port (#72)
* feat: deploy por updates (#76)
* feat: implement queue optimization based on roundId (#81)
* rollback (#83)
* feat: implement obliterate & roundId optimization (#84)
* fix: rollback roundId (#85)
* fix: rollback (#90)
* feat: implement queue obliterate on start (#91)
* (BAOBAB) Update Bisontry services (#98)
* feat: Add bisontry-daily-report
* feat: Change docker registry to use GCP
* feat: Update bisonty services
* feat: Update secret to match the changed env
* fix: Correcting `imagePullSecrets` of bisontry
* (BAOBAB) Fix bisontry-balance-checker is not working (#100)
* feat: Update tag of balance-checker
* (Baobab) Deploy new grafana version (#110)
* feat: deploy new grafana version
* feat: update appVersion, add environment variable
* fix: rollback wrong pr (#111)
* (BAOBAB) Deploy `orakl-grafana` (#112)
* feat: Add `grafana-as-code` service
* feat: Add app to ArgoCD
* feat: Add environment variable for anonymous authentication
* Setup vault for grafana as code (#113)
* fix: add vault secret keys
* feat: add secret-store settings
* (BAOBAB) Update image tag of pods that can't started. (#116)
* feat: Update tag of CLI
* feat: Update LRW tag of RR
* feat: Update LWR tag of VRF
* feat: Update image tag of PoR
* (BAOBAB) Open new Grafana to inside of K8s (#118)
* feat: Add port configuration for Grafana deployment and create service
* fix: Fix selector syntax in service.yaml
* (BAOBAB) Add type: LoadBalancer to the service of new Grafana (#121)
* feat: Add LoadBalancer type to service.yaml
* feat: delegated fee fallback (#126)
* feat: datetime formatting with iso format (#129)
* feat: updated datetime format for worker and por (#130)
* feat: update rr worker (#131)
* (BAOBAB) Cleanup Grafana (#137)
* feat: Pull image from GAR
* test: Deploy to test pull from GAR
* feat: Enable auto sync in grafana
* test: Test ArgoCD's auto sync
* feat: Set ArgoCD's sync period to 10 second
* feat: Add `imagePullSecrets`
* fix: Fixing syntax error from `imagePullSecrets`
* fix: Correcting indent and `-`
* fix: Remove blank before `-`
* chore: Remove old Grafana
* chore: Revert `targetRevision` of new Grafana
* feat: update grafana monitor db url (#143)
* fix: update url (#146)
* (BAOBAB) goapi baobab deployment (#139)
* feat: helm chart for goapi baobab deployment
* feat: add deployment.yaml
* fix: updates based on feedbacks
* fix: update based on feedback
* fix: remove start command for goapi (#147)
* fix: update command for goapi (#148)
* fix: update docker image (#149)
* fix: update baobab goapi version (#150)
* (BAOBAB) Add labels to deployment.yaml and selector to service.yaml (#151)
* (BAOBAB) Fix goapi's sync error in ArgoCD (#152)
* fix: Add more labels
* feat: Update deployment labels to include "-goapi" suffix
* feat: Update deployment labels in deployment.yaml
* feat: Update deployment labels in deployment.yaml
* fix: update image (#153)
* fix: goapi logging (#154)
* fix: use pgxPool instead of pgxConn (#155)
* (BAOBAB) The health-check URL of api change to goapi (#156)
* (BAOBAB) Fix baobab-health-check error (#157)
* feat: print out more information (#161)
* fix: update package version (#164)
* feat: reporter warning on invalid keys (#162)
* feat: improve goapi logging (#163)
* (BAOBAB) Api implement empty row handling (#170)
* feat: implement empty row handling
* fix: update default behavior
* fix: aggregator hash fix validation (#172)
* fix: cli compatibility updates (#174)
* (baobab) Update CI Helm Lint workflow to include pull request events (#176)
* feat: Add actions workflow to linting helm charts
* feat: Update Helm setup and linting process
* feat: Update helm chart change detection logic
* fix: Fix can't detect changed charts
* feat: Update ci.helm-lint.yaml to set output variable for changed charts
* feat: Stop workflow when lint returned ERROR
* fix: Update Helm lint command to check for "failed" instead of "[ERROR]".
* fix: Fix workflow can't find HEAD~1 git commit
* feat: Make workflow be failed when Lint returned error
* feat: Store failed services to output of workflow
* refactor: Remove unnecessary code in ci.helm-lint.yaml
* feat: Add failed services output to GitHub Actions workflow
* fix: Fix linting script to display failed services
* feat: Update CI Helm Lint workflow to include pull request events
* fix: Remove unnecessary branches from CI workflow
* (BAOBAB) Clean up nodejs api (#179)
* fix: (BAOBAB) remove legacy api
* fix: remove manifest
* fix: lint fix
* feat: image version with published to new ecr
* feat: Update `health-checker` to apply svc URL changes --------- Co-authored-by: jay-bisonai <[email protected]>
* (BAOBAB) Hotfix sync error api (#180) fix: Modify name in templates to use name from values.yaml
* feat: update grafana as code (#187)
* (Baobab) Hotfix: Update listener, it's use old one (#189)
* chore: update appVersion and image tag of listener to latest versions
* (BAOBAB) Reduce fetcher logs (#190)
* fix: reduce baobab fetcher logs
* fix: update based on linter
* (Baobab) Update whole VRF, (listener and reporter) of RR (#195)
* chore: Update VRF
* chore: Update RR l and r --------- Co-authored-by: jay-bisonai <[email protected]>
* (BAOBAB) Reduce listener logs (#192)
* fix: level down listener logs
* fix: update log levels
* fix: update log level into "info"
* fix: update log levels --------- Co-authored-by: Jay(김재훈) <[email protected]>
* fix: update connection addr (#199)
* feat: update dex fetcher (#200)
* fix: includes dex feed for eth usdt (#201)
* (BAOBAB) Updates for go delegator (#183)
* feat: updates for go delegator
* fix: fix linter error
* feat: set image version
* fix: update filenames and var names
* refactor: Remove unused part of code
* feat: Add service account for get secret --------- Co-authored-by: jay-bisonai <[email protected]>
* (Baobab) Hotfix error (#202)
* fix: Revert deletion `serviceAccount` field from values.yaml to fix error
* fix: Correct typo --------- Co-authored-by: jay-bisonai <[email protected]>
* (Baobab) Hotfix godelegator (#204)
* fix: Revert deletion `serviceAccount` field from values.yaml to fix error
* fix: Correct typo
* fix: Correcting typo --------- Co-authored-by: jay-bisonai <[email protected]>
* (Baobab) Hotfix godelegator (#205)
* fix: Revert deletion `serviceAccount` field from values.yaml to fix error
* fix: Correct typo
* fix: Correcting typo
* feat: Enable SA in deployment
* fix: Remove unnecessary if statement
* fix: Correcting SA name in deployment --------- Co-authored-by: jay-bisonai <[email protected]>
* feat: remove busd from grafana (#206)
* feat: add checks if string empty (#208)
* (Baobab) Remove BUSD-USDT from bisontry-event-checker (#209)
* feat: Update tag version Co-authored-by: jay-bisonai <[email protected]>
* feat: update godelegator with trimming (#211)
* fix: update healthcheck delegator (#221)
* (Baobab) Automate adding ArgoCD app (#212)
* chore: Add name and namespace to `values.yaml`
* test: Remove parent field
* feat: Move name and namespace to child of global field
* feat: Remove `orakl-` prefix
* feat: Add `global.name` and `global.namespace` value to
every chart * feat: Separate redis to multiple directory * feat: Delete unused charts * feat: Add charts to ArgoCD app automate * feat: Add redis-fetcher * fix: Add name and namespace because redis charts not contain * feat: Delete ArgoCD app manifests, it's unused * feat: Modify targetBranch to use `gcp-baobab-prod` --------- Co-authored-by: jay-bisonai <[email protected]> * feat: deploy v0.0.1.20240314.0433.ce446c4 (#230) Co-authored-by: orakl-bot <[email protected]> * deploy baobab-godelegator-v0.0.1.20240313.0958.82d191d (#228) * feat: deploy v0.0.1.20240313.0958.82d191d * Update Chart.yaml --------- Co-authored-by: orakl-bot <[email protected]> * feat: Add bisontry-messenger (#240) Co-authored-by: jay-bisonai <[email protected]> * feat: deploy v0.0.1.20240315.0618.c907cbb (#239) Co-authored-by: orakl-bot <[email protected]> * feat: deploy v0.6.1.20240315.0618.c907cbb (#236) Co-authored-by: orakl-bot <[email protected]> * (Baobab) Deploy `bisontry-messenger` (#241) * fix: Remove env part * feat: Add bisontry-secrets * feat: Revert remove env part * fix: Add missing key `password` * feat: Add external IP address --------- Co-authored-by: jay-bisonai <[email protected]> * feat: Add cronjob set and add tls-renew job template * feat: Add renew script * feat: Add renew script to cypress helm charts * Added cluster role template for cronjobs. * aider: Replace rules section in clusterrole.yaml with values from values.yaml using nindent value of 2. # Aider chat conversation: USER: 한국어로 대답해주세요. 지금 cluster ASSISTANT: 안녕하세요, 질문이 완전하지 않은 것 같습니다. "지금 cluster"에 대해 무엇을 변경하거나 구현하길 원하시는지 구체적으로 알려주시면 도와드리겠습니다. USER: 지금 clusterrole.yaml 파일에서 21번 줄에 rules를 템플릿으로 가져오려고 합니다. nindent 값은 2로 주면 될 것 같은데, 어떻게 해야 values.yaml에서 선언된 rules 내용을 매핑되도록 할 수 있을지 모르겠습니다. ASSISTANT: ```diff --- cronjobs/templates/clusterrole.yaml +++ cronjobs/templates/clusterrole.yaml @@ ... 
@@ rules: +{{- if hasKey $job "rules" }} +{{ toYaml $job.rules | nindent 2 }} +{{- else }} +{{- end }} {{- end }} ``` 위의 변경 사항은 `cronjobs/templates/clusterrole.yaml` 파일에서 `rules` 섹션을 `values.yaml` 파일에 정의된 `rules`로 대체하기 위한 것입니다. `nindent` 값은 2로 설정되어 있습니다. * Updated clusterrole.yaml to always include rules from $job variable. * aider: Remove nindent 2 from toYaml function in clusterrole.yaml. # Aider chat conversation: USER: 만약 toYaml이 들어간 라인을 들여쓰기하면 nindent를 지워도 되나요? 아니면 이는 템플릿에서 보일 때만 적용될 뿐 실제 파일이 generate될 때 적용될 indent에는 반드시 nindent 함수를 사용해야 하나요? ASSISTANT: `toYaml` 함수를 사용할 때 들여쓰기를 직접 하면 `nindent`를 사용하지 않아도 됩니다. 그러나 Helm 템플릿에서는 `nindent` 함수를 사용하는 것이 일반적으로 권장됩니다. 이 함수는 지정된 수의 공백을 추가하여 새로운 줄에서 시작하는 문자열을 생성합니다. 이는 템플릿이 보기에만 좋게 만드는 것이 아니라, 실제로 파일이 생성될 때 적용되는 들여쓰기를 제어합니다. `nindent`는 특히 여러 줄에 걸친 YAML 블록을 생성할 때 유용합니다. 예를 들어, 여러 줄의 YAML을 생성하고 이를 특정 수준으로 들여쓰기하려면 `nindent`를 사용하는 것이 좋습니다. 이렇게 하면 생성된 YAML이 올바른 위치에 들여쓰기되어 Kubernetes 매니페스트 파일로서 올바르게 작동하게 됩니다. 따라서, `nindent` 함수는 실제 파일 생성 시에도 적용되는 들여쓰기를 제어하기 위해 필요합니다. USER: 그렇다면 들여쓰기가 분명한 위치이고, 가독성을 위해서도 필요한데, 이번 케이스에 한해 nindent 2를 제거해도 될까요? 반드시 사용해야 하는 경우는 어떤 경우인지 잘 이해했습니다. ASSISTANT: 만약 `toYaml` 함수를 사용한 라인이 이미 올바른 들여쓰기 위치에 있고, `nindent` 없이도 YAML 파일이 올바르게 생성될 것이라 확신한다면 `nindent`를 제거할 수 있습니다. `nindent`는 주로 자동으로 들여쓰기를 추가해야 할 때 사용되며, 수동으로 들여쓰기를 제어할 수 있는 상황에서는 생략할 수 있습니다. `nindent`가 반드시 필요한 경우는 다음과 같습니다: - 여러 줄에 걸친 YAML 블록을 생성하고, 이를 특정 수준으로 들여쓰기해야 할 때. - 템플릿의 다른 부분에서 생성된 변수를 사용하여 들여쓰기를 동적으로 조정해야 할 때. - 템플릿의 들여쓰기 수준이 불분명하거나 일관되지 않을 때. 이번 케이스에 한해 `nindent`를 제거하고자 한다면, 다음과 같이 `clusterrole.yaml` 파일을 수정할 수 있습니다: ```diff --- cronjobs/templates/clusterrole.yaml +++ cronjobs/templates/clusterrole.yaml @@ ... @@ rules: - {{ toYaml $job.rules | nindent 2 }} + {{ toYaml $job.rules }} {{- end }} ``` 위의 변경 사항은 `nindent` 함수를 제거하고, `toYaml` 함수만을 사용하여 `rules`를 YAML 형식으로 변환합니다. 이렇게 하면 `rules` 블록이 `clusterrole.yaml` 파일에 올바른 들여쓰기 수준으로 포함됩니다. * Add cluster role for TLS certificate renewal and export to Kubernetes secret in cronjob. 
* aider: Added a line to export the renewed TLS certificate to a Kubernetes secret. # Aider chat conversation: USER: values.yaml에 clusterRole을 도와주신 덕분에 잘 설정했습니다. args를 잘 보시면 이미 존재하는 디스크에 접근하여 존재하는 파일로 TLS인증서를 갱신하는데요. 갱신된 인증서를 이용하여 kubernetes secret으로 내보내는 라인을 추가해주실 수 있나요? 인증서 파일은 /etc/letsencrpyt/live/orakl.network 경로에 cert.pem 파일과 keyfile.pem으로 저장됩니다. ASSISTANT: To add the necessary lines to export the renewed TLS certificate to a Kubernetes secret, I will modify the `args` section of the `orakl-tls-renew` job in the `values.yaml` file. Here is the diff for the changes: ```diff --- cronjobs/values.yaml +++ cronjobs/values.yaml @@ ... @@ echo "Step 3: Export to Kubernetes secret" - sleep infinity + kubectl create secret tls orakl-network-tls --cert=/etc/letsencrypt/live/orakl.network/cert.pem --key=/etc/letsencrypt/live/orakl.network/privkey.pem --dry-run=client -o yaml | kubectl apply -f - ``` This change will create a new Kubernetes secret named `orakl-network-tls` with the certificate and private key files from the specified directory. The `--dry-run=client -o yaml | kubectl apply -f -` part of the command ensures that the secret is created in a declarative way, which is a best practice for managing Kubernetes resources. * Fix indentation in clusterrole.yaml template. * aider: Update clusterrole.yaml to dynamically reference namespace from values.yaml and fallback to Helm release namespace if not specified. # Aider chat conversation: USER: 한국어로 답해주세요. 지금까지 말씀해주신 대로 작업해서 거의 다 왔는데, values.yaml 파일에 namespace를 잘 기재했음에도 clusterrole에서 $job.namespace로 내용을 읽어오지 못 합니다. 오류 메시지는 다음과 같으며 무엇이 문제일까요? Error: UPGRADE FAILED: failed to create resource: ClusterRoleBinding.rbac.authorization.k8s.io "orakl-cronjobs-orakl-db-backup" is invalid: subjects[0].namespace: Required value ASSISTANT: 해당 문제는 `values.yaml` 파일에 정의된 `namespace` 값을 `cronjobs/templates/clusterrole.yaml` 파일에서 제대로 참조하지 못해서 발생하는 것으로 보입니다. `ClusterRoleBinding` 정의에서 `namespace` 값을 동적으로 가져오도록 템플릿을 수정해야 할 것 같습니다. 
`values.yaml` 파일에 `namespace`가 각 job에 명시적으로 정의되어 있는지 확인하고, `clusterrole.yaml` 템플릿 파일에서 이 값을 올바르게 참조하도록 수정하겠습니다. 다음은 `cronjobs/templates/clusterrole.yaml` 파일에 대한 수정 사항입니다: ```diff --- cronjobs/templates/clusterrole.yaml +++ cronjobs/templates/clusterrole.yaml @@ ... @@ subjects: - kind: ServiceAccount {{- if and (hasKey $job "serviceAccount") (hasKey $job "serviceAccount.name") }} name: {{ $job.serviceAccount.name }} {{- else }} name: {{ $.Release.Name}}-{{ $jobname }} {{- end }} - namespace: {{ $job.namespace }} + namespace: {{ default $.Release.Namespace $job.namespace }} roleRef: kind: ClusterRole {{- if and (hasKey $job "clusterRole") (hasKey $job "clusterRole.name") }} name: {{ $job.clusterRole.name }} {{- else }} name: {{ $.Release.Name }}-{{ $jobname }} {{- end }} apiGroup: rbac.authorization.k8s.io ``` 이 변경 사항은 `namespace` 값이 `values.yaml` 파일의 각 job 섹션에 없을 경우 기본적으로 Helm 릴리스의 네임스페이스를 사용하도록 설정합니다. 이렇게 하면 `ClusterRoleBinding` 생성 시 필요한 `namespace` 값을 항상 제공할 수 있습니다. * Added namespace field and formatted kubectl command in cronjobs/values.yaml. * aider: Added installation of kubectl to resolve the failure due to missing kubectl command in cronjobs/values.yaml. # Aider chat conversation: USER: kubectl 명령어가 없다고 실패했는데, 이를 해결해주세요. 한국어로 답해주세요. ASSISTANT: kubectl 명령어가 없어서 실패하는 문제를 해결하기 위해, 해당 명령어를 사용하는 컨테이너에 kubectl을 설치하는 단계를 추가하겠습니다. 이를 위해 `cronjobs/values.yaml` 파일에 kubectl 설치 명령어를 추가하는 패치를 만들겠습니다. ```diff --- cronjobs/values.yaml +++ cronjobs/values.yaml @@ ... 
@@ echo "Step 1: Install dependencies" apt -y update && apt -y install python3 python3-pip python3-venv python3 -m venv /certbot && /certbot/bin/pip3 install -U certbot-dns-godaddy + apt-get install -y curl + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" + install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl echo "Step 2: Renew TLS certificate" /certbot/bin/certbot renew \ ``` 위의 패치는 `orakl-tls-renew` 작업에 대한 `cronjobs/values.yaml` 파일의 `args` 섹션에 kubectl을 설치하는 명령어를 추가합니다. 이렇게 하면 해당 작업이 실행될 때 kubectl 명령어를 사용할 수 있게 됩니다. * chore: Add aider to gitignore * chore: Rename `aggregator-reporter.sh` * aider: ClusterRole이 항상 생성되도록 수정되었습니다. # Aider chat conversation: USER: 한국어로 답해주세요. helm install로 설치했고, template에서 cronjob이나 serviceaccount는 잘 생성되는데, clusterrole만 생성되지 않습니다. 오류가 나오지 않는데 디버깅을 어떻게 해야할까요? ASSISTANT: ClusterRole이 생성되지 않는 문제를 해결하기 위해 Helm 차트의 템플릿을 확인해야 합니다. ClusterRole이 생성되지 않는 경우, Helm 템플릿에 문제가 있거나, Helm 차트의 values 파일에 올바른 설정이 되어 있지 않을 수 있습니다. 먼저, `cronjobs/templates/clusterrole.yaml` 파일을 확인하여 ClusterRole이 조건에 따라 생성되도록 되어 있는지 확인해야 합니다. 그리고 `cronjobs/values.yaml` 파일에서 ClusterRole에 대한 설정이 올바르게 정의되어 있는지 확인해야 합니다. ClusterRole이 생성되지 않는 문제를 디버깅하기 위해 다음 단계를 수행할 수 있습니다: 1. Helm 차트의 템플릿 파일에서 ClusterRole 정의 부분이 조건문에 의해 올바르게 생성되도록 되어 있는지 확인합니다. 2. Helm의 `--dry-run` 옵션을 사용하여 실제로 클러스터에 적용되기 전에 생성될 리소스를 미리 볼 수 있습니다. 3. Helm의 `--debug` 옵션을 사용하여 설치 과정에서 발생하는 내용을 자세히 볼 수 있습니다. 4. Kubernetes 클러스터의 RBAC 설정이 ClusterRole을 생성할 수 있는 권한을 가지고 있는지 확인합니다. 현재 제공된 `cronjobs/templates/clusterrole.yaml` 파일을 기반으로 ClusterRole이 생성되지 않는 문제를 해결하기 위한 수정 사항을 제안하겠습니다. ClusterRole 생성 조건을 확인하고, 필요한 경우 수정하겠습니다. ```diff --- cronjobs/templates/clusterrole.yaml +++ cronjobs/templates/clusterrole.yaml @@ ... 
@@ {{- range $jobname, $job := .Values.jobs }} {{- if and (hasKey $job "clusterRole") (hasKey $job "clusterRole.name") }} --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: {{- if and (hasKey $job "clusterRole") (hasKey $job "clusterRole.name") }} name: {{ $job.clusterRole.name }} {{- else }} - name: {{ $.Release.Name }}-{{ $jobname }} + # 주석 처리된 부분을 제거하거나 조건을 수정하여 ClusterRole이 항상 생성되도록 할 수 있습니다. {{- end }} labels: {{- include "cronjobs.labels" $ | nindent 4 }} cron: {{ $jobname }} {{- if and (hasKey $job "clusterRole") (hasKey $job "clusterRole.annotations") }} {{- with $job.clusterRole.annotations }} annotations: {{- toYaml . | nindent 4 }} {{- end }} {{- end }} rules: {{ toYaml $job.rules }} {{- end }} ``` 위의 diff 출력은 `cronjobs/templates/clusterrole.yaml` 파일에서 ClusterRole이 생성되지 않는 문제를 해결하기 위한 예시 수정 사항입니다. 실제 문제 해결을 위해서는 Helm 차트의 나머지 부분과 함께 전체적인 검토가 필요합니다. * Remove unnecessary commented out code in clusterrole.yaml. * aider: Fix condition to create ClusterRole and ClusterRoleBinding only for jobs with clusterRole property. # Aider chat conversation: USER: 한국어로 답해주세요. job이 clusterRole이나 clusterRole.name 속성을 가지고 있는 경우에만 생성하도록 했는데, dry-run으로 결과물을 보면 clusterRole을 가지지 않은 backdb-job에 대해서도 롤이 생성됩니다. 원인이 무엇이고, 어떻게 해결하나요? ASSISTANT: 문제의 원인은 `cronjobs/templates/clusterrole.yaml` 파일에서 `ClusterRole`과 `ClusterRoleBinding` 리소스를 생성하는 조건문이 잘못되어 있기 때문입니다. 현재 조건문은 `clusterRole`과 `clusterRole.name` 속성이 있는지만 확인하고 있으며, 이 속성들이 없는 job에 대해서도 리소스를 생성하고 있습니다. 해결 방법은 `range` 루프 안에서 `clusterRole` 속성의 존재 여부를 확인하는 조건문을 추가하여, 해당 속성이 없는 경우 리소스 생성을 건너뛰도록 수정하는 것입니다. 다음은 수정된 `cronjobs/templates/clusterrole.yaml` 파일의 diff 결과입니다: ```diff --- cronjobs/templates/clusterrole.yaml +++ cronjobs/templates/clusterrole.yaml @@ ... 
@@ {{- range $jobname, $job := .Values.jobs }} -{{- if and (hasKey $job "clusterRole") (hasKey $job "clusterRole.name") }} +{{- if hasKey $job "clusterRole" }} --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: {{ $job.clusterRole.name }} labels: {{- include "cronjobs.labels" $ | nindent 4 }} cron: {{ $jobname }} {{- with $job.clusterRole.annotations }} annotations: {{- toYaml . | nindent 4 }} {{- end }} rules: {{ toYaml $job.clusterRole.rules | nindent 2 }} {{- end }} --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: {{ $job.clusterRole.name | default (printf "%s-%s" $.Release.Name $jobname) }} subjects: - kind: ServiceAccount name: {{ $job.serviceAccount.name | default (printf "%s-%s" $.Release.Name $jobname) }} namespace: {{ default $.Release.Namespace $job.namespace }} roleRef: kind: ClusterRole name: {{ $job.clusterRole.name | default (printf "%s-%s" $.Release.Name $jobname) }} apiGroup: rbac.authorization.k8s.io -{{- end }} +{{- end }} {{- end }} ``` 이 변경사항은 `clusterRole` 속성이 있는 job에 대해서만 `ClusterRole`과 `ClusterRoleBinding` 리소스를 생성하도록 수정합니다. * Add namespace field to clusterrole.yaml template. * aider: Set default names for ClusterRole and ClusterRoleBinding in `cronjobs/templates/clusterrole.yaml` to prevent errors with empty names. # Aider chat conversation: USER: 한국어로 답해주세요. 말씀해주신 대로 수정 후에는 다음과 같은 오류가 발생합니다. Error: INSTALLATION FAILED: Unable to continue with install: could not get information about the resource ClusterRole "" in namespace "": resource name may not be empty ASSISTANT: 해당 오류는 Kubernetes의 ClusterRole 리소스 이름이 비어 있기 때문에 발생합니다. 이 문제를 해결하기 위해 `cronjobs/templates/clusterrole.yaml` 파일에서 ClusterRole 및 ClusterRoleBinding의 이름을 설정하는 로직을 수정해야 합니다. 이름이 제대로 설정되지 않은 경우 기본값을 사용하도록 변경하겠습니다. 다음은 수정된 `cronjobs/templates/clusterrole.yaml` 파일의 diff입니다: ```diff --- cronjobs/templates/clusterrole.yaml +++ cronjobs/templates/clusterrole.yaml @@ ... 
@@ {{- range $jobname, $job := .Values.jobs }} {{- if hasKey $job "clusterRole" }} --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: {{- if and (hasKey $job "clusterRole") (hasKey $job "clusterRole.name") }} - name: {{ $job.clusterRole.name }} + name: {{ default (print $.Release.Name "-" $jobname) $job.clusterRole.name }} {{- end }} namespace: {{ default $.Release.Namespace $job.namespace }} labels: {{- include "cronjobs.labels" $ | nindent 4 }} cron: {{ $jobname }} {{- if and (hasKey $job "clusterRole") (hasKey $job "clusterRole.annotations") }} {{- with $job.clusterRole.annotations }} annotations: {{- toYaml . | nindent 4 }} {{- end }} {{- end }} rules: {{ toYaml $job.rules }} {{- end }} --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: {{- if and (hasKey $job "clusterRole") (hasKey $job "clusterRole.name") }} - name: {{ $job.clusterRole.name }} + name: {{ default (print $.Release.Name "-" $jobname "-binding") $job.clusterRole.name }} {{- else }} name: {{ $.Release.Name }}-{{ $jobname }} {{- end }} subjects: - kind: ServiceAccount {{- if and (hasKey $job "serviceAccount") (hasKey $job "serviceAccount.name") }} name: {{ $job.serviceAccount.name }} {{- else }} name: {{ $.Release.Name}}-{{ $jobname }} {{- end }} namespace: {{ default $.Release.Namespace $job.namespace }} roleRef: kind: ClusterRole {{- if and (hasKey $job "clusterRole") (hasKey $job "clusterRole.name") }} - name: {{ $job.clusterRole.name }} + name: {{ default (print $.Release.Name "-" $jobname) $job.clusterRole.name }} {{- else }} name: {{ $.Release.Name }}-{{ $jobname }} {{- end }} apiGroup: rbac.authorization.k8s.io {{- end }} ``` 이 변경 사항은 ClusterRole 및 ClusterRoleBinding의 이름이 비어 있지 않도록 기본값을 설정합니다. ClusterRoleBinding의 이름에 "-binding" 접미사를 추가하여 이름 충돌을 방지합니다. * aider: Fix default function usage in clusterrole.yaml to prevent empty ClusterRole name issue. # Aider chat conversation: USER: 한국어로 답해주세요. 말씀해주신 수정 후에도 여전히 같은 문제가 발생합니다. 
ASSISTANT: 안녕하세요, 문제가 발생하는 부분에 대해 구체적으로 설명해주시면 수정을 도와드리겠습니다. 어떤 문제가 발생하는지, 어떤 파일을 수정해야 하는지, 그리고 어떤 변경을 원하시는지 알려주세요. USER: Error: INSTALLATION FAILED: Unable to continue with install: could not get information about the resource ClusterRole "" in namespace "": resource name may not be empty 해당 오류요 ASSISTANT: 해당 오류는 Kubernetes 클러스터에서 Helm 차트를 설치할 때 발생하는 것으로 보입니다. 오류 메시지에 따르면, `ClusterRole`의 이름이 비어있어서 발생하는 문제로 보입니다. `cronjobs/templates/clusterrole.yaml` 파일에서 `ClusterRole`과 `ClusterRoleBinding`의 이름을 정의하는 부분을 확인하고 수정해야 할 것 같습니다. `cronjobs/templates/clusterrole.yaml` 파일에서 `ClusterRole`과 `ClusterRoleBinding`의 이름을 설정하는 로직에 문제가 있을 수 있으므로, 해당 부분을 수정하는 diff를 생성하겠습니다. ```diff --- cronjobs/templates/clusterrole.yaml +++ cronjobs/templates/clusterrole.yaml @@ ... @@ {{- if and (hasKey $job "clusterRole") (hasKey $job "clusterRole.name") }} - name: {{ default (print $.Release.Name "-" $jobname) $job.clusterRole.name }} + name: {{ $job.clusterRole.name | default (print $.Release.Name "-" $jobname) }} {{- end }} namespace: {{ default $.Release.Namespace $job.namespace }} @@ ... @@ {{- if and (hasKey $job "clusterRole") (hasKey $job "clusterRole.name") }} - name: {{ default (print $.Release.Name "-" $jobname "-binding") $job.clusterRole.name }} + name: {{ $job.clusterRole.name | default (print $.Release.Name "-" $jobname "-binding") }} {{- else }} @@ ... @@ {{- if and (hasKey $job "clusterRole") (hasKey $job "clusterRole.name") }} - name: {{ default (print $.Release.Name "-" $jobname) $job.clusterRole.name }} + name: {{ $job.clusterRole.name | default (print $.Release.Name "-" $jobname) }} {{- else }} ``` 위의 diff는 `default` 함수의 사용 방식을 수정하여 `ClusterRole`과 `ClusterRoleBinding`의 이름이 비어있지 않도록 합니다. Helm 템플릿에서 `default` 함수는 첫 번째 인자가 비어있거나 정의되지 않았을 때 두 번째 인자를 사용하도록 합니다. 이 변경을 통해 `ClusterRole`의 이름이 비어있는 문제를 해결할 수 있습니다. 이 수정이 문제를 해결하는 데 도움이 되었기를 바랍니다. 추가적인 수정이 필요하거나 다른 문제가 발생하면 알려주세요. 
* feat: Add permission about secrets * Updated clusterrole.yaml template to include roleRef for ClusterRoleBinding. * Refactored clusterRole.yaml template to simplify and improve readability. * aider: Fix indentation issue in rendering `clusterRole` rules in YAML file. # Aider chat conversation: USER: 한국어로 답해주세요. 저는 values에 적어둔 rules처럼 clusterRole에 삽입하고 싶은데, 현 상태에서 toYaml로 들어갈 때는 제가 의도한 것과는 다르게 삽입되는 것 같습니다. 아래는 렌더링된 결과입니다. rules: - apiGroups: - apps resources: - deployments verbs: - get - list - apiGroups: - "" resources: - pods - secrets verbs: - get - list - exec - apiGroups: - "" resources: - pods/exec verbs: - create ASSISTANT: 문제를 해결하기 위해 `toYaml` 함수를 사용할 때 발생하는 들여쓰기 문제를 수정하겠습니다. `toYaml` 함수가 생성하는 YAML 형식이 의도한 대로 들여쓰기 되지 않는 것을 고치기 위해, `nindent` 함수의 들여쓰기 수준을 조정하겠습니다. 다음은 `cronjobs/templates/clusterrole.yaml` 파일에 대한 수정 사항입니다: ```diff --- cronjobs/templates/clusterrole.yaml +++ cronjobs/templates/clusterrole.yaml @@ ... @@ {{- with $job.clusterRole.rules }} rules: - {{- toYaml . | nindent 4}} + {{- toYaml . | nindent 2}} {{- end }} ``` 이 변경을 통해 `rules` 항목이 올바른 들여쓰기 수준으로 YAML 파일에 삽입될 것입니다. * aider: Preserved YAML array values in `clusterrole.yaml` template using `nindent` and `|-` block chomping. # Aider chat conversation: USER: toYaml을 적용했을 때 배열로 들어간 값들이 모두 풀려서 원래 문법과 다르게 적용되는 것 같습니다. values.yaml에 적혀있는 그대로 삽입하려면 어떻게 해야 하나요? ASSISTANT: To ensure that the YAML array values are preserved exactly as they are in the `values.yaml` file when using `toYaml`, you can use the `nindent` function along with the `>-` block chomping indicator in Helm templates. This will maintain the indentation and format of the array as it is in the `values.yaml` file. Here's the diff for the `cronjobs/templates/clusterrole.yaml` file to apply this change: ```diff --- cronjobs/templates/clusterrole.yaml +++ cronjobs/templates/clusterrole.yaml @@ ... @@ {{- with $job.clusterRole.rules }} rules: - {{- toYaml . | nindent 2}} + - |- + {{- toYaml . 
| nindent 6 }} {{- end }} ``` This change will ensure that the `rules` array from the `values.yaml` file is inserted into the template with the correct indentation and without expanding the array elements. The `|-` block chomping indicator keeps the newline at the end of the string, and `nindent 6` ensures the correct indentation level. * Update clusterrole.yaml to simplify rules section. * aider: Modify `rules` to a multi-line string and fix `labels` section to prevent errors in `cronjobs/templates/clusterrole.yaml`. # Aider chat conversation: USER: 한국어로 답해주세요. values에 있는 rules 내용을 그대로 넣고 싶어서, 멀티라인 텍스트로 변경했더니 전혀 관련 없는 위치에서 ㅇ오류가 발생합니다. 오류 위치는 clusterrole의 labels 쪽에서 발생하며 메시지는 line 13: did not find expected key입니다. ASSISTANT: 문제를 해결하기 위해 `cronjobs/templates/clusterrole.yaml` 파일의 `rules` 섹션을 멀티라인 문자열로 변경하겠습니다. 이렇게 하면 YAML 파서가 `rules`의 내용을 올바르게 처리할 수 있을 것입니다. 또한, `labels` 섹션에서 발생하는 오류를 수정하기 위해 해당 부분을 살펴보고 필요한 수정을 적용하겠습니다. 다음은 수정된 내용을 반영한 diff입니다: ```diff --- cronjobs/templates/clusterrole.yaml +++ cronjobs/templates/clusterrole.yaml @@ ... @@ labels: {{- include "cronjobs.labels" $ | nindent 4 }} cron: {{ $jobname }} {{- if and (hasKey $job "clusterRole") (hasKey $job "clusterRole.annotations") }} annotations: {{- toYaml . | nindent 4 }} {{- end }} -rules: | - {{ $job.clusterRole.rules }} +rules: + {{- toYaml $job.clusterRole.rules | nindent 2 }} ``` 위의 변경 사항은 `rules`를 멀티라인 문자열로 처리하도록 수정하고, `labels` 섹션에서 발생할 수 있는 오류를 방지하기 위해 `nindent`를 사용하여 올바른 들여쓰기를 적용합니다. YAML에서 들여쓰기는 매우 중요하므로, 이 부분을 주의 깊게 확인해야 합니다. * fix: Remove indent in template file * feat: Add permission to patch secret for updating exist certificate --------- Co-authored-by: jay-bisonai <[email protected]> Co-authored-by: jo <[email protected]> Co-authored-by: jayce <[email protected]> Co-authored-by: Martin Kersner <[email protected]> Co-authored-by: Nick <[email protected]> Co-authored-by: Bayram Guvanjov <[email protected]> Co-authored-by: orakl-bot <[email protected]>
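The clusterrole.yaml back-and-forth above converges on a few standard Helm idioms: a `hasKey` guard so resources render only for jobs that declare a `clusterRole`, `toYaml ... | nindent` to emit the rules block at the right indentation, and `default` fallbacks so names and namespaces are never empty. A minimal sketch of how those pieces fit together — the `jobs.<name>.clusterRole` values layout follows the discussion but is an assumption, not the actual Orakl chart, and it assumes each such job also defines `clusterRole.rules` and `serviceAccount.name` in values.yaml:

```yaml
{{- range $jobname, $job := .Values.jobs }}
{{- if hasKey $job "clusterRole" }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # Fall back to "<release>-<jobname>" when no explicit name is given
  name: {{ $job.clusterRole.name | default (printf "%s-%s" $.Release.Name $jobname) }}
rules:
  {{- toYaml $job.clusterRole.rules | nindent 2 }}
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ $job.clusterRole.name | default (printf "%s-%s-binding" $.Release.Name $jobname) }}
subjects:
  - kind: ServiceAccount
    name: {{ $job.serviceAccount.name | default (printf "%s-%s" $.Release.Name $jobname) }}
    # Required: ClusterRoleBinding subjects must carry a namespace,
    # so fall back to the release namespace when the job omits one
    namespace: {{ $job.namespace | default $.Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: {{ $job.clusterRole.name | default (printf "%s-%s" $.Release.Name $jobname) }}
  apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
```

Rendering with `helm template --dry-run`, as suggested in the chat, is the quickest way to confirm the guard skips jobs without `clusterRole` and that no name or namespace renders empty.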
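The tls-renew commits describe a three-step cronjob script: install dependencies (including kubectl), renew the certificate with certbot, and export it to a Kubernetes Secret. A hedged reconstruction of the job's `args` in values.yaml, assembled from the diffs quoted above — the job name, schedule, and base image are illustrative assumptions:

```yaml
jobs:
  orakl-tls-renew:            # illustrative job name
    schedule: "0 3 * * 0"     # assumed weekly schedule, not from the source
    image: debian:bookworm-slim
    command: ["/bin/bash", "-c"]
    args:
      - |
        echo "Step 1: Install dependencies"
        apt -y update && apt -y install python3 python3-pip python3-venv curl
        python3 -m venv /certbot && /certbot/bin/pip3 install -U certbot-dns-godaddy
        curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
        install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
        echo "Step 2: Renew TLS certificate"
        /certbot/bin/certbot renew
        echo "Step 3: Export to Kubernetes secret"
        # --dry-run=client -o yaml | kubectl apply -f - makes the export
        # idempotent: it updates the secret if it already exists
        kubectl create secret tls orakl-network-tls \
          --cert=/etc/letsencrypt/live/orakl.network/cert.pem \
          --key=/etc/letsencrypt/live/orakl.network/privkey.pem \
          --dry-run=client -o yaml | kubectl apply -f -
```

The final commit in the list grants the job's ClusterRole `patch` on secrets, which this apply-based export needs when the secret already exists.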