diff --git a/docs/LICENSE b/docs/LICENSE deleted file mode 100644 index 490da1fe..00000000 --- a/docs/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2022 Shu Ding - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. diff --git a/docs/justfile b/docs/justfile new file mode 100644 index 00000000..901b811e --- /dev/null +++ b/docs/justfile @@ -0,0 +1,17 @@ +default: + @just --list + +install: + pnpm install + +build: install + pnpm build + +clean: + rm -rf node_modules/ .next/ pnpm-lock.yaml + +run: + npx next start + +dev: install + pnpm dev diff --git a/docs/pages/For_Contributors/0_for_devs_index.mdx b/docs/pages/For_Contributors/0_for_devs_index.mdx index a3365c19..1984c3ff 100644 --- a/docs/pages/For_Contributors/0_for_devs_index.mdx +++ b/docs/pages/For_Contributors/0_for_devs_index.mdx @@ -1,8 +1,15 @@ # Overview of the Contributor Documentation -This section of the documentation contains the following. First a brief overview of the architecture is given. 
Then, in sections two and three the frontend and backend of the portal are described. Fourth, the gateway (or proxy) is documented. Fifth, the gateway kit is described. Thereafter the schema is described, documenting the interation between the backend and the proxy. Seventh, the documentation describes the smart contract payment infrastructure. Lastly, this documentation covers the POKT integration. +This section of the documentation contains the following. +First, a brief overview of the architecture is given. +Then, in sections two and three the frontend and backend of the portal are described. +Fourth, the gateway (or proxy) is documented. +Fifth, the gateway server (previously gateway kit) is described. +Thereafter, the schema is described, documenting the interaction between the backend and the proxy. +Seventh, the documentation describes the smart contract payment infrastructure. +Lastly, this documentation covers the POKT integration. -For documentation on PORTERS PORTALS please refer to the [PORTAL section](/gateway-demo/docs/pages/PORTALS/0-PORTALS-index.mdx) of this documentation. +For documentation on PORTERS PORTALS please refer to the [PORTAL section](/PORTALS/0-PORTALS-index.mdx) of this documentation. ## Table of Contents @@ -10,7 +17,7 @@ For documentation on PORTERS PORTALS please refer to the [PORTAL section](/gatew 2. Frontend 3. Backend 4. Gateway (Proxy) -5. Gateway Kit +5. Gateway Server 6. Schema 7. Smart Contract Payment Infrastructure 8. POKT Integration diff --git a/docs/pages/For_Contributors/10_Redis.mdx b/docs/pages/For_Contributors/10_Redis.mdx deleted file mode 100644 index 56324c71..00000000 --- a/docs/pages/For_Contributors/10_Redis.mdx +++ /dev/null @@ -1,10 +0,0 @@ -# Redis Data Store - -The Redis data store is located between the gateway and the postgres server. Thus, it is a local cache for the information described in the schema. 
- -It is important to note that there is one postgress instance, but one Redis instance per region, so that there is a local copy that can be quickly read or written. - -Due to this setup individual Redis instances may differ, but are synced eventually. This is achieved because the only write of the Redis data store to the postgres instance is `append only`. Hence, no conflicts are possible. - -For example, given two users of the same app are in two regions, they both will be accessing different Redis instances. However, their usage will be reflected in the postgres instance eventually. - diff --git a/docs/pages/For_Contributors/1_Architecture_Overview.mdx b/docs/pages/For_Contributors/1_Architecture_Overview.mdx index f3c40837..ee2ba6ca 100644 --- a/docs/pages/For_Contributors/1_Architecture_Overview.mdx +++ b/docs/pages/For_Contributors/1_Architecture_Overview.mdx @@ -1,21 +1,20 @@ # Overview of RPC Gateway Architecture -A few changes to the initial architectures are needed. +Below is an architecture diagram for reference. The architecture of this system is broken into four main parts. +- The gateway portal allows users to manage their account and create apps to use on the gateway proxy +- The gateway proxy handles RPC requests and charges the account for access to the POKT network +- The on-chain portion manages the financial flow to buy credits and pay the POKT network +- The POKT network is interfaced through the gateway server product -- some services are external that were initially conceptualised as internal -- changes to the payment infrastructure -- the data flow should be described +## Database Design -Generally the data flows from the frontend to the backend and then to the schema. +The portal manages creation and updates to accounts in the postgres database. The gateway proxy uses a redis database primarily to cache data from this postgres database, but writes from the proxy are also cached and flushed to the postgres database in chunks. 
-The frontend and the backend manage access to accounts for setting up the database. A redis database primarily caches postgres calls, but also usage is being cached and is being updated in chunks to the postgress database. +- See [schema](6_Schema) for further details on the postgres canonical data store -- future enhancement may include a flow chart -- postgress is the canonical data store for all bits -- frontend calls the backend, which in turn alters the postgres for configuring the proxy - -- smart contracts are burning tokens -- event watcher for looking for these events and updating the account +## PORTR Token +- Usage credits can be purchased and applied to increase the account balance via the ERC-20 smart contract +- An event watcher listens for the resulting burn events and credits the account in the database ## Architecture Diagram diff --git a/docs/pages/For_Contributors/4_Gateway.mdx b/docs/pages/For_Contributors/4_Gateway.mdx index ae25cd10..5948e8a4 100644 --- a/docs/pages/For_Contributors/4_Gateway.mdx +++ b/docs/pages/For_Contributors/4_Gateway.mdx @@ -4,8 +4,8 @@ This document provides an overview of the gateway architecture and functionality ## Overview -The gateway primarily relies on the `[net/http](https://pkg.go.dev/net/http)` library and its `ReverseProxy` functionality. -Requests are handled by `[gorilla/mux](https://github.com/gorilla/mux)`, mapping paths to the reverse proxy. +The gateway primarily relies on the [net/http](https://pkg.go.dev/net/http) library and its `ReverseProxy` functionality. +Requests are handled by [gorilla/mux](https://github.com/gorilla/mux), mapping paths to the reverse proxy. ## Reverse Proxy and Middleware @@ -15,24 +15,102 @@ Contributors can start by exploring plugins located in the **plugins** package. ## Plugins Package -The `main.go` file configures the server and allows adding plugins. Contributors can add new logic by providing new plugins. 
+The `main.go` file configures the server and allows adding plugins to the registry. +Contributors can add new logic by providing new plugins. These plugins are essential for customizing gateway behavior. +The plugin interface is defined as: + +```go +type Plugin interface { + Name() string + Key() string + Load() +} +``` + +`Name()` is just the user-friendly name by which to describe the plugin. +`Key()` is used to avoid collisions between plugins; any cache data specific to a plugin should be prefixed with `Key()`. +`Load()` is called on application start to perform any steps needed to initialize the plugin. + +There are two additional sub-interfaces which are called for each request. +`PreHandler` introduces a function `HandleRequest(*http.Request) error` which can be implemented to be called before requests are forwarded to the POKT network. +This should be used to reject requests and make any precondition checks. +`PostHandler` introduces a function `HandleResponse(*http.Response) error` which may be implemented to be called after the response is returned from the gateway server. +It should be used to modify the response or clean up any errant responses. +In either case, an `error` may be returned which will reflect in the HTTP response. + +### Lifecycle Management + +This gateway introduces the concept of a lifecycle, where each stage must be fulfilled by a plugin for the request to be considered valid. +Plugins are not required to fulfill any lifecycle stages, but may fulfill several. +This concept guides the development and integration of plugins into the gateway architecture. + +Currently there are four stages: +- `Auth` +- `AccountLookup` +- `BalanceCheck` +- `RateLimit` + +## Proxy Package + +The primary package in this program is the `proxy` package. +This defines the above `Plugin` type and calls it as part of the request lifecycle. 
+The proxy can be started as it is in the `gateway.go` file by calling `Start()` and requests will be proxied to the gateway server defined by the environment variable `PROXY_TO` according to the established usage pattern. + ## Database Package The `db` package contains logic for interactions between Redis and PostgreSQL, handling data storage and retrieval. +Redis acts as a pass-through cache with a few additional features. +The goal is to keep all database-specific interaction in this package (`go-redis` and `pq`). ## Utils Package The `utils` package consists of small utility packages, providing helper functions and tools for various tasks. +- `gatewaykit.go` defines `Target` which builds the URL to proxy to +- `rate.go` defines the encoding of rate limits using an `ISO 8601`-inspired format +- `sha256.go` is a wrapper on the `crypto/sha256` library for easily producing hashes ## Commons Package The `commons` package includes Prometheus metrics and configuration files essential for gateway operations. -## Lifecycle Management - -The concept of the lifecycle, where each stage must be fulfilled by a plugin for the request to be considered valid. -This concept guides the development and integration of plugins into the gateway architecture. - -For more detailed information on each aspect, refer to the corresponding sections above. +### Config + +Environment variables are used to configure the gateway. +In the future, configuration may be moved to the database or a config file. 
+Current environment variables are: +- **SHUTDOWN_DELAY**: How long to wait for processes to finish on graceful shutdown (default: 5sec) +- **JOB_BUFFER_SIZE**: How many worker tasks to buffer before blocking (default: 50) +- **NUM_WORKERS**: How many goroutines to run to process the job buffer (default: 10) +- **PROXY_TO**: Internal URL to gateway server +- **HOST**: Domain this is hosted on, used to extract chain name from subdomain +- **PORT**: Network port for server to listen on (default: 9000) +- **DATABASE_URL**: Postgres connection URL +- **REDIS_URL**: Redis connection URL, alternative to decomposed vars +- **REDIS_ADDR**: Host of redis server +- **REDIS_USER**: Redis username +- **REDIS_PASSWORD**: Redis password +- **INSTRUMENT_ENABLED**: Debugging feature flag to add instrumentation +- **LOG_LEVEL**: How verbose should logs be (default: INFO) + +### Worker Pool + +To avoid random goroutines being managed throughout the code, there is a task queue to coordinate asynchronous jobs. +Implement the `Runnable` interface and add to the queue for a job runner to pick it up. +Set the environment variables described above to increase or decrease the queue size and number of workers based on needs. + +### Healthcheck + +There is a healthcheck endpoint exposed at `/health` which reports on the status of the gateway proxy. +Any external service can be monitored by adding its health to this service. +In addition, any internal metrics that should cause infrastructure to respond in some way can also be added. + +### Prometheus + +Prometheus is used for tracking metrics. It is exposed with a `/metrics` endpoint. 
+In addition to the common metrics, we add: +- **EndpointUsage**: A counter for usage on each endpoint, used for reporting +- **JobGauge**: Shows the current size of the job queue, used for monitoring +- **LatencyHistogram**: Instrumentation to show how much latency is added by the proxy process +- **RateLimitGauge**: Set when a rate limit has been hit; resets when resolved diff --git a/docs/pages/For_Contributors/5_Gateway-Kit.mdx b/docs/pages/For_Contributors/5_Gateway-Kit.mdx deleted file mode 100644 index 2e988572..00000000 --- a/docs/pages/For_Contributors/5_Gateway-Kit.mdx +++ /dev/null @@ -1,15 +0,0 @@ -# Gateway Kit - -## Introduction - -This page documents our usage of the gateway kit. More thorough documentation is available at the [Gateway Kit documentation](). - -## Description of our implementation - -We are proxing to the Gateway Kit. The Gateway Kit, in turn coordinates traffic with the POKT Network, i.e. selecting nodes to route traffic to based on the alocated [app stakes](). The private keys from the app stakes are imported to the Gateway Kit, which distributes POKT-specific traffic among the staked nodes according to the app stakes, latency, and available nodes. - -Furthermore, the gateway kit manages the quality of services and the pool of nodes on the POKT network. In short, it tracks connections between app stakes and nodes and distributes traffic among them. - -## Our usage - -We included a Docker image of the gateway kit and included it in our repository. Thus, the gateway kit is not built from source. diff --git a/docs/pages/For_Contributors/5_Gateway-Server.mdx b/docs/pages/For_Contributors/5_Gateway-Server.mdx new file mode 100644 index 00000000..c12de83d --- /dev/null +++ b/docs/pages/For_Contributors/5_Gateway-Server.mdx @@ -0,0 +1,18 @@ +# Gateway Server + +## Introduction + +This page documents our usage of the gateway server (previously gateway kit). 
More thorough documentation is available at the [Gateway Server documentation](https://github.com/pokt-network/gateway-server/blob/main/docs/overview.md). + +## Description of our implementation + +We are proxying to the Gateway Server. The Gateway Server, in turn, coordinates traffic with the POKT Network, i.e. selecting nodes to route traffic to based on the allocated app stakes. +We use [this script](https://github.com/baaspoolsllc/pokt-stake-apps-script) to create app stakes. +The private keys from the app stakes are imported to the Gateway Server, which distributes POKT-specific traffic among the staked nodes according to the app stakes, latency, and available nodes. + +Furthermore, the gateway server manages the quality of service and the pool of nodes on the POKT network. In short, it tracks connections between app stakes and nodes and distributes traffic among them. + +## Our usage + +We included a Docker image for the gateway server in our repository. +The version to pull and deploy can be managed in the appropriate `docker-compose.yaml` or `fly.toml` file. diff --git a/docs/pages/For_Contributors/6_Schema.mdx b/docs/pages/For_Contributors/6_Schema.mdx index afae543e..4bb13a50 100644 --- a/docs/pages/For_Contributors/6_Schema.mdx +++ b/docs/pages/For_Contributors/6_Schema.mdx @@ -2,35 +2,52 @@ ## Description -It is the contract layer between the gateway and the portal. Any change to the schema must be coordinated between the portal and the gateway. It is a shared component between the portal and the gateway, as such connecting the two components. +It is the data contract between the gateway and the portal. +Any change to the schema must be coordinated between the portal and the gateway. +It is a shared component between the portal and the gateway, and as such connects the two components. +We put it in the `services/postgres` directory so neither codebase feels it has direct control. Schemas are managed by **Prisma object relational model (ORM)**. 
It allows to generate object models for the schemas. +You can run `just generate` to generate the JavaScript code for interacting with the database. +In addition, changes to the schema can be pushed to the database with `just migrate`. ## Functionality -The schema consists of several tables. Tables contain a partition. There are portal-specific tables and some tables are shared between the gateway and the portal. Tables that are not interfacing with the gateway are frontend-specific. +The schema consists of several tables. +There is a conceptual partition in the schema based on whether the gateway proxy directly interfaces with it. +There are portal-specific tables and some tables which are shared between the gateway and the portal. +Tables that are not interfacing with the gateway are frontend-specific and can be modified without the need to coordinate with the golang packages. ### Shared Tables -- tenant table, managed by the portal - - stores and tracks the balance -- app(lication) table, managed by the portal -- app rule table, managed by the portal -- payment ledger, updated and managed by the watcher -- relay ledger, updated by the gateway -- product table, it is a look up table - - contains the supported chains, but generalises - - maps the chain name to an identifier +- **Tenant**: + - created by the portal; apps can be attached to it + - stores and tracks the relay balance +- **App**: + - application table, managed by the portal + - each app represents a set of endpoints for the enabled chains +- **AppRule**: + - app rules add security or constraints on the endpoints +- **RuleType**: + - types of app rules supported by the gateway +- **PaymentLedger**: + - updated and managed by the event watcher +- **RelayLedger**: + - updated by the gateway as relays are consumed +- **Product**: + - lookup table for endpoints + - contains the supported chains for the gateway + - maps the chain name to the POKT identifier - enables per product usage tracking -- rule-type 
table, a look up table - - describes the available rules - - it is used inconsequentially by the gateway ### Portal-only Tables -- user table, stores user and session information, as well as governing permissions -- org(anisation) table, currently not used, but allows for scalability in the future -- enterprise table, for future use - - an enterprise can have many tenants and many organisation - - organisations and tenants under an enterprise can autonomously manage balances -- user-org mapping is contained in another table +- **User**: + - stores user and session information +- **Org**: + - for future use + - allows multiple users to share a `Tenant` +- **Enterprise**: + - for future use + - an enterprise can have multiple tenants and organisations + - since balances are managed at the tenant level, allows multiple cost centers diff --git a/docs/pages/For_Contributors/7_Smart_Contracts_Payments.mdx b/docs/pages/For_Contributors/7_Smart_Contracts_Payments.mdx index 241102cf..e91e2891 100644 --- a/docs/pages/For_Contributors/7_Smart_Contracts_Payments.mdx +++ b/docs/pages/For_Contributors/7_Smart_Contracts_Payments.mdx @@ -4,24 +4,29 @@ It is a standard ERC-20 smart contract with a few additions, namely: -- open mint with a fixed price, which is maintained by Chainlink. - - the smart contract on Taiko Mainnet is using PYTH data feed through the Chainlink interface +- open `mint()` with a fixed price, which is maintained by Chainlink. 
+ - the smart contract on Taiko Mainnet is using Pyth data feed through the Chainlink interface - it allows for minting by sending the native token as the payable, the rate of which is set by the Chainlink price feed -- admin mint is only called by the owner to mint without having to pay +- `adminMint()` is only called by the owner to mint without having to pay - this is used for internal usage and enterprise-onboarding -- apply to account function +- `applyToAccount()` is used for adding balance to a tenant - initiates a burn and emits an event, which is used to increase the account balance within the gateway -- sweep and sweep token - - allows the operator to withdraw token balances of the contract, both native and ERC-20, sor settling internal accounts +- `sweep()` and `sweepToken()` + - allows the operator to withdraw token balances of the contract, both native and ERC-20, for settling internal accounts - this enables coverage of POKT relay costs - it completes the lifecycle of the smart contract -## PYTH Integration +## Pyth Integration -The Taiko deployment of the smart contract has a dependency of the PYTH-wrapper for Chainlink. It has been deployed by us, but the code-base is not maintained by PORTERS. The wrapper is then connected to our implementation. +The Taiko deployment of the smart contract has a dependency on the Pyth wrapper for Chainlink. +It has been deployed by us, but the code-base is not maintained by PORTERS. +The wrapper is then connected to our implementation as the price feed for minting. -A [script]() for deploying this wrapper exists and has to be modified for deploying any other price feed. +A [script](https://github.com/porters-xyz/gateway-demo/blob/develop/contracts/script/02_Pyth.s.sol) for deploying this wrapper exists and has to be modified for deploying any other price feed (currently ETH/USD). 
## Smart Contract Deployments -Link Smart Contract Deployments +- [Taiko](https://taikoscan.io/address/0x54d5f8a0e0f06991e63e46420bcee1af7d9fe944) +- [Optimism](https://optimistic.etherscan.io/address/0x54d5f8a0e0f06991e63e46420bcee1af7d9fe944) +- [Base](https://basescan.org/address/0x54d5f8a0e0f06991e63e46420bcee1af7d9fe944) +- [Gnosis Chain](https://gnosisscan.io/address/0x54d5f8a0e0f06991e63e46420bcee1af7d9fe944) diff --git a/docs/pages/For_Contributors/8_POKT_Integration.mdx b/docs/pages/For_Contributors/8_POKT_Integration.mdx index 2d061eda..203feaa3 100644 --- a/docs/pages/For_Contributors/8_POKT_Integration.mdx +++ b/docs/pages/For_Contributors/8_POKT_Integration.mdx @@ -1,18 +1,16 @@ # PORTERS Integration with POKT -We are using a script by Nodies for the app stakes, which can be found [here](). It is a simple node script, which walks you through the app stake process. - -One sets up a file with the private keys for accounts that contain unstaked POKT. The private key goes into the input directory. Then the script is run. - -The inputs are the POKT nodes, which is required to do the RPC calls. +We are using a script by Nodies for the app stakes, which can be found [here](https://github.com/baaspoolsllc/pokt-stake-apps-script). +It is a simple node script, which walks you through the app stake process. ## Prompts -- POKT node, URL of the node the script is going to call to -- Chain IDs, a CSV of the chains you intend to stake to. 
It can be a list of one to fifteen chain IDs -- selector for POKT Mainnet and POKT Testnet -- amount of POKT to stake per account and chain - - mono relays are split across chains - +- POKT node, URL of the node the script is going to call to stake +- Chain IDs, a comma-separated list of the chains you intend to stake to + - this can be a list of one to fifteen chain IDs +- selector for POKT Mainnet and POKT Testnet (y/n) +- amount of POKT to stake per private key + - the number of relays available is split evenly across all chains staked -Thereafter, the script verifies the prompts. The script then sends the POKT to stake. +Thereafter, the script verifies the prompts. +The script then submits the staking transaction to the POKT network. diff --git a/docs/pages/For_Contributors/9_Redis.mdx b/docs/pages/For_Contributors/9_Redis.mdx new file mode 100644 index 00000000..a1902c7e --- /dev/null +++ b/docs/pages/For_Contributors/9_Redis.mdx @@ -0,0 +1,11 @@ +# Redis Data Store + +The Redis data store is located between the gateway and the postgres server. Thus, it is a local cache for the information described in the schema. + +It is important to note that there is one postgres instance, but one Redis instance per region, so that there is a local copy that can be quickly read from or written to. + +Due to this setup, individual Redis instances may differ slightly, but are synced eventually. This is achieved because the only write from the Redis data store to the postgres instance is `append only`. +Hence, no conflicts are possible between instances. + +For example, given two users of the same app are in two regions, they both will be accessing different Redis instances. +However, their usage will be reflected in the postgres instance eventually. 
diff --git a/gateway/justfile b/gateway/justfile index 3d624dec..a6e0e53c 100644 --- a/gateway/justfile +++ b/gateway/justfile @@ -1,6 +1,9 @@ default: @just --list +clean: + echo + build: go install @@ -19,7 +22,7 @@ docker-run: docker-build prod-status: fly status -c fly.prod.toml -prod-deploy: +prod: fly scale count 3 --region sea,sin,ams -c fly.prod.toml # I was typing `just status` a lot so put this here diff --git a/justfile b/justfile index 11b50b5f..a158d485 100644 --- a/justfile +++ b/justfile @@ -1,31 +1,33 @@ default: @just --list +clean: + @just gateway/clean + @just web-portal/clean + @just docs/clean + test: @just gateway/test generate: - cd ./web-portal/backend && pnpm install && npx prisma generate + @just web-portal/backend/generate migrate: - cd ./web-portal/backend && pnpm install && npx prisma migrate dev - - -dev-backend: - cd ./web-portal/backend && pnpm install && pnpm start -build-backend: - cd ./web-portal/backend && pnpm install && pnpm build -serve-backend: - cd ./web-portal/backend && pnpm install && pnpm start:prod + @just web-portal/backend/migrate +dev: + @just web-portal/dev +build: + @just gateway/build + @just web-portal/build +run: + @just gateway/run + @just web-portal/run -dev-frontend: - cd ./web-portal/frontend && pnpm install && pnpm dev -build-frontend: - cd ./web-portal/frontend && pnpm install && pnpm build -serve-frontend: - cd ./web-portal/frontend && pnpm install && pnpm start +stage: + @just gateway/stage + @just web-portal/stage -deploy-prod: - @just gateway/prod-deploy - @just services/gatewaykit/prod-deploy - @just services/redis/prod-deploy +prod: + @just gateway/prod + @just services/gatewaykit/prod + @just services/redis/prod diff --git a/web-portal/backend/justfile b/web-portal/backend/justfile new file mode 100644 index 00000000..b55496fb --- /dev/null +++ b/web-portal/backend/justfile @@ -0,0 +1,29 @@ +default: + @just --list + +clean: + echo + +install: + pnpm install + +generate: install + npx prisma 
generate + +migrate: install + npx prisma migrate dev + +build: install + pnpm build + +dev: install + pnpm start + +run: install + pnpm start:prod + +stage: + fly deploy + +prod: + fly deploy -c fly.prod.toml diff --git a/web-portal/frontend/justfile b/web-portal/frontend/justfile new file mode 100644 index 00000000..35ba4334 --- /dev/null +++ b/web-portal/frontend/justfile @@ -0,0 +1,23 @@ +default: + @just --list + +clean: + rm -rf node_modules/ + +install: + pnpm install + +build: install + pnpm build + +dev: install + pnpm dev + +run: install + pnpm start + +stage: + fly deploy + +prod: + fly deploy -c fly.prod.toml diff --git a/web-portal/justfile b/web-portal/justfile new file mode 100644 index 00000000..004db282 --- /dev/null +++ b/web-portal/justfile @@ -0,0 +1,26 @@ +default: + @just --list + +clean: + @just backend/clean + @just frontend/clean + +build: + @just backend/build + @just frontend/build + +dev: + @just backend/dev + @just frontend/dev + +run: + @just backend/run + @just frontend/run + +stage: + @just backend/stage + @just frontend/stage + +prod: + @just backend/prod + @just frontend/prod