Commit

fixed e2e test fail due to req ENV var
AdenForshaw committed Oct 31, 2024
1 parent 8d9f38b commit 1d71142
Showing 4 changed files with 17 additions and 2 deletions.
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -58,3 +58,4 @@ report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
.env
credentials.json
gcp-credentials.json
.env.test
11 changes: 10 additions & 1 deletion docs/api-design.md
@@ -4,12 +4,21 @@

Response objects should be as close to the Overture Schema as possible, and use the `ext_` prefix for any additional fields. This allows us to easily map the response to the schema, and also allows us to add additional fields without breaking the schema.
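As a minimal TypeScript sketch of this convention (the exact field set and the `ext_distance_m` field are hypothetical, not part of the Overture Schema):

```typescript
// An Overture-shaped response, where any non-schema additions carry the
// ext_ prefix so they can never collide with real schema fields.
interface OverturePlaceResponse {
  id: string;
  names: { primary: string; common?: string };
  brand?: { wikidata?: string; names?: { primary: string } };
  // Additional, non-schema field (hypothetical example):
  ext_distance_m?: number;
}

// Attach a computed value without breaking the schema shape.
function withDistance(
  place: OverturePlaceResponse,
  distanceM: number,
): OverturePlaceResponse {
  return { ...place, ext_distance_m: distanceM };
}
```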


## Query Parameters

Request parameters should use the Overture fields for reference, with underscore separators for filtering by nested fields. For example, `brand_wikidata` for filtering by `brand.wikidata`.

Where a filter is a name that could be used to filter against multiple fields, we should use the `name` field. For example, `brand_name=McDonalds` should filter against `brand.names.primary` and `brand.names.common`.
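The two rules above could be sketched as a small mapping helper (the function and constant names here are hypothetical, not from the codebase):

```typescript
// Name-like filters fan out to both name fields.
const NAME_FIELDS = ["names.primary", "names.common"];

// Translate an underscore-separated query parameter into the nested
// Overture field(s) it should filter against.
function paramToFields(param: string): string[] {
  // e.g. brand_name -> brand.names.primary, brand.names.common
  if (param.endsWith("_name")) {
    const prefix = param.slice(0, -"_name".length);
    return NAME_FIELDS.map((f) => `${prefix}.${f}`);
  }
  // e.g. brand_wikidata -> brand.wikidata
  return [param.replace(/_/g, ".")];
}
```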

## Security

The OWASP Top 10 security risks should be considered when designing the API.

## Cost control

Rate limiting and caching should be used to control costs. We can use the free tiers of Cloud Run and Cloud Storage to cache data for faster response times. In production, consider using Redis instead of Cloud Storage for caching, and migrating the parts of the dataset you need into a private BigQuery dataset or a different database for speed and cost, especially for building shapes.
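The caching idea is a plain cache-aside pattern; a minimal sketch (in-memory here purely for illustration — the real service would read and write a Cloud Storage object or Redis key instead):

```typescript
// Naive in-memory cache-aside with TTL. Stand-in for Cloud Storage/Redis.
const cache = new Map<string, { value: string; expires: number }>();

async function cached(
  key: string,
  ttlMs: number,
  compute: () => Promise<string>,
): Promise<string> {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value; // cache hit
  const value = await compute(); // cache miss: do the expensive query
  cache.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}
```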

## Strictness

As this is an educational API, we should stick to the principle of being flexible in what we accept and strict in what we return. This means we should accept a wide range of input parameters, but return a strict response format.
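One way to picture "flexible in, strict out" is lenient parsing of a query parameter that always normalizes to one canonical type (a hypothetical helper, not from the codebase):

```typescript
// Accept several common spellings of a boolean query parameter,
// but always produce a strict boolean in the response pipeline.
function parseBooleanParam(raw: string | undefined, fallback = false): boolean {
  if (raw === undefined) return fallback;
  return ["1", "true", "yes", "y"].includes(raw.trim().toLowerCase());
}
```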
2 changes: 1 addition & 1 deletion package.json
@@ -17,7 +17,7 @@
"test:watch": "jest --watch",
"test:cov": "jest --coverage",
"test:debug": "node --inspect-brk -r tsconfig-paths/register -r ts-node/register node_modules/.bin/jest --runInBand",
"test:e2e": "jest --config ./test/jest-e2e.json"
"test:e2e": "export $(cat .env.test | xargs) && NODE_ENV=test jest --config ./test/jest-e2e.json"
},
"dependencies": {
"@google-cloud/bigquery": "^7.9.1",
5 changes: 5 additions & 0 deletions src/gcs/gcs.service.ts
@@ -12,6 +12,11 @@ export class GcsService {
logger = new Logger('GcsService');

constructor() {

if (!process.env.BIGQUERY_PROJECT_ID || !process.env.GOOGLE_APPLICATION_CREDENTIALS || !process.env.GCS_BUCKET_NAME) {
this.logger.error('GCS environment variables not set');
return;
}
this.storage = new Storage({
projectId: process.env.BIGQUERY_PROJECT_ID,
keyFilename: process.env.GOOGLE_APPLICATION_CREDENTIALS,
