I am a software engineer working in the data space.
Here are some things I do:
For software engineers using Apache Spark on Google Cloud Dataproc, a common problem is building a data pipeline to move data between Google Cloud BigQuery and Google Cloud Spanner. I solved this problem in a generic way by using Scala to build a templated data pipeline called BigQuery to Spanner using Apache Spark in Scala. Software engineers can use my template instead of writing pipeline code themselves: they simply provide a few parameters, e.g. the input BigQuery table and the output Spanner table.
For software engineers using Apache Airflow on Google Cloud Composer, a common problem is understanding the difference between two Google Cloud Composer environments, for example when debugging why the same Airflow DAG works in a DEV environment but not in a PROD environment. I solved this problem by using Python to build a CLI tool called cloudcomposerdiff, which does a diff on two environments. My solution saves software engineers the time and hassle of manually comparing lots of different attributes across two environments.
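The core idea behind an environment diff can be sketched in a few lines of Python. This is an illustrative sketch only, not cloudcomposerdiff's actual code, and the attribute names (`airflow_version`, `node_count`, `env_var.FOO`) are made up for the example.

```python
# Illustrative sketch of diffing two environment-attribute mappings:
# report every attribute whose value differs between the two environments.
# Attribute names below are hypothetical, not real Composer fields.

def env_diff(env_a, env_b):
    """Return {attribute: (value_in_a, value_in_b)} for attributes that differ."""
    keys = set(env_a) | set(env_b)
    return {k: (env_a.get(k), env_b.get(k))
            for k in sorted(keys)
            if env_a.get(k) != env_b.get(k)}

dev = {"airflow_version": "2.7.3", "node_count": 3, "env_var.FOO": "1"}
prod = {"airflow_version": "2.6.3", "node_count": 3}

diff = env_diff(dev, prod)
# airflow_version and env_var.FOO differ between DEV and PROD;
# node_count matches, so it is not reported.
```

Surfacing only the differing attributes is what saves the manual side-by-side comparison.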
Check out this YouTube video of my conference workshop on building a streaming data pipeline using Apache Beam. The code for the workshop can be found here, and a cool visualisation tool for the pipeline output can be found here.
Check out my tutorial on building a batch data pipeline using Apache Beam.
See my codelab on optimising SQL query performance in BigQuery.
See my codelab on optimising the cost of data tables in BigQuery.