
# Crowdfunding Data | Extract, Transform, and Load (ETL)

This project implements an ETL process: data is extracted from a crowdfunding-related Excel dataset into pandas DataFrames, transformed, exported as CSV files, and loaded into a PostgreSQL database with a defined schema and relationship tables.

## Technologies Applied

Python, JupyterLab, pandas, PostgreSQL, and QuickDBD were used to implement the ETL process on the crowdfunding dataset.

- Data was extracted from the crowdfunding Excel dataset and loaded into pandas DataFrames in JupyterLab.
- The DataFrames were converted to CSV files and stored locally in the repository's resources folder.
- QuickDBD was used to design the entity relationship diagram (ERD), and the CSV files were imported into PostgreSQL (via pgAdmin 4), where the schema and relationship tables were created.
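The steps above can be sketched in pandas. The column names and sample rows below are hypothetical stand-ins for the actual dataset, and the commented file paths are illustrative only:

```python
# A minimal sketch of the extract/transform/export steps, assuming a
# combined "category & sub-category" column like the one this kind of
# crowdfunding dataset typically carries. All names here are hypothetical.
import io
import pandas as pd

# Extract: in the project this would be pd.read_excel("crowdfunding.xlsx");
# a small inline sample stands in for the spreadsheet here.
raw = pd.DataFrame({
    "cf_id": [147, 1621],
    "company_name": ["Baldwin, Riley and Jackson", "Werner-Bryant"],
    "goal": [100.0, 1800.0],
    "category & sub-category": ["food/food trucks", "music/rock"],
})

# Transform: split the combined column into two keys that can back
# separate relationship tables in the database.
raw[["category", "subcategory"]] = (
    raw["category & sub-category"].str.split("/", expand=True)
)
campaign = raw.drop(columns=["category & sub-category"])

# Load (step 1): write the DataFrame to CSV -- in the repository this
# would be a path such as a resources folder; a buffer is used here.
buf = io.StringIO()
campaign.to_csv(buf, index=False)
csv_text = buf.getvalue()
# Load (step 2): the resulting CSVs were then imported into PostgreSQL,
# e.g. through pgAdmin 4's import tool or pandas.DataFrame.to_sql.
```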

*(Entity relationship diagram image)*

Please see the attached CSV, SQL, and ERD files in the folders above this README.
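As a hint at what the attached SQL file contains, a schema with relationship tables for data like this might look as follows. This is a hypothetical excerpt; the actual table and column names in the repository's SQL file may differ:

```sql
-- Hypothetical sketch of a category lookup table and a campaign table
-- that references it, in the style of the schemas described above.
CREATE TABLE category (
    category_id VARCHAR(10) PRIMARY KEY,
    category VARCHAR(50) NOT NULL
);

CREATE TABLE campaign (
    cf_id INT PRIMARY KEY,
    company_name VARCHAR(100),
    goal NUMERIC(10, 2),
    category_id VARCHAR(10) REFERENCES category(category_id)
);
```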

## Authors

- Avis Randle
- Irina Tenis