Commit

Feat/jupyter book (#7)
* first cut jupyter book

* update requirements

* copy workflow

* update invocation paths

* fix folder typos

* restructure + change toc

* update toc location + references

* update action flow

* update toc dir to root

* reorganize book project

* fix toc

* update toc

* dedup syllabus

* try out sections

* add titles to assignments
mariogiampieri authored Jun 25, 2024
1 parent 59341bf commit 1a69281
Showing 225 changed files with 1,140,107 additions and 156 deletions.
124 changes: 42 additions & 82 deletions .github/workflows/static.yml
@@ -4,15 +4,17 @@ name: deploy-book
 on:
   push:
     branches:
       - master
       - main
       - feat/jupyter-book
     # If your git repository has the Jupyter Book within some-subfolder next to
     # unrelated files, you can make this run only if a file within that specific
     # folder has been modified.
     #
     # paths:
     # - Tutorials/sample-book/_build

   workflow_dispatch:
# This job installs dependencies, builds the book, and pushes it to `gh-pages`
@@ -23,84 +25,42 @@ jobs:
       pages: write
       id-token: write
     steps:
       - uses: actions/checkout@v3

       # Install dependencies
       - name: Set up Python 3.11
         uses: actions/setup-python@v4
         with:
           python-version: 3.11

       - name: Install dependencies
         run: |
           pip install -r requirements.txt

       # (optional) Cache your executed notebooks between runs
       # if you have config:
       # execute:
       #   execute_notebooks: cache
       - name: cache executed notebooks
         uses: actions/cache@v3
         with:
           path: _build/.jupyter_cache
           key: jupyter-book-cache-${{ hashFiles('requirements.txt') }}

       # Build the book
       - name: Build the book
         run: |
-          jupyter-book build ./Tutorials/sample-book
+          jupyter-book build . --path-output Book/output_book

       # Upload the book's HTML as an artifact
       - name: Upload artifact
         uses: actions/upload-pages-artifact@v2
         with:
-          path: "./Tutorials/sample-book/_build/html"
+          path: "Book/output_book/_build/html"

       # Deploy the book's HTML to GitHub Pages
       - name: Deploy to GitHub Pages
         id: deployment
         uses: actions/deploy-pages@v2

-# # Simple workflow for deploying static content to GitHub Pages
-# name: Deploy static content to Pages
-# on:
-#   # Runs on pushes targeting the default branch
-#   push:
-#     branches: ["main"]
-#   # Allows you to run this workflow manually from the Actions tab
-#   workflow_dispatch:
-# # Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
-# permissions:
-#   contents: read
-#   pages: write
-#   id-token: write
-# # Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
-# # However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
-# concurrency:
-#   group: "pages"
-#   cancel-in-progress: false
-# jobs:
-#   # Single deploy job since we're just deploying
-#   deploy:
-#     environment:
-#       name: github-pages
-#       url: ${{ steps.deployment.outputs.page_url }}
-#     runs-on: ubuntu-latest
-#     steps:
-#       - name: Checkout
-#         uses: actions/checkout@v4
-#       - name: Setup Pages
-#         uses: actions/configure-pages@v5
-#       - name: Upload artifact
-#         uses: actions/upload-pages-artifact@v3
-#         with:
-#           # Upload entire repository
-#           path: '.'
-#       - name: Deploy to GitHub Pages
-#         id: deployment
-#         uses: actions/deploy-pages@v4
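The cache step keys the saved `_build/.jupyter_cache` on `hashFiles('requirements.txt')`, so executed notebooks are reused only while the dependency pins stay unchanged. A rough Python sketch of that keying idea (`hashFiles` itself is a GitHub Actions expression, not Python; the requirements contents below are hypothetical):

```python
import hashlib
import tempfile
from pathlib import Path

def cache_key(requirements_path: Path) -> str:
    """Derive a cache key from the exact bytes of the requirements file,
    in the spirit of GitHub Actions' hashFiles()."""
    digest = hashlib.sha256(requirements_path.read_bytes()).hexdigest()
    return f"jupyter-book-cache-{digest}"

# Demo with a throwaway requirements file.
with tempfile.TemporaryDirectory() as tmp:
    req = Path(tmp) / "requirements.txt"
    req.write_text("jupyter-book==1.0.0\n")
    key_before = cache_key(req)
    req.write_text("jupyter-book==1.0.2\n")  # bump a pin
    key_after = cache_key(req)

# Any edit to requirements.txt changes the key, forcing a fresh notebook cache.
print(key_before != key_after)  # → True
```

The same property is why the workflow caches on the requirements file rather than, say, a timestamp: the key is stable across runs until the environment definition actually changes.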
3 changes: 2 additions & 1 deletion .gitignore
@@ -1,3 +1,4 @@
 Data/*
 .DS_Store
 Tutorials/cache/
+Book/output_book/mappluto.fgb
1 change: 1 addition & 0 deletions Assignment_Descriptions/00_Getting_Started.md
@@ -1,3 +1,4 @@
+# Getting Started
 1. Install [VSCode](https://code.visualstudio.com/)
 2. Clone the repository
 3. Create a new Python environment. I recommend using [Conda](https://conda.io/projects/conda/en/latest/user-guide/install/index.html) for this, but you can also use virtualenv. This will protect our system Python installation from any changes we make.
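Step 3 can also be done without Conda; a minimal sketch using only the standard library's `venv` module (the target directory here is hypothetical, and in real use you would keep `with_pip=True`, activate the environment, and `pip install -r requirements.txt`):

```python
import tempfile
import venv
from pathlib import Path

# Create an isolated environment in a throwaway directory; Conda does the
# same job conceptually, with its own package manager on top.
target = Path(tempfile.mkdtemp()) / "env"
venv.EnvBuilder(with_pip=False).create(target)  # with_pip=False only to keep the demo fast

# The environment records its own interpreter config, separate from the
# system Python installation.
print((target / "pyvenv.cfg").exists())  # → True
```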
1 change: 1 addition & 0 deletions Assignment_Descriptions/01_Loading_Visualizing.md
@@ -1,3 +1,4 @@
+# Loading and visualizing data
 Load and visualize a dataset of interest, either from [NYC Open Data](https://opendata.cityofnewyork.us/) or elsewhere. (If you already have an idea of what your final project topic will be, this would be a good opportunity to get a head start on it.)

 Explore its metadata and attributes, and home in on one or two attributes of particular interest. Provide an explanation (numeric, graphical, textual) of what the attribute can and cannot tell us. Save the dataset out to the file format of your choice (and know why you chose that format).
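A bare-bones sketch of the summarize-and-save step using only the standard library (a real submission would more likely use pandas or geopandas; every row below is invented):

```python
import csv
import json
import tempfile
from pathlib import Path
from statistics import mean

# Hypothetical rows standing in for a dataset pulled from NYC Open Data.
rows = [
    {"borough": "Bronx", "tree_count": 84},
    {"borough": "Brooklyn", "tree_count": 172},
    {"borough": "Manhattan", "tree_count": 65},
]

# Numeric summary of one attribute of interest.
avg = mean(r["tree_count"] for r in rows)
print(f"mean tree_count: {avg:.1f}")  # → mean tree_count: 107.0

# Save out in two formats; the choice is a trade-off (CSV is flat and
# spreadsheet-friendly, JSON preserves types and nesting).
out_dir = Path(tempfile.mkdtemp())
csv_path = out_dir / "trees.csv"
json_path = out_dir / "trees.json"
with csv_path.open("w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["borough", "tree_count"])
    writer.writeheader()
    writer.writerows(rows)
json_path.write_text(json.dumps(rows, indent=2))
```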
2 changes: 2 additions & 0 deletions Assignment_Descriptions/02_Geoprocessing.md
@@ -1,3 +1,5 @@
+# Geoprocessing
+
 Create a dataset that expresses a narrative from part of your daily life, either now or in the past. This can be based on a mental map of your experience in New York or another city, on geolocated map tracks (e.g., your Google Maps history), or on some other source. Based on the data type (point, line, polygon), consider how it would or could relate to other datasets that lend themselves to describing your mental image of the city: subway routes, the location of open space, your favorite coffee cart, etc. We will spend time in the next class relating these kinds of datasets together.
 - Use https://geojson.io/, https://play.placemark.io/, QGIS, or other software to create your dataset
 - **You must submit, via [tk], a GeoJSON file of your dataset, a proposed related dataset, and a proposed workflow for relating the two (expressed as a diagram)**. Ideally the related dataset will be something you have access to, but if not, describe how you would propose creating it. Come prepared to discuss; we will talk through a couple of examples next class.
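Besides geojson.io, Placemark, or QGIS, the GeoJSON deliverable can be assembled directly; a minimal sketch with the standard library (the place names and coordinates are invented):

```python
import json

# Hypothetical points from a mental map of a daily route. Note that GeoJSON
# uses [longitude, latitude] coordinate order (RFC 7946).
stops = [
    ("coffee cart", -73.9857, 40.7484),
    ("subway entrance", -73.9879, 40.7527),
]

feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {"name": name},
        }
        for name, lon, lat in stops
    ],
}

# This text is what you would save as, say, daily_route.geojson.
geojson_text = json.dumps(feature_collection, indent=2)
print(len(feature_collection["features"]))  # → 2
```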
2 changes: 2 additions & 0 deletions Assignment_Descriptions/03_Networks.md
@@ -1,3 +1,5 @@
+# Networks
+
 Define a network, and calculate the distance between different elements of it. Discuss the experiential differences between Euclidean and network distance for the objects in question. Bonus points (in the form of kudos) for quantitative exploration of the network as described in Xin et al. (2022).

 Provide a notebook with the research statement clearly stated, and step through the creation of the network; the nodes for which you are measuring distance; and your reflection on the results. You should produce a series of maps and charts to show your work.
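The Euclidean-versus-network contrast can be made concrete with a toy example; a sketch using a hand-rolled Dijkstra over an invented four-node street grid (for a real network, networkx or osmnx would be the usual tools):

```python
import heapq
import math

# Hypothetical street network: nodes are (x, y) intersections, edges carry
# their Euclidean length as the travel cost.
nodes = {"A": (0, 0), "B": (4, 0), "C": (4, 3), "D": (0, 3)}
edges = {("A", "B"), ("B", "C"), ("C", "D")}  # note: no direct A-D street

def length(u, v):
    return math.dist(nodes[u], nodes[v])

# Build an undirected adjacency list.
adj = {n: [] for n in nodes}
for u, v in edges:
    adj[u].append((v, length(u, v)))
    adj[v].append((u, length(u, v)))

def network_distance(src, dst):
    """Dijkstra's shortest-path distance over the street graph."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf

euclid = length("A", "D")             # straight line: 3.0
network = network_distance("A", "D")  # along streets A→B→C→D: 4 + 3 + 4
print(euclid, network)  # → 3.0 11.0
```

The gap between the two numbers (3 as the crow flies, 11 along the streets) is exactly the experiential difference the assignment asks you to discuss.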
Binary file added Book/output_book/_build/.doctrees/intro.doctree
(several other binary build artifacts changed; not shown)
4 changes: 4 additions & 0 deletions Book/output_book/_build/html/.buildinfo
@@ -0,0 +1,4 @@
+# Sphinx build info version 1
+# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
+config: 53e4387f4d39099d62e0fd8944cbc4c4
+tags: 645f666f9bcd5a90fca523b33c5a78b7
