adding code and readme #1

Open · wants to merge 27 commits into main

Commits (27)
All 27 commits are by burdinskid13:

| Commit | Message | Date |
|---|---|---|
| `9888f54` | commiting code written for preprocessing DICOMs | Sep 25, 2023 |
| `2f7e2b4` | Delete .gitignore | Sep 25, 2023 |
| `d9ac71b` | Merge remote-tracking branch 'origin/main' | Sep 26, 2023 |
| `ea1325f` | adding two missing scripts to preprocessing (additional MID events po… | Sep 27, 2023 |
| `fcc4980` | committing analysis scripts for GLM analysis of the 3 tasks and behav… | Sep 27, 2023 |
| `d77c3c8` | updating environment yml file to reflect latest python environment | Sep 27, 2023 |
| `df0a378` | removing older environment yml version | Sep 27, 2023 |
| `138e1a7` | Update README.md to include directory structure and code guidelines | Sep 27, 2023 |
| `cef70c9` | Update analysis/behavioral_analysis/cannabis_table.ipynb | Oct 2, 2023 |
| `073e5fb` | Update cannabis_table.ipynb | Oct 2, 2023 |
| `a70fd73` | Update analysis/behavioral_analysis/cannabis_table.ipynb | Oct 2, 2023 |
| `0b3f088` | Update README.md | Oct 3, 2023 |
| `5d1141d` | Update README.md | Oct 3, 2023 |
| `0d917b7` | Update HCP_smoothing_by_task.sh | Oct 3, 2023 |
| `b1a4285` | Update first_level_analysis_ss.py | Oct 3, 2023 |
| `56d0ce3` | Update group_level_analysis_surf.ipynb | Oct 3, 2023 |
| `92520c6` | Update group_level_analysis_surf.ipynb | Oct 3, 2023 |
| `4aef2cb` | Update group_level_analysis_vol.ipynb | Oct 3, 2023 |
| `ca91302` | Update and rename final_volume_plots.ipynb to figures_vol.ipynb | Oct 3, 2023 |
| `073adcf` | Update and rename final_surface_plots.ipynb to figures_surf.ipynb | Oct 3, 2023 |
| `cae8ac7` | preprocessing and analysis code as of Feb 22, 2024 | Feb 22, 2024 |
| `1098918` | Update README.md to reflect code changes saved on Feb 22, 2024 | Feb 22, 2024 |
| `ee75657` | added change score model and changed freq encoding | Apr 29, 2024 |
| `f1bcbb4` | updated readme | Apr 29, 2024 |
| `514c0fc` | analysis code changes for second submission of study | Jun 29, 2024 |
| `fc1a96a` | updating README.md accordingly with the second submission | Jun 29, 2024 |
| `0925672` | changing IQR representation to range format in table 1 and etable1 | Aug 15, 2024 |
160 changes: 0 additions & 160 deletions .gitignore

This file was deleted.

208 changes: 207 additions & 1 deletion README.md
@@ -1 +1,207 @@
# cannabis-paper
# Association of year-long cannabis use for medical symptoms with brain activation during cognitive processes

Debbie Burdinski, Alisha Kodibagkar, Kevin Potter, Randi Schuster, A. Eden Evins, Jodi Gilman*, Satrajit Ghosh*

*co-supervised


## Code Execution Guidelines

### Directory structure:

Main directory:
* Contains:
* BIDS-organized nifti and cifti data with matching events files
* _code_ directory
* _derivatives_ directory

Code directory:
* Structured like this GitHub repository
* Scripts are meant to be run from the directory that contains them (so that relative paths resolve)
* Contains:
* _analysis_ directory with code for demographic summaries, cannabis metrics, behavioral analysis, HCP-based smoothing, first-, second-, and group-level linear modeling, and figure generation
* _environment_setup_ directory with yml files for the Python environments needed for analysis and visualization
* _preprocessing_ directory with code for heudiconv, fmriprep, events file population, and mriqc

Derivatives directory:
* Saves outputs from preprocessing and analyses and the final visualizations
* Contains:
* _behavioral_ directory with prepared data for the nback behavioral analysis
* _demographics_ directory with demographics and cannabis characteristics by group tables and with prepared data for the cannabis metrics comparison
* _HCP_smoothing_ directory with smoothed cifti files using the HCP pipeline's tool
* _mriqc_ directory with mriqc metrics by scan
* _ses-1year_ directory with fmriprep outputs from the one-year timepoint scans
* _ses-baseline_ directory with fmriprep outputs from the baseline timepoint scans
* _task_analysis_surface_ directory with first and second level outputs from the cifti-based analysis and intermediate and final visualizations
* _task_analysis_volume_ directory with first and second level outputs from the nifti-based analysis and intermediate and final visualizations
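
Putting the above together, the expected layout looks roughly like this (a sketch; the `sub-*` naming is an assumption following the BIDS convention, and files within directories are omitted):

```
main_directory/
├── sub-*/                     # BIDS-organized nifti and cifti data with matching events files
├── code/
│   ├── analysis/
│   ├── environment_setup/
│   └── preprocessing/
└── derivatives/
    ├── behavioral/
    ├── demographics/
    ├── HCP_smoothing/
    ├── mriqc/
    ├── ses-1year/
    ├── ses-baseline/
    ├── task_analysis_surface/
    └── task_analysis_volume/
```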


### Code Requirements:

1. Create a conda environment from _environment_setup/main_analysis_env.yml_ and activate it before running any Python scripts (exception: for the fsleyes visualization, use an environment created from _fsleyes_visualization_env.yml_ instead)
2. Download the following Singularity containers (e.g., built from Docker Hub images):
* heudiconv: heudiconv_0.11.3.sif
* fmriprep: fmriprep_23.0.1.sif
* note that running fmriprep also requires a valid _fs_license.txt_ (change the path accordingly)
* mriqc: mriqc_0.16.1.sif
3. Install the following:
* singularity: Singularity 3.9.5
* hcp tools: Connectome Workbench v1.2.3
4. Install the necessary R packages noted in the R scripts

_Note that the paths to the containers and installations will need to be changed depending on where you keep yours!_
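
A minimal setup sketch under these requirements; the conda environment name and the Docker Hub image sources are assumptions (check the yml's `name:` field and your registry of choice):

```bash
# 1. Create and activate the main analysis environment
conda env create -f environment_setup/main_analysis_env.yml
conda activate main_analysis_env   # assumed name; check the yml

# 2. Pull the Singularity containers (image sources assumed; adjust paths to where you keep yours)
singularity pull heudiconv_0.11.3.sif docker://nipy/heudiconv:0.11.3
singularity pull fmriprep_23.0.1.sif docker://nipreps/fmriprep:23.0.1
singularity pull mriqc_0.16.1.sif docker://nipreps/mriqc:0.16.1
```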

### Preprocessing:

1. heudiconv-based BIDS conversion

* Run the following using SLURM job scheduler:
* _preprocessing/bids_conversion/submit_job_array.sh_
* this will call for each participant: _preprocessing/bids_conversion/ss_heudiconv.sh_
* which will use: _preprocessing/bids_conversion/heuristic.py_

* Run the following python script to assign field maps to scans:
* _preprocessing/bids_conversion/add_intendedfor_excep.py_
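
A sketch of this step, assuming the scripts take no additional arguments and are run from their own directory:

```bash
cd code/preprocessing/bids_conversion
sbatch submit_job_array.sh        # one heudiconv job per participant, via ss_heudiconv.sh and heuristic.py
python add_intendedfor_excep.py   # afterwards, assign field maps to scans
```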


2. fmriprep preprocessing pipeline

* HC baseline: Run the following using SLURM job scheduler:
* _preprocessing/fmriprep_HC_baseline/submit_job_array_baseline.sh_
* this will call for each task: _preprocessing/fmriprep_HC_baseline/ss_fmriprep_baseline.sh_
* which will use: _preprocessing/fmriprep_HC_baseline/baseline_unco_filter.json_

* MCC baseline: Run the following using SLURM job scheduler:
* _preprocessing/fmriprep_MM_baseline/submit_job_array_baseline.sh_
* this will call for each task: _preprocessing/fmriprep_MM_baseline/ss_fmriprep_baseline.sh_
* which will use: _preprocessing/fmriprep_MM_baseline/baseline_unco_filter.json_

* MCC one-year: Run the following using SLURM job scheduler:
* _preprocessing/fmriprep_MM_1year/submit_job_array_1year.sh_
* this will call for each task: _preprocessing/fmriprep_MM_1year/ss_fmriprep_1year.sh_
* which will use: _preprocessing/fmriprep_MM_1year/1year_unco_filter.json_
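
A sketch of the three submissions, under the same assumption that each script is run from its own directory with no extra arguments:

```bash
cd code/preprocessing/fmriprep_HC_baseline && sbatch submit_job_array_baseline.sh   # HC baseline
cd ../fmriprep_MM_baseline && sbatch submit_job_array_baseline.sh                   # MCC baseline
cd ../fmriprep_MM_1year    && sbatch submit_job_array_1year.sh                      # MCC one-year
```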


3. mriqc-based quality metric generation

* Run the following using SLURM job scheduler:
* _preprocessing/mriqc/submit_job_array_mriqc_participants.sh_
* this will call for each participant: _preprocessing/mriqc/ss_mriqc_participants.sh_

* Run the following jupyter notebook top to bottom:
* _preprocessing/mriqc/create_mriqc_tsv.ipynb_
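
A sketch of this step; the `jupyter nbconvert` call is one way to execute a notebook top to bottom non-interactively, and the same pattern applies to the later notebook steps:

```bash
cd code/preprocessing/mriqc
sbatch submit_job_array_mriqc_participants.sh
# once the array jobs have finished:
jupyter nbconvert --to notebook --execute --inplace create_mriqc_tsv.ipynb
```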


4. Events file population with appropriate event timings and durations

* Run the following jupyter notebooks top to bottom:
* _preprocessing/events_file_population/create_SST_events_files.ipynb_
* _preprocessing/events_file_population/create_nback_events_files.ipynb_
* _preprocessing/events_file_population/create_MID_events_files.ipynb_
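
For example, the three notebooks could be executed in sequence like this (a sketch, assuming they are independent of one another):

```bash
cd code/preprocessing/events_file_population
for nb in create_SST_events_files.ipynb create_nback_events_files.ipynb create_MID_events_files.ipynb; do
  jupyter nbconvert --to notebook --execute --inplace "$nb"
done
```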


### Analysis:

1. Demographics and cannabis characteristics by group table generation

* Run the following jupyter notebooks top to bottom:
* _analysis/demographics/cannabis_table_and_data_preparation.ipynb_
* _analysis/demographics/demographics_table.ipynb_
* _analysis/demographics/demographics_comparison.ipynb_
* Run the following R scripts top to bottom:
* _analysis/demographics/demographics_fisher_exact_tests.R_
* _analysis/demographics/cannabis_comparison.R_
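
A sketch of this step, following the order above and assuming the R scripts run non-interactively via `Rscript`:

```bash
cd code/analysis/demographics
jupyter nbconvert --to notebook --execute --inplace \
  cannabis_table_and_data_preparation.ipynb demographics_table.ipynb demographics_comparison.ipynb
Rscript demographics_fisher_exact_tests.R
Rscript cannabis_comparison.R
```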


2. Behavioral Analysis

* Run the following jupyter notebook top to bottom:
* _analysis/behavioral_analysis/nback_behavioral_data_preparation.ipynb_
* Run the following R scripts top to bottom:
* _analysis/behavioral_analysis/nback_behavioral_comparison.R_
* _analysis/behavioral_analysis/SST_behavioral_comparison.R_


3. Cifti smoothing using hcp tools

* Run the following using SLURM job scheduler:
* _analysis/HCP_smoothing/run_HCP_smoothing_for_tasks.sh_
* this will call for each task: _analysis/HCP_smoothing/HCP_smoothing_by_tasks.sh_
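
A sketch of the submission, plus what the underlying Connectome Workbench call looks like; the kernel values (sigma in mm by default) and surface file names here are illustrative assumptions, as the repository script sets the actual values:

```bash
cd code/analysis/HCP_smoothing
sbatch run_HCP_smoothing_for_tasks.sh

# Under the hood, smoothing a cifti dtseries with Connectome Workbench looks roughly like:
wb_command -cifti-smoothing bold.dtseries.nii 4 4 COLUMN bold_smoothed.dtseries.nii \
  -left-surface sub-XX_hemi-L_midthickness.surf.gii \
  -right-surface sub-XX_hemi-R_midthickness.surf.gii
```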


4. First-, second- (for the mid and sst tasks, given they have two runs), and group-level modeling for nifti (volumetric) data

* First level: Run the following using SLURM job scheduler:
* Replace {task} with the task that you want to run the first level model for
* _analysis/task_analysis_volume/submit_job_array_first_level.sh {task}_
* this will call for each participant: _analysis/task_analysis_volume/first_level_analysis_ss.sh_
* which in turn calls: _analysis/task_analysis_volume/first_level_analysis_ss.py_

* Second level (only for mid and sst tasks given they have two runs): Run the following using SLURM job scheduler:
* Replace {task} with the task that you want to run the second level model for
* _analysis/task_analysis_volume/submit_job_array_second_level_vol.sh {task}_
* this will call for each participant: _analysis/task_analysis_volume/second_level_analysis_vol_ss.sh_
* which in turn calls: _analysis/task_analysis_volume/second_level_analysis_vol_ss.py_

* Group level: Run the following jupyter notebook top to bottom:
* Set _task=_ to the task you want to analyze in the block that loops through all relevant participants
* this happens in 4 locations in this notebook: the group-wise analysis, the across-groups analysis at baseline, the across-timepoints analysis of the MCC group, and the change score model focusing on changes in cannabis frequency
* Note that you can comment code in or out to make modeling decisions, e.g., outlier and covariate selection
* _analysis/task_analysis_volume/group_level_analysis_vol.ipynb_
* this will save the effect size and stat maps per group/session/task at _../../../derivatives/task_analysis_volume/group_level/group-{group}/ses-{ses}/task-{task}_ to be used by the fsleyes visualization
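
A sketch of the volumetric pipeline end to end; the task labels are assumptions, so substitute the task names used in your BIDS dataset:

```bash
cd code/analysis/task_analysis_volume
for task in mid nback sst; do               # first level: all tasks (assumed labels)
  sbatch submit_job_array_first_level.sh "$task"
done
for task in mid sst; do                     # second level: only the two-run tasks
  sbatch submit_job_array_second_level_vol.sh "$task"
done
# then run group_level_analysis_vol.ipynb top to bottom, setting task= in each of the 4 locations
```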


5. First-, second- (for the mid and sst tasks, given they have two runs), and group-level modeling for cifti (grayordinate) data

* First level: Run the following using SLURM job scheduler:
* Replace {task} with the task that you want to run the first level model for
* _analysis/task_analysis_surface/submit_job_array_first_level.sh {task}_
* this will call for each participant: _analysis/task_analysis_surface/first_level_analysis_ss.sh_
* which in turn calls: _analysis/task_analysis_surface/first_level_analysis_ss.py_

* Second level (only for mid and sst tasks given they have two runs): Run the following using SLURM job scheduler:
* Replace {task} with the task that you want to run the second level model for
* _analysis/task_analysis_surface/submit_job_array_second_level_surf.sh {task}_
* this will call for each participant: _analysis/task_analysis_surface/second_level_analysis_surf_ss.sh_
* which in turn calls: _analysis/task_analysis_surface/second_level_analysis_surf_ss.py_

* Group level: Run the following jupyter notebook top to bottom:
* Set _task=_ to the task you want to analyze in the block that loops through all relevant participants
* this happens in 4 locations in this notebook: the group-wise analysis, the across-groups analysis at baseline, the across-timepoints analysis of the MCC group, and the change score model focusing on changes in cannabis frequency
* Note that you can comment code in or out to make modeling decisions, e.g., outlier and covariate selection
* _analysis/task_analysis_surface/group_level_analysis_surf.ipynb_
* this will save the left/right hemispheres and flat maps as well as the coronal display per group/session/task at _../../../derivatives/task_analysis_surface/visualization/raw_indiv_figures_ to be used by the final pillow visualization
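
The cifti pipeline mirrors the volumetric one (same assumptions about the task labels):

```bash
cd code/analysis/task_analysis_surface
for task in mid nback sst; do
  sbatch submit_job_array_first_level.sh "$task"
done
for task in mid sst; do
  sbatch submit_job_array_second_level_surf.sh "$task"
done
# then run group_level_analysis_surf.ipynb top to bottom, setting task= in each of the 4 locations
```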


6. fsleyes visualization for nifti/volumetric results

* Run the following jupyter notebook top to bottom:
* Set _task=_ to the task you want to visualize in the block that loops through all relevant participants
* _analysis/fsleyes_vis/fsleyes_visualization.ipynb_
* this will save the fsleyes visualizations per group/session/task at _../../../derivatives/task_analysis_volume/visualization/fsleyes_indiv_figures_ to be used by pillow for figure generation


7. Figure generation using fsleyes outputs and pillow for nifti/volumetric results

* Run the following jupyter notebook top to bottom:
* Set _task=_ to the task you want to visualize in the block that loops through all relevant participants
* _analysis/figures/figures_vol.ipynb_
* this will save intermediate visualizations per task at _../../../derivatives/task_analysis_volume/visualization/per_contrast_figures_
* this will save the nifti/volumetric figures per task at _../../../derivatives/task_analysis_volume/visualization/complete_figures_ to be used by pillow for panel figure generation


8. Figure generation using nilearn and pillow for cifti/grayordinate results

* Run the following jupyter notebook top to bottom:
* Set _task=_ to the task you want to visualize in the block that loops through all relevant participants
* _analysis/figures/figures_surf.ipynb_
* this will save intermediate visualizations per task at _../../../derivatives/task_analysis_surface/visualization/indiv_figures_ and _../../../derivatives/task_analysis_surface/visualization/per_contrast_figures_
* this will save the cifti/grayordinate figures per task at _../../../derivatives/task_analysis_surface/visualization/complete_figures_ to be used by pillow for panel figure generation


9. Panel figure generation using pillow for the complete results

* Run the following jupyter notebook top to bottom:
* Set _task=_ to the task you want to visualize in the block that loops through all relevant participants
* _analysis/figures/figure_panels.ipynb_
* this will display the panel visualizations per task
Binary file added analysis/.DS_Store
Binary file added analysis/._.DS_Store