
Commit

deploy: 2a50ea4
rkansal47 committed Sep 6, 2024
1 parent 59a3207 commit ef56f0f
Showing 35 changed files with 1,869 additions and 1,254 deletions.
4 changes: 2 additions & 2 deletions _sources/index.md
@@ -25,8 +25,8 @@ ______________________________________________________________________
## Introduction

This is a set of tutorials for the CMS Machine Learning Hands-on Advanced Tutorial Session (HATS).
They are intended to show you how to build machine learning models in python, using `Keras`, `TensorFlow`, and `PyTorch`, and use them in your `ROOT`-based analyses.
We will build event-level classifiers for differentiating VBF Higgs and standard model background 4 muon events and jet-level classifiers for differentiating boosted W boson jets from QCD jets using dense and convolutional neural networks.
They are intended to show you how to build machine learning models in python, using `xgboost`, `Keras`, `TensorFlow`, and `PyTorch`, and use them in your `ROOT`-based analyses.
We will build event-level classifiers for differentiating VBF Higgs and standard model background 4 muon events and jet-level classifiers for differentiating boosted W boson jets from QCD jets using BDTs as well as dense and convolutional neural networks.
We will also explore more advanced models such as graph neural networks (GNNs), variational autoencoders (VAEs), and generative adversarial networks (GANs) on simple datasets.
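As a taste of the new BDT tutorial, a minimal binary-classification sketch might look like the following. This is illustrative only: it uses scikit-learn's `GradientBoostingClassifier` on a synthetic dataset as a stand-in for `xgboost` and the actual HATS samples.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the HATS event samples: 10 input features, 2 classes.
X, y = make_classification(
    n_samples=2000, n_features=10, n_informative=6, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A small boosted-decision-tree ensemble: 100 trees of depth 3.
bdt = GradientBoostingClassifier(
    n_estimators=100, max_depth=3, learning_rate=0.1, random_state=0
)
bdt.fit(X_train, y_train)
print(f"test accuracy: {bdt.score(X_test, y_test):.3f}")
```

The same `fit`/`predict` workflow carries over to `xgboost.XGBClassifier`, which the tutorial notebook uses on the real datasets.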

## Setup
466 changes: 14 additions & 452 deletions _sources/notebooks/1-datasets-uproot.ipynb

Large diffs are not rendered by default.

556 changes: 556 additions & 0 deletions _sources/notebooks/2-boosted-decision-tree.ipynb

Large diffs are not rendered by default.

File renamed without changes.
File renamed without changes.
File renamed without changes.
@@ -55,14 +55,13 @@
"### Convolution Operation\n",
"The two-dimensional convolution operation for an image of height $H$ and width $W$, with $C$ input channels, $N$ output kernels (filters), and kernel height $J$ and width $K$, is given by:\n",
"\n",
"\\begin{align}\n",
"\\label{convLayer}\n",
"\\boldsymbol{Y}[v,u,n] &= \\boldsymbol{\\beta}[n] + \\sum_{c=1}^{C} \\sum_{j=1}^{J} \\sum_{k=1}^{K} \\boldsymbol{X}[v+j,u+k,c]\\, \\boldsymbol{W}[j,k,c,n]\\,,\n",
"\\end{align}\n",
"$$\n",
"\\boldsymbol{Y}[v,u,n] = \\boldsymbol{\\beta}[n] + \\sum_{c=1}^{C} \\sum_{j=1}^{J} \\sum_{k=1}^{K} \\boldsymbol{X}[v+j,u+k,c]\\, \\boldsymbol{W}[j,k,c,n]\\,,\n",
"$$\n",
"\n",
"where $Y$ is the output tensor of size $V \\times U \\times N$, $W$ is the weight tensor of size $J \\times K \\times C \\times N$, and $\\beta$ is the bias vector of length $N$.\n",
"\n",
"The example below has $C=1$ input channel and $N=1$ ($J\\times K=3\\times 3$) kernel [credit](https://towardsdatascience.com/types-of-convolution-kernels-simplified-f040cb307c37):\n",
"The example below has $C=1$ input channel and $N=1$ ($J\\times K=3\\times 3$) kernel ([credit](https://towardsdatascience.com/types-of-convolution-kernels-simplified-f040cb307c37)):\n",
"\n",
"![convolution](https://miro.medium.com/v2/resize:fit:780/1*Eai425FYQQSNOaahTXqtgg.gif)"
]
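The summation above can be written out directly in code. Below is a naive NumPy loop implementation of the formula, assuming no padding and stride 1 ("valid" convolution); it is a sketch for illustration, not an efficient implementation:

```python
import numpy as np

def conv2d(X, W, beta):
    """Naive 2D convolution: Y[v,u,n] = beta[n] + sum_{c,j,k} X[v+j,u+k,c] W[j,k,c,n].

    X: (H, W_img, C) input, W: (J, K, C, N) kernels, beta: (N,) biases.
    """
    H, Wi, C = X.shape
    J, K, C2, N = W.shape
    assert C == C2, "channel counts must match"
    V, U = H - J + 1, Wi - K + 1  # "valid" output size: no padding, stride 1
    Y = np.zeros((V, U, N))
    for v in range(V):
        for u in range(U):
            for n in range(N):
                # Inner triple sum over j, k, c collapses to one elementwise product.
                Y[v, u, n] = beta[n] + np.sum(X[v:v + J, u:u + K, :] * W[:, :, :, n])
    return Y

X = np.arange(25, dtype=float).reshape(5, 5, 1)  # 5x5 image, C=1 channel
W = np.ones((3, 3, 1, 1)) / 9.0                  # one 3x3 averaging kernel, N=1
Y = conv2d(X, W, beta=np.zeros(1))
print(Y.shape)  # (3, 3, 1)
```

Each output pixel here is the mean of the corresponding 3×3 input patch, which is exactly the sliding-window operation animated above.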
@@ -84,7 +83,7 @@
"source": [
"### Pooling\n",
"\n",
"We also add pooling layers to reduce the image size between layers. For example, max pooling: (also from [here]([page](https://cs231n.github.io/convolutional-networks/))\n",
"We also add pooling layers to reduce the image size between layers. For example, max pooling (also from [here](https://cs231n.github.io/convolutional-networks/)):\n",
"\n",
"![maxpool](https://cs231n.github.io/assets/cnn/maxpool.jpeg)"
]
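Max pooling with a 2×2 window and stride 2, as in the image above, can be sketched in a few lines of NumPy (again a naive loop version, for illustration only):

```python
import numpy as np

def max_pool2d(X, size=2, stride=2):
    """Max pooling over non-overlapping size x size windows of a 2D array."""
    H, W = X.shape
    out = np.zeros((H // stride, W // stride))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Take the maximum over each window.
            out[i, j] = X[i * stride:i * stride + size,
                          j * stride:j * stride + size].max()
    return out

X = np.array([[1, 1, 2, 4],
              [5, 6, 7, 8],
              [3, 2, 1, 0],
              [1, 2, 3, 4]], dtype=float)
print(max_pool2d(X))  # [[6. 8.] [3. 4.]]
```

Each 2×2 block of the input is replaced by its maximum, halving both spatial dimensions.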

Large diffs are not rendered by default.

File renamed without changes.
File renamed without changes.
17 changes: 9 additions & 8 deletions genindex.html
@@ -175,17 +175,18 @@
<p aria-level="2" class="caption" role="heading"><span class="caption-text">Tutorials</span></p>
<ul class="nav bd-sidenav">
<li class="toctree-l1"><a class="reference internal" href="notebooks/1-datasets-uproot.html">1. Loading Datasets</a></li>
<li class="toctree-l1 has-children"><a class="reference internal" href="notebooks/2-dense.html">2. Dense networks</a><input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox"/><label class="toctree-toggle" for="toctree-checkbox-1"><i class="fa-solid fa-chevron-down"></i></label><ul>
<li class="toctree-l2"><a class="reference internal" href="notebooks/2.1-dense-keras.html">2.1. Dense neural network with Keras</a></li>
<li class="toctree-l2"><a class="reference internal" href="notebooks/2.2-dense-pytorch.html">2.2. Dense neural network with PyTorch</a></li>
<li class="toctree-l2"><a class="reference internal" href="notebooks/2.3-dense-bayesian-optimization.html">2.3. Optimize a dense network with Bayesian optimization</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/2-boosted-decision-tree.html">2. Boosted decision trees with xgboost</a></li>
<li class="toctree-l1 has-children"><a class="reference internal" href="notebooks/3-dense.html">3. Dense networks</a><input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox"/><label class="toctree-toggle" for="toctree-checkbox-1"><i class="fa-solid fa-chevron-down"></i></label><ul>
<li class="toctree-l2"><a class="reference internal" href="notebooks/3.1-dense-keras.html">3.1. Dense neural network with Keras</a></li>
<li class="toctree-l2"><a class="reference internal" href="notebooks/3.2-dense-pytorch.html">3.2. Dense neural network with PyTorch</a></li>
<li class="toctree-l2"><a class="reference internal" href="notebooks/3.3-dense-bayesian-optimization.html">3.3. Optimize a dense network with Bayesian optimization</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/3-conv2d.html">3. Convolutional Neural Networks for Jet-Images</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/4-gnn-cora.html">4. Graph Neural Network (GNN) with PyTorch Geometric</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/5-vae-mnist.html">5. Variational Autoencoders with Keras and MNIST</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/4-conv2d.html">4. Convolutional Neural Networks for Jet-Images</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/5-gnn-cora.html">5. Graph Neural Network (GNN) with PyTorch Geometric</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/6-vae-mnist.html">6. Variational Autoencoders with Keras and MNIST</a></li>

<li class="toctree-l1"><a class="reference internal" href="notebooks/6-gan-mnist.html">7. Generative Adversarial Networks with Keras and MNIST</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/7-gan-mnist.html">8. Generative Adversarial Networks with Keras and MNIST</a></li>
</ul>

</div>
21 changes: 11 additions & 10 deletions index.html
@@ -177,17 +177,18 @@
<p aria-level="2" class="caption" role="heading"><span class="caption-text">Tutorials</span></p>
<ul class="nav bd-sidenav">
<li class="toctree-l1"><a class="reference internal" href="notebooks/1-datasets-uproot.html">1. Loading Datasets</a></li>
<li class="toctree-l1 has-children"><a class="reference internal" href="notebooks/2-dense.html">2. Dense networks</a><input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox"/><label class="toctree-toggle" for="toctree-checkbox-1"><i class="fa-solid fa-chevron-down"></i></label><ul>
<li class="toctree-l2"><a class="reference internal" href="notebooks/2.1-dense-keras.html">2.1. Dense neural network with Keras</a></li>
<li class="toctree-l2"><a class="reference internal" href="notebooks/2.2-dense-pytorch.html">2.2. Dense neural network with PyTorch</a></li>
<li class="toctree-l2"><a class="reference internal" href="notebooks/2.3-dense-bayesian-optimization.html">2.3. Optimize a dense network with Bayesian optimization</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/2-boosted-decision-tree.html">2. Boosted decision trees with xgboost</a></li>
<li class="toctree-l1 has-children"><a class="reference internal" href="notebooks/3-dense.html">3. Dense networks</a><input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox"/><label class="toctree-toggle" for="toctree-checkbox-1"><i class="fa-solid fa-chevron-down"></i></label><ul>
<li class="toctree-l2"><a class="reference internal" href="notebooks/3.1-dense-keras.html">3.1. Dense neural network with Keras</a></li>
<li class="toctree-l2"><a class="reference internal" href="notebooks/3.2-dense-pytorch.html">3.2. Dense neural network with PyTorch</a></li>
<li class="toctree-l2"><a class="reference internal" href="notebooks/3.3-dense-bayesian-optimization.html">3.3. Optimize a dense network with Bayesian optimization</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/3-conv2d.html">3. Convolutional Neural Networks for Jet-Images</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/4-gnn-cora.html">4. Graph Neural Network (GNN) with PyTorch Geometric</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/5-vae-mnist.html">5. Variational Autoencoders with Keras and MNIST</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/4-conv2d.html">4. Convolutional Neural Networks for Jet-Images</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/5-gnn-cora.html">5. Graph Neural Network (GNN) with PyTorch Geometric</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/6-vae-mnist.html">6. Variational Autoencoders with Keras and MNIST</a></li>

<li class="toctree-l1"><a class="reference internal" href="notebooks/6-gan-mnist.html">7. Generative Adversarial Networks with Keras and MNIST</a></li>
<li class="toctree-l1"><a class="reference internal" href="notebooks/7-gan-mnist.html">8. Generative Adversarial Networks with Keras and MNIST</a></li>
</ul>

</div>
@@ -424,8 +425,8 @@ <h1>CMS Machine Learning Hands-on Advanced Tutorial Session (HATS)<a class="head
<section id="introduction">
<h2>Introduction<a class="headerlink" href="#introduction" title="Permalink to this heading">#</a></h2>
<p>This is a set of tutorials for the CMS Machine Learning Hands-on Advanced Tutorial Session (HATS).
They are intended to show you how to build machine learning models in python, using <code class="docutils literal notranslate"><span class="pre">Keras</span></code>, <code class="docutils literal notranslate"><span class="pre">TensorFlow</span></code>, and <code class="docutils literal notranslate"><span class="pre">PyTorch</span></code>, and use them in your <code class="docutils literal notranslate"><span class="pre">ROOT</span></code>-based analyses.
We will build event-level classifiers for differentiating VBF Higgs and standard model background 4 muon events and jet-level classifiers for differentiating boosted W boson jets from QCD jets using dense and convolutional neural networks.
They are intended to show you how to build machine learning models in python, using <code class="docutils literal notranslate"><span class="pre">xgboost</span></code>, <code class="docutils literal notranslate"><span class="pre">Keras</span></code>, <code class="docutils literal notranslate"><span class="pre">TensorFlow</span></code>, and <code class="docutils literal notranslate"><span class="pre">PyTorch</span></code>, and use them in your <code class="docutils literal notranslate"><span class="pre">ROOT</span></code>-based analyses.
We will build event-level classifiers for differentiating VBF Higgs and standard model background 4 muon events and jet-level classifiers for differentiating boosted W boson jets from QCD jets using BDTs as well as dense and convolutional neural networks.
We will also explore more advanced models such as graph neural networks (GNNs), variational autoencoders (VAEs), and generative adversarial networks (GANs) on simple datasets.</p>
</section>
<section id="setup">