
Preparing for DL2 tutorials

This commit is contained in:
phlippe
2022-03-02 13:26:33 +01:00
parent 1b64769d9d
commit 2e1f0e5b04
5 changed files with 256 additions and 3 deletions

docs/conf.py

@@ -18,7 +18,7 @@ import sphinx_rtd_theme
 # -- Project information -----------------------------------------------------
 
 project = 'UvA DL Notebooks'
-copyright = '2021, Phillip Lippe'
+copyright = '2022, Phillip Lippe'
 author = 'Phillip Lippe'
 
 # The full version, including alpha/beta/rc tags

docs/index.rst

@@ -100,7 +100,7 @@ This is the first time we present these tutorials during the Deep Learning cours
    tutorial_notebooks/guide3/Debugging_PyTorch
 
 .. toctree::
-   :caption: Jupyter notebooks
+   :caption: Deep Learning 1
    :maxdepth: 2
 
    tutorial_notebooks/tutorial2/Introduction_to_PyTorch
@@ -116,4 +116,10 @@ This is the first time we present these tutorials during the Deep Learning cours
    tutorial_notebooks/tutorial12/Autoregressive_Image_Modeling
    tutorial_notebooks/tutorial15/Vision_Transformer
    tutorial_notebooks/tutorial16/Meta_Learning
-   tutorial_notebooks/tutorial17/SimCLR
+   tutorial_notebooks/tutorial17/SimCLR
+
+.. toctree::
+   :caption: Deep Learning 2
+   :maxdepth: 2

docs/tutorial_notebooks/DL2/template/README.md

@@ -0,0 +1,10 @@
# Template for Notebook Tutorials in the Deep Learning 2 course
This folder contains a template for creating a new notebook tutorial for the Deep Learning 2 course at the University of Amsterdam. See `TemplateNotebook.ipynb` for instructions on how to write a tutorial. Below, we describe how to create a new tutorial from scratch and add it to this repository and the website:
* Fork this repository.
* In your fork, duplicate this `template` folder and rename the copy after your tutorial (we want one folder per notebook).
* Remove this README and the `example_image.svg` file from the new folder. If you want to add images to your notebook (which is recommended), place them in the same folder. The template gives an example of how to include images.
* Once you have finished your tutorial, give the notebook a full, fresh run-through from top to bottom. The outputs of all cells will be shown on the website, so make sure everything looks as you want it.
* Go to the repository root and open the file `docs/index.rst`. At the end of the file, you will find a list of all notebooks. Add the path to your notebook under the 'Deep Learning 2' section; otherwise, it will be ignored and not appear on the website.
* It is recommended to build the website locally before opening a pull request. Install the necessary packages via `docs/requirements.txt`, go to `docs/`, and run `make html`. From the folder `_build/html/`, you can then start a simple Python server with `python -m http.server` and check that everything is rendered as you want it (see the command-line sketch after this list).
* Finally, create a pull request from your fork to the `master` branch of the original uvadlc_notebooks repo. We can discuss changes to the notebook there before merging it into the repo.
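
The run-through and local build can also be done from the command line. Below is a minimal sketch, assuming your fork is checked out locally and your notebook lives at the hypothetical path `docs/tutorial_notebooks/DL2/my_tutorial/MyTutorial.ipynb`; adjust folder and file names to your own tutorial:

```bash
# Re-execute the notebook top to bottom so that all cell outputs are fresh
jupyter nbconvert --to notebook --execute --inplace \
    docs/tutorial_notebooks/DL2/my_tutorial/MyTutorial.ipynb

# Install the documentation dependencies and build the website locally
pip install -r docs/requirements.txt
cd docs
make html

# Serve the built site and inspect it at http://localhost:8000
cd _build/html
python -m http.server
```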

docs/tutorial_notebooks/DL2/template/TemplateNotebook.ipynb

@@ -0,0 +1,202 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial N: Title\n",
"\n",
"**Filled notebook:** \n",
"[![View on Github](https://img.shields.io/static/v1.svg?logo=github&label=Repo&message=View%20On%20Github&color=lightgrey)](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/DL2/template/TemplateNotebook.ipynb)\n",
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/DL2/template/TemplateNotebook.ipynb) \n",
"**Pre-trained models:** \n",
"[![View files on Github](https://img.shields.io/static/v1.svg?logo=github&label=Repo&message=View%20On%20Github&color=lightgrey)](https://github.com/phlippe/saved_models/tree/main/DL2/template/) \n",
"**Recordings:** \n",
"[![YouTube - Part N](https://img.shields.io/static/v1.svg?logo=youtube&label=YouTube&message=Part%20N&color=red)](https://youtu.be/waVZDFR-06U) \n",
"**Authors:**\n",
"Your name here"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__TODOs:__\n",
"\n",
"* Update the links for the filled notebook (both github and collab) to your new notebook\n",
"* Update the link for the saved models\n",
"* Update the link for the YouTube recording if you have any. If you want to upload one to the UvA DLC YouTube account, you can contact Phillip.\n",
"* Fill in the author names"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, you are supposed to add some intro about the topic. Give a short abstract motivating the tutorial, and then detail what will be done. It is good to have pictures here as well. If you add images, make sure to use SVGs for best resolution, and put them in the same folder as your notebook. An example is given below (use any HTML editing you like).\n",
"\n",
"<center width=\"100%\"><img src=\"example_image.svg\" width=\"350px\" style=\"padding: 20px\"></center>\n",
"\n",
"The next cell is where you import all your packages that you need. In case you have non-standard packages, make sure to install them to make it executable on GoogleColab (see for instance the PyTorch Lightning install)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Global seed set to 42\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Device: cuda:0\n"
]
}
],
"source": [
"## Standard libraries\n",
"import os\n",
"import numpy as np\n",
"\n",
"## Imports for plotting\n",
"import matplotlib.pyplot as plt\n",
"plt.set_cmap('cividis')\n",
"%matplotlib inline\n",
"from IPython.display import set_matplotlib_formats\n",
"set_matplotlib_formats('svg', 'pdf') # For export\n",
"import matplotlib\n",
"matplotlib.rcParams['lines.linewidth'] = 2.0\n",
"import seaborn as sns\n",
"sns.set()\n",
"\n",
"## tqdm for loading bars\n",
"from tqdm.notebook import tqdm\n",
"\n",
"## PyTorch\n",
"import torch\n",
"import torch.nn as nn\n",
"import torch.nn.functional as F\n",
"import torch.utils.data as data\n",
"import torch.optim as optim\n",
"\n",
"## Torchvision (TODO: ONLY NEEDED FOR VISION-BASED DATASETS)\n",
"import torchvision\n",
"from torchvision import transforms\n",
"\n",
"# PyTorch Lightning\n",
"try:\n",
" import pytorch_lightning as pl\n",
"except ModuleNotFoundError: # Google Colab does not have PyTorch Lightning installed by default. Hence, we do it here if necessary\n",
" !pip install --quiet pytorch-lightning>=1.4\n",
" import pytorch_lightning as pl\n",
"from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint\n",
"\n",
"# Import tensorboard (TODO: REMOVE IF YOU DO NOT WANT TO RUN TENSORBOARDS INTERACTIVELY)\n",
"%load_ext tensorboard\n",
"\n",
"# Path to the folder where the datasets are/should be downloaded (e.g. CIFAR10)\n",
"DATASET_PATH = \"../data\"\n",
"# Path to the folder where the pretrained models are saved (TODO: UPDATE LINK BELOW TO A FOLDER WITH THE NAME OF YOUR NOTEBOOK FOLDER)\n",
"CHECKPOINT_PATH = \"../../saved_models/DL2/template\"\n",
"\n",
"# Setting the seed\n",
"pl.seed_everything(42)\n",
"\n",
"# Ensure that all operations are deterministic on GPU (if used) for reproducibility\n",
"torch.backends.cudnn.determinstic = True\n",
"torch.backends.cudnn.benchmark = False\n",
"\n",
"device = torch.device(\"cuda:0\") if torch.cuda.is_available() else torch.device(\"cpu\")\n",
"print(\"Device:\", device)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You will likely have some pretrained models that you want to share with the students, and download when running on GoogleColab. You can do this with the cell below. If you don't have any pretrained models, you can remove the cell."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import urllib.request\n",
"from urllib.error import HTTPError\n",
"# Github URL where saved models are stored for this tutorial (TODO: UPDATE URL BELOW TO YOUR NOTEBOOK FOLDER)\n",
"base_url = \"https://raw.githubusercontent.com/phlippe/saved_models/main/DL2/template/\"\n",
"# Files to download\n",
"pretrained_files = [] # (TODO: ADD A LIST OF STRINGS THAT ARE THE FILES YOU WANT TO DOWNLOAD. PATHS WITH RESPECT TO BASE_URL)\n",
"# Create checkpoint path if it doesn't exist yet\n",
"os.makedirs(CHECKPOINT_PATH, exist_ok=True)\n",
"\n",
"# For each file, check whether it already exists. If not, try downloading it.\n",
"for file_name in pretrained_files:\n",
" file_path = os.path.join(CHECKPOINT_PATH, file_name)\n",
" if \"/\" in file_name:\n",
" os.makedirs(file_path.rsplit(\"/\",1)[0], exist_ok=True)\n",
" if not os.path.isfile(file_path):\n",
" file_url = base_url + file_name\n",
" print(f\"Downloading {file_url}...\")\n",
" try:\n",
" urllib.request.urlretrieve(file_url, file_path)\n",
" except HTTPError as e:\n",
" print(\"Something went wrong. Please try later again, or contact the author with the full output including the following error:\\n\", e)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## My Tutorial Topic\n",
"\n",
"Start your notebook from here. Introduce the topics, go step by step, don't forget to explain the code, etc.\n",
"\n",
"You can make use of different heading levels, they will be shown as tabs on the RTD website."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusion\n",
"\n",
"Give a conclusion and summary of the notebook. Give a retroview: what have the students learned from this notebook, what is there to further explore in this topic, anything critical to keep in mind?\n",
"\n",
"### References\n",
"\n",
"Give a list of references, especially the papers that introduce the methods you implemented in this notebook."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

docs/tutorial_notebooks/DL2/template/example_image.svg

@@ -0,0 +1,35 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="100.01784mm"
height="55.491974mm"
viewBox="0 0 100.01784 55.491974"
version="1.1"
id="svg5"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<defs
id="defs2" />
<g
id="layer1"
transform="translate(-63.191121,-27.480598)">
<rect
id="rect31"
width="99.452835"
height="54.926975"
x="63.473621"
y="27.763098"
style="fill:#e6e6e6;stroke:#000000;stroke-width:0.565;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:10.5833px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.264583"
x="73.636368"
y="57.409668"
id="text4956"><tspan
id="tspan4954"
style="stroke-width:0.264583"
x="73.636368"
y="57.409668">Example Image</tspan></text>
</g>
</svg>
