Commit

Deploy to GitHub Pages on master [ci skip]
facebook-circleci-bot committed Jul 22, 2023
1 parent 30f87b3 commit 9fbe539
Showing 2,316 changed files with 19,577 additions and 8,369 deletions.
12 changes: 6 additions & 6 deletions assets/hub/datvuthanh_hybridnets.ipynb
@@ -2,7 +2,7 @@
 "cells": [
 {
 "cell_type": "markdown",
-"id": "f8875e26",
+"id": "c4fca20c",
 "metadata": {},
 "source": [
 "### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "088fa4c6",
+"id": "fe8bffe9",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -34,7 +34,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "4a694479",
+"id": "aee100b0",
 "metadata": {},
 "source": [
 "## Model Description\n",
@@ -93,7 +93,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "8a0567a9",
+"id": "06cda4c6",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -109,7 +109,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "1efcd7ff",
+"id": "517b757b",
 "metadata": {},
 "source": [
 "### Citation\n",
@@ -120,7 +120,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "736e0258",
+"id": "59d6887e",
 "metadata": {
 "attributes": {
 "classes": [
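
The hunks above only touch regenerated cell ids, but the surrounding context notes that the notebook is "optionally accelerated with a GPU runtime". A minimal sketch of what that amounts to in practice, assuming the `datvuthanh/hybridnets` repo and `hybridnets` entry point named on the notebook's PyTorch Hub page (only fragments of the notebook appear in this diff, so treat the names and the input size as assumptions):

```python
import torch

# Pick a device: the notebook runs on CPU but is optionally GPU-accelerated.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Repo and entry-point names are assumed from the upstream Hub listing.
model = torch.hub.load("datvuthanh/hybridnets", "hybridnets", pretrained=True)
model = model.to(device).eval()

# Dummy input; the real notebook feeds a 640x384 road-scene image (size assumed).
img = torch.randn(1, 3, 384, 640, device=device)
with torch.no_grad():
    outputs = model(img)  # detection + segmentation heads
```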
12 changes: 6 additions & 6 deletions assets/hub/facebookresearch_WSL-Images_resnext.ipynb
@@ -2,7 +2,7 @@
 "cells": [
 {
 "cell_type": "markdown",
-"id": "f38b7743",
+"id": "97be2bbe",
 "metadata": {},
 "source": [
 "### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "2add0149",
+"id": "ae2d1163",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -39,7 +39,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "de98a6da",
+"id": "f31b4ee9",
 "metadata": {},
 "source": [
 "All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "dbe83cb9",
+"id": "b5953bce",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -67,7 +67,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "47b8da04",
+"id": "ec89f65f",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -99,7 +99,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "c92b181d",
+"id": "44b68d2b",
 "metadata": {},
 "source": [
 "### Model Description\n",
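
The context line about pre-trained models expecting identically normalized inputs refers to the standard ImageNet preprocessing. A minimal sketch, assuming the `resnext101_32x8d_wsl` entry point from the `facebookresearch/WSL-Images` Hub repo and the usual ImageNet mean/std; the image path is a placeholder:

```python
import torch
from PIL import Image
from torchvision import transforms

# Standard ImageNet preprocessing (mean/std values assumed to match the notebook).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Entry-point name assumed from the upstream Hub listing.
model = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl")
model.eval()

input_image = Image.open("dog.jpg").convert("RGB")   # placeholder image path
input_batch = preprocess(input_image).unsqueeze(0)   # add batch dimension

with torch.no_grad():
    output = model(input_batch)                      # (1, 1000) ImageNet logits
probabilities = torch.nn.functional.softmax(output[0], dim=0)
```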
10 changes: 5 additions & 5 deletions assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb
@@ -2,7 +2,7 @@
 "cells": [
 {
 "cell_type": "markdown",
-"id": "623d35c5",
+"id": "6e7932fd",
 "metadata": {},
 "source": [
 "### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "dd34b06d",
+"id": "4a2e4c70",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -34,7 +34,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "510d173c",
+"id": "22a50b4f",
 "metadata": {},
 "source": [
 "The input to the model is a noise vector of shape `(N, 120)` where `N` is the number of images to be generated.\n",
@@ -45,7 +45,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "74a351f7",
+"id": "342238f2",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -63,7 +63,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "ec2e2542",
+"id": "1399385f",
 "metadata": {},
 "source": [
 "You should see an image similar to the one on the left.\n",
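
The context line spells out the generator's input contract: a noise tensor of shape `(N, 120)`. A minimal sketch of sampling and generating, assuming the `pytorch_GAN_zoo` Hub entry point and its `buildNoiseData`/`test` helpers (names taken from the upstream Hub example, not from this diff):

```python
import torch

use_gpu = torch.cuda.is_available()
model = torch.hub.load("facebookresearch/pytorch_GAN_zoo:hub", "DCGAN",
                       pretrained=True, useGPU=use_gpu)

num_images = 4
# Equivalent to torch.randn(num_images, 120): a (N, 120) noise batch.
noise, _ = model.buildNoiseData(num_images)

with torch.no_grad():
    generated = model.test(noise)  # (N, 3, H, W) images, roughly in [-1, 1]
```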
10 changes: 5 additions & 5 deletions assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb
@@ -2,7 +2,7 @@
 "cells": [
 {
 "cell_type": "markdown",
-"id": "19e5fa65",
+"id": "21956cfb",
 "metadata": {},
 "source": [
 "### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "99e3f95e",
+"id": "1d8c1202",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -44,7 +44,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "b06276f2",
+"id": "4db5a2c3",
 "metadata": {},
 "source": [
 "The input to the model is a noise vector of shape `(N, 512)` where `N` is the number of images to be generated.\n",
@@ -55,7 +55,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "43bfd202",
+"id": "79878792",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -74,7 +74,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "21a47cdd",
+"id": "9841826a",
 "metadata": {},
 "source": [
 "You should see an image similar to the one on the left.\n",
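
Same pattern as the DCGAN notebook above, except that PGAN expects a 512-dimensional latent vector, i.e. noise of shape `(N, 512)`; the `celebAHQ-512` checkpoint name is assumed from the upstream Hub example:

```python
import torch

use_gpu = torch.cuda.is_available()
model = torch.hub.load("facebookresearch/pytorch_GAN_zoo:hub", "PGAN",
                       model_name="celebAHQ-512", pretrained=True, useGPU=use_gpu)

noise, _ = model.buildNoiseData(4)   # shape (4, 512)
with torch.no_grad():
    images = model.test(noise)
```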
36 changes: 18 additions & 18 deletions assets/hub/facebookresearch_pytorchvideo_resnet.ipynb
@@ -2,7 +2,7 @@
 "cells": [
 {
 "cell_type": "markdown",
-"id": "68fba058",
+"id": "50e5d38c",
 "metadata": {},
 "source": [
 "# 3D ResNet\n",
@@ -22,7 +22,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "13eef51d",
+"id": "54a7a6fc",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -33,7 +33,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "1e5fbdd5",
+"id": "f0384287",
 "metadata": {},
 "source": [
 "Import remaining functions:"
@@ -42,7 +42,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "9ba0c094",
+"id": "8dac94f6",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -64,7 +64,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "35080209",
+"id": "b0d1f87c",
 "metadata": {},
 "source": [
 "#### Setup\n",
@@ -75,7 +75,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "849cc207",
+"id": "3f7aa937",
 "metadata": {
 "attributes": {
 "classes": [
@@ -94,7 +94,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "0910989c",
+"id": "c22cf102",
 "metadata": {},
 "source": [
 "Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids."
@@ -103,7 +103,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "e6ff7b48",
+"id": "82a2d207",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -116,7 +116,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "6e934a27",
+"id": "3226c98b",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -131,7 +131,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "c28e3f6a",
+"id": "0d284d6a",
 "metadata": {},
 "source": [
 "#### Define input transform"
@@ -140,7 +140,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "8edd8ee4",
+"id": "c4d07fef",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -174,7 +174,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "0c94f73f",
+"id": "80f0cbcb",
 "metadata": {},
 "source": [
 "#### Run Inference\n",
@@ -185,7 +185,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "5069c1e1",
+"id": "39ef850d",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -197,7 +197,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "de89d32b",
+"id": "43817feb",
 "metadata": {},
 "source": [
 "Load the video and transform it to the input format required by the model."
@@ -206,7 +206,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "9f229ada",
+"id": "e125ecf2",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -231,7 +231,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "c773caed",
+"id": "29d2fcbd",
 "metadata": {},
 "source": [
 "#### Get Predictions"
@@ -240,7 +240,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "88f2fb88",
+"id": "bcdcdcdb",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -259,7 +259,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "7190c0b2",
+"id": "0ef2a458",
 "metadata": {},
 "source": [
 "### Model Description\n",
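
The hunks above walk through the notebook's steps (setup, the Kinetics-400 label mapping, the input transform, loading a clip, inference, predictions) but show only their markdown headers. A condensed sketch of that pipeline, with the imports, label-map URL, and transform constants assumed from the upstream PyTorchVideo Torch Hub tutorial for `slow_r50` rather than taken from this diff; the video path is a placeholder:

```python
import json
import urllib.request

import torch
from pytorchvideo.data.encoded_video import EncodedVideo
from pytorchvideo.transforms import (ApplyTransformToKey, ShortSideScale,
                                     UniformTemporalSubsample)
from torchvision.transforms import Compose, Lambda
from torchvision.transforms._transforms_video import (CenterCropVideo,
                                                       NormalizeVideo)

# Load the 3D ResNet (slow_r50) the notebook describes.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.hub.load("facebookresearch/pytorchvideo", "slow_r50", pretrained=True)
model = model.eval().to(device)

# Kinetics-400 id -> label mapping (URL assumed from the upstream tutorial).
url = "https://dl.fbaipublicfiles.com/pyslowfast/dataset/class_names/kinetics_classnames.json"
with urllib.request.urlopen(url) as f:
    kinetics_classnames = json.load(f)
id_to_label = {int(v): str(k).replace('"', "") for k, v in kinetics_classnames.items()}

# Input transform; constants follow the slow_r50 recipe
# (8 frames, short side 256, 256x256 crop, Kinetics mean/std).
transform = ApplyTransformToKey(
    key="video",
    transform=Compose([
        UniformTemporalSubsample(8),
        Lambda(lambda x: x / 255.0),
        NormalizeVideo([0.45, 0.45, 0.45], [0.225, 0.225, 0.225]),
        ShortSideScale(size=256),
        CenterCropVideo(256),
    ]),
)

# Load a clip and run inference.
clip_duration = (8 * 8) / 30  # num_frames * sampling_rate / fps
video = EncodedVideo.from_path("archery.mp4")  # placeholder video path
video_data = transform(video.get_clip(start_sec=0, end_sec=clip_duration))
inputs = video_data["video"].unsqueeze(0).to(device)  # (1, C, T, H, W)

with torch.no_grad():
    preds = torch.nn.functional.softmax(model(inputs), dim=1)
top5 = preds.topk(k=5).indices[0]
print([id_to_label[int(i)] for i in top5])
```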
(Diff truncated here; the remaining changed files in this commit are not shown.)
