From 84ae227cba13d736bdf55bb11013af21d11fab77 Mon Sep 17 00:00:00 2001 From: natke Date: Mon, 21 Oct 2024 12:24:02 -0700 Subject: [PATCH 1/9] Update nightly package name --- .../execution-providers/QNN-ExecutionProvider.md | 2 +- docs/get-started/with-c.md | 2 +- docs/get-started/with-cpp.md | 2 +- docs/get-started/with-csharp.md | 2 +- docs/get-started/with-python.md | 14 +++++++------- docs/install/index.md | 16 ++++++++-------- 6 files changed, 19 insertions(+), 19 deletions(-) diff --git a/docs/execution-providers/QNN-ExecutionProvider.md b/docs/execution-providers/QNN-ExecutionProvider.md index 1cf50ecadc517..595751e0442c5 100644 --- a/docs/execution-providers/QNN-ExecutionProvider.md +++ b/docs/execution-providers/QNN-ExecutionProvider.md @@ -43,7 +43,7 @@ Note: Starting version 1.18.0 , you do not need to separately download and insta - Python 3.11.x - Numpy 1.25.2 or >= 1.26.4 - Install: `pip install onnxruntime-qnn` - - Install nightly package `python -m pip install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ort-nightly-qnn` + - Install nightly package `python -m pip install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/onnxruntime-qnn` ## Qualcomm AI Hub Qualcomm AI Hub can be used to optimize and run models on Qualcomm hosted devices. 
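The renamed nightly package is pulled from a custom index, so the index URL and the package name must be passed as separate arguments (the fused `simple/onnxruntime-qnn` form in the command above is what a later patch in this series corrects). A minimal sketch of the invocation pattern — the helper name is illustrative, not part of the docs:

```python
# Build the nightly-install invocation as an argument list, so the feed URL
# and the package name can never fuse into one token.
ORT_NIGHTLY_INDEX = (
    "https://aiinfra.pkgs.visualstudio.com/PublicPackages"
    "/_packaging/ORT-Nightly/pypi/simple/"
)

def nightly_install_args(package: str) -> list:
    # --pre allows dev versions; --extra-index-url keeps PyPI as a fallback
    return [
        "python", "-m", "pip", "install", "--pre",
        "--extra-index-url", ORT_NIGHTLY_INDEX,
        package,
    ]

print(" ".join(nightly_install_args("onnxruntime-qnn")))
```

The list form also feeds directly into `subprocess.run` without shell quoting concerns.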
diff --git a/docs/get-started/with-c.md b/docs/get-started/with-c.md index 22d6d70a4405f..8817a35c72717 100644 --- a/docs/get-started/with-c.md +++ b/docs/get-started/with-c.md @@ -20,7 +20,7 @@ nav_order: 3 | [Microsoft.ML.OnnxRuntime](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | CPU (Release) |Windows, Linux, Mac, X64, X86 (Windows-only), ARM64 (Windows-only)...more details: [compatibility](../reference/compatibility) | | [Microsoft.ML.OnnxRuntime.Gpu](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | GPU - CUDA (Release) | Windows, Linux, Mac, X64...more details: [compatibility](../reference/compatibility) | | [Microsoft.ML.OnnxRuntime.DirectML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.directml) | GPU - DirectML (Release) | Windows 10 1709+ | -| [ort-nightly](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | CPU, GPU (Dev), CPU (On-Device Training) | Same as Release versions | +| [onnxruntime](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | CPU, GPU (Dev), CPU (On-Device Training) | Same as Release versions | | [Microsoft.ML.OnnxRuntime.Training](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | CPU On-Device Training (Release) |Windows, Linux, Mac, X64, X86 (Windows-only), ARM64 (Windows-only)...more details: [compatibility](../reference/compatibility) | diff --git a/docs/get-started/with-cpp.md b/docs/get-started/with-cpp.md index bdbe29babaac6..8f58a31885dec 100644 --- a/docs/get-started/with-cpp.md +++ b/docs/get-started/with-cpp.md @@ -20,7 +20,7 @@ nav_order: 2 | [Microsoft.ML.OnnxRuntime](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | CPU (Release) |Windows, Linux, Mac, X64, X86 (Windows-only), ARM64 (Windows-only)...more details: [compatibility](../reference/compatibility.md) | | [Microsoft.ML.OnnxRuntime.Gpu](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | GPU - CUDA (Release) | Windows, Linux, Mac, 
X64...more details: [compatibility](../reference/compatibility.md) | | [Microsoft.ML.OnnxRuntime.DirectML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.directml) | GPU - DirectML (Release) | Windows 10 1709+ | -| [ort-nightly](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | CPU, GPU (Dev), CPU (On-Device Training) | Same as Release versions | +| [onnxruntime](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | CPU, GPU (Dev), CPU (On-Device Training) | Same as Release versions | | [Microsoft.ML.OnnxRuntime.Training](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | CPU On-Device Training (Release) |Windows, Linux, Mac, X64, X86 (Windows-only), ARM64 (Windows-only)...more details: [compatibility](../reference/compatibility.md) | .zip and .tgz files are also included as assets in each [Github release](https://github.com/microsoft/onnxruntime/releases). diff --git a/docs/get-started/with-csharp.md b/docs/get-started/with-csharp.md index 530c51c04d52e..c1532dc283c8c 100644 --- a/docs/get-started/with-csharp.md +++ b/docs/get-started/with-csharp.md @@ -217,7 +217,7 @@ The ONNX runtime provides a C# .NET binding for running inference on ONNX models | [Microsoft.ML.OnnxRuntime](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | CPU (Release) |Windows, Linux, Mac, X64, X86 (Windows-only), ARM64 (Windows-only)...more details: [compatibility](../reference/compatibility.md) | | [Microsoft.ML.OnnxRuntime.Gpu](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | GPU - CUDA (Release) | Windows, Linux, Mac, X64...more details: [compatibility](../reference/compatibility.md) | | [Microsoft.ML.OnnxRuntime.DirectML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.directml) | GPU - DirectML (Release) | Windows 10 1709+ | -| [ort-nightly](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | CPU, GPU (Dev), CPU (On-Device Training) | 
Same as Release versions | +| [onnxruntime](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | CPU, GPU (Dev), CPU (On-Device Training) | Same as Release versions | | [Microsoft.ML.OnnxRuntime.Training](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | CPU On-Device Training (Release) |Windows, Linux, Mac, X64, X86 (Windows-only), ARM64 (Windows-only)...more details: [compatibility](../reference/compatibility.md) | diff --git a/docs/get-started/with-python.md b/docs/get-started/with-python.md index 7ff3d1048c58d..1a41bf0072812 100644 --- a/docs/get-started/with-python.md +++ b/docs/get-started/with-python.md @@ -258,24 +258,24 @@ If using pip, run `pip install --upgrade pip` prior to downloading. | Artifact | Description | Supported Platforms | |----------- |-------------|---------------------| |[onnxruntime](https://pypi.org/project/onnxruntime)|CPU (Release)| Windows (x64), Linux (x64, ARM64), Mac (X64), | -|[ort-nightly](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly)|CPU (Dev) | Same as above | +|[onnxruntime](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime)|CPU (Dev) | Same as above | |[onnxruntime-gpu](https://pypi.org/project/onnxruntime-gpu)|GPU (Release)| Windows (x64), Linux (x64, ARM64) | -|[ort-nightly-gpu for CUDA 11.*](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-11-nightly/PyPI/ort-nightly-gpu) |GPU (Dev) | Windows (x64), Linux (x64, ARM64) | -|[ort-nightly-gpu for CUDA 12.*](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-gpu) |GPU (Dev) | Windows (x64), Linux (x64, ARM64) | +|[onnxruntime-gpu for CUDA 11.*](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-11-nightly/PyPI/onnxruntime-gpu) |GPU (Dev) | Windows (x64), Linux (x64, ARM64) | +|[onnxruntime-gpu for CUDA 
12.*](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-gpu) |GPU (Dev) | Windows (x64), Linux (x64, ARM64) | Before installing nightly package, you will need install dependencies first. ``` python -m pip install coloredlogs flatbuffers numpy packaging protobuf sympy ``` -Example to install ort-nightly-gpu for CUDA 11.*: +Example to install onnxruntime-gpu for CUDA 11.*: ``` -python -m pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-11-nightly/pypi/simple/ +python -m pip install onnxruntime-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-11-nightly/pypi/simple/ ``` -Example to install ort-nightly-gpu for CUDA 12.*: +Example to install onnxruntime-gpu for CUDA 12.*: ``` -python -m pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ +python -m pip install onnxruntime-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ ``` For Python compiler version notes, see [this page](https://github.com/microsoft/onnxruntime/tree/main/docs/Python_Dev_Notes.md) diff --git a/docs/install/index.md b/docs/install/index.md index 60057a88215bb..107bcd4cf7487 100644 --- a/docs/install/index.md +++ b/docs/install/index.md @@ -408,17 +408,17 @@ below: | | Official build | Nightly build | Reqs | |--------------|---------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------| | Python | If using pip, run `pip install --upgrade pip` prior to downloading. 
| | | -| | CPU: [**onnxruntime**](https://pypi.org/project/onnxruntime) | [ort-nightly (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly/overview) | | -| | GPU (CUDA/TensorRT) for CUDA 12.x: [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | [ort-nightly-gpu (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) | -| | GPU (CUDA/TensorRT) for CUDA 11.x: [**onnxruntime-gpu**](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-11/PyPI/onnxruntime-gpu/overview/) | [ort-nightly-gpu (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-11-nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) | -| | GPU (DirectML): [**onnxruntime-directml**](https://pypi.org/project/onnxruntime-directml/) | [ort-nightly-directml (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview/) | [View](../execution-providers/DirectML-ExecutionProvider.md#requirements) | +| | CPU: [**onnxruntime**](https://pypi.org/project/onnxruntime) | [onnxruntime (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly/overview) | | +| | GPU (CUDA/TensorRT) for CUDA 12.x: [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | onnxruntime-gpu (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) | +| | GPU (CUDA/TensorRT) for CUDA 11.x: [**onnxruntime-gpu**](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-11/PyPI/onnxruntime-gpu/overview/) | onnxruntime-gpu 
(dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-11-nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) | +| | GPU (DirectML): [**onnxruntime-directml**](https://pypi.org/project/onnxruntime-directml/) | onnxruntime-directml (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview/) | [View](../execution-providers/DirectML-ExecutionProvider.md#requirements) | | | OpenVINO: [**intel/onnxruntime**](https://github.com/intel/onnxruntime/releases/latest) - *Intel managed* | | [View](../build/eps.md#openvino) | | | TensorRT (Jetson): [**Jetson Zoo**](https://elinux.org/Jetson_Zoo#ONNX_Runtime) - *NVIDIA managed* | | | | | Azure (Cloud): [**onnxruntime-azure**](https://pypi.org/project/onnxruntime-azure/) | | | -| C#/C/C++ | CPU: [**Microsoft.ML.OnnxRuntime**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | [ort-nightly (dev)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | | -| | GPU (CUDA/TensorRT): [**Microsoft.ML.OnnxRuntime.Gpu**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | [ort-nightly (dev)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | [View](../execution-providers/CUDA-ExecutionProvider) | -| | GPU (DirectML): [**Microsoft.ML.OnnxRuntime.DirectML**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) | [ort-nightly (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview) | [View](../execution-providers/DirectML-ExecutionProvider) | -| WinML | [**Microsoft.AI.MachineLearning**](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) | [ort-nightly (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/NuGet/Microsoft.AI.MachineLearning/overview) | 
[View](https://docs.microsoft.com/en-us/windows/ai/windows-ml/port-app-to-nuget#prerequisites) | +| C#/C/C++ | CPU: [**Microsoft.ML.OnnxRuntime**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | onnxruntime (dev)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | | +| | GPU (CUDA/TensorRT): [**Microsoft.ML.OnnxRuntime.Gpu**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | onnxruntime (dev)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | [View](../execution-providers/CUDA-ExecutionProvider) | +| | GPU (DirectML): [**Microsoft.ML.OnnxRuntime.DirectML**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) | onnxruntime (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview) | [View](../execution-providers/DirectML-ExecutionProvider) | +| WinML | [**Microsoft.AI.MachineLearning**](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) | onnxruntime (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/NuGet/Microsoft.AI.MachineLearning/overview) | [View](https://docs.microsoft.com/en-us/windows/ai/windows-ml/port-app-to-nuget#prerequisites) | | Java | CPU: [**com.microsoft.onnxruntime:onnxruntime**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime) | | [View](../api/java) | | | GPU (CUDA/TensorRT): [**com.microsoft.onnxruntime:onnxruntime_gpu**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime_gpu) | | [View](../api/java) | | Android | [**com.microsoft.onnxruntime:onnxruntime-android**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime-android) | | [View](../install/index.md#install-on-android) | From 6f79cd641456421eadb6b4af628784cb7573747c Mon Sep 17 00:00:00 2001 From: natke Date: Mon, 21 Oct 2024 14:28:19 -0700 Subject: [PATCH 2/9] Update dev to nightly --- docs/install/index.md | 31 
++++++++++++++++++------------- 1 file changed, 18 insertions(+), 13 deletions(-) diff --git a/docs/install/index.md b/docs/install/index.md index 107bcd4cf7487..19b5abf050f2b 100644 --- a/docs/install/index.md +++ b/docs/install/index.md @@ -6,7 +6,7 @@ nav_order: 1 redirect_from: /docs/how-to/install --- -# Install ONNX Runtime (ORT) +# Install ONNX Runtime See the [installation matrix](https://onnxruntime.ai) for recommended instructions for desired combinations of target operating system, hardware, accelerator, and language. @@ -31,14 +31,16 @@ under [Compatibility](../reference/compatibility). The latest version is recommended. ### CUDA and CuDNN + For ONNX Runtime GPU package, it is required to install [CUDA](https://developer.nvidia.com/cuda-toolkit) and [cuDNN](https://developer.nvidia.com/cudnn). Check [CUDA execution provider requirements](../execution-providers/CUDA-ExecutionProvider.md#requirements) for compatible version of CUDA and cuDNN. + * cuDNN 8.x requires ZLib. Follow the [cuDNN 8.9 installation guide](https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn-890/install-guide/index.html) to install zlib in Linux or Windows. Note that the official gpu package does not support cuDNN 9.x. * The path of CUDA bin directory must be added to the PATH environment variable. * In Windows, the path of cuDNN bin directory must be added to the PATH environment variable. ## Python Installs -### Install ONNX Runtime (ORT) +### Install ONNX Runtime #### Install ONNX Runtime CPU @@ -47,6 +49,7 @@ pip install onnxruntime ``` #### Install ONNX Runtime GPU (CUDA 12.x) + The default CUDA version for [onnxruntime-gpu in pypi](https://pypi.org/project/onnxruntime-gpu) is 12.x since 1.19.0. 
```bash @@ -57,6 +60,7 @@ For previous versions, you can download here: [1.18.1](https://aiinfra.visualstu #### Install ONNX Runtime GPU (CUDA 11.x) + For Cuda 11.x, please use the following instructions to install from [ORT Azure Devops Feed](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-11/PyPI/onnxruntime-gpu/overview) for 1.19.2 or later. ```bash @@ -66,6 +70,7 @@ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio. For previous versions, you can download here: [1.18.1](https://pypi.org/project/onnxruntime-gpu/1.18.1/), [1.18.0](https://pypi.org/project/onnxruntime-gpu/1.18.0/) #### Install ONNX Runtime GPU (ROCm) + For ROCm, please follow instructions to install it at the [AMD ROCm install docs](https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.0.0/). The ROCm execution provider for ONNX Runtime is built and tested with ROCm 6.0.0. To build from source on Linux, follow the instructions [here](https://onnxruntime.ai/docs/build/eps.html#amd-rocm). @@ -89,7 +94,7 @@ pip install skl2onnx ## C#/C/C++/WinML Installs -### Install ONNX Runtime (ORT) +### Install ONNX Runtime #### Install ONNX Runtime CPU @@ -408,17 +413,17 @@ below: | | Official build | Nightly build | Reqs | |--------------|---------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------| | Python | If using pip, run `pip install --upgrade pip` prior to downloading. 
| | | -| | CPU: [**onnxruntime**](https://pypi.org/project/onnxruntime) | [onnxruntime (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly/overview) | | -| | GPU (CUDA/TensorRT) for CUDA 12.x: [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | onnxruntime-gpu (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) | -| | GPU (CUDA/TensorRT) for CUDA 11.x: [**onnxruntime-gpu**](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-11/PyPI/onnxruntime-gpu/overview/) | onnxruntime-gpu (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-11-nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) | -| | GPU (DirectML): [**onnxruntime-directml**](https://pypi.org/project/onnxruntime-directml/) | onnxruntime-directml (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview/) | [View](../execution-providers/DirectML-ExecutionProvider.md#requirements) | +| | CPU: [**onnxruntime**](https://pypi.org/project/onnxruntime) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly/overview) | | +| | GPU (CUDA/TensorRT) for CUDA 12.x: [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | [onnxruntime-gpu (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) | +| | GPU (CUDA/TensorRT) for CUDA 11.x: [**onnxruntime-gpu**](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-11/PyPI/onnxruntime-gpu/overview/) | [onnxruntime-gpu 
(nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-11-nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) | +| | GPU (DirectML): [**onnxruntime-directml**](https://pypi.org/project/onnxruntime-directml/) | [onnxruntime-directml (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview/) | [View](../execution-providers/DirectML-ExecutionProvider.md#requirements) | | | OpenVINO: [**intel/onnxruntime**](https://github.com/intel/onnxruntime/releases/latest) - *Intel managed* | | [View](../build/eps.md#openvino) | | | TensorRT (Jetson): [**Jetson Zoo**](https://elinux.org/Jetson_Zoo#ONNX_Runtime) - *NVIDIA managed* | | | | | Azure (Cloud): [**onnxruntime-azure**](https://pypi.org/project/onnxruntime-azure/) | | | -| C#/C/C++ | CPU: [**Microsoft.ML.OnnxRuntime**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | onnxruntime (dev)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | | -| | GPU (CUDA/TensorRT): [**Microsoft.ML.OnnxRuntime.Gpu**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | onnxruntime (dev)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | [View](../execution-providers/CUDA-ExecutionProvider) | -| | GPU (DirectML): [**Microsoft.ML.OnnxRuntime.DirectML**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) | onnxruntime (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview) | [View](../execution-providers/DirectML-ExecutionProvider) | -| WinML | [**Microsoft.AI.MachineLearning**](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) | onnxruntime (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/NuGet/Microsoft.AI.MachineLearning/overview) | 
[View](https://docs.microsoft.com/en-us/windows/ai/windows-ml/port-app-to-nuget#prerequisites) | +| C#/C/C++ | CPU: [**Microsoft.ML.OnnxRuntime**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | | +| | GPU (CUDA/TensorRT): [**Microsoft.ML.OnnxRuntime.Gpu**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | [View](../execution-providers/CUDA-ExecutionProvider) | +| | GPU (DirectML): [**Microsoft.ML.OnnxRuntime.DirectML**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview) | [View](../execution-providers/DirectML-ExecutionProvider) | +| WinML | [**Microsoft.AI.MachineLearning**](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/NuGet/Microsoft.AI.MachineLearning/overview) | [View](https://docs.microsoft.com/en-us/windows/ai/windows-ml/port-app-to-nuget#prerequisites) | | Java | CPU: [**com.microsoft.onnxruntime:onnxruntime**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime) | | [View](../api/java) | | | GPU (CUDA/TensorRT): [**com.microsoft.onnxruntime:onnxruntime_gpu**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime_gpu) | | [View](../api/java) | | Android | [**com.microsoft.onnxruntime:onnxruntime-android**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime-android) | | [View](../install/index.md#install-on-android) | @@ -428,9 +433,9 @@ below: | Node.js | [**onnxruntime-node** (latest)](https://www.npmjs.com/package/onnxruntime-node) | [onnxruntime-node 
(dev)](https://www.npmjs.com/package/onnxruntime-node?activeTab=versions) | [View](../api/js) | | Web | [**onnxruntime-web** (latest)](https://www.npmjs.com/package/onnxruntime-web) | [onnxruntime-web (dev)](https://www.npmjs.com/package/onnxruntime-web?activeTab=versions) | [View](../api/js) | -*Note: Dev builds created from the master branch are available for testing newer changes between official releases. +*Note: Nightly builds created from the main branch are available for testing newer changes between official releases. Please use these at your own risk. We strongly advise against deploying these to production workloads as support is -limited for dev builds.* +limited for nightly builds.* ## Training install table for all languages From 49fce5322aa1b3eebba97aeb96eafa01513ee9b2 Mon Sep 17 00:00:00 2001 From: natke Date: Mon, 21 Oct 2024 14:56:19 -0700 Subject: [PATCH 3/9] Add sections for EPs + nightly --- docs/install/index.md | 61 ++++++++++++++++++++++++++++++------------- 1 file changed, 43 insertions(+), 18 deletions(-) diff --git a/docs/install/index.md b/docs/install/index.md index 19b5abf050f2b..928ad95088865 100644 --- a/docs/install/index.md +++ b/docs/install/index.md @@ -40,15 +40,19 @@ For ONNX Runtime GPU package, it is required to install [CUDA](https://developer ## Python Installs -### Install ONNX Runtime - -#### Install ONNX Runtime CPU +### Install ONNX Runtime CPU ```bash pip install onnxruntime ``` -#### Install ONNX Runtime GPU (CUDA 12.x) +#### Install nightly + +```bash +pip install --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime +``` + +### Install ONNX Runtime GPU (CUDA 12.x) The default CUDA version for [onnxruntime-gpu in pypi](https://pypi.org/project/onnxruntime-gpu) is 12.x since 1.19.0. 
@@ -56,10 +60,28 @@ The default CUDA version for [onnxruntime-gpu in pypi](https://pypi.org/project/ pip install onnxruntime-gpu ``` +#### Install nightly + +```bash +pip install --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-gpu +``` + For previous versions, you can download here: [1.18.1](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview/1.18.1), [1.18.0](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview/1.18.0) +### Install ONNX Runtime GPU (DirectML) -#### Install ONNX Runtime GPU (CUDA 11.x) +```bash +pip install onnxruntime-directml +``` + +#### Install nightly + +```bash +pip install --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-directml +``` + + +### Install ONNX Runtime GPU (CUDA 11.x) For Cuda 11.x, please use the following instructions to install from [ORT Azure Devops Feed](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-11/PyPI/onnxruntime-gpu/overview) for 1.19.2 or later. @@ -69,29 +91,32 @@ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio. For previous versions, you can download here: [1.18.1](https://pypi.org/project/onnxruntime-gpu/1.18.1/), [1.18.0](https://pypi.org/project/onnxruntime-gpu/1.18.0/) -#### Install ONNX Runtime GPU (ROCm) -For ROCm, please follow instructions to install it at the [AMD ROCm install docs](https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.0.0/). The ROCm execution provider for ONNX Runtime is built and tested with ROCm 6.0.0. - -To build from source on Linux, follow the instructions [here](https://onnxruntime.ai/docs/build/eps.html#amd-rocm). 
+### Install ONNX Runtime QNN -### Install ONNX to export the model +### Install ONNX Runtime GPU (DirectML) ```bash -## ONNX is built into PyTorch -pip install torch +pip install onnxruntime-qnn ``` -```bash -## tensorflow -pip install tf2onnx -``` +#### Install nightly ```bash -## sklearn -pip install skl2onnx +pip install --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-qnn ``` +### Install ONNX Runtime GPU (ROCm) + +For ROCm, please follow instructions to install it at the [AMD ROCm install docs](https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.0.0/). The ROCm execution provider for ONNX Runtime is built and tested with ROCm 6.0.0. + +To build from source on Linux, follow the instructions [here](https://onnxruntime.ai/docs/build/eps.html#amd-rocm). + + + + + + ## C#/C/C++/WinML Installs ### Install ONNX Runtime From 95e51daed0f3aec805ba1e297edc355c9acf0fb1 Mon Sep 17 00:00:00 2001 From: natke Date: Mon, 21 Oct 2024 15:18:00 -0700 Subject: [PATCH 4/9] Re-order Python packages --- docs/install/index.md | 20 +++++++++----------- 1 file changed, 9 insertions(+), 11 deletions(-) diff --git a/docs/install/index.md b/docs/install/index.md index 928ad95088865..dfd5193c4beb2 100644 --- a/docs/install/index.md +++ b/docs/install/index.md @@ -52,34 +52,34 @@ pip install onnxruntime pip install --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime ``` -### Install ONNX Runtime GPU (CUDA 12.x) - -The default CUDA version for [onnxruntime-gpu in pypi](https://pypi.org/project/onnxruntime-gpu) is 12.x since 1.19.0. 
+### Install ONNX Runtime GPU (DirectML) ```bash -pip install onnxruntime-gpu +pip install onnxruntime-directml ``` #### Install nightly ```bash -pip install --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-gpu +pip install --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-directml ``` -For previous versions, you can download here: [1.18.1](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview/1.18.1), [1.18.0](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview/1.18.0) +### Install ONNX Runtime GPU (CUDA 12.x) -### Install ONNX Runtime GPU (DirectML) +The default CUDA version for [onnxruntime-gpu in pypi](https://pypi.org/project/onnxruntime-gpu) is 12.x since 1.19.0. ```bash -pip install onnxruntime-directml +pip install onnxruntime-gpu ``` #### Install nightly ```bash -pip install --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-directml +pip install --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-gpu ``` +For previous versions, you can download here: [1.18.1](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview/1.18.1), [1.18.0](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview/1.18.0) + ### Install ONNX Runtime GPU (CUDA 11.x) @@ -94,8 +94,6 @@ For previous versions, you can download here: [1.18.1](https://pypi.org/project/ ### Install ONNX Runtime QNN -### Install ONNX Runtime GPU (DirectML) - ```bash pip install onnxruntime-qnn ``` From 1bf6fb4ff10237da347f0babb3ec448ebace5e4f Mon Sep 17 00:00:00 2001 From: natke 
Date: Mon, 21 Oct 2024 17:13:30 -0700 Subject: [PATCH 5/9] Update ROCm version --- docs/install/index.md | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/docs/install/index.md b/docs/install/index.md index dfd5193c4beb2..33a9d2881e64f 100644 --- a/docs/install/index.md +++ b/docs/install/index.md @@ -106,15 +106,12 @@ pip install --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/Public ### Install ONNX Runtime GPU (ROCm) -For ROCm, please follow instructions to install it at the [AMD ROCm install docs](https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.0.0/). The ROCm execution provider for ONNX Runtime is built and tested with ROCm 6.0.0. +For ROCm, please follow instructions to install it at the [AMD ROCm install docs](https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.2.0/). The ROCm execution provider for ONNX Runtime is built and tested with ROCm 6.2.0. To build from source on Linux, follow the instructions [here](https://onnxruntime.ai/docs/build/eps.html#amd-rocm). 
-
-
-
 ## C#/C/C++/WinML Installs
 ### Install ONNX Runtime

From 78d8e571fb250f4466e0713653f58bc0ab052cee Mon Sep 17 00:00:00 2001
From: natke
Date: Thu, 24 Oct 2024 18:42:40 -0700
Subject: [PATCH 6/9] Update after review

---
 docs/execution-providers/QNN-ExecutionProvider.md | 2 +-
 docs/get-started/with-python.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/execution-providers/QNN-ExecutionProvider.md b/docs/execution-providers/QNN-ExecutionProvider.md
index 595751e0442c5..217e18d18a635 100644
--- a/docs/execution-providers/QNN-ExecutionProvider.md
+++ b/docs/execution-providers/QNN-ExecutionProvider.md
@@ -43,7 +43,7 @@ Note: Starting version 1.18.0 , you do not need to separately download and insta
 - Python 3.11.x
 - Numpy 1.25.2 or >= 1.26.4
 - Install: `pip install onnxruntime-qnn`
-  - Install nightly package `python -m pip install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/onnxruntime-qnn`
+  - Install nightly package `python -m pip install --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple`
 ## Qualcomm AI Hub
 Qualcomm AI Hub can be used to optimize and run models on Qualcomm hosted devices.
diff --git a/docs/get-started/with-python.md b/docs/get-started/with-python.md
index 1a41bf0072812..8f94e218a2dfa 100644
--- a/docs/get-started/with-python.md
+++ b/docs/get-started/with-python.md
@@ -275,7 +275,7 @@ python -m pip install onnxruntime-gpu --index-url=https://aiinfra.pkgs.visualstu
 Example to install onnxruntime-gpu for CUDA 12.*:
 ```
-python -m pip install onnxruntime-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/
+python -m pip install onnxruntime-gpu pre --extra-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/
 ```
 For Python compiler version notes, see [this page](https://github.com/microsoft/onnxruntime/tree/main/docs/Python_Dev_Notes.md)

From 12d371a0c4762b80bee409f24aeec0372be90354 Mon Sep 17 00:00:00 2001
From: natke
Date: Fri, 25 Oct 2024 10:24:15 -0700
Subject: [PATCH 7/9] Check --extra-index-url

---
 docs/get-started/with-python.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/get-started/with-python.md b/docs/get-started/with-python.md
index 8f94e218a2dfa..de1c007265943 100644
--- a/docs/get-started/with-python.md
+++ b/docs/get-started/with-python.md
@@ -270,12 +270,12 @@ python -m pip install coloredlogs flatbuffers numpy packaging protobuf sympy
 Example to install onnxruntime-gpu for CUDA 11.*:
 ```
-python -m pip install onnxruntime-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-11-nightly/pypi/simple/
+python -m pip install onnxruntime-gpu --extra-index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-11-nightly/pypi/simple/
 ```
 Example to install onnxruntime-gpu for CUDA 12.*:
 ```
-python -m pip install onnxruntime-gpu pre --extra-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/
+python -m pip install onnxruntime-gpu --pre --extra-index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/
 ```
 For Python compiler version notes, see [this page](https://github.com/microsoft/onnxruntime/tree/main/docs/Python_Dev_Notes.md)

From 04a63d1d9f6b7d8b8a4b179e6707e296ba28a673 Mon Sep 17 00:00:00 2001
From: natke
Date: Fri, 25 Oct 2024 10:27:48 -0700
Subject: [PATCH 8/9] More check --extra-index-url

---
 docs/get-started/with-python.md | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/docs/get-started/with-python.md b/docs/get-started/with-python.md
index de1c007265943..a93d9ec4a7c5a 100644
--- a/docs/get-started/with-python.md
+++ b/docs/get-started/with-python.md
@@ -253,20 +253,17 @@ print(pred_onx)
 [Go to the ORT Python API Docs](../api/python/api_summary.html){: .btn .mr-4 target="_blank"}
 ## Builds
+
 If using pip, run `pip install --upgrade pip` prior to downloading.
 | Artifact | Description | Supported Platforms |
 |----------- |-------------|---------------------|
 |[onnxruntime](https://pypi.org/project/onnxruntime)|CPU (Release)| Windows (x64), Linux (x64, ARM64), Mac (X64), |
-|[onnxruntime](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime)|CPU (Dev) | Same as above |
+|[nightly](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime)|CPU (Dev) | Same as above |
 |[onnxruntime-gpu](https://pypi.org/project/onnxruntime-gpu)|GPU (Release)| Windows (x64), Linux (x64, ARM64) |
 |[onnxruntime-gpu for CUDA 11.*](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-11-nightly/PyPI/onnxruntime-gpu) |GPU (Dev) | Windows (x64), Linux (x64, ARM64) |
 |[onnxruntime-gpu for CUDA 12.*](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-gpu) |GPU (Dev) | Windows (x64), Linux (x64, ARM64) |
-Before installing nightly package, you will need install dependencies first.
-
-```
-python -m pip install coloredlogs flatbuffers numpy packaging protobuf sympy
-```
 Example to install onnxruntime-gpu for CUDA 11.*:
 ```

From 2035cd1d1c8de60e4661185b34aa3426c35078f0 Mon Sep 17 00:00:00 2001
From: natke
Date: Fri, 25 Oct 2024 13:45:01 -0700
Subject: [PATCH 9/9] Fix onnxruntime-qnn

---
 docs/execution-providers/QNN-ExecutionProvider.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/execution-providers/QNN-ExecutionProvider.md b/docs/execution-providers/QNN-ExecutionProvider.md
index 217e18d18a635..66d311ecb06e3 100644
--- a/docs/execution-providers/QNN-ExecutionProvider.md
+++ b/docs/execution-providers/QNN-ExecutionProvider.md
@@ -43,7 +43,7 @@ Note: Starting version 1.18.0 , you do not need to separately download and insta
 - Python 3.11.x
 - Numpy 1.25.2 or >= 1.26.4
 - Install: `pip install onnxruntime-qnn`
-  - Install nightly package `python -m pip install --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple`
+  - Install nightly package `python -m pip install --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple onnxruntime-qnn`
 ## Qualcomm AI Hub
 Qualcomm AI Hub can be used to optimize and run models on Qualcomm hosted devices.