10.2 GA release update (#3998)
* 10.2 GA updates

Signed-off-by: Yuan Yao (yuanyao) <[email protected]>

* update changelog

Signed-off-by: Yuan Yao (yuanyao) <[email protected]>

* revert plugin README format changes

Signed-off-by: Yuan Yao (yuanyao) <[email protected]>

---------

Signed-off-by: Yuan Yao (yuanyao) <[email protected]>
yuanyao-nv authored Jul 11, 2024
1 parent 9db1508 commit 2332a71
Showing 99 changed files with 1,242 additions and 539 deletions.
13 changes: 13 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,18 @@
# TensorRT OSS Release Changelog

## 10.2.0 GA - 2024-07-10

Key Features and Updates:

- Demo changes
  - Added [Stable Diffusion 3 demo](demo/Diffusion).
- Plugin changes
  - Version 3 of the [InstanceNormalization plugin](plugin/instanceNormalizationPlugin/) (`InstanceNormalization_TRT`) has been added. This version is based on the `IPluginV3` interface and is used by the TensorRT ONNX parser when native `InstanceNormalization` is disabled.
- Tooling changes
  - PyTorch Quantization development has transitioned to [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer). All developers are encouraged to use TensorRT Model Optimizer to benefit from the latest advancements in quantization and compression.
- Build containers
  - Updated the default CUDA version to `12.5.0`.
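The version bumps in this release (e.g. `10.1.0.27` to `10.2.0.19`) follow a `major.minor.patch.build` scheme; a small, illustrative Python helper (not part of this repository) shows how such strings compare:

```python
def parse_version(version: str) -> tuple:
    """Split a dotted version string like '10.2.0.19' into comparable integers."""
    return tuple(int(part) for part in version.split("."))

# Tuple comparison is lexicographic, so the 10.2 GA build sorts after 10.1 GA
# even though the trailing build number (19) is smaller than 27.
assert parse_version("10.2.0.19") > parse_version("10.1.0.27")
```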

## 10.1.0 GA - 2024-06-17

Key Features and Updates:
56 changes: 28 additions & 28 deletions README.md
@@ -26,13 +26,13 @@ You can skip the **Build** section to enjoy TensorRT with Python.
To build the TensorRT-OSS components, you will first need the following software packages.

**TensorRT GA build**
* TensorRT v10.1.0.27
* TensorRT v10.2.0.19
* Available from direct download links listed below

**System Packages**
* [CUDA](https://developer.nvidia.com/cuda-toolkit)
* Recommended versions:
* cuda-12.4.0 + cuDNN-8.9
* cuda-12.5.0 + cuDNN-8.9
* cuda-11.8.0 + cuDNN-8.9
* [GNU make](https://ftp.gnu.org/gnu/make/) >= v4.1
* [cmake](https://github.com/Kitware/CMake/releases) >= v3.13
@@ -73,25 +73,25 @@ To build the TensorRT-OSS components, you will first need the following software
If using the TensorRT OSS build container, TensorRT libraries are preinstalled under `/usr/lib/x86_64-linux-gnu` and you may skip this step.

Else download and extract the TensorRT GA build from [NVIDIA Developer Zone](https://developer.nvidia.com) with the direct links below:
- [TensorRT 10.1.0.27 for CUDA 11.8, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/tars/TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-11.8.tar.gz)
- [TensorRT 10.1.0.27 for CUDA 12.4, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/tars/TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-12.4.tar.gz)
- [TensorRT 10.1.0.27 for CUDA 11.8, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/zip/TensorRT-10.1.0.27.Windows.win10.cuda-11.8.zip)
- [TensorRT 10.1.0.27 for CUDA 12.4, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/zip/TensorRT-10.1.0.27.Windows.win10.cuda-12.4.zip)
- [TensorRT 10.2.0.19 for CUDA 11.8, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.2.0/tars/TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-11.8.tar.gz)
- [TensorRT 10.2.0.19 for CUDA 12.5, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.2.0/tars/TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-12.5.tar.gz)
- [TensorRT 10.2.0.19 for CUDA 11.8, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.2.0/zip/TensorRT-10.2.0.19.Windows.win10.cuda-11.8.zip)
- [TensorRT 10.2.0.19 for CUDA 12.5, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.2.0/zip/TensorRT-10.2.0.19.Windows.win10.cuda-12.5.zip)


**Example: Ubuntu 20.04 on x86-64 with cuda-12.4**
**Example: Ubuntu 20.04 on x86-64 with cuda-12.5**

```bash
cd ~/Downloads
tar -xvzf TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-12.4.tar.gz
export TRT_LIBPATH=`pwd`/TensorRT-10.1.0.27
tar -xvzf TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-12.5.tar.gz
export TRT_LIBPATH=`pwd`/TensorRT-10.2.0.19
```
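After extraction, the dynamic loader also needs to find the TensorRT shared libraries; a minimal sketch (assuming the tarball layout above, with libraries under `lib/` in the extracted directory — not a step stated in this diff):

```shell
# Sketch: make the extracted TensorRT libraries visible to the dynamic loader.
# Assumes the TensorRT-10.2.0.19 directory extracted above is in the current directory.
export TRT_LIBPATH="$PWD/TensorRT-10.2.0.19"
export LD_LIBRARY_PATH="$TRT_LIBPATH/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```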

**Example: Windows on x86-64 with cuda-12.4**
**Example: Windows on x86-64 with cuda-12.5**

```powershell
Expand-Archive -Path TensorRT-10.1.0.27.Windows.win10.cuda-12.4.zip
$env:TRT_LIBPATH="$pwd\TensorRT-10.1.0.27\lib"
Expand-Archive -Path TensorRT-10.2.0.19.Windows.win10.cuda-12.5.zip
$env:TRT_LIBPATH="$pwd\TensorRT-10.2.0.19\lib"
```

## Setting Up The Build Environment
@@ -101,27 +101,27 @@ For Linux platforms, we recommend that you generate a docker container for build
1. #### Generate the TensorRT-OSS build container.
The TensorRT-OSS build container can be generated using the supplied Dockerfiles and build scripts. The build containers are configured for building TensorRT OSS out-of-the-box.

**Example: Ubuntu 20.04 on x86-64 with cuda-12.4 (default)**
**Example: Ubuntu 20.04 on x86-64 with cuda-12.5 (default)**
```bash
./docker/build.sh --file docker/ubuntu-20.04.Dockerfile --tag tensorrt-ubuntu20.04-cuda12.4
./docker/build.sh --file docker/ubuntu-20.04.Dockerfile --tag tensorrt-ubuntu20.04-cuda12.5
```
**Example: Rockylinux8 on x86-64 with cuda-12.4**
**Example: Rockylinux8 on x86-64 with cuda-12.5**
```bash
./docker/build.sh --file docker/rockylinux8.Dockerfile --tag tensorrt-rockylinux8-cuda12.4
./docker/build.sh --file docker/rockylinux8.Dockerfile --tag tensorrt-rockylinux8-cuda12.5
```
**Example: Ubuntu 22.04 cross-compile for Jetson (aarch64) with cuda-12.4 (JetPack SDK)**
**Example: Ubuntu 22.04 cross-compile for Jetson (aarch64) with cuda-12.5 (JetPack SDK)**
```bash
./docker/build.sh --file docker/ubuntu-cross-aarch64.Dockerfile --tag tensorrt-jetpack-cuda12.4
./docker/build.sh --file docker/ubuntu-cross-aarch64.Dockerfile --tag tensorrt-jetpack-cuda12.5
```
**Example: Ubuntu 22.04 on aarch64 with cuda-12.4**
**Example: Ubuntu 22.04 on aarch64 with cuda-12.5**
```bash
./docker/build.sh --file docker/ubuntu-22.04-aarch64.Dockerfile --tag tensorrt-aarch64-ubuntu22.04-cuda12.4
./docker/build.sh --file docker/ubuntu-22.04-aarch64.Dockerfile --tag tensorrt-aarch64-ubuntu22.04-cuda12.5
```

2. #### Launch the TensorRT-OSS build container.
**Example: Ubuntu 20.04 build container**
```bash
./docker/launch.sh --tag tensorrt-ubuntu20.04-cuda12.4 --gpus all
./docker/launch.sh --tag tensorrt-ubuntu20.04-cuda12.5 --gpus all
```
> NOTE:
<br> 1. Use the `--tag` corresponding to the build container generated in Step 1.
@@ -132,38 +132,38 @@ For Linux platforms, we recommend that you generate a docker container for build
## Building TensorRT-OSS
* Generate Makefiles and build.

**Example: Linux (x86-64) build with default cuda-12.4**
**Example: Linux (x86-64) build with default cuda-12.5**
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
make -j$(nproc)
```
**Example: Linux (aarch64) build with default cuda-12.4**
**Example: Linux (aarch64) build with default cuda-12.5**
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64-native.toolchain
make -j$(nproc)
```
**Example: Native build on Jetson (aarch64) with cuda-12.4**
**Example: Native build on Jetson (aarch64) with cuda-12.5**
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DTRT_PLATFORM_ID=aarch64 -DCUDA_VERSION=12.4
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DTRT_PLATFORM_ID=aarch64 -DCUDA_VERSION=12.5
CC=/usr/bin/gcc make -j$(nproc)
```
> NOTE: The C compiler must be explicitly specified via `CC=` for native aarch64 builds of protobuf.

**Example: Ubuntu 22.04 Cross-Compile for Jetson (aarch64) with cuda-12.4 (JetPack)**
**Example: Ubuntu 22.04 Cross-Compile for Jetson (aarch64) with cuda-12.5 (JetPack)**
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64.toolchain -DCUDA_VERSION=12.4 -DCUDNN_LIB=/pdk_files/cudnn/usr/lib/aarch64-linux-gnu/libcudnn.so -DCUBLAS_LIB=/usr/local/cuda-12.4/targets/aarch64-linux/lib/stubs/libcublas.so -DCUBLASLT_LIB=/usr/local/cuda-12.4/targets/aarch64-linux/lib/stubs/libcublasLt.so -DTRT_LIB_DIR=/pdk_files/tensorrt/lib
cmake .. -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64.toolchain -DCUDA_VERSION=12.5 -DCUDNN_LIB=/pdk_files/cudnn/usr/lib/aarch64-linux-gnu/libcudnn.so -DCUBLAS_LIB=/usr/local/cuda-12.5/targets/aarch64-linux/lib/stubs/libcublas.so -DCUBLASLT_LIB=/usr/local/cuda-12.5/targets/aarch64-linux/lib/stubs/libcublasLt.so -DTRT_LIB_DIR=/pdk_files/tensorrt/lib
make -j$(nproc)
```

**Example: Native builds on Windows (x86) with cuda-12.4**
**Example: Native builds on Windows (x86) with cuda-12.5**
```powershell
cd $TRT_OSSPATH
mkdir -p build
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
10.1.0.27
10.2.0.19
7 changes: 4 additions & 3 deletions demo/BERT/README.md
@@ -30,7 +30,8 @@ This subfolder of the BERT TensorFlow repository, tested and maintained by NVIDIA
* [TensorRT inference benchmark](#tensorrt-inference-benchmark)
* [Results](#results)
* [Inference performance: NVIDIA A100](#inference-performance-nvidia-a100-40gb)
* [Inference performance: NVIDIA A30](#inference-performance-nvidia-a30)
* [Inference performance: NVIDIA L4](#inference-performance-nvidia-l4)
* [Inference performance: NVIDIA L40S](#inference-performance-nvidia-l40s)


## Model overview
@@ -74,8 +75,8 @@ The following software version configuration has been tested:
|Software|Version|
|--------|-------|
|Python|>=3.8|
|TensorRT|10.1.0.27|
|CUDA|12.4|
|TensorRT|10.2.0.19|
|CUDA|12.5|

## Setup

6 changes: 3 additions & 3 deletions demo/DeBERTa/README.md
@@ -75,7 +75,7 @@ Note that the performance gap between BERT's self-attention and DeBERTa's disentangled
## Environment Setup
It is recommended to use docker for reproducing the following steps. Follow the setup steps in the TensorRT OSS [README](https://github.com/NVIDIA/TensorRT#setting-up-the-build-environment) to build and launch the container, then build OSS:

**Example: Ubuntu 20.04 on x86-64 with cuda-12.4 (default)**
**Example: Ubuntu 20.04 on x86-64 with cuda-12.5 (default)**
```bash
# Download this TensorRT OSS repo
git clone -b main https://github.com/nvidia/TensorRT TensorRT
@@ -84,10 +84,10 @@ git submodule update --init --recursive

## at root of TensorRT OSS
# build container
./docker/build.sh --file docker/ubuntu-20.04.Dockerfile --tag tensorrt-ubuntu20.04-cuda12.4
./docker/build.sh --file docker/ubuntu-20.04.Dockerfile --tag tensorrt-ubuntu20.04-cuda12.5

# launch container
./docker/launch.sh --tag tensorrt-ubuntu20.04-cuda12.4 --gpus all
./docker/launch.sh --tag tensorrt-ubuntu20.04-cuda12.5 --gpus all

## now inside container
# build OSS (only required for pre-8.4.3 TensorRT versions)
4 changes: 2 additions & 2 deletions demo/Diffusion/README.md
@@ -7,7 +7,7 @@ This demo application ("demoDiffusion") showcases the acceleration of Stable Diffusion
### Clone the TensorRT OSS repository

```bash
git clone [email protected]:NVIDIA/TensorRT.git -b release/10.1 --single-branch
git clone [email protected]:NVIDIA/TensorRT.git -b release/10.2 --single-branch
cd TensorRT
```

@@ -48,7 +48,7 @@ onnx 1.15.0
onnx-graphsurgeon 0.5.2
onnxruntime 1.16.3
polygraphy 0.49.9
tensorrt 10.1.0.27
tensorrt 10.2.0.19
tokenizers 0.13.3
torch 2.2.0
transformers 4.33.1
20 changes: 10 additions & 10 deletions docker/rockylinux8.Dockerfile
@@ -15,7 +15,7 @@
# limitations under the License.
#

ARG CUDA_VERSION=12.4.0
ARG CUDA_VERSION=12.5.0

FROM nvidia/cuda:${CUDA_VERSION}-devel-rockylinux8
LABEL maintainer="NVIDIA CORPORATION"
@@ -25,7 +25,7 @@ ENV NV_CUDNN_VERSION 8.9.6.50-1
ENV NV_CUDNN_PACKAGE libcudnn8-${NV_CUDNN_VERSION}.cuda12.2
ENV NV_CUDNN_PACKAGE_DEV libcudnn8-devel-${NV_CUDNN_VERSION}.cuda12.2

ENV TRT_VERSION 10.1.0.27
ENV TRT_VERSION 10.2.0.19
SHELL ["/bin/bash", "-c"]

RUN dnf install -y \
@@ -62,15 +62,15 @@ RUN dnf install -y python38 python38-devel &&\

# Install TensorRT
RUN if [ "${CUDA_VERSION:0:2}" = "11" ]; then \
wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/tars/TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-11.8.tar.gz \
&& tar -xf TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-11.8.tar.gz \
&& cp -a TensorRT-10.1.0.27/lib/*.so* /usr/lib64 \
&& pip install TensorRT-10.1.0.27/python/tensorrt-10.1.0-cp38-none-linux_x86_64.whl ;\
wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.2.0/tars/TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-11.8.tar.gz \
&& tar -xf TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-11.8.tar.gz \
&& cp -a TensorRT-10.2.0.19/lib/*.so* /usr/lib64 \
&& pip install TensorRT-10.2.0.19/python/tensorrt-10.2.0-cp38-none-linux_x86_64.whl ;\
elif [ "${CUDA_VERSION:0:2}" = "12" ]; then \
wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/tars/TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-12.4.tar.gz \
&& tar -xf TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-12.4.tar.gz \
&& cp -a TensorRT-10.1.0.27/lib/*.so* /usr/lib64 \
&& pip install TensorRT-10.1.0.27/python/tensorrt-10.1.0-cp38-none-linux_x86_64.whl ;\
wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.2.0/tars/TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-12.5.tar.gz \
&& tar -xf TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-12.5.tar.gz \
&& cp -a TensorRT-10.2.0.19/lib/*.so* /usr/lib64 \
&& pip install TensorRT-10.2.0.19/python/tensorrt-10.2.0-cp38-none-linux_x86_64.whl ;\
else \
echo "Invalid CUDA_VERSION"; \
exit 1; \
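The `if [ "${CUDA_VERSION:0:2}" = "11" ]` / `"12"` branch in the Dockerfiles above keys off the first two characters of `CUDA_VERSION` using bash substring expansion; a standalone sketch of that check (values illustrative):

```shell
# Reproduce the Dockerfile's CUDA-major check outside the container build.
CUDA_VERSION=12.5.0
major="${CUDA_VERSION:0:2}"   # bash substring expansion: first two characters
if [ "$major" = "12" ]; then
  echo "would install the CUDA 12 TensorRT tarball"
elif [ "$major" = "11" ]; then
  echo "would install the CUDA 11 TensorRT tarball"
else
  echo "Invalid CUDA_VERSION" >&2
fi
```

Note this relies on the major version always being two digits, which holds for the CUDA 11.x and 12.x versions handled here.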
20 changes: 10 additions & 10 deletions docker/rockylinux9.Dockerfile
@@ -15,7 +15,7 @@
# limitations under the License.
#

ARG CUDA_VERSION=12.4.0
ARG CUDA_VERSION=12.5.0

FROM nvidia/cuda:${CUDA_VERSION}-devel-rockylinux9
LABEL maintainer="NVIDIA CORPORATION"
@@ -25,7 +25,7 @@ ENV NV_CUDNN_VERSION 8.9.6.50-1
ENV NV_CUDNN_PACKAGE libcudnn8-${NV_CUDNN_VERSION}.cuda12.2
ENV NV_CUDNN_PACKAGE_DEV libcudnn8-devel-${NV_CUDNN_VERSION}.cuda12.2

ENV TRT_VERSION 10.1.0.27
ENV TRT_VERSION 10.2.0.19
SHELL ["/bin/bash", "-c"]

RUN dnf install -y \
@@ -67,15 +67,15 @@ RUN dnf -y install \

# Install TensorRT
RUN if [ "${CUDA_VERSION:0:2}" = "11" ]; then \
wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/tars/TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-11.8.tar.gz \
&& tar -xf TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-11.8.tar.gz \
&& cp -a TensorRT-10.1.0.27/lib/*.so* /usr/lib64 \
&& pip install TensorRT-10.1.0.27/python/tensorrt-10.1.0-cp39-none-linux_x86_64.whl ;\
wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.2.0/tars/TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-11.8.tar.gz \
&& tar -xf TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-11.8.tar.gz \
&& cp -a TensorRT-10.2.0.19/lib/*.so* /usr/lib64 \
&& pip install TensorRT-10.2.0.19/python/tensorrt-10.2.0-cp39-none-linux_x86_64.whl ;\
elif [ "${CUDA_VERSION:0:2}" = "12" ]; then \
wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/tars/TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-12.4.tar.gz \
&& tar -xf TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-12.4.tar.gz \
&& cp -a TensorRT-10.1.0.27/lib/*.so* /usr/lib64 \
&& pip install TensorRT-10.1.0.27/python/tensorrt-10.1.0-cp39-none-linux_x86_64.whl ;\
wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.2.0/tars/TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-12.5.tar.gz \
&& tar -xf TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-12.5.tar.gz \
&& cp -a TensorRT-10.2.0.19/lib/*.so* /usr/lib64 \
&& pip install TensorRT-10.2.0.19/python/tensorrt-10.2.0-cp39-none-linux_x86_64.whl ;\
else \
echo "Invalid CUDA_VERSION"; \
exit 1; \
20 changes: 10 additions & 10 deletions docker/ubuntu-20.04.Dockerfile
@@ -15,7 +15,7 @@
# limitations under the License.
#

ARG CUDA_VERSION=12.4.0
ARG CUDA_VERSION=12.5.0

FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu20.04
LABEL maintainer="NVIDIA CORPORATION"
@@ -28,7 +28,7 @@ ENV CUDA_VERSION_MAJOR_MINOR=12.2
ENV NV_CUDNN_PACKAGE "libcudnn8=$NV_CUDNN_VERSION-1+cuda${CUDA_VERSION_MAJOR_MINOR}"
ENV NV_CUDNN_PACKAGE_DEV "libcudnn8-dev=$NV_CUDNN_VERSION-1+cuda${CUDA_VERSION_MAJOR_MINOR}"

ENV TRT_VERSION 10.1.0.27
ENV TRT_VERSION 10.2.0.19
SHELL ["/bin/bash", "-c"]

RUN apt-get update && apt-get install -y --no-install-recommends \
@@ -84,15 +84,15 @@ RUN apt-get install -y --no-install-recommends \

# Install TensorRT
RUN if [ "${CUDA_VERSION:0:2}" = "11" ]; then \
wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/tars/TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-11.8.tar.gz \
&& tar -xf TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-11.8.tar.gz \
&& cp -a TensorRT-10.1.0.27/lib/*.so* /usr/lib/x86_64-linux-gnu \
&& pip install TensorRT-10.1.0.27/python/tensorrt-10.1.0-cp38-none-linux_x86_64.whl ;\
wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.2.0/tars/TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-11.8.tar.gz \
&& tar -xf TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-11.8.tar.gz \
&& cp -a TensorRT-10.2.0.19/lib/*.so* /usr/lib/x86_64-linux-gnu \
&& pip install TensorRT-10.2.0.19/python/tensorrt-10.2.0-cp38-none-linux_x86_64.whl ;\
elif [ "${CUDA_VERSION:0:2}" = "12" ]; then \
wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/tars/TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-12.4.tar.gz \
&& tar -xf TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-12.4.tar.gz \
&& cp -a TensorRT-10.1.0.27/lib/*.so* /usr/lib/x86_64-linux-gnu \
&& pip install TensorRT-10.1.0.27/python/tensorrt-10.1.0-cp38-none-linux_x86_64.whl ;\
wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.2.0/tars/TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-12.5.tar.gz \
&& tar -xf TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-12.5.tar.gz \
&& cp -a TensorRT-10.2.0.19/lib/*.so* /usr/lib/x86_64-linux-gnu \
&& pip install TensorRT-10.2.0.19/python/tensorrt-10.2.0-cp38-none-linux_x86_64.whl ;\
else \
echo "Invalid CUDA_VERSION"; \
exit 1; \
6 changes: 3 additions & 3 deletions docker/ubuntu-22.04-aarch64.Dockerfile
@@ -15,12 +15,12 @@
# limitations under the License.
#

ARG CUDA_VERSION=12.4.0
ARG CUDA_VERSION=12.5.0

# Multi-arch container support available in non-cudnn containers.
FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu22.04

ENV TRT_VERSION 10.1.0.27
ENV TRT_VERSION 10.2.0.19
SHELL ["/bin/bash", "-c"]

# Setup user account
@@ -71,7 +71,7 @@ RUN apt-get install -y --no-install-recommends \
# Install TensorRT. This will also pull in CUDNN
RUN ver="${CUDA_VERSION%.*}" &&\
if [ "${ver%.*}" = "12" ] ; then \
ver="12.4"; \
ver="12.5"; \
fi &&\
v="${TRT_VERSION}-1+cuda${ver}" &&\
apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/sbsa/3bf863cc.pub &&\
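The aarch64 Dockerfile derives the repo version differently, with `${CUDA_VERSION%.*}`, which strips the shortest trailing `.`-suffix; an illustrative standalone run of that logic:

```shell
# Sketch of the suffix-stripping used to map CUDA_VERSION (e.g. 12.5.0)
# to the two-component version the package repository expects.
CUDA_VERSION=12.5.0
ver="${CUDA_VERSION%.*}"   # "12.5.0" -> "12.5": shortest trailing .suffix removed
if [ "${ver%.*}" = "12" ]; then
  ver="12.5"               # pin the CUDA 12 repo version, as in the Dockerfile
fi
echo "$ver"
```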