**README.md**

NanoOWL runs real-time on Jetson Orin Nano.
<a id="setup"></a>
## 🛠️ Setup

### Option 1: Automated Setup Script (Recommended for Ubuntu 24.04 with CUDA 12.1)

For users on Ubuntu 24.04 with NVIDIA drivers and CUDA 12.1 already installed, a setup script is provided to automate the installation of dependencies, NVIDIA TensorRT, and torch2trt.

1. **Ensure prerequisites are met:**
* NVIDIA GPU drivers are installed.
* CUDA Toolkit 12.1 is installed and `nvcc` is in your PATH.
* A CUDA 12.1-compatible PyTorch build is installed (torch2trt is built against it; the script does not install PyTorch).
* You have `sudo` privileges.
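A quick way to confirm these (a minimal sketch; exact version strings vary by install):

```bash
nvidia-smi                     # driver check
nvcc --version | grep release  # expect: release 12.1
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```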

2. **Run the script:**
```bash
chmod +x setup.sh
./setup.sh
```
This script will:
* Check for NVIDIA drivers and CUDA 12.1.
* Install `git`, `python3-pip`, and `cmake`.
* Install/upgrade Python packaging tools (`pip`, `wheel`, `setuptools`, `packaging`).
* Install NVIDIA TensorRT (via pip, compatible with CUDA 12.x).
* Clone the `torch2trt` repository and install it with plugins.

After running the script, install the Transformers library (step 1.4 of "Option 2: Manual Installation" below), then continue from step 2 ("Install the NanoOWL package") for the rest of the project-specific setup.

### Option 2: Manual Installation

If you are not using the automated script or are on a different setup, follow these manual steps:

1. **Install Core Dependencies:**
1. Install PyTorch (ensure compatibility with your CUDA version).
2. Install [torch2trt](https://github.com/NVIDIA-AI-IOT/torch2trt) (refer to their documentation for compatibility with your PyTorch and TensorRT versions).
3. Install NVIDIA TensorRT (ensure compatibility with your CUDA and PyTorch versions).
*Note: Steps 1.2 and 1.3 are handled by the `setup.sh` script if you used Option 1.*
4. Install the Transformers library:
```bash
python3 -m pip install transformers
```
5. (Optional) Install NanoSAM (for the instance segmentation example). Refer to the [NanoSAM repository](https://github.com/NVIDIA-AI-IOT/nanosam) for instructions.
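Once the core dependencies are installed, a rough one-line sanity check (assumes all four packages are in the active Python environment):

```bash
python3 -c "import torch, tensorrt, torch2trt, transformers; print('core dependencies OK')"
```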

2. **Install the NanoOWL package.**

```bash
git clone https://github.com/NVIDIA-AI-IOT/nanoowl
cd nanoowl
python3 setup.py develop --user
```
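To confirm the package is importable, an optional quick check:

```bash
python3 -c "import nanoowl; print('nanoowl imported OK')"
```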

3. **Build the TensorRT engine for the OWL-ViT vision encoder**

```bash
mkdir -p data
python3 -m nanoowl.build_image_encoder_engine \
    data/owl_image_encoder_patch32.engine
```
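If the build succeeds, the serialized engine is written to the path passed above; a quick way to verify (assuming the default path shown):

```bash
ls -lh data/owl_image_encoder_patch32.engine
```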


4. **Run an example prediction to ensure everything is working**

```bash
cd examples
python3 owl_predict.py \
    --prompt="[an owl, a glove]" \
    --threshold=0.1 \
    --image_encoder_engine=../data/owl_image_encoder_patch32.engine
```
**setup.sh**
#!/bin/bash
set -e

echo "Starting the setup process..."

# Check for NVIDIA drivers
echo "Checking for NVIDIA drivers..."
if ! command -v nvidia-smi &> /dev/null; then
    echo "ERROR: nvidia-smi command not found. Please ensure NVIDIA drivers are installed."
    exit 1
else
    echo "NVIDIA drivers found."
    nvidia-smi | grep "CUDA Version"
fi

# Check for CUDA toolkit (nvcc) and version
echo "Checking for CUDA Toolkit (nvcc)..."
if ! command -v nvcc &> /dev/null; then
    echo "ERROR: nvcc command not found. Please ensure CUDA Toolkit is installed and in your PATH."
    exit 1
else
    CUDA_VERSION_OUTPUT=$(nvcc --version)
    echo "CUDA Toolkit found:"
    echo "$CUDA_VERSION_OUTPUT"
    if ! echo "$CUDA_VERSION_OUTPUT" | grep -q "release 12.1"; then
        echo "WARNING: CUDA version is not 12.1. This script is intended for CUDA 12.1."
        echo "Please ensure you have CUDA 12.1 installed, or modify the script if you intend to use a different version."
        # Warn and continue; change this to 'exit 1' if strict CUDA 12.1 compliance is required.
    else
        echo "CUDA version 12.1 confirmed."
    fi
fi

echo "Updating package lists..."
sudo apt-get update

echo "Installing essential packages (git, python3-pip, cmake)..."
sudo apt-get install -y git python3-pip cmake

echo "Upgrading pip and installing Python build/packaging tools..."
python3 -m pip install --upgrade pip wheel setuptools packaging

echo "Installing NVIDIA TensorRT (for CUDA 12.x via pip)..."
python3 -m pip install --upgrade tensorrt
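
# Optional sanity check: the TensorRT Python bindings should import cleanly.
python3 -c "import tensorrt; print('TensorRT version:', tensorrt.__version__)"

# torch2trt is built against PyTorch, so fail early with a clear message if it
# is missing. (This script assumes a CUDA 12.1-compatible PyTorch is already
# installed; it does not install PyTorch itself.)
if ! python3 -c "import torch" &> /dev/null; then
    echo "ERROR: PyTorch not found. Please install a CUDA 12.1-compatible PyTorch build first."
    exit 1
fi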

echo "Starting torch2trt installation..."
if [ -d "torch2trt" ]; then
    echo "torch2trt directory already exists. Removing it to start fresh."
    rm -rf torch2trt
fi
echo "Cloning the torch2trt repository..."
git clone https://github.com/NVIDIA-AI-IOT/torch2trt

echo "Changing into the torch2trt directory..."
cd torch2trt

echo "Installing torch2trt with plugins..."
python3 setup.py install --plugins
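
# Optional sanity check: torch2trt should now import alongside PyTorch.
python3 -c "import torch2trt; print('torch2trt imported OK')"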

echo "Changing back to the original directory..."
cd ..

echo "Setup successfully completed!"
echo "Please ensure your environment (like LD_LIBRARY_PATH) is correctly configured if needed, especially for torch2trt plugins."
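
# Example LD_LIBRARY_PATH tweak (an assumption; adjust to your CUDA install path):
#   export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64:${LD_LIBRARY_PATH}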