Use WEAVE’s pre-built environments to get a performant PyTorch setup, especially on AMD GPUs, with minimal configuration.

WEAVE stands for Workflow Enablement and AdVanced Environment. It provides virtual environments with pre-installed open-source tools (including PyTorch) on all LC machines.

Users can:

  • Use pre-installed, shared, read-only virtual environments.
  • Create their own virtual environments based on WEAVE.
  • Create a Jupyter kernel based on a WEAVE environment to access all WEAVE tools from within a Jupyter notebook.

Useful links:

WEAVE documentation: https://lc.llnl.gov/weave/index.html

WEAVE environments: https://lc.llnl.gov/weave/llnl/environment.html

Setup instructions

Below are the recommended steps for AMD GPU systems (El Capitan, RZAdams, RZVernal, Tenaya, Tioga, and Tuolumne).

Refer to the WEAVE documentation for information on other systems and options.

Using the WEAVE module

The simplest setup is to use the pre-installed read-only environments provided via the module system. Simply running

module load weave/develop-gpu

will load a Python v3.11 environment with PyTorch v2.7, built for AMD GPUs.
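After loading the module, a quick sanity check on a GPU compute node can confirm that the PyTorch build sees the AMD GPUs. This sketch relies on standard PyTorch behavior: ROCm builds of PyTorch expose devices through the torch.cuda API, so torch.cuda.is_available() should report True on a GPU node.

```shell
# Load the shared, read-only WEAVE environment
module load weave/develop-gpu

# Check the PyTorch version and GPU visibility
# (ROCm builds report AMD GPUs through the torch.cuda API)
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

If the second value prints False, you are likely on a login node rather than a GPU compute node.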

Create a virtual environment based on WEAVE

To create a virtual environment based on the same Python v3.11 environment with PyTorch v2.7, run

/usr/apps/weave/tools/create_venv.sh -p gpu -e my-weave-env -v latest-develop

and then, every time you use this environment, activate it via

source my-weave-env/bin/activate
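Put together, a session might look like the sketch below. Unlike the shared read-only environments, your own venv is writable, so (assuming it behaves like a standard Python venv) you can extend it with pip; the package installed here is only an example.

```shell
# One-time: create a WEAVE-based venv for AMD GPU systems
/usr/apps/weave/tools/create_venv.sh -p gpu -e my-weave-env -v latest-develop

# Every session: activate the venv
source my-weave-env/bin/activate

# Because the venv is writable, you can layer packages on top of WEAVE's stack
# (example package; install whatever your workflow needs)
pip install rich

# Confirm the venv's Python and WEAVE's PyTorch are what you get
which python3
python3 -c "import torch; print(torch.__version__)"
```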

Create a Jupyter kernel

To work with WEAVE's PyTorch build from a Jupyter notebook, create a custom kernel using the weave command (with either the WEAVE module loaded or your own virtual environment activated):

weave create_jupyter_kernel --kernel_name 'weave-env' --kernel_display_name 'WEAVE env'

This generates a kernel.json with the correct LD_LIBRARY_PATH.
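You can then confirm that Jupyter registered the new kernel. The sketch below uses jupyter kernelspec list, a standard Jupyter command; the kernel name matches the example above, and the kernel.json path shown is the typical per-user location (yours may differ).

```shell
# List installed kernels; 'weave-env' should appear in the output
jupyter kernelspec list

# Inspect the generated kernel spec, including its LD_LIBRARY_PATH setting
# (typical per-user kernel location; adjust if kernelspec list shows another path)
cat ~/.local/share/jupyter/kernels/weave-env/kernel.json
```

In JupyterLab, the kernel then appears under its display name, "WEAVE env".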