Vulcan

Availability: April 15, 2025
Login node: vulcan.alliancecan.ca
Globus endpoint: Vulcan Globus v5
System Status Page: TBA
Status: Testing

Vulcan is a cluster dedicated to the needs of the Canadian artificial intelligence research community. It is hosted at the University of Alberta and managed jointly by the University of Alberta and Amii (the Alberta Machine Intelligence Institute). The cluster is named after the town of Vulcan, Alberta, in the southern part of the province.

This cluster is part of the Pan-Canadian AI Compute Environment (PAICE).

Site-specific Policies

Internet access is not generally available from the compute nodes. A globally available Squid proxy is enabled by default, with certain domains whitelisted. If you cannot connect to a domain you need, contact technical support and we will evaluate whether it belongs on the whitelist.
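
As a quick check from a compute node (for example, inside an interactive allocation), something like the following shows whether a domain is reachable through the proxy; the domain below is only a placeholder:

  # Minimal reachability check from a compute node; the domain is a placeholder.
  # A response such as "HTTP/1.1 200 OK" means the domain is reachable through the proxy,
  # while a timeout suggests it is not (yet) on the whitelist.
  curl -sSI --max-time 10 https://example.org | head -n 1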

The maximum duration of a job is 7 days.

Vulcan is currently open to Amii-affiliated principal investigators who hold CCAI (Canada CIFAR AI) Chairs. Broader access will be announced at a later date.

Vulcan hardware specifications

  • Performance tier: Standard Compute
  • Nodes: 205
  • Model: Dell R760xa
  • CPU: 2 x Intel Xeon Gold 6448Y
  • Cores per node: 64
  • System memory: 512 GB
  • GPUs per node: 4 x NVIDIA L40S (48 GB)
  • Total GPUs: 820

Storage System

Vulcan's storage system combines NVMe flash and HDD storage on the Dell PowerScale platform, with a total usable capacity of approximately 5 PB. The Home, Scratch, and Project spaces all reside on the same Dell PowerScale system.

Home space
  • Location of /home directories.
  • Each /home directory has a small fixed quota.
  • Not allocated via RAS or RAC; larger storage requests go to the /project space.
  • Has daily backup.
Scratch space
  • For active or temporary (scratch) storage.
  • Not allocated.
  • Large fixed quota per user.
  • Inactive data will be purged.
Project space
  • Large adjustable quota per project.
  • Has daily backup.
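
To see current usage against these quotas, the Alliance's diskusage_report utility is the usual tool; a minimal sketch, assuming it is also deployed on Vulcan:

  # Summarize usage and quotas for the home, scratch, and project spaces.
  # Assumes the standard Alliance diskusage_report utility is available on Vulcan.
  diskusage_report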

Network Interconnects

Standard Compute nodes are interconnected by 100 Gbps Ethernet with RoCE (RDMA over Converged Ethernet) enabled.

Scheduling

The Vulcan cluster uses the Slurm scheduler to run user workloads. The basic scheduling commands are similar to those on the other national systems.
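
For illustration, a minimal GPU job script might look like the sketch below; the account name is a placeholder, the module and workload lines are examples only, and the resource sizes are simply fractions of a Standard Compute node:

  #!/bin/bash
  #SBATCH --account=def-someuser    # placeholder; use your own allocation account
  #SBATCH --gpus-per-node=1         # request one GPU on a Standard Compute node
  #SBATCH --cpus-per-task=16        # a quarter of the node's 64 cores
  #SBATCH --mem=120G                # roughly a quarter of the node's 512 GB
  #SBATCH --time=1-00:00:00         # wall time; must stay within the 7-day maximum

  module load python                # example module only; load what your job needs
  python train.py                   # placeholder workload

Submit with sbatch job.sh, check the queue with squeue -u $USER, and cancel with scancel <jobid>, just as on the other national systems.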

Software

  • Module-based software stack.
  • Both the standard Alliance software stack and cluster-specific software are available.
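
A typical module workflow looks like the following sketch; the package name and the use of Lmod's spider subcommand are illustrative assumptions, not a statement of what is installed:

  module avail          # list modules visible in the current environment
  module spider gcc     # search the full software stack for a package (Lmod)
  module load gcc       # load a package; append /version to pin a specific version
  module list           # confirm what is currently loaded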