P1 DTU HPC

The P1 DTU HPC is hosted at DTU and provides high-performance computing resources for P1 members (PhD and above). It is particularly suitable for medium to large-scale machine learning experiments and research projects.

Requirements: PhD or higher (exceptions may apply), a valid Danish university email, and a registered P1 affiliation.

Access is granted in three steps:

  1. P1 Affiliation Form
    Before accessing the P1 DTU HPC, you must first register to become a member of P1.
  2. P1 HPC Access Exception Form
P1 affiliates at PhD level and above can skip this step. If you are a P1 member enrolled in an MSc programme, working as a research assistant (RA), or in a similar position, you are only eligible for access with written approval from a responsible person (a P1 co-lead or faculty member) who can vouch for your request. Fill in the following form and have the responsible person send a confirmation to the governance mail at compute-governance-p1@aicentre.dk so they can audit the request.
  3. DTU Account Signup Form
Fill out the DTU account request form below. Only signups using an official university email address are accepted. The form is processed by Henning Christiansen, head of DTU’s compute center, and you will receive your account details via email once the account has been created.

Cluster Access

The compute cluster is accessible at login9.hpc.dtu.dk via SSH. Note that:

  • Home directories have limited storage (30 GB)
  • Additional storage is available at /dtu/p1/
  • An interactive node is available for package installation and test runs
  • Heavy jobs should be submitted as batch jobs (see the example job script under Hardware Specifications below)

To connect to the cluster:

  1. Download the Cisco AnyConnect VPN client (see OpenConnect for Linux)
  2. Go to https://dtubasen.dtu.dk and sign in via Azure multi-factor auth using your full DTU username
  3. Set up multi-factor authentication
  4. Connect to vpn.dtu.dk using AnyConnect
  5. SSH to login9.hpc.dtu.dk using your DTU credentials
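
Once the VPN is connected, a first login might look like this (your_dtu_username is a placeholder; use the username from your account confirmation email):

    ssh your_dtu_username@login9.hpc.dtu.dk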

For persistent access, you can set up SSH keys:

  1. Generate key

    ssh-keygen -t ed25519 -f ~/.ssh/keyname
  2. Copy public key

    ssh-copy-id -i ~/.ssh/keyname.pub username@login9.hpc.dtu.dk
  3. Connect

    ssh -i ~/.ssh/keyname username@login9.hpc.dtu.dk

TIP: Consider setting up an SSH host alias for login9.hpc.dtu.dk in your ~/.ssh/config file to make connecting to the cluster easier.
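
A minimal sketch of such an alias (the alias name p1, the username, and the key filename are example choices, not fixed by the cluster):

    # ~/.ssh/config
    Host p1
        HostName login9.hpc.dtu.dk
        User your_dtu_username
        IdentityFile ~/.ssh/keyname

With this in place, ssh p1 is enough to connect.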

Technical Support

For technical issues, contact the DTU HPC Support Team

Policy Support

For policy issues, contact compute-governance-p1@aicentre.dk

General Questions

Use the #compute or #ask-compute channels on P1 Slack

Compute Coordinator

Contact bstja@dtu.dk for general or technical compute-related questions

For more technical information, refer to the P1 compute cluster documentation at DTU DCC.

Fair Use

The following rules are in place to ensure fair use of the P1 DTU HPC:

  • Maximum wall time: 72 hours
  • Maximum GPUs per job: 2 (one node)
  • Maximum concurrent usage: roughly 50% of the total available GPUs
  • Storage: 500 GB of shared project storage (plus 30 GB in your home directory)

If your project requires more storage than the above, please contact the governance group at compute-governance-p1@aicentre.dk to discuss your needs.
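
To check your own footprint against these limits, the standard LSF commands are available once logged in (a sketch; the exact output depends on the cluster’s LSF configuration):

    bjobs            # list your pending and running jobs
    bqueues p1 p1i   # show current load and limits for the P1 queues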

Hardware Specifications

  • 7 Lenovo ThinkSystem SR665 V3 servers
  • Specifications per node:
    • 2 AMD EPYC 9354 32-core processors
    • 768 GB RAM
    • 2 NVIDIA H100 PCIe GPUs (80 GB each)
  • Storage: 60 TiB shared storage
  • Operating System: AlmaLinux
  • Scheduling Environment: LSF
  • Resource Allocation:
    • 7 nodes available for batch jobs (queue: p1)
    • 1 node reserved for interactive usage (queue: p1i)
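
Since scheduling is handled by LSF, batch jobs are submitted with bsub. The following is a minimal sketch of a job script for the p1 queue; the job name, resource numbers, and the train.py payload are placeholders to adapt to your experiment:

    #!/bin/bash
    #BSUB -q p1                                # P1 batch queue
    #BSUB -J my_experiment                     # job name (placeholder)
    #BSUB -n 8                                 # CPU cores
    #BSUB -R "rusage[mem=8GB]"                 # memory per core
    #BSUB -gpu "num=1:mode=exclusive_process"  # request one H100
    #BSUB -W 24:00                             # wall time hh:mm (max 72:00)
    #BSUB -o %J.out                            # stdout (%J expands to the job ID)
    #BSUB -e %J.err                            # stderr

    python train.py                            # placeholder workload

Submit it with bsub < jobscript.sh. For short interactive work, bsub -q p1i -Is bash opens a shell on the interactive node instead.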