P1 NGC HPC

The P1 NGC HPC is hosted at the National Genome Centre (NGC) and provides a secure, GDPR-compliant environment for processing sensitive data and research projects.

Requirements: PhD or higher (exceptions may apply), valid Danish university email, and registered P1 affiliation.

Each project must bring a record, typically a signed Data Processing Agreement, that explicitly names NGC as a data processor and confirms that the data may be stored there. If the project poses a high risk to the individuals whose personal data is processed, a Data Protection Impact Assessment (DPIA) is also required.

  1. P1 Affiliation Form
    Before accessing the P1 NGC HPC, you must first register to become a member of P1.
  2. NGC user creation form
    Complete and sign the form and forward it to the Compute Coordinator to request access.

You will be added to the NGC Slack channel once you gain access.

The P1 NGC HPC is an air-gapped system requiring:

  • Multi-factor authentication (MFA)
  • A client for accessing the remote VM entrypoint
  • SFTP for transferring data into the system (specific access instructions will follow after registration)

Once connected with the Omnissa Remote Desktop client, you can access the login node using ssh -X <your-username>@login.

Important hosts:
  • https://console.cld076.vmc/status (internal status page)
  • cld076-0004.cld076.vmc (internal SFTP)
  • sftp.spc.ngc.dk (external ingress/egress SFTP)

Technical Support

For technical issues, contact the NGC HPC Support Team

Policy Support

For policy issues, contact compute-governance-p1@aicentre.dk

General Questions

Use the #compute or #ask-compute channels on P1 Slack

Compute Coordinator

Contact bstja@dtu.dk for general or technical compute-related questions

System Overview

  • Air-gapped system for secure data processing
  • GDPR compliant infrastructure
  • Secure storage solutions
  • Specific hardware details available upon access approval
  • Scheduling Environment: SLURM
  • Resource allocation details provided during onboarding

You can transfer sensitive and large data to the cluster using SFTP under the supervision of an admin. You will need to request access to the /data/upload directory, which acts as a data gateway.

It is then recommended to set up an SSH entry in your ~/.ssh/config:

Host ngc
    HostName sftp.spc.ngc.dk
    Port 6433
    User <your-username>_sftp
    HostKeyAlgorithms +ssh-rsa

From here you can connect with sftp ngc, then put files into the /data/upload directory from the outside and get them from the inside. During the transfer period the data is only accessible to you and the admins.
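A full round trip might look like the sketch below, using the ngc alias above and the internal SFTP host from the list of important hosts; ISLES-2022.zip is just an example file name, reused in the scp commands further down:

# on your own machine: upload into the data gateway
sftp ngc
sftp> put ISLES-2022.zip /data/upload/
sftp> exit

# inside the cluster: fetch the file from the gateway
sftp <your-username>_sftp@cld076-0004.cld076.vmc
sftp> get /data/upload/ISLES-2022.zip
sftp> exit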

As an alternative to SFTP, you can use scp to transfer the data to the cluster, which is arguably easier:

scp ~/datasets/ISLES-2022.zip ngc:/data/upload/

Then inside the cluster you can transfer the data to your home directory using:

scp <your-username>_sftp@cld076-0004.cld076.vmc:/data/upload/ISLES-2022.zip ~/datasets/

For miscellaneous and small data such as personal dotfiles or source code you can:

  • Transfer via SFTP (tedious and requires admin supervision)
  • Use the internal server running a simple GitHub proxy/tunnel for public repositories.
  • Mount a host directory using the Omnissa Remote Desktop client (must be enabled by an NGC admin)
  • SSH into the admin node (if you have access to it)

Your host clipboard works into the remote desktop client but not the other way around. You can take screenshots of the remote display during the session.

Navigate to https://cld076-0006.cld076.vmc on the internal network.

# Point conda at the internal NGC Nexus mirror and trust the internal root CA
conda config --add channels https://vmc-nexus-01.ngc.vmc/repository/ngc-cloud-conda/main
conda config --remove channels defaults
conda config --set ssl_verify /etc/pki/ca-trust/source/anchors/ngc-cloud-root-ca.pem

# Create a virtual environment, install Poetry, and point it at the internal
# NGC PyPI mirror (replace MYPATH with where you want the environment)
export VENV_PATH=MYPATH
python3 -m venv $VENV_PATH
$VENV_PATH/bin/pip install -U pip setuptools
$VENV_PATH/bin/pip install poetry
$VENV_PATH/bin/poetry source add --priority=primary NGC https://vmc-nexus-01.ngc.vmc/repository/ngc-cloud-pypi/simple
$VENV_PATH/bin/poetry config certificates.NGC.cert false
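Once configured, installs should resolve against the internal mirrors; a small sketch with placeholder package names (the Poetry command assumes you are inside a Poetry project):

# install a package from the internal conda channel
conda install numpy
# add a dependency from the internal PyPI mirror
$VENV_PATH/bin/poetry add requests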

In MobaXterm you can start an interactive session by using iqsub.

# Quick'n dirty interactive session
iqsub
# example: interactive session for a user in group named ngc-bio, 1 node and 4
# CPUs, 20 GB memory and running for 2 hours
qsub -I -X -W group_list=ngc-bio -A ngc-bio -l nodes=1:ppn=4,mem=20gb,walltime=02:00:00
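For non-interactive work, the same resource options can go into a small PBS batch script; this is only a sketch, with the ngc-bio group reused from the example above and run_analysis.py standing in for your own program:

#!/bin/sh
#PBS -W group_list=ngc-bio
#PBS -A ngc-bio
#PBS -N my-analysis
#PBS -l nodes=1:ppn=4,mem=20gb,walltime=02:00:00
# start from the directory the job was submitted from
cd $PBS_O_WORKDIR
python3 run_analysis.py

Submit the script with qsub <script-name> and follow it with the qstat commands below.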

To see info about your user, including the department/group(s) you belong to, run:

id

# List info about available nodes
pbsnodes

# fat nodes
showbf -f fatnode
# thin nodes
showbf -f thinnode

# Get all jobs
qstat -a
# Get extended status of a specific job
qstat -f <jobid>
# Using a helper script
checkjob -v <jobid>

# get the status of your submitted jobs (Q: in queue but not yet running;
# R: running, C: complete)
qstat
# get extended status of your submitted jobs like requested resources and
# elapsed time
qstat -a
# see the nodes allocated to each run
qstat -n
# get the status of a specific job
qstat <jobid>
# note: jobid is the value in the first column of output from qstat
# more info about a specific job
qstat -f <jobid>
# with this option you can check used time and memory of a completed job
# (see resources_used.walltime and resources_used.mem)
# to get more options, check the manual
man qstat
# view the qsub queue, including jobs from all users
showq
# view running jobs (+ give additional info, including an estimate on how
# efficiently they are using the CPUs)
showq -r
# this will give you more information about your eligible jobs
showq -i
# for more options with the showq command:
showq -h
# if you want to check the status of a particular jobid
checkjob <jobid>
# add the -v flag to increase the verbosity
checkjob -v <jobid>
# check when a job will run
showstart <jobid>
# check resource usage of completed job (privileged command)
tracejob -v <jobid>
# to cancel a job, first find the job-id using qstat, then type:
qdel <jobid>
# for more options with the qdel command:
man qdel
# to cancel a job using mjobclt, first find the job-id using qstat, then type:
mjobctl -c <jobid>