Gefion HPC (Partner)
The Gefion HPC is a national-level facility that can support both Tier 2 and some Tier 1 workloads. It is particularly suitable for large-scale model training and distributed computing tasks.
Playground Compute Vouchers (WIP)
P1 has allocated a portion of its compute budget to support researchers through "Gefion Playground" vouchers. These vouchers let researchers test their workloads' compatibility with the Gefion HPC system and explore the possibilities of large-scale computing.
Submit Project Proposal
Forward a copy of your project proposal for a quick assessment.
- Briefly describe your project, data and workload with clear objectives and scope
- Include specific compute requirements and estimated timeline
- State whether you have applied for or received NNF credits or similar funding
- Optionally, share a link to your project repository; while this is not required, it helps us provide more personalized guidance and tailored support for your specific use case
Wait for Review
We'll assess your proposal and project requirements.
- Assessment of alignment with P1 research priorities
- If we have access to your project repository, we can gain more detailed insight into how best to support you when onboarding to Gefion
- If approved, we contact Gefion to initiate the onboarding process
Start Onboarding
Once approved, you'll receive access credentials and onboarding instructions to begin using Gefion.
- Receive access credentials and login instructions
- Complete technical onboarding with Gefion team
- Set up your development environment
- Begin testing your code with compute voucher credits
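Since Gefion schedules jobs with SLURM (see the Specification section), an early smoke test usually involves making your code aware of the SLURM environment it runs under. A minimal sketch, assuming nothing beyond the standard `SLURM_*` output environment variables (the helper name and fallback values are illustrative):

```python
import os

def slurm_dist_config():
    """Read distributed-job settings from SLURM's standard output
    environment variables, falling back to single-process defaults
    so the same script also runs on a laptop without SLURM."""
    return {
        "rank": int(os.environ.get("SLURM_PROCID", 0)),        # global task index
        "world_size": int(os.environ.get("SLURM_NTASKS", 1)),  # total task count
        "local_rank": int(os.environ.get("SLURM_LOCALID", 0)), # task index on this node
        "node": os.environ.get("SLURMD_NODENAME", "localhost"),
    }

if __name__ == "__main__":
    print(slurm_dist_config())
```

Running this under `srun` prints a different rank per task; running it locally prints the single-process defaults, which makes it a cheap first check before spending voucher credits on a full training job.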
Support
Technical Support
For technical issues, contact the DCAI Support Team
Policy Support
For policy issues, contact compute-governance-p1@aicentre.dk
General Questions
Use the #compute or #ask-compute channels on P1 Slack
Compute Coordinator
Contact bstja@dtu.dk for general or technical compute-related questions
Specification
- NVIDIA DGX SuperPOD
- Multiple DGX nodes (8 × H100 each), for a total of 1,528 H100 GPUs with high-performance interconnects
- Large-scale GPU resources for distributed training
- High-bandwidth storage solutions
- Scheduling Environment: SLURM
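Because the scheduling environment is SLURM, work is submitted as batch jobs. A hypothetical job-script sketch for one DGX node; the partition name, time limit, and `train.py` entry point are placeholders, so substitute the values from your Gefion onboarding material:

```shell
#!/bin/bash
#SBATCH --job-name=gefion-smoke-test
#SBATCH --nodes=1
#SBATCH --gres=gpu:8            # one full DGX node: 8 x H100
#SBATCH --ntasks-per-node=8     # one task per GPU
#SBATCH --time=00:30:00
#SBATCH --output=%x-%j.out

# srun launches one process per task and sets SLURM_PROCID,
# SLURM_NTASKS, etc. for each of them.
srun python train.py
```

Submit with `sbatch job.sh` and monitor with `squeue --me`; starting with a short time limit like this keeps early experiments from consuming voucher credits unnecessarily.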