Facilities
Our group has access to cutting-edge supercomputing facilities for our calculations.
Trinity College High-Performance Computing (TCHPC)
Lonsdale
- Available to: TCD Researchers
- Processor type: AMD Opteron
- Architecture: 64-bit
- Number of nodes: 154
- Sockets per node: 2
- Cores per socket: 4
- Total number of cores: 1,232
- Clock speed: 2.30 GHz
- RAM per node: 16 GB
- Interconnect: InfiniBand DDR
- Theoretical peak performance: 11.33 TF
- Linpack score: 8.9 TF
Kelvin
- Available to: Irish Researchers
- Processor type: Intel
- Architecture: 64-bit
- Number of nodes: 100
- Sockets per node: 2
- Cores per socket: 6
- Total number of cores: 1,200
- Clock speed: 2.66 GHz
- RAM per node: 24 GB
- Interconnect: QLogic InfiniBand QDR
- Theoretical peak performance: 12.76 TF
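The theoretical peak figures quoted for Lonsdale and Kelvin follow from the standard formula peak = nodes x sockets x cores per socket x clock speed x FLOPs per cycle. Below is a minimal sketch of that calculation; the figure of 4 double-precision FLOPs per cycle per core is our assumption (typical of SSE-era Opteron and Westmere-class Xeon CPUs), not a number published by TCHPC.

    # Theoretical peak = nodes x sockets x cores/socket x GHz x FLOPs/cycle.
    # Assumption (not from TCHPC): 4 double-precision FLOPs/cycle per core.
    def peak_tflops(nodes, sockets, cores_per_socket, clock_ghz, flops_per_cycle=4):
        return nodes * sockets * cores_per_socket * clock_ghz * flops_per_cycle / 1000.0

    print(peak_tflops(154, 2, 4, 2.30))  # Lonsdale: ~11.33 TF
    print(peak_tflops(100, 2, 6, 2.66))  # Kelvin:  ~12.77 TF

Lonsdale's measured Linpack score of 8.9 TF is roughly 79% of its theoretical peak, a typical efficiency for Linpack runs on clusters of this kind.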
Dalton
- Available to: CCEM Researchers
- Processor type: Intel
- Architecture: 64-bit
- Number of nodes: 4
- Total number of cores: 80
- Clock speed: 2.20 GHz
- RAM per node: 48 GB
- Theoretical peak performance: 3.34 TF
VIS-Room
Through the TCHPC centre, we also have access to state-of-the-art graphics technology for visualizing our results. The VIS-Room provides 3D visualization and panoramic-style projections, and is equipped with a surround-sound system.
Irish Centre for High-End Computing (ICHEC)
At CCEM, we are privileged to have access to ICHEC's main supercomputing facility, Kay, which comprises five main computing services:
- Cluster, a partition of 336 nodes, where each node has 2x 20-core 2.4 GHz Intel Xeon Gold 6148 (Skylake) processors, 192 GiB of RAM, a 400 GiB local SSD for scratch space and a 100 Gbit Omni-Path network adaptor. This partition has a total of 13,440 cores and 63 TiB of distributed memory (see the sanity-check sketch after this list).
- GPU, a partition of 16 nodes with the same specification as above, plus 2x NVIDIA Tesla V100 16GB PCIe (Volta architecture) GPUs on each node. Each GPU has 5,120 CUDA cores and 640 Tensor Cores.
- Phi, a partition of 16 nodes, each containing 1x self-hosted Intel Xeon Phi Processor 7210 (Knights Landing or KNL architecture) with 64 cores @ 1.3 GHz, 192 GiB RAM and a 400 GiB local SSD for scratch space.
- High Memory, a set of 6 nodes each containing 1.5 TiB of RAM, 2x 20-core 2.4 GHz Intel Xeon Gold 6148 (Skylake) processors and 1 TiB of dedicated local SSD for scratch storage.
- Service & Storage, a set of service and administrative nodes to provide user login, batch scheduling, management, networking, etc. Storage is provided via Lustre filesystems on a high-performance DDN SFA14k system with 1 PiB of capacity.