High Performance Computing

CRS4 also supports the scientific community through High Performance Computing, thanks to a constantly updated computing center. The HPC group assists researchers and users, including those in the industrial sector, providing services tailored to their needs.
The theoretical peak computing power, obtained with x86_64 clusters, GPU accelerators, MIC coprocessors, and FPGAs, exceeds 240 TFlops, distributed as follows (a quick tally is sketched after the list):

  • Dell cluster with AMD FirePro accelerators: 54 TFlops
  • Huawei cluster: 14.5 TFlops
  • Intel Xeon Phi cluster: 10 TFlops
  • NVIDIA Kepler K40 GPUs: 34 TFlops
  • NVIDIA Kepler GPUs: 90 TFlops
  • HP cluster: 34.6 TFlops (low and medium latency)
  • Maxeler FPGA
  • IBM cluster and other resources: 3 TFlops
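As a quick sanity check, the per-system peak figures quoted above can be summed to confirm that the aggregate does exceed 240 TFlops. The Python snippet below is only an illustrative tally of the listed values (the Maxeler FPGA has no quoted figure and is omitted):

    # Per-system theoretical peak figures quoted above, in TFlops.
    peak_tflops = {
        "Dell cluster (AMD FirePro)": 54.0,
        "Huawei cluster": 14.5,
        "Intel Xeon Phi cluster": 10.0,
        "NVIDIA Kepler K40 GPUs": 34.0,
        "NVIDIA Kepler GPUs": 90.0,
        "HP cluster": 34.6,
        "IBM cluster and other resources": 3.0,
    }

    total = sum(peak_tflops.values())
    print(f"Aggregate theoretical peak: {total:.1f} TFlops")  # ~240.1 TFlops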

The most recent CRS4 high performance computing acquisitions are:

  1. Sparky cluster, a Beowulf cluster based on Intel Haswell, with 10 nodes equipped with AMD FirePro 9150 accelerators and InfiniBand FDR interconnects.
  2. General-purpose Huawei cluster, based on Intel Ivy Bridge, with 33 nodes having the following characteristics:
    • Ten-core Intel Xeon E5-2680 v2 CPU, 2.6 GHz
    • 128 GB DDR3 ECC RAM
    • Two Broadcom BCM5720 Gigabit Ethernet ports
    • 600 GB 10,000 rpm hard disk
    • Mellanox MT27500 ConnectX-3 InfiniBand interface
  3. A small Intel Xeon Phi cluster for studying the new Intel processors.
  4. Several small installations, comprising 14 nodes with 28 NVIDIA Kepler GPU accelerators. This cluster has low-latency InfiniBand QDR connections, between 64 GB and 128 GB of RAM per node, and more than 100 TFlops of theoretical computing power.

The center hosts several computing clusters, including a medium-sized installation composed of 400 dual quad-core nodes, for a total of 3200 computing cores.
This cluster is divided into two subsystems characterized by their network connections: the main subsystem has low-latency, high-bandwidth connections, while the other has medium-latency connections. A small set of nodes is dedicated to providing services for the two clusters.
These service nodes have both low- and medium-latency network connections.

In detail, the cluster is divided as follows (a brief consistency check is sketched after the list):

  • 256 compute nodes with InfiniBand/Ethernet connections - low-latency cluster
  • 128 compute nodes with Ethernet connections - medium-latency cluster
  • 16 service nodes with Fibre Channel/InfiniBand/Ethernet connections - cluster service nodes
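The breakdown above is consistent with the earlier description of the medium installation: the three groups add up to 400 nodes, and two quad-core CPUs per node give 3200 computing cores. A minimal arithmetic sketch, using only the numbers quoted in this section:

    # Node groups of the medium installation, as listed above.
    infiniband_nodes = 256  # low-latency compute nodes
    ethernet_nodes = 128    # medium-latency compute nodes
    service_nodes = 16      # Fibre Channel/InfiniBand/Ethernet service nodes

    total_nodes = infiniband_nodes + ethernet_nodes + service_nodes
    cores_per_node = 2 * 4  # dual quad-core, per the description above
    total_cores = total_nodes * cores_per_node

    print(total_nodes)  # 400
    print(total_cores)  # 3200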

All the compute nodes, both Ethernet and InfiniBand, are HP BL460c blades.

The service nodes are HP BL480c blades.
