- Geographic Information Systems
- High Performance Computing
- Networks & Security
- System Management & Storage
The theoretical peak computing power, provided by x86_64 clusters, GPU accelerators, MIC coprocessors, and FPGAs, exceeds 240 TFlops, divided as follows:
- Dell cluster with AMD FirePro accelerators: 54 TFlops
- Huawei cluster: 14.5 TFlops
- Intel Phi cluster: 10 TFlops
- Nvidia Kepler K40 GPUs: 34 TFlops
- Nvidia Kepler GPUs: 90 TFlops
- HP cluster: 34.6 TFlops (low & medium latency)
- Maxeler FPGA
- IBM cluster and other resources: 3 TFlops
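As a quick sanity check, the per-system figures listed above can be summed to verify the aggregate claim (a minimal sketch; the dictionary and its labels are only for illustration, with values taken from the list above):

```python
# Theoretical peak performance of each system, in TFlops, as listed above.
systems_tflops = {
    "Dell / AMD FirePro": 54.0,
    "Huawei": 14.5,
    "Intel Phi": 10.0,
    "Nvidia Kepler K40": 34.0,
    "Nvidia Kepler": 90.0,
    "HP": 34.6,
    "IBM and other resources": 3.0,
}

total = sum(systems_tflops.values())
print(f"Aggregate theoretical peak: {total:.1f} TFlops")  # about 240 TFlops
```

The sum comes out slightly above 240 TFlops, consistent with the "exceeds 240 TFlops" statement.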
The most recent CRS4 high performance computing acquisitions are:
- Sparky cluster: a Beowulf cluster based on Intel Haswell, with 10 nodes equipped with AMD FirePro 9150 accelerators and InfiniBand FDR connections.
- Huawei general-purpose cluster, based on Intel Ivy Bridge, with 33 nodes, each with the following characteristics:
- Ten-core Intel E5-2680 v2 CPU, 2600 MHz
- 128 GB DDR3 ECC RAM
- Two Gigabit Ethernet ports (Broadcom BCM5720)
- 600 GB, 10000 rpm hard disk
- Mellanox MT27500 ConnectX-3 InfiniBand interface
- A small Intel Phi cluster for studying the new Intel processor
- Several small installations, totaling 14 nodes with 28 Nvidia Kepler GPU accelerators. These clusters have low-latency connections (InfiniBand QDR), between 64 GB and 128 GB of RAM per node, and more than 100 TFlops of theoretical computing power.
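The theoretical peak of a cluster like the Huawei one above can be estimated from its specifications as nodes × sockets × cores × clock × FLOPs/cycle. The sketch below assumes dual-socket nodes and 8 double-precision FLOPs per cycle per core (typical of AVX on Ivy Bridge); neither figure is stated in the text:

```python
# Rough theoretical peak estimate for the Huawei cluster described above.
nodes = 33
sockets_per_node = 2   # assumed: dual-socket nodes (not stated in the text)
cores_per_cpu = 10     # ten-core E5-2680 v2, as listed
clock_ghz = 2.6        # 2600 MHz, as listed
flops_per_cycle = 8    # assumed: AVX double precision on Ivy Bridge

peak_gflops = nodes * sockets_per_node * cores_per_cpu * clock_ghz * flops_per_cycle
print(f"Estimated peak: {peak_gflops / 1000:.1f} TFlops")
```

Under these assumptions the estimate lands in the same range as the 14.5 TFlops quoted for this cluster.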
The center hosts several computing clusters, including a medium-sized installation composed of 400 dual quad-core nodes, 3200 computing cores in total.
This cluster is divided into two subsystems characterized by their type of network connection: the larger subsystem has low-latency, high-bandwidth connections, while the other has medium-latency connections. A small set of nodes is dedicated to providing services for the two clusters.
These service nodes have both low- and medium-latency network connections.
In detail, the cluster is divided as follows:
- 256 compute nodes with InfiniBand / Ethernet connections - low-latency cluster
- 128 compute nodes with Ethernet connections - medium-latency cluster
- 16 service nodes with Fibre Channel / InfiniBand / Ethernet connections - cluster service nodes
All the compute nodes, both Ethernet and InfiniBand, are HP BL460c blades.
The service nodes are HP BL480c blades.
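The node breakdown above can be checked against the totals stated earlier: the three groups should account for the 400 nodes, and dual quad-core nodes contribute 8 cores each (a minimal sketch using only figures from the text):

```python
# Consistency check on the node breakdown above.
low_latency = 256
medium_latency = 128
service = 16

total_nodes = low_latency + medium_latency + service
cores = total_nodes * 2 * 4  # dual quad-core: 2 CPUs per node, 4 cores each

print(total_nodes, cores)  # 400 3200
```

This matches the 400 nodes and 3200 computing cores quoted for the installation.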