HPC for theoretical physics
Our group constantly develops numerical strategies to benefit from progress in HPC systems, with an emphasis on the efficient exploitation of multi- and many-core architectures. We are currently involved in the development of a new multi-level computational strategy based on the multi-boson domain-decomposed Hybrid Monte Carlo. Our future plans include developing this strategy further, integrating it with the master-field paradigm, and performing a precise study of string breaking in QCD.
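To illustrate why the multi-level idea pays off, the sketch below (a Python toy, not our production code; the model, names and parameters are invented for illustration) compares a standard Monte Carlo estimator of a product observable with a two-level one. Two domain observables fluctuate independently around a common "boundary" field once the boundary is frozen, mimicking the factorization that the multi-boson domain decomposition provides; the product of independent sub-averages then estimates the same expectation value with a strongly reduced variance.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model (hypothetical): two lattice domains coupled only through a
# frozen "boundary" field b. Each domain observable fluctuates around b
# with independent local noise, so <O1*O2> = <b^2> = 1 exactly.
N_cfg = 1000   # level-0 (global) configurations
n_sub = 50     # level-1 (sub-domain) updates per configuration
sigma = 5.0    # local noise amplitude inside each domain

std_est, ml_est = [], []
for _ in range(N_cfg):
    b = rng.normal()                      # level-0 update: new boundary field
    # level-1 updates: each domain is re-sampled with b held fixed
    O1 = b + sigma * rng.normal(size=n_sub)
    O2 = b + sigma * rng.normal(size=n_sub)
    std_est.append(O1[0] * O2[0])         # standard: one measurement per cfg
    ml_est.append(O1.mean() * O2.mean())  # multi-level: product of sub-averages

for name, est in (("standard", std_est), ("multi-level", ml_est)):
    est = np.asarray(est)
    err = est.std(ddof=1) / np.sqrt(N_cfg)
    print(f"{name:12s} <O1*O2> = {est.mean():.3f} +/- {err:.3f}")
```

In the real algorithm the independent sub-updates are made possible by factorizing the fermion determinant across the domains; the toy only mimics the resulting variance reduction, which scales roughly like 1/n_sub per factorized region.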
APE Project
The Computational Physics team of the Bicocca Theory Group has wide experience in the study of computing architectures for calculation-intensive problems such as the numerical simulation of strong interactions. In the early 1980s, some members of the team were among the pioneers in the development of numerical simulations of Quantum Chromodynamics (QCD) in the non-perturbative regime, and were also involved in the design and manufacturing of four generations of computers with an architecture specifically suited to the calculation structures typical of numerical simulations of QCD on a lattice (LQCD). Starting with the first machine, APE (1985), which delivered a peak performance of 1 GFlops, the project reached 800 GFlops with apeNEXT (2002), with two machines of intermediate power in between.
These computers gave the Italian LQCD community a tool to obtain physics results that, in terms of statistical and systematic reliability, were among the best available at the time and impossible to attain with ordinary computing facilities.
The APE project was funded throughout by INFN and soon became an international collaboration with installations in many European research centers.
A description of the project and useful references for further insight can be found in:
Software
Our group is directly involved in the development of highly parallel codes, optimized for the latest-generation architectures and with excellent scalability. Moreover, we have actively contributed to several other publicly available packages, such as openQCD, DDHMC, Grid and gpt. Finally, we authored and currently maintain the open-source analysis package pyobs.
Local resources

Research and development: cluster KNUTH
- 12 nodes with Intel Xeon Silver, 20 nodes with AMD Epyc 7302
- 1 node with 2x AMD Epyc 7302 and 4x Nvidia A100 SXM4
- 4.5 TB total RAM
- InfiniBand EDR, 100 Gbit/s
- Master node with 2x Intel Xeon 4208 and 192 GB RAM
- Storage: 16 TB SSD
Teaching: cluster WILSON
- 20 nodes with 2x Intel Xeon E5-2630
- 1.3 TB total RAM
- InfiniBand, 40 Gbit/s
- Master node with 2x Intel Xeon E5-2603 and 32 GB RAM
- Storage: 6 TB
- 30 Raspberry Pi 3B+ workstations remotely connected to the cluster (Laboratorio Fisica Computazionale “Marco Comi”)