HPC

Note: For members of the UCLouvain/MODL division, you can also find compilation instructions on our intranet:
1. Connect to the intranet
2. Go to https://uclouvain.be/en/research-institutes/imcn/modl
3. Click on "Intranet"
4. Click on the document "Compilation and use of codes on clusters"

LUMI Cluster

Quantum Espresso and EPW using the GNU compiler (Courtesy of Junfeng Qiao) 

First load the following modules 

module purge
module load PrgEnv-gnu/8.3.3
module load craype-x86-milan
module load cray-libsci/23.02.1.1
module load cray-fftw/3.3.10.3
module load cray-hdf5-parallel/1.12.2.3
module load cray-netcdf-hdf5parallel/4.9.0.3
 Then go inside the QE folder and issue:
./configure --enable-parallel --with-scalapack=yes \
CC=cc CXX=CC FC=ftn F77=ftn F90=ftn MPICC=cc MPICXX=CC MPIF77=ftn MPIF90=ftn \
SCALAPACK_LIBS="-L$CRAY_LIBSCI_PREFIX_DIR/lib -lsci_gnu_mpi" \
BLAS_LIBS="-L$CRAY_LIBSCI_PREFIX_DIR/lib -lsci_gnu_mpi" \
LAPACK_LIBS="-L$CRAY_LIBSCI_PREFIX_DIR/lib -lsci_gnu_mpi" \
FFT_LIBS="-L$FFTW_ROOT/lib -lfftw3" \
HDF5_LIBS="-L$HDF5_ROOT/lib -lhdf5_fortran -lhdf5hl_fortran"

Then you have to manually edit the make.inc file and add the "-D__FFTW" flag:

DFLAGS         = -D__FFTW  -D__MPI -D__SCALAPACK -fallow-argument-mismatch
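If you prefer to script this edit, a one-liner such as the following should work (a sketch; it assumes -D__MPI is present on the DFLAGS line and -D__FFTW is not already set):

sed -i '/^DFLAGS/s/-D__MPI/-D__FFTW -D__MPI/' make.inc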

Then you can compile with

make epw 
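A minimal LUMI batch-script sketch for running the resulting epw.x (the partition, account, pool count, and paths are placeholders to adapt to your allocation):

#!/bin/bash
#SBATCH --job-name=epw
#SBATCH --partition=standard    # placeholder
#SBATCH --account=project_XXX   # placeholder
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128
#SBATCH --time=01:00:00

# load the same modules as for the compilation, then:
srun /path/to/q-e/bin/epw.x -npool 128 -input epw.in > epw.out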

Abinit 9.6.2 with the GNU compiler and NETCDF-IO + aggressive optimisation

First load the following modules 

module load LUMI/21.12
module load PrgEnv-gnu/8.2.0
module load cray-libsci/21.08.1.2
module load cray-mpich/8.1.12
module load cray-hdf5-parallel/1.12.0.7
module load cray-netcdf-hdf5parallel/4.7.4.7
module load cray-fftw/3.3.8.12

Using the following lumi_gnu.ac file:

FCFLAGS="-ffree-line-length-none -fallow-argument-mismatch -g $FCFLAGS"FFLAGS=" $FFLAGS"
enable_openmp="no"
# Ensure MPIwith_mpi="yes"enable_mpi_io="yes"enable_mpi_inplace="yes"FC=ftnCC=ccCXX=CCF90=ftn
# FFTWwith_fft_flavor="fftw3"with_fftw3=$FFTW_DIR
# libxc support
with_libxc="/scratch/project_465000061/jubouqui/libxc-5.2.3/build"
# hdf5/netcdf4 support
with_netcdf="${CRAY_NETCDF_HDF5PARALLEL_PREFIX}"
with_netcdf_fortran="${CRAY_NETCDF_HDF5PARALLEL_PREFIX}"
with_hdf5="${CRAY_HDF5_PARALLEL_PREFIX}"

Then issue:

export LD_LIBRARY_PATH="$CRAY_LD_LIBRARY_PATH:$LD_LIBRARY_PATH"
../configure --with-config-file='lumi_gnu.ac' --with-optim-flavor='aggressive'

Abinit will then complain that libxc is missing. Go inside the created fallbacks directory and compile the fallback (see the sketch after the commands below). Once this is done, update the lumi_gnu.ac file and re-issue:

./configure --with-config-file='lumi_gnu.ac' --with-optim-flavor='aggressive'
make
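The fallback step itself follows the same pattern as in the Zenobe and Lucia sections below (a sketch, run from the build directory):

cd fallbacks
./build-abinit-fallbacks.sh
cd ..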

Abinit 9.8.2

module --force purge
module load LUMI/22.08
module load init-lumi/0.2
module load lumi-tools/23.02
module load PrgEnv-gnu/8.3.3
module load cray-libsci/22.08.1.1
module load cray-mpich/8.1.18
module load cray-hdf5/1.12.1.5
module load cray-fftw/3.3.10.1

Using the following lumi_gnu.ac file:

FCFLAGS="-ffree-line-length-none -fallow-argument-mismatch -g $FCFLAGS"FFLAGS=" $FFLAGS"
enable_openmp="no"
# Ensure MPIwith_mpi="yes"enable_mpi_io="yes"enable_mpi_inplace="yes"FC=ftnCC=ccCXX=CCF90=ftn
# fftw3 (sequential)with_fft_flavor="fftw3"FFTW3_CFLAGS="-I${FFTW_INC}"FFTW3_FCFLAGS="-I${FFTW_INC}"FFTW3_LDFLAGS="-L${FFTW_DIR} -lfftw3 -lfftw3f"
#with_libxc=XX#with_hdf5=XX#with_netcdf=XX#with_netcdf_fortran=XX#with_xmlf90=XX#with_libpsml=X

Then issue:

../configure --with-config-file='lumi_gnu.ac' --with-optim-flavor='aggressive'

Abinit will then complain that some libraries are missing; go inside the created fallbacks directory and compile the fallbacks (as sketched in the previous section). Once this is done, update the lumi_gnu.ac file (replace the XX placeholders with the fallback install paths and uncomment the lines) and re-issue:

./configure --with-config-file='lumi_gnu.ac' --with-optim-flavor='aggressive'
make
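To check the resulting build, you can run the Abinit test suite from the build directory, as also done in the DISCOVERER section below:

cd tests/
python3 ../../tests/runtests.py -j16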

MareNostrum5 Cluster

Abinit 10.0.7 with the GNU compiler

First load the following modules

module purge
module load gcc/13.2.0
module load openmpi/4.1.5-gcc
module load openblas/0.3.27-gcc
module load libxc/6.2.0-gcc
module load hdf5/1.14.1-2-gcc-openmpi
module load pnetcdf/1.12.3-gcc-openmpi
module load netcdf/c-4.9.2_fortran-4.6.1_cxx4-4.3.1_hdf5-1.14.1-2_pnetcdf-1.12.3-gcc-openmpi

Then issue the following command in your build directory:

../configure FC=mpif90 CC=mpicc --with-netcdf-fortran=/apps/GPP/NETCDF/c-4.9.2_fortran-4.6.1_cxx4-4.3.1_hdf5-1.14.1-2_pnetcdf-1.12.3/GCC/OPENMPI/
make
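To run the resulting binary under SLURM, a minimal launch sketch modeled on the run lines used elsewhere on this page (the task count and binary path are placeholders):

srun -n 112 /path/to/abinit-10.0.7/src/98_main/abinit input.in > log 2> err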

Zenobe

Abinit 9.8.2

First load the following modules 

module purge
module load EasyBuild/4.3.1
module load compiler/intel/comp_and_lib/2018.1.163
module load intelmpi/5.1.3.181/64
module load Python/3.9.6-foss-2018b

Using the following zenobe.ac file:

with_mpi="${I_MPI_ROOT}/intel64"enable_openmp="no"
FC="mpiifort"  # Use intel wrappers. Important!CC="mpiicc"    # See warning belowCXX="mpiicpc"CFLAGS="-O2 -g"CXXFLAGS="-O2 -g"FCFLAGS="-O2 -g"
# BLAS/LAPACK with MKLwith_linalg_flavor="mkl"
LINALG_CPPFLAGS="-I${MKLROOT}/include"LINALG_FCFLAGS="-I${MKLROOT}/include"LINALG_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl"
# FFT from MKLwith_fft_flavor="dfti"
FFT_CPPFLAGS="-I${MKLROOT}/include"FFT_FCFLAGS="-I${MKLROOT}/include"FFT_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl"
# fallbacks !!!!! TO BE UPDATED !!!!!with_libxc=XXXX/build/fallbacks/install_fb/intel/18.0/libxc/6.0.0with_hdf5=XXXX/build/fallbacks/install_fb/intel/18.0/hdf5/1.10.8with_netcdf=XXXX/build/fallbacks/install_fb/intel/18.0/netcdf4/4.9.0with_netcdf_fortran=XXXX/build/fallbacks/install_fb/intel/18.0/netcdf4_fortran/4.6.0with_xmlf90=XXXX/build/fallbacks/install_fb/intel/18.0/xmlf90/1.5.6with_libpsml=XXXXbuild/fallbacks/install_fb/intel/18.0/libpsml/1.1.12

Then issue:

../configure --with-config-file='zenobe.ac' --with-optim-flavor='aggressive'

Abinit will then complain that some libraries are missing; build the fallbacks and re-configure:

cd fallbacks
./build-abinit-fallbacks.sh
cd ..
../configure --with-config-file='zenobe.ac' --with-optim-flavor='aggressive'
make

Abipy

You can find a configuration script for Zenobe here: https://abinit.github.io/abipy/workflows/manager_examples.html#zenobe


Lucia

Abinit v9.11.6 (Courtesy of J.-M. Beuken)

module purge
module load Cray/22.09 PrgEnv-cray/8.3.3 cray-hdf5/1.12.2.1 cray-netcdf/4.9.0.1 cray-fftw/3.3.8.13
cd abinit/
mkdir build
cd build
../configure
cd fallbacks
./build-abinit-fallbacks.sh

Abinit v9.8.4 (Courtesy of W. Chen and T. van Waas)

module purge
module load Cray/22.09
ml PrgEnv-gnu cray-fftw cray-hdf5 cray-netcdf
wget https://www.abinit.org/sites/default/files/packages/abinit-9.8.4.tar.gz
tar xzf abinit-9.8.4.tar.gz
cd abinit-9.8.4/
mkdir build
cd build
cat > build.ac9 << EOF
# begin
CC=cc
CXX=CC
FC=ftn
FCFLAGS="-O2 -unroll-aggressive -march=znver3 -fallow-argument-mismatch --free-line-length-none"
CXXFLAGS="-O2"
CFLAGS="-O2"
LINALG_LIBS="-L/opt/cray/pe/libsci/21.08.1.2/GNU/9.1/x86_64/lib -lsci_gnu_mpi -lsci_gnu"
with_mpi_flavor="native"
with_netcdf="/opt/cray/pe/netcdf/4.9.0.1/gnu/9.1"
with_netcdf_fortran="/opt/cray/pe/netcdf/4.9.0.1/gnu/9.1"
with_hdf5="/opt/cray/pe/hdf5/1.12.2.1/gnu/9.1"
with_libxc="/gpfs/projects/acad/elph/share/libxc622"
with_fftw3="/opt/cray/pe/fftw/3.3.10.1/x86_milan"
# end
EOF
../configure --with-config-file='build.ac9'
make -j 16

Then to run, one can use:


srun --mpi=cray_shasta $HOME/programmes/abinit-9.8.4/src/98_main/abinit $1

Abinit v9.8.3

First load the following modules: 

module purge
ml load OpenBLAS/0.3.20-GCC-11.3.0
ml load OpenMPI/4.1.4-GCC-11.3.0
ml load FFTW/3.3.10-GCC-11.3.0

Then do:


mkdir ABINIT
cd ABINIT
wget https://www.abinit.org/sites/default/files/packages/abinit-9.8.3.tar.gz
tar xzf abinit-9.8.3.tar.gz
cd abinit-9.8.3/
mkdir build  && cd build 
../configure
cd fallbacks  && ./build-abinit-fallbacks.sh
cd ..

Then create a "lucia.ac9" file in the build folder with the following lines:

  with_libxc=~/ABINIT/abinit-9.8.3/build/fallbacks/install_fb/gnu/11.3/libxc/6.0.0
  with_hdf5=~/ABINIT/abinit-9.8.3/build/fallbacks/install_fb/gnu/11.3/hdf5/1.10.8
  with_netcdf=~/ABINIT/abinit-9.8.3/build/fallbacks/install_fb/gnu/11.3/netcdf4/4.9.0
  with_netcdf_fortran=~/ABINIT/abinit-9.8.3/build/fallbacks/install_fb/gnu/11.3/netcdf4_fortran/4.6.0

Finally, configure and compile: 

../configure --with-config-file='lucia.ac9'
make -j 12

Abipy

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
conda install numpy scipy netcdf4 pyyaml
conda config --add channels conda-forge
conda install abipy
conda install pymatgen=2022.7.25
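To verify the installation, you can run abipy's built-in check script in the activated environment (assuming a recent abipy version that ships it):

abicheck.py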

Quantum Espresso v7.2  (Courtesy of W. Chen and T. van Waas)

Go into the QE folder which should be unpacked somewhere in your $HOME or $PROJECTS_HOME, then execute:

ml PrgEnv-intel
./configure --with-scalapack=intel CC=cc FC=ftn MPIF90=ftn FFLAGS="-O2 -assume byterecl -unroll" LDFLAGS="-qmkl" CFLAGS="-O2" SCALAPACK_LIBS="-lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64" LAPACK_LIBS="-lpthread -lm"
make -j 8 epw

A job execution, launched somewhere within your $SCRATCH, should look like this:

srun --mpi=cray_shasta $PATHQE/bin/pw.x -npool 4 -in scf.in > scf.out

Lemaitre4

Abinit v9.11.6

module purge
module load Python/3.11.3-GCCcore-12.3.0
module load Automake/1.16.5-GCCcore-12.3.0 Autoconf/2.71-GCCcore-12.3.0 libtool/2.4.7-GCCcore-12.3.0
module load intel-compilers/2023.1.0 libxc/6.2.2-intel-compilers-2023.1.0 netCDF/4.9.2-iimpi-2023a netCDF-Fortran/4.6.1-iimpi-2023a HDF5/1.14.0-iimpi-2023a imkl/2023.1.0 imkl-FFTW/2023.1.0-iimpi-2023a

cd abinit/
mkdir build
cd build
../configure
make -j 8

Abipy


module purge
module load Python/3.11.3-GCCcore-12.3.0 matplotlib/3.7.2-gfbf-2023a
module load Automake/1.16.5-GCCcore-12.3.0 Autoconf/2.71-GCCcore-12.3.0 libtool/2.4.7-GCCcore-12.3.0
module load intel-compilers/2023.1.0 libxc/6.2.2-intel-compilers-2023.1.0 netCDF/4.9.2-iimpi-2023a netCDF-Fortran/4.6.1-iimpi-2023a HDF5/1.14.0-iimpi-2023a imkl/2023.1.0 imkl-FFTW/2023.1.0-iimpi-2023a

wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh
chmod +x Anaconda3-2020.02-Linux-x86_64.sh
./Anaconda3-2020.02-Linux-x86_64.sh
conda create -n abipy
conda update -n base -c defaults conda
conda activate abipy
conda install pyyaml netcdf4
conda config --add channels conda-forge
git clone https://github.com/abinit/abipy
cd abipy
python setup.py install --prefix=/home/ucl/modl/XXX/program/abipy/build

or simply:

conda install abipy

Manneback and the Keira partition

Important: read this carefully 

If you are part of the MODL division of IMCN/UCLouvain, you can get higher priority on the "keira" partition. The keira partition is composed of 16 nodes with 128-core AMD EPYC 7763 processors @ 3.4 GHz (256 threads with SMT) and 6 nodes with 128-core AMD EPYC 7742 processors @ 1.8 GHz, for a total of 2816 cores.
To run with high priority on them, you must be added to the keira group. You can check whether you are part of the group by issuing

sacctmgr show Association | grep "USER_NAME"

which should return a line containing:

keira,normal,preemp+

If this is not the case, contact the CISM to ask to be added to that group. Once this is done, add the following to your slurm scripts (a header sketch follows):

--partition=keira
--qos=keira
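A minimal job-script header using these options (the job name, task count, and walltime are placeholders to adapt):

#!/bin/bash
#SBATCH --job-name=myjob       # placeholder
#SBATCH --partition=keira
#SBATCH --qos=keira
#SBATCH --ntasks=128           # placeholder: one full keira node
#SBATCH --time=24:00:00        # placeholder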

Nic5

Quantum Espresso 7.1 

First load the following modules 

module load foss/2019b
module load HDF5/1.10.5-gompi-2019b
module load ELPA/2019.11.001-foss-2019b
module load libxc/4.3.4-GCC-8.3.0

 Then go inside the QE folder and issue: 

./configure
make all

phono3py

First download Miniconda for Linux from https://docs.conda.io/en/latest/miniconda.html, then install it:

chmod +x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh

Create a virtual environment at the end of the install. Log out and back in to activate the environment.

Install the code following https://phonopy.github.io/phono3py/install.html:

conda install -c conda-forge phono3py

How to run phono3py with QE

Look at the example for Si at https://github.com/phonopy/phono3py/tree/master/example/Si-QE

phono3py --qe -d --dim 2 2 2 -c scf.in --pa auto

This will create all the supercell files. Then create a "header" file for QE that you will concatenate with each supercell file, and run all the calculations, for example using the following submission script:

#!/bin/bash
#
#SBATCH --job-name=si-10
#SBATCH --output=res.txt
#SBATCH --partition=batch
#
#SBATCH --time=01:00:00 # hh:mm:ss
#SBATCH --ntasks=36
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
module load foss/2019b
module load HDF5/1.10.5-gompi-2019b
module load ELPA/2019.11.001-foss-2019b
module load libxc/4.3.4-GCC-8.3.0
for i in {00000..00111}; do
  cat header.in supercell-$i.in > Si-$i.in
  mpirun -np 36 /home/ucl/modl/sponce/program/q-e-June2022/bin/pw.x -input Si-$i.in > Si-$i.out
done

Then create the force file with phono3py:

phono3py --qe --cf3 Si-{00001..00111}.out
phono3py --sym-fc

Finally, compute the thermal conductivity and converge the fine grids (i.e. increase the 10 10 10 grid below):

phono3py --mesh 10 10 10 --fc3 --fc2 --br
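To converge, one can loop over increasingly fine meshes, for example (the mesh values are illustrative):

for m in 10 15 20; do
  phono3py --mesh $m $m $m --fc3 --fc2 --br
done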

What to do if your calculation uses too much IO

You should use the local scratch on each node (this implies running on 1 node maximum). 

You can use a script similar to this:

#!/bin/bash
#
#SBATCH --job-name=test
#SBATCH --output=res.txt
#SBATCH --partition=hmem
#SBATCH --time=00:20:00
#SBATCH --ntasks=6
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
#####SBATCH --mem-per-cpu=100
export OMP_NUM_THREADS=1
module load foss/2019b
module load HDF5/1.10.5-gompi-2019b
module load ELPA/2019.11.001-foss-2019b
module load libxc/4.3.4-GCC-8.3.0
echo '$LOCALSCRATCH ',$LOCALSCRATCH
echo '$SLURM_JOB_ID ',$SLURM_JOB_ID
echo '$SLURM_SUBMIT_DIR ',$SLURM_SUBMIT_DIR
echo 'copy ', cp -rL "$SLURM_SUBMIT_DIR/" "$LOCALSCRATCH/$SLURM_JOB_ID"
mkdir -p "$LOCALSCRATCH/$SLURM_JOB_ID"cp -rL "$SLURM_SUBMIT_DIR/" "$LOCALSCRATCH/$SLURM_JOB_ID/"
cd $LOCALSCRATCH/$SLURM_JOB_ID/test/ && \
mpirun -np 6 /home/ucl/modl/sponce/program/qe-EPW5.6/EPW/src/epw.x -npool 6 -input epw1.in > epw1.out
cp -rL "$LOCALSCRATCH/$SLURM_JOB_ID/" "$SLURM_SUBMIT_DIR/" &&\rm -rf "$LOCALSCRATCH"

DISCOVERER - EuroHPC - Bulgaria - 144k AMD EPYC cores

Abinit 9.11.6 (Courtesy of Matthieu J Verstraete, 22/03/24)

First load the following modules: 

export MODULEPATH="/home/mverstraete/modulefiles/:/opt/software/modulefiles"
module purge
module load python/3/3.10/3.10.1
module load cmake/3/3.28.0
module load oclfpga/2023.2.1
module load tbb/2021.11
module load intel
module load compiler-rt/2023.2.1 compiler/2023.2.1
module load openmpi/4/intel/4.1.6-classic
module load mkl/2023.2.0
module load libzip/1/1.10.1-intel
module load bzip2/1/1.0.8-intel
module load szip/2/2.1.1-intel
module load zlib/1/1.3-intel
module load libaec/1/1.0.6-intel
module load zstd/1/1.5.5-intel
module load hdf5/1/1.14/1.14.3-intel-openmpi
module load netcdf/c/4.9/4.9.2-intel-openmpi
module load netcdf/cxx/4.3/4.3.1-intel-openmpi
module load netcdf/fortran/4.6/4.6.1-intel-openmpi
module load wannier90/3/3.1.0-intel-openmpi
module load fftw/3/3.3.10-intel-openmpi
module load libxc/5/5.2.2-intel
unset FCFLAGS CFLAGS CC FC CXX F90

Using the following discover.ac file:

FC=mpif90
CC=mpicc
CXX=mpicxx
H5CC=h5pcc
with_hdf5=/opt/software/hdf/5/1.14.3-intel-openmpi
with_netcdf=/opt/software/netcdf-c/4/4.9.2-intel-openmpi
with_netcdf_fortran=/opt/software/netcdf-fortran/4/4.6.1-intel-openmpi
with_libxc=/opt/software/libxc/5/5.2.2-intel/

Then issue:

mkdir build
cd build
../configure --with-config-file=discover.ac --with-optim-flavor='aggressive'
make -j4

Abinit 9.6.2 (Update of the software stack 05/12/23)

 First load the following modules: 

module purge
module load python3/latest
module load intel
module load tbb
module load compiler-rt
module load mkl/latest
module load zlib/1/latest-intel
module load oclfpga
module load netcdf/c/4.8/4.8.1-intel-hdf5_1.12_api_v18-intel_mpi
module load mpi/latest
module load gcc/12/latest
module load gcc/11/latest
module load netcdf/fortran/4.5/4.5.4-intel-hdf5_1.12_api_v112-intel_mpi
module load tbb/latest
module load oclfpga/latest
module load szip/2/latest-intel
module load compiler-rt/latest
module load hdf5/1/1.12/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi
module load libxc/5/5.2.2-intel
module unload --force gcc/11
unset CC FC CXX F90

Using the following discover.ac file:

FC=mpiifort
CC=mpiicc
CXX=mpiicpc
H5CC=h5cc
FCFLAGS=" -I/opt/software/libxc/5/5.2.2-intel/include -I/opt/software/netcdf-fortran-4.5.4-intel-hdf5_1.12-api_v112-intel_mpi/include -I/opt/software/netcdf-c-4.8.1-intel-hdf5_1.12-api_v112-intel_mpi/include -I/opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include -I/opt/software/szip/2/2.1.1-intel/include -I/opt/software/zlib/1/1.2.11-intel/include -I/opt/software/gnu/gcc-12/isl-0.24/include -I/opt/software/gnu/gcc-12/mpc-1.2.1/include -I/opt/software/gnu/gcc-12/mpfr-4.1.0/include -I/opt/software/gnu/gcc-12/gmp-6.2.1/include -I/opt/software/gnu/gcc-12/gcc-12.1.0/include "
FCFLAGS+=" -O2 -xHost "
CFLAGS=" -I/opt/software/libxc/5/5.2.2-intel/include -I/opt/software/netcdf-fortran-4.5.4-intel-hdf5_1.12-api_v112-intel_mpi/include -I/opt/software/netcdf-c-4.8.1-intel-hdf5_1.12-api_v112-intel_mpi/include -I/opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include -I/opt/software/szip/2/2.1.1-intel/include -I/opt/software/zlib/1/1.2.11-intel/include -I/opt/software/gnu/gcc-12/isl-0.24/include -I/opt/software/gnu/gcc-12/mpc-1.2.1/include -I/opt/software/gnu/gcc-12/mpfr-4.1.0/include -I/opt/software/gnu/gcc-12/gmp-6.2.1/include -I/opt/software/gnu/gcc-12/gcc-12.1.0/include "
CFLAGS+=" -O2 -xHost "
CFLAGS+="-I /opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include/"
FCFLAGS+=" -I /opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include/"
CPPFLAGS+=" -I /opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include/"
with_hdf5=/opt/software/netcdf-fortran-4.5.4-intel-hdf5_1.12-api_v112-intel_mpi/
with_netcdf=/opt/software/netcdf-c-4.8.1-intel-hdf5_1.12-api_v112-intel_mpi/
with_netcdf_fortran=/opt/software/netcdf-fortran-4.5.4-intel-hdf5_1.12-api_v112-intel_mpi/
with_libxc=/opt/software/libxc/5/5.2.2-intel/

Then issue:

mkdir build
cd build
../configure --with-config-file=discover.ac --with-optim-flavor='aggressive'
make -j4

Abinit 9.6.2 (Courtesy of Matthieu J Verstraete) 

First load the following modules: 

module purge
module load python3/latest
module load intel
module load mkl/latest
module load zlib/1/latest-intel
module load netcdf/c/4.8/4.8.1-intel-hdf5_1.12_api_v18-intel_mpi
module load mpi/latest
module load gcc/12/latest
module load gcc/11/latest
module load netcdf/fortran/4.5/4.5.4-intel-hdf5_1.12_api_v112-intel_mpi
module load tbb/latest
module load oclfpga/latest
module load szip/2/latest-intel
module load compiler-rt/latest
module load hdf5/1/1.12/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi
module load libxc/5/5.2.2-intel
module unload --force gcc/11
unset CC FC CXX F90

Using the following discover.ac file:

FC=mpiifort
CC=mpiicc
CXX=mpiicpc
H5CC=h5cc
FCFLAGS=" -I/opt/software/libxc/5/5.2.2-intel/include -I/opt/software/netcdf-fortran-4.5.4-intel-hdf5_1.12-api_v112-intel_mpi/include -I/opt/software/netcdf-c-4.8.1-intel-hdf5_1.12-api_v112-intel_mpi/include -I/opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include -I/opt/software/szip/2/2.1.1-intel/include -I/opt/software/zlib/1/1.2.11-intel/include -I/opt/software/gnu/gcc-12/isl-0.24/include -I/opt/software/gnu/gcc-12/mpc-1.2.1/include -I/opt/software/gnu/gcc-12/mpfr-4.1.0/include -I/opt/software/gnu/gcc-12/gmp-6.2.1/include -I/opt/software/gnu/gcc-12/gcc-12.1.0/include "
FCFLAGS+=" -O2 -xHost "
CFLAGS=" -I/opt/software/libxc/5/5.2.2-intel/include -I/opt/software/netcdf-fortran-4.5.4-intel-hdf5_1.12-api_v112-intel_mpi/include -I/opt/software/netcdf-c-4.8.1-intel-hdf5_1.12-api_v112-intel_mpi/include -I/opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include -I/opt/software/szip/2/2.1.1-intel/include -I/opt/software/zlib/1/1.2.11-intel/include -I/opt/software/gnu/gcc-12/isl-0.24/include -I/opt/software/gnu/gcc-12/mpc-1.2.1/include -I/opt/software/gnu/gcc-12/mpfr-4.1.0/include -I/opt/software/gnu/gcc-12/gmp-6.2.1/include -I/opt/software/gnu/gcc-12/gcc-12.1.0/include "
CFLAGS+=" -O2 -xHost "
CFLAGS+="-I /opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include/"
FCFLAGS+=" -I /opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include/"
CPPFLAGS+=" -I /opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include/"
with_hdf5=/opt/software/netcdf-fortran-4.5.4-intel-hdf5_1.12-api_v112-intel_mpi/
with_netcdf=/opt/software/netcdf-c-4.8.1-intel-hdf5_1.12-api_v112-intel_mpi/
with_netcdf_fortran=/opt/software/netcdf-fortran-4.5.4-intel-hdf5_1.12-api_v112-intel_mpi/
with_libxc=/opt/software/libxc/5/5.2.2-intel/

Then issue:

mkdir build
cd build
../configure --with-config-file=discover.ac --with-optim-flavor='aggressive'
make -j4

To make sure everything is OK, you can run the test suite:

cd tests/
python3 ../../tests/runtests.py -j16

Example of slurm submission script 

#!/bin/bash
#SBATCH --partition=ALL
#SBATCH --job-name=NAME
##SBATCH --mem-per-cpu=63000
#SBATCH --account=XXX
#SBATCH --qos=XXX
#SBATCH --time=0-23:30:00
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --nodes     1
#SBATCH --ntasks 128
#SBATCH --cpus-per-task   1
##SBATCH --exclusive
# OpenMP environment
export OMP_NUM_THREADS=1
# Shell environment
export PATH=PATH_TO_ABINIT/build/src/98_main:$PATH
# Commands before execution
ulimit -s unlimited
source PATH_TO_MODULE_LOAD/mod.sh
mpirun  -n $SLURM_NTASKS abinit input.in  > log 2> err

QE 7.2 - Intel compilation IFX and openmpi (Courtesy of T. van Waas and V. Kolev) 

module --force purge
module load intel
module load compiler/latest
module load openblas/0/0.3.23-intel
module load openmpi/4/intel/4.1.5
module load fftw/3/latest-intel-openmpi
export CC=icx
export FC=ifx
export CXX=icpx
CFLAGS+=" -mfma -mavx2"
FCFLAGS+=" -mfma -mavx2"
CXXFLAGS+=" -mfma -mavx2"
export BLAS_LIBS="-L/opt/software/openblas/0/0.3.23-intel/lib64 -lopenblas"
export LAPACK_LIBS="-L/opt/software/openblas/0/0.3.23-intel/lib64 -lopenblas"
export FFTW_LIBS="-L/opt/software/fftw/3/3.3.10-intel-openmpi/lib/ -lfftw3"

Go inside qe/:

./configure
make -j 8 epw

QE 7.1 with openmpi and OpenMP support using cmake (Courtesy of V. Kolev) 

export BUILD_DIR=XXXX
export INSTALL_PREFIX=XXXX
export MPI="openmpi"
export COMPILER="intel"

Load the following modules: 

module purge
module load cmake/latest
module load ${COMPILER}
module load compiler/latest
module load blas/latest-${COMPILER}
module load lapack/latest-${COMPILER}
module load ${MPI}/4/${COMPILER}/latest
module load fftw/3/latest-${COMPILER}-${MPI}
export LDFLAGS+=" -llapack -lblas"
cmake -B build-${COMPILER}-${MPI} \
      -DCMAKE_C_COMPILER=mpicc \
      -DCMAKE_Fortran_COMPILER=mpifort \
      -DCMAKE_C_FLAGS="-O3 -mtune=native -ipo" \
      -DCMAKE_Fortran_FLAGS="-O3 -mtune=native -ipo" \
      -DQE_ENABLE_OPENMP=ON \
      -DCMAKE_INSTALL_PREFIX=${INSTALL_PREFIX}-${COMPILER}-${MPI}
cmake --build build-${COMPILER}-${MPI} -j 24 || exit
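Once the build succeeds, the binaries can be installed into the prefix chosen above with CMake's standard install step:

cmake --install build-${COMPILER}-${MPI}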

QE 7.1 with nv & openmpi 

Note: This works but might not be the fastest

First load the following modules: 

module purge
module load nvidia
module load nvhpc-nompi/latest
module load blas/latest-nvidia
module load lapack/latest-nvidia
module load openmpi/4/nvidia/latest
module load fftw/3/latest-nvidia-openmpi

Go inside qe/:

./configure

Then you need to manually modify the following line in the make.inc file:

DFLAGS         =  -D__PGI -D__FFTW3 -D__MPI -I/opt/software/fftw/3/3.3.10-nvidia-openmpi/include
Then issue:

make epw

General comments

QE

Sometimes it is useful to allow for argument mismatch in make.inc:

FFLAGS         = -O3 -g -fallow-argument-mismatch

Acknowledgments

Affiliation: European Theoretical Spectroscopy Facility, Institute of Condensed Matter and Nanosciences, Université catholique de Louvain, Chemin des Étoiles 8, B-1348 Louvain-la-Neuve, Belgium; WEL Research Institute, avenue Pasteur 6, 1300 Wavre, Belgium.

Acknowledgements:
S. P. acknowledges support from the Fonds de la Recherche Scientifique de Belgique (FRS-FNRS). 
This work was supported by the Fonds de la Recherche Scientifique - FNRS under Grant Nos. T.0183.23 (PDR) and T.W011.23 (PDR-WEAVE). This publication was supported by the Walloon Region under the strategic axis FRFS-WEL-T.
Computational resources have been provided by the PRACE award granting access to MareNostrum4 at Barcelona Supercomputing Center (BSC), Spain, and Discoverer in SofiaTech, Bulgaria (OptoSpin project id 2020225411), and by the Consortium des Équipements de Calcul Intensif (CÉCI), funded by the FRS-FNRS under Grant No. 2.5020.11 and by the Walloon Region, as well as computational resources awarded on the Belgian share of the EuroHPC LUMI supercomputer.

Note for WEL-T funding:
  • Any publications or communications (talk/poster) must be sent to the WEL Research Institute for approval before submission (3 weeks in advance).
  • A copy of each paper must be sent to the WEL Research Institute before the publication date.
  • A WEL-T sticker, provided by the WEL Research Institute, must be placed in the offices where research funded by the project is conducted (office and CISM computers).
  • By 31 January, a progress report must be submitted to the WEL Research Institute.
  • By 15 November at the end of the 4 years, a final activity report must be submitted to WEL and FNRS.
  • Three meetings per year are held with the valorisation committee.

Quantum Espresso v6.5

First load the following modules 

module purge
module load releases/2020b
module load imkl/2020.4.304-iimpi-2020b
./configure
make all