HPC
LUMI Cluster
Quantum Espresso and EPW using the GNU compiler (Courtesy of Junfeng Qiao)
First load the following modules:
module load PrgEnv-gnu/8.3.3
module load craype-x86-milan
module load cray-libsci/21.08.1.2
module load cray-fftw/3.3.10.1
module load cray-hdf5-parallel/1.12.0.7
module load cray-netcdf-hdf5parallel/4.7.4.7
Then go inside the QE folder and issue:
./configure --enable-parallel --with-scalapack=yes \
CC=cc CXX=CC FC=ftn F77=ftn F90=ftn MPICC=cc MPICXX=CC MPIF77=ftn MPIF90=ftn \
SCALAPACK_LIBS="-L$CRAY_LIBSCI_PREFIX_DIR/lib -lsci_gnu_mpi" \
BLAS_LIBS="-L$CRAY_LIBSCI_PREFIX_DIR/lib -lsci_gnu_mpi" \
LAPACK_LIBS="-L$CRAY_LIBSCI_PREFIX_DIR/lib -lsci_gnu_mpi" \
FFT_LIBS="-L$FFTW_ROOT/lib -lfftw3" \
HDF5_LIBS="-L$HDF5_ROOT/lib -lhdf5_fortran -lhdf5hl_fortran"
Then manually edit the make.inc file and add the "-D__FFTW" flag:
DFLAGS = -D__FFTW -D__MPI -D__SCALAPACK
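If you prefer to script this edit, something along these lines should work (a sketch, assuming GNU sed and a single DFLAGS line in the make.inc produced by configure):
sed -i 's/^DFLAGS *= */DFLAGS = -D__FFTW /' make.inc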
Then you can compile with:
make epw
Abinit 9.6.2 with the GNU compiler and NETCDF-IO + aggressive optimisation
First load the following modules:
module load LUMI/21.12
module load PrgEnv-gnu/8.2.0
module load cray-libsci/21.08.1.2
module load cray-mpich/8.1.12
module load cray-hdf5-parallel/1.12.0.7
module load cray-netcdf-hdf5parallel/4.7.4.7
module load cray-fftw/3.3.8.12
Using the following lumi_gnu.ac file:
FCFLAGS="-ffree-line-length-none -fallow-argument-mismatch -g $FCFLAGS"
FFLAGS=" $FFLAGS"
enable_openmp="no"
# Ensure MPI
with_mpi="yes"
enable_mpi_io="yes"
enable_mpi_inplace="yes"
FC=ftn
CC=cc
CXX=CC
F90=ftn
# FFTW
with_fft_flavor="fftw3"
with_fftw3=$FFTW_DIR
# libxc support
with_libxc="/scratch/project_465000061/jubouqui/libxc-5.2.3/build"
# hdf5/netcdf4 support
with_netcdf="${CRAY_NETCDF_HDF5PARALLEL_PREFIX}"
with_netcdf_fortran="${CRAY_NETCDF_HDF5PARALLEL_PREFIX}"
with_hdf5="${CRAY_HDF5_PARALLEL_PREFIX}"
Then issue:
export LD_LIBRARY_PATH="$CRAY_LD_LIBRARY_PATH:$LD_LIBRARY_PATH"
../configure --with-config-file='lumi_gnu.ac' --with-optim-flavor='aggressive'
Abinit will then complain that libxc is missing; go inside the created fallbacks directory and compile the fallback.
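A minimal sketch of the fallback build, assuming the script name matches the one used in the Zenobe recipe further down (it may differ between Abinit versions):
cd fallbacks
./build-abinit-fallbacks.sh
cd ../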
Once this is done, update the lumi_gnu.ac file and re-issue:
./configure --with-config-file='lumi_gnu.ac' --with-optim-flavor='aggressive'
make
Abinit 9.8.2
First load the following modules:
module --force purge
module load LUMI/22.08
module load init-lumi/0.2
module load lumi-tools/23.02
module load PrgEnv-gnu/8.3.3
module load cray-libsci/22.08.1.1
module load cray-mpich/8.1.18
module load cray-hdf5/1.12.1.5
module load cray-fftw/3.3.10.1
Using the following lumi_gnu.ac file:
FCFLAGS="-ffree-line-length-none -fallow-argument-mismatch -g $FCFLAGS"
FFLAGS=" $FFLAGS"
enable_openmp="no"
# Ensure MPI
with_mpi="yes"
enable_mpi_io="yes"
enable_mpi_inplace="yes"
FC=ftn
CC=cc
CXX=CC
F90=ftn
# fftw3 (sequential)
with_fft_flavor="fftw3"
FFTW3_CFLAGS="-I${FFTW_INC}"
FFTW3_FCFLAGS="-I${FFTW_INC}"
FFTW3_LDFLAGS="-L${FFTW_DIR} -lfftw3 -lfftw3f"
#with_libxc=XX
#with_hdf5=XX
#with_netcdf=XX
#with_netcdf_fortran=XX
#with_xmlf90=XX
#with_libpsml=X
Then issue:
../configure --with-config-file='lumi_gnu.ac' --with-optim-flavor='aggressive'
Abinit will then complain that some libraries are missing; go inside the created fallbacks directory and compile the fallbacks, as in the 9.6.2 recipe above. Once this is done, update the lumi_gnu.ac file (change the XX placeholders and uncomment the corresponding lines).
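The uncommented entries follow the same layout as the Zenobe fallback paths further down; a sketch, where FB_PREFIX and the <version> fields are placeholders for the install prefix and versions reported by the fallback build:
with_libxc=FB_PREFIX/libxc/<version>
with_hdf5=FB_PREFIX/hdf5/<version>
with_netcdf=FB_PREFIX/netcdf4/<version>
with_netcdf_fortran=FB_PREFIX/netcdf4_fortran/<version>
with_xmlf90=FB_PREFIX/xmlf90/<version>
with_libpsml=FB_PREFIX/libpsml/<version>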
Then re-issue:
./configure --with-config-file='lumi_gnu.ac' --with-optim-flavor='aggressive'
make
Zenobe
Abinit 9.8.2
First load the following modules:
module purge
module load EasyBuild/4.3.1
module load compiler/intel/comp_and_lib/2018.1.163
module load intelmpi/5.1.3.181/64
module load Python/3.9.6-foss-2018b
Using the following zenobe.ac file:
with_mpi="${I_MPI_ROOT}/intel64"
enable_openmp="no"
FC="mpiifort" # Use intel wrappers. Important!
CC="mpiicc" # See warning below
CXX="mpiicpc"
CFLAGS="-O2 -g"
CXXFLAGS="-O2 -g"
FCFLAGS="-O2 -g"
# BLAS/LAPACK with MKL
with_linalg_flavor="mkl"
LINALG_CPPFLAGS="-I${MKLROOT}/include"
LINALG_FCFLAGS="-I${MKLROOT}/include"
LINALG_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl"
# FFT from MKL
with_fft_flavor="dfti"
FFT_CPPFLAGS="-I${MKLROOT}/include"
FFT_FCFLAGS="-I${MKLROOT}/include"
FFT_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl"
# fallbacks !!!!! TO BE UPDATED !!!!!
with_libxc=XXXX/build/fallbacks/install_fb/intel/18.0/libxc/6.0.0
with_hdf5=XXXX/build/fallbacks/install_fb/intel/18.0/hdf5/1.10.8
with_netcdf=XXXX/build/fallbacks/install_fb/intel/18.0/netcdf4/4.9.0
with_netcdf_fortran=XXXX/build/fallbacks/install_fb/intel/18.0/netcdf4_fortran/4.6.0
with_xmlf90=XXXX/build/fallbacks/install_fb/intel/18.0/xmlf90/1.5.6
with_libpsml=XXXX/build/fallbacks/install_fb/intel/18.0/libpsml/1.1.12
Then issue:
../configure --with-config-file='zenobe.ac' --with-optim-flavor='aggressive'
Abinit will then complain that some libraries are missing; build the fallbacks, update the XXXX paths in zenobe.ac, and reconfigure:
cd fallbacks
./build-abinit-fallbacks.sh
cd ../
../configure --with-config-file='zenobe.ac' --with-optim-flavor='aggressive'
make
Abipy
You can find a configuration script for Zenobe here: https://abinit.github.io/abipy/workflows/manager_examples.html#zenobe
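For reference, an abipy manager.yml for a PBSPro machine like Zenobe generally has this shape (a hedged sketch only; the queue name, core counts, and memory below are placeholders rather than Zenobe's actual values, see the linked page for those; the module names are the ones loaded in the Zenobe recipe above):
qadapters:
  - priority: 1
    queue:
      qtype: pbspro
      qname: main          # placeholder queue name
    job:
      mpi_runner: mpirun
      pre_run:
        - module load compiler/intel/comp_and_lib/2018.1.163
        - module load intelmpi/5.1.3.181/64
    limits:
      timelimit: 1:00:00
      min_cores: 1
      max_cores: 24        # placeholder
    hardware:
      num_nodes: 1
      sockets_per_node: 2
      cores_per_socket: 12
      mem_per_node: 64Gb   # placeholder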
Nic5
Quantum Espresso 7.1
First load the following modules:
module load foss/2019b
module load HDF5/1.10.5-gompi-2019b
module load ELPA/2019.11.001-foss-2019b
module load libxc/4.3.4-GCC-8.3.0
Then go inside the QE folder and issue:
./configure
make all
phono3py
First download Miniconda for Linux from https://docs.conda.io/en/latest/miniconda.html
Then install it:
chmod +x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh
Create a virtual environment at the end of the install. Log out and in again to be in the environment.
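If you would rather keep phono3py in its own environment instead of the base one, a minimal sketch (the environment name and Python version are arbitrary):
conda create -n phono3py python=3.10
conda activate phono3py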
Install the code (see https://phonopy.github.io/phono3py/install.html):
conda install -c conda-forge phono3py
How to run phono3py with QE
Look at the example for Si at https://github.com/phonopy/phono3py/tree/master/example/Si-QE
phono3py --qe -d --dim 2 2 2 -c scf.in --pa auto
This will create all the supercell files. Then create a "header" file for QE that will be prepended to each supercell file (a sketch follows).
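The header holds everything except the structure, which phono3py writes into the supercell files. A minimal sketch of a header.in, where the pseudopotential name, cutoff, and k-grid are placeholders, and nat must match the generated supercells; check the Si-QE example linked above for a working file:
&control
   calculation = 'scf'
   tprnfor = .true.
   pseudo_dir = '/path/to/pseudo/'
/
&system
   ibrav = 0
   nat = 64
   ntyp = 1
   ecutwfc = 60.0
/
&electrons
   conv_thr = 1.0d-9
/
ATOMIC_SPECIES
 Si 28.0855 Si.upf
K_POINTS automatic
 2 2 2 1 1 1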
Then run all the calculations, for example using the following submission script:
#!/bin/bash
#
#SBATCH --job-name=si-10
#SBATCH --output=res.txt
#SBATCH --partition=batch
#
#SBATCH --time=01:00:00 # hh:mm:ss
#SBATCH --ntasks=36
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
module load foss/2019b
module load HDF5/1.10.5-gompi-2019b
module load ELPA/2019.11.001-foss-2019b
module load libxc/4.3.4-GCC-8.3.0
for i in {00000..00111}; do
  cat header.in supercell-$i.in > Si-$i.in
  mpirun -np 36 /home/ucl/modl/sponce/program/q-e-June2022/bin/pw.x -input Si-$i.in > Si-$i.out
done
Then create the force file with phono3py:
phono3py --qe --cf3 Si-{00001..00111}.out
phono3py --sym-fc
Finally, compute the thermal conductivity and converge the fine grids (i.e. increase the 10 10 10 grid below):
phono3py --mesh 10 10 10 --fc3 --fc2 --br
What to do if your calculation uses too much IO
You should use the local scratch on each node (this implies running on 1 node maximum).
You can use a script similar to this:
#!/bin/bash
#
#SBATCH --job-name=test
#SBATCH --output=res.txt
#SBATCH --partition=hmem
#SBATCH --time=00:20:00
#SBATCH --ntasks=6
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
#####SBATCH --mem-per-cpu=100
export OMP_NUM_THREADS=1
module load foss/2019b
module load HDF5/1.10.5-gompi-2019b
module load ELPA/2019.11.001-foss-2019b
module load libxc/4.3.4-GCC-8.3.0
echo '$LOCALSCRATCH ',$LOCALSCRATCH
echo '$SLURM_JOB_ID ',$SLURM_JOB_ID
echo '$SLURM_SUBMIT_DIR ',$SLURM_SUBMIT_DIR
echo 'copy ', cp -rL "$SLURM_SUBMIT_DIR/" "$LOCALSCRATCH/$SLURM_JOB_ID"
mkdir -p "$LOCALSCRATCH/$SLURM_JOB_ID"
cp -rL "$SLURM_SUBMIT_DIR/" "$LOCALSCRATCH/$SLURM_JOB_ID/"
cd $LOCALSCRATCH/$SLURM_JOB_ID/test/ && \
mpirun -np 6 /home/ucl/modl/sponce/program/qe-EPW5.6/EPW/src/epw.x -npool 6 -input epw1.in > epw1.out
cp -rL "$LOCALSCRATCH/$SLURM_JOB_ID/" "$SLURM_SUBMIT_DIR/" && \
rm -rf "$LOCALSCRATCH/$SLURM_JOB_ID"  # remove only this job's directory, not the whole local scratch
DISCOVERER - EuroHPC - Bulgaria - 144k AMD EPYC cores
Abinit 9.6.2 (Courtesy of Matthieu J Verstraete)
First load the following modules:
module purge
module load python3/latest
module load intel
module load mkl/latest
module load zlib/1/latest-intel
module load netcdf/c/4.8/4.8.1-intel-hdf5_1.12_api_v18-intel_mpi
module load mpi/latest
module load gcc/12/latest
module load gcc/11/latest
module load netcdf/fortran/4.5/4.5.4-intel-hdf5_1.12_api_v112-intel_mpi
module load tbb/latest
module load oclfpga/latest
module load szip/2/latest-intel
module load compiler-rt/latest
module load hdf5/1/1.12/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi
module load libxc/5/5.2.2-intel
module unload --force gcc/11
unset CC FC CXX F90
Using the following discover.ac file:
FC=mpiifort
CC=mpiicc
CXX=mpiicpc
H5CC=h5cc
FCFLAGS=" -I/opt/software/libxc/5/5.2.2-intel/include -I/opt/software/netcdf-fortran-4.5.4-intel-hdf5_1.12-api_v112-intel_mpi/include -I/opt/software/netcdf-c-4.8.1-intel-hdf5_1.12-api_v112-intel_mpi/include -I/opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include -I/opt/software/szip/2/2.1.1-intel/include -I/opt/software/zlib/1/1.2.11-intel/include -I/opt/software/gnu/gcc-12/isl-0.24/include -I/opt/software/gnu/gcc-12/mpc-1.2.1/include -I/opt/software/gnu/gcc-12/mpfr-4.1.0/include -I/opt/software/gnu/gcc-12/gmp-6.2.1/include -I/opt/software/gnu/gcc-12/gcc-12.1.0/include "
FCFLAGS+=" -O2 -xHost "
CFLAGS=" -I/opt/software/libxc/5/5.2.2-intel/include -I/opt/software/netcdf-fortran-4.5.4-intel-hdf5_1.12-api_v112-intel_mpi/include -I/opt/software/netcdf-c-4.8.1-intel-hdf5_1.12-api_v112-intel_mpi/include -I/opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include -I/opt/software/szip/2/2.1.1-intel/include -I/opt/software/zlib/1/1.2.11-intel/include -I/opt/software/gnu/gcc-12/isl-0.24/include -I/opt/software/gnu/gcc-12/mpc-1.2.1/include -I/opt/software/gnu/gcc-12/mpfr-4.1.0/include -I/opt/software/gnu/gcc-12/gmp-6.2.1/include -I/opt/software/gnu/gcc-12/gcc-12.1.0/include "
CFLAGS+=" -O2 -xHost "
CFLAGS+=" -I /opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include/"
FCFLAGS+=" -I /opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include/"
CPPFLAGS+=" -I /opt/software/hdf/5/1.12.1-intel-zlib_1-szip-api_v112-intel_mpi/include/"
with_hdf5=/opt/software/netcdf-fortran-4.5.4-intel-hdf5_1.12-api_v112-intel_mpi/
with_netcdf=/opt/software/netcdf-c-4.8.1-intel-hdf5_1.12-api_v112-intel_mpi/
with_netcdf_fortran=/opt/software/netcdf-fortran-4.5.4-intel-hdf5_1.12-api_v112-intel_mpi/
with_libxc=/opt/software/libxc/5/5.2.2-intel/
Then issue:
mkdir build
cd build
../configure --with-config-file=discover.ac --with-optim-flavor='aggressive'
make -j4
To make sure everything is ok, you can run the test-suite:
cd tests/
python3 ../../tests/runtests.py -j16
QE 7.1 with impi (Courtesy of M. Giantomassi)
Note: there are issues with OpenMPI on AMD hardware when using more than about 20 cores.
First load the following modules:
module --force purge
module load intel
module load compiler/2022.1.0
module load mkl/latest
module load mpi/latest
Go inside qe/:
export FC=mpiifort # Use intel wrappers. Important!
export CC=mpiicc
export CXX=mpiicpc
./configure
make epw
QE 7.1 with openmpi and OpenMP support using cmake (Courtesy of Veselin Kolev)
export BUILD_DIR=/home/sponce/program/q-e-qe-7.1-opt/
export INSTALL_PREFIX=/home/sponce/program/q-e-qe-7.1-opt/
export MPI="openmpi"
export COMPILER="intel"
Load the following modules:
module purge
module load cmake/latest
module load ${COMPILER}
module load compiler/latest
module load blas/latest-${COMPILER}
module load lapack/latest-${COMPILER}
module load ${MPI}/4/${COMPILER}/latest
module load fftw/3/latest-${COMPILER}-${MPI}
export LDFLAGS+=" -llapack -lblas"
cmake -B build-${COMPILER}-${MPI} \
  -DCMAKE_C_COMPILER=mpicc \
  -DCMAKE_Fortran_COMPILER=mpifort \
  -DCMAKE_C_FLAGS="-O3 -mtune=native -ipo" \
  -DCMAKE_Fortran_FLAGS="-O3 -mtune=native -ipo" \
  -DQE_ENABLE_OPENMP=ON \
  -DCMAKE_INSTALL_PREFIX=${INSTALL_PREFIX}-${COMPILER}-${MPI}
cmake --build build-${COMPILER}-${MPI} -j 24 || exit
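To then install the binaries into the prefix set above, standard CMake provides an install step (not part of the original recipe, but it matches the CMAKE_INSTALL_PREFIX configured here):
cmake --install build-${COMPILER}-${MPI}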
QE 7.1 with openmpi
Note: This works but might not be the fastest.
First load the following modules:
module purge
module load nvidia
module load nvhpc-nompi/latest
module load blas/latest-nvidia
module load lapack/latest-nvidia
module load openmpi/4/nvidia/latest
module load fftw/3/latest-nvidia-openmpi
Go inside qe/:
./configure
Then manually modify the following line in the make.inc file (this can be scripted with sed, as in the LUMI recipe above):
DFLAGS = -D__PGI -D__FFTW3 -D__MPI -I/opt/software/fftw/3/3.3.10-nvidia-openmpi/include
and then issue:
make epw