HPC

LUMI Cluster

Quantum Espresso and EPW using the GNU compiler (Courtesy of Junfeng Qiao)

First load the following modules

module load PrgEnv-gnu/8.3.3
module load craype-x86-milan
module load cray-libsci/21.08.1.2
module load cray-fftw/3.3.10.1
module load cray-hdf5-parallel/1.12.0.7
module load cray-netcdf-hdf5parallel/4.7.4.7
Then go inside the QE folder and issue:
./configure --enable-parallel --with-scalapack=yes \
CC=cc CXX=CC FC=ftn F77=ftn F90=ftn MPICC=cc MPICXX=CC MPIF77=ftn MPIF90=ftn \
SCALAPACK_LIBS="-L$CRAY_LIBSCI_PREFIX_DIR/lib -lsci_gnu_mpi" \
BLAS_LIBS="-L$CRAY_LIBSCI_PREFIX_DIR/lib -lsci_gnu_mpi" \
LAPACK_LIBS="-L$CRAY_LIBSCI_PREFIX_DIR/lib -lsci_gnu_mpi" \
FFT_LIBS="-L$FFTW_ROOT/lib -lfftw3" \
HDF5_LIBS="-L$HDF5_ROOT/lib -lhdf5_fortran -lhdf5hl_fortran"

Then manually edit the make.inc file and add the "-D__FFTW" flag:

DFLAGS = -D__FFTW -D__MPI -D__SCALAPACK
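If you prefer to script that edit, a sed one-liner along these lines should work (it prepends -D__FFTW to whatever DFLAGS line configure wrote; check the result before compiling):

sed -i 's/^DFLAGS *=/DFLAGS = -D__FFTW/' make.inc
grep '^DFLAGS' make.inc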

Then you can compile with

make epw
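The build leaves epw.x (and pw.x) in the bin/ directory of the QE tree. Below is a hedged sketch of a LUMI batch script to run it; the account, partition, core count, binary path and input/output names are placeholders to adapt to your own project and allocation:

#!/bin/bash
# account and partition are placeholders: adapt them to your LUMI project and allocation
#SBATCH --account=project_465000061
#SBATCH --partition=standard
#SBATCH --job-name=epw
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128
#SBATCH --time=01:00:00

module load PrgEnv-gnu/8.3.3 craype-x86-milan cray-libsci/21.08.1.2 \
            cray-fftw/3.3.10.1 cray-hdf5-parallel/1.12.0.7 cray-netcdf-hdf5parallel/4.7.4.7

srun /path/to/q-e/bin/epw.x -npool 128 -input epw1.in > epw1.out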

Abinit 9.6.2 with the GNU compiler and NETCDF-IO + aggressive optimisation

First load the following modules

module load LUMI/21.12
module load PrgEnv-gnu/8.2.0
module load cray-libsci/21.08.1.2
module load cray-mpich/8.1.12
module load cray-hdf5-parallel/1.12.0.7
module load cray-netcdf-hdf5parallel/4.7.4.7
module load cray-fftw/3.3.8.12

Using the following lumi_gnu.ac file:

FCFLAGS="-ffree-line-length-none -fallow-argument-mismatch -g $FCFLAGS"FFLAGS=" $FFLAGS"
enable_openmp="no"
# Ensure MPIwith_mpi="yes"enable_mpi_io="yes"enable_mpi_inplace="yes"FC=ftnCC=ccCXX=CCF90=ftn
# FFTWwith_fft_flavor="fftw3"with_fftw3=$FFTW_DIR
# libxc support
with_libxc="/scratch/project_465000061/jubouqui/libxc-5.2.3/build"
# hdf5/netcdf4 support
with_netcdf="${CRAY_NETCDF_HDF5PARALLEL_PREFIX}"
with_netcdf_fortran="${CRAY_NETCDF_HDF5PARALLEL_PREFIX}"
with_hdf5="${CRAY_HDF5_PARALLEL_PREFIX}"

Then issue:

export LD_LIBRARY_PATH="$CRAY_LD_LIBRARY_PATH:$LD_LIBRARY_PATH"
../configure --with-config-file='lumi_gnu.ac' --with-optim-flavor='aggressive'

Abinit will then complain that libxc is missing; go inside the created fallback directory and compile the fallback, as sketched below.
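A minimal sketch of that fallback step, assuming a recent Abinit 9.x where the failed configure leaves a build script in a fallbacks/ directory (the script name and the reported install prefix may differ between versions):

cd fallbacks
./build-abinit-fallbacks.sh
# the script reports the install prefix of the libxc it built;
# point with_libxc in lumi_gnu.ac at that prefix
cd ..

Once the fallback is built and lumi_gnu.ac updated, re-issue: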

./configure --with-config-file='lumi_gnu.ac' --with-optim-flavor='aggressive'
make
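A plain parallel make also works and is much faster; the main binary should end up under src/98_main/ in the build directory (the exact path may differ between Abinit versions):

make -j 16
ls src/98_main/abinit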


Nic5

Quantum Espresso 7.1

First load the following modules

module load foss/2019b
module load HDF5/1.10.5-gompi-2019b
module load ELPA/2019.11.001-foss-2019b
module load libxc/4.3.4-GCC-8.3.0

Then go inside the QE folder and issue:

./configure
make all
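make all builds pw.x, ph.x and the other executables and links them into bin/; a parallel make (plain GNU make, nothing cluster-specific) speeds this up considerably:

make -j 8 all
ls bin/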

phono3py

First download Miniconda for Linux from https://docs.conda.io/en/latest/miniconda.html, then install it:

chmod +x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh

Let the installer create a virtual environment at the end of the install, then log out and back in so that the environment is active.

Install the code following https://phonopy.github.io/phono3py/install.html:

conda install -c conda-forge phono3py
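If you prefer to keep phono3py out of the base environment, a sketch using a dedicated conda environment (the environment name is arbitrary), followed by a quick import check:

conda create -n phono3py -c conda-forge phono3py
conda activate phono3py
python -c "import phono3py; print(phono3py.__version__)"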


How to run phono3py with QE

Look at the Si example at https://github.com/phonopy/phono3py/tree/master/example/Si-QE

phono3py --qe -d --dim 2 2 2 -c scf.in --pa auto

This will create all the supercell files. Then create a "header" file containing the QE settings that phono3py does not generate (a sketch is given below), prepend it to each supercell file, and run all the calculations, for example with the submission script that follows the sketch.
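A minimal sketch of such a header for silicon, written here as a heredoc; the pseudopotential name, cutoff, convergence threshold and k-grid are illustrative placeholders, and ibrav/nat/ntyp must match the generated supercells (phono3py only writes the cell and atomic positions into the supercell-*.in fragments):

cat > header.in << 'EOF'
&control
  calculation = 'scf'
  tprnfor = .true.
  tstress = .true.
  pseudo_dir = './pseudo/'
/
&system
  ibrav = 0
  nat = 64
  ntyp = 1
  ecutwfc = 60.0
/
&electrons
  conv_thr = 1.0d-10
/
ATOMIC_SPECIES
Si 28.0855 Si.upf
K_POINTS automatic
2 2 2 0 0 0
EOF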

#!/bin/bash
#
#SBATCH --job-name=si-10
#SBATCH --output=res.txt
#SBATCH --partition=batch
#
#SBATCH --time=01:00:00 # hh:mm:ss
#SBATCH --ntasks=36
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
module load foss/2019b
module load HDF5/1.10.5-gompi-2019b
module load ELPA/2019.11.001-foss-2019b
module load libxc/4.3.4-GCC-8.3.0
for i in {00000..00111}; do
  cat header.in supercell-$i.in > Si-$i.in
  mpirun -np 36 /home/ucl/modl/sponce/program/q-e-June2022/bin/pw.x -input Si-$i.in > Si-$i.out
done

Then create the force file with phono3py:

phono3py --qe --cf3 Si-{00001..00111}.out
phono3py --sym-fc

Finally, compute the thermal conductivity and converge it with respect to the fine grid (i.e. increase the 10 10 10 mesh below):

phono3py --mesh 10 10 10 --fc3 --fc2 --br
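The run writes an HDF5 file whose name encodes the mesh, typically kappa-m101010.hdf5 for a 10 10 10 mesh; the temperature and kappa datasets can be inspected with the standard HDF5 command-line tools, e.g.:

module load HDF5/1.10.5-gompi-2019b
h5dump -d temperature -d kappa kappa-m101010.hdf5 | less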


What to do if your calculation uses too much I/O

You should use the local scratch of the node (which implies running on a single node at most).

You can use a script similar to this:

#!/bin/bash
#
#SBATCH --job-name=test
#SBATCH --output=res.txt
#SBATCH --partition=hmem
#SBATCH --time=00:20:00
#SBATCH --ntasks=6
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
#
####SBATCH --mem-per-cpu=100
export OMP_NUM_THREADS=1
module load foss/2019b
module load HDF5/1.10.5-gompi-2019b
module load ELPA/2019.11.001-foss-2019b
module load libxc/4.3.4-GCC-8.3.0
echo '$LOCALSCRATCH ',$LOCALSCRATCH
echo '$SLURM_JOB_ID ',$SLURM_JOB_ID
echo '$SLURM_SUBMIT_DIR ',$SLURM_SUBMIT_DIR
echo 'copy ', cp -rL "$SLURM_SUBMIT_DIR/" "$LOCALSCRATCH/$SLURM_JOB_ID"
mkdir -p "$LOCALSCRATCH/$SLURM_JOB_ID"
cp -rL "$SLURM_SUBMIT_DIR/" "$LOCALSCRATCH/$SLURM_JOB_ID/"
cd $LOCALSCRATCH/$SLURM_JOB_ID/test/ && \
mpirun -np 6 /home/ucl/modl/sponce/program/qe-EPW5.6/EPW/src/epw.x -npool 6 -input epw1.in > epw1.out
# copy the results back, then clean only this job's directory on the local scratch
cp -rL "$LOCALSCRATCH/$SLURM_JOB_ID/" "$SLURM_SUBMIT_DIR/" && \
rm -rf "$LOCALSCRATCH/$SLURM_JOB_ID"