HILA
Dependencies | Minimum Version | Required |
---|---|---|
Clang | 8 or newer | Yes |
NOTE:
If one opts to use a singularity container, skip to the HILA application dependencies section.
If one opts to use a docker container, skip to the installation section.
For building hilapp, you need the clang development tools (in fact, only the include files). These can be found in most Linux distribution repos, e.g. in Ubuntu 22.04.
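To illustrate, on Ubuntu 22.04 the tools can be installed with apt; the clang major version used here (15) is an example, any version from 8 upwards should do:

```shell
# clang compiler plus the libclang development headers used to build hilapp
sudo apt update
sudo apt install clang-15 libclang-15-dev
```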
Dependencies | Minimum Version | Required |
---|---|---|
Clang / GCC | 8+ / any | Yes |
FFTW3 | any | Yes |
MPI | any | Yes |
OpenMP | any | No |
CUDA | any | No |
HIP | any | No |
NOTE: If one opts to use docker, skip directly to the installation section.
Installing non-GPU dependencies on Ubuntu:
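As a sketch, assuming the standard Ubuntu package names for the required dependencies:

```shell
# compiler toolchain, FFTW3 and MPI development files
sudo apt install build-essential libfftw3-dev libopenmpi-dev
# OpenMP support is bundled with the compiler; CUDA and HIP are covered below
```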
CUDA:
See NVIDIA drivers and CUDA documentation: https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html
HIP:
See ROCm and HIP documentation: https://docs.amd.com/, https://rocmdocs.amd.com/en/latest/Installation_Guide/HIP-Installation.html
Begin by cloning the HILA repository:
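For example (assuming the repository's GitHub location; adjust the URL if your fork differs):

```shell
git clone https://github.com/CFT-HY/HILA
cd HILA
```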
The installation process is split into two parts: building the HILA preprocessor and compiling HILA applications. Both can be installed from source, and both steps have their respective containerization options available. The variety of options addresses the different platform-dependency issues that can arise.
When it comes to installing HILA applications, there are many avenues one can take depending on the platform. The available platforms and offered methods are listed below, with links to the relevant sections of the installation guide.
HILA was originally developed on Linux, hence all of the available options can be used there. The HILA preprocessor can be built from source or with the use of a singularity container. Additionally, one can opt to use the docker container, which installs the HILA preprocessor directly.
NOTE: It is advised to use the docker container only for development purposes, since containerization can add computational overhead. This is especially evident in containerized MPI communication.
Containerization of hilapp, on the other hand, adds no computational overhead except in the compilation process, so for production runs one can use the singularity container and reach maximal computational performance.
On Mac, the installation of the HILA preprocessor dependencies and HILA application dependencies can be tedious, and in some cases impossible; the availability of the clang LibTooling libraries is an open question. For this reason the best option is to use the available docker container.
On Windows, the installation of the HILA preprocessor dependencies and HILA application dependencies is untested. For this reason the best option is to use the available docker container. One can also opt to use WSL; in this case see the Linux installation instructions.
On supercomputing platforms the HILA application dependencies are most likely available. The only issue is the availability of the clang LibTooling libraries, which are used in building the HILA preprocessor. Since singularity is generally available on supercomputing platforms, the best solution is to use the singularity container.
After installing the HILA preprocessor with one of the above options one can move on to the building HILA applications section.
HILA comes with both a singularity and a docker container, for differing purposes. The aim is to make use easy on any platform, be it Linux, Mac, Windows or a supercomputer.
The docker container is meant for developing and producing HILA applications, libraries and hilapp with ease. One can produce HILA applications on their local machine and run them in a container without having to worry about dependencies. Note that there is overhead when running MPI communication in docker, so one will not get optimal simulation performance when running highly parallelized code in a container. This is a non-issue for small-scale simulations or testing.
All commands below are run in the `docker` folder.
Docker image for HILA
Create docker image:
docker build -t hila -f Dockerfile .
Launch image interactively with docker compose
docker compose run --rm hila-applications
Developing with docker
The applications folder is automatically mounted from the local host to the docker image when launching the service `hila-applications`:
../applications:/HILA/applications
This allows one to develop HILA applications directly from source and launch them in the docker image with ease.
When developing HILA libraries and hilapp, one can also launch the service `hila-source`, which mounts the HILA/libraries and HILA/hilapp/src folders to the container:
docker compose run --rm hila-source
The singularity container offers a more packaged approach where one doesn't need to worry about clang LibTooling support for compiling the HILA preprocessor. Hence, for HPC platforms where access to such compiler libraries can be tedious, one can simply opt to use the container version of hilapp. This approach is mainly meant for preprocessing applications on an HPC platform.
One can download the singularity container hilapp.sif directly from this GitHub repository's release page. If downloaded, skip directly to the Using singularity container section.
Installing singularity
The simplest way to install singularity is to download the latest .deb or .rpm from the GitHub release page and install it directly with one's package manager.
Ubuntu:
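For example, with a downloaded .deb package (the filename below is hypothetical; substitute the file you actually downloaded):

```shell
sudo apt install ./singularity-ce_4.1.0-jammy_amd64.deb
```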
Building singularity container
NOTE: sudo privileges are required for building a singularity container
For building the container we have two options: one can either build the container using the release version of hilapp from GitHub, or build using the local hilapp source. Especially when one is developing the HILA preprocessor and would like to test it on an HPC platform, building the singularity container from local source is the preferred option. There are two singularity definition files, one for each case.
Building using release version:
Building using local source:
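A sketch of the two builds; the definition-file names here are placeholders, check the actual .def files shipped in the repository:

```shell
# build using the release version of hilapp from GitHub
sudo singularity build hilapp.sif hilapp-release.def

# or build using the local hilapp source
sudo singularity build hilapp.sif hilapp-local.def
```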
Using singularity container
The hilapp.sif file acts as a singularity container and, equivalently, as the hilapp binary, and can be used as such when preprocessing HILA code. Thus you can move it to your HILA project's bin folder.
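For example (the destination path assumes the default layout described in this guide):

```shell
cp hilapp.sif hila/hilapp/bin/hilapp
```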
Now one can simply move the singularity container to any given supercomputer.
Note that on supercomputers the default paths aren't the same as on default Linux operating systems. Thus one will need to mount their HILA source folder into singularity using the APPTAINER_BIND environment variable. Simply navigate to the base of your HILA source directory and run
export APPTAINER_BIND=$(pwd)
Before building the preprocessor one must first install the dependencies; see the dependencies section.
Compile hilapp:
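A sketch of the build, assuming the make targets described below:

```shell
cd hila/hilapp
make -j4        # builds hilapp into hila/hilapp/build
make install    # moves the binary to hila/hilapp/bin
```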
This builds hilapp in hila/hilapp/build, and `make install` moves it to hila/hilapp/bin, which is the default location for the program. The build takes 1-2 minutes.
NOTE: clang dev libraries are not installed on most supercomputer systems. However, if the system has x86_64 processors (by far the most common), you can use the `make static` command to build a statically linked hilapp. Copy `hila/hilapp/build/hilapp` to the directory `hila/hilapp/bin` on the target machine. A simpler approach for HPC platforms is the use of singularity containers.
Test that hilapp works:
./bin/hilapp --help
First we will try to build and run a health check test application on the default computing platform, which is CPU with MPI enabled. To do so, navigate to the applications folder and try:
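A sketch of the commands, assuming the test application directory is named hila_healthcheck as in the mpirun example below:

```shell
cd hila/applications/hila_healthcheck
make -j4
./build/hila_healthcheck
```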
NOTE: Naturally, the run time depends on your system.
And for running with multiple processes:
mpirun -n 4 ./build/hila_healthcheck
NOTE: Naturally, the run time depends on your system.
Now we can try to perform the same health check by targeting a different computing platform with:
make ARCH=<platform>
where ARCH can take the following values:
ARCH= | Description |
---|---|
vanilla | default CPU implementation |
AVX2 | AVX vectorization optimized program using vectorclass |
openmp | OpenMP parallelized program |
cuda | Parallel CUDA program |
hip | Parallel HIP program |
For CUDA compilation one needs to define the CUDA version and compute architecture, either as environment variables or during the make process:
NOTE: The default CUDA version is 11.6 and the default compute architecture is sm_61
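As a sketch, the values could be passed on the make command line; the variable names CUDA_VERSION and CUDA_ARCH are assumptions here, check your application's Makefile for the exact names:

```shell
# build for CUDA 11.6 targeting compute capability 6.1 (sm_61)
make ARCH=cuda CUDA_VERSION=11.6 CUDA_ARCH=61
```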
Now if we execute the CUDA version, one should expect the following output:
NOTE: Naturally, the run time depends on your system.
Additionally we have some ARCH values tuned for specific HPC platforms:
ARCH | Description |
---|---|
lumi | CPU-MPI implementation for LUMI supercomputer |
lumi-hip | GPU-MPI implementation for LUMI supercomputer using HIP |
mahti | CPU-MPI implementation for Mahti supercomputer |
mahti-cuda | GPU-MPI implementation for Mahti supercomputer using CUDA |
We will discuss the computing platforms more in the creating a HILA application guide.