The following exercises will help you understand how to build your own software on the ATOS HPCF or ECS.
Before we start...
Ensure your environment is clean by running:
```shell
module reset
```
Create a directory for this tutorial and cd into it:
```shell
mkdir -p compiling_tutorial
cd compiling_tutorial
```
With your favourite editor, create three hello world programs: one in C, one in C++ and one in Fortran.
```c
#include <stdio.h>

int main(int argc, char** argv) {
    printf("Hello World from a C program\n");
    return 0;
}
```
```cpp
#include <iostream>

int main() {
    std::cout << "Hello World from a C++ program!" << std::endl;
}
```
```fortran
program hello
    print *, "Hello World from a Fortran Program!"
end program hello
```
Compile and run each one of them with the GNU compilers (gcc, g++ and gfortran).
We can build them with:
All going well, we should now have three executables that we can run.
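The build commands themselves are not shown in this extract. A sketch of both steps, assuming the three source files are called hello.c, hello++.cc and hellof.f90 (the names expected by the Makefile later in this tutorial):

```shell
# Build each program with the matching GNU compiler
gcc -o hello hello.c
g++ -o hello++ hello++.cc
gfortran -o hellof hellof.f90

# Run the resulting executables
./hello
./hello++
./hellof
```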
Now, use the generic environment variables for the different compilers ($CC, $CXX, $FC) and rerun; you should see no difference from the results above.
We can rebuild them with:
We can now run them exactly in the same way:
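A sketch of the rebuild with the generic variables, assuming the same file names as before:

```shell
# $CC, $CXX and $FC point at the C, C++ and Fortran compilers
# of the currently loaded toolchain
$CC  -o hello   hello.c
$CXX -o hello++ hello++.cc
$FC  -o hellof  hellof.f90

./hello
./hello++
./hellof
```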
We are now going to use a simple program that will display the versions of the different libraries linked to it. With your favourite editor, create a file called versions.c with the following contents:
```c
#include <stdio.h>
#include <hdf5.h>
#include <netcdf.h>
#include <eccodes.h>

int main() {
#if defined(__INTEL_LLVM_COMPILER)
    printf("Compiler: Intel LLVM %d\n", __INTEL_LLVM_COMPILER);
#elif defined(__INTEL_COMPILER)
    printf("Compiler: Intel Classic %d\n", __INTEL_COMPILER);
#elif defined(__clang_version__)
    printf("Compiler: Clang %s\n", __clang_version__);
#elif defined(__GNUC__)
    printf("Compiler: GCC %d.%d.%d\n", __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);
#else
    printf("Compiler information not available\n");
#endif

    // HDF5 version
    unsigned majnum, minnum, relnum;
    H5get_libversion(&majnum, &minnum, &relnum);
    printf("HDF5 version: %u.%u.%u\n", majnum, minnum, relnum);

    // NetCDF version
    printf("NetCDF version: %s\n", nc_inq_libvers());

    // ECCODES version
    printf("ECCODES version: ");
    codes_print_api_version(stdout);
    printf("\n");

    return 0;
}
```
Try to naively compile this program with:
```shell
$CC -o versions versions.c
```
Let's use the existing software installed on the system with modules, and benefit from the corresponding *_DIR environment variables defined in them to manually construct the include and library flags:

```shell
$CC -o versions versions.c \
    -I$HDF5_DIR/include -I$NETCDF4_DIR/include -I$ECCODES_DIR/include \
    -L$HDF5_DIR/lib -lhdf5 \
    -L$NETCDF4_DIR/lib -lnetcdf \
    -L$ECCODES_DIR/lib -leccodes
```
Load the appropriate modules so that the line above completes successfully and generates the versions executable. You will need to load the following modules to have those variables defined:
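The module list itself is missing from this extract. Based on the installation paths shown by ldd later in the tutorial (hdf5, netcdf4 and ecmwf-toolbox), the load line would presumably be:

```shell
module load hdf5 netcdf4 ecmwf-toolbox
```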
Run ./versions. You will get an error such as the one below:

```
./versions: error while loading shared libraries: libhdf5.so.200: cannot open shared object file: No such file or directory
```
While you passed the location of the libraries at compile time, the program cannot find them at runtime. Inspect the executable with ldd to see what libraries are missing. ldd is a utility that prints the shared libraries required by each program or shared library specified on the command line:
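A sketch of the inspection (the exact libraries and paths reported depend on your environment):

```shell
ldd ./versions
# Libraries the loader cannot locate are reported as "not found", e.g.
#   libhdf5.so.200 => not found
```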
Can you make that program run successfully?
While you passed the location of the libraries at compile time, the program cannot find them at runtime. There are two solutions:

- Use the environment variable LD_LIBRARY_PATH so the loader can locate the libraries when the program starts. This is not recommended in the long term, since the program will only work if that variable is set correctly in every environment it runs in.
- Rebuild the program with the library locations embedded in the executable itself as RPATHs, so nothing extra is needed at runtime.

If you use LD_LIBRARY_PATH, check that ldd with the environment variable defined reports all libraries found:
Final version

For convenience, all those software modules define additional environment variables with ready-made include and link flags. You can use those directly in your compilation with a simplified compilation line, and afterwards run your program without any additional settings.
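The exact variable names depend on the modules; assuming they follow a *_INCLUDE and *_LIB pattern (e.g. $HDF5_INCLUDE and $HDF5_LIB — check with module show hdf5), the simplified line would look like:

```shell
$CC -o versions versions.c \
    $HDF5_INCLUDE $NETCDF4_INCLUDE $ECCODES_INCLUDE \
    $HDF5_LIB $NETCDF4_LIB $ECCODES_LIB

./versions
```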
Can you rebuild the program so it uses the "old" versions of all those libraries in modules? Ensure the output of the program matches the versions loaded in the modules. Do the same with the latest.
You need to load the desired versions of the modules:
And then rebuild and run the program:
The output should match the versions loaded by the modules:
Repeat the operation with the latest versions.
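A sketch, assuming the modules expose version labels such as old and new (the tutorial later uses the label new for gcc; check the available labels with module avail):

```shell
module load hdf5/old netcdf4/old ecmwf-toolbox/old
# ...then rebuild versions exactly as before and run it
./versions
```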
To simplify the build process, let's create a simple Makefile for this program. With your favourite editor, create a file called Makefile in the same directory with the following contents:

Make sure that the indentations at the beginning of the lines are tabs and not spaces!
```makefile
#
# Makefile
#
# Make sure all the relevant modules are loaded before running make

EXEC = hello hello++ hellof versions

# TODO: Add the necessary variables into CFLAGS and LDFLAGS definition
CFLAGS =
LDFLAGS =

all: $(EXEC)

%: %.c
	$(CC) -o $@ $^ $(CFLAGS) $(LDFLAGS)

%: %.cc
	$(CXX) -o $@ $^

%: %.f90
	$(F90) -o $@ $^

test: $(EXEC)
	@for exe in $(EXEC); do ./$$exe; done

ldd: versions
	@ldd versions | grep -e netcdf.so -e eccodes.so -e hdf5.so

clean:
	rm -f $(EXEC)
```
You can test that it works by running:

```shell
make clean test ldd
```
Edit the Makefile and add the necessary include and library flags to the CFLAGS and LDFLAGS variables, as indicated by the TODO comment.
Then run it with:
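A sketch of the additions, mirroring the manual compilation line used earlier (note that make syntax uses $(VAR) rather than $VAR):

```makefile
CFLAGS  = -I$(HDF5_DIR)/include -I$(NETCDF4_DIR)/include -I$(ECCODES_DIR)/include
LDFLAGS = -L$(HDF5_DIR)/lib -lhdf5 -L$(NETCDF4_DIR)/lib -lnetcdf -L$(ECCODES_DIR)/lib -leccodes
```

Then rebuild and verify with `make clean test ldd`.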
So far we have used the default compiler toolchain to build this program. Because of the installation paths of the libraries, it is easy to see both the version of each library used and the compiler flavour with ldd:
```shell
$ make ldd
        libhdf5.so.200 => /usr/local/apps/hdf5/<HDF5 version>/GNU/8.5/lib/libhdf5.so.200 (0x000014f612b7d000)
        libnetcdf.so.19 => /usr/local/apps/netcdf4/<NetCDF version>/GNU/8.5/lib/libnetcdf.so.19 (0x000014f611f2a000)
        libeccodes.so => /usr/local/apps/ecmwf-toolbox/<ecCodes version>/GNU/8.5/lib/libeccodes.so (0x000014f611836000)
```
Rebuild the program with:
Use the following command to test and show what versions of the libraries are being used at any point:
```shell
make clean test
```
You can perform this test with the following one-liner, exploiting the test and ldd targets of the Makefile:
Pay attention to the following aspects:
Rebuild the program with the "new" GNU GCC compiler. Use the same command as above to test and show what versions of the libraries are being used at any point.
This time we need to be on the GNU prgenv, but also select the "new" gcc compiler instead of just the default. Remember you can look at the versions available in modules and their corresponding labels with:
This sequence of commands should do the trick:
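A possible sequence (module names and labels assumed; verify them with module avail gcc):

```shell
module load prgenv/gnu            # make sure we are on the GNU toolchain
module load gcc/new               # select the "new" GCC instead of the default
module load hdf5 netcdf4 ecmwf-toolbox
make clean test ldd
```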
Rebuild the program with the Classic Intel compiler once again, but this time reset your module environment once the executable has been produced and before running it. What happens when you run it?
Let's look at the following sequence of commands:
The result will be something similar to:
Inspecting the executable with ldd reveals some missing libraries at runtime:

That shows how, for Intel-built programs, you must have the Intel environment set up at both compile and run times.
Beyond the different compiler flavours on offer, we can also choose different MPI implementations for our MPI parallel programs. On the Atos HPCF and ECS, we can choose from the following implementations:
| Implementation | Module | Description |
|---|---|---|
| OpenMPI | openmpi | Standard OpenMPI implementation provided by Atos |
| Intel MPI | intel-mpi | Intel's MPI implementation based on MPICH. Part of the Intel oneAPI distribution |
| HPC-X OpenMPI | hpcx-openmpi | NVIDIA's optimised flavour of OpenMPI. This is the recommended option |
For the next exercise, we will use this adapted hello world code for MPI.
```c
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    char compiler[100];
    char mpi_version[MPI_MAX_LIBRARY_VERSION_STRING];
    int len;

#if defined(__INTEL_LLVM_COMPILER)
    sprintf(compiler, "Intel LLVM %d", __INTEL_LLVM_COMPILER);
#elif defined(__INTEL_COMPILER)
    sprintf(compiler, "Intel Classic %d", __INTEL_COMPILER);
#elif defined(__clang_version__)
    sprintf(compiler, "Clang %s", __clang_version__);
#elif defined(__GNUC__)
    sprintf(compiler, "GCC %d.%d.%d", __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);
#else
    sprintf(compiler, "information not available");
#endif

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Get_library_version(mpi_version, &len);
    mpi_version[len] = '\0';

    printf("Hello from MPI rank %d of %d. Compiler: %s MPI Flavour: %s\n",
           rank, size, compiler, mpi_version);

    MPI_Finalize();
    return 0;
}
```
Reset your environment with:
```shell
module reset
```
With your favourite editor, create the file mpiversions.c with the code above, and compile it into the executable mpiversions. Hint: you may use the module hpcx-openmpi.
In order to compile MPI parallel programs, we need to use the MPI compiler wrappers such as mpicc, mpicxx or mpif90. These wrappers are made available when you load one of the MPI modules on the system. We will use the default hpcx-openmpi. Since we are building a C program, we will use mpicc:
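A sketch of the build:

```shell
module load hpcx-openmpi
mpicc -o mpiversions mpiversions.c
```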
Write a small batch job that will compile and run the program using 2 processors and submit it to the batch system:
You may write a job script similar to the following. You may then submit it to the batch system with sbatch. Inspect the output to check what versions of Compiler and MPI are reported.
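A minimal sketch of such a job, assuming the SLURM batch system used on the Atos HPCF (the #SBATCH options shown are illustrative; queue and account options are omitted):

```shell
#!/bin/bash
#SBATCH --job-name=mpiversions
#SBATCH --ntasks=2
#SBATCH --output=mpiversions-%j.out

module load hpcx-openmpi
mpicc -o mpiversions mpiversions.c
srun -n 2 ./mpiversions
```

Submit it with sbatch and check the generated output file once the job completes.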
Tweak the previous job to build and run the mpiversions program with as many combinations of compiler families and MPI implementations as you can.
You may amend your existing job script to repeat the build and run for each combination. You may then submit it to the batch system with sbatch. Inspect the output again to check what versions of Compiler and MPI are reported.
To put into practice what we have learned so far, let's try to build and install CDO. You would typically not need to build this particular application, since it is already available as part of the standard software stack via modules, or easily installable with conda. However, it is a good illustration of how to build a real-world piece of software with dependencies on other software packages and libraries.
The goal of this exercise is for you to be able to build CDO and install it under one of your storage spaces (HOME or PERM), and then successfully run:
```shell
<PREFIX>/bin/cdo -V
```
You will need to:
Make sure that CDO is built at least with support for:
It is strongly recommended that you bundle the entire build process in a job script that you can submit in batch. That way you can request additional CPUs and speed up your compilation by exploiting build parallelism with make -j. If you would like a starting point for such a job, you can start from the following example, adding and amending the necessary bits as needed:
We will take a step-by-step approach, using the example job script above as the starting point. The first thing we need to do is to decide where to install your personal CDO. A good choice would be your PERM space, and we may use the same structure as the production installation. Because the script already has a variable called $VERSION containing the CDO version to install, we can also use that. This way we could have multiple versions of the same package installed alongside each other, should we require it in the future.
Next comes the decision on what to use for the build itself. Since it is a relatively small build, for performance you might use the temporary directory $TMPDIR. However, if you are going to submit the build as a batch job, that directory will be wiped at the end. While you are putting the build script together, it may be more practical to have the build directory somewhere that is not deleted after a failed build, so you can inspect output files and troubleshoot. As an example, we could pick a directory in PERM:
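Putting those two decisions into shell variables (the exact paths are illustrative):

```shell
# Where CDO will be installed, allowing several versions side by side
PREFIX=$PERM/apps/cdo/$VERSION

# Build directory kept in PERM so it survives a failed batch job
BUILDDIR=$PERM/build/cdo
mkdir -p $BUILDDIR
```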
Let's look at the environment for the build. All of the dependencies listed above are already available on the system, so we may leverage that by loading all the corresponding modules:
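Assuming the dependencies include NetCDF, HDF5, ecCodes and PROJ (PROJ and ecCodes are the ones discussed later in this exercise; the proj module name is an assumption to verify with module avail), the load line could look like:

```shell
module load prgenv/gnu hdf5 netcdf4 ecmwf-toolbox proj
```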
At this point, we need to refer to the installation instructions of this package in the official documentation. We can see it is a classic autotools package, which is typically built with the usual configure, make, make install sequence.
Since we are just getting some short help, we can just run the script locally to get the configure output.
We should inspect the output of the configure help command, and identify what options are to be used:
Since all those dependencies are not installed in system paths, we will need to specify the installation directory for each one of them. We may then use the --with-<package>=<directory> options offered by the configure script. We will also define where to install the package with the --prefix option. Let's amend the configure line accordingly:
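A sketch of the amended invocation (the --with option names follow the usual autotools convention and should be checked against the configure help output; the PROJ variable name is an assumption):

```shell
./configure --prefix=$PREFIX \
            --with-netcdf=$NETCDF4_DIR \
            --with-eccodes=$ECCODES_DIR \
            --with-proj=$PROJ_DIR
```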
Note that for PROJ, the name of the variable defined by its module may differ from the pattern used by the others, so check it before use. We are now ready to attempt our first build. Submit the build script to the batch system with sbatch.
While it builds, we may keep an eye on the progress with:
At this point CDO build and installation should complete successfully, but the execution of the newly installed CDO at the end fails with:
If we inspect the resulting binary with ldd, we will notice there are a few libraries that cannot be found at runtime:
We are missing the PROJ and ecCodes libraries. We will need to explicitly set RPATHs when we build CDO to make sure those libraries are found at runtime. In autotools packages, and as shown in the configure help we ran earlier, we may pass any extra link flags through the LDFLAGS variable:
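A sketch of the configure line with the RPATHs embedded (passing variables as configure arguments is standard autotools usage; the directory variables are assumed to come from the corresponding modules):

```shell
./configure --prefix=$PREFIX \
            --with-netcdf=$NETCDF4_DIR \
            --with-eccodes=$ECCODES_DIR \
            --with-proj=$PROJ_DIR \
            LDFLAGS="-Wl,-rpath,$PROJ_DIR/lib -Wl,-rpath,$ECCODES_DIR/lib"
```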
If we submit the build job again and wait for it to complete, we should see something like:
For reference, this is the complete and functional job script to build CDO: