The EGI HPC service aims at facilitating the execution of containerised workloads that can be offloaded to HPC systems from a cloud-native environment. It relies on Kubernetes, the de facto standard for container management in the cloud, as the main environment for managing users' workloads. As such, it uses a Virtual Kubelet - an implementation of a Kubernetes node that can be backed by any other system - and the interLink software from interTwin. interLink provides an abstraction for the execution of a Kubernetes pod on any remote resource capable of managing a container execution lifecycle (such as an HPC system).

Relying on Kubernetes allows users to exploit existing data science frameworks that are already Kubernetes-ready. interLink provides a simple interface that can be implemented and tuned by the HPC system so that it adjusts to the internal policies and configurations of the provider.
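
As an illustration of the kind of interface involved, the sketch below shows a minimal HTTP service that manages a container execution lifecycle on behalf of a site, in the spirit of an interLink backend. The endpoint names, payloads and helper functions are assumptions for illustration only and do not reproduce the exact interLink plugin specification.

    # Minimal sketch of a site-side container lifecycle service, in the spirit of an
    # interLink backend. Endpoint names, payloads and helpers are illustrative assumptions.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    jobs = {}  # pod UID -> batch job id (stand-in for real state handling)

    @app.route("/create", methods=["POST"])
    def create():
        pod = request.get_json()                 # pod description forwarded by the Kubernetes side
        uid = pod["metadata"]["uid"]
        jobs[uid] = submit_to_batch_system(pod)  # e.g. translate the pod into a batch script
        return jsonify({"uid": uid}), 201

    @app.route("/status", methods=["GET"])
    def status():
        uid = request.args["uid"]
        return jsonify({"uid": uid, "state": query_batch_system(jobs[uid])})

    @app.route("/delete", methods=["POST"])
    def delete():
        uid = request.get_json()["metadata"]["uid"]
        cancel_batch_job(jobs.pop(uid))          # e.g. cancel the underlying SLURM job
        return "", 204

    def submit_to_batch_system(pod):             # hypothetical helpers: a real backend would
        raise NotImplementedError                # call the local batch system here

    def query_batch_system(job_id):
        raise NotImplementedError

    def cancel_batch_job(job_id):
        raise NotImplementedError

    if __name__ == "__main__":
        app.run(port=4000)

In practice, sites would typically rely on an existing interLink plugin (for example for SLURM) rather than writing one from scratch; the sketch only conveys how thin the contract between the Kubernetes side and the HPC side can be.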

There are two main integration modes foreseen in the service:

  • Integration via a Check-in enabled SLURM REST API exposed by the HPC system
  • Integration via interLink deployed at the HPC site

In both cases, the integration tries to minimise changes to the HPC system while, from the users' perspective, allowing the same kind of workloads to run.

Integration via Check-in enabled SLURM REST API

In this case, the HPC site delivers a SLURM REST API endpoint accessible from the Kubernetes cluster where users create their workloads. The SLURM REST API offers an HTTP-based set of operations for submitting and managing jobs on an underlying SLURM cluster. In order to control access to the SLURM REST API, a simple authentication & authorization proxy server accepts Check-in credentials (i.e. OAuth access tokens) and maps those credentials to local users. Optionally, ALISE can provide the mapping from Check-in users to existing HPC accounts.
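
As an illustration, the hedged sketch below submits a job through such a proxy with a Check-in access token. The proxy URL, the API version in the path and the payload fields are assumptions; the exact schema depends on the slurmrestd version deployed by the site.

    # Hedged sketch: submit a SLURM job through the site's auth proxy using an EGI
    # Check-in access token. The proxy URL, API version and payload fields are
    # assumptions; the exact schema depends on the slurmrestd version in use.
    import os
    import requests

    PROXY_URL = "https://hpc.example.org/slurm/v0.0.40"   # hypothetical proxy endpoint
    TOKEN = os.environ["CHECKIN_ACCESS_TOKEN"]            # OAuth access token from Check-in

    headers = {
        "Authorization": f"Bearer {TOKEN}",               # the proxy validates this token
        "Content-Type": "application/json",
    }

    payload = {
        "script": "#!/bin/bash\nsrun apptainer run docker://alpine echo hello\n",
        "job": {
            "name": "egi-hpc-demo",
            "partition": "debug",                         # site-specific partition name
            "current_working_directory": "/tmp",
            "environment": ["PATH=/usr/bin:/bin"],
        },
    }

    # The proxy maps the Check-in identity to a local account and forwards the
    # request to the internal slurmrestd instance.
    resp = requests.post(f"{PROXY_URL}/job/submit", headers=headers, json=payload)
    resp.raise_for_status()
    print("Submitted job", resp.json().get("job_id"))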

The flow for running applications in this mode is as follows:

  1. (Optionally) User maps their identity with ALISE so that an existing account/allocation on the HPC system can be used
  2. User creates a pod in Kubernetes with the appropriate labels so that it is scheduled onto the Virtual Kubelet node with interLink as backend; this pod has an associated secret containing a valid Check-in token for the user (see the sketch after this list)
  3. interLink creates the remote workload by submitting SLURM REST API requests to the HPC site's authentication/authorization proxy endpoint
  4. The proxy will authorise the user by inspecting the entitlements in the Check-in token (e.g. by checking membership of a given VO) and map the user to a local account
  5. The proxy will forward the API request to the internal SLURM REST API as the mapped user, which will in turn perform the requested operations on the SLURM cluster
  6. interLink will keep control of the job by repeating steps 3 to 5 as needed
  7. User interacts with the workload by using regular Kubernetes operations (e.g. get pod)
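
A minimal sketch of step 2 follows, using the Kubernetes Python client. The Virtual Kubelet node name, the toleration key, and the secret and image names are assumptions that depend on how the interLink-backed node is deployed in the cluster.

    # Hedged sketch of step 2: create a pod that is scheduled onto the interLink-backed
    # Virtual Kubelet node and carries the user's Check-in token in a secret.
    # The node name, toleration key, secret and image names are assumptions.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="hpc-demo"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            # Pin the pod to the Virtual Kubelet node backed by interLink.
            node_selector={"kubernetes.io/hostname": "interlink-vk-node"},
            # Tolerate the taint typically placed on Virtual Kubelet nodes.
            tolerations=[client.V1Toleration(
                key="virtual-node.interlink/no-schedule", operator="Exists")],
            containers=[client.V1Container(
                name="app",
                image="ghcr.io/example/science-app:latest",   # hypothetical image
                command=["python", "train.py"],
                # Expose the secret holding a valid Check-in access token.
                env=[client.V1EnvVar(
                    name="CHECKIN_ACCESS_TOKEN",
                    value_from=client.V1EnvVarSource(
                        secret_key_ref=client.V1SecretKeySelector(
                            name="checkin-token", key="access_token")))],
            )],
        ),
    )

    v1.create_namespaced_pod(namespace="default", body=pod)
    # The workload can then be followed with regular Kubernetes operations,
    # e.g. `kubectl get pod hpc-demo` or `kubectl logs hpc-demo`.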

This scenario allows for the execution of applications beyond Kubernetes-based ones, as the SLURM REST API exposed by the HPC site could potentially also be used by other third-party tools.

Integration via interLink deployed at the HPC site

In this second scenario, interLink is deployed within the HPC site premises, keeping it completely under the control of the HPC site administrators. In this case the setup can be further customised to match the needs of the site.

The user flow remains the same, but as interLink is co-located with the rest of the HPC site, there is no need for the HPC site to expose the SLURM REST API and direct submission to SLURM can be used. interLink needs to be configured to perform the mapping of EGI Check-in users to local users. As before, ALISE may be used for account linking, or alternative approaches, such as mapping all users to a service account, can be followed.
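
The hedged sketch below illustrates one possible shape of that mapping step, preferring an ALISE-provided linkage and falling back to a shared service account. The ALISE endpoint, response fields, entitlement value and fallback account name are assumptions; the actual behaviour depends on the site's configuration and the plugin it chooses.

    # Hedged sketch of the site-side mapping step: resolve an EGI Check-in identity to a
    # local account, preferring an ALISE-provided linkage and falling back to a shared
    # service account. The ALISE URL, response fields and fallback account are assumptions.
    import requests

    ALISE_URL = "https://alise.example.org/api/mapping"   # hypothetical endpoint
    FALLBACK_ACCOUNT = "egi-svc"                          # shared service account (site policy)
    REQUIRED_ENTITLEMENT = "urn:mace:egi.eu:group:vo.example.org:role=member"


    def map_checkin_user(token_claims: dict, access_token: str) -> str:
        """Return the local username a workload should run as."""
        # 1. Authorise: require membership of the agreed VO/group in the token entitlements.
        if REQUIRED_ENTITLEMENT not in token_claims.get("eduperson_entitlement", []):
            raise PermissionError("user is not entitled to use this HPC allocation")

        # 2. Try an ALISE account linkage for this Check-in identity (claim 'sub').
        resp = requests.get(
            ALISE_URL,
            params={"sub": token_claims["sub"]},
            headers={"Authorization": f"Bearer {access_token}"},
            timeout=10,
        )
        if resp.ok and resp.json().get("local_username"):
            return resp.json()["local_username"]

        # 3. Otherwise fall back to the shared service account.
        return FALLBACK_ACCOUNT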
