Alexander Lazovik

Orchestration Framework for Hybrid Computing

Supervisors: Alexander Lazovik
Date: 2026-01-28
Type: master-project/master-internship
Description:

(Requirement: availability for a six-month, full-time internship at TNO; interest in cloud computing, HPC, or AI infrastructure)
In this internship at TNO, the student will contribute to the design and implementation of a tool that automatically selects the most suitable digital infrastructure for a given application. The tool will support intelligent decision-making across heterogeneous computing environments such as HPC, quantum computing, and emerging accelerators (e.g. neuromorphic hardware). The internship focuses on translating application requirements (e.g. performance, cost, energy, data sensitivity) into infrastructure choices using rule-based logic, optimization methods, or AI-driven approaches. The work will be carried out in close collaboration with researchers working on digital infrastructures and AI orchestration.
The challenge is to design and implement a research-oriented prototype that maps application requirements such as performance, cost, energy efficiency, and data constraints to infrastructure capabilities (e.g. cloud, HPC, edge, or accelerators). The student will investigate how these requirements can be formalized and translated into infrastructure choices using rule-based, optimization, or AI-driven methods, combining analytical research with hands-on implementation. The approach will be validated on realistic use cases and documented in a structured, research-quality manner, contributing to ongoing work on intelligent orchestration of heterogeneous computing infrastructures.
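To make the requirement-to-infrastructure mapping concrete, below is a minimal sketch of a rule-based selector. The requirement fields, infrastructure profiles, and scoring weights are illustrative assumptions, not part of the TNO tool; an optimization- or AI-driven variant would replace the scoring function.

```python
# Minimal sketch of rule-based infrastructure selection (illustrative only;
# all requirement fields, infrastructure profiles, and weights are assumptions).

from dataclasses import dataclass

@dataclass
class Infrastructure:
    name: str
    relative_performance: float   # higher is better
    cost_per_hour: float          # lower is better
    energy_efficiency: float      # higher is better
    supports_sensitive_data: bool

CANDIDATES = [
    Infrastructure("hpc_cluster", relative_performance=0.9, cost_per_hour=4.0,
                   energy_efficiency=0.5, supports_sensitive_data=True),
    Infrastructure("public_cloud", relative_performance=0.7, cost_per_hour=2.5,
                   energy_efficiency=0.6, supports_sensitive_data=False),
    Infrastructure("edge_node", relative_performance=0.3, cost_per_hour=0.5,
                   energy_efficiency=0.9, supports_sensitive_data=True),
]

def select(requirements: dict) -> Infrastructure:
    """Filter on hard constraints, then rank candidates by a weighted score."""
    feasible = [c for c in CANDIDATES
                if not requirements["data_sensitive"] or c.supports_sensitive_data]
    weights = requirements["weights"]

    def score(c: Infrastructure) -> float:
        return (weights["performance"] * c.relative_performance
                - weights["cost"] * c.cost_per_hour / 10.0
                + weights["energy"] * c.energy_efficiency)

    return max(feasible, key=score)

if __name__ == "__main__":
    reqs = {"data_sensitive": True,
            "weights": {"performance": 0.5, "cost": 0.3, "energy": 0.2}}
    print(select(reqs).name)
```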

Vertical Federated Learning Framework

Supervisors: Revin Alief, Dilek Düştegör, Alexander Lazovik
Date: 2026-01-28
Type: master-project/master-internship
Description:

(Requirement: availability for a six-month, full-time internship at TNO)
Big Data and Data Science (AI & ML) are increasingly popular because of the advantages they can bring to companies. Data analysis is often performed in long-running processes or even in always-online streaming pipelines, and it is almost always subject to constraints from users, the business perspective, the hardware, and the platforms on which the analysis runs. At TNO we are developing a vertical federated learning framework that separates concerns between local models, which analyse local data, and a central model, which learns from many local models and updates them when necessary. We have already applied horizontal federated learning in multiple domains, such as energy and industry. Vertical Federated Learning (VFL) enables multiple parties to collaboratively train a machine learning model over vertically partitioned datasets without leaking private data.
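As an illustration of the vertical setting, the sketch below trains two party-local encoders and a central head on vertically partitioned features of the same aligned samples. It is a plain split-learning toy without any privacy-preserving protocol, and all data shapes and domains are assumptions.

```python
# Minimal sketch of vertical federated learning as split learning (illustrative;
# no encryption or privacy-preserving protocol is shown, data shapes are assumptions).

import torch
import torch.nn as nn

torch.manual_seed(0)
n_samples = 256

# Two parties hold different feature columns for the *same* aligned samples.
x_party_a = torch.randn(n_samples, 5)   # e.g. smart-meter features
x_party_b = torch.randn(n_samples, 3)   # e.g. weather/industry features
y = torch.randn(n_samples, 1)           # label held by the coordinating party

# Each party keeps a local encoder on its own data; only embeddings are shared.
encoder_a = nn.Sequential(nn.Linear(5, 8), nn.ReLU())
encoder_b = nn.Sequential(nn.Linear(3, 8), nn.ReLU())
server_head = nn.Linear(16, 1)          # central model on concatenated embeddings

params = (list(encoder_a.parameters()) + list(encoder_b.parameters())
          + list(server_head.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(50):
    optimizer.zero_grad()
    emb_a = encoder_a(x_party_a)            # computed locally at party A
    emb_b = encoder_b(x_party_b)            # computed locally at party B
    pred = server_head(torch.cat([emb_a, emb_b], dim=1))  # central aggregation
    loss = loss_fn(pred, y)
    loss.backward()                          # gradients flow back to local encoders
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```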
Internship Role and Responsibilities:
- Investigate and experiment with vertical federated learning approaches and apply them to the energy or industry domain; develop a scalable federated learning platform using state-of-the-art methods.
- Evaluate the approach on real-world scenarios and benchmark data.
- Review the state of the art in heterogeneous edge computing and in federated learning frameworks and scenarios.

Evaluating Server-Based and Serverless Deployment Strategies for Machine Learning Prediction Workloads in KServe

Supervisors: Mahmoud Alasmar, Alexander Lazovik
Date: 2026-01-09
Type: bachelor-project
Description:

KServe is an open-source, Kubernetes-native framework for deploying machine learning inference services. It supports both server-based deployments using standard Kubernetes resources and serverless deployments using Knative, enabling request-driven autoscaling and scale-to-zero capabilities. This project aims to evaluate the system-level performance (latency, throughput, resource usage) of server-based and serverless deployment strategies for classical machine learning prediction workloads using KServe. The study focuses on CPU-only inference services based on scikit-learn and XGBoost models. In the first phase, representative prediction models will be trained to generate inference workloads. In the second phase, these models will be deployed with KServe under two configurations: (i) Kubernetes-based deployments with Horizontal Pod Autoscaling (HPA), and (ii) Knative-based serverless deployments with request-driven autoscaling. A stream of controlled query workloads will be generated to simulate different traffic patterns. The evaluation will focus on latency, throughput, autoscaling responsiveness, CPU and memory utilization, and cold-start overhead. The results will highlight the trade-offs between performance, scalability, and resource efficiency in server-based and serverless ML serving environments, and how each deployment strategy can be adapted to different types of workloads. References:
Clipper: A Low-Latency Online Prediction Serving System
SOCK: Rapid Task Provisioning with Serverless-Optimized Containers
SelfTune: Tuning Cluster Managers
Horizontal Pod Autoscaling
Knative Technical Overview
KServe Documentation
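As an illustration of the workload-generation phase, the sketch below sends a paced stream of prediction requests and reports latency percentiles and error counts. The endpoint URL, Host header, and V1 "instances" payload are placeholders that would have to match the actual KServe deployment and model.

```python
# Minimal sketch of a controlled load generator for latency/throughput measurement
# (the endpoint URL, Host header, and payload below are placeholders and must be
# adapted to the actual KServe InferenceService under test).

import time
import statistics
import requests

URL = "http://<ingress-host>/v1/models/sklearn-demo:predict"   # placeholder
HEADERS = {"Host": "sklearn-demo.default.example.com"}          # placeholder Knative host
PAYLOAD = {"instances": [[5.1, 3.5, 1.4, 0.2]]}                 # placeholder feature vector

def run(requests_per_second: float, duration_s: float) -> None:
    """Send requests at a fixed rate and report p50/p95 latency and error count."""
    latencies, errors = [], 0
    interval = 1.0 / requests_per_second
    start = time.perf_counter()
    while time.perf_counter() - start < duration_s:
        t0 = time.perf_counter()
        try:
            resp = requests.post(URL, json=PAYLOAD, headers=HEADERS, timeout=10)
            resp.raise_for_status()
            latencies.append(time.perf_counter() - t0)
        except requests.RequestException:
            errors += 1
        time.sleep(max(0.0, interval - (time.perf_counter() - t0)))  # simple pacing
    if latencies:
        p50 = statistics.median(latencies) * 1e3
        p95 = statistics.quantiles(latencies, n=20)[18] * 1e3
        print(f"rps={requests_per_second} p50={p50:.1f}ms p95={p95:.1f}ms errors={errors}")
    else:
        print(f"rps={requests_per_second} no successful requests, errors={errors}")

if __name__ == "__main__":
    for rps in (1, 5, 20):   # simple stepped traffic pattern
        run(rps, duration_s=30)
```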

Estimating Inference Latency of Deep Learning Models Using Roofline Analysis

Supervisors: Mahmoud Alasmar, Alexander Lazovik
Date: 2026-01-09
Type: bachelor-project
Description:

Accurate estimation of inference latency is critical for meeting service-level objectives (SLOs) in large language model (LLM) serving systems. While classical ML prediction methods can be leveraged for this estimation task, their accuracy depends heavily on the selected features. Analytical performance models such as Roofline analysis, on the other hand, provide a hardware-aware upper bound on achievable performance; however, their applicability to latency estimation remains an open question. This project investigates how Roofline analysis can be integrated with ML prediction methods to improve estimation of the end-to-end inference latency of LLM queries on a single GPU. A small set of representative LLMs will be selected, and inference latency will be measured under controlled conditions (sequence length, batch size). Roofline-related metrics, such as arithmetic intensity and memory bandwidth utilization, will be collected using GPU profiling tools. These metrics will be used to estimate processing time and to build regression models that predict end-to-end inference latency. The evaluation will analyze prediction error, sensitivity to model size and input length, and the limitations of Roofline-based estimation. References:
Predicting LLM Inference Latency: A Roofline-Driven ML Method
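As a worked example of the analytical side, the sketch below computes arithmetic intensity and a roofline-style time bound for a single GEMM. The hardware peaks and layer sizes are placeholders; in the project, such quantities would instead come from GPU profiling and could serve as features for the regression models.

```python
# Minimal worked example of a roofline-style time estimate for one layer
# (hardware numbers and layer sizes are placeholders, not measured values).

PEAK_FLOPS = 312e12          # dense FP16 peak of a hypothetical GPU, FLOP/s
PEAK_BANDWIDTH = 2.0e12      # HBM bandwidth, bytes/s

def roofline_time(flops: float, bytes_moved: float) -> float:
    """Lower-bound execution time: the larger of compute time and memory time."""
    return max(flops / PEAK_FLOPS, bytes_moved / PEAK_BANDWIDTH)

# Example: a GEMM in a transformer layer, batch*seq = 2048 tokens, hidden = 4096.
m, k, n = 2048, 4096, 4096
flops = 2 * m * k * n                          # multiply-accumulate count
bytes_moved = 2 * (m * k + k * n + m * n)      # FP16 operands and output, no cache reuse

ai = flops / bytes_moved                       # arithmetic intensity (FLOP/byte)
t = roofline_time(flops, bytes_moved)
print(f"arithmetic intensity = {ai:.1f} FLOP/B, estimated time = {t*1e6:.1f} us")
```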

Evaluating the Performance of vLLM and DeepSpeed for Serving LLM Inference Queries

Supervisors: Mahmoud Alasmar, Alexander Lazovik
Date: 2026-01-09
Type: master-project
Description:

The computational complexity of serving large language model (LLM) queries depends heavily on model size, sequence length, and memory access patterns. To address these challenges, several LLM inference serving frameworks have been proposed that employ different optimization techniques to improve throughput and reduce memory overhead. vLLM and DeepSpeed are two prominent examples relying on distinct techniques for efficient inference serving: vLLM introduces PagedAttention for efficient key–value cache management, whereas DeepSpeed integrates multiple optimization techniques, such as parallelism and kernel-level optimizations, for scalable inference. This project aims to systematically evaluate the end-to-end inference performance (latency, throughput, memory footprint) of vLLM and DeepSpeed under different inference workloads. Experiments will be performed using a publicly available dataset such as ShareGPT. The results will highlight the trade-offs between KV cache management, kernel-level optimizations, and parallelism strategies in LLM inference serving, providing insights into the conditions under which each framework is most effective. References:
Efficient Memory Management for Large Language Model Serving with PagedAttention
DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale
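As an illustration of one side of the comparison, the sketch below measures batch latency and generated-token throughput using vLLM's offline LLM API (quickstart-style usage). The model name, prompts, and sampling settings are placeholders, and the DeepSpeed side of the comparison would need an analogous harness.

```python
# Minimal sketch of measuring offline generation latency/throughput with vLLM
# (model name, prompts, and sampling settings are placeholders).

import time
from vllm import LLM, SamplingParams

prompts = ["Explain the roofline model in one sentence."] * 32   # placeholder batch
sampling_params = SamplingParams(temperature=0.0, max_tokens=128)

llm = LLM(model="facebook/opt-1.3b")   # placeholder model

t0 = time.perf_counter()
outputs = llm.generate(prompts, sampling_params)
elapsed = time.perf_counter() - t0

generated_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"batch latency: {elapsed:.2f} s, "
      f"throughput: {generated_tokens / elapsed:.1f} generated tokens/s")
```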

Estimating Time and Resource Usage of SLURM Jobs Using RLM

Supervisors: Mahmoud Alasmar, Alexander Lazovik
Date: 2026-01-09
Type: master-project/master-internship
Description:

Efficient allocation of computational resources in high-performance computing (HPC) clusters requires accurate prediction of job runtime and resource requirements. Users often over-request CPU, memory, or time to avoid failures, which can lead to wasted resources and longer queue times. Therefore, predicting these requirements before job submission is critical for improving cluster utilization and scheduling efficiency. This project investigates how Regression Language Models (RLMs) can be used to estimate the time and resource usage of SLURM jobs based on submitted Bash scripts and job metadata. The study will use real job submission data from the Habrok HPC cluster. References:
Regression Language Models for Code
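As an illustration of the input side, the sketch below parses #SBATCH directives from a submission script and serializes the metadata plus the script body into a single text sequence for a regression language model. The example script and the serialization format are assumptions; in the project, labels would come from accounting data (e.g. sacct) on the Habrok cluster.

```python
# Minimal sketch of turning a SLURM submission script into an input record for a
# regression model (example script and serialization format are illustrative).

import re

EXAMPLE_SCRIPT = """\
#!/bin/bash
#SBATCH --job-name=train_cnn
#SBATCH --time=02:00:00
#SBATCH --cpus-per-task=8
#SBATCH --mem=16G
module load Python/3.11
python train.py --epochs 50
"""

def parse_sbatch(script: str) -> dict:
    """Extract #SBATCH directives as key/value pairs."""
    directives = {}
    for line in script.splitlines():
        m = re.match(r"#SBATCH\s+--([\w-]+)[=\s]+(\S+)", line)
        if m:
            directives[m.group(1)] = m.group(2)
    return directives

def to_rlm_input(script: str) -> str:
    """Serialize metadata + script body into one text sequence for a regression LM."""
    meta = parse_sbatch(script)
    header = ", ".join(f"{k}={v}" for k, v in sorted(meta.items()))
    return f"[METADATA] {header}\n[SCRIPT]\n{script}"

if __name__ == "__main__":
    print(parse_sbatch(EXAMPLE_SCRIPT))
    print(to_rlm_input(EXAMPLE_SCRIPT)[:200])
```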

Multivariate State Estimation in Drinking Water Distribution Networks

Supervisors: Huy Truong, Andrés Tello, Alexander Lazovik
Date: 2026-01-09
Type: bachelor-project/master-project/master-internship
Description:

Monitoring water distribution networks plays a key role in ensuring the delivery of safe drinking water to millions of residents in urban areas. Traditionally, this task relies on physics-based mathematical simulations; however, such models require a large number of parameters and frequent recalibration to stay consistent with sensor measurements. As an alternative, recent studies have proposed data-driven approaches based on Graph Neural Networks (GNNs), which leverage pressure measurements from a limited set of sensors at known locations to infer pressure values at unmonitored nodes in the network. Building on this idea, the project extends the existing univariate method to a multivariate framework, aiming to jointly estimate multiple hydraulic quantities, including pressure, demand, flow rate, and head loss. The candidate is expected to have a solid machine learning foundation and proficiency in one of the deep learning frameworks (PyTorch, TensorFlow). Reference:
Graph Neural Networks for Pressure Estimation in Water Distribution Systems
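As an illustration of the univariate starting point, the sketch below trains a small GCN (PyTorch Geometric) to reconstruct node pressures from a few observed sensor nodes on a synthetic toy graph; the multivariate extension would widen the output to several hydraulic quantities. Graph, sensor placement, and values are placeholders; a real setup would use hydraulic simulations or measurements.

```python
# Minimal sketch of GNN-based state estimation: a two-layer GCN regresses node
# pressures from a few observed sensor nodes (toy graph and values; a real setup
# would use EPANET-simulated or measured data).

import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

num_nodes, num_sensors = 30, 5
edge_index = torch.randint(0, num_nodes, (2, 80))                 # toy topology
edge_index = torch.cat([edge_index, edge_index.flip(0)], dim=1)   # make it undirected
true_pressure = torch.rand(num_nodes, 1) * 50 + 20                # toy ground truth [m]

sensor_mask = torch.zeros(num_nodes, dtype=torch.bool)
sensor_mask[torch.randperm(num_nodes)[:num_sensors]] = True

# Input features: observed pressure where a sensor exists, zero elsewhere, plus the mask.
x = torch.cat([true_pressure * sensor_mask.float().unsqueeze(1),
               sensor_mask.float().unsqueeze(1)], dim=1)

class PressureGCN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(2, hidden)
        self.conv2 = GCNConv(hidden, 1)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

model = PressureGCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(200):
    optimizer.zero_grad()
    pred = model(x, edge_index)
    # Supervise on unmonitored nodes (available here because the data is simulated).
    loss = nn.functional.mse_loss(pred[~sensor_mask], true_pressure[~sensor_mask])
    loss.backward()
    optimizer.step()

print(f"MSE on unmonitored nodes: {loss.item():.3f}")
```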

Evaluating LoRA for GNN-based model adaptation

Supervisors: Andrés Tello, Alexander Lazovik
Date: 2026-01-05
Type: bachelor-project/master-project
Description:

Foundation models have become a game-changer in several fields due to their strong generalization capabilities after some form of model adaptation, with fine-tuning being the most common approach. In this project, we aim to evaluate the effectiveness of Low-Rank Adaptation (LoRA) methods in terms of model performance, model size, and memory usage. While conventional full fine-tuning often yields high accuracy, LoRA can represent a more sustainable yet still effective alternative for model adaptation.

In this project, the student will implement a LoRA-based approach to adapt a pre-trained GNN-based model to new, unseen target datasets in the context of Water Distribution Networks (WDNs). The pre-trained model has been trained on several WDNs for pressure reconstruction, and the goal is to adapt it to make predictions on unseen WDN topologies with different operating conditions. The LoRA-based adaptation will be compared against a conventional full fine-tuning approach.
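As an illustration of the adaptation mechanism, the sketch below wraps a frozen linear layer with trainable low-rank factors in plain PyTorch. The rank, scaling, and choice of which layers inside the pre-trained GNN to wrap are assumptions to be decided during the project.

```python
# Minimal sketch of a LoRA adapter around a frozen linear layer (pure PyTorch;
# rank, scaling, and which GNN layers to wrap are assumptions).

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """y = W_frozen x + (alpha / r) * B A x, with only A and B trainable."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                       # freeze pre-trained weights
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: identity at start
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

# Usage: wrap the linear transforms inside a pre-trained GNN, then fine-tune only LoRA params.
layer = LoRALinear(nn.Linear(64, 64))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")
```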

References:
LoRA: Low-Rank Adaptation of Large Language Models
Graph Low-Rank Adapters of High Regularity for Graph Neural Networks and Graph Transformers
ELoRA: Low-Rank Adaptation for Equivariant GNNs