Alexander Lazovik
Orchestration Framework for Hybrid Computing
Supervisors:
Alexander Lazovik
Date: 2026-01-28
Type: master-project/master-internship
Description:
In this internship at TNO, the student will contribute to the design of the underlying model and to the implementation of a tool that automatically selects the most suitable digital infrastructure for a given application. The tool will support intelligent decision-making across heterogeneous computing environments such as HPC, quantum, and emerging accelerators (e.g., neuromorphic hardware). The internship focuses on translating application requirements (e.g., performance, cost, energy, data sensitivity) into infrastructure choices using rule-based logic, optimization methods, or AI-driven approaches. The work will be carried out in close collaboration with researchers working on digital infrastructures and AI orchestration.
The challenge of this internship is to design and implement a research-oriented prototype that automatically selects the most suitable digital infrastructure (e.g., cloud, HPC, edge, or accelerators) based on application requirements such as performance, cost, energy efficiency, and data constraints. The student will investigate how these requirements can be formalized and mapped to infrastructure capabilities using rule-based, optimization-based, or AI-driven methods. In this role, the student will combine analytical research with hands-on implementation, validate the approach on realistic use cases, and document the findings in a structured, research-quality manner, contributing to ongoing work on the intelligent orchestration of heterogeneous computing infrastructures.
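To make the rule-based direction concrete, here is a minimal selection sketch in Python (all infrastructure profiles, weights, and requirement names below are hypothetical illustrations, not TNO's model):

```python
from dataclasses import dataclass

@dataclass
class Infrastructure:
    name: str
    perf: float        # relative throughput score (higher is better)
    cost: float        # relative cost per hour (lower is better)
    energy: float      # relative energy use (lower is better)
    on_premise: bool   # can host sensitive data

# Hypothetical capability profiles; real profiles would come from benchmarks.
CANDIDATES = [
    Infrastructure("cloud", perf=0.6, cost=0.4, energy=0.5, on_premise=False),
    Infrastructure("hpc",   perf=1.0, cost=0.8, energy=0.9, on_premise=True),
    Infrastructure("edge",  perf=0.2, cost=0.1, energy=0.2, on_premise=True),
]

def select(requirements: dict) -> Infrastructure:
    """Filter out candidates that violate hard constraints, then score the
    rest as a weighted sum of requirement matches."""
    feasible = [c for c in CANDIDATES
                if not (requirements.get("sensitive_data") and not c.on_premise)]
    w = requirements.get("weights", {"perf": 1.0, "cost": 1.0, "energy": 1.0})
    return max(feasible,
               key=lambda c: w["perf"] * c.perf - w["cost"] * c.cost - w["energy"] * c.energy)

choice = select({"sensitive_data": True,
                 "weights": {"perf": 2.0, "cost": 1.0, "energy": 0.5}})
print(choice.name)  # "hpc": on-premise and performance-weighted
```

An optimization- or AI-driven variant would replace the hand-written scoring function with a learned or solver-based mapping over the same requirement and capability representation.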
Vertical Federated Learning Framework
Supervisors:
Revin Alief,
Dilek Düştegör,
Alexander Lazovik
Date: 2026-01-28
Type: master-project/master-internship
Description:
Big Data and Data Science (AI & ML) are increasingly popular topics because of the advantages they can bring to companies. Data analysis is often done in long-running processes, or even as an always-online streaming process, and it almost always runs under different kinds of constraints: from users, from a business perspective, from hardware, and from the platforms on which the analysis runs. At TNO we are developing a vertical federated learning framework that separates concerns between local models, which analyse local data, and a central model, which learns from the many local models and updates them when necessary. We have already applied horizontal federated learning in multiple domains, such as energy and industry. Vertical Federated Learning (VFL) enables multiple parties to collaboratively train a machine learning model over vertically partitioned datasets without leaking private data.
Internship Role and Responsibilities:
- Your challenge is to investigate and experiment with the vertical federated learning approach and apply it to the energy or industry domain, developing a scalable federated learning platform using state-of-the-art methods (see the sketch after this list).
- Evaluate realistic scenarios and benchmark the approach on real-world data.
- Survey the state of the art in heterogeneous edge computing and in federated learning frameworks and scenarios.
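To make the VFL setting concrete, here is a minimal numerical sketch of the training loop (two hypothetical parties with synthetic data; a real framework would add encryption and secure aggregation on top of this exchange):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
X_a = rng.normal(size=(n, 3))   # party A's private features
X_b = rng.normal(size=(n, 2))   # party B's private features
y = (X_a @ np.array([1.0, -2.0, 0.5])
     + X_b @ np.array([0.7, 1.5])
     + rng.normal(scale=0.1, size=n))

w_a = np.zeros(3)               # party A's local model
w_b = np.zeros(2)               # party B's local model
lr = 0.05

for step in range(200):
    # Each party shares only its partial prediction, never raw features.
    z_a, z_b = X_a @ w_a, X_b @ w_b
    residual = (z_a + z_b) - y          # computed by the label-holding coordinator
    # The shared residual is broadcast back; each party updates locally.
    w_a -= lr * X_a.T @ residual / n
    w_b -= lr * X_b.T @ residual / n

print(np.round(w_a, 2), np.round(w_b, 2))  # approaches the true coefficients
```

The key property illustrated here is the separation of concerns: raw features stay with their owners, and only intermediate predictions and gradient signals cross party boundaries.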
Evaluating Server-Based and Serverless Deployment Strategies for Machine Learning Prediction Workloads in KServe
Supervisors:
Mahmoud Alasmar,
Alexander Lazovik
Date: 2026-01-09
Type: bachelor-project
Description:
References:
- Clipper: A Low-Latency Online Prediction Serving System
- SOCK: Rapid Task Provisioning with Serverless-Optimized Containers
- SelfTune: Tuning Cluster Managers
- Horizontal Pod Autoscaling
- Knative Technical Overview
- KServe Documentation
Estimating Inference Latency of Deep Learning Models Using Roofline Analysis
Supervisors:
Mahmoud Alasmar,
Alexander Lazovik
Date: 2026-01-09
Type: bachelor-project
Description:
References:
- Predicting LLM Inference Latency: A Roofline-Driven ML Method
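For orientation, roofline analysis lower-bounds an operator's latency by the slower of its compute time and its memory-transfer time. A minimal sketch for a single dense layer (the accelerator peak numbers are illustrative assumptions, not measurements):

```python
def roofline_latency(flops: float, bytes_moved: float,
                     peak_flops: float, peak_bw: float) -> float:
    """Latency lower bound: the op is either compute-bound or memory-bound."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Dense layer y = x @ W at batch size 1, W of shape (4096, 4096), fp16 weights.
flops = 2 * 4096 * 4096          # one multiply-accumulate per weight
bytes_moved = 4096 * 4096 * 2    # weight reads dominate traffic at batch 1
# Illustrative accelerator: 100 TFLOP/s peak compute, 1 TB/s memory bandwidth.
t = roofline_latency(flops, bytes_moved, peak_flops=100e12, peak_bw=1e12)
print(f"{t * 1e6:.1f} us")       # memory-bound at small batch size
```

Summing such per-layer bounds over a model gives a first-order latency estimate, which the project would then refine against measured inference times.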
Evaluating the Performance of vLLM and DeepSpeed for Serving LLM Inference Queries
Supervisors:
Mahmoud Alasmar,
Alexander Lazovik
Date: 2026-01-09
Type: master-project
Description:
References:
- Efficient Memory Management for Large Language Model Serving with PagedAttention
- DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale
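A minimal throughput-measurement sketch using vLLM's offline Python API (the model checkpoint and sampling settings are placeholders, and the exact API may vary between vLLM versions):

```python
import time
from vllm import LLM, SamplingParams

# Placeholder model; any HF-compatible checkpoint the hardware can hold works.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.0, max_tokens=128)
prompts = ["Explain paged attention in one paragraph."] * 32

start = time.perf_counter()
outputs = llm.generate(prompts, params)   # batched generation over all prompts
elapsed = time.perf_counter() - start

generated = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{generated / elapsed:.1f} generated tokens/s over {len(prompts)} requests")
```

A fair comparison would run the same prompt set, model, and hardware against a DeepSpeed Inference deployment and report latency percentiles alongside throughput.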
Estimating Time and Resource Usage of SLURM Jobs Using Regression Language Models
Supervisors:
Mahmoud Alasmar,
Alexander Lazovik
Date: 2026-01-09
Type: master-project/master-internship
Description:
References:
- Regression Language Models for Code
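As one possible classical baseline against which a regression language model could be compared, here is a sketch that regresses runtime from the batch-script text using TF-IDF features and gradient boosting (the corpus below is a synthetic placeholder; real labels would come from SLURM accounting, e.g. sacct):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder corpus; in the project these pairs would come from SLURM
# accounting (`sacct`) joined with the submitted batch scripts.
scripts = [
    "#SBATCH --ntasks=1\npython train.py --epochs 1",
    "#SBATCH --ntasks=4\npython train.py --epochs 10",
    "#SBATCH --ntasks=8\npython train.py --epochs 50",
    "#SBATCH --ntasks=2\npython preprocess.py",
] * 8
runtimes = [120.0, 1800.0, 9000.0, 300.0] * 8   # observed runtimes in seconds

X_train, X_test, y_train, y_test = train_test_split(scripts, runtimes, random_state=0)
model = make_pipeline(
    TfidfVectorizer(token_pattern=r"\S+"),   # scripts are code-like: keep flags intact
    GradientBoostingRegressor(),
)
model.fit(X_train, y_train)
print("MAE (s):", mean_absolute_error(y_test, model.predict(X_test)))
```

A regression language model would instead consume the raw script text directly and predict the numeric target, which is what the project would evaluate against baselines of this kind.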
Multivariate State Estimation in Drinking Water Distribution Networks
Supervisors:
Huy Truong,
Andrés Tello,
Alexander Lazovik
Date: 2026-01-09
Type: bachelor-project/master-project/master-internship
Description:
References:
- Graph Neural Networks for Pressure Estimation in Water Distribution Systems
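For orientation, here is a minimal node-level pressure-regression sketch with PyTorch Geometric on a toy graph (the two-layer GCN, feature choices, and sizes are illustrative, not the pre-trained model from the reference):

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy WDN: 4 junctions, undirected pipes encoded as bidirected edges.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])  # shape [2, num_edges]
x = torch.randn(4, 3)   # per-node features (e.g. demand, elevation, head)
y = torch.randn(4, 1)   # target pressures (synthetic placeholder)
data = Data(x=x, edge_index=edge_index, y=y)

class PressureGCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(3, 16)
        self.conv2 = GCNConv(16, 1)   # one regressed pressure value per node

    def forward(self, data):
        h = torch.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = PressureGCN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(data), data.y)
    loss.backward()
    opt.step()
print(float(loss))
```

The multivariate extension of the project would widen the output head to estimate several hydraulic states per node jointly rather than pressure alone.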
Evaluating LoRA for GNN-based model adaptation
Supervisors:
Andrés Tello,
Alexander Lazovik
Date: 2026-01-05
Type: bachelor-project/master-project
Description:
In this project, the student will implement a LoRA-based approach to adapt a pre-trained GNN-based model to new, unseen target datasets in the context of Water Distribution Networks (WDNs). The pre-trained model has been trained on several WDNs for pressure reconstruction, and the goal is to adapt it to make predictions on unseen WDN topologies with different operating conditions. The LoRA-based adaptation will be compared against a conventional full fine-tuning approach.
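To fix ideas, LoRA adapts a frozen weight matrix W through a trainable low-rank correction, W' = W + (α/r)·BA. A minimal PyTorch sketch of such an adapter around a linear layer (rank and scaling values are illustrative):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        # B starts at zero so the adapted layer is initially identical to the base.
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

# Wrapping one layer of a pre-trained model; only A and B receive gradients.
layer = LoRALinear(nn.Linear(64, 64))
trainable = [p for p in layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")  # 1024 vs 4160
```

In the project, such adapters would wrap the linear transformations inside the pre-trained GNN's layers, and the adapted model would be compared against full fine-tuning in both accuracy and trainable-parameter count.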
References:
- LoRA: Low-Rank Adaptation of Large Language Models
- Graph Low-Rank Adapters of High Regularity for Graph Neural Networks and Graph Transformers
- ELoRA: Low-Rank Adaptation for Equivariant GNNs