• LLMs for Assessing Novelty in Scientific Peer Review (Huyen Nguyen)
    The peer review process is a key element in scholarly publication, ensuring the quality of scientific research. A crucial factor in scholarly publishing is the originality/novelty of manuscript submissions. However, the rapid growth of publications makes it increasingly difficult for reviewers to stay aware of the state of the art in any research field. This master’s project aims to study the capability of LLMs to evaluate the novelty of scientific submissions in support of the peer review process.

    The goals are as follows:
    - Design prompts and collect paper reviews generated by different LLMs (e.g., GPT, LLaMA)
    - Compare the LLM-generated reviews and human reviews in terms of novelty assessment
    - Improve LLM-generated reviews using retrieval-augmented generation (RAG) or knowledge graphs (a minimal prompting sketch follows this list)
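
    As a starting point for the first goal, here is a minimal sketch of how one might prompt an LLM for a novelty assessment. The model name, the rubric in the prompt, and the use of the official openai-python client are illustrative assumptions, not a prescribed design.

    ```python
    from openai import OpenAI  # assumes the official openai-python client

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    NOVELTY_PROMPT = """You are a peer reviewer. Assess the novelty of the following
    paper abstract. Answer with: (1) a novelty score from 1 (incremental) to 5
    (groundbreaking), (2) the closest prior work you are aware of, and (3) a short
    justification.

    Abstract:
    {abstract}"""

    def assess_novelty(abstract: str, model: str = "gpt-4o-mini") -> str:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": NOVELTY_PROMPT.format(abstract=abstract)}],
            temperature=0,  # near-deterministic output for easier comparison across models
        )
        return response.choices[0].message.content
    ```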

    Requirements:
    Deep Learning, Python, Linux

    Related Work:
    Kuznetsov, Ilia, et al. "What Can Natural Language Processing Do for Peer Review?", arXiv preprint arXiv:2405.06563 (2024).

    Idahl, Maximilian, and Zahra Ahmadi. "OpenReviewer: A Specialized Large Language Model for Generating Critical Scientific Paper Reviews", NAACL 2025.

    Lin, Ethan, Zhiyuan Peng, and Yi Fang. "Evaluating and enhancing large language models for novelty assessment in scholarly publications", arXiv preprint arXiv:2409.16605 (2024).

    Chu, Zhumin, et al. "Automatic Large Language Model Evaluation via Peer Review", CIKM 2024.

    Project/Thesis language:
    English

    Contact:
    Please contact Huyen Nguyen if you are interested in discussing this topic.
  • Detection of Invasive and Beneficial Native Plants (Dr. Nicolás Navarro)
    In this thesis, you will develop a robust AI-based detection system for identifying and classifying relevant invasive and beneficial native plants. The system will supply data for species monitoring and for mapping the density of invasive and valuable native species.

    The tasks include:
    - Data collection and preparation, including image collection and preprocessing. Data augmentation with transformations for positive pairs and creation of negative pairs.
    - Self-supervised training using contrastive learning, followed by fine-tuning. In the contrastive learning phase, the model learns to pull the representations of positive pairs closer together and push the representations of negative pairs further apart (see the loss sketch after this list). The self-supervised model is later used to iteratively auto-label further unlabeled images. Finally, fine-tuning is performed on a smaller set of manually labeled data.
    - Validation and testing in real-world scenarios, including validation of test data, quantitative assessment, and field tests.
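
    The contrastive phase can be prototyped with an NT-Xent (SimCLR-style) loss. This is a minimal sketch assuming a PyTorch encoder that maps two augmented views of each image to embeddings; the temperature value is a common default, not a fixed choice.

    ```python
    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5):
        """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
        n = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
        sim = z @ z.t() / temperature                        # pairwise cosine similarities
        sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
        # Positives: the i-th view-1 embedding pairs with the i-th view-2 embedding.
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)
    ```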

    Requirements:
    Prior Knowledge:
    - Machine Learning
    - Computer Vision
    - Python, Git, Linux

    Related Work:
    Güldenring, R., Andersen, R. E., & Nalpantidis, L. (2024). Zoom in on the Plant: Fine-Grained Analysis of Leaf, Stem, and Vein Instances. IEEE Robotics and Automation Letters, 9(2), 1588–1595. IEEE Robotics and Automation Letters. https://doi.org/10.1109/LRA.2023.3346807

    Freire, A., de S. Silva, L. H., de Andrade, J. V. R., Azevedo, G. O. A., & Fernandes, B. J. T. (2024). Beyond Clean Data: Exploring the Effects of Label Noise on Object Detection Performance. Knowledge-Based Systems, 304, 112544. https://doi.org/10.1016/j.knosys.2024.112544

    Du, Y., Liu, F., Jiao, L., Hao, Z., Li, S., Liu, X., & Liu, J. (2022). Augmentative Contrastive Learning for One-Shot Object Detection. Neurocomputing, 513, 13–24. https://doi.org/10.1016/j.neucom.2022.09.125

    Project/Thesis language:
    English

    Contact:
    Please contact Dr. Nicolás Navarro if you are interested in discussing this topic.
  • Mobile Robotics: Obstacle Detection, Traversability Assessment, Mapping (Dr. Nicolás Navarro)
    In this thesis, you will develop a software pipeline for AI-based detection, classification, and mapping of relevant objects for task planning and navigation of an outdoor wheeled robot.

    The tasks include:
    - Collection of publicly available datasets used for autonomous vehicles. Augmentation of these datasets with data collected from the robot's perspective using its sensors.
    - Development of an algorithm for detecting and classifying relevant infrastructure elements, particularly road edges, guardrails, and traffic signs (a minimal detection-node sketch follows this list).
    - Assessment of traversability and obstacle detection for robots.
    - Mapping of the detected elements.
    - Validation and testing in real-world scenarios, including validation of test data, quantitative assessment, and field tests.
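
    To make the pipeline concrete, here is a minimal sketch of a ROS node that runs an off-the-shelf detector on camera images. The topic name and the COCO-pretrained torchvision model are assumptions for illustration; the thesis would substitute a model fine-tuned on infrastructure classes and the robot's actual sensor topics.

    ```python
    import rospy
    import torch
    import torchvision
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    bridge = CvBridge()
    # COCO-pretrained detector as a stand-in; fine-tune on road edges, guardrails, signs later.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    def on_image(msg: Image):
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="rgb8")
        tensor = torch.from_numpy(frame).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            detections = model([tensor])[0]  # dict with boxes, labels, scores
        keep = detections["scores"] > 0.5
        rospy.loginfo("detected %d objects", int(keep.sum()))

    rospy.init_node("infrastructure_detector")
    rospy.Subscriber("/camera/image_raw", Image, on_image, queue_size=1)
    rospy.spin()
    ```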

    Requirements:
    Prior Knowledge:
    - Machine Learning
    - Computer Vision
    - SLAM
    - ROS, Python, Git, Linux

    Project/Thesis language:
    English

    Contact:
    Please contact Dr. Nicolás Navarro if you are interested in discussing this topic.
  • Enhancement of Post-hoc Explanation Techniques Using LLMs (Dr. Marco Fisichella)
    This project will investigate the potential of Large Language Models (LLMs) to advance the AI explainability pipeline, with a focus on two key objectives:

    1. Evaluation of LLMs as Explanation Generators
    We will evaluate LLMs’ ability to generate post-hoc explanations using a range of metrics, including faithfulness, plausibility, stability, and coherence. The study will benchmark LLM-generated explanations across multiple models and datasets.

    2. Improving Traditional Explainers with LLM Knowledge in Low-Resource Settings
    Leveraging LLM-generated explanations and evaluations, we aim to enhance local explainers such as LIME. This includes exploring how LLM feedback can guide the customization of LIME’s mechanisms to generate more reliable explanations, particularly in low-resource environments.

    The project aims to deepen understanding of LLMs’ explanatory capabilities and to explore how they can be used to improve existing explanation tools, with applications in high-stakes domains such as healthcare.

    Example scenarios

    1. LLMs for Generating More Readable Explanations
    - LIME provides numerical explanations based on feature weights.
    - LLMs can transform these into more understandable textual explanations.

    Example:
    - LIME: "Feature X has a weight of 0.8, so it’s important."
    - LLM: "A high value of X suggests an increased probability of disease Y."
    ✅ Benefit: Improves interpretability for non-experts.
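
    This first scenario can be prototyped in a few lines. The sketch below assumes a fitted scikit-learn classifier on tabular data and the lime package; the rephrasing prompt and the rephrase_with_llm helper are hypothetical placeholders for any chat-completion API.

    ```python
    from lime.lime_tabular import LimeTabularExplainer

    # X_train, X_test, feature_names, class_names, clf are assumed to exist
    # (a standard scikit-learn setup).
    explainer = LimeTabularExplainer(
        X_train, feature_names=feature_names, class_names=class_names, mode="classification"
    )
    exp = explainer.explain_instance(X_test[0], clf.predict_proba, num_features=5)
    weighted_features = exp.as_list()  # e.g., [("X > 3.2", 0.8), ...]

    prompt = (
        "Rewrite these LIME feature weights as a short, plain-language explanation "
        f"for a non-expert:\n{weighted_features}"
    )
    # rephrase_with_llm is a hypothetical wrapper around any chat-completion API.
    print(rephrase_with_llm(prompt))
    ```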

    2. LLMs to Optimize Superpixel Selection (Images) or Perturbed Features (Text/Tabular Data)
    - A challenge with LIME is that feature selection for perturbation is random.
    - LLMs can suggest which features should be perturbed more, based on domain knowledge.

    Example:
    - In an X-ray image, instead of segmenting randomly, an LLM can guide LIME to focus on lung regions when analyzing pneumonia.
    ✅ Benefit: Makes perturbations more meaningful and reliable.

    3. LLMs to Filter and Improve LIME’s Explanations
    - LIME can sometimes produce incoherent or misleading explanations.
    - An LLM can evaluate and refine these explanations, improving stability and plausibility.

    Example:
    If LIME incorrectly attributes a disease to an irrelevant factor, the LLM can correct or rephrase the explanation.
    ✅ Benefit: Increases trustworthiness and consistency.

    4. LLMs to Reduce Data Dependence (Low-Resource Settings)
    - In data-scarce environments, LIME’s performance suffers because it lacks sufficient samples to estimate feature importance reliably.
    - Pre-trained LLMs can compensate for missing data by suggesting relevant features or improving the perturbation process.
    ✅ Benefit: Makes LIME more robust even with limited data.

    Requirements:
    - Proficiency in Python and relevant libraries such as scikit-learn, TensorFlow, PyTorch, and Hugging Face for working with machine learning models and LLMs.

    Related Work:
    - Ribeiro, M.T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. https://arxiv.org/abs/1602.04938

    Project/Thesis language:
    English

    Contact:
    Please contact Dr. Marco Fisichella if you are interested in discussing this topic.
  • Development of an Artificial Intelligence system for converting images into tactile representations (Prof. Dr. techn. Wolfgang Nejdl)
    The tactile interpretation of images requires the transformation of two-dimensional visual information into three-dimensional representations accessible through touch. This conversion presents significant challenges, particularly in the preservation of crucial expressive elements that in visual reality are highlighted through light and shadow.

    The project focuses on case studies of pictorial art, for example works by Caravaggio, where chiaroscuro creates a true 'tactile map' through:
    - Anatomies highlighted by shadows (muscles, ribs, veins)
    - Movement-defining fabric folds
    - Depth of environments
    - Textures of objects (fruit, fabrics, metals)

    The objective of this thesis is to develop an AI system that overcomes current limitations in the automatic conversion of images into tactile representations, focusing on:
    - Recognition of elements that in visual reality are highlighted by light and shadow
    - Translation of these elements into appropriate depth maps (roughly sketched below)
    - Preservation of key narrative elements
    - Optimisation of high relief rendering
    The project is being developed in parallel with tactile image production experts who will test the effectiveness of the developed solutions.
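
    To illustrate the depth-map step, here is a minimal sketch that runs an off-the-shelf monocular depth estimator and converts its output into a height field for relief rendering. MiDaS via torch.hub is one possible choice, and the 3 mm maximum relief height is an arbitrary assumption.

    ```python
    import cv2
    import torch

    # Monocular depth estimation with MiDaS (small variant) via torch.hub.
    midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").eval()
    transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

    img = cv2.cvtColor(cv2.imread("painting.jpg"), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        depth = midas(transform(img)).squeeze().numpy()  # relative inverse depth map

    # Normalize to [0, 1] and scale to a printable relief height in millimetres.
    relief = (depth - depth.min()) / (depth.max() - depth.min()) * 3.0
    ```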

    The project offers the opportunity to:
    - Work on a concrete problem of cultural accessibility
    - Develop an innovative image interpretation system
    - Collaborate with experts in the field of tactile representations
    - Contribute to innovation in art accessibility

    Requirements:
    - Image processing
    - Interest in accessibility and art
    - Ability to interpret visual elements from a tactile perspective

    Project/Thesis language:
    English

    Contact:
    Please contact Prof. Dr. techn. Wolfgang Nejdl if you are interested in discussing this topic.
  • Digital Transformation in Medicine (Prof. Dr. techn. Wolfgang Nejdl)
    The Else-Kröner Graduate Program "Digital Transformation in Medicine" (DigiStrucMed) is a collaborative initiative of Hannover Medical School, the Technical University of Braunschweig, the University of Applied Sciences and Arts Hanover, and Leibniz University Hanover. Its goal is to support interdisciplinary training for students in medicine (doctoral candidates) and computer science (Master’s students working on their theses). The structured program is funded with €900,000 for an additional three-year period by the Else Kröner-Fresenius Foundation, enabling students from both disciplines to conduct joint research in digital transformation in medicine through project tandems.

    For the 5th cohort in the 2025/26 program year, DigiStrucMed medical students will begin on July 1 or August 1, 2025. Master’s students may start their projects, in coordination with project supervisors, between June 1, 2025, and February 1, 2026. Further information about DigiStrucMed can be found on the program homepage: https://www.mhh.de/hbrs/digistrucmed.

    Project/Thesis language:
    English

    Contact:
    Please contact Prof. Dr. techn. Wolfgang Nejdl if you are interested in discussing this topic.
  • AI meets quality assurance in biobanking (Prof. Dr. techn. Wolfgang Nejdl)
    AI-supported evaluation of scientific publications about the influence of pre-analytical factors (such as temperature or time) on biosamples (e.g., blood) as the basis for a knowledge database for pre-analytical variability and biosample quality (ProvideQ).

    During the entire pre-analytical process chain from sample collection to storage, biomaterials are particularly vulnerable, and sample quality can be massively impaired by unfavorable pre-analytical factors. For example, high or low temperatures and long storage or transportation times can change the molecular profile of the samples in such a way that they only reflect the original physiological state to a limited extent. Different pre-analytical factors have different effects on different classes of molecules (e.g., RNA, proteins or metabolites), and sometimes the influence even differs between individual molecules of the same class.

    Although the effect of various pre-analytical influencing factors on biosamples has already been examined in numerous studies, obtaining an overview of these influences, and answering, for example, the question of the extent to which a biospecimen can still be used for certain analyses (fit for purpose), still requires extensive literature research.

    This problem led to the development of a knowledge database (ProvideQ) based on the scientific literature. ProvideQ maps pre-analytical influences to analytes, initially focusing on blood samples and their metabolites: users can check which pre-analytical factors affect the stability of the metabolites of a plasma or serum sample, and they receive literature references for their query as well as information on whether the metabolites in question can be classified as stable under the entered pre-analytical conditions or whether their concentration changes. The objective of this Master's project is to develop an AI that autonomously analyzes the given literature (PDF format), evaluates it, and translates the results into the corresponding fields of the database (a minimal extraction sketch follows below).
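
    A natural starting point is to extract text from the PDFs and prompt an LLM for structured fields. The following is a rough sketch; the field names (analyte, pre-analytical factor, condition, stability) are illustrative guesses at the ProvideQ schema, and ask_llm stands in for any chat-completion API.

    ```python
    import json
    from pypdf import PdfReader  # assumes the pypdf package

    def extract_text(path: str) -> str:
        return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)

    EXTRACTION_PROMPT = """From the study text below, list every reported finding as JSON
    objects with the fields: analyte, pre_analytical_factor, condition (e.g., "24 h at
    room temperature"), and stability ("stable" | "increased" | "decreased").

    Text:
    {text}"""

    def analyze_paper(path: str) -> list[dict]:
        text = extract_text(path)
        answer = ask_llm(EXTRACTION_PROMPT.format(text=text[:20000]))  # hypothetical LLM call
        return json.loads(answer)  # rows ready to be mapped onto the database fields
    ```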

    Project/Thesis language:
    English

    Contact:
    Please contact Prof. Dr. techn. Wolfgang Nejdl if you are interested in discussing this topic.
  • Optimizing Robot Interaction using Image-based Tactile Sensors (Wadhah Zai El Amri)
    Tactile sensing presents a promising opportunity for enhancing the interaction capabilities of today’s robots. DIGIT [1] is a commonly used tactile sensor that enables robots to perceive and respond to physical tactile stimuli.

    In situations where the sensor is unavailable or where repeating experiments is costly, the value of a reliable, real-time simulation becomes evident. Such a simulation can effectively estimate sensor outputs for various touch scenarios and would offer a good alternative to gathering data in different setups and environments.

    While several studies have introduced simulations for the DIGIT sensor, such as TACTO [2] and Taxim [3], they predominantly rely on rigid-body simulations and overlook the crucial soft-body aspect of the sensor's gel tip. This oversight diminishes the accuracy of the simulations and their ability to faithfully replicate real-world tactile interactions.


    Goals of the thesis
    - Collect real image outputs and simulated sensor surface deformations.
    - Train a machine learning algorithm to generate output images from the DIGIT surface deformation mesh (see the sketch after this list).
    - Perform manipulation/grasping tasks with a real UR5 robot to assess and evaluate the performance of the trained algorithm.
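
    A possible starting point for the second goal is a small convolutional decoder that maps a flattened deformation mesh to an RGB sensor image. This is a rough sketch under assumed tensor shapes (a 20×30 vertex mesh with 3D offsets, 240×320 output images); the real dimensions follow from the DIGIT data.

    ```python
    import torch
    import torch.nn as nn

    class MeshToImage(nn.Module):
        """Decode a flattened deformation mesh (N, 20*30*3) into an RGB image (N, 3, 240, 320)."""
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(20 * 30 * 3, 128 * 15 * 20)
            self.deconv = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 30x40
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 60x80
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),    # -> 120x160
                nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # -> 240x320
            )

        def forward(self, mesh):
            x = self.fc(mesh).view(-1, 128, 15, 20)
            return self.deconv(x)

    model = MeshToImage()
    loss_fn = nn.MSELoss()  # pixel-wise loss against real DIGIT images as a simple baseline
    ```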

    Requirements:
    Prior Knowledge or interest
    - Machine Learning
    - Robot Operating System (ROS)
    - Python
    - Linux

    Related Work:
    [1]: M. Lambeta, P.-W. Chou, S. Tian, B. Yang, B. Maloon, V. R. Most, D. Stroud, R. Santos, A. Byagowi, G. Kammerer, D. Jayaraman, and R. Calandra, “DIGIT: A Novel Design for a Low-Cost Compact High-Resolution Tactile Sensor With Application to In-Hand Manipulation,” IEEE Robotics and Automation Letters, vol. 5, no. 3, pp. 3838–3845, 2020.
    [2]: S. Wang, M. Lambeta, P.-W. Chou, and R. Calandra, “TACTO: A Fast, Flexible, and Open-Source Simulator for High-Resolution Vision-Based Tactile Sensors,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 3930–3937, 2022.
    [3]: Z. Si and W. Yuan, “Taxim: An example-based simulation model for gelsight tactile sensors,” IEEE Robotics and Automation Letters, 2022.

    Project/Thesis language:
    English

    Contact:
    Please contact Wadhah Zai El Amri if you are interested in discussing this topic.
  • Multi-objective Optimization for Robotic Gripper Design (Dr. Nicolás Navarro)
    The human hand is often used as the gold standard or goal for robotic manipulation, and depictions of sophisticated robots and AIs often feature anthropomorphic hands and haptic perception. However, despite our fascination with anthropomorphic robotic hands, the trend in industrial applications and robotic challenges is to use simpler designs consisting of parallel grippers or suction cups. This trend is not necessarily due only to the complexity of matching the human hand's dexterity, robustness and perceptual capabilities, but also to the reality that anthropomorphic hands may not be required to achieve a human skill level. For instance, the winner of the Amazon Picking Challenge used an end effector based on a suction system. In the DARPA Robotics Challenge, 15 of 25 teams used an underactuated hand with three or four fingers, while none of the remaining 10 teams used a fully actuated anthropomorphic hand (Piazza et al., 2019). Even in the Cybathlon, the winner of the Powered Arm Prosthesis Race used a body-powered hook (Piazza et al., 2019).

    In addition, the synergistic combination of all three subsystems, namely the mechanical design, perception, and control, might be more critical than an anthropomorphic robotic hand for matching and potentially surpassing human dexterous manipulation capabilities.

    This prompts the question of whether we need anthropomorphic robotic hands. This thesis aims to answer this question from a multi-objective optimization point of view.

    Goals of the thesis
    Implement a Genetic Algorithm or another multi-objective optimization algorithm to develop a general-purpose robotic manipulator. The manipulator could be based on the human hand, ideally reducing complexity while keeping most of its dexterity (a multi-objective optimization sketch follows below).
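
    As a concrete starting point, here is a minimal sketch of a two-objective optimization with NSGA-II using the pymoo library. The design variables and both objective functions are toy placeholders (actuator count vs. a fictitious dexterity proxy); the real thesis would evaluate candidate grippers in a robot simulator.

    ```python
    import numpy as np
    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.core.problem import ElementwiseProblem
    from pymoo.optimize import minimize

    class GripperProblem(ElementwiseProblem):
        """Toy trade-off: mechanical complexity vs. (negated) dexterity proxy."""
        def __init__(self):
            # x[0]: number of fingers (2-5), x[1]: joints per finger (1-4)
            super().__init__(n_var=2, n_obj=2, xl=[2, 1], xu=[5, 4])

        def _evaluate(self, x, out, *args, **kwargs):
            fingers, joints = np.round(x).astype(int)
            complexity = fingers * joints                  # minimize actuator count
            dexterity = np.log1p(fingers * joints ** 1.5)  # placeholder; from simulation later
            out["F"] = [complexity, -dexterity]            # pymoo minimizes both objectives

    res = minimize(GripperProblem(), NSGA2(pop_size=40), ("n_gen", 50), seed=1, verbose=False)
    print(res.X)  # Pareto-optimal designs: complexity vs. dexterity trade-offs
    ```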

    Requirements:
    Prior Knowledge or interest
    - Machine Learning
    - Robot Simulation
    - Python, Git, Linux

    Related Work:
    Cheney, N., MacCurdy, R., Clune, J., & Lipson, H. (2013). Unshackling Evolution: Evolving Soft Robots with Multiple Materials and a Powerful Generative Encoding. Annual Conference on Genetic and Evolutionary Computation (GECCO), 167–174. https://doi.org/10.1145/2463372.2463404
    Coevoet, E., Morales-Bieze, T., Largilliere, F., Zhang, Z., Thieffry, M., Sanz-Lopez, M., Carrez, B., Marchal, D., Goury, O., Dequidt, J., & Duriez, C. (2017). Software Toolkit for Modeling, Simulation, and Control of Soft Robots. Advanced Robotics, 31(22), 1208–1224. https://doi.org/10.1080/01691864.2017.1395362
    Faure, F., Duriez, C., Delingette, H., Allard, J., Gilles, B., Marchesseau, S., Talbot, H., Courtecuisse, H., Bousquet, G., Peterlik, I., & Cotin, S. (2012). SOFA: A Multi-Model Framework for Interactive Physical Simulation. In Y. Payan (Ed.), Soft Tissue Biomechanical Modeling for Computer Assisted Surgery (pp. 283–321). Springer. https://doi.org/10.1007/8415_2012_125
    Piazza, C., Grioli, G., Catalano, M. G., & Bicchi, A. (2019). A Century of Robotic Hands. Annual Review of Control, Robotics, and Autonomous Systems, 2(1), 1–32. https://doi.org/10.1146/annurev-control-060117-105003

    Project/Thesis language:
    English

    Contact:
    Please contact Dr. Nicolás Navarro if you are interested in discussing this topic.
  • Exploring ChatGPT for Drone Applications: Prompts, Dialogues, and Task Adaptability (Dr. Marco Fisichella)
    This thesis aims to explore the application of OpenAI's ChatGPT in the domain of drones. The objective is to investigate the feasibility and effectiveness of leveraging ChatGPT for various drone tasks in both real and simulated environments. The thesis proposes a comprehensive strategy that combines prompt engineering principles and software engineering skills to facilitate ChatGPT's adaptation to different drone tasks.

    The study will evaluate the efficacy of different prompt engineering techniques, dialogue strategies, and software architectural decisions in the context of executing drone-related tasks. Special emphasis will be placed on ChatGPT's capabilities in utilizing free-form dialogue, code synthesis, task-specific prompting functions, and closed-loop reasoning through dialogues.

    The thesis will cover a wide range of drone tasks, including aerial navigation, object detection, and path planning. The primary objective is to ascertain whether ChatGPT can effectively solve these tasks while enabling users to interact predominantly through natural language instructions.
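
    One common way to realize task-specific prompting is to expose a small, well-documented robot API to the model and ask it to respond with code against that API. In the sketch below, the drone API functions are hypothetical placeholders (a simulator backend or a real drone SDK would implement them), and the model name and openai-python client are assumptions.

    ```python
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = """You control a drone through this Python API (and nothing else):
    takeoff(), land(), move_to(x, y, z), rotate(yaw_degrees), get_position() -> (x, y, z).
    Answer every request with Python code only."""

    def ask_drone_task(task: str, model: str = "gpt-4o-mini") -> str:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": task},
            ],
        )
        return response.choices[0].message.content  # generated code, to review before execution

    print(ask_drone_task("Fly a 2 m square at 1.5 m altitude and land where you started."))
    ```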

    Throughout the research, both real-world drone experiments and simulated scenarios will be conducted to assess the performance and adaptability of ChatGPT. Software engineering best practices will be employed to ensure the robustness and scalability of the system.

    The anticipated outcome of this thesis is to demonstrate the potential of ChatGPT as a valuable tool for drone applications, showcasing its ability to enhance user interactions and task execution through natural language instructions.

    Keywords: ChatGPT, drones, prompt engineering, natural language interaction, aerial navigation, object detection, path planning, simulated environment, software architecture.

    Project/Thesis language:
    English

    Contact:
    Please contact Dr. Marco Fisichella if you are interested in discussing this topic.
  • Analysis of Propagation of Vibrations and Body-Borne Sound in a Robotic Hand (Dr. Nicolás Navarro)
    Haptic sensors span a broad range of technologies. The main focus of these sensors is to increase the recognition accuracy of both textures and the location of contact points. However, such sensors are mechanically fragile and, to increase accuracy, mounted externally on robotic systems, limiting their use to applications that are kind to the sensors. For use in harsh applications, or as a complement to those existing sensors, this project aims to develop a machine learning-oriented solution capable of using body-borne vibrations to classify objects' texture and the location of haptic interaction. This strategy allows mounting the sensors inside the robot, protected from external perturbations. Although this approach is not as accurate as other technologies, it promises to enable a degree of haptic perception wherever the robot's outer shell (and electronics) can withstand the interaction. The technology has been validated in applications of multimodal object recognition, e.g., by Bonner et al. (2021) and Toprak et al. (2018). Follow-up steps include developing algorithms to localize multiple points of contact between the robot and external objects.

    Goal
    * Systematically collect sound and vibration data from a robotic hand
    * Determine the ideal placement of sensors and the sampling rate
    * Perform sound-source localization of the vibration or sound within the robotic hand (a cross-correlation sketch follows this list)
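
    A classical baseline for the localization goal is time-difference-of-arrival (TDOA) estimation via cross-correlation between sensor channels. This is a minimal sketch assuming two synchronized vibration channels sampled at a common rate; multi-sensor triangulation would build on these pairwise delays.

    ```python
    import numpy as np
    from scipy.signal import correlate, correlation_lags

    def estimate_delay(sig_a: np.ndarray, sig_b: np.ndarray, fs: float) -> float:
        """Return the delay (in seconds) of sig_b relative to sig_a via cross-correlation."""
        corr = correlate(sig_a, sig_b, mode="full")
        lags = correlation_lags(len(sig_a), len(sig_b), mode="full")
        return lags[np.argmax(corr)] / fs

    # Given a known wave propagation speed in the hand's structure, the pairwise
    # delays between sensors constrain the contact location (hyperbolic positioning).
    ```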

    Requirements:
    Prior Knowledge or interest
    * Machine Learning
    * Python, PyTorch, Git, Linux
    * Signal processing

    Related Work:
    Navarro-Guerrero, N., Toprak, S., Josifovski, J., & Jamone, L. (2023). Visuo-Haptic Object Perception for Robots: An Overview. Autonomous Robots, 27. https://link.springer.com/article/10.1007/s10514-023-10091-y

    Bonner, L. E. R., Buhl, D. D., Kristensen, K., & Navarro-Guerrero, N. (2021). AU Dataset for Visuo-Haptic Object Recognition for Robots. figshare. https://doi.org/10.6084/m9.figshare.14222486

    Toprak, S., Navarro-Guerrero, N., & Wermter, S. (2018). Evaluating Integration Strategies for Visuo-Haptic Object Recognition. Cognitive Computation, 10(3), 408–425. https://doi.org/10.1007/s12559-017-9536-7

    Project/Thesis language:
    English

    Contact:
    Please contact Dr. Nicolás Navarro if you are interested in discussing this topic.
  • Development of algorithm for tactile perception for vibration-based sensors (Dr. Nicolás Navarro)
    Multimodal object recognition is still an emerging and active field of research. Haptic sensors span a broad range of technologies. The main focus of these sensors is to increase the recognition accuracy of both textures and the location of contact points. However, such sensors are mechanically fragile and, to increase accuracy, mounted externally on robotic systems, limiting their use to applications that are kind to the sensors. For use in harsh applications, or as a complement to those existing sensors, this project aims to develop a machine learning-oriented solution capable of using body-borne vibrations to classify objects' texture and the location of haptic interaction. This strategy allows mounting the sensors inside the robot, protected from external perturbations. Although this approach is not as accurate as other technologies, it promises to enable a degree of haptic perception wherever the robot's outer shell (and electronics) can withstand the interaction. The technology has been validated in applications of multimodal object recognition, e.g., by Bonner et al. (2021) and Toprak et al. (2018). Follow-up steps include developing algorithms to localize multiple points of contact between the robot and external objects.

    Goal
    - Creation of a haptic dataset
    - Optimization of machine learning algorithms for (multi-)stimuli localization (see the classifier sketch after this list)
    - Determining the lower boundary of the number of sensors and their placement
    - Determining the lower boundary of the sensors' sampling rate
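
    For the localization objective, a simple learning-based baseline is a small CNN that classifies the contact location from a spectrogram of the vibration signal. This is a rough sketch with assumed dimensions (one channel, 64×64 spectrograms, 10 candidate contact locations); the real values depend on the sensors and the hand geometry.

    ```python
    import torch
    import torch.nn as nn

    class ContactLocator(nn.Module):
        """Classify the contact location from a (N, 1, 64, 64) vibration spectrogram."""
        def __init__(self, n_locations: int = 10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 32x32
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 16x16
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, n_locations),
            )

        def forward(self, spectrogram):
            return self.net(spectrogram)

    model = ContactLocator()
    logits = model(torch.randn(8, 1, 64, 64))  # dummy batch; train with cross-entropy
    ```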

    Requirements:
    Prior Knowledge in the following areas would be helpful:
    * Machine Learning
    * PyTorch, Git, Linux
    * Signal processing

    Related Work:
    Navarro-Guerrero, N., Toprak, S., Josifovski, J., & Jamone, L. (2023). Visuo-Haptic Object Perception for Robots: An Overview. Autonomous Robots, 27. https://link.springer.com/article/10.1007/s10514-023-10091-y

    Bonner, L. E. R., Buhl, D. D., Kristensen, K., & Navarro-Guerrero, N. (2021). AU Dataset for Visuo-Haptic Object Recognition for Robots. figshare. https://doi.org/10.6084/m9.figshare.14222486

    Toprak, S., Navarro-Guerrero, N., & Wermter, S. (2018). Evaluating Integration Strategies for Visuo-Haptic Object Recognition. Cognitive Computation, 10(3), 408–425. https://doi.org/10.1007/s12559-017-9536-7

    Project/Thesis language:
    English

    Contact:
    Please contact Dr. Nicolás Navarro if you are interested in discussing this topic.
  • Measuring Feedback's Impact in Interactive Reinforcement Learning (Dr. Nicolás Navarro)
    As multiple studies have shown, providing feedback to autonomous learning agents can speed up learning. However, quantifying and characterizing how different aspects of feedback, such as its quantity, quality, and temporal and spatial misalignments, affect learning speed, performance, and other relevant metrics is still an open question. This question not only addresses theoretical aspects of the learning algorithms but is also very relevant for application in real systems: although feedback might be beneficial, (human) feedback is also expensive and adds complexity to the system. Thus, it is essential to know the minimal requirements the (human) feedback must meet for the gains in performance, learning speed, etc., to be worth the added complexity. Hence, this project presents a series of questions that can be addressed independently to achieve a deeper understanding of the role of feedback in a learning system's performance.

    Several assumptions and simplifications can be made to facilitate the study of these questions. These include the use of binary and low-dimensional feedback, simulated environments, and the use of autonomous teachers. Moreover, this project will be studied in a robot reaching task for a KUKA LBR iiwa, a robotic arm with 7 degrees of freedom (DoF). Configurations from 1 to 7 DoF will be used to study feedback effects at different levels of task complexity.

    This project will use artificial feedback and primarily be studied in simulated environments. Eventually, once a better understanding of the effects of feedback is obtained, experiments with real users will be carried out. Several thesis directions are possible, which will be discussed with the candidates. These include:

    - Quantifying the Effect of Feedback Accuracy on IRL Performance
    - Quantifying the Effect of Feedback Quantity on IRL Performance (e.g., binary, scalar, or vector feedback)
    - Quantifying the Effect of the Feedback Budget in Interactive Reinforcement Learning (e.g., early, uniform, late)
    - Quantifying the Effect of Time-Delayed Feedback on IRL Performance
    - Policy Shaping in IRL for Dynamical Systems using Binary and Other Low-Dimensional Feedback (a toy feedback-shaping sketch follows this list)
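
    To make the experimental setup concrete, here is a minimal toy sketch of interactive reinforcement learning with a simulated teacher whose feedback accuracy and budget are controllable parameters. The 1-D reaching task, the shaping weight, and all hyperparameters are illustrative assumptions; the thesis would use the KUKA simulation instead.

    ```python
    import numpy as np

    N_STATES, GOAL = 10, 9  # 1-D "reaching" toy task: move right to reach the goal

    def step(s, a):  # a in {0: left, 1: right}
        s2 = max(0, min(N_STATES - 1, s + (1 if a else -1)))
        return s2, 1.0 if s2 == GOAL else 0.0, s2 == GOAL

    def teacher_feedback(a, accuracy):
        good = (a == 1)                   # oracle: moving right is always correct here
        if np.random.rand() > accuracy:   # imperfect teacher flips its judgment
            good = not good
        return 1.0 if good else -1.0

    def run(accuracy=0.9, feedback_prob=0.3, episodes=200, alpha=0.1, gamma=0.95, eps=0.1):
        Q = np.zeros((N_STATES, 2))
        steps = []
        for _ in range(episodes):
            s, done, t = 0, False, 0
            while not done and t < 100:
                a = np.random.randint(2) if np.random.rand() < eps else int(np.argmax(Q[s]))
                s2, r, done = step(s, a)
                if np.random.rand() < feedback_prob:          # limited feedback budget
                    r += 0.5 * teacher_feedback(a, accuracy)  # additive reward shaping
                target = r + (0.0 if done else gamma * np.max(Q[s2]))
                Q[s, a] += alpha * (target - Q[s, a])
                s, t = s2, t + 1
            steps.append(t)
        return np.mean(steps[-50:])  # average steps-to-goal over the last 50 episodes

    for acc in (0.6, 0.8, 1.0):
        print(f"teacher accuracy {acc}: {run(accuracy=acc):.1f} steps")
    ```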

    Requirements:
    Prior Knowledge or interest in
    - Reinforcement Learning and Machine Learning
    - Human-Robot Interaction
    - Python, LaTeX, Git, Linux

    Related Work:
    Harnack, D., Pivin-Bachler, J., & Navarro-Guerrero, N. (2022). Quantifying the Effect of Feedback Frequency in Interactive Reinforcement Learning for Robotic Tasks. Neural Computing and Applications. Special Issue on Human-aligned Reinforcement Learning for Autonomous Agents and Robots. https://doi.org/10.1007/s00521-022-07949-0

    Stahlhut, C., Navarro-Guerrero, N., Weber, C., & Wermter, S. (2015). Interaction in Reinforcement Learning Reduces the Need for Finely Tuned Hyperparameters in Complex Tasks. Kognitive Systeme, 3(2). https://doi.org/10.17185/duepublico/40718

    Project/Thesis language:
    English

    Contact:
    Please contact Dr. Nicolás Navarro if you are interested in discussing this topic.
  • Graph Neural Networks for Semantic Table Interpretation (Simon Gottschalk)
    Semantic Table Interpretation (STI) is the task of understanding the concepts in a table. This includes (i) cell-entity linking, where objects mentioned in table cells are linked to resources such as Wikipedia, (ii) column-type annotation, where the type of objects in a column is identified, and (iii) column-property annotation, which identifies the relation between two columns [1]. One approach to performing these three tasks is the use of graph neural networks (GNNs), where the table, its columns, rows and cells are represented as a graph [2]. By training on a large corpus, the GNN performs the three tasks through node and edge classification [3].

    The goal of this thesis is to develop a GNN that performs the three tasks jointly and to demonstrate that this joint training leads to more accurate annotations than performing these tasks in isolation.

    As a basis for this master's thesis, we already have a method for table-to-graph conversion, a basic GNN for STI, and an evaluation pipeline, which will need to be extended to demonstrate the effect of joint training and its superiority over existing approaches.

    Your goals are as follows:
    - Understand STI and GNNs
    - Extend a GNN (implemented in PyTorch Geometric) for STI (a minimal sketch follows this list)
    - Conduct experiments to evaluate the effectiveness of the GNN
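
    For orientation, here is a minimal sketch of the graph representation and a two-layer GCN in PyTorch Geometric. The tiny hand-built graph (two column nodes, four cell nodes), the feature dimension, and the number of type classes are illustrative assumptions; the existing table-to-graph conversion would produce the real graphs.

    ```python
    import torch
    import torch.nn.functional as F
    from torch_geometric.data import Data
    from torch_geometric.nn import GCNConv

    # Toy table graph: nodes 0-1 are columns, nodes 2-5 are cells; edges link cells to columns.
    edge_index = torch.tensor([[2, 3, 4, 5, 0, 0, 1, 1],
                               [0, 0, 1, 1, 2, 3, 4, 5]])  # undirected via both directions
    x = torch.randn(6, 32)   # placeholder node features (e.g., text embeddings of cell contents)
    y = torch.tensor([0, 1])  # column-type labels for the two column nodes
    data = Data(x=x, edge_index=edge_index, y=y)

    class ColumnTypeGCN(torch.nn.Module):
        def __init__(self, in_dim=32, hidden=64, n_types=8):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden)
            self.conv2 = GCNConv(hidden, n_types)

        def forward(self, data):
            h = F.relu(self.conv1(data.x, data.edge_index))
            return self.conv2(h, data.edge_index)

    logits = ColumnTypeGCN()(data)[:2]      # classify only the column nodes
    loss = F.cross_entropy(logits, data.y)  # joint training would add cell and edge losses
    ```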

    Requirements:
    Ideally, you have experience in:
    - Python (mandatory)
    -- PyTorch and PyTorch Geometric
    - Machine learning
    -- Graph Neural Networks
    - Linux
    - Knowledge Graphs

    Related Work:
    [1] Jiménez-Ruiz, E., Hassanzadeh, O., Efthymiou, V., Chen, J., Srinivas, K.: SemTab 2019: Resources to Benchmark Tabular Data to Knowledge Graph Matching Systems. In: The Semantic Web: 17th International Conference, ESWC 2020, Heraklion, Crete, Greece, May 31–June 4, 2020, Proceedings 17. pp. 514–530. Springer (2020)
    [2] Pramanick, A., Bhattacharya, I.: Joint Learning of Representations for Web-tables, Entities and Types using Graph Convolutional Network. In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. pp. 1197–1206 (2021)
    [3] Wang, D., Shiralkar, P., Lockard, C., Huang, B., Dong, X.L., Jiang, M.: TCN: Table Convolutional Network for Web Table Interpretation. In: Proceedings of the Web Conference 2021. pp. 4020–4032 (2021)

    Project/Thesis language:
    English

    Contact:
    Please contact Simon Gottschalk if you are interested in discussing this topic.