Research
Past Research
1. Deep Learning of Dyadic Interaction Visual Cues for Human-Robot Collaboration (Link to Thesis)
Context: PhD Thesis (2024)
Objective: Understanding how human interaction cues can be leveraged for collaborative robotic systems.
Outcome: Developed novel deep learning models for human intent recognition in assembly tasks.
2. Conformal Prediction and Uncertainty Estimation in ML Models
Objective: Investigating conformal prediction techniques for more reliable model uncertainty estimation.
Approach: Applied conformal methods to deep learning and statistical ML tasks.
Outcome: Conceptual findings with potential applications in real-world deployment.
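To give a flavour of the technique, the following is a minimal split conformal prediction sketch on a synthetic regression problem (toy data and model chosen for illustration; this is not the project's actual code):

```python
import numpy as np

def split_conformal_half_width(cal_residuals, alpha=0.1):
    """Prediction-interval half-width from held-out calibration residuals.

    Uses the standard split conformal quantile with the (n + 1)
    finite-sample correction, giving >= 1 - alpha marginal coverage.
    """
    n = len(cal_residuals)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_residuals, min(q_level, 1.0))

# Toy data: y = 2x + noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 2.0 * x + rng.normal(0, 1, 500)

# "Model": a least-squares slope fit on the first half of the data.
train, cal = slice(0, 250), slice(250, 500)
slope = np.sum(x[train] * y[train]) / np.sum(x[train] ** 2)

# Calibrate on held-out absolute residuals, then form intervals.
residuals = np.abs(y[cal] - slope * x[cal])
half_width = split_conformal_half_width(residuals, alpha=0.1)
x_new = np.array([1.0, 5.0, 9.0])
intervals = [(slope * xi - half_width, slope * xi + half_width) for xi in x_new]
```

The appeal of the split variant is that it wraps any point predictor without retraining: only a held-out calibration set and a quantile computation are needed.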
Journal Articles
QUB-PHEO: A Visual-Based Dyadic Multi-View Dataset for Intention Inference in Collaborative Assembly
Authors: Samuel Adebayo, Seán F. McLoone, Joost C. Dessing
Journal: IEEE Access, Year: 2024
Abstract: This study introduces QUB-PHEO, a visual-based, dyadic dataset designed to advance human-robot interaction research in assembly operations and intention inference. The dataset captures rich multimodal interactions between two participants across various assembly tasks, providing a valuable resource for developing and evaluating intention inference models.
View Publication
SLYKLatent: A Learning Framework for Gaze Estimation Using Deep Facial Feature Learning
Authors: Samuel Adebayo, Joost C. Dessing, Seán F. McLoone
Journal: In review with IEEE Transactions on Human-Machine Systems, Year: 2024
Abstract: SLYKLatent presents a novel approach for enhancing gaze estimation by addressing appearance instability challenges in datasets due to aleatoric uncertainties, covariate shifts, and test-domain generalization. The framework utilizes self-supervised learning for initial training with facial expression datasets, followed by refinement to improve facial feature estimation accuracy.
View Publication
AlzhiNet: Traversing from 2DCNN to 3DCNN, Towards Early Detection and Diagnosis of Alzheimer's Disease
Authors: Romoke Grace Akindele, Samuel Adebayo, Paul Shekonya Kanda, Ming Yu
Journal: In review with the Journal of Interdisciplinary Sciences: Computational Life Sciences, Year: 2024
Abstract: AlzhiNet proposes a hybrid deep learning framework that integrates both 2D and 3D Convolutional Neural Networks to enhance early detection and diagnosis of Alzheimer's Disease. The study addresses the challenges of capturing spatial and temporal features in medical imaging, aiming to improve diagnostic accuracy.
View Publication
Hand-Eye-Object Tracking for Human Intention Inference
Authors: Samuel Adebayo, Seán F. McLoone, Joost C. Dessing
Journal: IFAC, Year: 2022
Abstract: This research focuses on optimising human-robot interaction in collaborative tasks by enhancing the robot's understanding of human intentions. The study introduces a hand-eye-object tracking system that enables accurate intention inference, facilitating more natural and efficient interactions.
View Publication
Application of Deep Learning to Autonomous Robotic Car
Authors: Oluwagbemiga Omotayo Shoewu, Samuel Adebayo, Ayangbekun Oluwafemi J., Lateef Adesola Akinyemi
Journal: International Journal of Engineering and Technology, Year: 2021
Abstract: This paper explores the application of deep learning techniques in the development of autonomous robotic cars. The study addresses safety concerns and proposes solutions to enhance the reliability and efficiency of autonomous driving systems.
View Publication
Ongoing Research and Projects
1. Automated Robotic Assembly of Intermeshed Steel Connections (ISC)
Objective: Developing a bi-modal perception model for pose estimation and object detection in robotic assembly.
Approach: Creating synthetic datasets in Unity and aligning them with real-world images for robust model training.
Key Challenges: Domain adaptation, synthetic-to-real transfer learning, and multi-task learning.
Current Status: Dataset generation and model training are in progress.
2. Joint Fine-Tuning of Self-Supervised and Downstream Tasks Using Cross-Fitting
Objective: Exploring a novel training paradigm where self-supervised learning (SSL) and supervised learning interact during training.
Current Status: Evaluating feasibility and potential contributions to foundational ML research.
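One plausible reading of this paradigm, sketched on a toy numpy problem (the actual formulation is still under investigation, and the "SSL" pretext task here is a deliberately simple stand-in): a representation is fit without labels on one fold and the supervised head is fit on the other, so the two objectives never see the same samples, in the spirit of cross-fitting.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0, 0.1, 200)

def ssl_encoder(X_fold):
    """Toy 'self-supervised' step: learn a whitening transform from
    unlabeled data (a stand-in for a real SSL pretext task)."""
    cov = np.cov(X_fold, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    return eigvec / np.sqrt(eigval)  # scale eigenvector columns

def supervised_head(Z, y_fold):
    """Toy supervised step: least squares on the encoded features."""
    return np.linalg.lstsq(Z, y_fold, rcond=None)[0]

# Cross-fitting: encoder fit on one fold, head fit and applied on the
# other, so the representation is never tuned on its own labels.
folds = (slice(0, 100), slice(100, 200))
preds = np.empty_like(y)
for fit_fold, eval_fold in (folds, folds[::-1]):
    W = ssl_encoder(X[fit_fold])
    beta = supervised_head(X[eval_fold] @ W, y[eval_fold])
    preds[eval_fold] = X[eval_fold] @ W @ beta
```

The swap over the two folds means every sample receives a prediction from an encoder trained on disjoint data, which is the property cross-fitting is meant to provide.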
3. Mixed Reality and OptiTrack for Assembly Data Collection
Objective: Capturing motion and interaction data for ergonomic analysis and robotic assembly learning.
Approach: Using Mixed Reality and OptiTrack motion tracking to record human assembly movements.
Current Status: Proposal refinement for interdisciplinary collaboration.
4. CubitQuery: An AI-Driven Query Understanding Framework
Objective: Designing an AI-driven pipeline for processing, clustering, and automating customer query responses.
Approach: Leveraging NLP, feature selection, and clustering methods to understand and classify user queries dynamically.
Current Status: Prototype implementation and performance benchmarking.
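A minimal sketch of the clustering step, assuming a bag-of-words representation with cosine similarity (the queries, stopword list, and threshold below are hypothetical placeholders, not project data or the project's actual method):

```python
import numpy as np
from collections import Counter

# Hypothetical customer queries for illustration only.
queries = [
    "how do I reset my password",
    "forgot password cannot log in",
    "reset password link not working",
    "when will my order arrive",
    "order delivery is late",
    "track my order status",
]
stopwords = {"how", "do", "i", "my", "is", "in", "not", "when", "will", "cannot"}

# Bag-of-words vectors over a shared content-word vocabulary.
words = [[w for w in q.lower().split() if w not in stopwords] for q in queries]
vocab = sorted({w for ws in words for w in ws})

def vectorize(ws):
    counts = Counter(ws)
    return np.array([counts[w] for w in vocab], dtype=float)

vectors = np.stack([vectorize(ws) for ws in words])
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

# Greedy cosine clustering: a query joins the best-matching existing
# cluster if similarity clears the threshold, else starts a new one.
def cluster(vectors, threshold=0.3):
    centroids, labels = [], []
    for v in vectors:
        sims = [float(v @ c) / np.linalg.norm(c) for c in centroids]
        if sims and max(sims) >= threshold:
            k = int(np.argmax(sims))
            centroids[k] = centroids[k] + v
        else:
            k = len(centroids)
            centroids.append(v.copy())
        labels.append(k)
    return labels

labels = cluster(vectors)
```

On this toy input the password-related and order-related queries fall into separate clusters, which is the grouping an automated-response pipeline would route on.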
5. FinSciHub: A Data Science Workflow for Fintech
Objective: Simulating real-world data science workflows in the fintech sector, focusing on A/B testing, customer analytics, and ML-driven insights.
Approach: SQL/Python-based workflow development with dashboards and automated analysis pipelines.
Current Status: Open-source project with active development.
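As an example of the A/B-testing component, here is a minimal two-proportion z-test in pure Python (the conversion counts are hypothetical; the project itself may use a different test or library):

```python
import math

def ab_test_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates,
    using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Standard normal CDF via the error function (no SciPy needed).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts at 12% vs 10% for A.
z, p = ab_test_proportions(conv_a=500, n_a=5000, conv_b=600, n_b=5000)
```

With 5,000 users per arm, a 10% vs 12% split yields a small p-value, so the difference would be flagged as significant at conventional thresholds.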
6. Learning Graph Neural Networks (GNNs) and Large Language Models (LLMs)
Objective: Gaining expertise in GNNs and LLMs, covering theory, mathematics, implementation, and deployment.
Current Status: Hands-on learning with practical applications.