Hello, I am

Solomon Chibuzo Nwafor

MSc candidate in Intelligent Field Robotic Systems (Erasmus Mundus), currently based in Spain

Robotics researcher bridging control, perception, and machine learning, with applications in healthcare imaging and autonomous systems for food security.

Python · PyTorch · ROS · SLAM · UAV Control · MPC · Optimization · Computer Vision · Medical Imaging · Vision-Language Models · Git · Docker · LaTeX · SolidWorks · CAD
C++ Basics · OpenCV · NumPy · Pandas · Matplotlib · CRFs · Segmentation · YOLOv8 · MedViT · DLS Control · Task-Priority IK · Dubins Planning

About Me

Solomon Chibuzo Nwafor portrait

I am a robotics researcher with a background in mechanical systems, autonomous control, and perception. My work addresses decision-making and motion in real environments, from lesion analysis in medical imaging to mobile manipulation in robotics.

My approach combines control theory, optimization, and data-driven machine learning, drawing on academic research, autonomous system projects, Kaggle competitions, and laboratory development.

I have also designed UAV platforms for farmland monitoring, crop mapping, and weed detection, addressing challenges in food security through robotics and automation.

I aim to bridge control, perception, and machine learning, with applications in healthcare imaging and autonomous systems for food security.

Education

2024 - Present

Erasmus Mundus Joint Master in Intelligent Field Robotic Systems

University of Girona, Spain & Eötvös Loránd University, Hungary


Focus Areas

Autonomous Systems
Machine Learning
Multiview Geometry
Probabilistic Robotics
Robot Manipulation
SLAM
Planning
Computer Vision
Intervention
2019 - 2023

M.Eng. Control Engineering

University of Nigeria, Nsukka

Focus Areas

Stochastic Control
Multivariable Control
Linear Systems
Optimal Control
Advanced Control
Modeling & Simulation
2012 - 2017

B.Eng. Mechanical Engineering

Chukwuemeka Odumegwu Ojukwu University, Nigeria


Focus Areas

Machine Design & Structures
Manufacturing & CAD/CAM
Thermodynamics & Heat Transfer
Fluid Mechanics & Power
Materials & Tribology
Control & Vibration

Experience

2025 - Present

Machine Learning Intern, iToBoS

ViCOROB, University of Girona, Spain

  • Built pipelines for lesion change detection in dermoscopic images
  • Applied MedGemma, vision–language models, and custom methods
  • Led Team NinjaX to first place in the iToBoS Kaggle challenge (0.6731 mAP)
Python · PyTorch · OpenCV · Segmentation · Vision-Language · MedGemma · LLaVA-Med · Diffusion Models
2023 - Present

Lecturer II – Mechatronic Engineering Department

University of Nigeria, Nsukka

  • Taught undergraduate courses in mechatronics and control
  • Member, Mechatronics Research Group
2019 - 2023

Graduate Assistant – Mechatronic Engineering Department

University of Nigeria, Nsukka

  • Conducted research under the supervision of the head of department
  • Supervised students in labs and projects
2017 - 2018

Laboratory Instructor – Mechanical Engineering Department

Petroleum Training Institute, Nigeria

  • Instructor of Mechatronics Laboratory
  • Instructor of CAD/Automotive Laboratory

My Projects

Lesion Change Analysis demo
iToBoS

Lesion Change Analysis (iToBoS Internship)

A pipeline for longitudinal analysis of dermoscopic lesions combining vision-language models with custom segmentation and temporal feature extraction. The approach borrows concepts from biomedical signal processing to quantify lesion evolution across timepoints. Multiple models were explored, including LLaVA-Med, GPT-based reasoning, MedGemma feature extraction, and the iToBoS change-detection framework.

  • Vision-language models applied to lesion change assessment
  • Custom segmentation and temporal feature extraction for longitudinal analysis
  • Multi-model integration with LLaVA-Med, GPT-4o-mini, and MedGemma
  • Results verified and annotated in collaboration with dermatologists, ensuring clinical consistency
Vision-Language Models · LLaVA-Med · MedGemma · Medical AI · Longitudinal Analysis
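
For flavor, here is a heavily simplified sketch of the temporal-comparison idea: embed the lesion crop from each timepoint with a frozen encoder and score change as a feature distance. A generic torchvision ResNet stands in for the MedGemma features used in the project, so treat this as an illustration rather than the project code.

```python
# Illustrative only: a frozen ResNet-18 stands in for the MedGemma
# feature extractor; change is scored as cosine distance between
# lesion embeddings at two timepoints.
import torch
from PIL import Image
from torchvision import models, transforms

encoder = models.resnet18(weights="IMAGENET1K_V1")
encoder.fc = torch.nn.Identity()          # keep the 512-d feature vector
encoder.eval()

prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

@torch.no_grad()
def change_score(crop_t0: Image.Image, crop_t1: Image.Image) -> float:
    """Higher score = larger apparent change between timepoints."""
    f0 = encoder(prep(crop_t0).unsqueeze(0))
    f1 = encoder(prep(crop_t1).unsqueeze(0))
    return 1.0 - torch.nn.functional.cosine_similarity(f0, f1).item()
```
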
iToBoS 2024 demo
Lesion-Detection

iToBoS 2024 – Skin Lesion Detection with 3D-TBP

Kaggle International Competition — 1st place among 16 teams (0.6731 mAP, IoU 0.5–0.75). Built a two-stage pipeline for detecting multiple skin lesions from clinical images produced by 3D total-body photography. The system combined YOLOv8 for bounding-box localization with MedViT for lesion classification, balancing accuracy, speed, and robustness to imaging noise.

  • YOLOv8 used for lesion detection, leveraging decoupled head for improved localization
  • MedViT integrated for classification, tuned for medical image features
  • Pipeline optimized for speed, detection accuracy, and robustness to real-world artifacts
  • Full cycle from training, validation, and post-processing to leaderboard submission
  • Solution approach presented at the competition workshop
YOLOv8 · MedViT · PyTorch · Computer Vision · Medical AI
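
A minimal sketch of the two-stage design, under stated assumptions: the checkpoint name lesion_yolov8.pt is a placeholder, and a stock torchvision ViT stands in for MedViT. The detector localizes lesions; each crop is then classified separately.

```python
# Two-stage detect-then-classify sketch, not the competition code.
import torch
from PIL import Image
from torchvision import models, transforms
from ultralytics import YOLO

detector = YOLO("lesion_yolov8.pt")    # hypothetical fine-tuned checkpoint
classifier = models.vit_b_16(weights=None, num_classes=2)  # stand-in for MedViT
classifier.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)),
                                 transforms.ToTensor()])

@torch.no_grad()
def detect_and_classify(image_path, conf=0.25):
    image = Image.open(image_path).convert("RGB")
    detections = detector(image_path, conf=conf)[0]    # stage 1: localize lesions
    outputs = []
    for box in detections.boxes.xyxy.tolist():         # [x1, y1, x2, y2]
        crop = image.crop(tuple(box))                  # stage 2: classify the crop
        logits = classifier(preprocess(crop).unsqueeze(0))
        outputs.append((box, logits.argmax(dim=1).item()))
    return outputs
```
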
Integrated Mobile Manipulation demo
Integration

Integrated Mobile Manipulation with Exploration and Task Execution

Built an integrated pipeline that unified exploration, planning, perception, and manipulation on the Turtlebot–SwiftPro platform. Exploration was driven by Next Best View selection in OctoMap, with RRT–Dubins motion planning ensuring feasible navigation for the non-holonomic base. Manipulation was handled through a recursive task-priority controller on the SwiftPro arm, allowing pick-and-place tasks to run in parallel with navigation.

  • Next Best View exploration integrated with RRT–Dubins trajectory planning
  • Task-priority control enabled simultaneous navigation and arm manipulation
  • ArUco detection triggered manipulation tasks from visual perception
  • Behavior trees provided modular coordination of system tasks
  • Validated on the real Turtlebot–SwiftPro platform with ROS
ROS · Mobile Manipulation · RRT-Dubins · Task-Priority Control · Behavior Trees · ArUco
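
A skeleton of the behavior-tree coordination layer, assuming py_trees 2.x; the leaves here are stubs standing in for the real detection, navigation, and manipulation behaviors, so this shows only the structure, not the project implementation.

```python
# Sketch of behavior-tree task coordination (py_trees 2.x assumed).
import py_trees

class DetectAruco(py_trees.behaviour.Behaviour):
    """Stub leaf: in the real system, succeeds once a marker pose is available."""
    def update(self):
        return py_trees.common.Status.SUCCESS

class NavigateToTarget(py_trees.behaviour.Behaviour):
    def update(self):
        return py_trees.common.Status.SUCCESS

class PickObject(py_trees.behaviour.Behaviour):
    def update(self):
        return py_trees.common.Status.SUCCESS

# A Sequence ticks its children in order and fails fast if any child fails.
root = py_trees.composites.Sequence(name="PickAndPlace", memory=True)
root.add_children([DetectAruco("detect"), NavigateToTarget("navigate"),
                   PickObject("pick")])

tree = py_trees.trees.BehaviourTree(root)
tree.tick()   # tick the whole pipeline once; loop this at the control rate
```
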
Planning demo
Planning

Sampling-Based Motion Planning with Dubins Paths

Developed a planning framework for a Turtlebot platform under non-holonomic constraints. Implemented an RRT-like planner that grows the tree in continuous space while enforcing forward motion and bounded turning radius. Dubins path primitives ensured trajectory feasibility by connecting sampled nodes with smooth curves instead of straight lines. The planner was integrated with ROS, consuming occupancy grid maps and generating feasible paths to navigation goals.

  • Tree expansion using Dubins-compliant propagation in (x, y, ψ)
  • Collision checking against occupancy grid for safe navigation
  • Path reconstruction with concatenated Dubins maneuvers
  • Online execution in ROS with continuous feedback from odometry
Motion Planning · RRT · Dubins Paths · ROS · Non-holonomic
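
The core of the planner can be sketched as follows: tree nodes are (x, y, psi) states, and expansion propagates unicycle kinematics with curvature bounded by the minimum turning radius, so every edge is drivable. Workspace bounds, step sizes, and the endpoint-only collision check are illustrative simplifications; the full planner also checks intermediate states along each arc.

```python
# Minimal RRT-style expansion with Dubins-like forward propagation.
import math, random

STEP, V, R_MIN = 0.2, 1.0, 0.5   # integration step [s], speed [m/s], min radius [m]

def propagate(state, omega, steps=5):
    """Integrate unicycle kinematics; |omega| <= V / R_MIN keeps the arc drivable."""
    x, y, psi = state
    for _ in range(steps):
        x += V * math.cos(psi) * STEP
        y += V * math.sin(psi) * STEP
        psi += omega * STEP
    return (x, y, psi)

def plan(start, goal, is_free, max_iters=5000):
    """Grow a tree of feasible (x, y, psi) states toward the goal."""
    tree = {start: None}                         # node -> parent
    omegas = [-V / R_MIN, 0.0, V / R_MIN]        # hardest left, straight, hardest right
    for _ in range(max_iters):
        sample = goal if random.random() < 0.1 else \
            (random.uniform(0, 10), random.uniform(0, 10), 0.0)  # 10 m square world
        near = min(tree, key=lambda n: (n[0]-sample[0])**2 + (n[1]-sample[1])**2)
        # pick the motion primitive whose endpoint lands closest to the sample
        new = min((propagate(near, w) for w in omegas),
                  key=lambda n: (n[0]-sample[0])**2 + (n[1]-sample[1])**2)
        if is_free(new):                         # full version checks the whole arc
            tree[new] = near
            if (new[0]-goal[0])**2 + (new[1]-goal[1])**2 < 0.25:
                path = [new]                     # walk parents back to the start
                while tree[path[-1]] is not None:
                    path.append(tree[path[-1]])
                return path[::-1]
    return None
```
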
Intervention demo
Intervention

Task-Priority Based Kinematic Control for a Mobile Manipulator

Implemented a recursive task-priority controller for a Turtlebot base with a 4-DOF Swift Pro arm in ROS 1. The solver projected lower-priority commands into the null space of higher-priority ones, handling equality and inequality tasks. Behavior trees structured pick-and-place pipelines, integrating perception, navigation, and manipulation into a single flow. ArUco detection enabled online target acquisition, while joint limit enforcement and base suppression ensured safe actuation.

  • Recursive redundancy resolution with weighted damped least squares
  • Joint limit and Cartesian tracking tasks enforced in strict priority
  • Behavior tree execution of pick-and-place routines
  • Vision-driven adaptation with ArUco-based goals
ROS 1 · Task-Priority Control · Kinematics · Behavior Trees · ArUco
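
The recursive redundancy resolution reduces to a short loop: each task's Jacobian is projected into the null space accumulated from higher-priority tasks, and its contribution is computed with a damped least-squares inverse. A minimal NumPy sketch, with Jacobians and task velocities assumed to be provided by the robot model:

```python
# Recursive task-priority resolution with damped least squares (sketch).
import numpy as np

def dls_pinv(J, damping=0.1):
    """Damped least-squares inverse: J^T (J J^T + lambda^2 I)^-1."""
    JJt = J @ J.T
    return J.T @ np.linalg.inv(JJt + damping**2 * np.eye(JJt.shape[0]))

def task_priority(tasks, n_dof):
    """tasks: list of (J_i, xdot_i), ordered highest priority first."""
    dq = np.zeros(n_dof)
    P = np.eye(n_dof)                    # null-space projector of tasks so far
    for J, xdot in tasks:
        J_bar = J @ P                    # task Jacobian restricted to the free space
        dq = dq + dls_pinv(J_bar) @ (xdot - J @ dq)
        P = P - np.linalg.pinv(J_bar) @ J_bar   # shrink the remaining null space
    return dq
```
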
Localization demo
Localization

Pose-Based EKF SLAM using ICP Scan-Matching on a Kobuki Turtlebot

Designed a localization pipeline on the Kobuki Turtlebot using a pose-based Extended Kalman Filter. The system fused encoder odometry, IMU yaw, and point clouds from a Realsense depth camera converted into 2D slices. Dead reckoning predicted motion through differential drive kinematics, while ICP alignment corrected accumulated drift by matching consecutive scans. IMU yaw was integrated as a pseudo-measurement to stabilize heading and guide ICP initialization. A Mahalanobis-based filter managed the state vector, keeping only statistically significant poses.

  • Pose-based EKF formulation tracking a chain of robot poses
  • Dead reckoning prior corrected by IMU yaw fusion
  • ICP scan-matching as the main observation for trajectory refinement
  • Mahalanobis distance test to suppress redundant updates
EKF SLAM · ICP · ROS · Point Clouds · IMU
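
Two pieces carry most of the filter: the differential-drive prediction with its linearized Jacobian, and the Mahalanobis test that decides whether a new pose is informative enough to keep. A sketch with illustrative noise and gate values:

```python
# EKF prediction for a differential-drive pose, plus a Mahalanobis gate.
import numpy as np

def predict(x, P, v, w, dt, Q):
    """Propagate pose [x, y, theta] through differential-drive kinematics."""
    px, py, th = x
    x_new = np.array([px + v * dt * np.cos(th),
                      py + v * dt * np.sin(th),
                      th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])           # Jacobian of the motion model
    return x_new, F @ P @ F.T + Q

def significant(delta, S, gate=7.81):
    """Chi-square gate (3 dof, 95%): keep a new pose in the state vector
    only if displacement delta is distinguishable under covariance S."""
    return delta @ np.linalg.inv(S) @ delta > gate
```
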
WSSS demo
Perception

Weakly Supervised Seafloor Segmentation

Segmented side-scan sonar imagery into sand, mud, maerl, and rock using only weak labels. Training used noisy masks, pseudo-labels were refined with dense CRF post-processing, and class imbalance was addressed with the Lovász-Softmax loss. The GIF cycles through qualitative samples and training curves.

  • Noise-tolerant training with warm-up and attention dropout
  • Pseudo-mask refinement using dense CRF post-processing
  • mIoU progression tracked with cross-validation runs
PyTorch · CRF · Lovász-Softmax · Sonar
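
The pseudo-mask refinement step can be sketched with the pydensecrf package: network softmax scores become unary potentials, and Gaussian plus bilateral pairwise kernels pull labels toward spatially and visually coherent regions. Kernel widths and compatibilities below are illustrative, not the tuned project values:

```python
# Dense CRF refinement of soft segmentation scores (pydensecrf assumed).
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def refine(probs, image, iters=5):
    """probs: (n_classes, H, W) softmax scores; image: (H, W, 3) uint8."""
    n_classes, H, W = probs.shape
    crf = dcrf.DenseCRF2D(W, H, n_classes)
    crf.setUnaryEnergy(unary_from_softmax(probs))
    crf.addPairwiseGaussian(sxy=3, compat=3)        # spatial smoothness kernel
    crf.addPairwiseBilateral(sxy=60, srgb=10,       # appearance kernel
                             rgbim=np.ascontiguousarray(image), compat=10)
    Q = np.array(crf.inference(iters)).reshape(n_classes, H, W)
    return Q.argmax(axis=0)                         # refined pseudo-labels
```
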
UAV Precision Agriculture demo
UAV-Agri

Multipurpose Drone and Machine Vision System for Farmland Applications

TETFund 2020 NRF Project – Mechatronics Research Group, UNN. A two-year national research project on UAV-based precision agriculture. My master's thesis work focused on the design and control of the drone, from CAD modeling of the airframe and embedded systems in SolidWorks to full kinematic, dynamic, and state-space implementation in Python. I proposed a hybrid controller combining model predictive control, feedback linearization, and backstepping for trajectory tracking. The platform was co-developed for farmland mapping, crop monitoring, and selective weed management, integrating onboard vision with a targeted spraying mechanism.

  • UAV body and embedded systems designed in SolidWorks
  • Kinematics, dynamics, and state-space models implemented in Python
  • Hybrid control (MPC + feedback linearization + backstepping) for tracking
  • Machine vision for mapping, weed detection, and selective spraying
UAV · SolidWorks · Python · MPC · Backstepping · Feedback Linearization · Computer Vision · Agriculture
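
As an illustration of the MPC component alone (the feedback-linearization and backstepping layers are omitted), here is a generic linear tracking MPC in cvxpy; the model matrices, horizon, weights, and input bound are placeholders rather than the thesis values:

```python
# Receding-horizon tracking MPC for x_{k+1} = A x_k + B u_k (sketch).
import numpy as np
import cvxpy as cp

def mpc_step(A, B, x0, x_ref, N=20, u_max=2.0):
    """Solve one horizon and return the first input to apply."""
    n, m = B.shape
    x, u = cp.Variable((n, N + 1)), cp.Variable((m, N))
    Q, R = np.eye(n), 0.1 * np.eye(m)    # tracking vs. control-effort weights
    cost, constraints = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k] - x_ref, Q) + cp.quad_form(u[:, k], R)
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.norm(u[:, k], "inf") <= u_max]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u[:, 0].value                 # apply the first input, then re-plan
```
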
Lion Ozumba 551 Electric Vehicle demo
Lion-Ozumba-551

Lion Ozumba 551 Electric Vehicle

Nigeria's first electric campus shuttle, built at UNN. I led the CAD design team, developing the vehicle body and chassis models that supported fabrication and assembly.

  • CAD design leadership for body and chassis modeling
  • Prototyped for campus shuttle operations
  • Enabled fabrication-ready assemblies and integration
CAD Design · Electric Vehicle · SolidWorks

Labs

Machine Learning Labs demo
Machine Learning

Machine Learning Labs

Gradient Descent for Linear Regression

Implemented gradient descent from scratch to fit a line to synthetic data. Visualized the effect of step size on convergence and tracked loss decrease across iterations.

  • Update rule derived and applied for weight and bias
  • Loss curve plotted across epochs
  • Learning rate variation compared
  • Convergence paths shown on cost surface
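
The full update rule fits in a few lines; a self-contained version on synthetic data:

```python
# Gradient descent for a one-feature linear model on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, 100)   # ground truth: w = 3, b = 1

w, b, lr = 0.0, 0.0, 0.5
for epoch in range(200):
    y_hat = w * x + b
    w -= lr * 2 * np.mean((y_hat - y) * x)    # d(MSE)/dw
    b -= lr * 2 * np.mean(y_hat - y)          # d(MSE)/db
print(w, b)                                   # converges near (3.0, 1.0)
```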

Linear Regression

Extended linear regression to closed-form and iterative solutions. Demonstrated predictions, residual errors, and regression line fit on datasets.

  • Analytical solution using normal equation
  • Gradient descent formulation validated
  • Plot of regression line against true data
  • Residual error visualization

Logistic Regression

Formulated logistic regression for binary classification. Illustrated sigmoid decision boundary and evaluated classification outcomes.

  • Sigmoid activation applied to linear model
  • Loss and accuracy tracked during training
  • Visualization of class separation
  • Comparison of predicted vs. true labels
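
A compact version of the formulation on toy one-dimensional data, using the standard cross-entropy gradient:

```python
# Logistic regression trained with the binary cross-entropy gradient.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 50), rng.normal(2, 1, 50)])
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    p = sigmoid(w * x + b)
    w -= lr * np.mean((p - y) * x)    # gradient of binary cross-entropy
    b -= lr * np.mean(p - y)

print(((sigmoid(w * x + b) > 0.5) == y).mean())   # training accuracy
```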

Evaluation Metrics

Compared classification metrics on sample tasks. Highlighted precision, recall, F1 score, and ROC curve interpretation.

  • Confusion matrix visualized
  • Precision-recall curve plotted
  • ROC-AUC curve evaluated
  • Trade-offs between metrics explained
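
Each of these metrics is one call in scikit-learn; the labels and scores below are made up for illustration:

```python
# Classification metrics on toy labels and scores.
from sklearn.metrics import (confusion_matrix, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_prob = [0.1, 0.4, 0.8, 0.35, 0.9, 0.2, 0.6, 0.55]   # classifier scores
y_pred = [int(p > 0.5) for p in y_prob]

print(confusion_matrix(y_true, y_pred))
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred))
print(f1_score(y_true, y_pred))              # harmonic mean of P and R
print(roc_auc_score(y_true, y_prob))         # threshold-free ranking quality
```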

Fully Connected Neural Network 1

Constructed a feedforward network in PyTorch with one hidden layer. Trained on classification data and visualized loss/accuracy progress.

  • Forward and backward passes coded in PyTorch
  • Cross-entropy loss and SGD optimization
  • Training and validation curves plotted
  • Activation functions tested
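
A minimal PyTorch version of the one-hidden-layer setup with the cross-entropy/SGD training loop; shapes and data are illustrative:

```python
# One-hidden-layer network with a standard training loop.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

X = torch.randn(256, 20)            # toy feature batch
y = torch.randint(0, 3, (256,))     # toy class labels

for epoch in range(100):
    optimizer.zero_grad()
    loss = criterion(model(X), y)   # forward pass
    loss.backward()                 # backward pass
    optimizer.step()
```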

Fully Connected Neural Network 2

Extended network to deeper architecture with multiple hidden layers. Investigated the effect of model complexity and dropout.

  • Multi-layer structure implemented in PyTorch
  • Regularization with dropout applied
  • Loss and accuracy monitored
  • Comparison of shallow vs. deep networks

Decision Trees

Applied decision trees to structured data for classification. Visualized splits and depth-related generalization.

  • Tree construction with Gini/entropy criteria
  • Depth parameter tuned for over/underfitting
  • Graphical tree structure shown
  • Decision boundaries visualized
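
The depth experiment in scikit-learn form, on a toy two-moons dataset (illustrative, not the lab data):

```python
# Shallow vs. deep trees: train/test accuracy exposes over/underfitting.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for depth in (2, 5, None):          # None lets the tree grow fully (overfits)
    tree = DecisionTreeClassifier(max_depth=depth, criterion="gini",
                                  random_state=0).fit(X_tr, y_tr)
    print(depth, tree.score(X_tr, y_tr), tree.score(X_te, y_te))
```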

Image Classification using a CNN

Implemented a convolutional network for image recognition. Demonstrated feature extraction through convolution and pooling layers.

  • Conv, ReLU, and pooling layers stacked
  • Flattened features connected to dense layers
  • Training accuracy and loss tracked
  • Visualization of learned filters
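
A minimal version of the conv/ReLU/pool stack feeding a dense head, sized here for 32x32 RGB inputs:

```python
# Small CNN: two conv/pool blocks, flattened into a linear classifier.
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```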

Image Classification using a CNN (Lab 9)

Refined CNN with tuning and regularization. Highlighted improvements in classification accuracy.

  • Hyperparameter tuning of learning rate and batch size
  • Data augmentation applied
  • Dropout to reduce overfitting
  • Final accuracy curves compared
Autonomous Systems Labs demo
Autonomous-Systems

Autonomous Systems Labs

Overview

A sequence of labs focused on motion planning and learning for mobile robots. Covered potential fields, heuristic graph search, sampling-based planning, and reinforcement learning, each implemented and validated with visualization or simulation.

  • End-to-end implementations with visualization
  • Comparative analysis across maps and planners
  • Focus on feasibility, optimality, and robustness

Lab 2: Potential Functions

Implemented attractive, repulsive, and total potential fields on grid maps. Tested gradient descent and developed wavefront planning to avoid local minima.

  • Attraction and repulsion fields combined into total potential
  • Brushfire algorithm for distance maps
  • Gradient descent with 4- and 8-connectivity
  • Wavefront planner for global path generation
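
The wavefront planner is a breadth-first sweep from the goal; descending the resulting cost map from the start yields a path free of local minima. A sketch with 4-connectivity:

```python
# Wavefront cost map: BFS from the goal over free grid cells.
from collections import deque

def wavefront(grid, goal):
    """grid: 0 = free, 1 = obstacle; returns cost-to-goal map (goal cell = 2)."""
    rows, cols = len(grid), len(grid[0])
    cost = [[None] * cols for _ in range(rows)]
    cost[goal[0]][goal[1]] = 2
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connectivity
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and cost[nr][nc] is None:
                cost[nr][nc] = cost[r][c] + 1
                queue.append((nr, nc))
    return cost   # from the start, repeatedly step to any strictly lower neighbor
```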

Lab 3: Graph Search (A*)

Applied A* to visibility graphs and grid maps. Compared path quality under 4- and 8-connectivity with Euclidean heuristics.

  • Visibility graph construction and search
  • Grid-map A* with 4/8 connectivity
  • Optimal path reconstruction
  • Path cost analysis across maps
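
A compact grid A* with the Euclidean heuristic used in the lab; the first four moves give 4-connectivity, all eight give 8-connectivity:

```python
# Grid A* with an admissible Euclidean heuristic.
import heapq, math

def a_star(grid, start, goal):
    moves = [(1,0), (-1,0), (0,1), (0,-1), (1,1), (1,-1), (-1,1), (-1,-1)]
    h = lambda p: math.dist(p, goal)
    open_set = [(h(start), start)]
    came_from, g_best, closed = {start: None}, {start: 0.0}, set()
    while open_set:
        _, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                      # reconstruct the optimal path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in moves:
            nxt = (node[0] + dr, node[1] + dc)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]] == 1:     # obstacle cell
                continue
            ng = g_best[node] + math.hypot(dr, dc)
            if ng < g_best.get(nxt, float("inf")):
                g_best[nxt] = ng
                came_from[nxt] = node
                heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None
```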

Lab 4: Sampling-Based Planners (RRT and RRT*)

Developed RRT and RRT* on cluttered maps. RRT ensured feasibility, RRT* optimized cost via rewiring, with post-process smoothing.

  • Object-oriented RRT and RRT* implementations
  • Random sampling with goal biasing
  • Collision-free steering and extension
  • Path smoothing and cost comparison

Reinforcement Learning Demo (Q-Learning)

Built a real-time Q-learning framework with TurtleBot visualization. Reward shaping and adaptive epsilon decay improved convergence.

  • Distance-based reward shaping and success rewards
  • Slower epsilon decay for sustained exploration
  • Real-time map rendering with robot visualization
  • Training analytics: heatmaps and success metrics
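
The tabular core of the demo, with epsilon-greedy selection and the slow epsilon decay noted above; the environment interface is abstracted away:

```python
# Tabular Q-learning with epsilon-greedy exploration and slow decay.
import random
from collections import defaultdict

Q = defaultdict(float)                    # Q[(state, action)] -> value
alpha, gamma = 0.1, 0.95
epsilon, eps_decay, eps_min = 1.0, 0.999, 0.05   # slow decay keeps exploring

def choose_action(state, actions):
    if random.random() < epsilon:
        return random.choice(actions)     # explore
    return max(actions, key=lambda a: Q[(state, a)])  # exploit

def update(state, action, reward, next_state, actions, done):
    global epsilon
    target = reward if done else \
        reward + gamma * max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (target - Q[(state, action)])
    epsilon = max(eps_min, epsilon * eps_decay)
```
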
Multiview Geometry Labs demo
Multiview-Geometry

Multiview Geometry Labs

Overview

A sequence of labs implementing core methods in multiview geometry for calibration, feature matching, epipolar geometry, and stereo visual odometry. Each lab combined theoretical derivations with practical coding and experiments on real or synthetic data.

  • End-to-end implementations with real and synthetic data
  • Theoretical foundations combined with practical coding
  • Focus on calibration, matching, epipolar geometry, and VO

Lab 1: Camera Projection and Calibration

Implemented projection matrices and estimated camera parameters from 3D–2D correspondences using the Hall method. Investigated the effect of point distribution and noise on parameter recovery.

  • Projection matrix estimation from intrinsic and extrinsic parameters
  • Hall method applied to compute 11 unknown parameters
  • Analysis of noise on skew, focal length, and principal point
  • Error reduction with increasing number of 3D points
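
The linear estimation step can be sketched as a direct linear transform: each 3D-2D correspondence contributes two rows, and SVD recovers the projection matrix up to scale. The Hall formulation instead fixes p34 = 1 and solves a least-squares system; the SVD route shown here is an equivalent sketch.

```python
# DLT estimation of the 3x4 projection matrix from 3D-2D correspondences.
import numpy as np

def estimate_P(X, x):
    """X: (N, 3) world points, x: (N, 2) pixels, N >= 6 non-coplanar points."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        p = [Xw, Yw, Zw, 1.0]
        rows.append(p + [0.0] * 4 + [-u * c for c in p])
        rows.append([0.0] * 4 + p + [-v * c for c in p])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    P = Vt[-1].reshape(3, 4)     # null vector = stacked rows of P, up to scale
    return P / P[2, 3]           # fix the scale so p34 = 1
```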

Lab 2: Feature Extraction and Image Registration

Applied SIFT feature extraction on underwater datasets to detect and match features. Evaluated the role of the distance ratio in reliable matching and implemented homographies for image registration.

  • SIFT feature detection and descriptor matching
  • Effect of distance ratio on associations and errors
  • Homography models for planar registration
  • RANSAC to eliminate mismatches and minimize reprojection error
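
The matching-to-registration chain in OpenCV form: SIFT descriptors, Lowe's ratio test, and RANSAC homography estimation:

```python
# SIFT matching with the ratio test, then a RANSAC homography.
import cv2
import numpy as np

def register(img1, img2, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    good = []
    for pair in cv2.BFMatcher().knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])          # Lowe's ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # 3 px threshold
    return H, inliers
```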

Lab 3: Epipolar Geometry and Fundamental Matrix

Derived and estimated the fundamental matrix using both analytical formulation and the 8-point algorithm. Implemented epipolar line visualization and explored noise sensitivity.

  • Analytical and 8-point estimation of F
  • Epipolar line computation and visualization
  • Noise analysis with Gaussian perturbations
  • Rank-2 enforcement and normalization for stable geometry
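
A sketch of the normalized 8-point algorithm, including the rank-2 enforcement and de-normalization steps:

```python
# Normalized 8-point estimation of the fundamental matrix F.
import numpy as np

def normalize(pts):
    """Translate to the centroid and scale so the mean distance is sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def eight_point(x1, x2):
    """x1, x2: (N, 2) matched pixel points, N >= 8; x2^T F x1 = 0."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt   # enforce rank 2
    F = T2.T @ F @ T1                         # undo the normalization
    return F / F[2, 2]
```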

Lab 4: Stereo Visual Odometry

Built a stereo VO pipeline using the UTIAS dataset. Frames were undistorted and rectified, followed by feature extraction with bucketing for uniform coverage. Motion was estimated using 3D-to-3D and 2D-to-3D alignment.

  • Stereo rectification and feature bucketing
  • Circular matching across stereo frames
  • Pose estimation via 3D-to-3D and 2D-to-3D methods
  • Monte Carlo robustness tests under noise
  • Trajectory alignment with GPS reference
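
The 3D-to-3D pose step reduces to rigid alignment of matched triangulated point sets, solvable in closed form with SVD (Kabsch/Horn). A minimal sketch:

```python
# Closed-form rigid alignment of matched 3D point sets (Kabsch/Horn).
import numpy as np

def rigid_align(P, Q):
    """Find R, t minimizing ||R P_i + t - Q_i|| for (N, 3) matched points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```
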
Probabilistic Robotics Labs demo
Probabilistic-Robotics

Probabilistic Robotics Labs

Overview

A series of labs implementing probabilistic methods for localization of a differential drive mobile robot. Each step introduced progressively more advanced filtering techniques, from odometry-only estimation to particle filters and map-based EKF localization.

  • Progressive complexity from dead reckoning to advanced filters
  • Practical implementation of probabilistic localization
  • Focus on differential drive robots and sensor fusion

Lab 1: Differential Drive Simulation and Dead Reckoning

Implemented a differential drive robot model and tested dead reckoning localization. Pose was updated from wheel encoders, with Gaussian noise added to simulate real drift. Circular and figure-eight trajectories were used to analyze error growth.

  • Simulated differential drive robot with encoder and noise models
  • Dead reckoning localization from odometry
  • Trajectory analysis under circular and 8-shaped paths
  • Error growth analysis with Gaussian noise simulation

Lab 2: Monte Carlo Localization

Extended the simulation with a particle filter for probabilistic localization. Prediction was implemented from motion models, and observations were incorporated using range data compared with map features. Both roulette wheel and stochastic universal resampling were applied and compared.

  • Prediction step using noisy motion model
  • Observation step with Gaussian likelihood from range sensors
  • Resampling via roulette wheel and stochastic universal resampling
  • Analysis of particle size (100 vs 300) on localization quality
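
Stochastic universal resampling fits in a few lines: a single random offset places N evenly spaced pointers over the cumulative weights, giving lower variance than independent roulette-wheel draws:

```python
# Stochastic universal resampling over particle weights.
import numpy as np

def sus_resample(particles, weights):
    """One uniform offset, N evenly spaced pointers over the weight CDF."""
    N = len(particles)
    positions = (np.random.uniform() + np.arange(N)) / N
    cdf = np.cumsum(weights) / np.sum(weights)
    idx = np.minimum(np.searchsorted(cdf, positions), N - 1)
    return [particles[i] for i in idx]    # survivors get equal weight 1/N
```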

Lab 4: Map-Based EKF Localization

Implemented Extended Kalman Filter localization using odometry, compass heading, and map-based feature observations. Tested scenarios with only motion prediction, compass updates at different rates, and combined compass + feature updates. Explored both Cartesian and Polar feature representations.

  • EKF prediction and update from encoders and compass
  • Feature-based map localization with Cartesian and Polar features
  • Data association integrated into EKF update
  • Comparative evaluation of compass-only, feature-only, and combined updates

Publications

Contact Me

Email

solomon.nwafor@unn.edu.ng

u1999124@campus.udg.edu

Phone

(+34) 600961183

Google Scholar

Google Scholar Profile