(+62) 813-6984-2186 | alifwr@outlook.com
Robotics & AI Enthusiast | Software Engineer | Researcher
Combining AI, Robotics, and Software Engineering to Drive Technological Advancements.
Hi, I’m Alif, a robotics enthusiast, software engineer, and researcher. My passion lies in combining cutting-edge technologies with creative solutions to drive progress. I work at the intersection of robotics, AI, software engineering, and scientific research, creating systems that solve real-world challenges and push the boundaries of what's possible.
In this project, I developed a deep learning model to denoise ECG signals, improving the clarity and accuracy of these signals for medical analysis. The model combined Transformer and Convolutional Neural Network (CNN) architectures to leverage the strengths of both. The Transformer component was used to capture long-range dependencies and temporal relationships within the ECG signal, while the CNN layers were employed to extract local patterns and features effectively. The integration of these two models allowed for superior noise reduction, particularly in handling the complex, non-linear noise present in real-world ECG data. The denoising process significantly enhanced signal quality, making it easier to identify key features such as heart rate variability and to detect abnormal rhythms.
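As a minimal numpy illustration of the two building blocks combined here (not the trained model, whose layer sizes and weights are not given), a 1-D convolution captures local waveform morphology while self-attention lets every sample attend to every other sample in the signal:

```python
import numpy as np

def conv1d(x, kernel):
    """'Same'-padded 1-D convolution: the CNN part, capturing local morphology."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(kernel)] @ kernel for i in range(len(x))])

def self_attention(x):
    """Single-head self-attention over the signal: the Transformer part,
    letting every sample attend to every other (long-range dependencies).
    Toy version with identity Q/K/V projections on a 1-feature signal."""
    q = k = v = x[:, None]
    scores = q @ k.T
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights @ v).ravel()
```

In the real model these operations carry learned weights and are stacked; the sketch only shows why the hybrid sees both local and global structure.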
An object-tracking system for robust human-tracking applications. The system fused RGB image data with infrared image data to achieve more robust detection under varying lighting conditions. A Kalman filter was used to estimate the next position of the tracked person, and to estimate the current position of a person not detected in the current frame. A deep learning model was used to extract appearance features; these features were then used to associate the people detected in the previous frame with those in the current frame.
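A constant-velocity Kalman filter of the kind used here can be sketched as follows; the state layout, noise levels, and time step are illustrative assumptions, not the project's actual tuning:

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2D constant-velocity Kalman filter for position tracking."""

    def __init__(self, dt=1.0):
        self.x = np.zeros(4)                 # state: [px, py, vx, vy]
        self.P = np.eye(4) * 500.0           # state covariance (high = uncertain)
        self.F = np.eye(4)                   # state transition matrix
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))            # measurement model: observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.01            # process noise (assumed)
        self.R = np.eye(2) * 1.0             # measurement noise (assumed)

    def predict(self):
        """Propagate the state one step; used for missed detections."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Correct the state with a detected position z = (px, py)."""
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

When a person is missed in a frame, calling `predict()` alone gives the estimated current position, exactly the role the filter plays in this tracker.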
A web application with biometric security applied. In this work, a user who logs in must capture their face. The face image is then fed into a Siamese network to classify whether the person matches the login data and whether the image is live or fake. The Siamese network used InceptionNet as the feature extraction layer to generate embeddings. Before the biometric verification process, the user's face must be registered. Registration was done by storing the embedding values in a vector database named QdrantDB. During verification, the face image was passed through the feature extraction layer to produce an embedding, and the system searched for the most similar stored embedding in the database. If the best match corresponds to the user's login data, the user is verified and allowed to log in.
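The nearest-similarity search over stored embeddings can be sketched like this; in the real system QdrantDB performs this search, and the function name, dictionary store, and threshold value here are stand-in assumptions:

```python
import numpy as np

def verify(query_emb, stored, threshold=0.8):
    """Find the stored identity whose embedding is most similar (cosine)
    to the query embedding; reject the login if similarity is too low.

    stored: dict mapping user_id -> registered embedding (numpy array).
    threshold: illustrative acceptance cutoff, not the project's value.
    """
    q = np.asarray(query_emb, float)
    q = q / np.linalg.norm(q)
    best_id, best_sim = None, -1.0
    for user_id, emb in stored.items():
        e = np.asarray(emb, float)
        sim = float(q @ (e / np.linalg.norm(e)))
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id if best_sim >= threshold else None
```

The returned identity is then compared against the login data; a `None` result means no registered face was close enough.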
An omni-wheel mobile robot that creates a map of gas-leakage intensities. The robot uses three gas sensors, feeding the sensor data to a neural network that classifies the gas into three classes: butane, alcohol, and gasoline. The robot needs a precise localization system to produce an accurate map. Trilateration-based localization was used, since the robot's working area is a warehouse, an indoor environment. The trilateration employed ultra-wideband (UWB) technology with four UWB devices: one was mounted on the robot and acted as a tag, while the other three were placed at fixed positions, forming a triangle with known distances.
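Trilateration from the three fixed anchors reduces to a small linear system: subtracting the first range-circle equation from the others cancels the quadratic terms. A sketch, with anchor layout and ranges as illustrative inputs:

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate the 2D tag position from anchor positions and UWB ranges.

    Each anchor i gives (x - xi)^2 + (y - yi)^2 = di^2; subtracting the
    first equation from the rest yields a linear least-squares problem.
    """
    anchors = np.asarray(anchors, float)
    d = np.asarray(distances, float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With noisy real UWB ranges the least-squares form degrades gracefully, which is why it is preferred over intersecting the circles exactly.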
A differential-drive mobile robot capable of following a specific person by detecting and tracking the position of the person's legs. The legs were detected using a deep learning model and tracked using a Kalman filter. An Adaptive Social Force Model was used as the navigation controller, with its parameters optimized using a Genetic Algorithm. The optimization was done in simulation using CoppeliaSim.
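The core of a social force controller is a resultant steering force: attraction toward the goal (the followed person) plus repulsion from nearby obstacles. A sketch with illustrative gains; in the project these parameters were the ones tuned by the Genetic Algorithm:

```python
import numpy as np

def social_force(pos, goal, obstacles, k_goal=1.0, k_rep=2.0, sigma=0.5):
    """Resultant 2D steering force for a social-force-style controller.

    k_goal, k_rep, sigma: attraction gain, repulsion gain, and repulsion
    decay length -- illustrative values, not the GA-optimized ones.
    """
    pos = np.asarray(pos, float)
    to_goal = np.asarray(goal, float) - pos
    force = k_goal * to_goal / (np.linalg.norm(to_goal) + 1e-9)
    for obs in obstacles:
        away = pos - np.asarray(obs, float)
        dist = np.linalg.norm(away) + 1e-9
        force += k_rep * np.exp(-dist / sigma) * away / dist
    return force
```

The force is converted to wheel velocities by the differential-drive kinematics; the GA searches over the gains to minimize a cost such as path length or closest obstacle distance in CoppeliaSim.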
A tool to detect, track, and associate people around a robot, to assist the robot's navigation. The scan data from the lidar sensor were plotted onto a 2D map, and an SSD MobileNetV3 model was applied to that map to detect objects recognized as human legs.
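Projecting the lidar scan into a top-down image, so an image detector can run on it, can be sketched as follows; image size and resolution are illustrative parameters:

```python
import numpy as np

def scan_to_image(ranges, angles, size=200, resolution=0.05):
    """Project 2D lidar ranges/bearings into a binary top-down image
    centred on the robot (size in pixels, resolution in metres/pixel)."""
    img = np.zeros((size, size), dtype=np.uint8)
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)
    cols = (xs / resolution + size // 2).astype(int)
    rows = (ys / resolution + size // 2).astype(int)
    valid = (rows >= 0) & (rows < size) & (cols >= 0) & (cols < size)
    img[rows[valid], cols[valid]] = 255     # mark each lidar return as a pixel
    return img
```

The resulting image is what the SSD MobileNetV3 detector consumes in place of a camera frame.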
This is part of the High Performance Computing course's assignments. The goal of this project was to reduce the processing time of image stitching by distributing the workload across several nodes. The distribution was implemented with MPI.
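The essence of the MPI distribution is a block decomposition of the workload across ranks. A framework-free sketch of the partitioning (the actual project would hand each slice to an MPI rank via scatter/gather):

```python
def partition(n_items, n_workers):
    """Split n_items as evenly as possible across n_workers, MPI-style:
    rank r processes the half-open slice slices[r] = (start, stop)."""
    base, extra = divmod(n_items, n_workers)
    slices, start = [], 0
    for r in range(n_workers):
        count = base + (1 if r < extra else 0)   # early ranks take the remainder
        slices.append((start, start + count))
        start += count
    return slices
```

For image stitching, `n_items` would be the number of image pairs (or tile rows) to process; each node stitches its slice and the root rank merges the partial results.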
An application that provides cheating detection for monitoring tests. To recognize whether a person is cheating, the system used YOLOv3 to detect the person's hands. We defined the person's working area; if the detected hands moved outside the defined frame, the system marked it as cheating. An MTCNN architecture was also used to extract facial features and determine the person's gaze.
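The hands-outside-the-working-area rule is a simple bounding-box containment check. A sketch, with the box format and function names as illustrative assumptions:

```python
def is_cheating(hand_boxes, work_area):
    """Flag a frame where any detected hand box leaves the allowed area.

    hand_boxes: list of (x1, y1, x2, y2) detections from the hand detector.
    work_area:  (x1, y1, x2, y2) of the permitted region.
    """
    ax1, ay1, ax2, ay2 = work_area
    for (x1, y1, x2, y2) in hand_boxes:
        # Any corner of the hand box outside the working area is a violation.
        if x1 < ax1 or y1 < ay1 or x2 > ax2 or y2 > ay2:
            return True
    return False
```

In practice one would debounce over several frames before raising an alert, so a single noisy detection does not flag a student.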
An application that provides an attendance system and also verifies that the attending person is wearing a mask, due to COVID-19. The system used Local Binary Pattern Histogram (LBPH) for face detection, VGGNet16 to recognize the person's identity, and YOLOv5 for mask detection. MySQL was used as the database to record the user attendance data. The whole system was implemented on a Raspberry Pi 4.
A monitoring device that uses a microphone as its main sensor. The microphone records the surrounding sound, which is then converted into a spectrogram by feature extraction. A Convolutional Neural Network was applied to the spectrogram to recognize the recorded event. MySQL was used as the database to record the detected events, and Laravel was used as the backend server. An Android application was built as the user interface.
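The spectrogram front-end can be sketched with a Hann-windowed short-time FFT; the FFT size and hop length here are illustrative, not the project's actual parameters:

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a short-time FFT with a Hann window.

    Returns an array of shape (n_fft // 2 + 1, n_frames): frequency bins
    down the rows, time frames across the columns -- the image the CNN sees.
    """
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames).T
```

Treating the output as a 2D image is what lets an ordinary image CNN classify audio events.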