Abstract
In order for a robot to operate autonomously in an environment, it must be able to locate itself within that environment. A robot's position and orientation cannot be measured directly by onboard physical sensors, so estimating them is a non-trivial problem. Systems such as the Global Navigation Satellite System (GNSS) and motion capture (mo-cap) can provide this information; however, they are expensive to set up or unusable in the environments where autonomous vehicles are often deployed. Our proposal uses a single vision sensor and a Moiré fiducial marker to estimate the robot's position and orientation using deep learning. This approach was tested experimentally on standard hardware for autonomous systems, achieving a position estimator with a 30 Hz refresh rate and a mean squared error of 0.02 meters.
Brief Biography
Nawaf Alotaibi is a Mechanical Engineering Master's student at KAUST. He is part of the Robotics, Intelligent Systems and Control (RISC) lab led by Dr. Eric Feron. He received his Bachelor of Science in Mechanical Engineering from Georgia Tech. Nawaf's main research interests are autonomous vehicles, manipulation, multi-agent systems, and deep learning.