Action Library for Robot Execution

Title: Action Library for Robot Execution
Summary: Action Library for Robot Execution
Keywords:
TimeFrame: Fall 2022
References:
Prerequisites:
Author: Fredrik Lundborg
Supervisor: Eren Erdal Aksoy
Level: Master
Status: Open


We start with the assumption that the robot has a robust perception algorithm that provides the 6D poses of objects in the scene (i.e., the output of the first topic described above). Given a high-level goal such as "put objects on top of each other", the robot should decide which objects are easy to grasp, so that it can detect grasp points accordingly. To do that, we will use state-of-the-art pose and grasp detection algorithms. The robot should then start grasping and moving objects one after another.

The main task of the thesis is to create an action library in which a set of manipulation tasks (e.g., push, lift, grasp) is defined for the robot. There will be no learning involved in the first attempt; it will mostly be a matter of customizing the MoveIt library from ROS for our UR3e robot. If the student has enough time left, we could include advanced learning algorithms to define the action library.

The actual scientific contribution will be that the robot decides which sensor it should rely on while performing a manipulation task. For instance, the robot can autonomously decide that the wrist camera is sufficient for a pushing action, while both the wrist and fixed cameras are needed for pick-and-place. The force/torque sensor on the robot wrist will also be investigated. Therefore, a set of statistical analyses needs to be performed with the robot. This topic is largely about robot control using ROS, as well as simulation.
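To make the idea of an action library concrete, below is a minimal sketch using MoveIt 1's Python interface (moveit_commander) under ROS. The planning-group name "manipulator", the SENSOR_REQUIREMENTS table, and the primitive signatures are illustrative assumptions, not part of the proposal; the actual library would be developed and validated on the UR3e.

```python
#!/usr/bin/env python
"""Minimal action-library sketch for a UR3e via MoveIt (ROS 1 assumed).

The sensor table and primitive signatures below are illustrative
assumptions, not the proposal's definitive design.
"""
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

# Hypothetical per-action sensor requirements; the thesis would decide
# these autonomously, backed by the statistical analyses mentioned above.
SENSOR_REQUIREMENTS = {
    "push":  ["wrist_camera"],
    "lift":  ["wrist_camera", "force_torque"],
    "grasp": ["wrist_camera", "fixed_camera", "force_torque"],
}


class ActionLibrary(object):
    def __init__(self, group_name="manipulator"):  # group name is an assumption
        moveit_commander.roscpp_initialize(sys.argv)
        rospy.init_node("action_library", anonymous=True)
        self.group = moveit_commander.MoveGroupCommander(group_name)

    def _move_to(self, pose):
        """Plan and execute a motion to a 6D target pose."""
        self.group.set_pose_target(pose)
        success = self.group.go(wait=True)
        self.group.stop()
        self.group.clear_pose_targets()
        return success

    def push(self, start_pose, end_pose):
        """Approach the object, then follow a straight Cartesian pushing line."""
        if not self._move_to(start_pose):
            return False
        plan, fraction = self.group.compute_cartesian_path(
            [end_pose], eef_step=0.01, jump_threshold=0.0)
        # Only execute if nearly the whole path could be planned.
        return fraction > 0.95 and self.group.execute(plan, wait=True)

    def lift(self, grasp_pose, height=0.10):
        """Move the end effector `height` metres above the grasp pose."""
        target = Pose()
        target.position.x = grasp_pose.position.x
        target.position.y = grasp_pose.position.y
        target.position.z = grasp_pose.position.z + height
        target.orientation = grasp_pose.orientation
        return self._move_to(target)

    def grasp(self, grasp_pose):
        """Move to the grasp pose; closing the gripper is hardware-specific
        (the UR3e's gripper driver is not assumed here)."""
        return self._move_to(grasp_pose)
```

In such a design, each primitive would also consult SENSOR_REQUIREMENTS before execution, and the table itself would be filled in from the statistical analyses of which sensors each action actually needs.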