Dynamic Objects Detection and Tracking

Title Dynamic Objects Detection and Tracking
Summary Dynamic Objects Detection and Tracking in Warehouses, Using 3D Sensors.
Programme Mobile and Autonomous Systems
Keywords 3D Sensor, Point Cloud, Obstacle Detection, Obstacle Tracking, Obstacle Avoidance.
TimeFrame October 2017 to June 2018, with possible extension to September 2018
References Petrovskaya, Anna, and Sebastian Thrun. "Model based vehicle detection and tracking for autonomous urban driving." Autonomous Robots 26.2-3 (2009): 123-139.

Wojke, N., and M. Haselich. "Moving vehicle detection and tracking in unstructured environments." Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012. 3082-3087.

Moras, J., V. Cherfaoui, and P. Bonnifait. "A lidar perception scheme for intelligent vehicle navigation." Control Automation Robotics & Vision (ICARCV), 2010 11th International Conference on. IEEE, 2010. 1809-1814.

Golovinskiy, Aleksey, Vladimir G. Kim, and Thomas Funkhouser. "Shape-based recognition of 3D point clouds in urban environments." Computer Vision, 2009 IEEE 12th International Conference on. IEEE, 2009.

Granstrom, K., C. Lundquist, F. Gustafsson, and U. Orguner. "Random Set Methods: Estimation of Multiple Extended Objects." IEEE Robotics & Automation Magazine 21.2 (2014): 73-82.

Data Association and Tracking: A Survey. RoboEarth.

Rusu, Radu Bogdan, and Steve Cousins. "3D is here: Point Cloud Library (PCL)." Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011.

Brostow, Gabriel J., et al. "Segmentation and recognition using structure from motion point clouds." Computer Vision–ECCV 2008. Springer Berlin Heidelberg, 2008. 44-57.

Drost, Bertram, et al. "Model globally, match locally: Efficient and robust 3D object recognition." Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010.

Biasotti, S., B. Falcidieno, D. Giorgi, and M. Spagnuolo. Mathematical Tools for Shape Analysis and Description. 1st ed. Morgan & Claypool, 2014. ISBN 1627053646.

Börcs, Attila, et al. "A Model-based Approach for Fast Vehicle Detection in Continuously Streamed Urban LIDAR Point Clouds." (2014).

Prerequisites Familiarity with filtering techniques (e.g. EKF) for mobile robot localization, image analysis, and programming skills (preferably C++ or Python).
Author
Supervisor Björn Åstrand, Naveed Muhammad
Level Master
Status Open


Concise Description
This project (as a subset of the AIMS project) targets the automation of lift trucks in warehouse environments. Operating automated guided vehicles in this particular environment is challenging due to the high expected throughput and the consequently heavy traffic. Lift trucks are heavy vehicles operating at relatively high speed in an environment where neither the trucks nor the humans are as well protected as in regular urban traffic. This calls for extra safety measures and cautious decisions. Collision avoidance is an essential capability for mobile robots to guarantee safe operation in a workspace shared with humans. This project focuses on the detection and tracking of dynamic objects in order to avoid collisions.
Objective
To reliably detect, segment, and track dynamic objects (e.g. humans and lift trucks) in a 3D point cloud acquired by means of a 3D sensor mounted on a mobile robot, in a highly structured environment (a warehouse).
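As a rough illustration only (not the project's prescribed method), the sketch below uses the Point Cloud Library cited in the references to downsample one scan and group it into candidate object clusters; the input file name scan.pcd, the voxel size, and the cluster thresholds are assumptions made for the example.
```cpp
// Sketch: segment candidate objects from one 3D scan using PCL.
// File name, voxel size, and cluster thresholds are assumptions for the example.
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

int main()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("scan.pcd", *cloud);  // hypothetical recorded scan

  // Downsample so clustering stays tractable on dense warehouse scans.
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::VoxelGrid<pcl::PointXYZ> grid;
  grid.setInputCloud(cloud);
  grid.setLeafSize(0.05f, 0.05f, 0.05f);     // 5 cm voxels (assumed)
  grid.filter(*filtered);

  // Group nearby points into object candidates (humans, trucks, pallets, ...).
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(filtered);
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.30);              // points >30 cm apart split into separate objects (assumed)
  ec.setMinClusterSize(50);
  ec.setMaxClusterSize(100000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(filtered);
  ec.extract(clusters);
  // Each entry of 'clusters' indexes one candidate object within 'filtered'.
  return 0;
}
```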
Research Questions
What is the optimal sensor configuration to minimize blind spots and data loss due to sensor deficiencies, and consequently to improve detection accuracy?
How can the assumption of a structured environment be exploited to improve tracking? (See the plane-removal sketch after this list.)
How could background knowledge of the agent types (humans, manually driven trucks, and automatically guided trucks) and their behaviour models improve tracking?
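One possible way to use the structured-environment assumption, sketched below, is to remove the dominant floor plane with RANSAC before clustering, so that only potential obstacles remain. The helper name removeFloor, the assumption that z points up in the sensor frame, and the numeric thresholds are illustrative choices, not project requirements.
```cpp
// Sketch: exploit the structured-environment assumption by removing the
// dominant (floor) plane before clustering. Frame and thresholds are assumed.
#include <cmath>
#include <Eigen/Core>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
removeFloor(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)  // hypothetical helper
{
  // Fit the dominant plane that is roughly perpendicular to the z axis;
  // in a warehouse this is almost always the floor.
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PERPENDICULAR_PLANE);
  seg.setAxis(Eigen::Vector3f(0.0f, 0.0f, 1.0f));  // assumes z points up in the sensor/robot frame
  seg.setEpsAngle(10.0 * M_PI / 180.0);            // allow 10 degrees of tilt
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.03);                  // 3 cm inlier tolerance (assumed)
  seg.setInputCloud(cloud);

  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
  seg.segment(*inliers, *coefficients);

  // Keep everything that is NOT part of the floor plane.
  pcl::PointCloud<pcl::PointXYZ>::Ptr obstacles(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(inliers);
  extract.setNegative(true);
  extract.filter(*obstacles);
  return obstacles;
}
```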
Preliminary Plan
  • startup: literature review and data acquisition
  • point cloud manipulation, object segmentation, and scene understanding
  • filtering and tracking (see the tracking sketch after this list)
  • [bonus] object recognition
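For the filtering and tracking step, a minimal constant-velocity Kalman filter over a cluster centroid could look like the sketch below (Eigen-based, with assumed process and measurement noise). A complete tracker would also need data association between scans, and could use the EKF mentioned in the prerequisites or the random-set methods from the references.
```cpp
// Sketch: constant-velocity Kalman filter over one cluster centroid (x, y).
// Noise parameters are assumptions; a full tracker also needs data association.
#include <Eigen/Dense>

struct CentroidKF {
  Eigen::Vector4d x = Eigen::Vector4d::Zero();      // state: [px, py, vx, vy]
  Eigen::Matrix4d P = Eigen::Matrix4d::Identity();  // state covariance

  // Predict the state forward by dt seconds under a constant-velocity model.
  void predict(double dt) {
    Eigen::Matrix4d F = Eigen::Matrix4d::Identity();
    F(0, 2) = dt;                                   // px += vx * dt
    F(1, 3) = dt;                                   // py += vy * dt
    Eigen::Matrix4d Q = 0.1 * dt * Eigen::Matrix4d::Identity();  // assumed process noise
    x = F * x;
    P = F * P * F.transpose() + Q;
  }

  // Correct the state with a measured centroid z = [px, py].
  void update(const Eigen::Vector2d& z) {
    Eigen::Matrix<double, 2, 4> H = Eigen::Matrix<double, 2, 4>::Zero();
    H(0, 0) = 1.0;
    H(1, 1) = 1.0;
    Eigen::Matrix2d R = 0.05 * Eigen::Matrix2d::Identity();      // assumed measurement noise
    Eigen::Vector2d innovation = z - H * x;
    Eigen::Matrix2d S = H * P * H.transpose() + R;
    Eigen::Matrix<double, 4, 2> K = P * H.transpose() * S.inverse();  // Kalman gain
    x = x + K * innovation;
    P = (Eigen::Matrix4d::Identity() - K * H) * P;
  }
};
```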
Deliverable
An implementation and demonstration of the developed method for detection and tracking of moving obstacles, evaluated on real data acquired in a real warehouse.
Bonus
A conference publication (e.g. ETFA, ECMR, TAROS).