Difference between revisions of "Adaptive warning field system"


Revision as of 15:25, 22 September 2017

Title: Adaptive warning field system
Summary: Adaptive warning field system
Keywords: 3D perception, mapping
TimeFrame: January 2017 until June 2017, with possible extension until September 2017
References: SAS2 project, http://islab.hh.se/mediawiki/SAS2

ROS - Robot Operating System, http://www.ros.org/

OpenCV - http://opencv.org/

Nemati, Hassan, and Åstrand, Björn (2014). "Tracking of People in Paper Mill Warehouse Using Laser Range Sensor." 2014 UKSim-AMSS 8th European Modelling Symposium (EMS 2014), Pisa, Italy, 21-23 October 2014.

Power, P. Wayne, and Johann A. Schoonees (2002). "Understanding Background Mixture Models for Foreground Segmentation." Proceedings of Image and Vision Computing New Zealand (IVCNZ 2002).

Prerequisites: Image analysis, programming skills (preferably C++ or Python)
Author:
Supervisor: Björn Åstrand
Level: Master
Status: Closed


A central issue for robots and automated guided vehicles (AGVs) is safety: the robot or AGV must not harm humans or damage objects in its environment. Safety concerns have grown as the use of AGVs has spread and as advances in sensor technology, sensor integration, and object detection and avoidance have been more widely adopted. Today's safety systems work only in 2D and consist of a protection field of static size and a speed-adaptive warning field: the higher the speed, the larger the field. In most cases, however, the size of the warning field is hardcoded and tightly bound to the AGV route. Setting up such a system is costly, and adjusting it for proper and efficient operation takes a long time.
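The speed-to-field relation described above can be sketched as a simple stopping-distance calculation. The function name and all parameter values below are illustrative assumptions, not taken from the project or from any AGV safety standard.

```python
def warning_field_length(speed, reaction_time=0.5, max_decel=1.5, margin=0.3):
    """Return an illustrative warning field length (m) for a speed (m/s).

    length = reaction distance + braking distance + fixed safety margin
    (all parameter values are assumed, not from any AGV standard)
    """
    reaction_dist = speed * reaction_time          # travelled before braking
    braking_dist = speed ** 2 / (2.0 * max_decel)  # v^2 / (2a) stopping term
    return reaction_dist + braking_dist + margin

# Higher speed -> larger field:
print(warning_field_length(0.5))  # ~0.63 m at low speed
print(warning_field_length(2.0))  # ~2.63 m at full travel speed
```

In a conventional system these lengths would be precomputed per route segment; the adaptive system sought here would instead adjust the field from what it has learned about the environment.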

The goal of this project [a subset of the SAS2 project] is to develop a safety system based on an adaptive warning field that autonomously learns a foreground model (static and dynamic obstacles) and a background model (static objects). The approach is to use 3D perception and to combine a method that continuously segments foreground from background (e.g., an optical-flow approach or Gaussian mixture models) with a method that uses a geometric map to filter out the foreground. A challenge is how to update/learn the geometric map.
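As a minimal sketch of the segmentation idea, the class below maintains a single running Gaussian per cell — a simplification of the mixture-model approach in the Power and Schoonees reference. All names, thresholds, and learning rates here are illustrative assumptions, not part of the project.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-cell running Gaussian background model: a single-Gaussian
    simplification of the mixture-model idea cited in the references.
    All names, thresholds, and rates are illustrative assumptions."""

    def __init__(self, alpha=0.05, k=2.5, init_var=25.0, min_var=4.0):
        self.alpha = alpha    # learning rate for mean/variance updates
        self.k = k            # foreground threshold in standard deviations
        self.init_var = init_var
        self.min_var = min_var
        self.mean = None
        self.var = None

    def apply(self, frame):
        """Return a boolean foreground mask for one depth/intensity frame."""
        frame = np.asarray(frame, dtype=float)
        if self.mean is None:
            # Bootstrap the model from the first frame.
            self.mean = frame.copy()
            self.var = np.full(frame.shape, self.init_var)
            return np.zeros(frame.shape, dtype=bool)
        # A cell is foreground if it deviates more than k sigma from the mean.
        diff = frame - self.mean
        fg = np.abs(diff) > self.k * np.sqrt(self.var)
        # Update the model only where the cell still looks like background.
        bg = ~fg
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] += self.alpha * (diff[bg] ** 2 - self.var[bg])
        self.var = np.maximum(self.var, self.min_var)  # keep variance bounded
        return fg
```

Feeding the model a stream of frames of the static scene and then a frame containing an obstacle flags only the obstacle cells; the geometric-map filter mentioned in the text would then remove any static structure the segmentation misses.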

Preferably, the solutions are designed as ROS packages (or as C++, Python, or MATLAB code).

Resources: Facilities for data logging (cameras, depth sensors, logging equipment), a dataset from a warehouse, and collaboration with industrial partners.

RQ: How can a volume of interest for safe traversal of the driverless truck be constructed and represented automatically? Which sensors should be used, and how should information from different sources (e.g., other sensors and maps) be integrated? How can the foreground (static and dynamic obstacles) be distinguished from the background model (static objects)?
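One way to frame the map-integration question is as a point classifier: sensor returns that fall on cells occupied in a prior geometric map are explained by static structure, and the rest are obstacle candidates. The sketch below is a hypothetical illustration; the grid layout, resolution, and all names are assumptions, not the project's design.

```python
def classify_points(points, occupancy, resolution=0.1):
    """Split 2D sensor points (in metres) into foreground and background
    using a boolean occupancy grid of the known static environment.
    Grid layout (row = y, col = x) and cell size are assumptions."""
    foreground, background = [], []
    rows, cols = len(occupancy), len(occupancy[0])
    for x, y in points:
        col, row = int(x / resolution), int(y / resolution)
        on_map = 0 <= row < rows and 0 <= col < cols
        if on_map and occupancy[row][col]:
            background.append((x, y))  # explained by a mapped static object
        else:
            foreground.append((x, y))  # unexplained -> obstacle candidate
    return foreground, background

# A mapped shelf at grid cell (row=2, col=3); one return hits it, one does not.
grid = [[False] * 5 for _ in range(5)]
grid[2][3] = True
fg, bg = classify_points([(0.35, 0.25), (0.05, 0.05)], grid)
```

Keeping `occupancy` current as shelves and pallets move is exactly the map update/learning challenge the project identifies.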

WP1: Literature review and construction of a dataset.
WP2: Development of methods for background/foreground model estimation and an adaptive warning field system.
WP3: Evaluation of the feasibility of the system and comparison with existing systems.
WP4: [bonus] Conference publication (ETFA, ECMR, TAROS).

Deliverable: an implementation and demonstration of the developed adaptive warning field system, using data acquired in a real warehouse or mine.