Computationally efficient radar estimation from multiple radar sensors

Title: Computationally efficient radar estimation from multiple radar sensors
Summary: Computationally efficient radar estimation from multiple radar sensors.
Keywords: Radar, point clouds, sensor fusion
TimeFrame: ASAP, 6 months
References: Test
Prerequisites: Test
Author: Johan Thunberg, Emil Nilsson
Supervisor: Johan Thunberg, Emil Nilsson, Pererik Andreasson
Level: Flexible
Status: Open

Radar is emerging as a non-invasive alternative in applications such as healthcare monitoring. Key features such as pulse, respiration, and movement patterns can be detected using spectral methods. A natural question in this context is how the estimation of such features can be improved by using multiple radar sensors. In computer vision, the benefits of using multiple cameras over a single one are well known: the data obtained from the different cameras can be combined to improve the accuracy of the feature estimates. The extension to multiple sensors comes at a price, however. The data obtained from the different sensors need to be synchronized, or equivalently, associated or put into correspondence, which constitutes a new problem not present with a single sensor.
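
As an illustration of the kind of spectral method referred to above, the following Matlab sketch estimates a respiration rate from a simulated slow-time radar phase signal. The signal model, sampling rate, and frequency band are illustrative assumptions, not project specifications.

% Minimal sketch (hypothetical signal): estimate a respiration rate from
% a radar slow-time phase signal using the FFT. All parameter values are
% illustrative assumptions.
fs = 20;                          % slow-time sampling rate [Hz] (assumed)
t  = (0:1/fs:60)';                % 60 s observation window
f_resp = 0.25;                    % simulated respiration frequency [Hz]
x  = sin(2*pi*f_resp*t) + 0.3*randn(size(t));  % phase signal plus noise

N  = 2^nextpow2(numel(x));        % zero-pad for a denser frequency grid
X  = abs(fft(x - mean(x), N));    % remove DC, take magnitude spectrum
f  = (0:N-1)' * fs / N;           % frequency axis [Hz]

band = f > 0.1 & f < 0.5;         % plausible respiration band (assumed)
[~, k] = max(X .* band);          % strongest component within the band
fprintf('Estimated respiration rate: %.2f breaths/min\n', 60*f(k));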

In computer vision, such synchronization algorithms have been well studied and can be applied at different steps in the feature detection process. Radar data, however, differs from the image data used in computer vision. There is a need to understand, on the one hand, whether algorithms from computer vision can be efficiently applied to such data and, on the other, what new methods need to be developed. To answer these questions we propose the following project:
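
As a simple example of one aspect of the synchronization problem, the following Matlab sketch estimates the time offset between two simulated sensor streams by cross-correlating a feature signal observed by both. The signals and offset are stand-ins, xcorr requires the Signal Processing Toolbox, and this covers only temporal alignment; the broader association problem for radar data is exactly what the project is meant to investigate.

% Minimal sketch (assumed setup): align two sensor streams in time by
% cross-correlating a common feature signal, e.g., the scene activity
% each sensor observes. The signals here are simulated stand-ins.
fs  = 20;                                 % common sampling rate [Hz] (assumed)
s   = randn(1200, 1);                     % shared scene activity, 60 s
lag = 13;                                 % unknown offset of sensor 2 [samples]
r1  = s + 0.2*randn(size(s));             % sensor 1 observation
r2  = [zeros(lag,1); s(1:end-lag)] + 0.2*randn(size(s));  % delayed sensor 2

[c, lags] = xcorr(r2, r1);                % cross-correlation over all lags
[~, k] = max(c);                          % peak gives the relative delay
est = lags(k);                            % estimated offset [samples]
fprintf('Estimated offset: %d samples (%.2f s)\n', est, est/fs);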

A multi-radar testbed shall be developed. The radar sensors are placed at different positions and with different orientations, observing a common scene. The scene can be either static (nothing is moving) or dynamic (a person or an object moves). The data from the multiple sensors will be used to evaluate the algorithms. In order to benchmark the algorithms, additional information will be used to obtain the ground truth of the scenes in question. Such information is obtained by using regular cameras on the one hand, and knowledge about the objects' geometry on the other. This means that, all in all, the testbed contains multiple radars, multiple cameras, and objects with (at least partly) known geometry. The suggested development environment is Matlab; however, the data format and storage are yet to be determined. The main output of the project is a data set with radar data and camera data that can be used for algorithm evaluation. A secondary goal, which is not a requirement for completing the project, is a basic implementation and evaluation of some feature estimation algorithm.
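
Since the data format and storage are yet to be determined, the following Matlab sketch shows one possible recording layout for the data set. All field names, sensor counts, and array dimensions are assumptions for illustration only, with placeholder data standing in for real recordings.

% Minimal sketch of one possible recording format (all fields assumed).
rec.timestamp = datetime('now');           % recording start time
rec.fs_radar  = 20;                        % radar frame rate [Hz] (assumed)
rec.radar     = cell(1, 3);                % one entry per radar sensor
rec.radar{1}  = complex(randn(256,1200), randn(256,1200)); % range bins x frames (placeholder)
rec.camera    = cell(1, 2);                % one entry per camera
rec.camera{1} = zeros(480, 640, 3, 10, 'uint8'); % H x W x RGB x frames (placeholder)
rec.poses     = repmat(eye(4), 1, 1, 3);   % 4x4 sensor poses (assumed known)
rec.scene     = 'static';                  % 'static' or 'dynamic'

save('recording_001.mat', 'rec', '-v7.3'); % -v7.3 handles large arrays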