Check out our second newsletter!

10.07.2019

 

Air-to-ground channel models and simulators

Based on the theoretical and simulation-based analyses presented during the previous meeting, we estimated the influence of aerial users of cellular networks on existing cellular and satellite TV services. Based on these works, we provide guidelines for drone and infrastructure deployment: i) BS deployment height, ii) BS antenna tilt, and iii) the required separation distance between the drone and the satellite dish.

 

Coverage-aware Path Planning for UAVs

To operate UAVs safely, wireless coverage is of utmost importance. Currently deployed cellular networks often exhibit inadequate performance for aerial users due to high inter-cell interference. For UAV trajectory planning, wireless coverage should therefore be taken into account to mitigate interference and to lower the risk of dangerous connectivity outages. In our research, several path planning strategies are proposed and evaluated to optimize wireless coverage for UAVs.

A simulator using a real-life 3D map is used to evaluate the proposed algorithms for both 4G and 5G scenarios. We show that the proposed Coverage-Aware A* algorithm, which adapts the UAV's flying altitude, improves the mean SINR by 3-4 dB and lowers the cellular outage probability by a factor of 10. Furthermore, the outages that still occur are 60% shorter, hence posing a lower risk of harmful accidents.
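To make the idea concrete, below is a minimal sketch of a coverage-aware A* search on a 3D grid. It is not the project's actual implementation: the grid, the hypothetical sinr_map lookup (predicted SINR per cell) and the penalty values are illustrative assumptions.

```python
# Hedged sketch: A* over (x, y, altitude) cells where the step cost penalizes
# cells whose predicted SINR falls below an outage threshold, so the planner
# prefers detours (e.g. a different altitude) over connectivity outages.
import heapq
import math

def coverage_aware_astar(start, goal, sinr_map, grid_shape,
                         sinr_threshold=-6.0, outage_penalty=50.0):
    def heuristic(a, b):
        return math.dist(a, b)

    def neighbors(node):
        x, y, z = node
        for dx, dy, dz in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            nx, ny, nz = x + dx, y + dy, z + dz
            if 0 <= nx < grid_shape[0] and 0 <= ny < grid_shape[1] and 0 <= nz < grid_shape[2]:
                yield (nx, ny, nz)

    def step_cost(node):
        # Moving into a low-SINR cell is allowed but heavily penalized.
        return 1.0 + (outage_penalty if sinr_map[node] < sinr_threshold else 0.0)

    open_set = [(heuristic(start, goal), start)]
    g = {start: 0.0}
    came_from = {}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for nb in neighbors(current):
            tentative = g[current] + step_cost(nb)
            if tentative < g.get(nb, float("inf")):
                g[nb] = tentative
                came_from[nb] = current
                heapq.heappush(open_set, (tentative + heuristic(nb, goal), nb))
    return None  # no path found
```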

 

Drone detection

We presented the concept of a UAV-mounted passive radar. Since the radar has no active transmitter and uses signals transmitted by illuminators of opportunity (IOO), it is a low-cost, lightweight, low-power solution. In this work, the detection performance of a drone-mounted passive radar is presented for various settings in terms of targets, wireless propagation, realistic antenna patterns and signal processing.
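As an illustration of the core passive-radar operation, the sketch below computes a cross-ambiguity function between a reference channel (the direct IOO signal) and a surveillance channel; peaks over delay and Doppler indicate reflecting targets. The signal names and parameters are assumptions, not the project's actual processing chain, and a practical implementation would use FFT-based correlation instead of this brute-force loop.

```python
import numpy as np

def cross_ambiguity(reference, surveillance, fs, max_delay, doppler_bins):
    """Return |CAF| over (Doppler, delay); fs is the sample rate in Hz."""
    n = len(reference)
    t = np.arange(n) / fs
    caf = np.zeros((len(doppler_bins), max_delay))
    for i, fd in enumerate(doppler_bins):
        # Doppler-shift the reference copy of the IOO signal ...
        shifted = reference * np.exp(2j * np.pi * fd * t)
        # ... and correlate it against the surveillance channel for each delay.
        for tau in range(max_delay):
            caf[i, tau] = np.abs(np.vdot(shifted[:n - tau], surveillance[tau:]))
    return caf
```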

 

Design of a 720° Camera System

The design of a 720° camera system requires a good balance between overlapping pixel resolution and lens distortions. We have learned from WP1 that an optimal setup would be a 4-camera system with 190°-220° lenses. The mechanical framework should support 4 Ximea camera sensors with lens mounts, as well as an NVIDIA Jetson AGX Xavier development kit, a 4-port USB3 or PCI Express interface and optionally a battery kit. The proposed cameras have been tested and the software is able to access the captured frames. The design of the mechanical rig is ongoing and a first iteration of a stable lens mount is finished. Ideally, the lens mounts can be adapted with retractable arms based on the span width of the drone. If the cameras are too close to each other, the rotors of the drone will partially occlude the omnidirectional view. If the mechanical framework is able to adjust the baseline, the cameras can be positioned in the sweet spot with minimal occlusions and maximal overlap.

Depth-Aware Stitching

A big challenge of the capturing system is to combine the frames of the 4 individual cameras of the 720° camera setup into one stitched omnidirectional frame. The hardware setup consists of multiple cameras with different camera centers, so parallax effects occur in the captured video frames because of the small changes in viewpoint between cameras. Naïve panoramic stitching algorithms often ignore this relative change in position and, as a result, cause stitching artifacts in the generated omnidirectional frames. We explored how plane sweeping algorithms can be adapted to work on spherical images and used to improve panoramic stitching. Each camera frame is reprojected onto all candidate concentric depth planes, and the per-pixel depth with the smallest reprojection error is selected as a good depth candidate. The output is rudimentary depth information that can guide the stitching algorithm and reduce the number of stitching artifacts. Unfortunately, we have observed that the resulting depth estimates are only reliable in controlled environments and are noisy otherwise. To reduce this noise, a bilateral filter in the spatial and depth domains can be applied, at the cost of more processing power. Optionally, the depth range can be further reduced by adding a low-resolution omnidirectional depth sensor, e.g. a LIDAR.
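The sketch below outlines the sphere-sweeping idea for grayscale equirectangular frames. It is an assumption-laden outline rather than the implemented pipeline: warp_to_reference is a hypothetical helper that reprojects one camera's frame onto a concentric sphere of a given radius around the reference viewpoint, and per-pixel variance across cameras stands in for the reprojection error.

```python
import numpy as np

def sphere_sweep_depth(frames, poses, depth_candidates, warp_to_reference):
    """frames: list of HxW equirectangular images; poses: camera poses w.r.t. the reference."""
    h, w = frames[0].shape[:2]
    cost_volume = np.zeros((len(depth_candidates), h, w))
    for d_idx, radius in enumerate(depth_candidates):
        # Reproject every camera onto the same concentric sphere ...
        warped = [warp_to_reference(f, p, radius) for f, p in zip(frames, poses)]
        # ... and use the per-pixel variance across cameras as reprojection error.
        cost_volume[d_idx] = np.var(np.stack(warped), axis=0)
    # The depth plane with the smallest error is the per-pixel depth candidate.
    best = np.argmin(cost_volume, axis=0)
    return np.asarray(depth_candidates)[best]
```

A bilateral filter over the resulting depth map (or over the cost volume) would then smooth the noisy estimates mentioned above, at additional compute cost.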

 

Current work: Collaborative Recording

The capturing system will output one or more high resolution omnidirectional videos. The goal of the collaborative recording tool is to generate a new video by automatically deciding on the best viewpoint and virtual framing. As a first proof of concept, we have started working on the waggle conference data. The position of the conference speaker is obtained by integrating the lightnet YOLO person detection (by EAVISE). Based on this naive bounding box information, an initial virtual framing can be achieved. However, to make the generated video sequence less noisy and more cinematographically pleasing, more advanced frame selection and camera selection criteria are being explored. We therefore made an overview of the relevant techniques (e.g. rule of thirds, camera motions, etc.) based on popular literature. The next step is to integrate all the important effects in a large custom cost function. The general idea is to move gradually towards a local minimum of this cost function by changing the framing, camera selection and camera position. This automatic director will run in a separate process, so it can communicate with both the real-time or offline captured cameras and possibly the drone operator.
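Below is a hedged sketch of what such a cost function could look like: one scalar score combines several cinematographic criteria, and each frame the director evaluates small perturbations of the current framing and keeps the cheapest one. The terms and weights are illustrative assumptions, not the project's final criteria.

```python
def framing_cost(frame_center, frame_size, subject_box, prev_center,
                 w_thirds=1.0, w_margin=1.0, w_motion=0.5):
    """Lower is better; combines rule of thirds, subject margin and motion smoothness."""
    cx, cy = frame_center
    sx, sy, sw, sh = subject_box  # subject bounding box (x, y, width, height)
    subject_cx, subject_cy = sx + sw / 2, sy + sh / 2

    # Rule of thirds: prefer the subject near a vertical thirds line of the virtual frame.
    thirds_cost = min(abs(subject_cx - (cx - frame_size[0] / 6)),
                      abs(subject_cx - (cx + frame_size[0] / 6))) / frame_size[0]

    # Keep the subject inside the frame with some margin.
    margin_cost = max(0.0, abs(subject_cx - cx) - 0.4 * frame_size[0]) \
                + max(0.0, abs(subject_cy - cy) - 0.4 * frame_size[1])

    # Penalize jerky virtual camera motion between consecutive frames.
    motion_cost = ((cx - prev_center[0]) ** 2 + (cy - prev_center[1]) ** 2) ** 0.5

    return w_thirds * thirds_cost + w_margin * margin_cost + w_motion * motion_cost
```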

 

 

Breaking high-res CNN bandwidth barriers

Convolutional Neural Networks (CNNs) have recently started to reach impressive performance on non-classification image processing tasks, such as denoising, demosaicing, super-resolution or super-slow motion. Consequently, and also within Omnidrone, CNNs are increasingly deployed on very high resolution images. However, the resulting high resolution feature maps pose unprecedented requirements on the memory system of neural network processors, as on-chip memories are too small to store high resolution feature maps, while off-chip memories are very costly in terms of I/O bandwidth and power.

Within MICAS, we developed a dataflow scheduler that reorganizes the classical layer-by-layer inference approach, typical for neural networks, into a depth-first execution scheme. In this scheme, internal feature data points are consumed as quickly as possible after they are generated, limiting the lifetime of internal variables and hence the amount of embedded system memory required. This scheduler was further enhanced by applying tiling. We demonstrate that this depth-first computation with line buffers reduces I/O bandwidth requirements by more than 200×, or on-chip memory requirements by more than 10000×, compared to the standard dataflow.
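The sketch below illustrates the principle with a toy chain of 3x3 convolutions: instead of materializing a full feature map per layer, each layer keeps only the three input rows its kernel needs in a small line buffer, and rows flow through the whole network as soon as they are available. It is a conceptual, assumption-based illustration (single-channel, valid convolution in height, zero padding in width), not the MICAS scheduler itself.

```python
import numpy as np
from collections import deque

def conv3x3_row(rows, kernel):
    """Compute one output row from the 3 buffered input rows (zero-padded in width)."""
    stacked = np.stack(rows)                    # 3 x W
    padded = np.pad(stacked, ((0, 0), (1, 1)))  # pad width only
    w = stacked.shape[1]
    return np.array([np.sum(padded[:, c:c + 3] * kernel) for c in range(w)])

def depth_first_inference(image, kernels):
    """image: H x W input; kernels: list of 3x3 kernels, one per layer."""
    h, w = image.shape
    # One small 3-row line buffer per layer instead of a full feature map per layer.
    buffers = [deque(maxlen=3) for _ in kernels]
    outputs = []
    for r in range(h):
        row = image[r]
        for layer, kernel in enumerate(kernels):
            buffers[layer].append(row)
            if len(buffers[layer]) < 3:
                row = None  # this layer cannot produce an output row yet
                break
            row = conv3x3_row(list(buffers[layer]), kernel)
        if row is not None:
            outputs.append(row)
    return np.stack(outputs) if outputs else np.empty((0, w))
```

The memory footprint is a few rows per layer rather than a full high-resolution feature map per layer, which is where the bandwidth and on-chip memory savings come from.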

 

Towards depth-first processing chip

The depth-first processing approach cannot be fully exploited on GPU due to its built-in parallelization scheme. It can be exploited on FPGA, for which an implementation is currently being built at MICAS for a real-time, high resolution depth-extraction neural network (SPyNet) application. While the FPGA platform already shows the benefits of depth-first computation, it suffers from the reduced energy efficiency typical of FPGAs. To overcome this, we are currently pursuing a silicon implementation of a processor chip optimized for depth-first network execution. The tape-out, in GlobalFoundries 12 nm FinFET technology, is planned for the fall of 2019.

 

Towards scalable neural network execution: multi-scale and dynamic neural networks

The MICAS team explores approaches to scale the hardware resource consumption with the instantaneous difficulty of the executed task, which can be data- and time-dependent. This involves the use of multi-scale and dynamic neural networks, in combination with run-time reconfigurable hardware platforms. One prototype involves a scalable depth-extraction algorithm using neural networks (based on SPyNet). The network can be executed with a selected number of scales, as a function of the available compute time. In parallel, other run-time pruning methods based on dynamic neural networks are being explored.
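The sketch below shows the scale-selectable execution idea in the spirit of a SPyNet-like pyramid: coarse scales run first, and finer scales are only executed while the compute budget allows, each refining the previous estimate. The refinement modules and the upsampling helper are placeholders for illustration, not the prototype's actual modules.

```python
import numpy as np

def upsample2x(estimate):
    # Nearest-neighbour upsampling of a coarse estimate (placeholder for a real upsampler).
    return np.repeat(np.repeat(estimate, 2, axis=0), 2, axis=1) * 2.0

def scalable_pyramid_inference(pyramid_inputs, refine_modules, num_scales):
    """pyramid_inputs/refine_modules: ordered coarsest to finest; num_scales: scales to run."""
    estimate = None
    for scale in range(min(num_scales, len(refine_modules))):
        if estimate is not None:
            estimate = upsample2x(estimate)  # warm-start the finer scale
        estimate = refine_modules[scale](pyramid_inputs[scale], estimate)
    return estimate  # coarser but cheaper when num_scales is small
```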

 

 

Contacts:

Kasteelpark Arenberg 10

3001, Leuven