ISSN No: 2349-2287 (P) | E-ISSN: 2349-2279 (O) | E-mail: editor@ijiiet.com
Authors: P. Ashwini, V. Chiranjeevi, S. Nagamani
Abstract:
Applying our proposed neural network architecture to UAV footage captured in static environments, we extract depth maps from stabilized monocular footage. A new synthetic navigation dataset is used for training; it simulates aerial imagery of rigid scenes captured with a gimbal-stabilized monocular camera. Building on this network, we propose a multi-range architecture for unrestricted UAV flight that uses flight data from on-board sensors to produce accurate depth maps of unobstructed outdoor settings. We evaluate our approach on both simulated and real-world UAV flight data. Quantitative results on synthetic scenes with a small amount of orientation noise demonstrate that our multi-range architecture improves depth inference. Figure 1 (a)–(c) illustrates three options for stabilizing a camera: a mechanical gimbal, dynamic cropping from a fish-eye lens, or a handheld camera. An accompanying video provides a more in-depth look.
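The abstract does not detail the network itself, but the underlying geometry it exploits is standard: with a stabilized monocular camera and a known translation between frames (recoverable from UAV flight data), per-pixel depth follows the stereo-from-motion relation depth = focal_length × baseline / disparity. The sketch below is a minimal illustration of that relation only, with hypothetical numbers; it is not the authors' architecture.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Per-pixel depth (metres) from disparity (pixels), given a known
    focal length (pixels) and inter-frame baseline (metres).

    Pixels with zero or negative disparity carry no depth signal and
    are mapped to infinity."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return np.where(
        disparity_px > 0,
        focal_px * baseline_m / np.maximum(disparity_px, 1e-6),
        np.inf,
    )

# Hypothetical example: 800 px focal length, 0.5 m translation between
# stabilized frames. Farther objects produce smaller disparities.
d = depth_from_disparity(np.array([4.0, 8.0, 16.0]),
                         focal_px=800.0, baseline_m=0.5)
print(d)  # depths in metres: 100.0, 50.0, 25.0
```

A multi-range design, as described in the abstract, would evaluate this kind of relation at several baselines so that both near and far scene content yields measurable disparity.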