Author

Bernhard Rinner

Bio: Bernhard Rinner is an academic researcher from Alpen-Adria-Universität Klagenfurt. The author has contributed to research on topics including smart cameras and wireless sensor networks. The author has an h-index of 35 and has co-authored 242 publications receiving 4,819 citations. Previous affiliations of Bernhard Rinner include Vienna University of Technology and Graz University of Technology.


Papers
Journal ArticleDOI
TL;DR: This work designs a smart camera as a fully embedded system, focusing on power consumption, QoS management, and limited resources, and combines several such cameras into a distributed embedded surveillance system that supports cooperation and communication among cameras.
Abstract: Recent advances in computing, communication, and sensor technology are pushing the development of many new applications. This trend is especially evident in pervasive computing, sensor networks, and embedded systems. Smart cameras, one example of this innovation, are equipped with a high-performance onboard computing and communication infrastructure, combining video sensing, processing, and communications in a single embedded device. By providing access to many views through cooperation among individual cameras, networks of embedded cameras can potentially support more complex and challenging applications - including smart rooms, surveillance, tracking, and motion analysis - than a single camera. We designed our smart camera as a fully embedded system, focusing on power consumption, QoS management, and limited resources. The camera is a scalable, embedded, high-performance, multiprocessor platform consisting of a network processor and a variable number of digital signal processors (DSPs). Using the implemented software framework, our embedded cameras offer system-level services such as dynamic load distribution and task reconfiguration. In addition, we combined several smart cameras to form a distributed embedded surveillance system that supports cooperation and communication among cameras.
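
The platform described above offers system-level services such as dynamic load distribution across the DSPs. The snippet below is a minimal, hypothetical sketch of one such policy: each incoming vision task is greedily placed on the least-loaded DSP. All names and the load model are illustrative assumptions, not the authors' actual framework.

# Hypothetical sketch: greedy dynamic load distribution of vision tasks onto DSPs.
from dataclasses import dataclass, field

@dataclass
class Dsp:
    name: str
    load: float = 0.0                      # fraction of cycles already committed
    tasks: list = field(default_factory=list)

def assign(task: str, cost: float, dsps: list) -> Dsp:
    """Place a task on the DSP with the most headroom (greedy heuristic)."""
    target = min(dsps, key=lambda d: d.load)
    target.tasks.append(task)
    target.load += cost
    return target

dsps = [Dsp("dsp0"), Dsp("dsp1")]
for task, cost in [("background_subtraction", 0.4), ("tracking", 0.3), ("encoding", 0.5)]:
    assign(task, cost, dsps)
print([(d.name, d.tasks, round(d.load, 2)) for d in dsps])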

302 citations

Journal ArticleDOI
01 Jan 2018
TL;DR: A high-level architecture is described for a collaborative aerial system consisting of drones with on-board sensors and embedded processing, coordination, and networking capabilities; the system has potential in disaster assistance, search and rescue, and aerial monitoring.
Abstract: Small drones are being utilized in monitoring, transport, safety and disaster management, and other domains. Envisioning that drones form autonomous networks incorporated into the air traffic, we describe a high-level architecture for the design of a collaborative aerial system consisting of drones with on-board sensors and embedded processing, coordination, and networking capabilities. We implement a multi-drone system consisting of quadcopters and demonstrate its potential in disaster assistance, search and rescue, and aerial monitoring. Furthermore, we illustrate design challenges and present potential solutions based on the lessons learned so far.
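
One recurring coordination task in such a multi-drone system is dividing a monitoring area among the available quadcopters. The sketch below shows the simplest possible variant, an equal-width strip partition of a rectangular survey area; it only illustrates the flavor of such an assignment and is not the coordination scheme used by the authors.

# Illustrative only: split a rectangular survey area into one strip per drone.
def partition_area(x_min, x_max, y_min, y_max, n_drones):
    """Return one (x0, x1, y_min, y_max) strip per drone."""
    width = (x_max - x_min) / n_drones
    return [(x_min + i * width, x_min + (i + 1) * width, y_min, y_max)
            for i in range(n_drones)]

for drone_id, strip in enumerate(partition_area(0, 90, 0, 60, 3)):
    print(f"drone {drone_id}: survey strip {strip}")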

277 citations

Journal ArticleDOI
TL;DR: This paper reports on ongoing research on deploying small-scale, battery-powered, wirelessly connected UAVs carrying cameras for disaster management applications and discusses the challenges of such aerial sensor networks.
Abstract: Advances in control engineering and material science have made it possible to develop small-scale unmanned aerial vehicles (UAVs) equipped with cameras and sensors. These UAVs enable us to obtain a bird's eye view of the environment. Having access to an aerial view over large areas is helpful in disaster situations, where often only incomplete and inconsistent information is available to the rescue team. In such situations, airborne cameras and sensors are valuable sources of information helping us to build an "overview" of the environment and to assess the current situation. This paper reports on our ongoing research on deploying small-scale, battery-powered and wirelessly connected UAVs carrying cameras for disaster management applications. In this "aerial sensor network" several UAVs fly in formations and cooperate to achieve a certain mission. The ultimate goal is to have an aerial imaging system in which UAVs build a flight formation, fly over a disaster area such as a wood fire or a large traffic accident, and deliver high-quality sensor data such as images or videos. These images and videos are communicated to the ground, fused, analyzed in real time, and finally delivered to the user. In this paper, we introduce our aerial sensor network and its application in disaster situations. We discuss challenges of such aerial sensor networks and focus on the optimal placement of sensors. We formulate the coverage problem as an integer linear program (ILP) and present initial evaluation results.
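
The abstract states that the sensor-placement (coverage) problem is cast as an ILP. The paper's exact model is not reproduced here; the following is a generic maximum-coverage sketch with made-up data, expressed with the PuLP library: binary variables select camera positions under a placement budget, and a ground point counts as covered only if some selected position observes it.

# Generic maximum-coverage ILP sketch (assumption: not the paper's exact model).
from pulp import LpProblem, LpVariable, LpMaximize, lpSum

# Hypothetical data: candidate camera positions and the ground points each observes.
covers = {"p0": {0, 1}, "p1": {1, 2, 3}, "p2": {3, 4}}
points = sorted(set().union(*covers.values()))
budget = 2                                   # at most two sensors may be placed

prob = LpProblem("coverage", LpMaximize)
x = {p: LpVariable(f"x_{p}", cat="Binary") for p in covers}   # place a sensor at p?
y = {i: LpVariable(f"y_{i}", cat="Binary") for i in points}   # is point i covered?

prob += lpSum(y.values())                    # objective: maximize covered points
prob += lpSum(x.values()) <= budget          # placement budget
for i in points:                             # a point is covered only if some
    prob += y[i] <= lpSum(x[p] for p in covers if i in covers[p])   # chosen position sees it

prob.solve()
print("chosen positions:", [p for p in covers if x[p].value() == 1])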

253 citations

Proceedings ArticleDOI
18 May 2015
TL;DR: A modular architecture of an autonomous unmanned aerial vehicle (UAV) system for search and rescue missions is proposed; the system is capable of providing a real-time video stream from a UAV to one or more base stations using a wireless communications infrastructure.
Abstract: This paper proposes and evaluates a modular architecture of an autonomous unmanned aerial vehicle (UAV) system for search and rescue missions. Multiple multicopters are coordinated using a distributed control system. The system is implemented in the Robot Operating System (ROS) and is capable of providing a real-time video stream from a UAV to one or more base stations using a wireless communications infrastructure. The system supports a heterogeneous set of UAVs and camera sensors. If necessary, an operator can intervene and reduce the level of autonomy. The system has been tested in an outdoor mission serving as a proof of concept. Some insights from these tests are described in the paper.
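
Since the system is built on ROS and streams live video to base stations, a minimal rospy node gives a feel for the base-station side. The topic name, node name, and message handling below are assumptions made for illustration; the paper's actual node layout is not given here.

# Minimal ROS (rospy) sketch of a base-station node subscribing to a UAV camera stream.
# Topic and node names are hypothetical.
import rospy
from sensor_msgs.msg import Image

def on_frame(msg):
    # Log basic frame metadata for each received image message.
    rospy.loginfo("frame %dx%d, encoding=%s", msg.width, msg.height, msg.encoding)

if __name__ == "__main__":
    rospy.init_node("base_station_viewer")
    rospy.Subscriber("/uav0/camera/image_raw", Image, on_frame)
    rospy.spin()   # process incoming frames until shutdown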

219 citations

Journal ArticleDOI
17 Oct 2008
TL;DR: It is argued that distributed smart cameras represent key components for future embedded computer vision systems and that smart cameras will become an enabling technology for many new applications.
Abstract: Distributed smart cameras (DSCs) are real-time distributed embedded systems that perform computer vision using multiple cameras. This new approach has emerged thanks to a confluence of simultaneous advances in four key disciplines: computer vision, image sensors, embedded computing, and sensor networks. Processing images in a network of distributed smart cameras introduces several complications. However, we believe that the problems DSCs solve are much more important than the challenges of designing and building a distributed video system. We argue that distributed smart cameras represent key components for future embedded computer vision systems and that smart cameras will become an enabling technology for many new applications. We summarize smart camera technology and applications, discuss current trends, and identify important research challenges.

209 citations


Cited by
Journal ArticleDOI


08 Dec 2001, BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at first: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
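
As a concrete illustration of the fourth category above (per-user customization), the sketch below learns a toy mail filter from examples a user has already kept or rejected. The data and pipeline are invented for illustration and use scikit-learn; the article itself prescribes no particular implementation.

# Toy example: learn a per-user mail filter from past keep/reject decisions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

mails = ["cheap meds buy now", "meeting moved to 3pm",
         "win a free prize now", "draft of the paper attached"]
labels = ["reject", "keep", "reject", "keep"]      # what this user did with each mail

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(mails, labels)                    # learn the filtering rules from examples
print(filter_model.predict(["free prize meds", "paper meeting at 3pm"]))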

13,246 citations

Journal ArticleDOI
TL;DR: In this article, a review of deep learning-based object detection frameworks is provided, focusing on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further.
Abstract: Due to object detection’s close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy, and optimization function. In this paper, we provide a review of deep learning-based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely, the convolutional neural network. Then, we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection, and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network-based learning systems.
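
To make the shift from handcrafted features to learned ones tangible, the snippet below runs one of the generic deep detectors the review covers, a pretrained Faster R-CNN from torchvision (version 0.13 or later assumed). It is a generic usage sketch, not tied to any particular method evaluated in the paper.

# Generic usage sketch: run a pretrained deep object detector on a dummy image.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)          # stand-in for a real RGB image scaled to [0, 1]
with torch.no_grad():
    (pred,) = model([image])             # list of images in, list of prediction dicts out
print(pred["boxes"].shape, pred["labels"][:5], pred["scores"][:5])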

3,097 citations

01 Jan 2003

3,093 citations

01 Jan 1990
TL;DR: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.
Abstract: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.
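
For readers unfamiliar with the algorithm the overview covers, the following compact NumPy sketch shows the core self-organizing map update: find the best-matching unit for an input and pull it toward that input. A full SOM also updates grid neighbors with a decaying neighborhood function, which is omitted here for brevity.

# Compact sketch of the core SOM update (neighborhood and decay omitted).
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((10, 3))            # 10 map units, 3-dimensional inputs
lr = 0.1                                 # learning rate

def som_step(x, weights, lr):
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    weights[bmu] += lr * (x - weights[bmu])                 # move BMU toward the input
    return bmu

for x in rng.random((100, 3)):           # stream of toy inputs
    som_step(x, weights, lr)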

2,933 citations