Other affiliations: Cork Institute of Technology, University of Bremen, University of Surrey
Bio: Dirk Pesch is an academic researcher at University College Cork. His research focuses on wireless sensor networks and wireless networks. He has an h-index of 29 and has co-authored 211 publications receiving 2732 citations. Previous affiliations of Dirk Pesch include Cork Institute of Technology and the University of Bremen.
Papers published on a yearly basis
08 Jan 2018
TL;DR: This paper considers an emerging non-wearable fall detection approach based on WiFi Channel State Information (CSI), which uses the conventional Short-Time Fourier Transform to extract time-frequency features and a sequential forward selection algorithm to single out features that are resilient to environment changes while maintaining a high fall detection rate.
Abstract: Falling or tripping among elderly people living on their own is recognized as a major public health worry that can even lead to death. Fall detection systems that alert caregivers, family members or neighbours can potentially save lives. In the past decade, an extensive amount of research has been carried out to develop fall detection systems based on a range of different detection approaches, i.e., wearable and non-wearable sensing and detection technologies. In this paper, we consider an emerging non-wearable fall detection approach based on WiFi Channel State Information (CSI). Previous CSI based fall detection solutions have considered only time domain approaches. Here, we take an altogether different direction, time-frequency analysis as used in radar fall detection. We use the conventional Short-Time Fourier Transform (STFT) to extract time-frequency features and a sequential forward selection algorithm to single out features that are resilient to environment changes while maintaining a high fall detection rate. When pre-trained, our system achieves 93% accuracy, which is a 12% and 15% improvement over RTFall and CARM respectively. When the environment changes, our system still maintains an average accuracy close to 80%, an improvement of 20% to 30% over RTFall and 5% to 15% over CARM.
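The STFT-based pipeline described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: the CSI stream is synthetic, and the sampling rate, window length, and the two features chosen (spectral energy and spectral centroid) are assumptions standing in for the paper's feature set.

```python
# Illustrative time-frequency feature extraction for CSI-based fall
# detection: STFT over a (synthetic) CSI amplitude stream, then simple
# per-frame features that a forward-selection step could choose among.
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)
fs = 100                       # assumed CSI sampling rate (Hz)
csi = rng.normal(size=fs * 4)  # stand-in for one CSI amplitude stream

# Short-Time Fourier Transform: time-frequency representation
f, t, Z = stft(csi, fs=fs, nperseg=64)
power = np.abs(Z) ** 2

# Two example time-frequency features per frame (assumed choices):
spectral_energy = power.sum(axis=0)
centroid = (f[:, None] * power).sum(axis=0) / power.sum(axis=0)

features = np.stack([spectral_energy, centroid], axis=1)
print(features.shape)  # (n_frames, 2)
```

A sequential forward selection step would then greedily add, one at a time, whichever candidate feature most improves detection accuracy across environments.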
TL;DR: TS-LoRa is proposed, an approach that tackles overheads of LoRaWAN by allowing devices to self-organise and determine their slot positions in a frame autonomously and only one dedicated slot in each frame is used to ensure global synchronisation and handle acknowledgements.
Abstract: Automation and data capture in manufacturing, known as Industry 4.0, requires the deployment of a large number of wireless sensor devices in industrial environments. These devices have to be connected via a reliable, low-latency, low-power and low operating-cost network. Although LoRaWAN provides a low-power and reasonable-cost network technology, its current ALOHA-based MAC protocol limits its scalability and reliability. A common practice in wireless networks to address this issue and improve scalability is the use of time-slotted communications. However, any time-slotted approach comes with overheads to compute and disseminate the transmission schedule in addition to ensuring global time synchronisation. Affording these overheads is not straightforward given LoRaWAN's restrictions on radio duty-cycle and downlink availability. Therefore, in this work, we propose TS-LoRa, an approach that tackles these overheads by allowing devices to self-organise and determine their slot positions in a frame autonomously. In addition, only one dedicated slot in each frame is used to ensure global synchronisation and handle acknowledgements. Our experimental results with 25 nodes show that TS-LoRa can achieve more than 99% packet delivery ratio even for the most distant nodes. Moreover, our simulations with a higher number of nodes revealed that TS-LoRa exhibits a lower energy consumption than the confirmable version of LoRaWAN while not compromising the packet delivery ratio.
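The core idea of self-organised slot assignment can be sketched like this. The hash function, frame size, and linear-probe collision handling below are illustrative assumptions, not TS-LoRa's exact registration mechanism; the point is that each node derives its own slot deterministically from its identifier, so no schedule needs to be disseminated.

```python
# Minimal sketch of self-organised slot selection: each node maps its
# device EUI to a data slot; one slot per frame is reserved for
# synchronisation and acknowledgements.
import hashlib

DATA_SLOTS = 31         # assumed number of data slots per frame
SYNC_SLOT = DATA_SLOTS  # dedicated slot for sync + acknowledgements

def slot_for(dev_eui: str, taken: set) -> int:
    """Derive a deterministic slot from the device EUI, probing
    linearly on collision (a simplification for illustration)."""
    h = int(hashlib.sha256(dev_eui.encode()).hexdigest(), 16)
    slot = h % DATA_SLOTS
    while slot in taken:            # resolve collisions deterministically
        slot = (slot + 1) % DATA_SLOTS
    taken.add(slot)
    return slot

taken = set()
slots = {eui: slot_for(eui, taken)
         for eui in ("node-01", "node-02", "node-03")}
assert len(set(slots.values())) == len(slots)  # unique slot per node
assert all(s != SYNC_SLOT for s in slots.values())
```

Because every node runs the same deterministic function, all nodes agree on the frame layout without any downlink schedule traffic, which is what sidesteps LoRaWAN's duty-cycle and downlink constraints.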
23 Feb 2005
TL;DR: TATUS, a ubiquitous computing simulator designed to maximize usability and flexibility when experimenting with adaptive ubiquitous computing systems, is described; the simulator is interfaced with a testbed for wireless communication domain simulation.
Abstract: Core to ubiquitous computing environments are adaptive software systems that adapt their behavior to the context in which the user is attempting the task the system aims to support. This context is strongly linked with the physical environment in which the task is being performed. The efficacy of such adaptive systems is thus highly dependent on the human perception of the provided system behavior within the context represented by that particular physical environment and social situation. However, effective evaluation of human interaction with adaptive ubiquitous computing technologies has been hindered by the cost and logistics of accurately controlling such environmental context. This paper describes TATUS, a ubiquitous computing simulator aimed at overcoming these cost and logistical issues. Based on a 3D games engine, the simulator has been designed to maximize usability and flexibility in the experimentation of adaptive ubiquitous computing systems. We also describe how this simulator is interfaced with a testbed for wireless communication domain simulation.
22 Mar 2007
TL;DR: A framework for indoor location using nearest-neighbour and particle-filter methods is developed to evaluate predicted and measured fingerprints, and a map-filtering technique is elaborated to take advantage of the environment description.
Abstract: WLAN indoor location based on the received signal strength indication (RSSI) technique needs extensive calibration to build a signal fingerprint. Re-calibration is also needed if there is a major change in the propagation environment. The use of propagation models to predict the signal fingerprint therefore becomes an interesting proposition. This paper investigates the influence of predicted fingerprints on the accuracy of indoor location. The models considered include empirical propagation models (i.e. the one-slope model and the multi-wall model) and a semi-deterministic model. A framework for indoor location using nearest-neighbour and particle-filter methods is developed to evaluate predicted and measured fingerprints. To take advantage of the environment description, a map-filtering technique is also elaborated.
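The nearest-neighbour matching step the framework evaluates can be sketched in a few lines. The reference locations and RSSI values below are made-up illustration data; a real fingerprint would cover many more reference points and access points.

```python
# Minimal nearest-neighbour fingerprint matcher: each reference point
# stores an RSSI vector (one entry per access point), and a measurement
# is located at the reference point with the smallest Euclidean
# distance in signal space.
import math

fingerprints = {  # location (m) -> RSSI per AP (dBm); assumed values
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-55, -60, -75],
    (0.0, 5.0): [-65, -72, -50],
}

def locate(rssi):
    """Return the reference location whose fingerprint is nearest."""
    return min(fingerprints,
               key=lambda loc: math.dist(fingerprints[loc], rssi))

print(locate([-42, -68, -79]))  # -> (0.0, 0.0)
```

With predicted rather than measured fingerprints, only the values in the table change; the matching step stays the same, which is what makes the comparison in the paper clean.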
TL;DR: A series of key enabling technologies from a range of domains, such as new materials, algorithms, and system architectures are outlined, envisioning that machine learning will play an instrumental role for advanced vehicular communication and networking.
Abstract: We are on the cusp of a new era of connected autonomous vehicles with unprecedented user experiences, tremendously improved road safety and air quality, highly diverse transportation environments and use cases, as well as a plethora of advanced applications. Realizing this grand vision requires a significantly enhanced vehicle-to-everything (V2X) communication network which should be extremely intelligent and capable of concurrently supporting hyper-fast, ultra-reliable, and low-latency massive information exchange. It is anticipated that the sixth-generation (6G) communication systems will fulfill these requirements of the next-generation V2X. In this article, we outline a series of key enabling technologies from a range of domains, such as new materials, algorithms, and system architectures. Aiming for truly intelligent transportation systems, we envision that machine learning will play an instrumental role for advanced vehicular communication and networking. To this end, we provide an overview of the recent advances of machine learning in 6G vehicular networks. To stimulate future research in this area, we discuss the strengths, open challenges, maturity, and areas for enhancement of these technologies.
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
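The personalised mail-filter example in the fourth category can be sketched as a tiny learner. This is an illustrative toy, not a production spam filter: it simply counts how often each word appears in messages the user rejected versus kept, then scores new messages by that evidence.

```python
# Toy learned mail filter: per-word counts from user feedback, then a
# simple score (rejected evidence minus kept evidence) on new mail.
from collections import Counter

rejected = Counter()
kept = Counter()

def learn(message: str, user_rejected: bool):
    """Update word counts from one piece of user feedback."""
    (rejected if user_rejected else kept).update(message.lower().split())

def looks_unwanted(message: str) -> bool:
    """Flag a message if its words appeared more in rejected mail."""
    words = message.lower().split()
    score = sum(rejected[w] - kept[w] for w in words)
    return score > 0

learn("win a free prize now", user_rejected=True)
learn("meeting notes attached", user_rejected=False)
print(looks_unwanted("claim your free prize"))  # True
```

Because the counts update with every piece of feedback, the filter's rules stay current without anyone hand-writing or maintaining them, which is exactly the burden the passage says learning removes.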
01 Jan 2011
01 Jan 2007
TL;DR: In this paper, the authors provide updates to IEEE 802.16's MIB for the MAC, PHY and associated management procedures in order to accommodate recent extensions to the standard.
Abstract: This document provides updates to IEEE Std 802.16's MIB for the MAC, PHY and associated management procedures in order to accommodate recent extensions to the standard.
TL;DR: This survey paper proposes a novel taxonomy for IoT technologies, highlights some of the most important technologies, and profiles some applications that have the potential to make a striking difference in human life, especially for the differently abled and the elderly.
Abstract: The Internet of Things (IoT) is defined as a paradigm in which objects equipped with sensors, actuators, and processors communicate with each other to serve a meaningful purpose. In this paper, we survey state-of-the-art methods, protocols, and applications in this new emerging area. This survey paper proposes a novel taxonomy for IoT technologies, highlights some of the most important technologies, and profiles some applications that have the potential to make a striking difference in human life, especially for the differently abled and the elderly. As compared to similar survey papers in the area, this paper is far more comprehensive in its coverage and exhaustively covers most major technologies spanning from sensors to applications.
TL;DR: A top-down survey of the trade-offs between application requirements and lifetime extension that arise when designing wireless sensor networks is presented and a new classification of energy-conservation schemes found in the recent literature is presented.
Abstract: The design of sustainable wireless sensor networks (WSNs) is a very challenging issue. On the one hand, energy-constrained sensors are expected to run autonomously for long periods, yet it may be cost-prohibitive to replace exhausted batteries, or even impossible in hostile environments. On the other hand, unlike other networks, WSNs are designed for specific applications which range from small-size healthcare surveillance systems to large-scale environmental monitoring. Thus, any WSN deployment has to satisfy a set of requirements that differs from one application to another. In this context, a host of research work has been conducted in order to propose a wide range of solutions to the energy-saving problem. This research covers several areas, from physical layer optimization to network layer solutions. It is therefore not easy for the WSN designer to select the most efficient solutions to consider when designing an application-specific WSN architecture. We present a top-down survey of the trade-offs between application requirements and lifetime extension that arise when designing wireless sensor networks. We first identify the main categories of applications and their specific requirements. Then we present a new classification of energy-conservation schemes found in the recent literature, followed by a systematic discussion as to how these schemes conflict with the specific requirements. Finally, we survey the techniques applied in WSNs to achieve trade-offs between multiple requirements, such as multi-objective optimisation.