Smart drone with real time face recognition
29 Jul 2021, Materials Today: Proceedings (Elsevier BV)
TL;DR: In this article, the authors developed a face-recognition-based UAV to help security forces identify criminals, missing persons, and civilians, and to support surveillance; the work covers how faces are detected and recognized from the drone's imagery.
About: This article was published in Materials Today: Proceedings on 29 Jul 2021 and has received 4 citations to date. It focuses on the topics: Drone & Computer science.
TL;DR: In this paper, the authors focus on facial processing, which refers to artificial intelligence (AI) systems that take facial images or videos as input data and perform some AI-driven processing to obtain higher-level information (e.g. a person's identity, emotions, demographic attributes) or newly generated imagery.
Abstract: This work focuses on facial processing, which refers to artificial intelligence (AI) systems that take facial images or videos as input data and perform some AI-driven processing to obtain higher-level information (e.g. a person's identity, emotions, demographic attributes) or newly generated imagery (e.g. with modified facial attributes). Facial processing tasks, such as face detection, face identification, facial expression recognition or facial attribute manipulation, are generally studied as separate research fields and without considering a particular scenario, context of use or intended purpose. This paper studies the field of facial processing in a holistic manner. It establishes the landscape of key computational tasks, applications and industrial players in the field in order to identify the 60 most relevant applications adopted for real-world uses. These applications are analysed in the context of the new proposal of the European Commission for harmonised rules on AI (the AI Act) and the 7 requirements for Trustworthy AI defined by the European High Level Expert Group on AI. More particularly, we assess the risk level conveyed by each application according to the AI Act and reflect on current research, technical and societal challenges towards trustworthy facial processing systems.
TL;DR: The results of this study show that the proposed AAS can recognize multiple faces and thus record attendance automatically, and that it can assist in detecting students who attempt to skip classes without their teachers' knowledge.
Abstract: The aim of this study was to develop a real-time automatic attendance system (AAS) based on Internet of Things (IoT) technology and facial recognition. A Raspberry Pi camera built into a Raspberry Pi 3B is used to transfer facial images to a cloud server. Face detection and recognition libraries are implemented on this cloud server, which thus can handle all the processes involved with the automatic recording of student attendance. In addition, this study proposes the application of data serialization processing and adaptive tolerance vis-à-vis Euclidean distance. The facial features encountered are processed using data serialization before they are saved in the SQLite database; such serialized data can easily be written and then read back from the database. When examining the differences between the facial features already stored in the SQLite databases and any new facial features, the proposed adaptive tolerance system can improve the performance of the facial recognition method applying Euclidean distance. The results of this study show that the proposed AAS can recognize multiple faces and so record attendance automatically. The AAS proposed in this study can assist in the detection of students who attempt to skip classes without the knowledge of their teachers. The problems of students being unintentionally marked present though absent, and of proxy attendance, are also resolved.
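The two mechanisms this abstract describes — serializing face encodings for SQLite storage and matching with an adaptive tolerance on Euclidean distance — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the encoding format, the adaptive-tolerance rule, and the `0.6` base tolerance are all assumptions.

```python
# Sketch: store serialized face encodings in SQLite, then match a new
# encoding against the gallery using Euclidean distance with an
# adaptive tolerance. Encodings are plain lists of floats here; the
# tolerance-update rule is a hypothetical stand-in for the paper's.
import math
import pickle
import sqlite3

def serialize(encoding):
    """Pack a face encoding (list of floats) into a BLOB for SQLite."""
    return pickle.dumps(encoding)

def deserialize(blob):
    return pickle.loads(blob)

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(known, candidate, base_tol=0.6):
    """Return the best-matching name, or None if the best distance
    exceeds a tolerance that adapts to gallery size (illustrative)."""
    best_name, best_dist = None, float("inf")
    for name, enc in known.items():
        d = euclidean(enc, candidate)
        if d < best_dist:
            best_name, best_dist = name, d
    tol = base_tol + 0.05 / max(len(known), 1)  # loosen for small galleries
    return best_name if best_dist <= tol else None

# In-memory round trip through SQLite, as the abstract describes.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE faces (name TEXT, encoding BLOB)")
db.execute("INSERT INTO faces VALUES (?, ?)",
           ("alice", serialize([0.1, 0.2, 0.3])))
row = db.execute("SELECT name, encoding FROM faces").fetchone()
known = {row[0]: deserialize(row[1])}
print(match(known, [0.12, 0.21, 0.29]))   # a close encoding matches
```

Serializing with `pickle` keeps the write/read path trivial; a production system would likely prefer a fixed-width binary format so encodings can be bulk-loaded without unpickling.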
18 Jul 2022
TL;DR: This paper presents an efficient pooling-based input mask training algorithm to optimize the energy efficiency of DNN inference by enhancing the input sparsity and reducing the number of sporadic values in the masked input.
Abstract: Deep Neural Networks (DNNs) are increasingly deployed in battery-powered and resource-constrained devices. However, the most accurate DNNs usually require millions of parameters and operations, making them computation-heavy and energy-expensive, so it is an important topic to develop energy efficient DNN models. In this paper, we present an efficient DNN training framework under energy constraint to improve the energy efficiency of DNN inference. The key idea of this research is inspired by the observation that the input data of DNNs is usually inherently sparse and such sparsity can be exploited by sparse tensor DNN accelerators to eliminate ineffectual data access and compute. Therefore, we can enhance the inference accuracy within the energy budget by strategically controlling the sparsity of the input data. We build an energy consumption model for the sparse tensor DNN accelerator to quantify the inference energy consumption from the perspective of data access and data processing. In particular, we define a metric (named sporadic degree) to characterise the influence of the number of sporadic values in the sparse input on the energy consumption of data access for the sparse tensor DNN accelerator. Based on the proposed quantitative energy consumption model, we present an efficient pooling-based input mask training algorithm to optimize the energy efficiency of DNN inference by enhancing the input sparsity and reducing the number of sporadic values in the masked input. Experiments show that compared with the state-of-the-art methods, our proposed method can achieve higher inference accuracy with lower energy consumption and storage requirement owing to higher sparsity and lower sporadic degree of the masked input.
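The two quantities the abstract revolves around — input sparsity and the sporadic degree of the masked input — can be illustrated with a toy 1-D masking pass. This is an illustrative sketch, not the paper's algorithm: the threshold mask, and the definition of sporadic degree as the fraction of nonzeros isolated from other nonzeros, are assumptions standing in for the paper's formulations.

```python
# Sketch: zero out low-magnitude inputs with a threshold mask, then
# measure the resulting sparsity and a simple "sporadic degree" --
# here, the fraction of nonzeros whose neighbours are all zero, which
# drives irregular memory access on sparse-tensor accelerators.
def mask_input(x, threshold):
    """Keep only values whose magnitude exceeds the threshold."""
    return [v if abs(v) > threshold else 0.0 for v in x]

def sparsity(x):
    """Fraction of zero entries."""
    return sum(1 for v in x if v == 0.0) / len(x)

def sporadic_degree(x):
    """Fraction of nonzeros isolated from other nonzeros (1-D case)."""
    nz = {i for i, v in enumerate(x) if v != 0.0}
    if not nz:
        return 0.0
    isolated = sum(1 for i in nz if i - 1 not in nz and i + 1 not in nz)
    return isolated / len(nz)

x = [0.9, 0.05, 0.0, 0.8, 0.7, 0.02, 0.0, 0.6]
masked = mask_input(x, 0.1)
print(sparsity(masked), sporadic_degree(masked))
```

In the paper's setting, the mask is learned (via the pooling-based training algorithm) rather than a fixed threshold, so that sparsity rises and sporadic values shrink without sacrificing accuracy.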
08 Jan 2023
TL;DR: This paper presents a system composed of a drone capable of autonomous facial detection and recognition, used to find and track a specified individual in real time.
Abstract: Facial recognition technology has come a long way and it is still rapidly advancing. A system composed of a drone capable of autonomous facial detection and recognition, used to find and track a specified individual in real time, is the subject of this paper. The automated system works by having a drone responsible for video capture and streaming, while a computer, receiving that video stream, will run modern day facial recognition algorithms and relay instructions back to the drone. These instructions control the drone's movements with the goal of identifying and tracking a singular specified individual. Three facial recognition systems were implemented and tested to see which works best in this system. The three facial recognition systems being tested are Local Binary Pattern Histogram (LBPH), FaceNet, and Face_Recognition, all of which were chosen due to a mixture of their performance and prominence. Experiments were conducted to quantify the performance of each facial recognition system with respect to the distance of the drone from an individual, while also taking into consideration the angle of an individual's face and common accessories that cause facial obstruction. Additional experiments were conducted to measure each facial recognition system's performance by examining how many frames per second (FPS) each system could analyze. The results showed that the implemented automated drone system that uses facial recognition technology to track an individual in real time is practical and can have huge implications in the security field, such as locating a lost child in a crowd or identifying targets prior to a police raid.
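The relay loop the abstract describes — the ground computer finds a face in each streamed frame and converts its offset from the image centre into movement commands — can be sketched as below. The frame size, gains, target box area, and command names are illustrative assumptions, not the paper's values.

```python
# Sketch: turn a detected face bounding box into drone movement
# commands. Yaw centres the face horizontally, vertical velocity
# centres it vertically, and forward/back velocity holds a target
# face-box area (a proxy for distance). All constants are assumed.
FRAME_W, FRAME_H = 960, 720      # e.g. a small quadcopter video stream
TARGET_AREA = 20000              # desired face-box area in pixels^2

def track_command(box):
    """box = (x, y, w, h) of the detected face; returns (yaw, up, fwd),
    each a normalised command in roughly [-1, 1]."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    yaw = (cx - FRAME_W / 2) / (FRAME_W / 2)   # positive: turn right
    up = (FRAME_H / 2 - cy) / (FRAME_H / 2)    # positive: climb
    area = w * h
    if area < TARGET_AREA:          # face too small: move closer
        fwd = 1.0
    elif area > 2 * TARGET_AREA:    # face too large: back off
        fwd = -1.0
    else:
        fwd = 0.0
    return yaw, up, fwd

# Face detected right of centre and too small: turn right, move forward.
yaw, up, fwd = track_command((600, 300, 100, 120))
print(round(yaw, 2), round(up, 2), fwd)
```

A real controller would smooth these commands (e.g. with a PID loop) and fall back to a search pattern when no face is detected for several frames.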
University of Basilicata1, King Abdullah University of Science and Technology2, James Hutton Institute3, Democritus University of Thrace4, Tel Aviv University5, Bar-Ilan University6, Clark University7, University of Palermo8, Academy of Sciences of the Czech Republic9, Tuscia University10, University of Coimbra11, Polytechnic University of Valencia12, University of California, Santa Barbara13, University of Tartu14, Newcastle University15, Swedish University of Agricultural Sciences16, University of Pannonia17, Hungarian Academy of Sciences18
TL;DR: An overview of the existing research and applications of UAS in natural and agricultural ecosystem monitoring is provided in order to identify future directions, applications, developments, and challenges.
Abstract: Environmental monitoring plays a central role in diagnosing climate and management impacts on natural and agricultural systems; enhancing the understanding of hydrological processes; optimizing the allocation and distribution of water resources; and assessing, forecasting, and even preventing natural disasters. Nowadays, most monitoring and data collection systems are based upon a combination of ground-based measurements, manned airborne sensors, and satellite observations. These data are utilized in describing both small- and large-scale processes, but have spatiotemporal constraints inherent to each respective collection system. Bridging the unique spatial and temporal divides that limit current monitoring platforms is key to improving our understanding of environmental systems. In this context, Unmanned Aerial Systems (UAS) have considerable potential to radically improve environmental monitoring. UAS-mounted sensors offer an extraordinary opportunity to bridge the existing gap between field observations and traditional air- and space-borne remote sensing, by providing high spatial detail over relatively large areas in a cost-effective way and an entirely new capacity for enhanced temporal retrieval. As well as showcasing recent advances in the field, there is also a need to identify and understand the potential limitations of UAS technology. For these platforms to reach their monitoring potential, a wide spectrum of unresolved issues and application-specific challenges require focused community attention. Indeed, to leverage the full potential of UAS-based approaches, sensing technologies, measurement protocols, postprocessing techniques, retrieval algorithms, and evaluation techniques need to be harmonized. The aim of this paper is to provide an overview of the existing research and applications of UAS in natural and agricultural ecosystem monitoring in order to identify future directions, applications, developments, and challenges.
TL;DR: The proposed GPU-based path planner was able to find quasi-optimal solutions in a timely fashion, allowing in-flight planning; its execution time was reduced by a factor of 290x compared to a sequential execution on CPU.
Abstract: Military unmanned aerial vehicles (UAVs) are employed in highly dynamic environments and must often adjust their trajectories based on the evolving situation. To operate autonomously and safely, a UAV must be equipped with a path planning module capable of quickly recalculating a feasible and quasi-optimal path in flight when a new obstacle or threat has been detected, or simply when the destination point is changed during the mission. To allow for fast path planning, this paper proposes a parallel implementation of the genetic algorithm on graphics processing unit (GPU). The trajectories are built as series of line segments connected by circular arcs, resulting in smooth paths suitable for fixed-wing UAVs. The fitness function we defined takes into account the dynamic constraints of the UAVs and aims to minimize fuel consumption and average flying altitude in order to improve range and avoid detection by enemy radars. This fitness function is also implemented on the GPU, and different parallelization strategies were developed and tested for each step of the fitness evaluation. By exploiting the massively parallel architecture of GPUs, the execution time of the proposed path planner was reduced by a factor of 290x compared to a sequential execution on CPU. The path planning module developed was tested using 18 scenarios on six realistic three-dimensional terrains with multiple no-fly zones. We found that the proposed GPU-based path planner was able to find quasi-optimal solutions in a timely fashion, allowing in-flight planning.
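The GA ingredients described above can be sketched on CPU, in Python rather than CUDA: a path is a list of waypoints, and the fitness penalises total length (a proxy for fuel) and mean altitude. The crossover/mutation scheme, population size, and all constants are illustrative assumptions; the paper's circular-arc smoothing and no-fly-zone penalties are omitted.

```python
# Sketch: a tiny genetic algorithm over 3-D waypoint paths whose
# fitness combines path length and mean altitude (lower is better).
import math
import random

def fitness(path, w_len=1.0, w_alt=0.1):
    length = sum(math.dist(path[i], path[i + 1])
                 for i in range(len(path) - 1))
    mean_alt = sum(p[2] for p in path) / len(path)
    return w_len * length + w_alt * mean_alt

def evolve(start, goal, pop=30, gens=50, n_mid=3, rng=random.Random(0)):
    def rand_path():
        mids = [(rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(1, 5))
                for _ in range(n_mid)]
        return [start] + mids + [goal]

    population = [rand_path() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)          # elitist selection
        survivors = population[: pop // 2]
        children = []
        for p in survivors:                   # mutate each survivor
            child = [p[0]] + [
                (x + rng.gauss(0, 0.3), y + rng.gauss(0, 0.3),
                 max(1.0, z + rng.gauss(0, 0.2)))   # altitude floor
                for x, y, z in p[1:-1]
            ] + [p[-1]]
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)

best = evolve((0, 0, 1), (10, 10, 1))
print(round(fitness(best), 2))
```

The GPU speedup in the paper comes from evaluating this kind of fitness for the whole population in parallel, one thread block per candidate path.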
18 May 2015
TL;DR: The findings show that current face recognition technologies are capable of recognizing faces on drones within some limits on distance and angle; performance degrades when drones take pictures at high altitudes, where the face image is captured from a long distance and with a large angle of depression.
Abstract: Drones, also known as unmanned aerial vehicles (UAVs), are aircraft that can fly autonomously. They can easily reach locations that are too difficult or dangerous for human beings and collect images from a bird's-eye view through aerial photography. Enabling drones to identify people on the ground is important for a variety of applications, such as surveillance, people search, and remote monitoring. Since faces are part of people's inherent identities, how well face recognition technologies can be used by drones is essential for the future development of these applications. In this paper, we conduct empirical studies to evaluate several factors that may influence the performance of face detection and recognition techniques on drones. Our findings show that current face recognition technologies are capable of recognizing faces on drones within some limits on distance and angle, especially when drones take pictures at high altitudes and the face image is captured from a long distance and with a large angle of depression. We also find that augmenting face models with 3D information may help boost recognition performance in the case of large angles of depression.
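The distance limits reported in these studies follow from simple pinhole geometry: the number of pixels a face spans falls off linearly with distance, so beyond some range it drops under the detector's minimum window. The focal length, face width, and 36-pixel floor below are illustrative assumptions, not values from the paper.

```python
# Sketch: pinhole-camera estimate of how large a face appears at a
# given distance, and the maximum range at which a detector with a
# fixed minimum window can still find it. All constants are assumed.
FOCAL_PX = 800        # camera focal length expressed in pixels
FACE_WIDTH_M = 0.16   # typical adult face width in metres
MIN_FACE_PX = 36      # assumed minimum face size a detector handles

def face_pixels(distance_m):
    """Projected face width in pixels at the given distance."""
    return FOCAL_PX * FACE_WIDTH_M / distance_m

def max_detection_range():
    """Largest distance at which the face still spans MIN_FACE_PX."""
    return FOCAL_PX * FACE_WIDTH_M / MIN_FACE_PX

print(round(face_pixels(4.0), 1), round(max_detection_range(), 2))
```

Angle of depression compounds this: a steep downward view foreshortens the face, so the effective range is shorter than this frontal-view estimate suggests.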
01 Feb 2019
TL;DR: This work implements several facial recognition algorithms (LBPH, Eigenface, and Fisherface) with Haar cascades for face detection, trains them on the same data set, and compares their results.
Abstract: Facial recognition is a major challenge in the field of computer vision. Here we have implemented several facial recognition algorithms: LBPH, Eigenface, and Fisherface. Haar cascades were used for face detection. We trained the algorithms on the same data set, drew insights from the results, and tried to identify which algorithm performs best. The different algorithms are compared and their workings discussed, and tabular comparisons are provided at the end so that the differences between the algorithms are easier to understand.
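The core of the LBPH method compared above can be sketched in a few lines: each pixel gets an 8-bit code by thresholding its 3x3 neighbourhood against the centre, and the image is summarised as a histogram of those codes, which is then compared between faces (e.g. by chi-square distance). Real LBPH implementations also divide the image into a grid of cells and support circular neighbourhoods with interpolation; those details are omitted here.

```python
# Sketch: compute a local binary pattern code per interior pixel and
# accumulate a 256-bin histogram over the image.
def lbp_code(img, r, c):
    """8-bit LBP at (r, c): clockwise neighbours thresholded vs centre."""
    centre = img[r][c]
    neighbours = [
        img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
        img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1],
    ]
    code = 0
    for bit, v in enumerate(neighbours):
        if v >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

img = [
    [10, 20, 30],
    [40, 50, 60],
    [70, 80, 90],
]
print(lbp_code(img, 1, 1))
```

Because the codes depend only on local intensity ordering, LBPH is robust to monotonic lighting changes, which is one reason it remains a common baseline in comparisons like this one.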
TL;DR: A non-invasive, passive, flexible Ultra Wide Band (UWB) myogram antenna sensor for the prediction of Sarcopenia through human muscle mass measurement; the proposed method of diagnosing Sarcopenia achieves an accuracy of 85% on fifty samples.