Author

K Sangeetha Lakshmi

Bio: K Sangeetha Lakshmi is an academic researcher from R.M.K. College of Engineering and Technology. The author has contributed to research in the topics of Artificial intelligence & Software and has co-authored 1 publication.

Papers
Journal ArticleDOI
TL;DR: In this article, the authors developed a face-recognition-based UAV solution to help task forces identify criminals, missing people, and civilians and to carry out surveillance; the approach rests on an understanding of how faces are detected and recognized.

4 citations


Cited by
Journal ArticleDOI
TL;DR: In this paper, the authors focus on facial processing, which refers to artificial intelligence (AI) systems that take facial images or videos as input data and perform some AI-driven processing to obtain higher-level information (e.g. a person's identity, emotions, demographic attributes) or newly generated imagery.
Abstract: This work focuses on facial processing, which refers to artificial intelligence (AI) systems that take facial images or videos as input data and perform some AI-driven processing to obtain higher-level information (e.g. a person's identity, emotions, demographic attributes) or newly generated imagery (e.g. with modified facial attributes). Facial processing tasks, such as face detection, face identification, facial expression recognition or facial attribute manipulation, are generally studied as separate research fields and without considering a particular scenario, context of use or intended purpose. This paper studies the field of facial processing in a holistic manner. It establishes the landscape of key computational tasks, applications and industrial players in the field in order to identify the 60 most relevant applications adopted for real-world uses. These applications are analysed in the context of the new proposal of the European Commission for harmonised rules on AI (the AI Act) and the 7 requirements for Trustworthy AI defined by the European High Level Expert Group on AI. More particularly, we assess the risk level conveyed by each application according to the AI Act and reflect on current research, technical and societal challenges towards trustworthy facial processing systems.

7 citations

Journal ArticleDOI
TL;DR: The results of this study show that the proposed AAS can recognize multiple faces and thus record attendance automatically, and that it can assist in detecting students who attempt to skip classes without their teachers' knowledge.
Abstract: The aim of this study was to develop a real-time automatic attendance system (AAS) based on Internet of Things (IoT) technology and facial recognition. A Raspberry Pi camera built into a Raspberry Pi 3B is used to transfer facial images to a cloud server. Face detection and recognition libraries are implemented on this cloud server, which thus can handle all the processes involved with the automatic recording of student attendance. In addition, this study proposes the application of data serialization processing and adaptive tolerance vis-à-vis Euclidean distance. The facial features encountered are processed using data serialization before they are saved in the SQLite database; such serialized data can easily be written and then read back from the database. When examining the differences between the facial features already stored in the SQLite database and any new facial features, the proposed adaptive tolerance system can improve the performance of the facial recognition method applying Euclidean distance. The results of this study show that the proposed AAS can recognize multiple faces and so record attendance automatically. The AAS proposed in this study can assist in the detection of students who attempt to skip classes without the knowledge of their teachers. The problems of students being unintentionally marked present though absent, and of proxy attendance, are also resolved.

2 citations
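The abstract above outlines a concrete pipeline: serialize face encodings, store them in SQLite, and match new encodings against stored ones using Euclidean distance with an adaptive tolerance. The paper does not give code, so the following is a minimal sketch of that idea, assuming 128-dimensional encodings such as those produced by the face_recognition library; the table schema, pickle-based serialization, and the adaptive_tolerance() heuristic are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of the serialization + adaptive-tolerance matching idea.
# The schema, pickle-based storage, and adaptive_tolerance() heuristic are
# assumptions for illustration, not the paper's exact implementation.
import pickle
import sqlite3
import numpy as np

conn = sqlite3.connect("attendance.db")
conn.execute("CREATE TABLE IF NOT EXISTS students (id TEXT PRIMARY KEY, encoding BLOB)")

def enroll(student_id: str, encoding: np.ndarray) -> None:
    # Serialize the feature vector so it can be stored as a single BLOB.
    conn.execute("INSERT OR REPLACE INTO students VALUES (?, ?)",
                 (student_id, pickle.dumps(encoding)))
    conn.commit()

def adaptive_tolerance(known: np.ndarray, base: float = 0.6) -> float:
    # Illustrative heuristic: adjust the Euclidean-distance threshold per
    # enrolled encoding rather than using one fixed global tolerance.
    return base * (1.0 + 0.1 * float(np.std(known)))

def identify(new_encoding: np.ndarray):
    best_id, best_dist = None, float("inf")
    for student_id, blob in conn.execute("SELECT id, encoding FROM students"):
        known = pickle.loads(blob)                          # read serialized vector back
        dist = float(np.linalg.norm(known - new_encoding))  # Euclidean distance
        if dist < adaptive_tolerance(known) and dist < best_dist:
            best_id, best_dist = student_id, dist
    return best_id                                          # None means no match
```

Recording attendance then amounts to calling identify() on each encoding extracted from a camera frame and writing a timestamped record for any matched student id.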

Proceedings ArticleDOI
18 Jul 2022
TL;DR: This paper presents an efficient pooling-based input mask training algorithm to optimize the energy efficiency of DNN inference by enhancing the input sparsity and reducing the number of sporadic values in the masked input.
Abstract: Deep Neural Networks (DNNs) are increasingly deployed in battery-powered and resource-constrained devices. However, the most accurate DNNs usually require millions of parameters and operations, making them computation-heavy and energy-expensive, so developing energy-efficient DNN models is an important topic. In this paper, we present an efficient DNN training framework under an energy constraint to improve the energy efficiency of DNN inference. The key idea of this research is inspired by the observation that the input data of DNNs is usually inherently sparse and that such sparsity can be exploited by sparse tensor DNN accelerators to eliminate ineffectual data access and compute. Therefore, we can enhance the inference accuracy within the energy budget by strategically controlling the sparsity of the input data. We build an energy consumption model for the sparse tensor DNN accelerator to quantify the inference energy consumption from the perspective of data access and data processing. In particular, we define a metric (named sporadic degree) to characterise the influence of the number of sporadic values in the sparse input on the energy consumption of data access for the sparse tensor DNN accelerator. Based on the proposed quantitative energy consumption model, we present an efficient pooling-based input mask training algorithm to optimize the energy efficiency of DNN inference by enhancing the input sparsity and reducing the number of sporadic values in the masked input. Experiments show that compared with the state-of-the-art methods, our proposed method can achieve higher inference accuracy with lower energy consumption and storage requirements owing to the higher sparsity and lower sporadic degree of the masked input.
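As context for the pooling-based masking idea, the sketch below shows one way a block-level (pooled) decision can be turned into an input mask: whole low-magnitude blocks are zeroed, which raises input sparsity while keeping the surviving nonzeros clustered rather than scattered as isolated "sporadic" values. The block rule, keep ratio, and simple sparsity measure are assumptions for illustration; the paper's actual mask-training algorithm, energy model, and sporadic-degree metric are not reproduced here.

```python
# Illustrative numpy sketch of a pooling-based input mask: pool each block's
# magnitude, keep only the strongest blocks, and upsample that decision back
# to pixel resolution. This is a hedged sketch, not the paper's algorithm.
import numpy as np

def pooled_mask(x: np.ndarray, block: int = 4, keep_ratio: float = 0.5) -> np.ndarray:
    h, w = x.shape
    hb, wb = h // block, w // block
    # Max-pool the magnitude over non-overlapping block x block tiles.
    tiles = np.abs(x[:hb * block, :wb * block]).reshape(hb, block, wb, block)
    scores = tiles.max(axis=(1, 3))
    # Keep only the highest-scoring fraction of blocks.
    thresh = np.quantile(scores, 1.0 - keep_ratio)
    block_mask = (scores >= thresh).astype(x.dtype)
    # Upsample the block-level decision back to the input resolution.
    mask = np.kron(block_mask, np.ones((block, block), dtype=x.dtype))
    full = np.zeros_like(x)
    full[:hb * block, :wb * block] = mask
    return full

def sparsity(x: np.ndarray) -> float:
    # Fraction of zero-valued entries in the input.
    return float((x == 0).mean())

x = np.random.randn(32, 32) * (np.random.rand(32, 32) > 0.5)
masked = x * pooled_mask(x)
print(sparsity(x), sparsity(masked))  # the masked input should be sparser
```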
Proceedings ArticleDOI
08 Jan 2023
TL;DR: This paper presents a system composed of a drone capable of autonomous facial detection and recognition, used to find and track a specified individual in real time.
Abstract: Facial recognition technology has come a long way and it is still rapidly advancing. A system composed of a drone capable of autonomous facial detection and recognition, used to find and track a specified individual in real time, is the subject of this paper. The automated system works by having a drone responsible for video capture and streaming, while a computer receiving that video stream runs modern-day facial recognition algorithms and relays instructions back to the drone. These instructions control the drone's movements with the goal of identifying and tracking a singular specified individual. Three facial recognition systems were implemented and tested to see which works best in this system: Local Binary Pattern Histogram (LBPH), FaceNet, and Face_Recognition, all of which were chosen due to a mixture of their performance and prominence. Experiments were conducted to quantify the performance of each facial recognition system with respect to the distance of the drone from an individual, while also taking into consideration the angle of an individual's face and common accessories that cause facial obstruction. Additional experiments were conducted to measure each facial recognition system's performance by examining how many frames per second (FPS) each system could analyze. The results showed that the implemented automated drone system that uses facial recognition technology to track an individual in real time is practical and can have huge implications in the security field, such as locating a lost child in a crowd or identifying targets prior to a police raid.
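To make the computer-side control loop described above concrete, here is a minimal sketch: read frames from the drone's video stream, locate a face, and convert its offset from the frame centre into steering commands. The stream URL and send_drone_command() are hypothetical placeholders (the real control link depends on the drone), and only a face_recognition-library detection call is shown; LBPH or FaceNet recognition would slot in at the same point.

```python
# Hedged sketch of the computer-side tracking loop. The UDP stream URL and
# send_drone_command() are assumed placeholders; face_locations() is a real
# call from the open-source face_recognition library.
import cv2
import face_recognition

def send_drone_command(yaw: float, throttle: float) -> None:
    # Placeholder: a real system would send these over the drone's control link.
    print(f"yaw={yaw:+.2f} throttle={throttle:+.2f}")

cap = cv2.VideoCapture("udp://0.0.0.0:11111")     # assumed drone video stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    faces = face_recognition.face_locations(rgb)  # (top, right, bottom, left) boxes
    if faces:
        top, right, bottom, left = faces[0]
        cx, cy = (left + right) / 2, (top + bottom) / 2
        h, w = frame.shape[:2]
        # Normalised offsets from the frame centre drive yaw and vertical motion.
        send_drone_command(yaw=(cx - w / 2) / (w / 2),
                           throttle=(h / 2 - cy) / (h / 2))
cap.release()
```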