Showing papers on "Smart camera" published in 2022


Journal ArticleDOI
TL;DR: In this article, two camera systems, one visible-spectrum and one thermal, were designed to provide a comparative analysis of a thermal camera system. Results from the test are compared to those from a human observer, showing that the thermal camera can perform with the same success as the visual camera despite a smaller field of view, fewer pixels, and a lower frame rate.
Abstract: There is a documented shortage of reliable counting systems for the entrance of beehives. Movement at the entrance of a hive is a measure of hive health and abnormalities, in addition to an indicator of predators. To that end, two camera systems have been designed to provide a comparative analysis for a thermal camera system. The first, a visible spectrum camera, competed directly with the thermal camera. Machine learning is used to address the narrower field of view of the thermal camera and to recover extracted tracks lost by both cameras. K-nearest-neighbour, support vector machine, random forest, and neural network classifiers are used to classify flights as arriving, departing, or hovering bees. A hierarchical system determines the nature of any flight to which a clear label cannot feasibly be assigned based on the information from either test camera. A third camera at a distance from the hive serves as the final authority. After three iterations of training and validation, a test case is evaluated on both camera systems. Results from the test are compared to those from a human observer, showing that the thermal camera can perform with the same success as the visual camera despite a smaller field of view, fewer pixels, and a lower frame rate; both systems achieve greater than 96% accuracy and both are 93% successful at extracting flights. This is advantageous, as a thermal camera will work in a wider range of environments while keeping the accuracy of an optical camera, and predicting based on movement characteristics will allow expanded uses such as predicting the presence of predators.
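As a rough sketch of the classification stage described above, the comparison of the four classifier families might look like the following in scikit-learn; the flight features and labels here are synthetic stand-ins, not the paper's data:

```python
# Sketch: comparing the four classifier families named in the abstract on
# hypothetical per-flight features (e.g., mean heading, speed, track length).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))        # 300 synthetic flight tracks, 4 features
y = rng.integers(0, 3, size=300)     # 0=arriving, 1=departing, 2=hovering

models = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Random forest": RandomForestClassifier(n_estimators=100),
    "Neural net": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} accuracy")
```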

6 citations


Journal ArticleDOI
TL;DR: The history of image sensing and processing hardware is reviewed from the perspective of in-pixel computing, and the key features of a state-of-the-art smart camera system based on a PPA device are outlined through a description of the SCAMP-5 system.
Abstract: Vision processing for control of agile autonomous robots requires low-latency computation, within a limited power and space budget. This is challenging for conventional computing hardware. Parallel processor arrays (PPAs) are a new class of vision sensor devices that exploit advances in semiconductor technology, embedding a processor within each pixel of the image sensor array. Sensed pixel data are processed on the focal plane, and only a small amount of relevant information is transmitted out of the vision sensor. This tight integration of sensing, processing, and memory within a massively parallel computing architecture leads to an interesting trade-off between high performance, low latency, low power, low cost, and versatility in a machine vision system. Here, we review the history of image sensing and processing hardware from the perspective of in-pixel computing and outline the key features of a state-of-the-art smart camera system based on a PPA device, through the description of the SCAMP-5 system. We describe several robotic applications for agile ground and aerial vehicles, demonstrating PPA sensing functionalities including high-speed odometry, target tracking, obstacle detection, and avoidance. In the conclusions, we provide some insight and perspective on the future development of PPA devices, including their application and benefits within agile, robust, adaptable, and lightweight robotics.

6 citations


Journal ArticleDOI
TL;DR: In this article, a binocular camera system is designed to effectively solve the problems of distortion and coverage caused by a monocular camera, and an image-stitching algorithm is developed to splice the images captured by the cameras.
Abstract: Smart unmanned vending machines using machine vision technology suffer from a sharp decrease in detection accuracy due to incomplete image capture of items by a monocular camera in complex environments and the lack of obvious features when items are densely stacked. In this article, a binocular camera system is designed to effectively solve the problems of distortion and coverage caused by a monocular camera. Besides, an image-stitching algorithm is developed to splice the images captured by the cameras, which relieves the burden of computation for back-end recognition processing brought by the binocular camera. A new neural network structure, YOLOv3-TinyE, is proposed based on the YOLOv3-tiny model. On a dataset of 21,000 images captured in real scenarios containing 20 different types of beverages, comparison experiments show that the YOLOv3-TinyE model achieves a mean average precision of 99.15%, that its inference speed is 2.91 times faster than that of the YOLOv3 model, and that the detection accuracy of the YOLOv3-TinyE model based on binocular vision is higher than that based on monocular vision. The results suggest that the designed method achieves its goals in terms of inference speed and average precision, that is, it is able to satisfy the requirements of real-world applications.
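The paper's stitching algorithm is not reproduced here, but as a minimal stand-in, OpenCV's high-level stitcher shows the general step of splicing the binocular pair into one image before detection (file names are illustrative):

```python
# Sketch: stitching the left/right views of a binocular camera into one image
# with OpenCV's built-in Stitcher, a stand-in for the paper's custom algorithm.
import cv2

left = cv2.imread("left.jpg")     # hypothetical file names
right = cv2.imread("right.jpg")

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch([left, right])
if status == cv2.Stitcher_OK:
    cv2.imwrite("stitched.jpg", pano)   # single image for the detector
else:
    print("stitching failed:", status)
```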

5 citations


Journal ArticleDOI
01 Nov 2022
TL;DR: In this article, a multi-camera system is proposed to automatically estimate the number of cars present in an entire parking lot directly on board the edge devices, using an on-device deep learning-based detector that locates and counts the vehicles in the captured images and a decentralized geometry-based approach that analyzes the inter-camera shared areas.
Abstract: This paper presents a novel solution to automatically count vehicles in a parking lot using images captured by smart cameras. Unlike most of the literature on this task, which focuses on the analysis of single images, this paper proposes the use of multiple visual sources to monitor a wider parking area from different perspectives. The proposed multi-camera system is capable of automatically estimating the number of cars present in the entire parking lot directly on board the edge devices. It comprises an on-device deep learning-based detector that locates and counts the vehicles from the captured images and a decentralized geometric-based approach that can analyze the inter-camera shared areas and merge the data acquired by all the devices. We conducted the experimental evaluation on an extended version of the CNRPark-EXT dataset, a collection of images taken from the parking lot on the campus of the National Research Council (CNR) in Pisa, Italy. We show that our system is robust and takes advantage of the redundant information deriving from the different cameras, improving the overall performance without requiring any extra geometrical information of the monitored scene.
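As a hedged illustration of merging counts across overlapping views, the toy scheme below projects detection centers onto a common ground plane and collapses near-duplicates; note that, unlike the paper's decentralized method (which avoids extra scene geometry), this version assumes known per-camera homographies:

```python
# Sketch: de-duplicating vehicle detections seen by several cameras by
# projecting detection centers with per-camera homographies, then keeping
# only points that are not close to an already-counted vehicle.
import numpy as np

def to_ground(points, H):
    """Project Nx2 pixel points with a 3x3 homography."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    proj = pts @ H.T
    return proj[:, :2] / proj[:, 2:3]

def merged_count(detections_per_camera, homographies, radius=1.0):
    ground = np.vstack([to_ground(d, H)
                        for d, H in zip(detections_per_camera, homographies)])
    kept = []
    for p in ground:
        if all(np.linalg.norm(p - q) > radius for q in kept):
            kept.append(p)           # new vehicle
    return len(kept)                  # cross-camera duplicates collapse

cam1 = np.array([[100.0, 200.0], [300.0, 220.0]])
cam2 = np.array([[100.4, 200.3]])            # same car as cam1's first
H = np.eye(3)                                 # identity stands in for calibration
print(merged_count([cam1, cam2], [H, H]))     # -> 2
```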

4 citations


Journal ArticleDOI
TL;DR: It is demonstrated that it is feasible to implement a performant smart-camera system that leverages the convenience of a cloud-based model while retaining the ability to control access to (private) data.
Abstract: Millions of consumers depend on smart camera systems to remotely monitor their homes and businesses. However, the architecture and design of popular commercial systems require users to relinquish control of their data to untrusted third parties, such as service providers (e.g., the cloud). Third parties therefore can access (and in some instances have accessed) the video footage without the users' knowledge or consent, violating the core tenet of user privacy. In this paper, we present CaCTUs, a privacy-preserving smart Camera system Controlled Totally by Users. CaCTUs returns control to the user; the root of trust begins with the user and is maintained through a series of cryptographic protocols designed to support popular features, such as sharing, deleting, and viewing videos live. We show that the system can support live streaming with a latency of 2 s at a frame rate of 10 fps and a resolution of 480p. In so doing, we demonstrate that it is feasible to implement a performant smart-camera system that leverages the convenience of a cloud-based model while retaining the ability to control access to (private) data.
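CaCTUs' actual cryptographic protocols are not reproduced here; the following sketch only illustrates the core principle: frames are encrypted on-device under a key that never leaves the user's control, so a cloud relay sees only ciphertext.

```python
# Sketch of the principle (not CaCTUs' actual protocol): symmetric encryption
# of each frame with a user-held key, using the `cryptography` package.
from cryptography.fernet import Fernet

user_key = Fernet.generate_key()   # created at device pairing, kept by the user
f = Fernet(user_key)

frame = b"...jpeg bytes of one 480p frame..."   # placeholder payload
ciphertext = f.encrypt(frame)                   # what the cloud stores/relays
assert f.decrypt(ciphertext) == frame           # only the key holder can view
```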

4 citations


Proceedings ArticleDOI
TL;DR: In this article, a vision system for defect inspection of machinery parts and guidance of industrial robots is described; the inspection application includes four building blocks and several techniques for measuring object characteristics, and a convenient human-machine interface allows fast system reconfiguration and project structure adjustment.
Abstract: This paper describes a vision system which can be used for defect inspection of machinery parts and guidance of industrial robots. The system is an important part of a conceptual production-cell project. Based on the goal tasks, a 3D smart camera is selected, and an appropriate sample for defect hunting is considered. The significance of work-scene lighting is emphasized, and the built-in blue-light capabilities of the smart camera are indicated. A connection with a PC is established, ensuring all needed communications. An application for visual inspection is developed which includes four building blocks and several techniques for measuring object characteristics. A convenient human-machine interface is built which allows fast system reconfiguration and project structure adjustment. Finally, essential propositions for further system improvement are outlined.
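As a generic example of one measurement technique of the kind such an inspection application might include (not the vendor tooling of the 3D smart camera), a thresholded part image can be screened for blobs outside an expected size window:

```python
# Sketch: threshold a grayscale part image, measure blob areas, and flag
# blobs outside the expected range (file name and thresholds are illustrative).
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    area = cv2.contourArea(c)
    if not 500 <= area <= 5000:          # expected size window for a good part
        x, y, w, h = cv2.boundingRect(c)
        print(f"possible defect at ({x},{y}), area={area:.0f}")
```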

4 citations


Journal ArticleDOI
TL;DR: In this article, a study of a multi-camera system covering overlapping areas of the road is presented for traffic analysis, with a deep neural network used in the experiments for traffic behavior analysis.
Abstract: In a video surveillance system, tracking multiple moving objects using a single camera feed poses numerous challenges. A multi-camera system increases the output image quality in both overlapping and non-overlapping environments. Traffic behavior analysis is a research topic in growing demand due to increasing traffic on intercity roads, interstate highways, and national highways, and automated traffic visual surveillance with multiple cameras is an active topic in computer vision. This paper presents a study of a multi-camera system for overlapping road areas for traffic analysis, organized in three sections: the second section gives a thorough literature survey on multi-camera systems, and the third section describes our proposed system using a coordinated dual-camera experimental setup. A deep neural network is used in the experiments for traffic behavior analysis. The emphasis of this paper is on the physical arrangement of the multi-camera system, its calibration, and its advantages and disadvantages. In conclusion, future developments and advances in traffic analysis using multi-camera systems are discussed.
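A typical first step when coordinating overlapping cameras is per-camera intrinsic calibration; a standard OpenCV chessboard routine is sketched below (board size and file names are illustrative, and this is not claimed to be the paper's exact procedure):

```python
# Sketch: standard intrinsic calibration of one camera with a chessboard
# target, repeated for each camera in the rig before coordinating them.
import cv2
import numpy as np

pattern = (9, 6)                                  # inner corners of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for name in ["cam1_view1.jpg", "cam1_view2.jpg"]:  # several board poses
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("intrinsics:\n", K)
```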

3 citations


Proceedings ArticleDOI
01 Aug 2022
TL;DR: In this article, the authors proposed the use of high-performance computing and deep learning to create prediction models that can be deployed as part of smart agriculture solutions in the poultry sector.
Abstract: This paper proposes the use of high-performance computing and deep learning to create prediction models that can be deployed as part of smart agriculture solutions in the poultry sector. The idea is to create object detection models that can be ported onto edge devices equipped with camera sensors for use in Internet of Things systems for poultry farms. The object detection models could be used to create smart camera sensors that count chickens or detect dead ones. Such camera sensor kits could become part of digital poultry farm management systems in the near future. The paper discusses the approach to the development and selection of the machine learning and computational tools needed for this process. Initial results, based on the use of the Faster R-CNN network and high-performance computing, are presented together with the metrics used in the evaluation process. The achieved accuracy is satisfactory and allows for easy counting of chickens. More experimentation is needed with network model selection and training configurations to increase the accuracy and make the predictions useful for developing a dead-chicken detector.
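As a hedged sketch of the counting idea, a COCO-pretrained Faster R-CNN from torchvision can stand in for the paper's custom-trained model (COCO's "bird" class, label 16, approximates "chicken" here; the image file is illustrative):

```python
# Sketch: counting detections with torchvision's Faster R-CNN; the pretrained
# COCO model is only a stand-in for the paper's own trained network.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = convert_image_dtype(read_image("pen.jpg"), torch.float)  # illustrative file
with torch.no_grad():
    out = model([img])[0]

keep = (out["labels"] == 16) & (out["scores"] > 0.5)   # confident birds only
print("chicken count:", int(keep.sum()))
```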

2 citations




Proceedings ArticleDOI
27 Mar 2022
TL;DR: A lightweight, high-resolution video camera system based on an FPGA is designed; the timing drive of the CMOS sensor, output data remapping, and the Camera Link interface are implemented in the Verilog hardware description language, and an imaging experiment is carried out.
Abstract: To obtain high-resolution, real-time digital images of a monitoring target while meeting miniaturization requirements, a lightweight, high-resolution video camera system based on an FPGA is designed. The camera uses the large-array CMOS sensor CMV12000 produced by CMOSIS and transfers the output data to a computer through a Camera Link interface. Using the FPGA as the core of timing control, the design of the camera is realized by implementing the timing drive of the CMOS sensor, output data remapping, and the Camera Link interface in the Verilog hardware description language, and an imaging experiment is carried out. The results show that the driving sequence of the camera is reasonable and the communication with the computer is correct. The camera operates stably and takes high-quality images at a resolution of 4096×3072.

Journal ArticleDOI
TL;DR: This project proposes a night-vision security camera based on OpenCV, providing a low-cost security system that uses the improved properties of the built-in camera and OpenCV for face and person detection.
Abstract: Every individual in today's society needs a safe and reliable system. Closed-circuit television (CCTV) and video surveillance systems are implemented everywhere: hospitals, warehouses, parking lots, and buildings. However, this highly effective system has a cost disadvantage, so a cost-effective alternative is required. This project proposes a security camera with night vision using OpenCV, a cost-effective method. Images are captured and processed frame by frame. When a person is detected, the image is saved and an email is sent. The accuracy of this system is about 83%. It also uses the improved properties of the built-in camera: the image captured by the camera is sent for face and person detection using OpenCV. To determine whether the detected person is known (a visitor) or unknown (a stranger), the detected face is compared against the database, and based on the output an email is generated and sent to the user. The security cameras can also be controlled by a custom AI assistant, whose features include turning the cameras on and off with voice control. Accordingly, a low-cost security system can be provided. Keywords: OpenCV, Computer Vision, Tkinter, CCTV, Surveillance, Noise, Motion, Artificial Intelligence
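The capture-detect-notify loop described above might be sketched as follows; the frame-differencing threshold, SMTP server, and addresses are placeholders, and the project's face-recognition step is omitted:

```python
# Sketch: simple frame differencing flags motion; on motion, the frame is
# emailed to the user (all SMTP details below are placeholders).
import cv2
import smtplib
from email.message import EmailMessage

def send_alert(jpg_bytes):
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = (
        "Motion detected", "cam@example.com", "you@example.com")
    msg.add_attachment(jpg_bytes, maintype="image", subtype="jpeg")
    with smtplib.SMTP("smtp.example.com", 587) as s:   # placeholder server
        s.starttls()
        s.send_message(msg)

cap = cv2.VideoCapture(0)
_, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                     # change since last frame
    prev = gray
    moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(moving) > 5000:                # enough changed pixels
        send_alert(cv2.imencode(".jpg", frame)[1].tobytes())
```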

Journal ArticleDOI
TL;DR: SafeFac as mentioned in this paper uses a set of cameras installed on the assembly line to capture images of workers that approach the machinery under hazardous situations to alert system managers and halt the line if needed.
Abstract: This work presents SafeFac, an intelligent camera-based system for managing the safety of factory environments. In SafeFac, a set of cameras installed along the assembly line is used to capture images of workers who approach the machinery in hazardous situations, to alert system managers and halt the line if needed. Given a challenging set of practical application-level requirements, such as multi-camera support and low response latency, SafeFac exploits a YOLOv3-based lightweight human object detector. To address the latency-accuracy tradeoff, SafeFac incorporates a set of algorithms as pre- and post-processing modules and a novel adaptive camera scheduling scheme. Our evaluation with a video dataset containing more than 113,000 frames from real assembly line activity shows that SafeFac achieves high precision (99.93%) and recall (96.44%) and successfully satisfies these challenging requirements as a ready-for-deployment system for safe factory management.
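SafeFac's adaptive camera scheduling algorithm is not specified in the abstract; the following sketch shows one plausible policy of that general kind, polling cameras with recent detections more often (the periods and detector stub are assumptions, not SafeFac's design):

```python
# Sketch: adaptive polling: cameras that recently saw a person are scheduled
# again sooner, trading idle-camera latency for responsiveness where it matters.
import heapq
import itertools
import random

def run_scheduler(cameras, detect, steps=20):
    tick = itertools.count()                     # tie-breaker for equal times
    queue = [(0.0, next(tick), cam, 1.0) for cam in cameras]
    heapq.heapify(queue)
    for _ in range(steps):
        due, _, cam, period = heapq.heappop(queue)
        if detect(cam):                          # person near machinery?
            period = 0.2                         # poll this camera often
        else:
            period = min(period * 2.0, 2.0)      # back off on quiet cameras
        heapq.heappush(queue, (due + period, next(tick), cam, period))

run_scheduler(["cam1", "cam2", "cam3"], detect=lambda cam: random.random() < 0.1)
```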

Journal ArticleDOI
28 Jun 2022
TL;DR: This paper presents a tailor-made multi-camera based motion averaging system, where the fixed relative poses are utilized to improve the accuracy and robustness of SfM.
Abstract: In order to fully perceive the surrounding environment, many intelligent robots and self-driving cars are equipped with a multi-camera system. Based on this system, structure-from-motion (SfM) technology is used to realize scene reconstruction, but the fixed relative poses between cameras in the multi-camera system are usually not considered. This paper presents a tailor-made multi-camera based motion averaging system, where the fixed relative poses are utilized to improve the accuracy and robustness of SfM. Our approach starts by dividing the images into reference images and non-reference images, and edges in the view-graph are divided into four categories accordingly. Then, a multi-camera based rotation averaging problem is formulated and solved in two stages, where an iterative re-weighted least squares scheme is used to deal with outliers. Finally, a multi-camera based translation averaging problem is formulated, and an l1-norm based optimization scheme is proposed to compute the relative translations of the multi-camera system and the reference camera positions simultaneously. Experiments demonstrate that our algorithm achieves superior accuracy and robustness on various datasets compared to state-of-the-art methods.
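For reference, the rotation-averaging stage with iterative re-weighted least squares is commonly written as below; this is textbook notation, not the paper's exact multi-camera formulation:

```latex
% Rotation averaging over view-graph edges (i,j) with IRLS re-weighting;
% R_ij are measured relative rotations, R_i the absolute rotations sought.
\min_{\{R_i \in SO(3)\}} \; \sum_{(i,j) \in \mathcal{E}} w_{ij}\,
    \bigl\lVert R_{ij} - R_j R_i^{\top} \bigr\rVert_F^2,
\qquad
w_{ij} \;\leftarrow\; \frac{1}{\max\!\bigl(\lVert R_{ij} - R_j R_i^{\top} \rVert_F,\; \epsilon\bigr)}
```

Each iteration re-solves the weighted problem and refreshes the weights, so edges with large residuals (likely outliers) are progressively down-weighted.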

Proceedings ArticleDOI
30 Jun 2022
TL;DR: In this paper, the authors define a new interrogation method based on a Federated Edge approach, which addresses the problem from the point of view of both the camera hardware and the shooting angle associated with it.
Abstract: Nowadays, video surveillance is a very common practice in Smart Cities. There are public and private video surveillance systems, and very often different systems or single devices frame the same area. However, when a target needs to be identified or tracked in real time, such solutions typically require human intervention to configure the devices in the best possible way (e.g., choosing the optimal cameras, setting up their focus, and so on). To address this problem, in this paper we define a new interrogation method based on a Federated Edge approach, which considers both the camera hardware and the shooting angle associated with it. With the presented approach, it is possible to determine which camera is best suited to identify, and possibly track, a target in a specific area. A case study is defined in the context of urban mobility management.

Proceedings ArticleDOI
01 Jan 2022
TL;DR: Li et al. as mentioned in this paper proposed a self-supervised trajectory-based camera link model (SCLM) with both appearance and topological features systematically extracted from a graph auto-encoder (GAE) network.
Abstract: Multi-Target Multi-Camera Tracking (MTMCT) of vehicles is a challenging task in smart city related applications. The main challenge of MTMCT is how to accurately match the single-camera trajectories generated from different cameras and establish a complete global cross-camera trajectory for each target, i.e., the multi-camera trajectory matching problem. In this paper, we propose a novel framework to solve this problem using the self-supervised trajectory-based camera link model (CLM) with both appearance and topological features systematically extracted from a graph auto-encoder (GAE) network. Unlike most related works that represent the spatio-temporal relationships of multiple cameras with the laborious human-annotated CLM, we introduce a self-supervised CLM (SCLM) generation method that extracts the crucial multi-camera relationships among the vehicle trajectories passing through different cameras robustly and automatically. Moreover, we apply a GAE to encode topological information and appearance features to generate the topological embeddings. According to our experimental results, the proposed method achieves a new state-of-the-art on both CityFlow 2019 and CityFlow 2020 benchmarks with IDF1 of 77.21% and 55.56%, respectively.
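Once per-trajectory embeddings exist, the cross-camera matching step can be sketched with a cosine-cost Hungarian assignment; the random vectors below merely stand in for the paper's GAE embeddings, and the rejection threshold is assumed:

```python
# Sketch: matching single-camera trajectories across two cameras by cosine
# distance between their embeddings, solved with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(5, 64))   # 5 trajectories from camera A
emb_b = rng.normal(size=(4, 64))   # 4 trajectories from camera B

a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
cost = 1.0 - a @ b.T               # cosine distance matrix

rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    if cost[i, j] < 0.5:           # threshold rejects implausible matches
        print(f"trajectory A{i} <-> B{j} (cost {cost[i, j]:.2f})")
```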

Book ChapterDOI
01 Jan 2022
TL;DR: In this article, an efficient convolutional neural network designed for unified object detection and image compression is presented, addressing the algorithmic and systematic challenges that limited power budgets and computation resources pose for operational smart cameras with sophisticated functions.
Abstract: The modern wireless smart camera is an embedded system designed not only for capturing images but also for image encoding, pattern recognition, and communication. However, due to limited power budgets and computation resources, there are still algorithmic and systematic challenges in implementing operational smart cameras with sophisticated functions. In this chapter, we introduce a prototype wireless smart camera and a hardware-friendly algorithm, namely an efficient convolutional neural network designed for unified object detection and image compression. The proposed Compressive Convolutional Network can perform near-isometric compressive sensing using convolutional operations. A novel incoherent convolution approach is introduced for learning the sampling matrix so as to achieve the near-isometric property required for compressive sensing. Experiments show that the proposed algorithm can achieve near state-of-the-art object detection accuracy with 3.1 to 5.5 times higher efficiency and 2.5 to 5.2 dB higher image reconstruction PSNR compared to other compressive-sensing-based approaches. With hardware-oriented algorithm optimization, our smart camera prototype, built from off-the-shelf hardware, can perform object detection and image compression at 20 to 25 frames per second at 14 watts of power consumption.


Journal ArticleDOI
01 Aug 2022-Sensors
TL;DR: In this paper, a camera-network-based visual positioning system is presented which is capable of locating a moving target with high precision: relative errors for positional parameters are all smaller than 10%, and relative errors for linear velocities (vx, vy) are also kept to an acceptable level.
Abstract: The development of a self-configuring method for efficiently locating moving targets indoors could enable extraordinary advances in the control of industrial automatic production equipment. Being interactively connected, the cameras constituting a network represent a promising visual system for wireless positioning, with the ultimate goal of replacing or enhancing conventional sensors. Developing a highly efficient algorithm for the collaborating cameras in the network is of particular interest. This paper presents an intelligent positioning system which is capable of integrating, through self-configuration, the visual information obtained by large numbers of cameras. An extended Kalman filter is used to predict the position, velocity, acceleration, and jerk (the third derivative of position) of the moving target. As a result, the camera-network-based visual positioning system is capable of locating a moving target with high precision: relative errors for positional parameters are all smaller than 10%, and relative errors for linear velocities (vx, vy) are also kept to an acceptable level, i.e., lower than 20%. This demonstrates the outstanding potential of this visual positioning system to assist the automation industry, including wireless intelligent control, high-precision indoor positioning, and navigation.
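With a linear constant-jerk motion model, the filter's predict/update cycle reduces to the standard Kalman recursion, shown below for a single axis (noise levels and measurements are illustrative; the paper's full EKF and camera fusion are not reproduced):

```python
# Sketch: predicting position/velocity/acceleration/jerk for one axis with a
# constant-jerk state-space model, a linear special case of the paper's EKF.
import numpy as np

dt = 0.1
F = np.array([[1, dt, dt**2/2, dt**3/6],   # state transition for [p, v, a, j]
              [0, 1,  dt,      dt**2/2],
              [0, 0,  1,       dt],
              [0, 0,  0,       1]])
H = np.array([[1.0, 0, 0, 0]])             # cameras observe position only
Q = np.eye(4) * 1e-4                        # process noise (assumed)
R = np.array([[0.05]])                      # measurement noise (assumed)

x = np.zeros((4, 1))                        # state estimate
P = np.eye(4)                               # estimate covariance
for z in [0.0, 0.11, 0.24, 0.39, 0.56]:    # fused camera position measurements
    x, P = F @ x, F @ P @ F.T + Q          # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)  # update with measurement z
    P = (np.eye(4) - K @ H) @ P
print("p, v, a, jerk estimate:", x.ravel())
```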

Journal ArticleDOI
01 Mar 2022
TL;DR: In this article, the authors proposed a green camera-network-as-a-service (G-CNaaS) architecture, which provides on-demand camera networks to multiple end-users simultaneously while utilizing minimal energy.
Abstract: This work proposes the Green Camera-Network-as-a-Service (G-CNaaS) architecture, which provides on-demand camera networks to multiple end-users simultaneously while utilizing minimal energy. G-CNaaS simultaneously reduces the carbon footprint and eliminates the single application-centric approach of traditional camera networks (TCNs) by enabling each camera to participate in multiple Virtual Camera Networks (VCNs) and selecting an optimal set of cameras for each VCN. We couple each camera node in every VCN with a learning model suitable for the requested application, and we assign an intelligent edge device to each VCN to analyze time-sensitive events. We introduce a camera selection factor that leverages four properties of the cameras: 1) field-of-view (FoV); 2) angular distance; 3) observation range; and 4) residual energy. The results of an extensive simulation of the G-CNaaS architecture show that it outperforms TCNs on attributes such as average lifetime, fair distribution of work among the camera owners, and cost-effectiveness. We observe that the expenditure of a user of a TCN varies by 88.7%, while in the case of G-CNaaS the expenditure varies by 10.28%, as the duration increases from 1 to 60 months. On the other hand, the average energy consumed increases by 59.88% and 99.5% in the presence of 10 and 20 camera sensor owners, respectively.
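The abstract lists the four camera properties behind the selection factor; a toy scoring-and-selection sketch might combine them as follows, with the weights and greedy top-k choice being assumptions rather than the paper's formulation:

```python
# Sketch: score each camera on the four properties named in the abstract and
# greedily pick the best set for a VCN (weights and normalization assumed).
def selection_factor(cam, target_in_fov, w=(0.4, 0.2, 0.2, 0.2)):
    return (w[0] * (1.0 if target_in_fov else 0.0)      # field-of-view
            + w[1] * (1.0 - cam["angular_distance"])    # normalized to [0, 1]
            + w[2] * cam["observation_range"]           # normalized to [0, 1]
            + w[3] * cam["residual_energy"])            # fraction remaining

cameras = [
    {"id": 1, "angular_distance": 0.1, "observation_range": 0.9, "residual_energy": 0.5},
    {"id": 2, "angular_distance": 0.6, "observation_range": 0.7, "residual_energy": 0.9},
    {"id": 3, "angular_distance": 0.3, "observation_range": 0.4, "residual_energy": 0.2},
]
ranked = sorted(cameras, key=lambda c: selection_factor(c, True), reverse=True)
print("VCN camera set:", [c["id"] for c in ranked[:2]])  # best 2 for this VCN
```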

Proceedings ArticleDOI
28 Oct 2022
TL;DR: In this article, a roadside single-camera sensing system serving the Cooperative Vehicle Infrastructure system is proposed, in which sensory data acquisition is performed by only one camera.
Abstract: We propose a roadside single-camera sensing system serving the Cooperative Vehicle Infrastructure system, in which sensory data acquisition is performed by only one camera. Spatially assisted calibration of the camera is performed using a high-beam LiDAR tape at the time of camera installation. The points in the image are maximally filled by a multi-frame continuous mapping method following the radar scanning principle, and the empty points in the image coordinate system are estimated by quadratic fitting to derive a camera pixel-distance map as the base map. After calibration, a single camera is used for data acquisition to realize the target-sensing function; combined with the calibrated pixel-distance map, the target is located, yielding the actual position of the sensed target relative to the camera sensor and thereby realizing single-camera roadside intelligent sensing.
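The quadratic-fitting step can be sketched in one dimension: given a few LiDAR-calibrated (pixel row, ground distance) samples, fit a quadratic and evaluate it at any row (the sample values below are illustrative):

```python
# Sketch: fit a quadratic pixel-row -> distance mapping from sparse
# LiDAR-calibrated samples, then query it for arbitrary detections.
import numpy as np

rows = np.array([900, 700, 550, 450, 380])        # pixel rows with LiDAR truth
dist = np.array([5.0, 10.0, 20.0, 35.0, 55.0])    # metres from the camera

coeffs = np.polyfit(rows, dist, deg=2)            # the quadratic fitting method
pixel_to_distance = np.poly1d(coeffs)

print(pixel_to_distance(600))  # estimated distance for a detection at row 600
```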

Proceedings ArticleDOI
26 Mar 2022
TL;DR: This paper proposes an algorithm to detect obstacle distances from the photos or videos of a single camera, addressing collision detection, an issue central to the safety of automatic driving.
Abstract: Autonomous driving is one of the most popular technologies in artificial intelligence, and collision detection is an important issue in automatic driving because it bears directly on safety. Many collision detection methods have been proposed, but they all have certain limitations and cannot meet the requirements of automatic driving. Cameras are among the most popular sensors for detecting objects, yet current camera-based obstacle detection is mostly accomplished with two or more cameras (binocular technology) or in conjunction with other sensors (such as a depth camera) to achieve distance detection. In this paper, we propose an algorithm to detect obstacle distances from the photos or videos of a single camera.
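The paper's algorithm is not detailed in the abstract; the classic single-camera range cue from a known object height, shown below, illustrates the kind of monocular ranging involved (not necessarily the paper's method):

```python
# Sketch: pinhole-model monocular ranging from a known object height:
# distance = focal_length_px * real_height_m / bbox_height_px.
def monocular_distance(focal_px, real_height_m, bbox_height_px):
    return focal_px * real_height_m / bbox_height_px

# A car roof ~1.5 m high spans 60 px in the image; focal length 1000 px.
print(f"{monocular_distance(1000, 1.5, 60):.1f} m")   # -> 25.0 m
```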

Journal ArticleDOI
TL;DR: The applications and uses of AI cameras are discussed, along with the need for technologies specifically designed to match local operating conditions.
Abstract: Artificial intelligence has played a great role in our daily life and has now been implemented in almost every field. It has been a decade since AI took over the cyber world. AI has been implemented in traffic cameras to sense violations of traffic rules, but the functionality of these cameras is very limited. Many nations have implemented these technologies according to their own specifications so that they can be customized to their needs. These technologies have their own advantages and disadvantages, and we need to weigh the advantages in order to use this AI technology well. Recent incidents in India, where applying these cameras produced many errors, occurred because the cameras were not implemented in the correct manner. We need technologies that are specifically designed to match local conditions; for example, if it is raining, the cameras need to be calibrated for those conditions. The applications of AI cameras and their uses are discussed.

Journal ArticleDOI
TL;DR: CamRadar, as proposed in this paper, leverages the unintentional electromagnetic (EM) emanations of a camera to detect it, quickly filtering potential camera EM emanations out of numerous EM signals to achieve accurate hidden-camera detection.
Abstract: Hidden cameras in sensitive locations have become an increasing threat to personal privacy all over the world. Because such cameras are small and camouflaged, it is difficult to detect their presence with the naked eye. Existing works on this subject either rely on the camera's wireless transmissions to detect it or use other methods that are cumbersome in practical use. In this paper, we introduce a new direction that leverages the unintentional electromagnetic (EM) emanations of the camera to detect it. We first find that the digital output of the camera's image sensor is amplitude-modulated onto the EM emanations of the camera's clock. Thus, changes in the scene viewed by the camera directly cause changes in the camera's EM emanations, which constitutes a unique characteristic of a hidden camera. Based on this, we propose a novel camera detection system named CamRadar, which can quickly filter potential camera EM emanations out of numerous EM signals and achieve accurate hidden camera detection. Benefitting from the camera's EM emanations, CamRadar is not limited by the camera's transmission type or the detection angle. Our extensive real-world experiments using CamRadar and 19 hidden cameras show that CamRadar achieves fast detection (in 16.75 s) with a detection rate of 93.23% as well as a low false positive rate of 3.95%.

Proceedings ArticleDOI
01 Nov 2022
TL;DR: Zhang et al., as mentioned in this paper, proposed a multi-camera localization algorithm based on a main localization module and sub-localization modules, together with a depth estimation method that first uses camera geometry to obtain a preliminary depth map and then refines it with a convolutional neural network.
Abstract: It is very important to provide sufficient visual perception for large mobile robots to improve their security and intelligence. Considering the narrow field of view and insufficient perception of a single camera, this paper proposes a practical multi-camera SLAM system. According to the hardware structure of the large mobile robot, the layout of the multi-camera rig is designed and its field of view is analyzed. To make full use of the advantages of multi-camera data fusion, a multi-camera localization algorithm based on a main localization module and sub-localization modules is proposed. In order to build a dense 3D point cloud map, a novel depth estimation method is presented, which first uses a camera-geometry-based depth estimation method to obtain a preliminary depth map and then uses a convolutional neural network to refine it. Comprehensive experiments demonstrate that our multi-camera SLAM system achieves appealing results and has strong practicability.


Book ChapterDOI
01 Jan 2022
TL;DR: The aim of this chapter is to automate the lights of a room to increase the productivity and accuracy of the system in a cost-effective manner, while also permitting wireless accessibility and control of the system.
Abstract: In the twenty-first century, the demand for power has gone up, requiring more power generation; this leads to faster consumption of raw materials and results in more pollution. The need of the present time is a smart lighting system that can adjust itself with respect to the natural light in order to save energy. In recent years, there has been growing concern about energy consumption within residential buildings. Currently, two main strategies are available for reducing the energy consumed by lighting, according to Martirano (Smart lighting control to save energy, 132–138, 2011) [1]: increasing efficiency or increasing effectiveness. Efficiency improves by applying more efficient light sources, while effectiveness means implementing automated control systems including, for instance, daylight harvesting or occupancy sensors. For home environments, the energy-efficient light sources currently available on the market are Compact Fluorescent Lights (CFLs), Light Emitting Diodes (LEDs), and Smart LEDs. In this chapter, a smart lighting system is exemplified that can control the room light efficiently by using sensors to dim and brighten it whenever required. This system is based on the concept of IoT (Internet of Things). When operating in automatic mode, the system will maintain the intensity set by the user with respect to the natural light by either increasing or decreasing the artificial lighting. For automatic usage of the lighting system, the AI component predicts a suitable lighting intensity based on previous records of the intensity manually set by the user, taking in features such as time, weather, and the natural light intensity recorded by the LDR. This saves the user's time, as he or she may simply choose between the intensity suggested by the system and setting the intensity manually. To make the system truly intelligent, a smart camera system and an IR sensor are installed, which command the lighting system to adjust the lighting intensity according to the user's selected choice upon entering the room. Two systems are studied in this chapter: a Self-Adjusting Lighting System and a Facial Recognition based Lighting Management System. The first brings automation to homes and saves time and energy, whereas the latter uses Artificial Intelligence to bring about energy efficiency and make homes smarter. The aim of this chapter is to automate the lights of the room to increase the productivity and accuracy of the system in a cost-effective manner, while also permitting wireless accessibility and control of the system.

Proceedings ArticleDOI
26 Oct 2022
TL;DR: In this article, a lamp control system with in-edge processing is presented; it detects failures using camera image processing and recovers from them by monitoring the cyclic outdoor brightness change observed on windows captured with the same camera.
Abstract: Recently, IoT edge devices have become more diverse and lower in cost, and the computing performance of small low-power single-board computers has significantly increased. These conditions make it possible to process data locally without communicating with the cloud. Since the advantages of in-edge processing are security and privacy, we applied in-edge IoT to smart homes, which are rich in private information to be secured. With in-edge processing, conventional cloud-managed abnormality monitoring and system maintenance cannot be involved. We developed a lamp control system with in-edge processing that detects failures using camera image processing and recovers from them. Abnormalities in the image processing are detected by monitoring the cyclic outdoor brightness change observed on windows captured with the same camera. We developed a prototype system in Python with OpenCV, FastAPI, etc., layered over a PHP-based lamp timer control, while keeping the source code small and easy to validate. The camera detectors run at 10 FPS in Python with as few as 1,607 total lines of source code (three times the line count of the original lamp control timer).
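The window-brightness monitoring idea might be sketched as follows; the region of interest, sampling rate, and threshold are assumptions, not the paper's parameters:

```python
# Sketch: track the mean brightness of a window region over time and flag the
# pipeline as abnormal if the expected day/night cycle disappears.
import cv2
import numpy as np

def window_brightness(frame, roi=(100, 50, 200, 150)):  # x, y, w, h of a window
    x, y, w, h = roi
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(gray[y:y + h, x:x + w].mean())

def cycle_present(history):
    """history: one brightness sample per hour over at least 24 h."""
    arr = np.array(history[-24:])
    return arr.max() - arr.min() > 40    # daylight should swing brightness

# Synthetic 48-hour history with a normal diurnal swing:
history = [20 + 60 * max(0.0, np.sin(h / 24 * 2 * np.pi)) for h in range(48)]
print(cycle_present(history))            # True: cycle detected, pipeline healthy
# In the prototype loop, window_brightness(frame) would be appended hourly,
# and a False result would trigger a restart of the image-processing pipeline.
```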