
Showing papers on "Video camera published in 2021"


Journal ArticleDOI
TL;DR: The achieved experimental results show that the proposed solution is suitable for creating a smart, real-time video-surveillance system for fire/smoke detection, and that YOLOv2 is a better option than the other approaches for real-time fire/smoke detection.
Abstract: This work presents real-time video-based fire and smoke detection using a YOLOv2 Convolutional Neural Network (CNN) in antifire surveillance systems. YOLOv2 is designed with a lightweight neural network architecture to account for the requirements of embedded platforms. The training stage is processed offline with fire and smoke image sets covering different indoor and outdoor scenarios. The Ground Truth Labeler app was used to generate the ground truth data from the training set. The trained model was tested and compared against other state-of-the-art methods on a large set of fire/smoke and negative videos from different environments, both indoor (e.g., a railway carriage, container, bus wagon, or home/office) and outdoor (e.g., a storage or parking area). YOLOv2 proved a better option than the other approaches for real-time fire/smoke detection. This work has been deployed on a low-cost embedded device (Jetson Nano) with a single, fixed camera per scene, working in the visible spectral range. There are no specific requirements for the video camera, so when the proposed solution is applied for safety on board vehicles, in transport infrastructures, or in smart cities, the cameras already installed in closed-circuit television surveillance systems can be reused. The achieved experimental results show that the proposed solution is suitable for creating a smart, real-time video-surveillance system for fire/smoke detection.
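As a rough illustration of the deployment side, here is a minimal sketch of running a Darknet-style YOLOv2 detector on a live camera feed with OpenCV's DNN module. The config/weights filenames, the two-class list, and the 0.5 confidence threshold are illustrative assumptions, not the paper's actual artifacts (the authors trained with the Ground Truth Labeler app in a separate workflow).

```python
import cv2

# Hypothetical files: assumes the trained model was exported to a
# Darknet-style config/weights pair with two classes.
net = cv2.dnn.readNetFromDarknet("yolov2_fire.cfg", "yolov2_fire.weights")
classes = ["fire", "smoke"]

cap = cv2.VideoCapture(0)  # single fixed camera, visible spectral range
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    detections = net.forward()        # rows: cx, cy, bw, bh, objectness, class scores
    h, w = frame.shape[:2]
    for det in detections:
        scores = det[5:]
        cls = scores.argmax()
        if det[4] * scores[cls] > 0.5:  # illustrative confidence threshold
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            x, y = int(cx - bw / 2), int(cy - bh / 2)
            cv2.rectangle(frame, (x, y), (x + int(bw), y + int(bh)), (0, 0, 255), 2)
            cv2.putText(frame, classes[cls], (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    cv2.imshow("fire/smoke", frame)
    if cv2.waitKey(1) == 27:          # Esc to quit
        break
```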

80 citations



Journal ArticleDOI
01 Jan 2021-Heliyon
TL;DR: In this article, a non-contact vision system based on a standard video camera was used to predict the irrigation requirements of loam soils using a feed-forward backpropagation neural network.

21 citations



Journal ArticleDOI
TL;DR: This essay reviews the CCTV and BWVC literatures across four main areas of inquiry: (1) program effect and common outcome measures, (2) contextual factors influencing program effect, (3) intervention costs, and (4) implementation issues.
Abstract: Closed-circuit television (CCTV) and body-worn video cameras (BWVCs) have rapidly spread throughout policing. Such widespread deployment has heightened the importance of identifying best practices...

16 citations


Journal ArticleDOI
TL;DR: In this article, the concept of a fully staring 2-D detector array, with a single detector element responsible for a single imaged pixel, is introduced. The system is designed for a field of view of 2 × 1 m² and an imaging distance of 2.5 m.
Abstract: Current state-of-the-art security video cameras operating in the THz regime employ up to a few hundred detectors together with optomechanical scanning to cover an adequate field of view for practical concealed-object detection. As a downside, the scanning reduces the integration time per pixel, compromising sensitivity, increases the complexity, and reduces the reliability of the system. In contrast, we demonstrate, for the first time, a video camera basing its operation on the concept of a fully staring 2-D detector array with a single detector element responsible for a single imaged pixel. The imaging system is built around the detector technology of kinetic inductance bolometers, allowing operation in the intermediate temperature range above 5 K and the scale-up of the detector count into multikilopixel arrays and beyond. The system is designed for a field of view of 2 × 1 m² and an imaging distance of 2.5 m. We describe the main components of the system and show images from concealed-object experiments performed at a near-video rate of 9 Hz.

13 citations


Journal ArticleDOI
TL;DR: In this paper, a hybrid approach combining a CNN (Convolutional Neural Network) and a BiLSTM (Bidirectional Long Short-Term Memory network) is used to detect driver drowsiness.

13 citations


Journal ArticleDOI
TL;DR: In this paper, a qualitative performance comparison between laser-based and RGB camera-based systems is made, suggesting that laser-based algorithms should be used instead of common RGB cameras.
Abstract: The privacy of people is a key factor in surveillance systems. A video camera yields rich color information, so how can privacy be secured? At the same time, privacy protection should not hinder finding objects or people in specific cases. A laser scanner strips away color information; it operates with an eye-safe, invisible laser beam, yet it provides a robust object-recognition map. Images can be interpreted directly by humans, whereas laser-based systems need software applications to interpret the data. A camera-based surveillance system does not address the problem of protecting private life. On the contrary, a laser-based surveillance system ensures people's privacy inherently, as it records no real-world video, only laser-scanned data points. In this paper, first, the privacy issues of both surveillance systems are compared to establish their significance. Second, a qualitative performance comparison between laser-based and RGB camera-based systems is made, suggesting that laser-based algorithms should be used instead of common RGB cameras. Third, a succinct survey of laser-based detection and tracking algorithms for moving objects is conducted. Finally, the leading laser-based people- and vehicle-related algorithms are ranked on the basis of statistical test scores over the ineffectualness metrics (e.g., errors and failures) of each algorithm.

11 citations


Journal ArticleDOI
16 Jun 2021
TL;DR: In this paper, the output of a vision system is used as one of the input signals to an automatic landing system whose fuzzy-logic control algorithms imitate pilot actions while landing an aircraft in the longitudinal channel.
Abstract: The paper presents automatic control of an aircraft in the longitudinal channel during automatic landing. There are two crucial components of the system presented in the paper: a vision system and an automatic landing system. The vision system processes images of dedicated on-ground signs captured by an on-board video camera to determine a glide path. The image processing algorithms used by the system were implemented in an embedded system and tested under laboratory conditions using the hardware-in-the-loop method. The output of the vision system was used as one of the input signals to the automatic landing system, whose major components are control algorithms based on a fuzzy-logic expert system, created to imitate pilot actions while landing the aircraft. The two systems were connected to cooperate in controlling an aircraft model in a simulation environment. Selected test results demonstrating control efficiency and precision are shown in the final section of the paper.

11 citations


DOI
12 Nov 2021
TL;DR: This review highlights research behind smartphone and video camera methods for measuring blood pressure (BP) and discusses the advantages of the various techniques, their potential clinical applications, and future directions and challenges.
Abstract: Regular blood pressure (BP) monitoring enables earlier detection of hypertension and reduces cardiovascular disease. Cuff-based BP measurement requires equipment that is inconvenient for some individuals and deters regular home-based monitoring. Since smartphones contain sensors, such as video cameras, that detect arterial pulsations, they could also be used to assess cardiovascular health. Researchers have developed a variety of image processing and machine learning techniques for predicting BP via smartphone or video camera. This review highlights the research behind smartphone and video camera methods for measuring BP. These methods may in the future be used at home or in clinics, but must first be tested over a larger range of BP values and lighting conditions. The review concludes with a discussion of the advantages of the various techniques, their potential clinical applications, and future directions and challenges. Video cameras may potentially measure multiple cardiovascular metrics, including and beyond BP, reducing the risk of cardiovascular disease.
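Most camera-based methods of this kind start from a photoplethysmographic (PPG) waveform extracted from skin pixels. Below is a minimal sketch of that first stage, assuming a fixed skin region of interest and OpenCV/NumPy; ROI selection, motion compensation, and the downstream BP model are deliberately out of scope.

```python
import cv2
import numpy as np

def extract_ppg(video_path, roi):
    """Average green-channel intensity in a fixed skin ROI, frame by frame.
    roi = (x, y, w, h); choosing/tracking the ROI is a separate problem."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    x, y, w, h = roi
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        patch = frame[y:y + h, x:x + w]
        signal.append(patch[:, :, 1].mean())  # green channel: strongest pulsatile component
    cap.release()
    sig = np.asarray(signal) - np.mean(signal)
    # Crude heart-rate estimate from the dominant frequency in 0.7-3.5 Hz (42-210 BPM)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(sig))
    band = (freqs >= 0.7) & (freqs <= 3.5)
    hr_bpm = 60.0 * freqs[band][spectrum[band].argmax()]
    return sig, hr_bpm
```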

8 citations


Journal ArticleDOI
TL;DR: The results confirmed the hypothesis that infrared thermography, combined with machine learning methods for vehicle type recognition, can be used to categorize vehicles according to the thermal features of their exteriors.
Abstract: The main goal of this paper is to present new possibilities for the detection and recognition of different categories of electric and conventional (combustion-engine) vehicles using a thermal video camera. The paper presents a draft of a possible system for detecting and classifying vehicle propulsion systems based on thermal analysis. The differences in thermal features between vehicle categories were identified and statistically confirmed. The thermal images were obtained using an infrared thermography camera and were used to build a database of vehicle-class images covering passenger vehicles (PVs), vans, and buses. The results confirmed the hypothesis that infrared thermography, combined with machine learning methods for vehicle type recognition, can be used to categorize vehicles according to the thermal features of their exteriors.

Journal ArticleDOI
TL;DR: In this paper, a digital video camera was used to measure heart rate and detect oxygen desaturations in healthy infants; the average bias of camera heart-rate measures was −4.2 BPM, and the 95% limits of agreement were ±43.8 BPM.
Abstract: To assess the feasibility of using an ordinary digital video camera to measure heart rate and detect oxygen desaturations in healthy infants. Heart rate and oxygen saturation were measured with a video camera by detecting small color changes in 28 infants’ foreheads and compared with standard pulse oximetry measures. Multivariable regression examined the relationship between infant characteristics and heart-rate measurement precision. The average bias of camera heart-rate measures was −4.2 beats per minute (BPM) and 95% limits of agreement were ±43.8 BPM. Desaturations detected by camera were 75% sensitive (15/20) and had a positive predictive value of 20% (15/74). Lower birth-weight was independently correlated with more precise heart-rate measures (8.05 BPM per kg, [95% CI 0.764–15.3]). A digital video camera provides accurate but imprecise measures of infant heart rate and may provide a rough screening tool for oxygen desaturations.
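For readers unfamiliar with the statistics quoted above, the bias and 95% limits of agreement come from a Bland-Altman analysis of paired camera and oximeter readings. A small sketch of that computation (the function name and sample values are ours):

```python
import numpy as np

def bland_altman(camera_bpm, oximeter_bpm):
    """Bias and 95% limits of agreement between paired heart-rate measures."""
    diff = np.asarray(camera_bpm) - np.asarray(oximeter_bpm)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)  # half-width of the 95% limits of agreement
    return bias, (bias - half_width, bias + half_width)

# e.g. paired readings in BPM
print(bland_altman([128, 140, 119, 152], [131, 138, 127, 149]))
```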

Journal ArticleDOI
Sven Ubik, Jiri Pospisilik
TL;DR: Three methods to measure camera latency are proposed: the timecode view, waveform shift, and screen photodetector methods. All can achieve subframe resolution and a precision of approximately 1 ms, although with varying levels of automation, convenience, and suitability for particular cameras.
Abstract: All modern video cameras exhibit some latency between the scene being captured and the output video signal. This camera latency contributes significantly to the overall latency of a video network transmission chain. Some real-time applications based on video sharing require very low latency, and selecting the right camera is then crucial. We examined how video cameras behave with respect to latency and propose three methods to measure it: the timecode view, waveform shift, and screen photodetector methods. All methods can achieve subframe resolution and a precision of approximately 1 ms, although with varying levels of automation, convenience, and suitability for particular cameras. We discuss arrangements that affect measurement precision. We applied the proposed methods to a sample camera, and we show measurement results for several other cameras. The results showed that the fastest cameras provide latencies lower than 5 ms, which should be fast enough even for demanding real-time applications. However, most cameras still exhibit latencies in the range of 1-3 video frames.
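The screen-photodetector idea can be roughly approximated in software by flashing a display and timing when the camera stream brightens; a sketch follows. Note that this crude variant measures end-to-end capture latency at frame-level resolution only; the instrumented methods in the paper are what achieve roughly 1 ms precision. The brightness threshold and window size are arbitrary choices.

```python
import time
import cv2
import numpy as np

# Point the camera at the display before running this.
cv2.namedWindow("flash")
cap = cv2.VideoCapture(0)

dark = np.zeros((200, 200, 3), np.uint8)
cv2.imshow("flash", dark)
cv2.waitKey(500)                      # settle on a dark scene
for _ in range(5):
    cap.read()                        # drain buffered dark frames

cv2.imshow("flash", np.full_like(dark, 255))
cv2.waitKey(1)                        # force the white frame onto the screen
t0 = time.perf_counter()
while True:
    ok, frame = cap.read()
    if ok and frame.mean() > 100:     # brightness jump: the flash has arrived
        print(f"approx. latency: {(time.perf_counter() - t0) * 1000:.1f} ms")
        break
```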

Proceedings ArticleDOI
17 May 2021
TL;DR: In this paper, the authors explored the use of various transforms to achieve registration between the video image plane and the pressure sensitive mat (PSM) with the ultimate goal of fusing PSM and video modalities of the patient dataset.
Abstract: We have collected a multi-modal neonatal patient dataset suitable for the development of non-contact continuous monitoring techniques. Data were simultaneously collected from an RGB-D video camera placed above the patient and a pressure-sensitive mat (PSM) beneath the patient. This paper explores the use of various transforms to achieve registration between the video image plane and the PSM, with the ultimate goal of fusing the PSM and video modalities of our patient dataset. A series of experiments was conducted to evaluate transforms requiring different numbers of registration landmarks. The expected error in determining landmark locations in both video and PSM is characterized, including the impact of camera offset, registration instrument angle, the degree of collinearity of landmarks, the spacing between landmarks, and the use of "secondary" landmarks estimated from patient anatomy. A landmark spacing greater than 450 cm² is recommended, since it achieves an error of less than 3 cm when aligning points between the video and PSM planes. For a top-down camera view, a similarity transform is recommended, while for an angled camera view, a projective transform is recommended.
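The two recommended transforms map directly onto standard OpenCV estimators. A sketch with made-up landmark coordinates (video pixels on one side, PSM sensor cells on the other):

```python
import cv2
import numpy as np

# Corresponding landmark locations; the values below are illustrative placeholders.
video_pts = np.float32([[120, 80], [520, 90], [510, 400], [130, 390]])
psm_pts   = np.float32([[0, 0],    [31, 0],   [31, 17],   [0, 17]])

# Top-down camera view: a similarity transform (rotation + scale + translation)
# needs only 2+ point pairs.
sim, _ = cv2.estimateAffinePartial2D(video_pts, psm_pts)

# Angled camera view: a projective transform (homography) needs 4+ pairs.
proj, _ = cv2.findHomography(video_pts, psm_pts)

# Map an arbitrary video point into PSM coordinates with the homography.
pt = cv2.perspectiveTransform(np.float32([[[300, 200]]]), proj)
print(sim, proj, pt, sep="\n")
```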

Journal ArticleDOI
TL;DR: TagAttention as discussed by the authors is a vision-RFID fusion system that achieves mobile object tracing without the knowledge of the target object appearances and hence can be used in many applications that need to track arbitrary unregistered objects.
Abstract: We propose to study mobile object tracing, which allows a mobile system to report the shape, location, and trajectory of the mobile objects appearing in a video camera and identifies each of them with its cyber-identity (ID), even if the appearances of the objects are not known to the system. Existing tracking methods either cannot match objects with their cyber-IDs or rely on complex vision modules pre-learned from vast, well-annotated datasets including the appearances of the target objects, which may not exist in practice. We design and implement TagAttention, a vision-RFID fusion system that achieves mobile object tracing without knowledge of the target object appearances and hence can be used in many applications that need to track arbitrary unregistered objects. TagAttention adopts the visual attention mechanism, through which RF signals can direct the visual system to detect and track target objects with unknown appearances. Experiments show that TagAttention can actively discover, identify, and track target objects while matching them with their cyber-IDs, using commercial sensing devices in complex environments with various multipath reflectors. It requires only around one second to detect and localize a new mobile target appearing in the video and keeps tracking it accurately over time.

Journal ArticleDOI
13 Jan 2021
TL;DR: The C3D data collection presented in this research is more accessible and cost-efficient than other systems and can be used as an alternative method for motion analysis, subject to a more detailed comparison.
Abstract: The human three-dimensional (3D) musculoskeletal model is based on motion analysis methods and can be obtained with particular motion capture systems that export 3D data in the Coordinate 3D (C3D) format. Special cameras and specific software are essential for analyzing the data; this equipment is quite expensive, and using it is time-consuming. Because of these problems, this research intends to use ordinary video cameras and open-source systems to obtain 3D data and create the C3D format. By capturing movements with two video cameras, marker coordinates are obtainable using Skill-Spector. MATLAB functions were used to create C3D data from the 3D coordinates of the body points. The subject was captured simultaneously with both the Cortex system and the two video cameras during each validation test. The mean correlation coefficient of the datasets is 0.7. This method can be used as an alternative for motion analysis, subject to a more detailed comparison. The C3D data collection we present in this research is more accessible and cost-efficient than other systems; only two cameras are needed.
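The MATLAB step of packing reconstructed marker coordinates into a C3D file has an open-source Python equivalent. A sketch assuming the ezc3d library (marker count, labels, and rate below are illustrative, not the paper's values):

```python
import numpy as np
import ezc3d  # open-source C3D library; pip install ezc3d

# points: marker coordinates from the two-camera reconstruction, stored as a
# 4 x nMarkers x nFrames array (x, y, z plus a homogeneous row of ones).
n_markers, n_frames, rate = 20, 500, 50
points = np.zeros((4, n_markers, n_frames))
points[3, :, :] = 1.0  # homogeneous coordinate expected by the C3D layout

c3d = ezc3d.c3d()
c3d["parameters"]["POINT"]["RATE"]["value"] = [rate]
c3d["parameters"]["POINT"]["LABELS"]["value"] = [f"marker{i}" for i in range(n_markers)]
c3d["data"]["points"] = points
c3d.write("capture.c3d")
```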

Journal ArticleDOI
TL;DR: The vibration behavior of monofilaments was investigated to identify their physical and apparent properties, and it was shown that with an increase in the number of knots, the energy parameter would undergo the highest percentage of change.
Abstract: The vibration behavior of monofilaments was investigated to identify their physical and apparent properties. The investigation is based on the general belief that the modal parameters of a given st...

Journal ArticleDOI
TL;DR: This work focuses on the development and enhancement of computer vision methods for vision-based sensing using a video camera as a source of data for signal-to-noise discrimination in the darkroom.
Abstract: With the advancement and wide availability of digital video cameras, compounded with the development and enhancement of computer vision methods, vision-based sensing using a video camera as...

Journal ArticleDOI
TL;DR: The problem of maximizing the lifetime of a wireless sensor network that uses video cameras to monitor targets is considered, and a column generation algorithm based on properties of camera coverage is proposed for solving three lifetime maximization problems.
Abstract: The problem of maximizing the lifetime of a wireless sensor network that uses video cameras to monitor targets is considered. These video cameras can rotate and have a fixed monitoring angle. For a target to be covered by a video camera mounted on a sensor node, three conditions must be satisfied: first, the distance between the sensor and the target should be less than the sensing range; second, the direction of the camera sensor should face the target; and third, the focus of the video camera should be such that the picture of the target is sharp. Basic elements of optics are recalled, and some properties are derived to efficiently address the problem of setting the direction and focal distance of a video camera for target coverage. A column generation algorithm based on these properties is then proposed for solving three lifetime maximization problems. Targets are considered as points in the first problem; they are considered as discs in the second problem (which allows occlusion to be taken into account); and in the last problem, the focal distance is also dealt with, to take image sharpness into account. All of these problems are compared on a testbed of 180 instances, and numerical results show the effectiveness of the proposed approach.
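The three coverage conditions translate into a simple geometric predicate per sensor-target pair. A sketch in plain Python, with a depth-of-field interval as an illustrative stand-in for the paper's optics-based sharpness condition:

```python
import math

def covers(sensor, target):
    """Check the three coverage conditions for a rotating, fixed-angle camera.
    sensor: dict with x, y, range, direction (rad), half_angle (rad), and an
            illustrative in-focus interval [focal_near, focal_far].
    target: dict with x, y (target treated as a point)."""
    dx, dy = target["x"] - sensor["x"], target["y"] - sensor["y"]
    dist = math.hypot(dx, dy)
    # 1) target within sensing range
    if dist > sensor["range"]:
        return False
    # 2) target inside the camera's angular sector
    bearing = math.atan2(dy, dx)
    delta = abs((bearing - sensor["direction"] + math.pi) % (2 * math.pi) - math.pi)
    if delta > sensor["half_angle"]:
        return False
    # 3) target within the in-focus depth interval for the chosen focal setting
    return sensor["focal_near"] <= dist <= sensor["focal_far"]
```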

Journal ArticleDOI
TL;DR: In this article, a method to evaluate the dynamics of a representative grooving tool using a smartphone's video camera is presented: pixels within images are treated as vibration sensors measuring the response to impact-based excitation, and damping is estimated using the logarithmic decrement method.
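For reference, the logarithmic decrement method mentioned above reduces to two formulas: δ = (1/n) ln(x₀/xₙ) over successive response peaks, and damping ratio ζ = δ/√(4π² + δ²). A short sketch, with the peak values standing in for amplitudes extracted from pixel-intensity time series:

```python
import numpy as np

def damping_from_peaks(peaks, n=None):
    """Logarithmic-decrement estimate of the damping ratio from successive
    response peaks x_0, x_1, ..., x_n after an impact."""
    peaks = np.asarray(peaks, dtype=float)
    n = n or (len(peaks) - 1)
    delta = np.log(peaks[0] / peaks[n]) / n          # logarithmic decrement
    zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)  # damping ratio
    return delta, zeta

# e.g. peak amplitudes of a pixel's brightness oscillation (made-up values)
print(damping_from_peaks([1.00, 0.72, 0.52, 0.37]))
```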

Proceedings ArticleDOI
22 May 2021
TL;DR: A contactless dementia detection system based on gait analysis from surveillance video is designed; it can serve as a home-based healthcare system, achieves a sensitivity of 74.10% on the test set, and requires only a few minutes of processing for early dementia detection.
Abstract: Dementia is a neurodegenerative disease with a high incidence in the elderly. There is no effective treatment for this disease, but early intervention can greatly slow the deterioration. Currently, the detection of dementia is mainly achieved using questionnaire-like neuropsychological tests, which usually cost a lot of time. To this end, we design a contactless dementia detection system based on gait analysis from surveillance video, which can serve as a home-based healthcare system. The system applies a Kinect 2.0 camera to capture video of the person and extract the skeleton joints at a rate of 15 frames per second. Two different gaits are collected for detection: single-task gait and dual-task gait. We design a convolutional neural network based classifier that extracts features from these two groups of videos in a data-driven way, rather than relying on hand-crafted features. Experimental results show that the system achieves a sensitivity of 74.10% on the test set, and the processing takes only several minutes for early dementia detection.

Journal ArticleDOI
TL;DR: This paper proposes an efficient method, VFDHSOG, based on histograms of the second-order gradient to locate 'suspicious' frames and then localize the copy-move forgery (CMF) within the frame; performance is evaluated on the benchmark datasets Surrey University Library for Forensic Analysis (SULFA), the Video Tampering Dataset (VTD), and the SYSU-OBJFORGED dataset.
Abstract: It is widely accepted that digital video can be proffered as visual evidence in areas such as politics, criminal litigation, journalism, and military intelligence. Multi-camera smartphones with megapixels of resolution are common hand-held devices used by everyone, which has made video recording very easy. At the same time, the variety of applications available on smartphones has made this indispensable source of information vulnerable to deliberate manipulation. Hence, content authentication of video evidence becomes essential. Copy-move (copy-paste) forgery is a consequential forgery performed to change the basic understanding of a scene. The removal or addition of frames in a video clip can also be managed by advanced smartphone apps. In surveillance, the video camera and the background are static, which makes forgery easy and imperceptible. Therefore, accurate video forgery detection is crucial. This paper proposes an efficient method, VFDHSOG, based on histograms of the second-order gradient (HSOG) to locate 'suspicious' frames and then localize the CMF within the frame. A 'suspicious' frame is located by computing correlation coefficients of the HSOG feature after obtaining a binary image of the frame. Performance evaluation is done using the benchmark datasets Surrey University Library for Forensic Analysis (SULFA), the Video Tampering Dataset (VTD), and the SYSU-OBJFORGED dataset. SULFA has video files of different qualities, such as q10 and q20, which represent high compression. The VTD dataset provides both inter- and intra-frame forgeries. The SYSU dataset covers different attacks, such as scaling and rotation. An overall accuracy of 92.26% is achieved, with the capability to identify attacks such as scale up/down and rotation.
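A simplified sketch of the 'suspicious frame' stage: correlate consecutive frames' feature histograms and flag abrupt drops. Plain gradient-magnitude histograms stand in here for the paper's second-order HSOG descriptor, and the threshold is illustrative:

```python
import cv2
import numpy as np

def suspicious_frames(video_path, thresh=0.95):
    """Flag frames whose feature histogram correlates poorly with the previous
    frame's (an abrupt statistical change is a candidate forgery point)."""
    cap = cv2.VideoCapture(video_path)
    prev_hist, flagged, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        mag = cv2.magnitude(gx, gy)                    # first-order stand-in for HSOG
        hist = cv2.calcHist([mag], [0], None, [64], [0, 1024])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            corr = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if corr < thresh:
                flagged.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return flagged
```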

Journal ArticleDOI
TL;DR: In this paper, the authors used a large tiger shark as a proof-of-concept case study to demonstrate the utility of 360-degree camera technology for bio-logging of marine organisms.
Abstract: Animal-borne video camera systems have long been used to capture the fine-scale behaviors and unknown aspects of the biology of marine animals. However, their utility as robust scientific tools in the greater bio-logging research community has not been fully realized. Here we provide, for the first time, an application of 360-degree camera technology to a marine organism, using a large tiger shark as a proof-of-concept case study. Leveraging the three-dimensional nature of the imaging technology, we derived 224 seafloor habitat assessments over the course of the nearly 1-hour track, during which the shark surveyed ~23,000 square meters of seafloor, more than three times the capacity of non-360-degree cameras. The resulting data provided detailed information on habitat use, diving behavior, and swimming speed, as well as seafloor mapping. Our results suggest that 360-degree cameras provide complementary benefits, and in some cases superior efficiency, compared with unidirectional video packages, with an enhanced capacity to map the seafloor.

Proceedings ArticleDOI
26 Jul 2021
TL;DR: In this article, the authors presented first stage simulations of such an indoor navigation system based on optical camera communications (OCC), coupling together LED lights as sources with widely available modern smartphone cameras as receivers.
Abstract: Light Emitting Diode (LED) equipment is used primarily for lighting, but Visible Light Communication (VLC) is a new research domain that uses the existing LED infrastructure for transmitting data, intelligent road traffic signaling, positioning, guiding inside buildings, etc. Most proposed VLC techniques use light sensors as receivers, but these are not readily available in everyday devices, or their sample rate is too low. A branch of VLC is optical camera communication (OCC), where a video camera is the receiver; unlike a fast-sampled light sensor, a camera is available on any smartphone. Because of the operating system, the framerate of smartphone video cameras can vary while recording. This paper presents first-stage simulations of such an indoor navigation system based on OCC, coupling LED lights as sources with widely available modern smartphone cameras as receivers. The proposed experiments investigate the influence of camera framerate variation from the nominal value on the decoding performance of our OCC system and show that it can be used in real-world conditions.

Journal ArticleDOI
26 Jun 2021
TL;DR: In the proposed approach, based on the video image obtained from an external video camera located above the working area of mobile robots, the locations of both robots and nearby obstacles are recognized, the optimal route to the target point of the selected robot is built, and changes in its working area are monitored.
Abstract: This article presents the algorithmic support of the external monitoring and routing system of autonomous mobile robots. In some cases, the practical usage of mobile robots involves solving navigation problems. In particular, the position of ground robots can be determined using unmanned aerial vehicles. In the proposed approach, based on the video image obtained from an external video camera located above the robots' working area, the locations of both robots and nearby obstacles are recognized, the optimal route to the target point of the selected robot is built, and changes in the working area are monitored. Information about the allowed routes of the robot is transmitted to third-party applications via network communication channels. Primary image processing from the camera includes distortion correction, contouring, and binarization, which allows image fragments containing robots and obstacles to be separated from background surfaces and objects. Recognition of robots in a video frame is based on a SURF detector: this technique extracts key points in the video frame and compares them with key points of reference images of the robots. Trajectory planning is implemented using Dijkstra's algorithm. The discreteness of the trajectories obtained with this graph-search algorithm can be compensated for on board the autonomous mobile robots by spline approximation. Experimental studies have confirmed the efficiency of the proposed approach both for recognition and localization of mobile robots and for planning safe trajectories.
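The planning stage is classical. A compact sketch of Dijkstra's algorithm on an occupancy grid derived from the binarized camera image (4-connected moves and unit costs are illustrative simplifications; the paper's graph construction may differ):

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on an occupancy grid (0 = free cell, 1 = obstacle).
    start/goal are (row, col) tuples; returns the path as a list of cells."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                nd = d + 1.0
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt], prev[nxt] = nd, cell
                    heapq.heappush(pq, (nd, nxt))
    if goal != start and goal not in prev:
        return []  # goal unreachable
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

grid = [[0, 0, 0, 0, 0],
        [1, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1],
        [0, 0, 0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (4, 4)))
```

The resulting cell sequence is discrete, which is why the abstract mentions smoothing it with spline approximation on board the robot.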

Proceedings ArticleDOI
11 Jul 2021
TL;DR: In this paper, a 3D bounding box representation and a physically reasonable 3D motion model relying on an unscented Kalman filter based approach are presented for tracking vehicles in an urban surveillance scenario.
Abstract: Urban Traffic Surveillance (UTS) is a surveillance system based on a monocular, calibrated video camera that detects vehicles in an urban traffic scenario with dense traffic on multiple lanes and vehicles performing sharp turning maneuvers. UTS then tracks the vehicles using a 3D bounding box representation and a physically reasonable 3D motion model relying on an unscented Kalman filter based approach. Since UTS recovers position, shape, and motion information in a three-dimensional world coordinate system, it can be employed to recognize diverse traffic violations or to supply intelligent vehicles with valuable traffic information. We build on YOLOv3 as a detector yielding 2D bounding boxes and class labels for each vehicle. A 2D detector renders our system much more independent of different camera perspectives, as a variety of labeled training data is available. This allows for good generalization while also being more hardware efficient. The task of 3D tracking based on 2D detections is supported by integrating class-specific prior knowledge about vehicle shape. We quantitatively evaluate UTS using self-generated synthetic data and ground truth from the CARLA simulator, owing to the non-existence of datasets with an urban vehicle surveillance setting and labeled 3D bounding boxes. Additionally, we give a qualitative impression of how UTS performs on real-world data. Our implementation is capable of operating in real time on a reasonably modern workstation. To the best of our knowledge, UTS is to date the only 3D vehicle tracking system for a surveillance scenario (static camera observing moving targets).

Journal ArticleDOI
16 Nov 2021-Sensors
TL;DR: In this paper, a video processing pipeline is presented to extract riding lines in cyclocross races, which consists of a stepwise analysis process to extract the riding behavior from a region (i.e., the fence) in a video camera feed.
Abstract: Video-based trajectory analysis is rather well studied in sports such as soccer or basketball, but in cycling it is far less common. In this paper, a video processing pipeline to extract riding lines in cyclocross races is presented. The pipeline consists of a stepwise analysis process to extract riding behavior from a region (i.e., the fence) in a video camera feed. In the first step, the riders are identified by an AlphaPose skeleton detector and tracked with a spatiotemporally aware pose tracker. Next, each detected pose is enriched with additional meta-information, such as rider modus (e.g., sitting on the saddle or standing on the pedals) and detected team (based on the worn jerseys). Finally, a post-processor brings all the information together and proposes riding lines with meta-information for the riders in the fence. The presented methodology can provide interesting insights, such as intra-athlete riding-line clustering, anomaly detection, and detailed breakdowns of riding and running durations within the segment. Such detailed rider information can be very valuable for performance analysis, storytelling, and automatic summarization.

Proceedings ArticleDOI
01 Jul 2021
TL;DR: In this article, a robot system that can generate a map using a collaborative ground-based robot is presented; mapping is done by an SBC using a video camera and a convex mirror.
Abstract: The autonomous mobile robot paradigm has started to shift toward collaborative robots. This paper presents a robot system that can generate a map using a collaborative ground-based robot. Mapping is done by an SBC (single-board computer) using a video camera and a convex mirror. The final purpose of the robot is to send the positions of obstacles in real time to the other robots in the swarm.

DOI
06 Oct 2021
TL;DR: In this article, a method of measuring, with a video camera, the distance between the user and the camera is presented; it is designed to be exploited industrially for applications such as Human-Computer Interaction (HCI).
Abstract: This paper presents a method of measuring, with a video camera, the distance between the user and the camera, designed to be exploited industrially for applications such as Human-Computer Interaction (HCI). The measurement method uses a statistically determined average interpupillary distance for both men and women and has been implemented in Python. Eye position detection was performed using the OpenCV library.
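A sketch of the underlying pinhole geometry: distance = focal length (in pixels) × real interpupillary distance / measured pixel distance, with OpenCV's stock Haar eye cascade standing in for the paper's exact detection setup. The ~63 mm average IPD and the focal length are assumed values; the latter would come from camera calibration.

```python
import cv2

IPD_MM = 63.0     # assumed average adult interpupillary distance
FOCAL_PX = 950.0  # assumed focal length in pixels (from calibration)

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def distance_mm(gray_frame):
    """Estimate camera-to-face distance from the pixel distance between eyes."""
    eyes = eye_cascade.detectMultiScale(gray_frame, 1.1, 5)
    if len(eyes) < 2:
        return None
    # Crude: use the first two detected eye boxes' centers as pupil estimates.
    (x1, y1, w1, h1), (x2, y2, w2, h2) = eyes[:2]
    c1 = (x1 + w1 / 2, y1 + h1 / 2)
    c2 = (x2 + w2 / 2, y2 + h2 / 2)
    ipd_px = ((c1[0] - c2[0]) ** 2 + (c1[1] - c2[1]) ** 2) ** 0.5
    return FOCAL_PX * IPD_MM / ipd_px
```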

Journal ArticleDOI
TL;DR: In this paper, the authors used video data and reverse projection photogrammetry to determine the speed of a vehicle with a limited set of variables; the analysis provided an accurate speed estimate with a standard degree of uncertainty.
Abstract: Reverse projection photogrammetry has long been used to estimate the height of an individual in forensic video examinations. A natural extrapolation is to apply the same technique to a video to estimate the speed of an object by determining the distance traveled between two points over a set amount of time. To test this theory, five digital video recorders (DVRs) were connected to a single fixed camera to record a vehicle traveling down a track. The vehicle's speed was measured by Doppler radar operated by a trained operator, and the speedometer of the vehicle was also recorded with a video camera. The recorded video was examined, and the frames that best depict the beginning and end of the vehicle's course were selected. Two reverse projection photogrammetric examinations were performed on the selected frames to establish the positions of the vehicle. The distance between the two points was measured, and the time elapsed between the two points was determined. The outcome was an accurate speed result with a standard degree of uncertainty. This study demonstrates the feasibility of using video data and reverse projection photogrammetry to determine the speed of a vehicle with a limited set of variables. Further research is needed to determine how additional variables would impact the standard degree of uncertainty.
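The arithmetic at the core of the technique is simply distance over the elapsed time between the two selected frames; a worked example with made-up numbers:

```python
def vehicle_speed_kmh(distance_m, frame_start, frame_end, fps):
    """Speed from two photogrammetrically fixed positions: distance traveled
    divided by the time elapsed between the selected frames."""
    elapsed_s = (frame_end - frame_start) / fps
    return (distance_m / elapsed_s) * 3.6  # m/s -> km/h

# e.g. 22.5 m covered between frames 100 and 145 at 29.97 fps -> ~54 km/h
print(vehicle_speed_kmh(22.5, 100, 145, 29.97))
```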