Video Fragmentation and Reverse Search on the Web.
References
Camera Motion-Based Analysis of User Generated Video
Feature-based video key frame extraction for low quality video sequences
Hierarchical Hidden Markov Model in Detecting Activities of Daily Living in Wearable Videos for Studies of Dementia
Near-lossless semantic video summarization and its applications to video analysis
Frequently Asked Questions (15)
Q2. What have the authors stated for future works in "Video fragmentation and reverse search on the web" ?
Regarding the future outlook of the presented technologies, and motivated by the adoption and use of the developed web application for reverse video search by hundreds of users on a daily basis (through its integration into the InVID Verification Plugin), their work will focus on: a) the user-based evaluation of the efficiency of the algorithm of Section 1.3.1.2 to produce a comprehensive and thorough keyframe-based summary of the video content; b) the possibility to combine the algorithms of Sections 1.3.1.1 and 1.3.1.2 in order to exploit the fragmentation accuracy of the latter and the visual discrimination efficiency of the former (especially in the keyframe selection part of the process); c) the exploitation of modern deep-network architectures (such as DCNNs and LSTMs) for advancing the accuracy of the video fragmentation process; and d) the further improvement of the keyframe selection process to minimize the possibility of extracting black or blurred video frames of limited usability for the user, thus aiming at an overall amelioration of the tool's effectiveness.
Q3. What is the main reason for the rise of UGVs?
The ubiquitous use of video capturing devices, combined with the ease of sharing videos through social networks and video-sharing platforms, leads to a wealth of UGVs available online.
Q4. Why did the authors build their own ground-truth dataset?
Driven by the lack of publicly available datasets for evaluating the performance of video sub-shot fragmentation algorithms, the authors built their own ground-truth dataset.
Q5. What are the three recent platforms that assist the detection and retrieval of near-duplicates of images and videos?
Last but not least, three recently developed platforms that assist the detection and retrieval of near-duplicate images and videos are Berify, RevIMG and Videntifier.
Q6. What is the way to produce fake news?
One of the easiest ways to produce fake news (such fakes are known as "easy fakes" in the media verification community) is to reuse a video from an earlier circumstance with the assertion that it presents a current event, with the aim of deliberately misleading viewers about the event.
Q7. What is the main challenge of the in-time identification of media?
The in-time identification of media posted online that (claim to) illustrate a (breaking) news event is, for many journalists, the foremost challenge in meeting deadlines to publish a news story online or fill a news broadcast with content.
Q8. What is the procedure used to estimate the motion between a pair of neighboring frames?
The motion between a pair of neighboring frames is estimated by computing the region-level optical flow based on the procedure depicted in Fig. 1.5, which consists of the following steps:
• each frame undergoes an image resizing process that maintains the original aspect ratio and makes the frame width equal to w, and is then spatially fragmented into four quartiles;
• the most prominent corners in each quartile are detected based on the algorithm of [38];
• the detected corners are used for estimating the optical flow at the region level by utilizing the Pyramidal Lucas-Kanade (PLK) method;
• based on the extracted optical flow, a mean displacement vector is computed for each quartile, and the four spatially distributed vectors are treated as a region-level representation of the motion activity between the pair of frames.
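The final step above, computing a mean displacement vector per quartile, can be sketched in pure Python. The helper name `quartile_mean_displacements` and the `(x, y, dx, dy)` track format are illustrative assumptions; corner detection and PLK flow estimation (e.g. via OpenCV) are assumed to have been run already:

```python
def quartile_mean_displacements(tracks, width, height):
    """Compute a mean displacement vector per frame quartile.

    `tracks` is a list of (x, y, dx, dy) tuples: a corner position in
    the first frame and its optical-flow displacement to the next frame
    (e.g. as estimated by the Pyramidal Lucas-Kanade method). The frame
    is split into four quartiles (a 2x2 grid); the four mean vectors
    form a region-level representation of the motion activity.
    """
    sums = [[0.0, 0.0] for _ in range(4)]
    counts = [0] * 4
    for x, y, dx, dy in tracks:
        # Quartile index: 0 = top-left, 1 = top-right,
        # 2 = bottom-left, 3 = bottom-right.
        q = (1 if x >= width / 2 else 0) + (2 if y >= height / 2 else 0)
        sums[q][0] += dx
        sums[q][1] += dy
        counts[q] += 1
    # Quartiles with no detected corners get a zero vector.
    return [
        (s[0] / c, s[1] / c) if c else (0.0, 0.0)
        for s, c in zip(sums, counts)
    ]
```

The four returned vectors can then be compared across frame pairs to characterize camera or scene motion.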
Q9. How was the performance of the tested approaches evaluated?
For each of the tested approaches, the number of correct detections (where the detected boundary may lie within a temporal window around the respective ground-truth boundary, equal to twice the video frame-rate), misdetections and false alarms were counted, and the algorithms' performance was expressed in terms of Precision (P), Recall (R) and F-Score (F), similarly to [1, 2].
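A minimal sketch of this evaluation protocol, assuming boundaries are given as frame indices and interpreting "a window equal to twice the frame-rate" as a tolerance of ±frame_rate frames centered on each ground-truth boundary; the helper name `evaluate_boundaries` is hypothetical:

```python
def evaluate_boundaries(detected, ground_truth, frame_rate):
    """Score detected fragment boundaries against ground truth.

    A detection is correct if it lies within +/- frame_rate frames of
    an as-yet-unmatched ground-truth boundary (a window of twice the
    frame-rate centered on the boundary). Unmatched detections are
    false alarms; unmatched ground-truth boundaries are misdetections.
    """
    tolerance = frame_rate
    unmatched = list(ground_truth)
    correct = 0
    for d in detected:
        match = next((g for g in unmatched if abs(d - g) <= tolerance), None)
        if match is not None:
            unmatched.remove(match)  # each boundary matched at most once
            correct += 1
    precision = correct / len(detected) if detected else 0.0
    recall = correct / len(ground_truth) if ground_truth else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score
```

For example, with a 25 fps video, a detection at frame 100 matches a ground-truth boundary at frame 110, while one at frame 500 with no boundary nearby counts as a false alarm.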
Q10. What is the effective method for generating keyframe summaries?
The keyframe selection strategy of the first alternative, combined with the competitive performance of the InVID approach in most examined cases, indicates the InVID method as the most efficient one for generating keyframe-based video summaries that are well balanced according to the determined criteria for the descriptiveness (completion) and representativeness (conciseness) of the keyframe collection.
Q11. What are the main methods that are related to the analysis of rushes video?
Most of them are related to approaches for video summarization and keyframe selection (e.g. [21, 9, 29, 15]), some focus on the analysis of egocentric or wearable videos (e.g. [27, 41, 19]), others aim to address the need for detecting duplicates of videos (e.g. [8]), a number of them are related to the indexing and annotation of personal videos (e.g. [28]), while a group of methods targeted the indexing and summarization of rushes video (e.g. [12, 25, 4, 36]).
Q12. What is the common type of fake news?
One type of fakes, probably the easiest to produce and thus one of the most commonly encountered by journalists, relies on the reuse of a video from an earlier event with the claim that it shows a contemporary event.
Q13. What technology has made it possible to embed powerful, high-resolution video sensors into portable devices?
The recent advances in video capturing technology have made it possible to embed powerful, high-resolution video sensors into portable devices, such as camcorders, digital cameras, tablets and smartphones.
Q14. What is the general approach for motion-based video parsing?
Contrary to the use of experimentally-defined thresholds for categorizing the detected camera motion, [18] describes a generic approach for motion-based video parsing that estimates the affine motion parameters, either based on motion vectors of the MPEG-2 stream or by applying a frame-to-frame image registration process, factorizes their values via Singular Value Decomposition (SVD) and imports them into three multi-class Support Vector Machines (SVMs) to recognize the camera motion type and direction between successive video frames.
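The SVD factorization step of [18] can be illustrated with a small sketch. The exact feature construction in the paper is not spelled out here, so the function below is only a plausible reading: stack the per-frame-pair affine motion parameters into a matrix, factorize it via SVD, and project each row onto the top singular directions to obtain compact features for the downstream SVMs. The name `svd_motion_features` and the choice of `k` are assumptions:

```python
import numpy as np

def svd_motion_features(affine_params, k=3):
    """Factorize frame-to-frame affine motion parameters via SVD.

    `affine_params` has one row per frame pair (e.g. the six parameters
    of an affine motion model, estimated from MPEG-2 motion vectors or
    frame-to-frame image registration). Projecting each row onto the
    top-k right singular vectors yields a compact feature vector that
    could be fed to multi-class SVMs for recognizing camera motion type
    and direction. This is an illustrative sketch, not the paper's
    exact pipeline.
    """
    A = np.asarray(affine_params, dtype=float)
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    k = min(k, Vt.shape[0])
    return A @ Vt[:k].T  # shape: (num_frame_pairs, k)
```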
Q15. How does the algorithm compute the similarity scores of video frames?
Driven by this observation, the algorithm does not apply the aforementioned pair-wise similarity estimation to every pair of consecutive video frames, but only to neighboring frames selected via a frame-sampling strategy that keeps 3 equally distant frames per second.
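The sampling strategy can be sketched as follows; the helper name `sample_frame_indices` is hypothetical, and rounding to the nearest integer frame index is an assumption:

```python
def sample_frame_indices(total_frames, fps, per_second=3):
    """Select `per_second` equally spaced frame indices per second.

    Pair-wise similarity is then computed only between neighboring
    sampled frames, instead of between every pair of consecutive
    frames, which cuts the number of comparisons roughly by fps / 3.
    """
    step = fps / per_second  # spacing between sampled frames
    indices = []
    i = 0.0
    while round(i) < total_frames:
        indices.append(int(round(i)))
        i += step
    return indices
```

For a 25 fps video this keeps roughly every 8th frame, so a one-minute clip yields about 180 sampled frames instead of 1500.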