Author

Mohamed Sedky

Bio: Mohamed Sedky is an academic researcher from Staffordshire University. The author has contributed to research topics including Anomaly detection and Pixel. The author has an h-index of 9 and has co-authored 30 publications receiving 320 citations.

Papers
Proceedings ArticleDOI
25 May 2015
TL;DR: This paper describes a security alarm system built on low-processing-power chips and the Internet of Things; it monitors a space, raises alarms when motion is detected, and sends photos and videos to a cloud server.
Abstract: The Internet of Things is the communication of anything with any other thing, mainly the transfer of usable data; for example, a sensor in a room to monitor and control the temperature. It is estimated that by 2020 there will be about 50 billion internet-enabled devices. This paper describes a security alarm system that uses low-processing-power chips and the Internet of Things to monitor a space, raise alarms when motion is detected, and send photos and videos to a cloud server. Moreover, an Internet of Things based application can be used remotely to view the activity and receive notifications when motion is detected. Photos and videos are sent directly to a cloud server; when the cloud is not available, the data is stored locally on the Raspberry Pi and sent when the connection resumes. Advantages like these make the application well suited to monitoring homes in the occupants' absence.
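The store-and-forward behaviour described above (upload to the cloud when reachable, otherwise buffer on the Raspberry Pi and flush once the connection resumes) can be sketched as follows; class and method names (`AlarmUploader`, `CloudStub`) are illustrative, not from the paper:

```python
import collections

class AlarmUploader:
    """Store-and-forward sketch: buffer captures locally while the cloud
    is unreachable, then flush the backlog when the connection resumes."""

    def __init__(self, cloud):
        self.cloud = cloud
        self.local_queue = collections.deque()  # stands in for local Pi storage

    def capture(self, item):
        # Try the cloud first; fall back to local storage on failure.
        if not self.cloud.send(item):
            self.local_queue.append(item)

    def on_connection_resumed(self):
        # Flush buffered items in the order they were captured.
        while self.local_queue and self.cloud.send(self.local_queue[0]):
            self.local_queue.popleft()

class CloudStub:
    """Stand-in for the cloud server, toggled online/offline for testing."""

    def __init__(self):
        self.online = False
        self.received = []

    def send(self, item):
        if self.online:
            self.received.append(item)
            return True
        return False
```

The queue preserves capture order, so the cloud receives the backlog in the same sequence the events occurred.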

101 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: This paper presents and assesses Spectral-360, a novel physics-based change detection technique based on the dichromatic color reflectance model; objective evaluation on the 'changedetection.net 2014' dataset shows that it outperforms most state-of-the-art methods.
Abstract: This paper presents and assesses a novel physics-based change detection technique, Spectral-360, which is based on the dichromatic color reflectance model. This approach uses image formation models to computationally estimate, from the camera output, a consistent physics-based color descriptor of the spectral reflectance of surfaces visible in the image, and then measures the similarity between the full-spectrum reflectance of the background and foreground pixels to segment the foreground from a static background. This method represents a new approach to change detection, using explicit hypotheses about the physics that creates images. The assumptions made are that diffuse-only reflection applies and that a dominant illuminant exists. The objective evaluation performed using the 'changedetection.net 2014' dataset shows that our Spectral-360 method outperforms most state-of-the-art methods.
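The segmentation step described above, comparing the background model's reflectance descriptor with each pixel's descriptor, can be sketched with a simple cosine (spectral-angle-style) similarity test; the physics-based descriptor estimation, which is the paper's actual contribution, is abstracted away here and descriptors are plain lists of floats:

```python
import math

def spectral_similarity(a, b):
    """Cosine similarity between two spectral reflectance descriptors.
    Illustrative simplification: Spectral-360 derives its descriptors
    from the dichromatic model; here they are given directly."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def segment(background, frame, threshold=0.99):
    """Label a pixel as foreground when its reflectance descriptor
    diverges from the background descriptor at the same location."""
    return [spectral_similarity(bg, px) < threshold
            for bg, px in zip(background, frame)]
```

A pixel whose reflectance matches the static background scores near 1.0 and is kept as background; a dissimilar surface drops below the threshold and is marked foreground.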

76 citations

Journal ArticleDOI
02 May 2017-Sensors
TL;DR: A new hybrid, open-source, cross-platform 3D smart home simulator, OpenSHS, for dataset generation that combines advantages from both interactive and model-based approaches and reduces the time and effort required to generate simulated smart home datasets.
Abstract: This paper develops a new hybrid, open-source, cross-platform 3D smart home simulator, OpenSHS, for dataset generation. OpenSHS offers an opportunity for researchers in the fields of the Internet of Things (IoT) and machine learning to test and evaluate their models. Following a hybrid approach, OpenSHS combines advantages from both interactive and model-based approaches. This approach reduces the time and effort required to generate simulated smart home datasets. We have designed a replication algorithm for extending and expanding a dataset. A small sample dataset produced by OpenSHS can be extended without affecting the logical order of the events. The replication provides a solution for generating large representative smart home datasets. We have built an extensible library of smart devices that facilitates the simulation of current and future smart home environments. Our tool divides the dataset generation process into three distinct phases: first, design: the researcher designs the initial virtual environment by building the home, importing smart devices and creating contexts; second, simulation: the participant simulates his/her context-specific events; and third, aggregation: the researcher applies the replication algorithm to generate the final dataset. We conducted a study to assess the ease of use of our tool on the System Usability Scale (SUS).
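The replication idea, extending a small sample dataset without disturbing the logical order of events, can be sketched as repeating a recorded session with shifted timestamps; the paper's actual replication algorithm is more elaborate, and the function below is an illustrative simplification where each event is a (timestamp, reading) pair:

```python
def replicate(events, copies, period):
    """Extend a short simulated session by repeating it with shifted
    timestamps, so the relative event order within each copy is
    preserved. `period` is the time span of one session; values here
    are illustrative, not the algorithm from the paper."""
    out = []
    for i in range(copies):
        offset = i * period
        out.extend((t + offset, reading) for t, reading in events)
    return out
```

Because each copy is offset by a full session period, the concatenated dataset remains monotonically ordered in time and each session keeps its internal event logic.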

59 citations

Proceedings ArticleDOI
01 Jan 2005
TL;DR: A generic model of smart video surveillance systems that can meet the requirements of strong commercial applications is defined, and the capabilities of computer vision algorithms are related to the requirements of commercial applications.
Abstract: Video surveillance has a large market, as the number of installed cameras around us shows. There are immediate commercial needs for smart video surveillance systems that can make use of the existing camera network (e.g. CCTV) for more intelligent security systems and can contribute to applications beyond security alone. This work introduces a new classification of smart video surveillance systems according to their commercial applications. This paper highlights the links between research and commercial applications. The work reported here has both research and commercial motivations. Our goals are, first, to define a generic model of smart video surveillance systems that can meet the requirements of strong commercial applications, and second, to categorize different smart video surveillance applications and to relate the capabilities of computer vision algorithms to the requirements of commercial applications.

46 citations

Proceedings ArticleDOI
28 Jul 2020
TL;DR: This work proposes a CNN-based strategy for learning an RGB to hyperspectral cube mapping by jointly learning a set of basis functions and weights and using both to reconstruct the hyperspectral signatures of RGB data.
Abstract: Single RGB image hyperspectral reconstruction has seen a boost in performance and research attention with the emergence of CNNs and the greater availability of RGB/hyperspectral datasets. This work proposes a CNN-based strategy for learning an RGB to hyperspectral cube mapping by jointly learning a set of basis functions and weights and using both to reconstruct the hyperspectral signatures of RGB data. Further to this, an unsupervised learning strategy is also proposed, which extends the supervised model with an unsupervised loss function that enables it to learn in an end-to-end, fully self-supervised manner. The supervised model outperforms a baseline model of the same CNN architecture, and the unsupervised learning model shows promising results. Code will be made available online.
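The reconstruction step described above, combining learned basis functions with predicted weights, amounts to a weighted sum over spectra. A minimal sketch, with shapes and names assumed for illustration:

```python
def reconstruct_signature(weights, basis):
    """Reconstruct a per-pixel hyperspectral signature as a weighted sum
    of learned basis functions, the combination the abstract describes.
    `basis` is a list of K spectra (each with B bands); `weights` are
    the K per-pixel coefficients a CNN would predict from the RGB
    input. Pure-Python sketch; shapes and names are illustrative."""
    bands = len(basis[0])
    return [sum(w * spectrum[b] for w, spectrum in zip(weights, basis))
            for b in range(bands)]
```

In the paper's setting the basis spectra are shared across the image while the weights vary per pixel, so the CNN only needs to predict K coefficients per pixel rather than a full B-band spectrum.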

26 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: This paper presents the latest release of the changedetection.net dataset, which includes 22 additional videos spanning 5 new categories that incorporate challenges encountered in many surveillance settings; it highlights strengths and weaknesses of the submitted methods and identifies remaining issues in change detection.
Abstract: Change detection is one of the most important low-level tasks in video analytics. In 2012, we introduced the changedetection.net (CDnet) benchmark, a video dataset devoted to the evaluation of change and motion detection approaches. Here, we present the latest release of the CDnet dataset, which includes 22 additional videos (70,000 pixel-wise annotated frames) spanning 5 new categories that incorporate challenges encountered in many surveillance settings. We describe these categories in detail and provide an overview of the results of more than a dozen methods submitted to the IEEE Change Detection Workshop 2014. We highlight strengths and weaknesses of these methods and identify remaining issues in change detection.

680 citations

Journal ArticleDOI
TL;DR: This paper presents a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes, which allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored.
Abstract: Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Moreover, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instruction, reached real-time processing speed on a mid-level desktop CPU. A complete C++ implementation based on OpenCV is available online.
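The pixel-level feedback idea, adjusting internal parameters from continuously monitored local segmentation noise rather than using frame-wide constants, can be sketched for a single pixel's decision threshold; the constants and the single-term update below are illustrative assumptions, not the paper's actual coupled feedback rules:

```python
def update_threshold(threshold, is_noisy, lo=0.1, hi=2.0, step=0.05):
    """Per-pixel feedback sketch: raise the decision threshold where
    segmentation noise is observed (suppressing flicker), relax it
    where the model is stable (staying sensitive to real changes).
    Bounds and step size are illustrative, not from the paper."""
    if is_noisy:
        return min(hi, threshold + step)
    return max(lo, threshold - step)
```

Run once per pixel per frame, this kind of loop lets noisy regions (e.g. dynamic backgrounds) become less sensitive while quiet regions keep a low threshold, with no user-set global constant.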

603 citations

Journal ArticleDOI
TL;DR: This work presents a novel background subtraction algorithm for video sequences that uses a deep Convolutional Neural Network (CNN) to perform the segmentation, and it outperforms the existing algorithms with respect to the average ranking over the different evaluation metrics announced in CDnet 2014.

331 citations

Proceedings ArticleDOI
23 May 2016
TL;DR: This work presents a background subtraction algorithm based on spatial features learned with convolutional neural networks (ConvNets) that at least matches the performance of state-of-the-art methods, and even outperforms them significantly when scene-specific knowledge is considered.
Abstract: Background subtraction is usually based on low-level or hand-crafted features such as raw color components, gradients, or local binary patterns. As an improvement, we present a background subtraction algorithm based on spatial features learned with convolutional neural networks (ConvNets). Our algorithm uses a background model reduced to a single background image and a scene-specific training dataset to feed ConvNets that prove able to learn how to subtract the background from an input image patch. Experiments conducted on the 2014 ChangeDetection.net dataset show that our ConvNet-based algorithm at least matches the performance of state-of-the-art methods, and even outperforms them significantly when scene-specific knowledge is considered.
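The input construction the abstract describes, pairing a patch of the single background image with the co-located patch of the current frame before feeding a ConvNet, can be sketched as follows; the network itself is omitted and the non-overlapping tiling scheme is an assumption made for illustration:

```python
def make_patch_pairs(background, frame, size):
    """Build ConvNet inputs as (background patch, frame patch) pairs
    taken from the same location, tiling the image with non-overlapping
    size x size patches. Images are 2-D lists of grayscale values;
    purely illustrative, not the paper's exact pipeline."""
    h, w = len(frame), len(frame[0])
    pairs = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            bg_patch = [row[x:x + size] for row in background[y:y + size]]
            fr_patch = [row[x:x + size] for row in frame[y:y + size]]
            pairs.append((bg_patch, fr_patch))
    return pairs
```

Each pair would then be stacked channel-wise and passed to the network, which learns to decide whether the frame patch differs from the background patch at that location.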

292 citations