
Showing papers by "Yaser Jararweh published in 2019"


Journal ArticleDOI
01 Jul 2019
TL;DR: This work introduces an automated secure continuous cloud service availability framework for smart connected vehicles that enables an intrusion detection mechanism against security attacks and provides services that meet users’ quality of service (QoS) and quality of experience (QoE) requirements.
Abstract: In the very near future, transportation will go through a transitional period that will shape the industry beyond recognition. Smart vehicles have played a significant role in the advancement of intelligent and connected transportation systems. Continuous vehicular cloud service availability in smart cities is becoming a crucial subscriber necessity which requires improvement in the vehicular service management architecture. Moreover, as smart cities continue to deploy diversified technologies to achieve assorted and high-performance cloud services, security issues with regard to communicating entities which share personal requester information still prevail. To mitigate these concerns, we introduce an automated secure continuous cloud service availability framework for smart connected vehicles that enables an intrusion detection mechanism against security attacks and provides services that meet users’ quality of service (QoS) and quality of experience (QoE) requirements. Continuous service availability is achieved by clustering smart vehicles into service-specific clusters. Cluster heads are selected for communication purposes with trusted third-party entities (TTPs) acting as mediators between service requesters and providers. The optimal services are then delivered from the selected service providers to the requesters. Furthermore, intrusion detection is accomplished through a three-phase data traffic analysis, reduction, and classification technique used to identify positive trusted service requests against false requests that may occur during intrusion attacks. The solution adopts deep belief and decision tree machine learning mechanisms used for data reduction and classification purposes, respectively. The framework is validated through simulations to demonstrate the effectiveness of the solution in terms of intrusion attack detection. The proposed solution achieved an overall accuracy of 99.43%, with a detection rate of 99.92%, a false positive rate of 0.96%, and a false negative rate of 1.53%.
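
A minimal sketch of the reduce-then-classify idea described above: a single restricted Boltzmann machine layer (scikit-learn's BernoulliRBM, standing in here for the paper's full deep belief network) feeds a decision tree classifier. The synthetic traffic features, sizes, and split are illustrative assumptions, not the paper's setup.

```python
# Sketch: two-stage intrusion detection — RBM-based feature reduction
# followed by decision-tree classification (synthetic data for illustration).
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 40))           # 40 traffic features per service request
y = rng.integers(0, 2, 1000)         # 1 = trusted request, 0 = intrusion

model = Pipeline([
    ("scale", MinMaxScaler()),                           # RBMs expect [0, 1] inputs
    ("reduce", BernoulliRBM(n_components=10, random_state=0)),
    ("classify", DecisionTreeClassifier(random_state=0)),
])
model.fit(X[:800], y[:800])
print("held-out accuracy:", model.score(X[800:], y[800:]))
```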

274 citations


Journal ArticleDOI
TL;DR: This paper proposes a state-of-the-art approach for aspect-based sentiment analysis of Arabic Hotels’ reviews using two implementations of long short-term memory (LSTM) neural networks and shows that the approaches outperform baseline research on both tasks.
Abstract: This paper proposes a state-of-the-art approach for aspect-based sentiment analysis of Arabic Hotels’ reviews using two implementations of long short-term memory (LSTM) neural networks. The first one is (a) a character-level bidirectional LSTM along with a conditional random field classifier (Bi-LSTM-CRF) for aspect opinion target expressions (OTEs) extraction, and the second one is (b) an aspect-based LSTM for aspect sentiment polarity classification in which the aspect-OTEs are considered as attention expressions to support the sentiment polarity identification. The proposed approaches are evaluated using a reference dataset of Arabic Hotels’ reviews. Results show that our approaches outperform baseline research on both tasks, with an enhancement of 39% for the task of aspect-OTEs extraction and 6% for the aspect sentiment polarity classification task.
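
The first component can be sketched as a character-level Bi-LSTM sequence tagger. The minimal PyTorch sketch below omits the CRF output layer from the paper (a plain per-character classification head stands in), and all vocabulary, tag, and size values are illustrative assumptions.

```python
# Sketch: character-level Bi-LSTM tagger for opinion-target extraction.
# The paper adds a CRF layer on top of these per-character scores.
import torch
import torch.nn as nn

class CharBiLSTMTagger(nn.Module):
    def __init__(self, n_chars=100, emb_dim=32, hidden=64, n_tags=3):  # B/I/O tags
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, char_ids):                 # (batch, seq_len) of char indices
        h, _ = self.lstm(self.emb(char_ids))     # (batch, seq_len, 2 * hidden)
        return self.out(h)                       # per-character tag scores

model = CharBiLSTMTagger()
scores = model(torch.randint(0, 100, (2, 50)))   # two dummy reviews, 50 chars each
print(scores.shape)                              # torch.Size([2, 50, 3])
```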

172 citations


Journal ArticleDOI
TL;DR: The results proved that the software used functioned perfectly up to a compression ratio of 30–40% of the raw images; any higher ratio would negatively affect the accuracy of the system.
Abstract: Despite the large body of work on fingerprint identification systems, most of it focused on using specialized devices. Due to the high price of such devices, some researchers directed their attention to digital cameras as an alternative source of fingerprint images. However, such sources introduce new challenges related to image quality. Specifically, most digital cameras compress captured images before storing them, leading to potential loss of information. This study addresses the need to determine the optimum fingerprint image compression ratio that ensures the fingerprint identification system’s high accuracy. The study is conducted using a large in-house dataset of raw images; all fingerprint information is therefore stored, so the compression ratio can be determined accurately. The results proved that the software used functioned perfectly up to a compression ratio of 30–40% of the raw images; any higher ratio would negatively affect the accuracy of the system.
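
A small sketch of the underlying experiment: re-encode a raw fingerprint image as JPEG at decreasing quality levels and report the resulting size ratio. The file name and the quality ladder are illustrative placeholders; in the study, identification accuracy would then be measured at each ratio.

```python
# Sketch: measure compressed size as a percentage of the raw image size.
import io
from PIL import Image

img = Image.open("fingerprint.png").convert("L")   # hypothetical raw capture
raw_size = len(img.tobytes())

for quality in (95, 75, 50, 30, 10):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)  # lossy re-encode
    ratio = 100 * buf.tell() / raw_size
    print(f"quality={quality}: {ratio:.1f}% of raw size")
```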

154 citations


Journal ArticleDOI
TL;DR: This survey presents a comprehensive overview of the works done so far on Arabic SA and tries to identify the gaps in the current literature, laying a foundation for future studies in this field.
Abstract: Sentiment analysis (SA) is a continuing field of research that lies at the intersection of many fields such as data mining, natural language processing, and machine learning. It is concerned with the automatic extraction of opinions conveyed in a certain text. Due to its vast applications, many studies have been conducted in the area of SA, especially on English texts, while other languages such as Arabic received less attention. This survey presents a comprehensive overview of the works done so far on Arabic SA (ASA). The survey groups published papers based on the SA-related problems they address and tries to identify the gaps in the current literature, laying a foundation for future studies in this field.

153 citations


Journal ArticleDOI
TL;DR: The privacy issues and factors that are essential to be considered for preserving privacy in SIoV environments are analyzed from different perspectives including the privacy of a person, behavior and action, communication, data and image, thoughts and feelings, location and space, and association.
Abstract: The Internet of Things (IoT) paradigm has integrated the sensor network silos to the Internet and enabled the provision of value-added services across these networks. These smart devices are now becoming socially conscious by following the social Internet of Things (SIoT) model that empowers them to create and maintain social relationships among them. The Social Internet of Vehicles (SIoV) is one application of SIoT in the vehicular domain that has evolved the existing intelligent transport system (ITS) and vehicular ad-hoc networks (VANETs) to the next phase of intelligence by adding a socializing aspect and constant connectivity. SIoV generates a massive amount of real-time data enriched with context and social relationship information about vehicles, drivers, passengers, and the surrounding environment. Therefore, the role of privacy management becomes essential in SIoV, as data is collected and stored at different layers of its architecture. The challenge of privacy is aggravated by the dynamic nature of SIoV, which poses a major threat to its adoption. Motivated by the need to address these aspects, this paper identifies the challenges involved in managing privacy in SIoV. Furthermore, the paper analyzes the privacy issues and factors that are essential to consider for preserving privacy in SIoV environments from different perspectives, including the privacy of a person, behavior and action, communication, data and image, thoughts and feelings, location and space, and association. In addition, the paper discusses blockchain-based solutions to preserve privacy for SIoV.

113 citations


Journal ArticleDOI
TL;DR: A new fog-to-fog (F2F) collaboration model is proposed that promotes offloading incoming requests among fog nodes, according to their load and processing capabilities, via a novel load balancing scheme known as the Fog Resource manAgeMEnt Scheme (FRAMES).

97 citations


Journal ArticleDOI
TL;DR: This research focuses on the smart employment of Internet of Multimedia sensors in smart farming to optimize the irrigation process and shows that the use of deep learning proves to be superior in the Internet of Multimedia Things environment.
Abstract: Efficiently managing the irrigation process has become necessary to utilize water stocks due to the lack of water resources worldwide. A parched plant goes through a labored breathing process, which results in yellowing leaves and sprinkles in the soil. In this work, yellowing leaves and sprinkles in the soil have been observed using multimedia sensors to detect the level of plant thirstiness in smart farming. We extend IoT concepts to draw inspiration towards the prospective vision of the ’Internet of Multimedia Things’ (IoMT). This research focuses on the smart employment of Internet of Multimedia sensors in smart farming to optimize the irrigation process. The concepts of image processing work with IoT sensors and machine learning methods to make the irrigation decision. Sensor readings have been used as a training dataset indicating the thirstiness of the plants, and machine learning techniques, including state-of-the-art deep learning, were used in the next phase to find the optimal decision. The experiments conducted in this research are promising and could be considered in any smart irrigation system. The experimental results showed that the use of deep learning proves to be superior in the Internet of Multimedia Things environment.
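
As a rough illustration of one multimedia feature the abstract mentions, the sketch below computes the share of yellowing pixels in a leaf image; the color thresholds and the role of the feature are assumptions for illustration, not the paper's values.

```python
# Sketch: crude "yellowing leaves" feature from a camera image. A classifier
# would combine this with soil-sensor readings to make the irrigation decision.
import numpy as np
from PIL import Image

def yellow_ratio(path):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    yellow = (r > 120) & (g > 120) & (b < 100)   # assumed yellow-pixel mask
    return yellow.mean()                          # fraction of yellowing pixels

print(yellow_ratio("leaf.jpg"))                   # hypothetical input image
```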

95 citations


Journal ArticleDOI
TL;DR: This article describes how to replicate data from the cloud to the edge, and then to mobile devices to provide faster data access for users, and shows how services can be composed in crowded environments using service-specific overlays.
Abstract: Densely crowded environments such as stadiums and metro stations have shown shortcomings when users request data and services simultaneously. This is due to the excessive amount of requested and generated traffic from the user side. Based on the wide availability of user smart-mobile devices, and noting their technological advancements, devices are not being categorized only as data/service requesters anymore, but are readily being transformed into data/service providing network-side tools. In essence, to offload some of the workload burden from the cloud, data can be either fully or partially replicated to edge and mobile devices for faster and more efficient data access in such dense environments. Moreover, densely crowded environments provide an opportunity to deliver, in a timely manner, through node collaboration, enriched user-specific services using the replicated data and device-specific capabilities. In this article, we first highlight the challenges that arise in densely crowded environments in terms of data/service management and delivery. Then we show how data replication and service composition are considered promising solutions for data and service management in densely crowded environments. Specifically, we describe how to replicate data from the cloud to the edge, and then to mobile devices to provide faster data access for users. We also discuss how services can be composed in crowded environments using service-specific overlays. We conclude the article with the main open research areas that remain to be investigated.

90 citations


Journal ArticleDOI
TL;DR: A workflow-net based framework for agent cooperation is proposed to enable collaboration among fog computing devices and form a cooperative IoT service delivery system and results show that the cooperation process increases the number of achieved tasks and is performed in a timely manner.
Abstract: Most Internet of Things (IoT)-based service requests require excessive computation which exceeds an IoT device’s capabilities. Cloud-based solutions were introduced to outsource most of the computation to the data center. The integration of multi-agent IoT systems with cloud computing technology makes it possible to provide faster, more efficient and real-time solutions. Multi-agent cooperation for distributed systems such as fog-based cloud computing has gained popularity in contemporary research areas such as service composition and IoT robotic systems. Enhanced cloud computing performance gains and fog site load distribution are direct achievements of such cooperation. In this article, we propose a workflow-net based framework for agent cooperation to enable collaboration among fog computing devices and form a cooperative IoT service delivery system. A cooperation operator is used to find the topology and structure of the resulting cooperative set of fog computing agents. The operator shifts the problem defined as a set of workflow-nets into algebraic representations to provide a mechanism for solving the optimization problem mathematically. IoT device resource and collaboration capabilities are properties which are considered in the selection process of the cooperating IoT agents from different fog computing sites. Experimental results in the form of simulation and implementation show that the cooperation process increases the number of achieved tasks and is performed in a timely manner.

57 citations


Journal ArticleDOI
TL;DR: The results show that the proposed optimal feature selection and neural network-based classification approach with overlapped frequency bands is an effective method for EEG classification as compared to previous techniques.
Abstract: A brain-computer interface translates electroencephalogram (EEG) signals into control commands so that paralyzed people can control assistive devices. This human thought translation is a very challenging process as EEG signals contain noise. For noise removal, a bandpass filter or a filter bank is used. However, these techniques also remove useful information from the signal. Furthermore, after feature extraction, there are features which do not play any significant role in effective classification. Thus, soft computing-based EEG classification followed by extraction and then selection of optimal features can produce better results. In this paper, subband common spatial patterns using sequential backward floating selection are proposed in order to classify motor-imagery-based EEG signals. The signal is decomposed into subbands using a filter bank with overlapping frequency cutoffs. Linear discriminant analysis followed by common spatial patterns is applied to the output of each filter for feature extraction. Then, sequential backward floating selection is applied to select the optimal features used to train radial basis function neural networks. Two different datasets have been used for the evaluation of results, i.e., the OpenBCI dataset and EEG signals acquired by the Emotiv Epoc. The proposed system shows overall accuracies of 93.05% and 85.00% for the two datasets, respectively. The results show that the proposed optimal feature selection and neural network-based classification approach with overlapped frequency bands is an effective method for EEG classification as compared to previous techniques.
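
The subband-plus-CSP front end can be sketched compactly: each trial is bandpass filtered into overlapping subbands, and CSP spatial filters are obtained from the generalized eigenproblem between the two classes' covariance matrices. Band edges, trial shapes, and the random data below are illustrative assumptions; the LDA and sequential backward floating selection stages are omitted.

```python
# Sketch: overlapped filter bank + common spatial patterns (CSP).
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, sosfiltfilt

def bandpass(trials, lo, hi, fs=250):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, trials, axis=-1)      # zero-phase subband filtering

def csp_filters(class_a, class_b):                # arrays: (trials, channels, samples)
    cov = lambda t: np.mean([x @ x.T / np.trace(x @ x.T) for x in t], axis=0)
    ca, cb = cov(class_a), cov(class_b)
    _, w = eigh(ca, ca + cb)                      # generalized eigenvectors
    return w                                      # columns = CSP spatial filters

rng = np.random.default_rng(0)
trials_a = rng.standard_normal((20, 8, 1000))     # class A: 20 trials, 8 channels
trials_b = rng.standard_normal((20, 8, 1000))     # class B
bands = [(4, 10), (8, 14), (12, 18), (16, 22)]    # illustrative overlapped subbands
for lo, hi in bands:
    w = csp_filters(bandpass(trials_a, lo, hi), bandpass(trials_b, lo, hi))
    # log-variance of the first/last CSP components would form the features
```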

54 citations


Journal ArticleDOI
TL;DR: Three sets of parallel implementations of the Needleman-Wunsch algorithm are presented using a mixture of specialized software and hardware solutions: POSIX Threads-based, SIMD extensions-based, and GPU-based implementations.
Abstract: The Needleman-Wunsch (NW) algorithm is a dynamic programming algorithm used in the pairwise global alignment of two biological sequences. In this paper, three sets of parallel implementations of the NW algorithm are presented using a mixture of specialized software and hardware solutions: POSIX Threads-based, SIMD extensions-based, and GPU-based implementations. The three implementations aim at improving the performance of the NW algorithm on large-scale input without affecting its accuracy. Our experiments show that the GPU-based implementation is the best, as it achieves performance 72.5X faster than the sequential implementation, whereas the best performance achieved by the POSIX Threads and SIMD techniques is 2X and 18.2X faster than the sequential implementation, respectively.
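
For context, a reference sequential NW scorer is sketched below; the paper's parallel versions fill the same dynamic-programming matrix using threads, SIMD lanes, or GPU blocks along the anti-diagonals, whose cells are mutually independent. Scoring constants are the usual illustrative defaults.

```python
# Sketch: sequential Needleman-Wunsch global alignment score.
import numpy as np

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1), dtype=int)
    H[:, 0] = gap * np.arange(n + 1)              # leading gaps in b
    H[0, :] = gap * np.arange(m + 1)              # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(H[i - 1, j - 1] + s,    # align a[i-1] with b[j-1]
                          H[i - 1, j] + gap,      # gap in b
                          H[i, j - 1] + gap)      # gap in a
    return H[n, m]                                # optimal global alignment score

print(needleman_wunsch("GATTACA", "GCATGCU"))
```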

Proceedings ArticleDOI
20 May 2019
TL;DR: A unified service architecture is proposed enabling seamless handover between a 5G (New Generation Core) service and a 4G (Evolved Packet Core) service via the network slicing paradigm, using an identifier-locator concept that allows active source-IP sessions to handle the seamless handover.
Abstract: Mobile Edge Computing (MEC) and Network Slicing techniques have a potential to augment 5G-IoT network services. Telecommunication operators use a diverse set of radio access technologies to provide services for users. Mobility management is one such service that needs attention for new 5G deployments. The QoS requirements in 5G networks are user specific. Network slicing along with MEC has been promoted as a key enabler for such on-demand service schemes. This paper focuses on radio resource access across heterogeneous networks for mobile roaming users. A unified service architecture is proposed enabling seamless handover between a 5G (New Generation Core) service and a 4G (Evolved Packet Core) service via the network slicing paradigm. An identifier-locator (I-L) concept that allows active source-IP sessions is used to handle the seamless handover. Signaling costs, service disruptions and other resource reservation requirements are considered in the evaluation to assure that profit for mobile edge operators is achieved. Simulation experiments provide performance comparisons against the state-of-the-art Distributed Mobility Management Protocol (DMM).

Journal ArticleDOI
TL;DR: An algorithm for segmenting medical volumes based on multiresolution analysis is proposed, which aims to segment medical volumes under various conditions and in different axial representations.
Abstract: Medical images have a very significant impact on the diagnosis and treatment of patient ailments and on radiology applications. For many reasons, processing medical images can greatly improve the quality of radiologists’ work. While 2D models have been in use for medical applications for decades, widespread utilization of 3D models appeared only in recent years. The work proposed in this paper aims to segment medical volumes under various conditions and in different axial representations. In this paper, we propose an algorithm for segmenting medical volumes based on multiresolution analysis. Different 3D volume reconstructed versions have been considered to produce robust and accurate segmentation results. The proposed algorithm is validated using real medical and phantom data. Processing time, segmentation accuracy on predefined datasets, and radiologists’ opinions were the key factors for method validation.


Proceedings ArticleDOI
11 Jun 2019
TL;DR: A novel approach called RecDNNing that combines user and item embeddings with a deep neural network to predict rating scores by applying the forward propagation algorithm, evaluated on MovieLens.
Abstract: The success of applying deep learning to many domains has generated strong interest in developing new, revolutionary recommender systems. However, there is little work studying recommender systems that employ deep learning; additionally, there is no study showing how to combine user and item embeddings with deep learning to enhance the effectiveness of recommender systems. Therefore, this paper proposes a novel approach called RecDNNing that combines embedded users and items with a deep neural network. The proposed recommendation approach consists of two phases. In the first phase, we create a dense numeric representation for each user and item, called the user embedding and item embedding, respectively. Following that, the user and item embeddings are averaged and then concatenated before being fed into the deep neural network. In the second phase, we use the deep neural network model to take the concatenated user and item embeddings as inputs in order to predict the rating scores by applying the forward propagation algorithm. The experimental results on MovieLens show that the proposed RecDNNing outperforms state-of-the-art algorithms.
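
A minimal Keras sketch of this architecture as the abstract describes it: user and item embeddings are concatenated and passed through a feed-forward network that regresses the rating. Embedding dimensions, layer sizes, and the MovieLens-like counts are assumptions.

```python
# Sketch: embedding + deep-network rating predictor in the RecDNNing style.
import tensorflow as tf
from tensorflow.keras import layers

n_users, n_items, dim = 6040, 3706, 32            # assumed MovieLens-1M-like sizes

u_in = layers.Input(shape=(1,), dtype="int32")
i_in = layers.Input(shape=(1,), dtype="int32")
u_emb = layers.Flatten()(layers.Embedding(n_users, dim)(u_in))
i_emb = layers.Flatten()(layers.Embedding(n_items, dim)(i_in))

x = layers.Concatenate()([u_emb, i_emb])          # concatenated embeddings
for units in (64, 32):                            # hidden layers (forward pass)
    x = layers.Dense(units, activation="relu")(x)
rating = layers.Dense(1)(x)                       # predicted rating score

model = tf.keras.Model([u_in, i_in], rating)
model.compile(optimizer="adam", loss="mse")
# model.fit([user_ids, item_ids], ratings, epochs=5)
```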

01 Jan 2019
TL;DR: This paper describes the method for the Medical Domain Visual Question Answering (VQA-Med) Task of ImageCLEF 2019, and proposes a model that is able to answer questions about medical images using a pre-trained VGG16 network.
Abstract: This paper describes our method for the Medical Domain Visual Question Answering (VQA-Med) Task of ImageCLEF 2019. The aim is to build a model that is able to answer questions about medical images. Our proposed model consists of sub-models, each specializing in answering a specific type of question. Specifically, the sub-models we have are: “plane” model, “organ systems” model, “modality” models, and “abnormality” models. All of these models are basically image classification models based on a pre-trained VGG16 network. We do not rely on the questions for answer prediction since the questions of each type are repetitive. However, we do rely on them to determine the suitable model to be used for producing the answers and to determine the suitable answer format. Our best model achieves 57% accuracy and a 0.591 BLEU score.
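
One such sub-model can be sketched as standard VGG16 transfer learning: a frozen ImageNet backbone with a small classification head, one output unit per candidate answer. The class count and input size below are illustrative assumptions, not the team's exact configuration.

```python
# Sketch: one classification sub-model (e.g., the "plane" model) on VGG16.
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                            # keep pre-trained features fixed

plane_model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(16, activation="softmax"),       # one unit per candidate answer
])
plane_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```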

Proceedings ArticleDOI
01 Oct 2019
TL;DR: A multi-label, multi-target dataset of Arabic tweets annotated for emotion analysis has been built; different experts participated in the annotation process, and Cohen's kappa measure was employed to determine their concordance.
Abstract: Emotion Analysis (EA) is the process of determining whether a text carries any emotion. EA has spread significantly in recent years, especially for social media applications, as applied to tweets and Facebook posts. A common recent assumption is that each social media post has no intensity or carries a single emotion. Different cases of public posts have been considered in this work, which focuses on several emotions (multi-label) included in a single post. Twitter posts (tweets) have been employed to validate the proposed work; it is possible to have different intensities related to each tweet (multi-target). The proposed work focuses on Arabic-language tweets, unlike previous work, which focused on other languages such as English or Chinese. A multi-label, multi-target dataset of Arabic tweets annotated for emotion analysis has been built; different experts participated in the annotation process, and Cohen's kappa measure was employed to determine their concordance.
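
The agreement check is a standard computation; a minimal sketch using scikit-learn follows, with illustrative per-tweet labels for a single emotion from two annotators.

```python
# Sketch: Cohen's kappa between two annotators for one emotion label.
from sklearn.metrics import cohen_kappa_score

annotator_1 = [1, 0, 1, 1, 0, 1, 0, 0]   # e.g., "anger" present per tweet
annotator_2 = [1, 0, 1, 0, 0, 1, 0, 1]
print(cohen_kappa_score(annotator_1, annotator_2))   # chance-corrected agreement
```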

Proceedings ArticleDOI
01 Oct 2019
TL;DR: Social media, web applications, and mobile applications have been employed together in the proposed system to manage search in the rapidly growing worldwide web, resulting in a fast and convenient search engine that fulfills users' requests based on specific geolocations.
Abstract: The large size and the dynamic nature of the Web highlight the need for continuous support and updating of Web-based information retrieval systems. Crawlers facilitate the process by following the hyperlinks in Web pages to automatically download a partial snapshot of the Web. While some systems rely on crawlers that exhaustively crawl the Web, others incorporate focus within their crawlers to harvest application- or topic-specific collections. This project studied web crawling and scraping at many different levels, aggregating information from multiple sources into one central location. It specifies a program for downloading web pages: given an initial set of seed URLs, it recursively downloads every page linked from pages in the set whose content satisfies a specific criterion. Social media, web applications, and mobile applications have been employed together in the proposed system to manage search in the rapidly growing worldwide web. Applying the proposed system results in a fast and convenient search engine that fulfills users' requests based on specific geolocations.
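
A minimal sketch of the seeded, criterion-filtered crawl described above, using requests and BeautifulSoup. The seed URL, depth limit, and keyword criterion are illustrative placeholders, not the project's actual configuration.

```python
# Sketch: recursive focused crawl from a seed URL with a content criterion.
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

def crawl(url, keyword, depth=2, seen=None):
    seen = seen if seen is not None else set()
    if depth == 0 or url in seen:
        return
    seen.add(url)
    page = requests.get(url, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")
    if keyword in soup.get_text().lower():        # content satisfies the criterion
        print("match:", url)
    for link in soup.find_all("a", href=True):    # follow hyperlinks
        target = urljoin(url, link["href"])
        if target.startswith("http"):
            crawl(target, keyword, depth - 1, seen)

crawl("https://example.com", "weather")           # hypothetical seed and keyword
```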

Proceedings ArticleDOI
10 Jun 2019
TL;DR: A novel similarity measurement for recommender systems that uses weighted user interests and rate timestamps that is efficient in terms of accuracy and recommendation time is proposed.
Abstract: This paper proposes a novel similarity measurement for recommender systems that uses weighted user interests and rate timestamps. Although some works were proposed previously to include the time factor in the recommendation process, these works were based on the use of the time factor with user rates. In this work, we show that using user rates could be misleading in some cases, and we propose the use of the time factor with the hidden user interest(s) instead of user rates. The user interests are weighted according to the time factor so that recent interests are given more weight than the older ones as they are more important. Experimental results proved that our proposed similarity measurement is efficient in terms of accuracy and recommendation time.
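
The core idea can be sketched as follows: each user becomes a vector of interest weights in which recent interactions count more, and similarity is the cosine between those vectors. The exponential half-life decay below is an illustrative weighting, not necessarily the paper's exact function.

```python
# Sketch: time-weighted user-interest vectors and their cosine similarity.
import numpy as np

def interest_vector(events, n_topics, now, half_life_days=30.0):
    v = np.zeros(n_topics)
    for topic, timestamp in events:               # (topic index, unix timestamp)
        age_days = (now - timestamp) / 86400
        v[topic] += 0.5 ** (age_days / half_life_days)   # recent interests weigh more
    return v

def similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```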

Proceedings ArticleDOI
10 Jun 2019
TL;DR: A linear power model for the EdgeCloudSim simulator to measure the energy consumption of edge network servers and a simple dynamic power management model used to minimize power consumption in the edge network by switching the edge servers on and off based on provisioned application needs are introduced.
Abstract: With the rapid development of edge computing and its applications, requests to edge servers are expected to grow, resulting in higher edge network energy consumption. This in essence would also result in higher operational costs for running edge applications. Furthermore, service providers try to manage their resources efficiently to provide appropriate quality of services to their customers while reducing service costs. To appropriately manage resources, it is necessary to apply useful models to measure energy consumption in the edge network. The linear relationship between energy consumption and CPU utilization is one powerful modeling method used to compute the energy consumption of edge network servers. The method calculates the power consumption of a server based on its CPU utilization during run-time. In this paper, we propose a linear power model for the EdgeCloudSim simulator to measure the energy consumption of edge network servers. Moreover, we introduce a simple dynamic power management model used to minimize power consumption in the edge network by switching the edge servers on and off based on provisioned application needs. The experimental and simulation results show a notable reduction in the total energy consumption when applying the proposed simple model on two different orchestration policies to manage the edge network servers.
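
The linear model the paper builds on is the standard idle-to-peak interpolation over CPU utilization; the two wattage constants below are assumed server specifications, for illustration only.

```python
# Sketch: linear server power model, P(u) = P_idle + (P_max - P_idle) * u.
P_IDLE, P_MAX = 70.0, 200.0                       # watts, assumed server specs

def power_watts(cpu_utilization):                 # utilization in [0, 1]
    return P_IDLE + (P_MAX - P_IDLE) * cpu_utilization

# A switched-off edge server contributes 0 W, which is the lever the dynamic
# power-management model uses when applications do not need the capacity.
print(power_watts(0.0), power_watts(0.5), power_watts(1.0))
```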

Proceedings ArticleDOI
01 Oct 2019
TL;DR: This study provides a comprehensive statistical analysis of a previously-published dataset containing Amazon reviews and insists on the importance of using user votes as an important source of information for new users.
Abstract: Shifting from traditional marketing into online marketing has allowed people to share their experiences about various aspects of those products using textual comments known as Product Reviews. As a result of this shifting, people are able to access various websites where they can find reviews for all kind of products, even the rare ones. Thus, these reviews act as a supplementary information and help people to make the right decision before buying products. Reviews that influence one's decision are considered influential reviews, as they provide truthful experiences. Given the list of reviews for a certain product, each user can vote for any given review as helpful or unhelpful. As a result, each review would be given a number that represents how many users found this review helpful. This would indicate how influential each review is. As a result, buyers rely on these reviews and those who wrote these reviews. This study emphasizes on the importance of using user votes as an important source of information for new users. The contribution of this work lies in two aspects. First, it provides a comprehensive statistical analysis of a previously-published dataset containing Amazon reviews. Second, this study insists on the importance of using user votes. This study is the first phase for many future interesting directions. It was shown that the relationship between the number of reviews and the percentage of votes is an inverse relationship.

Proceedings ArticleDOI
01 Nov 2019
TL;DR: The proposed work is a method to support strategic decision making for the best water desalination facility, considering intelligent water pumping to the locations through the pumping network and water storage at every location.
Abstract: Recently, water desalination has been developing increasingly worldwide, and many new plants are constantly being contracted. Strategic planning and many other technical decisions are significant for these strategic systems. The proposed Artificial Intelligence (AI) methods provide decision makers with different choices for investment, where each comprises different desalination combinations regarding locations, capacities, and energy sources in terms of several performance metrics. The intelligent decisions determine the optimal station locations and the water desalination system capacity for future expectations. Other smart decisions select the optimal desalination technologies for existing and suggested desalination plants. In addition, the AI methods enable decision makers to configure the pipeline network and transport water among the plant locations. The proposed work is a method to support strategic decision making for the best water desalination facility. Our methodology offers a set of AI alternatives for several desalination plans. Existing decision support systems and tools fall short of delivering such a set of alternatives. Therefore, the proposed work provides a systematic decision process to validate several water desalination alternatives, considering intelligent water pumping to the locations through the pumping network and water storage at every location. The proposed approach is validated on a case study in Jordan, a country that is a beginner in desalination. The results show where economic and environmental benefits occur and how the AI methods can present optimal settings of the desalination process to decision makers.

Proceedings ArticleDOI
01 Nov 2019
TL;DR: An automatic clustering algorithm is proposed as part of an IDS architecture based on concepts of coherence and separation to find clusters with the most similarity between the proposed cluster elements and the least similarity with other clusters.
Abstract: Intrusion Detection Systems (IDSs) can identify malicious activities and anomalies in networks and provide robust protection for these systems. Clustering of attacks plays an important role in defining IDS defense policies. A key challenge in clustering has been finding the optimal value for the number of clusters. In this paper, we propose an automatic clustering algorithm as part of an IDS architecture. This algorithm is based on the concepts of coherence and separation. Our automatic clustering algorithm finds clusters with the most similarity among the elements of a proposed cluster and the least similarity with other clusters. The proposed clustering is further optimized by considering two types of objective index functions and the Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), and Differential Evolution (DE) methods. Comparison of the results obtained with other work in the literature shows improvements in terms of a low average number of function evaluations, high accuracy, and low computation cost.
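
A simplified sketch of automatic cluster-count selection via a coherence/separation criterion: the silhouette score stands in here for the paper's objective index functions, and a plain grid search stands in for the ABC/PSO/DE optimizers; the blob data is a stand-in for traffic features.

```python
# Sketch: pick the cluster count that maximizes coherence and separation.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)  # stand-in features

best_k, best_score = None, -1.0
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)           # high = coherent and well separated
    if score > best_score:
        best_k, best_score = k, score
print("chosen number of attack clusters:", best_k)
```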

Proceedings ArticleDOI
01 Aug 2019
TL;DR: This team’s effort on the MADAR Shared Task on Arabic Fine-Grained Dialect Identification performs well, producing the 4th highest F-measure and region-level accuracy and the 5th highest precision, recall, city-level accuracy, and country-level accuracy among the participating teams.
Abstract: In this paper, we describe our team’s effort on the MADAR Shared Task on Arabic Fine-Grained Dialect Identification. The task requires building a system capable of differentiating between 25 different Arabic dialects in addition to MSA. Our approach is simple. After preprocessing the data, we use Data Augmentation (DA) to enlarge the training data six times. We then build a language model and extract n-gram word-level and character-level TF-IDF features and feed them into an MNB classifier. Despite its simplicity, the resulting model performs well, producing the 4th highest F-measure and region-level accuracy and the 5th highest precision, recall, city-level accuracy, and country-level accuracy among the participating teams.
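
The feature-plus-classifier core of this approach can be sketched directly in scikit-learn: word- and character-level TF-IDF n-grams are combined and fed to a Multinomial Naive Bayes classifier. The n-gram ranges below are plausible assumptions, not the team's tuned values.

```python
# Sketch: word + character TF-IDF features into a Multinomial Naive Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import FeatureUnion, Pipeline

model = Pipeline([
    ("features", FeatureUnion([
        ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
        ("char", TfidfVectorizer(analyzer="char", ngram_range=(2, 5))),
    ])),
    ("clf", MultinomialNB()),
])
# model.fit(train_sentences, train_dialect_labels)
# predictions = model.predict(test_sentences)
```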

Journal ArticleDOI
TL;DR: It is shown that using multiple passes over the data set, as proposed in the algorithm, results in a great improvement in the number of swaps, thus reducing the overall sorting time.
Abstract: The extensive amount of data and contents generated today will require a paradigm shift in processing and management techniques for these data. One of the important data processing operations is data sorting. Using multiple passes in external merge sort has a great influence on speeding up the sorting of extremely large data files. Since in large files the swapping time is dominant in many applications, algorithms that minimize the swapping operations are normally superior to those which only focus on CPU time optimizations. In sorting extremely large files, external algorithms, such as the merge sort, are normally used. It is shown that using multiple passes over the data set, as proposed in our algorithm, results in a great improvement in the number of swaps, thus reducing the overall sorting time. Moreover, the proposed technique is suitable to be used with the emerging parallelization techniques such as GPUs. The reported results show the superiority of the proposed technique for “CPU only” and hybrid CPU–GPU implementations.
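
For context, the external merge sort pattern the paper optimizes looks like this: sort memory-sized runs, write them to temporary storage, then merge the sorted runs. The sketch is simplified (single merge pass, newline-terminated lines assumed); the paper's multi-pass variant merges runs in groups over several passes to cut swap operations on very large files.

```python
# Sketch: baseline external merge sort — sorted runs plus a k-way heap merge.
import heapq
import tempfile

def external_sort(lines, run_size=100_000):
    """lines: newline-terminated strings; returns an iterator of sorted lines."""
    runs = []
    for start in range(0, len(lines), run_size):  # pass 1: write sorted runs
        run = tempfile.TemporaryFile(mode="w+")
        run.writelines(sorted(lines[start:start + run_size]))
        run.seek(0)
        runs.append(run)
    return heapq.merge(*runs)                     # pass 2: lazy k-way merge
```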

Journal ArticleDOI
TL;DR: The aim of this paper is to collect and provide the definitions of the main concepts related to media forensics; it classifies the work done in the field according to the main technique used in the proposed solution approach.
Abstract: The widespread use of digital devices and tools has simplified the manipulation of any digital multimedia content. As a result, digital videos and photos cannot be trusted as evidence in courts. This fact raises the need to find techniques that ensure the authenticity of digital multimedia content. Experts in digital signal processing have conducted a large number of studies to find new strategies, using digital forensics, to verify digital evidence and trace its origins. The aim of this paper is to collect and provide the definitions of the main concepts related to media forensics. This paper also gives an overview of the different techniques used in media forensics, concentrating on video forensics. Furthermore, it classifies the work done in the field according to the main technique used in the proposed solution approach.

Journal ArticleDOI
TL;DR: An agent-based self-organizing model that utilizes the emerging Mobile Edge Computing concept for rapid biological threat detection and the experimental results show that the proposed model is able to monitor large-scale environments efficiently and to accurately detect biological threats.

Journal ArticleDOI
TL;DR: This paper evaluates how the programming language used, the host platform, and the implicitly used encoding scheme affect the evidence, showing that the amount of digital evidence is highly affected by these factors.
Abstract: Identifying the software used in a cybercrime can play a key role in establishing the evidence against the perpetrator in a court of law. This can be achieved by various means, one of which is to utilize the RAM contents. RAM comprises vital information about the current state of a system, including its running processes. Accordingly, the memory footprint of a process can be used as evidence of its usage. However, this evidence can be influenced by several factors, three of which this paper evaluates. First, it evaluates how the programming language used affects the evidence. Second, it evaluates how the platform used affects the evidence. Finally, it evaluates how the search for this evidence is influenced by the implicitly used encoding scheme. Our results should assist investigators in their quest to identify the largest amount of evidence about the software used based on its execution logic, host platform, implementation language, and the encoding of its string values. Results show that the amount of digital evidence is highly affected by these factors. For instance, the memory footprint of Java-based software is often more traceable than the footprints of languages such as C++ and C#. Moreover, the memory footprint of a C# program is more visible on Linux than it is on Windows or Mac OS. Hence, software-related values are often successfully identified in RAM dumps even after the program has stopped.
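
The encoding factor can be illustrated with a small sketch: the same marker string is searched for in a memory dump under several byte encodings, since the runtime (JVM, .NET, native) determines how strings are laid out in memory. The marker value and dump file name are hypothetical.

```python
# Sketch: search a RAM dump for a string under multiple byte encodings.
marker = "SECRET_SESSION_TOKEN"                   # illustrative marker value

with open("memory.dmp", "rb") as f:               # hypothetical dump file
    dump = f.read()

for encoding in ("ascii", "utf-16-le", "utf-16-be"):
    offset = dump.find(marker.encode(encoding))
    print(f"{encoding}: {'found at ' + hex(offset) if offset != -1 else 'absent'}")
```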


Journal ArticleDOI
TL;DR: A hybrid approach that evaluates the effectiveness of using LOD to automatically answer Arabic questions is developed to map users’ questions in Modern Standard Arabic to a standard query language for LOD.
Abstract: Interconnected Web technologies and the advancements that accompany the leaps in semantic web content have raised many challenges in the results-retrieval process, especially for the Arabic language. This research targets an important, yet insufficiently explored, area: using Linked Open Data (LOD) for automatic question answering systems in the Arabic language. The significance of the work presented comes from its ability to overcome many challenges in querying Arabic content. Some of these challenges are: (a) bridging the gap between natural language and linked data by mapping users’ queries to a standard semantic web query language such as SPARQL, (b) facilitating multilingual access to semantic data, and (c) maintaining the quality of data. Another challenging aspect was the lack of related work and publicly available resources for Arabic question answering systems over linked data, despite the vastly growing Arabic corpus on the web. This paper presents a novel approach that targets automatic Arabic question answering systems while bypassing many featured challenges in the field. A hybrid approach that evaluates the effectiveness of using LOD to automatically answer Arabic questions is developed. The approach maps users’ questions in Modern Standard Arabic to a standard query language for LOD (i.e., SPARQL) through: (i) extracting entities from questions and linking them over the web using Named-Entity Recognition and Disambiguation (NER/NED), and (ii) extracting properties among the extracted named entities using a dependency parsing approach integrated with the Wikidata ontology. To evaluate our proposed system, an Arabic questions dataset was created, including: (a) the question body in Arabic, (b) the question type, (c) the SPARQL query formulation, and (d) the question answer. Evaluation results are promising, with a precision of 84%, a recall of 81.3%, and an F-measure of 82.8%.
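
The final step of such a pipeline can be sketched with SPARQLWrapper against the public Wikidata endpoint: once the entity and property are extracted from the Arabic question, a SPARQL query retrieves the answer. The example mapping (capital of Jordan) is illustrative, not taken from the paper's dataset.

```python
# Sketch: issue the generated SPARQL query against Wikidata.
from SPARQLWrapper import SPARQLWrapper, JSON

# e.g., an Arabic question resolving to: capital (P36) of Jordan (Q810)
query = """
SELECT ?capitalLabel WHERE {
  wd:Q810 wdt:P36 ?capital .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "ar". }
}
"""
endpoint = SPARQLWrapper("https://query.wikidata.org/sparql")
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["capitalLabel"]["value"])           # the Arabic label of the answer
```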