
Showing papers presented at "Color Imaging Conference in 2017"


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper develops an access control framework for cloud-enabled WIoT (CEWIoT) based on the Access Control Oriented (ACO) architecture recently developed for CEIoT in general and presents a remote health and fitness monitoring use case to illustrate different access control aspects of this framework.
Abstract: Internet of Things (IoT) has become a pervasive and diverse concept in recent years. IoT applications and services have given rise to a number of sub-fields in the IoT space. Wearable technology, with its particular set of characteristics and application domains, has formed a rapidly growing sub-field of IoT, viz., Wearable Internet of Things (WIoT). While numerous wearable devices are available in the market today, security and privacy are key factors for wide adoption of WIoT. Wearable devices are resource-constrained by nature with limited storage, power, and computation. A Cloud-Enabled IoT (CEIoT) architecture, a dominant paradigm currently shaping the industry and suggested by many researchers, needs to be adopted for WIoT. In this paper, we develop an access control framework for cloud-enabled WIoT (CEWIoT) based on the Access Control Oriented (ACO) architecture recently developed for CEIoT in general. We first enhance the ACO architecture from the perspective of WIoT by adding an Object Abstraction Layer, and then develop our framework based on interactions between different layers of this enhanced ACO architecture. We present a general classification and taxonomy of IoT devices, along with a brief introduction to various application domains of IoT and WIoT. We then present a remote health and fitness monitoring use case to illustrate different access control aspects of our framework and outline its possible enforcement in a commercial CEIoT platform, viz., AWS IoT. Finally, we discuss the objectives of our access control framework and relevant open problems.

52 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper uses prospect theory to predict the profit that a specific miner, given his hash rate power and electricity costs, is expected to make from each pool, and shows how the utility value of a pool varies with the electricity fee and the dollar equivalent of a Bitcoin.
Abstract: It is predicted that cryptocurrencies will play an important role in the global economy. Therefore, it is prudent for us to understand the importance and monetary value of such cryptocurrencies, and strategize our investments accordingly. One of the ways to obtain cryptocurrency is via mining. As solo mining is not possible because of the computational requirements, pool mining has gained popularity. In this paper, we focus on Bitcoin and its pools. With more than 20 pools in the network of Bitcoin and other cryptocurrencies, it becomes challenging for a new miner to decide which pool to join so that the profit is maximized. We use prospect theory to predict the profit that a specific miner, given his hash rate power and electricity costs, is expected to make from each pool. A utility value is calculated for each pool based on its recent performance, hash rate power, total number of pool members, reward distribution policy of the pool, electricity fee in the new miner's region, pool fee, and the current Bitcoin value. Then, based on these parameters during a certain time duration, the most profitable pool is found for that miner. We show how the utility value of a pool varies with the electricity fee and the dollar equivalent of a Bitcoin. To find the accuracy of our predictions, we mine Bitcoin by joining 5 different pools: AntPool, F2Pool, BTC.com, SlushPool, and BatPool. Using an Antminer S5 for each pool, we mine Bitcoin for 40 consecutive days. Results reveal that our prospect theoretic predictions are consistent with what we actually mine; however, predictions using expected utility theory are not as close.
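A minimal sketch of the underlying idea, assuming the standard Tversky-Kahneman value and probability-weighting functions; the parameter values and pool statistics below are illustrative assumptions, not the paper's calibrated model.

```python
# Hypothetical sketch: ranking mining pools with a prospect-theory utility.
import math

ALPHA, BETA, LAMBDA, GAMMA = 0.88, 0.88, 2.25, 0.61  # common literature values, not the paper's

def value(x):
    """Value function: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** BETA

def weight(p):
    """Probability weighting: overweights small probabilities."""
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

def pool_utility(outcomes):
    """outcomes: list of (probability, net profit in USD) for one pool over a period."""
    return sum(weight(p) * value(x) for p, x in outcomes)

# Toy numbers: (probability of the payout scenario, miner's net profit after
# electricity and pool fees) -- purely illustrative.
pools = {
    "PoolA": [(0.6, 4.0), (0.4, -1.5)],
    "PoolB": [(0.3, 9.0), (0.7, -2.0)],
}
for name, outs in pools.items():
    print(name, round(pool_utility(outs), 3))
print("most profitable pool under prospect theory:",
      max(pools, key=lambda n: pool_utility(pools[n])))
```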

45 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: A new framework, called DeepFood, is proposed, which not only extracts rich and effective features from a dataset of food ingredient images using deep learning but also improves the average accuracy of multi-class classification by applying advanced machine learning techniques.
Abstract: Deep learning has brought a series of breakthroughs in image processing. Specifically, there are significant improvements in the application of food image classification using deep learning techniques. However, very little work has studied the classification of food ingredients. Therefore, this paper proposes a new framework, called DeepFood, which not only extracts rich and effective features from a dataset of food ingredient images using deep learning but also improves the average accuracy of multi-class classification by applying advanced machine learning techniques. First, a set of transfer learning algorithms based on Convolutional Neural Networks (CNNs) is leveraged for deep feature extraction. Then, a multi-class classification algorithm is exploited based on the performance of the classifiers on each deep feature set. The DeepFood framework is evaluated on a multi-class dataset that includes 41 classes of food ingredients and 100 images per class. Experimental results illustrate the effectiveness of the DeepFood framework for multi-class classification of food ingredients. The model that integrates ResNet deep feature sets, Information Gain (IG) feature selection, and the SMO classifier shows superior performance for food ingredient recognition compared to several existing works in this area.
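A hedged sketch of this kind of pipeline: deep features from a pretrained ResNet, mutual information as a stand-in for Information Gain, and an SVM (scikit-learn's SVC uses an SMO-style solver, loosely mirroring Weka's SMO). The dataset folder and hyperparameters are assumptions for illustration, not the paper's setup.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
# Older torchvision versions use models.resnet50(pretrained=True) instead.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
extractor = nn.Sequential(*list(resnet.children())[:-1]).to(device).eval()  # drop final fc layer

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("food_ingredients/", transform=preprocess)  # hypothetical folder
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

features, labels = [], []
with torch.no_grad():
    for imgs, ys in loader:
        f = extractor(imgs.to(device)).flatten(1)   # (batch, 2048) deep features
        features.append(f.cpu().numpy())
        labels.append(ys.numpy())
X, y = np.vstack(features), np.concatenate(labels)

# Feature selection followed by an SVM classifier.
clf = make_pipeline(SelectKBest(mutual_info_classif, k=500), SVC(kernel="linear"))
clf.fit(X, y)
```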

40 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: The smart contracts of blockchain technology and the cryptographic blockchain model Hawk are adopted to design FairLotto, a blockchain-based lottery system for future smart cities applications that ensures fairness, transparency, and privacy.
Abstract: Among smart cities applications, optimizing lottery games is one of the urgent needs to ensure their fairness and transparency. The emerging blockchain technology shows a glimpse of solutions to the fairness and transparency issues faced by lottery industries. This paper presents the design of a blockchain-based lottery system for smart cities applications. We adopt the smart contracts of blockchain technology and the cryptographic blockchain model Hawk [8] to design the blockchain-based lottery system, FairLotto, for future smart cities applications. Fairness, transparency, and privacy of the proposed blockchain-based lottery system are discussed and ensured.

37 citations


Journal ArticleDOI
11 Sep 2017
TL;DR: The proposed algorithm presents a new method to enhance visible images using NIR information via edge-preserving filters, namely the bilateral filter (BF) and the weighted least squares (WLS) optimization framework, and investigates which method performs best from an image features standpoint.
Abstract: Image enhancement using visible (RGB) and near-infrared (NIR) image data has been shown to enhance useful details of the image. While the enhanced images are commonly evaluated by observers' perception, in the present work we instead evaluate them by quantitative feature evaluation. The proposed algorithm presents a new method to enhance the visible images using NIR information via edge-preserving filters, and also investigates which method performs best from an image features standpoint. In this work, we combine two edge-preserving filters: the bilateral filter (BF) and the weighted least squares optimization framework (WLS). To fuse the RGB and NIR images, we obtain the base and detail images for both filters. The NIR-detail images from both filters are simply fused by taking an average/maximum of both, which is then combined with the RGB-base image from the WLS filter to reconstruct the final enhanced RGB-NIR image. We then show that our proposed enhancement method produces more stable features than the existing state-of-the-art methods on the RGB-NIR Scene Dataset. For feature matching, we use SIFT features. As a use case, the proposed fusion method is tested on two challenging biometric verification tasks using the CMU hyperspectral face and CASIA multispectral palmprint databases. Our exhaustive experiments show that the proposed fusion method performs equally well in comparison to the existing biometric fusion methods.
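A simplified sketch of the base/detail fusion idea, assuming an aligned RGB image and NIR image of the same size. Only the bilateral filter is used here; the paper additionally uses a WLS filter and combines the detail layers from both filters. File names are hypothetical.

```python
import cv2
import numpy as np

def fuse_rgb_nir(rgb_path="scene_rgb.png", nir_path="scene_nir.png"):  # hypothetical files
    rgb = cv2.imread(rgb_path).astype(np.float32) / 255.0
    nir = cv2.imread(nir_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

    # Edge-preserving smoothing gives the "base" layer; the residual is "detail".
    rgb_base = cv2.bilateralFilter(rgb, d=9, sigmaColor=0.1, sigmaSpace=15)
    nir_base = cv2.bilateralFilter(nir, d=9, sigmaColor=0.1, sigmaSpace=15)
    rgb_detail = rgb - rgb_base
    nir_detail = (nir - nir_base)[..., None]          # broadcast over colour channels

    # Keep the stronger of the two detail signals, then add it back to the RGB base.
    detail = np.where(np.abs(nir_detail) > np.abs(rgb_detail), nir_detail, rgb_detail)
    fused = np.clip(rgb_base + detail, 0.0, 1.0)
    return (fused * 255).astype(np.uint8)

if __name__ == "__main__":
    cv2.imwrite("fused.png", fuse_rgb_nir())
```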

28 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: A detailed analysis of the security of MBAs of several banks running on the two dominant platforms, Android and iOS, using both static and dynamic analysis is presented to detect various vulnerabilities rigorously.
Abstract: Mobile devices are becoming targets for hackers and malicious users due to the multifold increase in their capabilities and usage. Security threats are more prominent in mobile payment and mobile banking applications (MBAs). As these MBAs store, transmit and access sensitive and confidential information, utmost priority should be given to securing them. In this paper, we have analyzed MBAs of several banks running on the two dominant platforms, Android and iOS, using both static and dynamic analysis. We have proposed a threat model to detect various vulnerabilities rigorously. We have done a systematic investigation of different unknown vulnerabilities, particularly in mobile banking applications, and showed how MBAs are vulnerable to MitM attacks. We observe that some MBAs use the simple HTTP protocol to transfer user data without concern for security requirements. In most cases, MBAs receive fake or self-signed certificates and blindly treat all certificates as sound and valid, which leads to SSL/TLS Man-in-the-Middle (MitM) attacks. We present a detailed analysis of the security of MBAs which will be useful for application developers, security testers, researchers, bankers and bank customers.

24 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper proposes wireless network virtualization to create different virtual wireless networks (VWNs) through mobile virtual network operators (MVNOs) to support different IoT-enabled systems with diverse requirements and resiliency, and proposes a blockchain-based approach which provides quality-of-service to users.
Abstract: With the successful deployment of wireless networks such as cellular and Wi-Fi networks as well as the development of lightweight hand-held devices, wireless communication became the fastest growing sector in the communication industry, and wireless networks have become part of every business. Over 25 billion devices are expected to be connected to the Internet by 2020. Because of the exponentially increasing number of connected devices, we have Internet of Things (IoT) enabled applications that offer socioeconomic benefits. Different IoT applications have different operational requirements and constraints. For instance, an IoT-enabled transportation cyber-physical system needs the lowest latency and a high data rate, IoT-enabled financial systems or banks need high security to support mobile banking, IoT-enabled manufacturing systems need high resiliency to tolerate faults and combat cyber-attacks, and an IoT-enabled cyber-physical power system needs the least latency and highest resiliency to avoid power outages caused by faults or cyber-attacks. In this paper, we propose wireless network virtualization to create different virtual wireless networks (VWNs) through mobile virtual network operators (MVNOs) to support different IoT-enabled systems with diverse requirements and resiliency. Wireless virtualization is regarded as an emerging paradigm to enhance RF spectrum utilization, provide better coverage, increase network capacity, enhance energy efficiency and provide security. In order to prevent double-spending (allocating the same frequency to multiple network providers) of wireless resources, we propose to use a blockchain-based approach which provides quality-of-service to users. Furthermore, since IoT is expected to generate a massive amount of data (aka big data), we consider edge computing to process this data when individual devices have limited computing/processing and storage capabilities. Moreover, network segmentation through VWNs provides security and enhances network performance. Performance of the proposed approach is evaluated using numerical results obtained from simulation.

23 citations


Proceedings ArticleDOI
09 Dec 2017
TL;DR: A deep and broad learning approach based on a Deep Context-aware POI Recommendation (DCPR) model is proposed to structurally learn POI and user characteristics; experiments demonstrate that the DCPR model achieves significant improvement over state-of-the-art POI recommendation algorithms and other deep recommendation models.
Abstract: POI recommendation has attracted a lot of research attention recently. There are several key factors that need to be modeled towards effective POI recommendation - POI properties, user preference and sequential momentum of check-ins. The challenge lies in how to synergistically learn multi-source heterogeneous data. Previous work tries to model multi-source information in a flat manner, using either embedding based methods or sequential prediction models in a cross-related space, which cannot generate mutually reinforcing results. In this paper, a deep and broad learning approach based on a Deep Context-aware POI Recommendation (DCPR) model is proposed to structurally learn POI and user characteristics. The proposed DCPR model includes three collaborative layers: a CNN layer for POI feature mining, an RNN layer for sequential dependency and user preference modeling, and an interactive layer based on matrix factorization to jointly optimize the overall model. Experiments over three data sets demonstrate that the DCPR model achieves significant improvement over state-of-the-art POI recommendation algorithms and other deep recommendation models.

23 citations


Proceedings ArticleDOI
15 Oct 2017
TL;DR: dTrust, a simple social recommendation approach that avoids using user personal information, is presented; it relies uniquely on the topology of an anonymized trust-user-item network that combines user trust relations with user rating scores.
Abstract: Rating prediction is a key task of e-commerce recommendation mechanisms. Recent studies in social recommendation enhance the performance of rating predictors by taking advantage of user relationships. However, these prediction approaches mostly rely on user personal information, which is a privacy threat. In this paper, we present dTrust, a simple social recommendation approach that avoids using user personal information. It relies uniquely on the topology of an anonymized trust-user-item network that combines user trust relations with user rating scores. This topology is fed into a deep feed-forward neural network. Experiments on real-world data sets showed that dTrust outperforms the state of the art in terms of Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) scores for both warm-start and cold-start problems.

21 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: A new approach is proposed to quantify the polarization of content semantics by leveraging word embedding representations and clustering metrics, together with an evaluation framework that verifies the proposed quantitative measurement using a stance classification task.
Abstract: Social media like Facebook and Twitter have become major battlegrounds, with increasingly polarized content disseminated to people having different interests and ideologies. This work examines the extent of content polarization during the 2016 U.S. presidential election, from a unique, "content" perspective. We propose a new approach to quantify the polarization of content semantics by leveraging the word embedding representation and clustering metrics. We then propose an evaluation framework to verify the proposed quantitative measurement using a stance classification task. Based on the results, we further explore the extent of content polarization during the election period and how it changed across time, geography, and different types of users. This work contributes to understanding the online "echo chamber" phenomenon based on user-generated content.
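An illustrative sketch of this general idea under stated assumptions: posts are embedded as averaged word vectors, split into two groups, and a cluster-separation score is used as a polarization proxy. This follows the spirit of the approach, not the paper's exact metric or data.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

posts = [
    "taxes must go down now", "cut taxes and regulation",
    "raise the minimum wage", "workers deserve a higher wage",
]
tokenized = [p.split() for p in posts]
w2v = Word2Vec(tokenized, vector_size=50, min_count=1, seed=0)

def embed(tokens):
    """Average word vectors of a post (zero vector if no known words)."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.vstack([embed(t) for t in tokenized])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Silhouette in [-1, 1]: higher means the two groups are semantically further apart.
print("polarization proxy:", silhouette_score(X, labels))
```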

20 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper presents an ABAC mining approach that can automatically discover the appropriate ABAC policy rules and proposes a more efficient algorithm, called ABAC-SRM, which discovers the most general policy rules from a set of candidate rules.
Abstract: Attribute Based Access Control (ABAC) is fast replacing traditional access control models due to its dynamic nature, flexibility and scalability. ABAC is often used in collaborative environments. However, a major hurdle to deploying ABAC is to precisely configure the ABAC policy. In this paper, we present an ABAC mining approach that can automatically discover the appropriate ABAC policy rules. We first show that the ABAC mining problem is equivalent to identifying a set of functional dependencies in relational databases that cover all of the records in a table. We also propose a more efficient algorithm, called ABAC-SRM which discovers the most general policy rules from a set of candidate rules. We experimentally show that ABAC-SRM is accurate and significantly more efficient than the existing state of the art.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: The results show that when using tweet sentiment, margins similar to polls conducted during the election period are obtained, coming close to the actual popular vote outcome.
Abstract: Tweets are frequently used to express opinions, specifically when the topic of choice is polarizing, as it is in politics. With many variables affecting the choice of vote, the most effective method of determining election outcome is through public opinion polling. We seek to determine whether Twitter can be an effective polling method for the 2016 United States general election. To this aim, we create a dataset consisting of approximately 3 million tweets ranging from September 22nd to November 8th related to either Donald Trump or Hillary Clinton. We incorporate two approaches in polling voter opinion for election outcomes: tweet volume and sentiment. Our data is labeled via a convolutional neural network trained on the sentiment140 dataset. To determine whether Twitter is an indicator of election outcome, we compare our results to three polls conducted by various reputable sources during the 13 days before the election. Our results show that when using tweet sentiment, we obtain similar margins to polls conducted during the election period and come close to the actual popular vote outcome.
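A toy arithmetic illustration of the sentiment-based polling idea: compute each candidate's share of positively labeled tweets and compare the margin with a poll margin. All numbers below are made up; only the 2016 popular-vote margin of roughly 2 points for Clinton is a real figure.

```python
# Toy example: tweets labeled positive by a sentiment classifier (made-up counts).
positive_tweets = {"Trump": 41_000, "Clinton": 47_000}
total = sum(positive_tweets.values())

shares = {c: n / total for c, n in positive_tweets.items()}
sentiment_margin = (shares["Clinton"] - shares["Trump"]) * 100

poll_margin = 3.2            # hypothetical poll margin: Clinton +3.2
popular_vote_margin = 2.1    # 2016 popular vote was roughly Clinton +2.1

print(f"sentiment margin: Clinton +{sentiment_margin:.1f} pts")
print(f"difference vs poll: {abs(sentiment_margin - poll_margin):.1f} pts")
print(f"difference vs popular vote: {abs(sentiment_margin - popular_vote_margin):.1f} pts")
```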

Proceedings Article
01 Sep 2017
TL;DR: This paper investigates whether adopting one ground truth over another results in different rankings of illuminant estimation algorithms, and finds that, depending on the ground truth used, the ranking of different algorithms can change, and sometimes dramatically.
Abstract: In illuminant estimation, we attempt to estimate the RGB of the light. We then use this estimate on an image to correct for the light's colour bias. Illuminant estimation is an essential component of all camera reproduction pipelines. How well an illuminant estimation algorithm works is determined by how well it predicts the ground truth illuminant colour. Typically, the ground truth is the RGB of a white surface placed in a scene. Over a large set of images an estimation error is calculated and different algorithms are then ranked according to their average estimation performance. Perhaps the most widely used publicly available dataset in illuminant estimation is Gehler's Colour Checker set that was reprocessed by Shi and Funt. This image set comprises 568 images of typical everyday scenes. Curiously, we have found three different ground truths for the Shi-Funt Colour Checker image set. In this paper, we investigate whether adopting one ground truth over another results in different rankings of illuminant estimation algorithms. We find that, depending on the ground truth used, the ranking of different algorithms can change, and sometimes dramatically. Indeed, it is entirely possible that much of the recent 'advances' made in illuminant estimation were achieved because authors have switched to using a ground truth where better estimation performance is possible.
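A short sketch of the standard recovery angular error commonly used to rank illuminant estimation algorithms, and of how a ranking can flip when a different set of ground-truth illuminants is adopted. All RGB values below are illustrative, not data from the Colour Checker set.

```python
import numpy as np

def angular_error(est, gt):
    """Angle in degrees between an estimated and a ground-truth illuminant RGB."""
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def mean_error(estimates, ground_truth):
    return np.mean([angular_error(e, g) for e, g in zip(estimates, ground_truth)])

# Two algorithms' estimates on three images, and two alternative ground truths.
algo_a = [(0.9, 1.0, 0.8), (0.7, 1.0, 1.1), (1.0, 1.0, 1.0)]
algo_b = [(0.8, 1.0, 0.9), (0.8, 1.0, 1.0), (1.1, 1.0, 0.9)]
gt_1   = [(0.9, 1.0, 0.9), (0.8, 1.0, 1.1), (1.0, 1.0, 1.0)]
gt_2   = [(0.8, 1.0, 0.9), (0.7, 1.0, 1.0), (1.1, 1.0, 0.9)]

# With gt_1 algorithm A has the lower mean error; with gt_2 algorithm B does.
for name, gt in (("ground truth 1", gt_1), ("ground truth 2", gt_2)):
    print(name, "A:", round(mean_error(algo_a, gt), 2), "B:", round(mean_error(algo_b, gt), 2))
```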

Journal ArticleDOI
11 Sep 2017
TL;DR: The authors extend the linear minimum mean square error with neighborhood method to the spectral dimension and demonstrate that the method is fast and general on Raw SFA images that span the visible and near infra-red part of the electromagnetic range.
Abstract: Spectral filter array (SFA) technology requires development on demosaicing. The authors extend the linear minimum mean square error with neighborhood method to the spectral dimension. They demonstrate that the method is fast and general on Raw SFA images that span the visible and near infra-red part of the electromagnetic range. The method is quantitatively evaluated in simulation first, then the authors evaluate it on real data by the use of non-reference image quality metrics applied on each band. Resulting images show a much better reconstruction of text and high frequencies at the expense of a zipping effect, compared to the benchmark binary-tree method.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper proposes an access control (AC) framework to address CPS related security issues and presents formal representations of CPAC and GAGM, and provides a sample scenario for a medical CPS.
Abstract: Cyber-physical systems (CPS) integrate cyber components into physical processes. This integration enhances the capabilities of physical systems by incorporating intelligence into objects and services. On the other hand, integration of cyber and physical components and interaction between them introduce new security threats. Since CPSs are mostly safety-critical systems, data stored and communicated in them are highly critical. Hence, there is an inevitable need for protecting the data and resources against unauthorized accesses. In this paper, we propose an access control (AC) framework to address CPS related security issues. The proposed framework consists of two parts: a cyber-physical access control model (CPAC) and a generalized action generation model (GAGM). CPAC utilizes an attribute based approach and extends it with cyber-physical components and cyber-physical interactions. GAGM is used to augment enforcement of authorization policies. We present formal representations of CPAC and GAGM, and provide a sample scenario for a medical CPS. We propose an algorithm for enforcing authorization policies.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: Detailed algorithms for constructing accurate profiles that describe the access patterns of the database users and for matching subsequent accesses by these users to the profiles are presented and shown to be very effective in the detection of anomalies.
Abstract: The mitigation of insider threats against databases is a challenging problem as insiders often have legitimate access privileges to sensitive data. Therefore, conventional security mechanisms, such as authentication and access control, may be insufficient for the protection of databases against insider threats and need to be complemented with techniques that support real-time detection of access anomalies. The existing real-time anomaly detection techniques consider anomalies in references to the database entities and the amounts of accessed data. However, they are unable to track the access frequencies. According to recent security reports, an increase in the access frequency by an insider is an indicator of a potential data misuse and may be the result of malicious intents for stealing or corrupting the data. In this paper, we propose techniques for tracking users' access frequencies and detecting anomalous related activities in real-time. We present detailed algorithms for constructing accurate profiles that describe the access patterns of the database users and for matching subsequent accesses by these users to the profiles. Our methods report and log mismatches as anomalies that may need further investigation. We evaluated our techniques on the OLTP-Benchmark. The results of the evaluation indicate that our techniques are very effective in the detection of anomalies.
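A minimal sketch of frequency-based profiling under stated assumptions: build a per-user profile of accesses per time window, then flag later windows whose count is far above the profiled mean. The mean-plus-k-standard-deviations threshold is an illustrative rule, not the paper's exact matching algorithm.

```python
from collections import Counter
from statistics import mean, stdev

WINDOW = 3600  # seconds per window (1 hour)

def build_profile(access_log):
    """access_log: list of (user, unix_timestamp). Returns {user: (mean, std)} of hourly counts."""
    per_user_windows = Counter((user, ts // WINDOW) for user, ts in access_log)
    counts = {}
    for (user, _), n in per_user_windows.items():
        counts.setdefault(user, []).append(n)
    return {u: (mean(c), stdev(c) if len(c) > 1 else 0.0) for u, c in counts.items()}

def is_anomalous(profile, user, window_count, k=3.0):
    """Flag a window whose access count exceeds mean + k * std for that user."""
    mu, sigma = profile.get(user, (0.0, 0.0))
    return window_count > mu + k * max(sigma, 1.0)   # floor on sigma avoids zero thresholds

# Toy usage: a user who normally issues ~5 queries/hour suddenly issues 60.
history = [("alice", 1000 + i * 700) for i in range(40)]      # sparse, normal activity
profile = build_profile(history)
print(is_anomalous(profile, "alice", window_count=60))        # True -> log for investigation
```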

Proceedings ArticleDOI
01 Oct 2017
TL;DR: PADS is proposed, a strategy-proof differentially private auction mechanism that allows cloud providers to privately trade resources with cloud consumers in such a way that individual bidding information of the cloud consumers is not exposed by the auction mechanism.
Abstract: With the rapid growth of Cloud Computing technologies, enterprises are increasingly deploying their services in the Cloud. Dynamically priced cloud resources such as the Amazon EC2 Spot Instance provide an efficient mechanism for cloud service providers to trade resources with potential buyers using an auction mechanism. With dynamically priced cloud resource markets, cloud consumers can buy resources at a significantly lower cost than statically priced cloud resources such as the on-demand instances in Amazon EC2. While dynamically priced cloud resources enable maximizing datacenter resource utilization and minimizing cost for the consumers, unfortunately, such auction mechanisms achieve these benefits only at the cost of significant private information leakage. In an auction-based mechanism, the private information includes information on the demands of the consumers that can lead an attacker to understand the current computing requirements of the consumers and perhaps even allow the inference of the workload patterns of the consumers. In this paper, we propose PADS, a strategy-proof differentially private auction mechanism that allows cloud providers to privately trade resources with cloud consumers in such a way that individual bidding information of the cloud consumers is not exposed by the auction mechanism. We demonstrate that PADS achieves differential privacy and approximate truthfulness guarantees while maintaining good performance in terms of revenue gains and allocation efficiency. We evaluate PADS through extensive simulation experiments that demonstrate that, in comparison to traditional auction mechanisms, PADS achieves relatively high revenues for cloud providers while guaranteeing the privacy of the participating consumers.
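A generic sketch of the exponential mechanism, the standard building block behind differentially private selection; it is shown here only to illustrate the kind of machinery such an auction can use, not PADS itself. Picking a clearing price with probability proportional to exp(eps * utility / (2 * sensitivity)) limits how much any single consumer's bid can influence the outcome. Bids, prices, and epsilon are illustrative.

```python
import math
import random

def exponential_mechanism(candidates, utility, eps, sensitivity):
    """Pick one candidate with probability proportional to exp(eps * u / (2 * sensitivity))."""
    scores = [math.exp(eps * utility(c) / (2.0 * sensitivity)) for c in candidates]
    total = sum(scores)
    r, acc = random.random() * total, 0.0
    for c, s in zip(candidates, scores):
        acc += s
        if r <= acc:
            return c
    return candidates[-1]

bids = [3.0, 5.0, 5.5, 7.0, 9.0]   # private per-consumer bids (illustrative)
prices = [2.0, 4.0, 6.0, 8.0]      # candidate clearing prices

def revenue(price):
    """Utility of a price = revenue if every bidder at or above it buys one unit."""
    return price * sum(1 for b in bids if b >= price)

# Revenue changes by at most max(prices) when one bid is added or removed.
chosen = exponential_mechanism(prices, revenue, eps=0.5, sensitivity=max(prices))
print("privately chosen clearing price:", chosen)
```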

Proceedings ArticleDOI
01 Oct 2017
TL;DR: An infrastructure is proposed that combines Fog computing with the Machine-to-Machine (M2M) intelligent communication protocol and integrates the Service Oriented Architecture (SOA); this model will be able to transfer and analyze data reliably and systematically with low latency, less bandwidth and heterogeneity in less time, while befittingly maintaining the Quality of Service (QoS).
Abstract: As we head towards a future with an ever-growing number of IoT devices, expected to reach almost trillions by 2020, data access and computing face growing complications and impediments, requiring more efficient and logical data computation infrastructures. Cloud computing is a centralized Internet-based computing model which acts as storage as well as a network connection bridge between end devices and servers. Cloud computing has been the ruling data computation model for quite a while, but for the upcoming IoT generation the vision becomes a little blurry, as present cloud computing models cannot deal with such a huge amount of data; to rescue us from this foggy situation, the Fog computing model comes forward. Instead of being a replacement for the cloud computing model, Fog computing is an extension of the Cloud which works as a distributed, decentralized computing infrastructure in which data computation, storage and applications are distributed in the most logical, efficient place between the data source and the cloud. In this paper, we propose an infrastructure that combines Fog computing with the Machine-to-Machine (M2M) intelligent communication protocol, followed by integration of the Service Oriented Architecture (SOA); this model will be able to transfer and analyze data reliably and systematically with low latency, less bandwidth and heterogeneity in less time, while befittingly maintaining the Quality of Service (QoS).

Proceedings ArticleDOI
01 Oct 2017
TL;DR: A security scheme for preventing eavesdropping attacks in Cloud environments is proposed based on Elliptic Curve Cryptography, and it is observed that the proposed scheme outperforms the other schemes in terms of the chosen performance characteristics.
Abstract: Cloud computing has recently become an extremely useful facet of modern distributed systems. Some of its many applications lie in the development of web services, its federation with the Internet of Things (IoT) and services for users in the form of storage, computing and networking facilities. However, as more services start utilizing the Cloud as a viable option, security concerns regarding user data and privacy also need to be tackled. In this paper, a security scheme for preventing eavesdropping attacks in Cloud environments is proposed. The encryption scheme is based on Elliptic Curve Cryptography and is specifically tailored for securing Cloud services providing storage facilities. As it is based on Elliptic Curve Cryptography, subsequent results obtained show that it reduces the computational overhead incurred in the encryption of data. The performances of other traditional security schemes such as RSA are also compared with the proposed encryption scheme. It is observed that the proposed scheme outperforms the other schemes in terms of the chosen performance characteristics.
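A hedged sketch of ECC-based data protection for cloud storage in the style of an ECIES-like hybrid scheme: an ephemeral elliptic-curve key agreement derives a symmetric key that encrypts the stored object. This illustrates the general approach with the Python `cryptography` library; it is not the paper's exact scheme or parameter choice.

```python
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for(recipient_public_key, plaintext: bytes):
    ephemeral = ec.generate_private_key(ec.SECP256R1())
    shared = ephemeral.exchange(ec.ECDH(), recipient_public_key)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"cloud-storage-demo").derive(shared)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return ephemeral.public_key(), nonce, ciphertext

def decrypt(own_private_key, ephemeral_public_key, nonce, ciphertext):
    shared = own_private_key.exchange(ec.ECDH(), ephemeral_public_key)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"cloud-storage-demo").derive(shared)
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# Toy round trip: the consumer holds the private key, the cloud stores only ciphertext.
consumer_key = ec.generate_private_key(ec.SECP256R1())
eph_pub, nonce, ct = encrypt_for(consumer_key.public_key(), b"sensitive user data")
assert decrypt(consumer_key, eph_pub, nonce, ct) == b"sensitive user data"
```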

Proceedings ArticleDOI
20 Aug 2017
TL;DR: The results present a novel signalling mechanism among various users, devices and networks that can open one or multiple rooms at the same time using the same server, determine a room initiator to keep the session active even if the initiator or another peer leaves, share new users with current participants, etc.
Abstract: There is a strong focus on the use of Web Real-Time Communication (WebRTC) for many-to-many video conferencing, while the IETF working group has left the signalling issue to the application layer. The main aim of this paper is to create a novel scalable WebRTC signalling mechanism called WebNSM for many-to-many (bi-directional) video conferencing. WebNSM was designed for unlimited users over a mesh topology based on the Socket.io (API) mechanism. A real implementation was achieved via LAN and WAN networks, including the evaluation of bandwidth consumption, CPU performance, memory usage, maximum links and RTPs calculation, and Quality of Experience (QoE). In addition, this application supplies video conferencing on different browsers without having to download additional software or register users. The results present a novel signalling mechanism among various users, devices and networks that can open one or multiple rooms at the same time using the same server, determine a room initiator to keep the session active even if the initiator or another peer leaves, share new users with current participants, etc. Moreover, this experiment highlights the limitations of CPU performance, bandwidth consumption and using a mesh topology for WebRTC video conferencing.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This study examines several statistical feature sets from the Gray Level Co-occurrence Matrix, Discrete Wavelet Transform, Spatial filters, Wiener filter, Gabor filter, Haralick and fractal filters to identify text and image documents by using a support vector machine (SVM) and decision fusion of feature selection.
Abstract: Technological advances in digitization, together with a variety of image manipulation techniques, enable printed documents to be created illegally. Correspondingly, many researchers conduct studies to determine whether a printed document is counterfeit or original. This study examines several statistical feature sets from the Gray Level Co-occurrence Matrix (GLCM), Discrete Wavelet Transform (DWT), Spatial filters, Wiener filter, Gabor filter, Haralick and fractal filters to identify text and image documents by using a support vector machine (SVM) and decision fusion of feature selection. The experimental results show that image documents achieve a higher identification rate than text documents. In summary, the proposed method outperforms previous research and is a promising technique that can be implemented in real forensics for printed documents.
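A hedged sketch of one of the feature sets named above: GLCM statistics fed to an SVM. File paths and labels are hypothetical; in newer scikit-image the functions are graycomatrix/graycoprops (spelled greycomatrix/greycoprops in older releases). The other filters and the decision fusion step are not reproduced here.

```python
import numpy as np
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

PROPS = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")

def glcm_features(path):
    """Texture statistics from the grey-level co-occurrence matrix of one scanned page."""
    gray = (rgb2gray(imread(path)) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel() for p in PROPS])

# Hypothetical scanned-document images labeled original (0) or counterfeit (1).
paths = ["doc_orig_1.png", "doc_orig_2.png", "doc_fake_1.png", "doc_fake_2.png"]
labels = [0, 0, 1, 1]

X = np.vstack([glcm_features(p) for p in paths])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:1]))
```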

Journal ArticleDOI
11 Sep 2017
TL;DR: This work evaluates state-of-the-art no-reference image quality metrics for capsule video endoscopy and uses the best performing metric to optimize one of the capsule video endoscopy enhancement methods, validating it through a subjective experiment.
Abstract: Capsule endoscopy, using a wireless camera to capture the digestive tract, is becoming a popular alternative to traditional colonoscopy. The images obtained from a capsule have lower quality compared to traditional colonoscopy, and high-quality images are required by medical doctors in order to set an accurate diagnosis. Over the last years several enhancement techniques have been proposed to improve the quality of capsule images. In order to verify that the capsule images have the required diagnostic quality, some kind of quality assessment is required. In this work, the authors evaluate state-of-the-art no-reference image quality metrics for capsule video endoscopy. Furthermore, they use the best performing metric to optimize one of the capsule video endoscopy enhancement methods and validate it through a subjective experiment.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: A fine-grained semantic-based access control model that supports multi-owner multi-stakeholder policy specification and enforcement and handles the policy conflicts that might arise at the time of access control policy enforcement is proposed.
Abstract: Pervasive usage and wide-spread sharing of Electronic Health Records (EHRs) in modern healthcare environments has resulted in high availability of patients' medical history from any location and at any time, which has the potential to make health care services both cheaper and of higher quality. However, EHRs contain huge amounts of sensitive information which should be protected from unauthorized accesses; otherwise, allowing these records to be accessed by multiple parties may put patient privacy at high risk. Access control solutions must reflect the access control policies of all healthcare providers who are involved in generating such critical records as well as the authorization policies of the patient as the primary stakeholder. In this paper, we propose a fine-grained semantic-based access control model that supports multi-owner multi-stakeholder policy specification and enforcement. In the proposed scheme, a trusted Policy Server is responsible for evaluating access requests to patients' health information. We also handle the policy conflicts that might arise at the time of access control policy enforcement. A proof-of-concept prototype is also implemented to demonstrate the feasibility of our model.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: The research shows that Android Broadcast receivers are intensively used by malware compared to benign applications, and a data mining malware detection mechanism based on statically registered Broadcast receivers is proposed.
Abstract: Android has a large share in the mobile apps market, which makes it attractive for both malicious and good developers. Online app markets, despite their vetting procedures, still admit malicious apps that could be downloaded mistakenly by mobile users. Detecting Android malware has been studied by many researchers using different approaches and techniques. The vast majority of them, though, focused on the requested permissions that are declared in the AndroidManifest.xml files. A number of researchers have considered other components of Android applications besides the permissions, such as package info, activities, and process name. Some researchers pointed out the Android Broadcast receivers component, but it was not discussed as thoroughly as other components. In this paper, we conduct an empirical study to investigate the usage patterns of the Broadcast receivers component by malicious and benign Android applications. In addition to processing the AndroidManifest.xml files, the source code of malware samples, in particular the onReceive() methods, is manually analyzed. We also propose a data mining malware detection mechanism based on the statically registered Broadcast receivers. Our research shows that Android Broadcast receivers are intensively used by malware compared to benign applications. Our Java code analysis shows that malware samples fully utilize Broadcast receivers compared to benign apps. Finally, our experiments showed that using the Broadcast receivers together with permissions improves the malware prediction accuracy.
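A hedged sketch of the detection idea: extract statically registered broadcast receivers (and the actions their intent-filters listen for) from decoded AndroidManifest.xml files, turn them into binary features, and train a classifier. The manifest paths, labels, and choice of Random Forest are illustrative assumptions, not the paper's exact mining setup.

```python
import xml.etree.ElementTree as ET
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier

ANDROID_NS = "{http://schemas.android.com/apk/res/android}name"

def receiver_features(manifest_path):
    """Return {feature_name: 1} for each receiver class and intent-filter action."""
    root = ET.parse(manifest_path).getroot()
    feats = {}
    for receiver in root.iter("receiver"):
        feats[f"receiver:{receiver.get(ANDROID_NS, '')}"] = 1
        for action in receiver.iter("action"):
            feats[f"action:{action.get(ANDROID_NS, '')}"] = 1
    return feats

# Hypothetical apktool-decoded manifests with known labels (1 = malware, 0 = benign).
manifests = ["mal_app/AndroidManifest.xml", "benign_app/AndroidManifest.xml"]
labels = [1, 0]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform([receiver_features(m) for m in manifests])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(dict(zip(vec.get_feature_names_out(), clf.feature_importances_)))
```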

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This work proposes a novel, anonymous attribute based credential scheme with multi-session unlinkability, and presents how the proposed credential scheme can be applied to a collaborative e-health environment to provide its users with the anonymous and unlinkable access.
Abstract: Modern electronic healthcare (e-health) systems constitute collaborative environments in which patients' private health data are shared across multiple domains. In such environments, patients' privacy can be violated through the linkability of different user access sessions over patient health data. Therefore, enforcing anonymous as well as multi-session unlinkable access for the users in e-health systems is of paramount importance. As a way of achieving this requirement, more emphasis has been given to anonymous attribute credentials, which allow a user to anonymously prove the ownership of a set of attributes to a verifier and thereby gain access to protected resources. Among the existing well-known credential schemes, U-Prove does not provide unlinkability across multiple user sessions, whereas Idemix provides it with a significant computational overhead. As a solution, we propose a novel, anonymous attribute based credential scheme with multi-session unlinkability. The simulation results and complexity analysis show that the proposed scheme achieves the aforementioned property with substantially lower computational overhead compared to the existing credential schemes with multi-session unlinkability. Finally, we present how the proposed credential scheme can be applied to a collaborative e-health environment to provide its users with anonymous and unlinkable access.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: It is shown that the optimal policy adaptation problem is NP-Complete and a heuristic solution is presented that provides agility and a faster migration path, especially for organizations participating in collaborative sharing of data.
Abstract: In Attribute-Based Access Control (ABAC), attributes are defined as characteristics of subjects, objects as well as the environment, and access is granted or denied based on the values of these attributes. With an increasing number of organizations showing interest in migrating to ABAC, it is imperative that algorithmic techniques be developed to facilitate the process. While the traditional ABAC policy mining approaches support the development of an ABAC policy from existing Discretionary Access Control (DAC) or Role-Based Access Control (RBAC) systems, they do not handle adaptation to the policy of a similar organization. As the policy itself need not be developed ab initio in this process, it provides agility and a faster migration path, especially for organizations participating in collaborative sharing of data. With the set of objects and their attributes given, along with an access control policy to be adapted to, the problem is to determine an optimal assignment of attributes to subjects so that a set of desired accesses can be granted. Here, optimality is in the number of ABAC rules the subjects would need to use to gain access to various objects. Such an approach not only helps in assisting collaboration between organizations, but also ensures efficient evaluation of rules during policy enforcement. We show that the optimal policy adaptation problem is NP-Complete and present a heuristic solution.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This research proposes a conceptual framework called CaCM (Context-aware Call Management) for mobile call management and implements a prototype system based on CaCM that incorporates rich context factors, including time, location, event, social relations, environment, body position, and body movement, and leverages machine learning algorithms to build call management models for individual mobile phone users.
Abstract: When a user receives a phone call, his mobile phone will normally ring or vibrate immediately regardless of whether the user is available to answer the call or not, which could be disruptive to his ongoing tasks or social situation. Mobile call management systems are a type of mobile applications for coping with the problem of mobile interruption. They aim to reduce mobile interruption through effective management of incoming phone calls and improve user satisfaction. Many existing systems often utilize only one or two types of user context (e.g., location) to determine the availability of the callee and make real-time decisions on how to handle the incoming call. In reality, however, mobile call management needs to take diverse contextual information of individual users into consideration, such as time, location, event, and social relations. The objective of this research is to propose a conceptual framework called CaCM (Context-aware Call Management) for mobile call management and implement a prototype system based on CaCM that incorporates rich context factors, including time, location, event, social relations, environment, body position, and body movement, and leverages machine learning algorithms to build call management models for individual mobile phone users. An empirical evaluation via a field study shows promising results that demonstrate the effectiveness of the proposed approach.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: An attribute-based URA model called AURA and an attribute-based PRA model called ARPA are developed and demonstrated to express and unify many prior URA and PRA models.
Abstract: Administrative Role-Based Access Control (ARBAC) models deal with how to manage user-role assignments (URA), permission-role assignments (PRA), and role-role assignments (RRA). A wide variety of approaches have been proposed in the literature for URA, PRA, and RRA. In this paper, we propose attribute-based administrative models that unify many prior approaches for URA and PRA. The motivating factor is that attributes of various RBAC entities such as admin users, regular users and permissions can be used to administer URA and PRA in a highly flexible manner. We develop an attribute-based URA model called AURA and an attribute-based PRA model called ARPA. We demonstrate that AURA and ARPA can express and unify many prior URA and PRA models.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: The experimental results show that the semi-supervised approach holds promise in improving change classification effectiveness by leveraging unlabeled data; the work also experiments with new vulnerability predictors and compares their predictive power with vulnerability prediction techniques based on text mining.
Abstract: Version control systems (VCSs) have almost become the de facto standard for the management of open-source projects and the development of their source code. In VCSs, source code which can potentially be vulnerable is introduced to a system through what are called commits. Vulnerable commits force the system into an insecure state. The far-reaching impact of vulnerabilities attests to the importance of identifying and understanding the characteristics of prior vulnerable changes (or commits), in order to detect future similar ones. The concept of change classification was previously studied in the literature of bug detection to identify commits with defects. In this paper, we borrow the notion of change classification from the literature of defect detection to further investigate its applicability to the vulnerability detection problem using semi-supervised learning. In addition, we also experiment with new vulnerability predictors, and compare the predictive power of our proposed features with vulnerability prediction techniques based on text mining. The experimental results show that our semi-supervised approach holds promise in improving change classification effectiveness by leveraging unlabeled data.
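A minimal sketch of semi-supervised change classification under stated assumptions: commits with known labels plus unlabeled commits (label -1) are fed to scikit-learn's SelfTrainingClassifier. The feature columns are illustrative stand-ins for commit-level predictors, not the paper's actual feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Each row: [lines added, lines deleted, files touched, author past vulnerable commits]
X = np.array([
    [120, 10, 5, 2], [4, 1, 1, 0], [300, 40, 9, 3], [7, 2, 1, 0],
    [90, 15, 4, 1], [12, 3, 2, 0], [210, 60, 7, 2], [5, 0, 1, 0],
], dtype=float)
# 1 = vulnerability-introducing, 0 = clean, -1 = unlabeled commit
y = np.array([1, 0, 1, 0, -1, -1, -1, -1])

base = LogisticRegression(max_iter=1000)
model = SelfTrainingClassifier(base, threshold=0.75).fit(X, y)

new_commit = np.array([[250, 30, 8, 2]])
print("P(vulnerable) =", model.predict_proba(new_commit)[0, 1])
```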

Journal ArticleDOI
11 Sep 2017
TL;DR: Visual and quantitative evaluations show that this method outperforms Dark Channel Prior and competes with the most robust dehazing methods, since it separates bright and dark areas and therefore reduces the color cast in very bright regions.
Abstract: Dehazing methods based on prior assumptions derived from statistical image properties fail when these properties do not hold. This is most likely to happen when the scene contains large bright areas, such as snow and sky, due to the ambiguity between the airlight and the depth information. This is the case for the popular dehazing method Dark Channel Prior. In order to improve its performance, the authors propose to combine it with the recent multiscale STRESS, which serves to estimate the Bright Channel Prior. Visual and quantitative evaluations show that this method outperforms Dark Channel Prior and competes with the most robust dehazing methods, since it separates bright and dark areas and therefore reduces the color cast in very bright regions.
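A short sketch of the standard Dark Channel Prior step that the method above builds on: the dark channel is the per-pixel minimum over colour channels followed by a minimum filter over a local patch, and it drives the transmission estimate t = 1 - omega * dark(I / A). The Bright Channel / STRESS combination from the paper is not reproduced here, and the patch size and omega are the commonly used defaults, not necessarily the paper's.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """img: float32 RGB in [0, 1]. Returns the dark channel (H, W)."""
    per_pixel_min = img.min(axis=2)
    return minimum_filter(per_pixel_min, size=patch)

def estimate_transmission(img, airlight, omega=0.95, patch=15):
    """Standard DCP transmission estimate t = 1 - omega * dark(I / A)."""
    normalized = img / np.maximum(airlight, 1e-6)
    return 1.0 - omega * dark_channel(normalized, patch)

# Toy usage with a random "hazy" image and a bright airlight estimate.
hazy = np.random.rand(64, 64, 3).astype(np.float32)
airlight = np.array([0.95, 0.96, 0.97], dtype=np.float32)
t = estimate_transmission(hazy, airlight)
print("transmission range:", float(t.min()), float(t.max()))
```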