
Showing papers presented at "Color Imaging Conference in 2018"


Proceedings ArticleDOI
01 Oct 2018
TL;DR: Tests showed that the proposed application successfully addresses the drawbacks of current auction marketplaces and that selling on the application is cheaper than existing online and in-person options.
Abstract: Modern centralized online marketplaces such as eBay offer an alternative option for consumers to both sell and purchase goods with relative ease. However, drawbacks to these marketplaces include the platform's ability to block merchants at their own whim, the fees paid to the platform when listing a product and when selling a product, and the lack of privacy of users' data. In this paper, we propose an application that remedies all three of these drawbacks through use of the Ethereum blockchain platform. The application was developed using the Truffle development framework. The application's functions were contained within an Ethereum smart contract, which was then migrated to the Ethereum network. The user's input was read through a web interface and sent to the Ethereum network via the web3.js API. Statistics about the application were gathered on the Rinkeby test network. The application was shown to have an average transaction runtime of 3.8 seconds, and an average gas consumption of 4.6 wei. Contract creation times for the application were shown to be less than a second. A cost analysis of the application was then conducted. The gas consumption of the transactions needed to both buy and sell a product was converted into US dollars, and the gas cost of the application was then compared to the cost to use an online auction marketplace such as eBay as well as an in-person auction house such as Sotheby's. The results showed that selling on the application is cheaper than existing online options as well as existing in-person options. These tests showed that our application was successful in addressing the drawbacks of current auction marketplaces.
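The cost analysis described above converts gas consumption into US dollars. As a minimal sketch of that conversion (the function name and the numbers below are hypothetical placeholders, not the paper's measurements), the dollar cost of a transaction follows from the gas used, the gas price, and the ETH exchange rate:

```python
def gas_cost_usd(gas_used, gas_price_gwei, eth_price_usd):
    """Convert a transaction's gas consumption into US dollars.

    gas_used:        units of gas consumed by the transaction
    gas_price_gwei:  price paid per unit of gas, in gwei (1 gwei = 1e-9 ETH)
    eth_price_usd:   exchange rate for 1 ETH in USD
    """
    eth_spent = gas_used * gas_price_gwei * 1e-9
    return eth_spent * eth_price_usd

# Hypothetical example: 100,000 gas at 2 gwei with ETH at $200
print(gas_cost_usd(100_000, 2, 200))  # ~0.04 USD
```

The same arithmetic, applied to the gas consumed by the buy and sell transactions, yields the per-sale figure that the paper compares against eBay's and Sotheby's fees.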

69 citations


Proceedings ArticleDOI
01 Oct 2018
TL;DR: Three blockchain-based alternatives to the CA-based PKI for supporting IoT devices are deployed, based on Emercoin Name Value Service, smart contracts by Ethereum blockchain, and Ethereum Light Sync client, and it is shown that they are much more efficient in terms of computational and storage requirements in addition to providing a more robust and scalable PKI.
Abstract: Traditionally, a Certification Authority (CA) is required to sign, manage, verify and revoke public key certificates. Multiple CAs together form the CA-based Public Key Infrastructure (PKI). The use of a PKI forces one to place trust in the CAs, which have proven to be a single point-of-failure on multiple occasions. Blockchain has emerged as a transformational technology that replaces centralized trusted third parties with a decentralized, publicly verifiable, peer-to-peer data store which maintains data integrity among nodes through various consensus protocols. In this paper, we deploy three blockchain-based alternatives to the CA-based PKI for supporting IoT devices, based on Emercoin Name Value Service (NVS), smart contracts by Ethereum blockchain, and Ethereum Light Sync client. We compare these approaches with CA-based PKI and show that they are much more efficient in terms of computational and storage requirements in addition to providing a more robust and scalable PKI.

56 citations


Proceedings ArticleDOI
24 Apr 2018
TL;DR: In this article, a lightweight Convolutional Neural Network (L-CNN) was proposed for human detection at the edge, running on a single-board computer (SBC) for real-time human tracking.
Abstract: Edge computing efficiently extends the realm of information technology beyond the boundary defined by the cloud computing paradigm. Performing computation near the source and destination, edge computing is promising for addressing the challenges in many delay-sensitive applications, like real-time human surveillance. Leveraging ubiquitously connected cameras and smart mobile devices, it enables video analytics at the edge. In recent years, many smart video surveillance approaches have been proposed for object detection and tracking using Artificial Intelligence (AI) and Machine Learning (ML) algorithms. This work explores the feasibility of two popular human-object detection schemes at the edge, Haar-Cascade and HOG feature extraction with an SVM classifier, and introduces a lightweight Convolutional Neural Network (L-CNN) that leverages depthwise separable convolution for less computation in human detection. Single Board Computers (SBCs) are used as edge devices for tests, and the algorithms are validated using real-world campus surveillance video streams and open data sets. The experimental results are promising: the final algorithm is able to track humans with decent accuracy, in real time, at a resource consumption affordable by edge devices.
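The saving that depthwise separable convolution buys L-CNN can be illustrated with a back-of-the-envelope parameter count (a generic sketch, not the paper's exact architecture; biases omitted):

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution layer."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# 3x3 layer with 32 input and 64 output channels
standard = conv_params(3, 32, 64)                  # 18432
separable = depthwise_separable_params(3, 32, 64)  # 2336
print(standard / separable)  # ~7.9x fewer parameters
```

For k x k kernels the ratio approaches 1/c_out + 1/k^2, which is why 3x3 depthwise separable layers cut computation by roughly an order of magnitude.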

55 citations


Proceedings ArticleDOI
Cong Pu1
01 Oct 2018
TL;DR: A link-quality and traffic-load aware optimized link state routing protocol, also called LTA-OLSR, to provide efficient and reliable communication and data transmission in UANETs is proposed and simulation results indicate that the proposed routing protocol can be a viable approach in UAV Ad Hoc Networks.
Abstract: With increasingly popular multi-sized unmanned aerial vehicles (UAVs), also referred to as drones, UAV Ad Hoc Networks (UANETs) play an essential role in the realization of coordinating the access of drones to controlled airspace, and providing navigation services between locations in the context of Internet-of-Drones (IoD). Because of the versatility, flexibility, easy installation and relatively small operating expenses of drones, UANETs are more efficient in completing complex tasks in harsh environments, e.g., search and destroy operations, border surveillance, disaster monitoring, etc. However, due to the high mobility, drastically changing network topology, and intermittently connected communication links, existing routing protocols and communication algorithms in Mobile Ad Hoc Networks and Vehicular Ad Hoc Networks cannot be directly applied in UANETs. In this paper, we propose a link-quality and traffic-load aware optimized link state routing protocol, also called LTA-OLSR, to provide efficient and reliable communication and data transmission in UANETs. A link quality scheme is proposed to differentiate link qualities between a node and its neighbor nodes by using the statistical information of received signal strength indication (RSSI) of received packets. A traffic load scheme is also proposed to ensure a lightly loaded path by taking into account MAC-layer channel contention information and the number of packets stored in the buffer. We evaluate the proposed schemes through extensive simulation experiments using OMNeT++ and compare their performance with the original OLSR and DSR protocols. The simulation results indicate that the proposed routing protocol can be a viable approach in UAV Ad Hoc Networks.
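A minimal sketch of how RSSI statistics and buffer/contention state might be folded into a single link cost (the function names, normalization bounds, and 50/50 weighting below are illustrative assumptions, not LTA-OLSR's actual formulas):

```python
from statistics import mean

def link_quality(rssi_samples, rssi_min=-95.0, rssi_max=-35.0):
    """Map the mean RSSI (dBm) of recently received packets to [0, 1]."""
    avg = mean(rssi_samples)
    return max(0.0, min(1.0, (avg - rssi_min) / (rssi_max - rssi_min)))

def traffic_load(queued_packets, buffer_size, busy_fraction):
    """Combine buffer occupancy and MAC channel busy time into [0, 1]."""
    return 0.5 * (queued_packets / buffer_size) + 0.5 * busy_fraction

def link_cost(rssi_samples, queued, buf, busy, alpha=0.5):
    """Lower cost = better link: high quality and light load."""
    return alpha * (1.0 - link_quality(rssi_samples)) + \
           (1.0 - alpha) * traffic_load(queued, buf, busy)

good = link_cost([-50, -52, -48], queued=5, buf=50, busy=0.1)
bad = link_cost([-88, -90, -85], queued=40, buf=50, busy=0.7)
print(good < bad)  # True: the stronger, lightly loaded link is preferred
```

A route-selection layer would then prefer next hops minimizing the accumulated cost, which is the intuition behind combining link quality and traffic load.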

47 citations


Journal ArticleDOI
01 Jan 2018
TL;DR: In this paper, the authors re-generate a new "recommended" ground-truth set based on the calculation methodology described by Shi and Funt, and then review the performance evaluation of a range of illuminant estimation algorithms.
Abstract: In a previous work, it was shown that there is a curious problem with the benchmark ColorChecker dataset for illuminant estimation. To wit, this dataset has at least 3 different sets of ground-truths. Typically, for a single algorithm a single ground-truth is used. But then different algorithms, whose performance is measured with respect to different ground-truths, are compared against each other and then ranked. This makes no sense. We show in this paper that there are also errors in how each ground-truth set was calculated. As a result, all performance rankings based on the ColorChecker dataset - and there are scores of these - are inaccurate. In this paper, we re-generate a new 'recommended' ground-truth set based on the calculation methodology described by Shi and Funt. We then review the performance evaluation of a range of illuminant estimation algorithms. Compared with the legacy ground-truths, we find that the difference in how algorithms perform can be large, with many local rankings of algorithms being reversed. Finally, we draw the reader's attention to our new 'open' data repository which, we hope, will allow the ColorChecker set to be rehabilitated and once again become a useful benchmark for illuminant estimation algorithms.
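Illuminant estimation performance of the kind reviewed here is conventionally measured as the recovery angular error between the estimated and ground-truth illuminant RGB vectors (a standard metric in this literature, sketched below; the abstract does not spell out the formula):

```python
import math

def angular_error(est, truth):
    """Recovery angular error (degrees) between two illuminant RGB vectors."""
    dot = sum(e * t for e, t in zip(est, truth))
    norm = math.sqrt(sum(e * e for e in est)) * math.sqrt(sum(t * t for t in truth))
    # Clamp to guard against floating-point drift outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

print(angular_error([1, 1, 1], [1, 1, 1]))            # 0.0
print(round(angular_error([1, 0, 0], [1, 1, 0]), 1))  # 45.0
```

Because this error is computed against a chosen ground-truth set, every ranking built from it inherits that set's errors, which is exactly the problem the paper raises.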

42 citations


Proceedings ArticleDOI
Shaban Shabani1, Maria Sokhn1
01 Oct 2018
TL;DR: The system combines the human factor with the machine learning approach through a decision-making model that estimates the classification confidence of the algorithms and decides whether the task needs human input, achieving reasonably higher accuracy compared to the reported baseline results.
Abstract: The rapid growth of fake news, especially in social media, has become a challenging problem with negative social impacts on a global scale. In contrast to fake news, which is intended to deceive and manipulate the reader, satirical stories are designed to entertain the reader by ridiculing or criticizing a social figure. Due to the serious threat of misleading information, researchers, governments, journalists and fact-checking volunteers are working together to address the fake news issue and increase the accountability of digital media. Automatic fake news detection systems enable identification of deceptive news; low accuracy remains their main drawback. Automatic detection using only news content is a technically challenging task, as the language used in these articles is crafted to bypass fake news detectors. This becomes even more complicated when the task is to differentiate satirical stories from fake news. On the other hand, human cognitive skills have been shown to outperform machine-based systems on such tasks. In this paper, we address fake news and satire detection by proposing a method that uses a hybrid machine-crowd approach for detection of potentially deceptive news. This system combines the human factor with the machine learning approach through a decision-making model that estimates the classification confidence of the algorithms and decides whether the task needs human input. Our approach achieves reasonably higher accuracy compared to the reported baseline results, in exchange for the cost and latency of using the crowdsourcing service.
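The routing decision at the heart of such a hybrid pipeline can be sketched in a few lines (the threshold value and function name are hypothetical; the paper's decision-making model is more elaborate):

```python
def route_task(class_probs, threshold=0.8):
    """Decide whether a classifier's prediction is trusted or sent to the crowd.

    class_probs: mapping from label to predicted probability.
    Returns (label, 'machine') when confidence clears the threshold,
    otherwise (label, 'crowd') to request human input.
    """
    label, confidence = max(class_probs.items(), key=lambda kv: kv[1])
    source = "machine" if confidence >= threshold else "crowd"
    return label, source

print(route_task({"fake": 0.95, "satire": 0.05}))  # ('fake', 'machine')
print(route_task({"fake": 0.55, "satire": 0.45}))  # ('fake', 'crowd')
```

Raising the threshold trades more crowdsourcing cost and latency for higher accuracy, which is the trade-off the abstract reports.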

40 citations


Proceedings ArticleDOI
18 Oct 2018
TL;DR: A novel, collaborative framework that assists a security analyst by exploiting the power of semantically rich knowledge representation and reasoning integrated with different machine learning techniques is described.
Abstract: The early detection of cybersecurity events such as attacks is challenging given the constantly evolving threat landscape. Even with advanced monitoring, sophisticated attackers can spend more than 100 days in a system before being detected. This paper describes a novel, collaborative framework that assists a security analyst by exploiting the power of semantically rich knowledge representation and reasoning integrated with different machine learning techniques. Our Cognitive Cybersecurity System ingests information from various textual sources and stores it in a common knowledge graph using terms from an extended version of the Unified Cybersecurity Ontology. The system then reasons over the knowledge graph, which combines a variety of collaborative agents representing host- and network-based sensors, to derive improved actionable intelligence for security administrators, decreasing their cognitive load and increasing their confidence in the result. We describe a proof-of-concept framework for our approach and demonstrate its capabilities by testing it against custom-built ransomware similar to WannaCry.

37 citations


Journal ArticleDOI
12 Nov 2018
TL;DR: In this paper, a deep residual network is proposed to learn an end-to-end mapping between Bayer images and high-resolution images, which can recover high-quality super-resolved images from low-resolution Bayer mosaics in a single step without producing the artifacts common to such processing when the two operations are done separately.
Abstract: In digital photography, two image restoration tasks have been studied extensively and resolved independently: demosaicing and super-resolution. Both tasks are related to resolution limitations of the camera. Performing super-resolution on demosaiced images simply exacerbates the artifacts introduced by demosaicing. In this paper, we show that such accumulation of errors can be easily averted by jointly performing demosaicing and super-resolution. To this end, we propose a deep residual network for learning an end-to-end mapping between Bayer images and high-resolution images. By training on high-quality samples, our deep residual demosaicing and super-resolution network is able to recover high-quality super-resolved images from low-resolution Bayer mosaics in a single step, without producing the artifacts common to such processing when the two operations are done separately. We perform extensive experiments to show that our deep residual network produces demosaiced and super-resolved images that are superior to the state of the art both qualitatively and in terms of PSNR and SSIM metrics.
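The network's input is a Bayer image, i.e. one color sample per pixel laid out on a color filter array. A minimal sketch of simulating such a mosaic from a full RGB image (the RGGB layout is an assumption here; the abstract does not specify the pattern):

```python
def bayer_mosaic(rgb):
    """Subsample an RGB image (rows of (r, g, b) tuples) into an
    RGGB Bayer mosaic with a single color sample per pixel."""
    h, w = len(rgb), len(rgb[0])
    mosaic = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r, g, b = rgb[y][x]
            if y % 2 == 0 and x % 2 == 0:
                mosaic[y][x] = r  # red site
            elif y % 2 == 1 and x % 2 == 1:
                mosaic[y][x] = b  # blue site
            else:
                mosaic[y][x] = g  # green sites
    return mosaic

img = [[(1, 2, 3), (4, 5, 6)],
       [(7, 8, 9), (10, 11, 12)]]
print(bayer_mosaic(img))  # [[1, 5], [8, 12]]
```

Demosaicing must interpolate the two missing channels at every pixel; the paper's point is that doing this jointly with upscaling, rather than in sequence, avoids compounding interpolation artifacts.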

37 citations


Proceedings ArticleDOI
01 Oct 2018
TL;DR: In this paper, the authors proposed AutoBotCatcher, which exploits a Byzantine Fault Tolerant (BFT) blockchain, as a state transition machine that allows collaboration of multiple parties without trust, in order to perform collaborative and dynamic botnet detection.
Abstract: In general, a botnet is a collection of compromised internet computers, controlled by attackers for malicious purposes. To increase attacks' chance of success and resilience against defence mechanisms, modern botnets often have a decentralized P2P structure. Here, IoT devices are playing a critical role, becoming one of the major tools for malicious parties to perform attacks. Notable examples are the DDoS attacks on Krebs on Security and Dyn, which were performed by IoT devices that were part of botnets. We take a first step towards detecting P2P botnets in IoT by proposing AutoBotCatcher, whose design is driven by the observation that bots of the same botnet frequently communicate with each other and form communities. As such, the purpose of AutoBotCatcher is to dynamically analyze communities of IoT devices, formed according to their network traffic flows, to detect botnets. AutoBotCatcher exploits a Byzantine Fault Tolerant (BFT) blockchain as a state transition machine that allows collaboration of multiple parties without trust, in order to perform collaborative and dynamic botnet detection by collecting and auditing IoT devices' network traffic flows as blockchain transactions. In this paper, we focus on the design of AutoBotCatcher by first defining its underlying blockchain structure and then discussing its components.

33 citations


Proceedings ArticleDOI
01 Oct 2018
TL;DR: A solution using neural networks in combination with a hybrid deep learning algorithm to analyze video stream data is proposed; it will be able to quickly identify and assess criminal activity, which will in turn reduce the workload on supervising officials.
Abstract: The quick and accurate identification of criminal activity is paramount to securing any residence. With the rapid growth of smart cities, the integration of crime detection systems seeks to improve this security. In the past, a strong reliance has been placed on standard video surveillance to achieve this goal. This often creates a backlog of video data that must be monitored by a supervising official. For large urban areas, this creates an increasingly large workload for supervising officials, which leads to an increase in error rate. Solutions have been implemented to help reduce the workload. Currently, autoregressive models have been used to better forecast criminal acts, but they also have a list of shortcomings. We propose a solution using neural networks in combination with a hybrid deep learning algorithm to analyze video stream data. Our system will be able to quickly identify and assess criminal activity, which will in turn reduce the workload on supervising officials. When implemented across smart city infrastructure, it will allow for an efficient and adaptable crime detection system.

32 citations


Proceedings ArticleDOI
01 Oct 2018
TL;DR: This work leverages natural language processing techniques to extract attacker actions from threat report documents generated by different organizations and automatically classify them into standardized tactics and techniques, while providing relevant mitigation advisories for each attack.
Abstract: With an increase in targeted attacks such as advanced persistent threats (APTs), enterprise system defenders require comprehensive frameworks that allow them to collaborate and evaluate their defense systems against such attacks. MITRE has developed a framework which includes a database of different kill-chains, tactics, techniques, and procedures that attackers employ to perform these attacks. In this work, we leverage natural language processing techniques to extract attacker actions from threat report documents generated by different organizations and automatically classify them into standardized tactics and techniques, while providing relevant mitigation advisories for each attack. A naive method to achieve this is by training a machine learning model to predict labels that associate the reports with relevant categories. In practice, however, sufficient labeled data for model training is not always readily available, so that training and test data come from different sources, resulting in bias. A naive model would typically underperform in such a situation. We address this major challenge by incorporating an importance weighting scheme called bias correction that efficiently utilizes available labeled data, given threat reports whose categories are to be automatically predicted. We empirically evaluated our approach on 18,257 real-world threat reports generated between 2000 and 2018 from various computer security organizations to demonstrate its superiority by comparing its performance with an existing approach.
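The bias-correction idea is a standard covariate-shift technique: reweight each labeled training example by how likely it is under the test distribution relative to the training distribution. A minimal sketch, assuming the densities are estimated elsewhere (the function names and numbers are illustrative, not the paper's scheme in detail):

```python
def importance_weights(train_probs, test_probs):
    """Per-example weights w(x) = p_test(x) / p_train(x) for covariate shift.

    train_probs / test_probs: probability (or density) each example has
    under the training and test distributions, estimated elsewhere.
    """
    return [pt / ptr for ptr, pt in zip(train_probs, test_probs)]

def weighted_loss(losses, weights):
    """Training objective: importance-weighted average loss."""
    return sum(l * w for l, w in zip(losses, weights)) / sum(weights)

w = importance_weights([0.5, 0.25, 0.25], [0.2, 0.4, 0.4])
print(w)  # [0.4, 1.6, 1.6] -- over-represented training examples are down-weighted
print(weighted_loss([1.0, 0.5, 0.25], w))
```

Minimizing the weighted loss makes the classifier behave as if it had been trained on data drawn from the test-report distribution, countering the source mismatch described above.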

Proceedings ArticleDOI
01 Oct 2018
TL;DR: How blockchain, one of today's hottest technologies, can be used in support of secure inter-organizational processes is discussed, and the additional security issues that the use of blockchain can bring are pointed out.
Abstract: Today, most services one may think of are based on a collaborative paradigm (e.g., social media services, IoT-based services, etc.). Among the most relevant representatives of this class of services are inter-organizational processes, where an organized group of joined activities is carried out by two or more organizations to achieve a common business goal. Inter-organizational processes are therefore vital to achieving business partnerships among different organizations. However, they may also pose serious security and privacy threats to the data each organization exposes. This is mainly due to the weak trust relationships that may hold among the collaborating parties, which result in a potential lack of trust in how data and operations are managed. In this paper, we discuss how blockchain, one of today's hottest technologies, can be used in support of secure inter-organizational processes. We further point out the additional security issues that the use of blockchain can bring, illustrate ongoing research projects in the area, and discuss future research directions.

Proceedings ArticleDOI
18 Oct 2018
TL;DR: The problem of link-sign prediction is studied by combining random walks for graph sampling, Doc2Vec for node vectorization and Recurrent Neural Networks for prediction, showing improved prediction accuracy.
Abstract: Many real-world applications can be modeled as signed directed graphs wherein the links between nodes can have either positive or negative signs. Social networks can be modeled as signed directed graphs where positive/negative links represent trust/distrust relationships between users. In order to predict user behavior in social networks, several studies have addressed the link-sign prediction problem that predicts a link sign as positive or negative. However, the existing approaches do not take into account the time when the links were added which plays an important role in understanding the user relationships. Moreover, most of the existing approaches require the complete network information which is not realistic in modern social networks. Last but not least, these approaches are not adapted for dynamic networks and the link-sign prediction algorithms have to be reapplied each time the network changes. In this paper, we study the problem of link-sign prediction by combining random walks for graph sampling, Doc2Vec for node vectorization and Recurrent Neural Networks for prediction. The approach requires only local information and can be trained incrementally. Our experiments on the same datasets as state-of-the-art approaches show an improved prediction.
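The first stage of the pipeline, random-walk graph sampling, only needs each node's local out-edges, which is what makes the approach workable without complete network information. A minimal sketch over a signed directed graph (the restart-on-dead-end policy and seeded RNG are illustrative assumptions; the Doc2Vec and RNN stages are omitted):

```python
import random

def random_walk(adj, start, length, rng=random.Random(42)):
    """Sample a walk of `length` steps over a signed directed graph.

    adj: mapping node -> list of (neighbor, sign) out-edges.
    Returns the visited node sequence; restarts at `start` on dead ends.
    """
    walk = [start]
    node = start
    for _ in range(length):
        edges = adj.get(node, [])
        if not edges:
            node = start  # dead end: restart the walk
        else:
            node, _sign = rng.choice(edges)
        walk.append(node)
    return walk

graph = {"a": [("b", +1), ("c", -1)], "b": [("c", +1)], "c": []}
walk = random_walk(graph, "a", 5)
print(walk)
```

Each sampled walk is then treated as a "document" of nodes for Doc2Vec-style vectorization, and new walks can be sampled as the network changes, supporting the incremental training the abstract describes.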

Proceedings ArticleDOI
01 Oct 2018
TL;DR: This paper presents a novel approach, called Monitor, which first identifies patterns in past consumption data and then uses these patterns to detect abnormalities, which reduces the rate of false positive alarms significantly and makes it more suitable for real-world deployments.
Abstract: With the growth of smart cities, more buildings are now being instrumented with smart meters to provide better energy efficiency for sustainable development. Buildings consume around 39% of electrical energy worldwide, and studies report that wasteful consumer behavior, such as forgetting to switch off an appliance after use or using an appliance with misconfigured settings, adds about one-third to a building's consumption. These instances result in deviations from normal energy consumption and are called abnormalities. Detecting such abnormalities is important for reducing energy wastage. Existing methods detect abnormalities by analyzing smart meter data; however, they produce a high number of false positive alarms. This inaccuracy leads building administrators to ignore the alarms, which also affects genuine alarms. Thus, reducing false positive alarms and making detection algorithms more accurate is a major aim. In this paper, we present our novel approach, called Monitor, which first identifies patterns in past consumption data and then uses these patterns to detect abnormalities. Our approach requires smart meter data only and reduces the rate of false positive alarms considerably. We have evaluated our approach on 16 weeks of smart meter data from real-world buildings. The comparison of this approach with existing approaches shows that our approach improves accuracy by up to 24% in the best scenario and by 14% on average. This improvement in accuracy reduces the rate of false positive alarms significantly and makes the approach more suitable for real-world deployments.
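A pattern-then-detect scheme of this general shape can be sketched as learning a per-time-slot consumption band from history and flagging readings that fall outside it (the per-slot mean/stdev model and the 3-sigma threshold below are generic assumptions, not Monitor's actual algorithm):

```python
from statistics import mean, stdev

def learn_pattern(history):
    """history: list of daily consumption profiles (equal-length lists of kWh).
    Returns the per-slot (mean, stdev) pattern learned from past data."""
    slots = list(zip(*history))
    return [(mean(s), stdev(s)) for s in slots]

def abnormalities(day, pattern, k=3.0):
    """Flag slots deviating more than k standard deviations from the pattern."""
    return [i for i, (x, (m, s)) in enumerate(zip(day, pattern))
            if abs(x - m) > k * s]

history = [[2.0, 2.1, 8.0], [2.1, 2.0, 8.2], [1.9, 2.2, 7.9]]
pattern = learn_pattern(history)
print(abnormalities([2.0, 6.0, 8.0], pattern))  # [1]: slot 1 spikes far above normal
```

Tightening or loosening k trades missed abnormalities against false alarms, the balance the paper's accuracy improvements target.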

Proceedings ArticleDOI
01 Oct 2018
TL;DR: A multilayer perceptron (MLP) neural network to detect intruders or attackers on an IoV network is proposed and a thorough simulation study demonstrates the effectiveness of the new MLP-based intrusion detection system.
Abstract: Security of the Internet of Vehicles (IoV) is critical, as it promises to provide safer and more secure driving. IoV relies on VANETs, which are based on V2V (Vehicle-to-Vehicle) communication. The vehicles are integrated with various sensors and embedded systems, allowing them to gather data related to the situation on the road. The collected data can be information associated with a car accident, a congested highway ahead, a parked car, etc. This information is exchanged with other neighboring vehicles on the road to promote safe driving. IoV networks are vulnerable to various security attacks. V2V communication comprises specific vulnerabilities which can be exploited by attackers to compromise the whole network. In this paper, we concentrate on intrusion detection in IoV and propose a multilayer perceptron (MLP) neural network to detect intruders or attackers on an IoV network. Results are in the form of predictions, classification reports, and confusion matrices. A thorough simulation study demonstrates the effectiveness of the new MLP-based intrusion detection system.

Journal ArticleDOI
15 Nov 2018
TL;DR: A new method is set out for finding the filter that, in a least-squares sense, best achieves the Luther condition, where the filter multiplied by the camera spectral sensitivities is 'almost' a linear combination of the colour matching functions of the human visual system.
Abstract: The idea of placing a coloured filter in front of a camera to make it more colorimetric has been previously proposed. However, this prior-art approach sought to increase the dimensionality of the capture — i.e. to take an image with and without a filter — rather than to change the spectral characteristics of the sensor itself. In this paper, we set out a new method for finding the filter that, in a least-squares sense, best achieves the Luther condition. That is, the filter multiplied by the camera spectral sensitivities is 'almost' a linear combination of the colour matching functions of the human visual system. We show that for a given sensor set the best filter and best linear mapping can be found together by solving an alternating least-squares problem. Experiments demonstrate that placing an optimal filter in front of a camera can result in a dramatic improvement in its ability to see the world colorimetrically.
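The alternating least-squares formulation can be sketched as follows (the notation is assumed here, not taken from the paper: Q is the n-by-3 matrix of camera spectral sensitivities sampled at n wavelengths, X the n-by-3 colour matching functions, f the n-vector of filter transmittances, and M a 3-by-3 linear map):

```latex
% Objective: make the filtered camera sensitivities a linear
% combination of the colour matching functions (Luther condition).
\min_{f,\,M}\; \bigl\| \mathrm{diag}(f)\, Q\, M - X \bigr\|_F^2
% Alternation: with f fixed, solving for M is a linear least-squares
% problem; with M fixed, solving for f is a linear least-squares
% problem; iterating the two steps drives the residual down.
```

Each half-step has a closed-form solution, which is why the filter and the linear mapping can be found together as the abstract states.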

Proceedings ArticleDOI
01 Oct 2018
TL;DR: Results show social bots significantly skew perception of candidates when using volume and sentiment as metrics, and when considering the Twitter platform demographic.
Abstract: Twitter has been the go-to platform for political discourse, with politicians and news outlets releasing information via tweets. Since social media has become a staple of political campaigns, the spread of misinformation has greatly increased due to social bots. This study seeks to determine the effect social bots on Twitter had on public opinion of candidates during the 2016 U.S. election. To this end, we collected a tweet dataset consisting of 705,381 unique user accounts during the 2016 U.S. election cycle. Sentiment in the dataset is labeled using a convolutional neural network trained on the sentiment140 dataset. Bot accounts are identified and removed from the dataset and accounts are limited to a single tweet. Tweet volume and sentiment are examined both before and after the removal of bots to determine the effects social bots have on public opinion. When considering the Twitter platform demographic, our results show social bots significantly skew perception of candidates when using volume and sentiment as metrics.



Proceedings ArticleDOI
01 Oct 2018
TL;DR: A consensus decision-making framework based on trust is built; results showed that measurement theory-based trust is useful in the consensus-creating process, as it decreases the number of necessary rounds and even creates a consensus when an extreme conflict in preferences exists.
Abstract: The decision-making process is one we encounter in every aspect of our lives. Decision-making becomes more challenging when dealing with multi-stakeholder decisions, due to the existence of conflicts among stakeholders and the diversity in their expertise. As a result, the influence among them, which is represented by trust, is considered an important criterion when making a final decision. Such trust is a result of the interactions among those stakeholders. Rating is one of the methods that stakeholders use in their interactions to express their opinions of one another. Consensus requires a decision that is agreed upon by everyone; of course, it might take several rounds to reach a final consensus decision. In this research study, we built a consensus decision-making framework based on trust. The trust framework has been proposed previously and is based on measurement theory. We then developed software to simulate decision-making scenarios in order to study the rating convergence over these decision-making rounds and to investigate convergence with and without trust. This simulator was designed to emulate human behavior from a social science perspective. Our results showed that measurement theory-based trust is useful in the consensus-creating process, as it decreases the number of necessary rounds and even creates a consensus when an extreme conflict in preferences exists.

Proceedings ArticleDOI
01 Oct 2018
TL;DR: The authors collaborate with firefighting professionals to train firefighting skills with VR/AR systems that provide situational awareness and address challenges faced by firefighters on the fire ground.
Abstract: It is important to reduce the loss caused by fires through improved operations performed by firefighters trained and equipped with virtual and augmented reality (VR/AR). In this paper, we collaborate with firefighting professionals to train firefighting skills with VR/AR systems. The system is also integrated with computational models and decision tools to provide situational awareness and address challenges faced by firefighters on the fire ground.

Proceedings ArticleDOI
01 Oct 2018
TL;DR: The osmotic collaborative computing method advocated in this paper will be crucial in ensuring the possibility of shifting many complex applications such as novelty detection and other machine learning based cybersecurity applications to edges of large scale IoT networks using low-cost widely available DSPs.
Abstract: To implement machine learning and other useful algorithms in the industrial Internet of Things (IIoT), new computing approaches are needed to avoid the costs of installing state-of-the-art edge analytic devices. A suitable approach may be collaborative edge computing using available, resource-constrained IoT edge analytic hardware. In this paper, a collaborative computing method is used to construct a popular and very useful waveform for IoT analytics, the Gaussian Mixture Model (GMM). GMM parameters are learned in the cloud, but the GMMs are constructed at the IIoT edge layer. GMMs are constructed using the C28x, a ubiquitous, low-cost, embedded digital signal processor (DSP) that is widely available in many pre-existing IIoT infrastructures and in many edge analytic devices. Several GMMs, including 2-GMMs and 3-GMMs, are constructed using the C28x DSP and Embedded C to show that GMM designs can be delivered in the form of an osmotic microservice from the IIoT edge to the IIoT fog layer. The designed GMMs are evaluated using their differentials and zero-crossings and are found to satisfy important waveform design criteria. At the fog layer, the constructed GMMs are then applied to novelty detection, an IIoT cybersecurity and fault-monitoring application, and are found to be able to detect anomalies in IIoT machine data using the Hampel identifier, the 3-sigma rule, and the boxplot rule. The osmotic collaborative computing method advocated in this paper will be crucial in making it possible to shift many complex applications, such as novelty detection and other machine learning based cybersecurity applications, to the edges of large-scale IoT networks using low-cost, widely available DSPs.
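Two of the fog-layer novelty detectors named above, the 3-sigma rule and the Hampel identifier, can be sketched under their standard definitions (Python here rather than the paper's Embedded C; the sample data is invented):

```python
from statistics import mean, stdev, median

def three_sigma_outliers(data):
    """3-sigma rule: flag points more than 3 standard deviations from the mean."""
    m, s = mean(data), stdev(data)
    return [x for x in data if abs(x - m) > 3 * s]

def hampel_outliers(data, t=3.0):
    """Hampel identifier: flag points more than t scaled MADs from the median.
    1.4826 makes the MAD a consistent sigma estimate for Gaussian data."""
    med = median(data)
    mad = median(abs(x - med) for x in data)
    return [x for x in data if abs(x - med) > t * 1.4826 * mad]

data = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 50.0]
print(hampel_outliers(data))       # [50.0]
print(three_sigma_outliers(data))  # []: the outlier inflates sigma and hides itself
```

The contrast in the last two lines is the usual argument for the Hampel identifier: being median-based, it is robust to the very anomalies it is trying to detect.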

Proceedings ArticleDOI
01 Oct 2018
TL;DR: An SDN framework that leverages programmability and centralized control to provide a level of QoS is proposed; results showed that using the presented framework, with or without QoS, reduces the overall average delay by 57%, jitter by 25%, and packet loss by 67%.
Abstract: Traditional networks suffer from limited visibility, difficult management, and weak QoS guarantees. Software-Defined Networks (SDNs) overcome these limitations: they provide network agility, programmability, and centralized network control, features that help solve many security, performance, management, and QoS issues. In this paper, we propose an SDN framework that leverages programmability and centralized control to provide a level of QoS. Knowing the state of the whole network helps optimize decisions toward enhancing network efficiency. The presented framework contains modules that provide monitoring, route determination, rule preparation, and configuration functionalities. The monitoring module analyzes port utilization and probes link delay. The route determination module relies on the shortest path algorithm, with or without a QoS guarantee. Two QoS parameters, namely port utilization and delay, are considered in the monitoring and route determination. The proposed framework is tested in a fat-tree topology with an OpenDayLight (ODL) controller. Experiments are conducted to demonstrate the efficiency of the presented framework over a traditional standalone controller with built-in features. Results showed that the presented framework, with or without QoS, reduces the overall average delay by 57%, jitter by 25%, and packet loss by 67%. Moreover, the monitored port utilization was reduced by 30% on average.
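The route-determination step described above can be sketched as a delay-weighted Dijkstra search that excludes links whose monitored port utilization exceeds a threshold; the graph, the 0.8 utilization cap, and the function name below are assumptions for illustration, not the framework's actual modules or values:

```python
import heapq

def qos_shortest_path(graph, src, dst, util, max_util=0.8):
    """Delay-weighted Dijkstra that skips links whose monitored port
    utilization exceeds max_util.
    graph: {node: {neighbor: delay_ms}}; util: {(u, v): utilization fraction}."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if u in visited:
            continue
        visited.add(u)
        for v, delay in graph[u].items():
            if util.get((u, v), 0.0) > max_util:
                continue  # congested link: exclude it from QoS routing
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path from dst back to src.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]

# Toy four-node topology: delays in ms; the direct A-B link is congested.
graph = {"A": {"B": 1, "C": 5}, "B": {"D": 1}, "C": {"D": 1}, "D": {}}
path, delay = qos_shortest_path(graph, "A", "D", {("A", "B"): 0.95})
print(path, delay)  # routes around the congested link: ['A', 'C', 'D'] 6.0
```

With an empty utilization map the same call returns the plain shortest path, which mirrors the framework's "with or without QoS guarantee" modes.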

Proceedings ArticleDOI
01 Oct 2018
TL;DR: A framework that uses artificial neural network and decision tree machine learning techniques to identify any attempts by non-legitimate users to access sensitive information and to block such access in order to protect the data.
Abstract: The proliferation of smart phones and ubiquitous Internet access enable the emergence of BYOD (Bring Your Own Device) as an effective policy to increase efficiency and productivity in the workplace. The adoption of BYOD, however, gives rise to a number of security threats, including sensitive information infiltration and exfiltration, DoS attacks, and privacy violation. This work proposes a framework to address precisely this issue. The main focus of the paper is on exploring the viability of BYOD in supporting collaboration among team members in heterogeneous mobile computing environments. The basic tenet of this work is to leverage artificial neural network (ANN) and decision tree (DT) machine learning (ML) techniques to identify any attempts by non-legitimate users to access sensitive information and to block such access in order to protect the data. The goal becomes even more challenging given the framework's demands for low latency and high accuracy. The main contributions of this work include the formulation of the BYOD unauthorized access control problem and a framework that uses ANN and DT ML techniques to detect anomalous behaviors and to identify unauthorized access to resources on BYOD devices. The proposed security techniques are implemented and evaluated using a real dataset.
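A trained decision tree over access-log features ultimately reduces to nested threshold checks; the toy classifier below illustrates that shape with invented features and thresholds, and is not the paper's learned model or dataset:

```python
# Each access request is a feature vector; a fitted decision tree compiles
# down to nested comparisons like these. All features and cut-offs below
# are hypothetical, chosen only to show the structure.

def classify_access(hour, failed_logins, bytes_out_mb, known_device):
    """Toy stand-in for a learned decision tree: returns True if the
    access attempt looks legitimate, False if it should be blocked."""
    if not known_device:
        return False                  # unrecognized device: deny outright
    if failed_logins >= 3:
        return False                  # repeated failures suggest brute force
    if hour < 6 or hour > 22:         # off-hours access
        return bytes_out_mb < 10      # allow only small transfers at night
    return bytes_out_mb < 500         # business hours: generous cap

print(classify_access(hour=2, failed_logins=0, bytes_out_mb=50, known_device=True))
# a large off-hours transfer is denied
```

In the real framework these splits would be learned from labeled access logs (by the DT) or approximated by an ANN's decision surface, rather than hand-written.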

Proceedings ArticleDOI
01 Oct 2018
TL;DR: It is shown that already existing, widely-available, and low-cost hardware can participate successfully in collaborative computing for new analytics applications, at edges of IIoT and CPS networks.
Abstract: In developing state-of-the-art applications for the Internet of Things (IoT) and Cyber-Physical Systems (CPS), application software and its associated hardware are usually developed from the ground up. This approach, however, involves a huge capital outlay, extends time-to-market for products, and contributes to integration delay, especially in the case of the Industrial IoT (IIoT). In this paper, it is shown that already existing, widely available, low-cost hardware can participate successfully in collaborative computing for new analytics applications at the edges of IIoT and CPS networks. A well-known, low-cost, embedded digital signal processor (DSP) that is fully integrated in many industries is selected and used as a case study. The selected DSP is made more scalable by applying it to new, novel uses at the edge of an IIoT network, including the design of useful waveforms needed for collaborative computing at IIoT network edges. Embedded C, a programming language suitable for programming resource-constrained network edge devices, is used to design the needed waveforms on the selected DSP. Collaborative computing is achieved by sending the designed waveforms from the network edge, across diverse communication channels, to the fog layer, where they are used to remove noise from selected IIoT data. The correlation coefficient between the noise removal achieved with waveforms from the low-cost DSP and that achieved with waveforms from better-resourced systems at the fog layer is positive and high, signifying successful collaborative computing with such legacy, low-cost DSPs. For low-cost hardware to participate successfully in distributed, real-time collaborative network edge computing, the impact of communication network disturbances must also be examined. Hence, for the selected DSP, the impact of network Bit Error Rate (BER) is examined for both wired and wireless networks.
It is found that wireless channels have a lower BER than powerline communication (PLC) channels, which suffer from impulsive noise, and may therefore be more suitable for real-time, fault-tolerant collaborative computing using the selected DSP.
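The comparison step can be roughed out in a few lines: smooth the same noisy machine data with a Gaussian-shaped waveform from two differently resourced sources, then compare the outputs by Pearson correlation. The kernel sizes, signal, and noise level below are invented for illustration and are not the paper's data:

```python
import math
import random

def gaussian_kernel(n, sigma):
    """Discrete Gaussian-shaped waveform, normalized to unit sum."""
    k = [math.exp(-((i - n // 2) ** 2) / (2 * sigma ** 2)) for i in range(n)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, kernel):
    """Weighted moving average (same length, zero-padded at the edges)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

# Noisy sine wave as a stand-in for IIoT machine data.
random.seed(0)
data = [math.sin(0.1 * t) + random.gauss(0, 0.3) for t in range(200)]
# "DSP" kernel (shorter, coarser) vs. "fog" kernel (longer, finer).
dsp_out = smooth(data, gaussian_kernel(9, 2.0))
fog_out = smooth(data, gaussian_kernel(15, 3.0))
print(round(pearson(dsp_out, fog_out), 3))  # high positive correlation
```

A correlation near 1 between the two smoothed outputs is the kind of evidence the paper uses to argue the low-cost DSP's waveforms are a valid substitute for those from better-resourced fog hardware.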

Proceedings ArticleDOI
01 Oct 2018
TL;DR: This paper uses a Two-Tier VO model and develops the associated VO data naming schema to show how hierarchically structured namespaces can be used to manage sets of named resources from different VO sites, and make them available to different VO members, based on their authorization attributes.
Abstract: This paper investigates the use of Named Data Networks (NDNs) and Attribute-Based Encryption (ABE) to support federations of computing resources managed using the Virtual Organization (VO) concept. The NDN architecture focuses on fetching structurally named and secured pieces of application data, instead of pushing packets to host IP addresses. The VO concept allows management of federations across different administrative domains and enables secure collaborations. We show how hierarchically structured namespaces can be used to manage sets of named resources from different VO sites, and make them available to different VO members, based on their authorization attributes. For this initial investigation, we use a Two-Tier VO model and develop the associated VO data naming schema. We present an example, discuss outstanding issues, and identify future work.
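The naming-plus-attributes idea can be sketched as NDN-style longest-prefix matching over a hierarchical namespace, with an ABE-like check that the requester holds every attribute the matched entry requires. The VO names, attribute labels, and two-tier layout below are hypothetical, not the paper's actual schema:

```python
# Hypothetical two-tier naming: /<vo>/<site>/<resource-type>/<resource>.

def longest_prefix_match(name, table):
    """NDN-style lookup: return the table entry whose name prefix
    matches the most leading components of the requested name."""
    parts = name.strip("/").split("/")
    best, best_len = None, -1
    for prefix, entry in table.items():
        p = prefix.strip("/").split("/")
        if parts[:len(p)] == p and len(p) > best_len:
            best, best_len = entry, len(p)
    return best

# Each namespace entry lists the attributes a requester must hold.
authz = {
    "/vo-alpha": {"attrs": {"member"}},
    "/vo-alpha/site-1/gpu": {"attrs": {"member", "gpu-user"}},
}

def can_access(name, user_attrs):
    """Allow access when the user holds every attribute required by the
    most specific matching namespace entry (an ABE-like policy check)."""
    entry = longest_prefix_match(name, authz)
    return entry is not None and entry["attrs"] <= set(user_attrs)

print(can_access("/vo-alpha/site-1/gpu/node-3", {"member"}))  # False: lacks gpu-user
```

The point of the sketch is that deeper namespace entries can tighten the attribute requirements inherited from the VO root, which is how per-site resources stay restricted to authorized members.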

Proceedings ArticleDOI
01 Oct 2018
TL;DR: This work focuses on the design and the development of the middleware which integrates data coming from mobile and IoT devices specifically deployed in urban contexts using the Osmotic Computing paradigm.
Abstract: Traditional urban pollution monitoring systems suffer from relying solely on fixed stations. Data gathered from such devices are precise, thanks to the quality of the equipment and to established, robust measuring protocols, but the samples cover very limited areas and are collected by discontinuous monitoring campaigns. The spread of mobile technologies has fostered new approaches such as Mobile Crowd Sensing (MCS), which offers the chance to use mobile devices, even personal ones, as sensors of urban data. Nevertheless, one of the open challenges is managing the integration of heterogeneous data flows that differ in type, technical specifications (e.g., diverse transmission protocols), and semantics. Osmotic computing aims to create an abstraction level between the mobile devices and the cloud, enabling opportunistic filtering and the addition of metadata to improve the data processing flow. This work focuses on the design and development of the middleware that integrates data coming from mobile and IoT devices specifically deployed in urban contexts, using the Osmotic Computing paradigm.


Journal ArticleDOI
01 Sep 2018
TL;DR: The authors develop several versions of the diffusion equation to demosaic color filter arrays of any kind and find that random mosaics do not perform the best with their algorithms, but rather pseudo-random mosaics give the best results.
Abstract: The authors develop several versions of the diffusion equation to demosaic color filter arrays of any kind. In particular, they compare isotropic versus anisotropic and linear versus non-linear formulations. Using these algorithms, they investigate the effect of mosaics on the resulting demosaiced images. They perform cross analysis on images, mosaics, and algorithms. They find that random mosaics do not perform the best with their algorithms, but rather pseudo-random mosaics give the best results. The Bayer mosaic also shows equivalent results to good pseudo-random mosaics in terms of peak signal-to-noise ratio but causes visual aliasing artifacts. The linear anisotropic diffusion method performs the best of the diffusion versions, at the level of state-of-the-art algorithms. © 2018 Society for Imaging Science and Technology. [DOI: 10.2352/J.ImagingSci.Technol.2018.62.5.050401]
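The linear isotropic variant in the comparison above can be sketched as explicit heat-equation steps that relax unknown pixels toward their neighbors while clamping the measured CFA samples after each step. The grid size, step count, and time step below are arbitrary illustrative choices, not the paper's settings:

```python
import numpy as np

def diffuse_channel(samples, mask, steps=200, dt=0.2):
    """Linear isotropic diffusion for one color channel: pixels where
    mask is True hold measured CFA samples and are re-imposed after each
    explicit heat-equation step; the rest diffuse until smooth.
    dt must stay below 0.25 for the 4-neighbor explicit scheme to be stable."""
    u = np.where(mask, samples, samples[mask].mean())
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u = u + dt * lap
        u[mask] = samples[mask]  # clamp the measured CFA values
    return u

# Demo: a constant half-gray plane sampled on a sparse grid is recovered.
truth = np.full((8, 8), 0.5)
mask = np.zeros((8, 8), dtype=bool)
mask[::2, ::2] = True  # pretend every other pixel carries this channel
recon = diffuse_channel(np.where(mask, truth, 0.0), mask)
```

The anisotropic and non-linear variants the authors favor replace the constant Laplacian weights with edge-dependent conductivities, but the clamp-and-relax loop has the same structure.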

Proceedings ArticleDOI
01 Oct 2018
TL;DR: A set of instruments aimed at collecting and storing IoT and Smart City data in real time (data shadow), as well as auditing data traffic flows in an IoT Smart City architecture, with the purpose of quantitatively monitoring status and detecting potential anomalies and malfunctions at the level of a single IoT device and/or service.
Abstract: Recent advances in the development of the Internet of Things and ICT have completely changed the way citizens interact with Smart City environments, increasing the demand for more services and infrastructures in many different contexts. Furthermore, citizens want to be active users in a flexible smart living lab, with the possibility to access Smart City data, analyze them, perform actions, and receive notifications based on automated decision-making processes. Critical problems could arise if the continuity of data flows and communication among connected IoT devices and data-driven applications is interrupted or lost, due to device or system malfunctions or unexpected behavior. The proposed solution is a set of instruments aimed at collecting and storing IoT and Smart City data in real time (data shadow), as well as auditing data traffic flows in an IoT Smart City architecture, with the purpose of quantitatively monitoring status and detecting potential anomalies and malfunctions at the level of a single IoT device and/or service. These instruments are the DevDash and AMMA tools, designed and realized within the Snap4City framework. Specific use cases are provided to highlight the capabilities of these instruments in terms of data indexing, monitoring, and analysis.