
Showing papers in "IEEE Transactions on Emerging Topics in Computing in 2019"


Journal ArticleDOI
TL;DR: A novel model for intrusion detection based on two-layer dimension reduction and a two-tier classification module, designed to detect malicious activities such as User to Root (U2R) and Remote to Local (R2L) attacks, is presented.
Abstract: With increasing reliance on Internet of Things (IoT) devices and services, the capability to detect intrusions and malicious activities within IoT networks is critical for the resilience of the network infrastructure. In this paper, we present a novel model for intrusion detection based on two-layer dimension reduction and a two-tier classification module, designed to detect malicious activities such as User to Root (U2R) and Remote to Local (R2L) attacks. The proposed model uses principal component analysis and linear discriminant analysis in the dimension reduction module to project the high-dimensional dataset onto a lower-dimensional space with fewer features. We then apply a two-tier classification module utilizing Naive Bayes and a Certainty-Factor version of K-Nearest Neighbor to identify suspicious behaviors. Experimental results on the NSL-KDD dataset show that our model outperforms previous models designed to detect U2R and R2L attacks.
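A minimal sketch of this two-layer reduction plus two-tier classification pipeline is given below; the synthetic stand-in for NSL-KDD, the component counts, and the plain K-Nearest Neighbor used in place of the Certainty-Factor variant are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-layer dimension reduction + two-tier classification
# pipeline in the spirit of the paper. The synthetic data, component counts,
# and plain KNN (instead of the certainty-factor KNN variant) are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Stand-in for NSL-KDD: 41 features, labels 0 = normal, 1 = attack-like (U2R/R2L).
X, y = make_classification(n_samples=5000, n_features=41, n_informative=12,
                           n_classes=2, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Layer 1 and 2: PCA, then LDA, to project onto a low-dimensional space.
pca = PCA(n_components=10).fit(X_tr)
lda = LinearDiscriminantAnalysis(n_components=1).fit(pca.transform(X_tr), y_tr)
Z_tr, Z_te = lda.transform(pca.transform(X_tr)), lda.transform(pca.transform(X_te))

# Tier 1: Naive Bayes flags suspicious samples.
nb = GaussianNB().fit(Z_tr, y_tr)
suspicious = nb.predict(Z_te) == 1

# Tier 2: KNN re-examines only the flagged samples.
knn = KNeighborsClassifier(n_neighbors=5).fit(Z_tr, y_tr)
final = np.zeros_like(y_te)
final[suspicious] = knn.predict(Z_te[suspicious])
print("detection rate on attacks:", (final[y_te == 1] == 1).mean())
```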

356 citations


Journal ArticleDOI
TL;DR: A fog-computing-based content-aware filtering method for security services, FCSS, is proposed for information-centric social networks; simulations show the advantages of FCSS in terms of hit ratio, filtering delay, and filtering accuracy.
Abstract: Social networks are very important social cyberspaces for people. Currently, information-centric networks (ICN) are the main trend of next-generation networks, which promotes traditional social networks toward information-centric social networks (IC-SN). Because of the complexity and openness of social networks, filtering for security services is a key issue for users. However, existing schemes were proposed for traditional social networks and cannot satisfy the new requirements of IC-SN, including extendibility, data mobility, use of non-IP addresses, and flexible deployment. To address this challenge, a fog-computing-based content-aware filtering method for security services, FCSS, is proposed for information-centric social networks. In FCSS, an assessment and content-matching scheme and a fog-computing-based content-aware filtering scheme are proposed for security services in IC-SN. FCSS contributes to IC-SN as follows. First, fog computing is introduced into IC-SN to shift intelligence and resources from remote servers to the network edge, which provides low latency for security-service filtering and end-to-end communications. Second, an efficient content-label-based content-aware filtering scheme is adopted at the edge of IC-SN to realize accurate filtering for security services. Simulations and evaluations show the advantages of FCSS in terms of hit ratio, filtering delay, and filtering accuracy.

159 citations


Journal ArticleDOI
TL;DR: Unmanned Aerial Vehicles (UAVs) are considered to be viable candidates to promptly form a wireless, meshed offloading backbone that supports LBSN data sensing and the relevant data computations in the LBSN cloud.
Abstract: Location Based Social Networks (LBSNs) have recently emerged as a hot research area. However, the high mobility of LBSN users and the need to quickly provide access points in their interest zones present a unique research challenge. In order to address this challenge, in this paper we consider Unmanned Aerial Vehicles (UAVs) to be viable candidates to promptly form a wireless, meshed offloading backbone that supports LBSN data sensing and the relevant data computations in the LBSN cloud. In the considered network, UAV-mounted cloudlets are assumed to carry out adaptive recommendation in a distributed manner so as to reduce computing and traffic load. Furthermore, the computational complexity and communication overhead of the proposed adaptive recommendation are analyzed. The effectiveness of the proposed recommendation system in the considered LBSN is evaluated through computer-based simulations. Simulation results demonstrate that our proposal achieves significantly improved performance compared to conventional methods in terms of accuracy, throughput, and delay.

84 citations


Journal ArticleDOI
TL;DR: This paper proposes a utility-based data computing scheme which allows vehicles to collect mobile data in the urban area, in order to provide sensing service in the IoT, and presents an integrated architecture by introducing roadside buffers where each buffer can have a sink node to collect sensor data from vehicles.
Abstract: Recently, the Internet of Things (IoT) has emerged as a new paradigm with an ever-increasing number of things to be connected to the Internet. Different from conventional paradigms, in the IoT a data computing scheme is needed to efficiently collect and offer data in order to provide sensing services. However, existing data computing schemes lack integrated and incentive-aware mechanisms to reduce the cost of data collection and to encourage more participants to cooperate. Therefore, in this paper we propose a utility-based data computing scheme which allows vehicles to collect mobile data in the urban area, in order to provide sensing services in the IoT. First, we present an integrated architecture by introducing roadside buffers, where each buffer can have a sink node to collect sensor data from vehicles. Next, by considering both the time cost and the power cost during data collection, we analyze the utilities in the data computing process. Then, modeling the interaction among participants as a bargaining game, we propose a utility-based data computing scheme with incentives, in which the optimal price for the sensing service can be determined. Finally, extensive simulation experiments show that the proposed scheme can efficiently improve the sensing service in the IoT at a low cost.
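As a hedged illustration of how a bargaining game can fix the sensing-service price (the linear utility forms, zero disagreement points, and the Nash bargaining solution below are assumptions for illustration rather than the paper's exact model), let the vehicle's surplus be the price minus its time and power costs and let the buyer's surplus be the data value $v$ minus the price:

$$ p^{*} = \arg\max_{p}\; (v - p)\,\big(p - c_{\mathrm{time}} - c_{\mathrm{power}}\big) = \frac{v + c_{\mathrm{time}} + c_{\mathrm{power}}}{2}, $$

so the negotiated price splits the gain from trade evenly between the data requester and the collecting vehicle.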

83 citations


Journal ArticleDOI
TL;DR: A joint channel access and sampling rate control scheme, named JASC, is proposed that considers real-time channel sensing results and energy harvesting rates; simulations based on a realistic energy harvesting dataset demonstrate that JASC can efficiently improve the network utility in CRSNs.
Abstract: In this paper, we investigate the network utility maximization problem in energy harvesting cognitive radio sensor networks (CRSNs). Different from traditional sensor networks, sensor nodes in CRSNs are equipped with cognitive radio modules, enabling them to dynamically access licensed channels. Since dynamic channel access is critical to guarantee the network capacity of CRSNs, existing solutions that do not consider dynamic channel access cannot be directly applied to CRSNs. To this end, we aim at maximizing the network utility by jointly controlling the sampling rates and channel access of sensor nodes, under energy consumption, channel capacity, and interference constraints. Taking into account fluctuating energy harvesting rates and channel switching costs, we formulate the network utility maximization as a mixed-integer non-linear programming problem and solve it in an efficient and decoupled way by means of dual decomposition. A joint channel access and sampling rate control scheme, named JASC, is then proposed that considers real-time channel sensing results and energy harvesting rates. Extensive simulation results based on a realistic energy harvesting dataset demonstrate that JASC can efficiently improve the network utility in CRSNs.
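A hedged sketch of this kind of joint formulation (the symbols and the simplified constraint set are assumptions, not the paper's exact program): with sampling rate $r_i$ for node $i$, binary channel-assignment variables $x_{ic}$, harvested energy budget $E_i$, sampling energy cost $e_i(\cdot)$, channel-switching cost $e^{\mathrm{sw}}_{ic}$, and channel capacity $C_c$,

$$ \max_{r_i \ge 0,\ x_{ic}\in\{0,1\}} \sum_i \log r_i \quad \text{s.t.} \quad e_i(r_i) + \sum_c x_{ic}\, e^{\mathrm{sw}}_{ic} \le E_i \ \ \forall i, \qquad \sum_i x_{ic}\, r_i \le C_c \ \ \forall c, $$

and relaxing the coupling capacity constraints with Lagrange multipliers decouples this mixed-integer program into per-node sampling-rate subproblems and a channel-assignment subproblem, which is the essence of the dual decomposition mentioned above.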

74 citations


Journal ArticleDOI
TL;DR: This paper aims to summarize and categorize the existing benefits and challenges of incorporating blockchain in the healthcare domain, provide a framework that will facilitate new research activities, and establish the state of evidence with an in-depth assessment.
Abstract: Healthcare is a data-intensive domain in which a considerable volume of data is created daily to monitor patients, manage clinical research, produce medical records, and process medical insurance claims. While the focus of blockchain applications in practice has been on building distributed ledgers involving virtual tokens, the impetus of this emerging technology has now extended to the medical domain. With its increased popularity, it is crucial to study how this technology, accompanied by a system for smart contracts, can support and challenge the healthcare domain for all interrelated actors (patients, physicians, insurance companies, regulators) and involved assets (e.g., patient data, physician data, and the equipment and drug supply chains). The contributions of this paper are the following: (i) report the results of a systematic literature review conducted to identify, extract, evaluate, and synthesize the studies on the symbiosis of blockchain and healthcare; (ii) summarize and categorize the existing benefits and challenges of incorporating blockchain in the healthcare domain; (iii) provide a framework that will facilitate new research activities; and (iv) establish the state of evidence with an in-depth assessment.

68 citations


Journal ArticleDOI
Kumud Nepal, Soheil Hashemi, Hokchhay Tann, R. Iris Bahar, Sherief Reda
TL;DR: This article provides an expanded and improved treatment of the ABACUS methodology, which aims to automatically generate approximate designs directly from their behavioral register-transfer level (RTL) descriptions, enabling a wider range of possible approximations.
Abstract: Numerous application domains (e.g., signal and image processing, computer graphics, computer vision, and machine learning) are inherently error tolerant, which can be exploited to produce approximate ASIC implementations with low power consumption at the expense of negligible or small reductions in application quality. A major challenge is the need for approximate and high-level design generation tools that can automatically work on arbitrary designs. In this article, we provide an expanded and improved treatment of our ABACUS methodology, which aims to automatically generate approximate designs directly from their behavioral register-transfer level (RTL) descriptions, enabling a wider range of possible approximations. ABACUS starts by creating an abstract syntax tree (AST) from the input behavioral RTL description of a circuit, and then applies variant operators to the AST to create acceptable approximate designs. The devised variant operators include data type simplifications, arithmetic operation approximations, arithmetic expression transformations, variable-to-constant substitutions, and loop transformations. A design space exploration technique is devised to explore the space of possible variant approximate designs and to identify the designs along the Pareto frontier that represents the trade-off between accuracy and power consumption. In addition, ABACUS prioritizes generating approximate designs that, when synthesized, lead to circuits with simplified critical paths, which are exploited to realize complementary power savings through standard voltage scaling. We integrate ABACUS with a standard ASIC design flow, and evaluate it on four realistic benchmarks from three different domains: machine learning, signal processing, and computer vision. Our tool automatically generates many approximate design variants with large power savings, while maintaining good accuracy. We demonstrate the scalability of ABACUS by parallelizing the flow and using recent standard synthesis tools. Compared to our previous efforts, the new ABACUS tool provides up to 20.5× speed-up in runtime, while being able to generate approximate circuits that lead to additional power savings of up to 40 percent.
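The flavor of a variant operator can be illustrated with a toy analogue that rewrites a Python AST instead of behavioral Verilog; the truncation operator, the FIR-style kernel, and the chosen bit-width k are assumptions for illustration and not ABACUS's actual transformations.

```python
# Toy analogue of an ABACUS-style variant operator, applied to a Python AST
# instead of behavioral RTL. The operator and the example kernel are
# illustrative assumptions, not the tool's actual transformations.
import ast, copy

src = "def fir(x, h):\n    return sum(xi * hi for xi, hi in zip(x, h))\n"

class TruncateMultiply(ast.NodeTransformer):
    """Arithmetic-approximation operator: drop k low-order bits of each
    multiplication operand and rescale, mimicking bit-width truncation."""
    def __init__(self, k):
        self.k = k
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Mult):
            shift = ast.Constant(self.k)
            lhs = ast.BinOp(node.left, ast.RShift(), shift)
            rhs = ast.BinOp(node.right, ast.RShift(), shift)
            return ast.BinOp(ast.BinOp(lhs, ast.Mult(), rhs),
                             ast.LShift(), ast.Constant(2 * self.k))
        return node

tree = ast.parse(src)
variant = ast.fix_missing_locations(TruncateMultiply(k=2).visit(copy.deepcopy(tree)))

exact, approx = {}, {}
exec(compile(tree, "<exact>", "exec"), exact)
exec(compile(variant, "<variant>", "exec"), approx)

x, h = [120, 37, 250, 75], [12, 36, 36, 12]
print("exact :", exact["fir"](x, h))
print("approx:", approx["fir"](x, h))   # accuracy loss traded for hardware savings
```

In the real flow, each such variant would be synthesized and placed on an accuracy versus power Pareto front before being kept or discarded.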

63 citations


Journal ArticleDOI
TL;DR: This paper discusses a design method for compact and accurate digital filters based on SC, formulates the correlation-induced errors produced by the MUX tree, and proposes an algorithm for constructing an optimum MUX tree to minimize the error.
Abstract: Stochastic computing (SC), which is an approximate computation with probabilities, has attracted attention as an alternative to deterministic computing. In this paper, we discuss a design method for compact and accurate digital filters based on SC. Such filter designs are widely used for various purposes, such as image and signal processing and machine learning. Our design method involves two techniques. One is sharing random number sources among several stochastic number generators to reduce the area required by these generators. Clarifying the influence of the correlation around multiplexers (MUXs) on computation accuracy and utilizing circular shifts of the output of random number sources, we can reduce the number of random number sources for a digital filter without losing accuracy. The other technique is to construct a MUX tree, which is the principal part of an SC-based filter. We formulate the correlation-induced errors produced by the MUX tree, and then propose an algorithm for constructing an optimum MUX tree to minimize the error. Experimental results show that the proposed design method can derive compact (approximately 70 percent area reduction) SC-based filters that retain high accuracy.
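A minimal software sketch of the two ideas, one shared random source reused via circular shifts and a MUX acting as a scaled adder, is shown below; the stream length and the use of NumPy's generator instead of a hardware LFSR are assumptions.

```python
# Minimal stochastic-computing sketch of a 2-tap scaled adder (MUX) with
# AND-gate multipliers, sharing one random source via circular shifts.
# Stream length and the software RNG standing in for an LFSR are assumptions.
import numpy as np

N = 4096
rng = np.random.default_rng(0)
base = rng.random(N)                      # one shared random number source

def sng(value, shift):
    """Stochastic number generator: compare `value` against a circularly
    shifted copy of the shared random source (unipolar encoding)."""
    return (np.roll(base, shift) < value).astype(np.uint8)

x0, x1 = 0.6, 0.3                         # filter inputs in [0, 1]
h0, h1 = 0.8, 0.4                         # filter coefficients in [0, 1]

p0 = sng(x0, 0) & sng(h0, 101)            # AND gate = unipolar multiplication
p1 = sng(x1, 307) & sng(h1, 911)
sel = sng(0.5, 1733)                      # MUX select stream with P(sel) = 0.5
y = np.where(sel == 1, p0, p1)            # MUX = scaled addition (p0 + p1) / 2

print("stochastic:", y.mean())
print("exact     :", 0.5 * (x0 * h0 + x1 * h1))
```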

62 citations


Journal ArticleDOI
TL;DR: This paper uses worker confidence to represent the reliability of successfully completing the assigned sensing tasks, and formulates two optimization problems: maximum reliability assignment (MRA) under a recruitment budget and minimum cost assignment (MCA) under a task reliability requirement.
Abstract: The large number of mobile devices equipped with various built-in sensors and easy access to high-speed wireless networks have recently brought spatial crowdsourcing much attention in the research community. Generally, the objective of spatial crowdsourcing is to outsource location-based sensing tasks (e.g., traffic monitoring and pollution monitoring) to ordinary mobile workers (e.g., users carrying smartphones) efficiently. In this paper, we study a reliable task assignment problem for spatial crowdsourcing in a large worker market. Specifically, we use worker confidence to represent the reliability of successfully completing the assigned sensing tasks, and we formulate two optimization problems, maximum reliability assignment (MRA) under a recruitment budget and minimum cost assignment (MCA) under a task reliability requirement. We reveal the special structural properties of these problems, based on which we design effective approaches to assign tasks to the most suitable workers. The performance of the proposed algorithms is verified by theoretical analysis and by experimental results on both real and synthetic datasets.
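A hedged sketch of the budgeted (MRA-style) variant is given below: greedily recruit the worker with the best marginal reliability gain per unit cost until the budget is exhausted. The reliability model (a task succeeds if at least one recruited worker succeeds) and the greedy rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of budgeted reliability maximization for one task: greedily
# recruit the worker with the best marginal reliability gain per unit cost.
# The reliability model and greedy rule are assumptions, not the paper's method.
workers = [                    # (worker id, confidence of success, recruiting cost)
    ("w1", 0.9, 5.0), ("w2", 0.6, 2.0), ("w3", 0.5, 1.5), ("w4", 0.8, 4.0),
]
budget = 6.0

def task_reliability(confidences):
    """Task succeeds if at least one recruited worker succeeds."""
    fail = 1.0
    for c in confidences:
        fail *= (1.0 - c)
    return 1.0 - fail

chosen, spent = [], 0.0
remaining = list(workers)
while True:
    cur = task_reliability([c for _, c, _ in chosen])
    best, best_score = None, 0.0
    for wid, conf, cost in remaining:
        if spent + cost > budget:
            continue                       # cannot afford this worker
        gain = task_reliability([c for _, c, _ in chosen] + [conf]) - cur
        if gain / cost > best_score:
            best, best_score = (wid, conf, cost), gain / cost
    if best is None:
        break
    chosen.append(best)
    spent += best[2]
    remaining.remove(best)

print("recruited:", [w for w, _, _ in chosen], "cost:", spent,
      "reliability: %.3f" % task_reliability([c for _, c, _ in chosen]))
```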

61 citations


Journal ArticleDOI
TL;DR: This work designs a novel temporal, functional, and spatial big data computing framework for large-scale smart grids that achieves a promising computing efficiency approaching the optimal solution with a 95 percent convergence ratio, and saves in-path bandwidth with an 81 percent improvement over benchmarks.
Abstract: With the deployment of monitoring devices, the smart grid is collecting large amounts of energy-related data at an unprecedented speed. The smart grid has become data-driven, which necessitates extracting meaningful data from a large dataset. The traditional approach to data extraction improves computing efficiency in the temporal dimension, but it is designed for only one task in the smart grid. Moreover, the existing solutions neglect the geographical distribution of computing capacity in a large-scale smart grid. The future large-scale smart grid will run over the Internet of Energy, where datasets are sent to a specific destination along power routers hop by hop. Consequently, we design a novel temporal, functional, and spatial big data computing framework for the large-scale smart grid. In the functional dimension, we divide every dataset into sub-groups, each of which has data items shared by different tasks. In the spatial dimension, we determine at which location a power router should be placed to harvest computing resources used for extracting the sub-groups of data items. Our method achieves a promising computing efficiency approaching the optimal solution with a 95 percent convergence ratio, and it saves in-path bandwidth with an 81 percent improvement over benchmarks.

60 citations


Journal ArticleDOI
TL;DR: It is shown that a polynomial can be implemented using multiple levels of NAND gates based on Horner’s rule, if the coefficients are alternately positive and negative and their magnitudes are monotonically decreasing.
Abstract: Stochastic logic implementations of complex arithmetic functions, such as trigonometric, exponential, and sigmoid functions, are derived based on truncated versions of their Maclaurin series expansions. This paper makes three contributions. First, it is shown that a polynomial can be implemented using multiple levels of NAND gates based on Horner's rule, if the coefficients are alternately positive and negative and their magnitudes are monotonically decreasing. Truncated Maclaurin series expansions of arithmetic functions are used to generate polynomials which satisfy these constraints. The inputs and outputs of these functions are represented in unipolar format. Functions including sine, cosine, hyperbolic tangent, logarithm, and exponential can be implemented using this method. Second, a polynomial that does not satisfy these constraints can still be implemented based on Horner's rule if each factor of the polynomial satisfies these constraints. It is shown that functions such as $\sin \pi x/\pi$, $e^{-ax}$, $\tanh ax$, and $\operatorname{sigmoid}(ax)$ (for values of $a>1$) can be implemented in stochastic logic using factorization in combination with Horner's rule. Third, format conversion is proposed for arithmetic functions whose input and output are represented in different formats, such as $\cos \pi x$ given $x\in [0,1]$ and $\operatorname{sigmoid}(x)$ given $x\in [-1,1]$. Polynomials are transformed to equivalent forms that naturally exploit format conversions. The proposed stochastic logic circuits outperform the well-known Bernstein polynomial based and finite-state-machine (FSM) based implementations. Furthermore, the hardware complexity and the critical path of the proposed implementations are less than those of the Bernstein polynomial based and FSM based implementations in most cases.
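As a concrete instance of the Horner factorization described above (the truncation order is chosen here purely for illustration), the degree-5 Maclaurin approximation of $\sin x$ on the unipolar interval $[0,1]$ has alternately signed coefficients with decreasing magnitudes and nests as

$$ \sin x \approx x - \frac{x^{3}}{6} + \frac{x^{5}}{120} = x\left(1 - \frac{x^{2}}{6}\left(1 - \frac{x^{2}}{20}\right)\right), $$

so every nesting level has the form $1 - ab$, which a NAND gate computes exactly on independent unipolar stochastic streams ($P_{\mathrm{NAND}} = 1 - P_a P_b$); the $x^{2}$ factors and the constants $1/6$ and $1/20$ can be supplied as additional independent input streams, and a final AND gate multiplies by $x$.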

Journal ArticleDOI
TL;DR: An unsupervised progressive incremental data mining mechanism, applied to smart meter energy consumption data through frequent pattern mining, is proposed to overcome these challenges and establish a foundation for efficient energy demand management while improving end-user participation.
Abstract: Inarguably, winning consumer confidence by respecting their energy consumption behavior and preferences in various energy programs is imperative but also demanding. Household energy consumption patterns, which provide great insight into consumers' energy consumption behavioral traits, can be learned by understanding user activities along with the appliances used and their time of use. Such information can be retrieved from context-rich smart meter big data. However, the main challenge is how to extract complex interdependencies among multiple appliances operating concurrently, and to identify the appliances responsible for major energy consumption. Furthermore, due to the continuous generation of energy consumption data over a period of time, appliance associations can change. Therefore, they need to be captured regularly and continuously. In this paper, we propose an unsupervised progressive incremental data mining mechanism applied to smart meter energy consumption data through frequent pattern mining to overcome these challenges. This can establish a foundation for efficient energy demand management while improving end-user participation. The details and the evaluation results of the proposed mechanism using a real smart meter dataset are also presented in this paper.
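The core mining step can be sketched with a tiny brute-force frequent-itemset counter standing in for the paper's progressive incremental mechanism; the toy time-slot transactions, the support threshold, and the appliance names are assumptions.

```python
# Hedged sketch of the core idea: mine frequent appliance co-usage patterns
# from time-slotted smart meter events. A tiny brute-force counter stands in
# for the paper's progressive incremental frequent pattern mining.
from itertools import combinations
from collections import Counter

slots = [                                   # appliances active in the same time slot
    {"oven", "hood", "kettle"},
    {"oven", "hood"},
    {"tv", "console"},
    {"oven", "hood", "dishwasher"},
    {"tv", "console", "kettle"},
]
min_support = 0.4                           # fraction of slots a pattern must appear in

counts = Counter()
for slot in slots:
    for size in (1, 2, 3):
        for itemset in combinations(sorted(slot), size):
            counts[itemset] += 1

frequent = {p: c / len(slots) for p, c in counts.items() if c / len(slots) >= min_support}
for pattern, support in sorted(frequent.items(), key=lambda kv: -kv[1]):
    print(pattern, f"support={support:.2f}")

# Incremental flavour: when a new batch of slots arrives, only the counts are
# updated and supports re-evaluated, so appliance associations that drift over
# time are captured without re-mining from scratch.
```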

Journal ArticleDOI
TL;DR: This work proposes a lightweight searchable public-key encryption scheme with forward privacy, which achieves forward privacy in a public-key cryptography setting and has the same search performance as a number of practical searchable symmetric encryption schemes.
Abstract: As more data from Industrial Internet of Things (IIoT) devices are outsourced to the cloud, the need to ensure data privacy becomes increasingly pressing. Searchable public-key encryption is a promising tool to achieve data privacy without sacrificing data usability. However, most existing searchable public-key encryption schemes are either inefficient (e.g., expensive in terms of encrypting and searching keywords) or lack required security features such as forward privacy. To address these limitations, we propose a lightweight searchable public-key encryption scheme with forward privacy. Specifically, the scheme achieves forward privacy in a public-key cryptography (i.e., asymmetric) setting, and has the same search performance as a number of practical searchable symmetric encryption schemes. The formal security analysis shows that the proposed scheme is resilient to chosen-keyword attacks and achieves forward privacy. Finally, the experimental results on a real-world dataset demonstrate that the proposed scheme is highly efficient and scalable in IIoT applications.

Journal ArticleDOI
TL;DR: Low energy consumption and security against DPA attacks make EE-SPFAL a suitable candidate for implementation in IoT devices such as RFID tags and smart cards.
Abstract: The emergence of the Internet of Things (IoT) has increased the need for Radio Frequency Identification (RFID) tags and smart cards that are energy-efficient and secure against Differential Power Analysis (DPA) attacks. Adiabatic logic is one of the circuit design techniques that can be used to design energy-efficient and secure hardware. However, the existing DPA-resistant adiabatic logic families suffer from non-adiabatic energy loss. Therefore, this work presents a novel adiabatic logic family called Energy-Efficient Secure Positive Feedback Adiabatic Logic (EE-SPFAL) that reduces the non-adiabatic energy loss and is also secure against DPA attacks. The proposed EE-SPFAL is used to design logic gates such as buffers, XOR, and NAND gates. Further, the EE-SPFAL-based logic gates are used to implement a Positive Polarity Reed-Muller (PPRM) architecture based S-box circuit. SPICE simulations at 12.5 MHz show that the EE-SPFAL-based S-box circuit saves up to 65 percent of energy and 90 percent of energy per cycle compared to the S-box circuits implemented using the existing Secured Quasi-Adiabatic Logic (SQAL) and conventional CMOS logic, respectively. Further, the security of the EE-SPFAL-based S-box circuit has been evaluated by performing a DPA attack through SPICE simulations; using a DPA attack flow we developed for SPICE simulations, we show that the EE-SPFAL-based S-box circuit is resistant to the attack. In addition, we have implemented one round of the Advanced Encryption Standard (AES) algorithm and found that one round of EE-SPFAL-based AES draws a uniform current across different input plaintexts. Low energy consumption and security against DPA attacks make EE-SPFAL a suitable candidate for implementation in IoT devices such as RFID tags and smart cards.

Journal ArticleDOI
TL;DR: This paper designs and implements a sender-receiver role-based scheduling protocol for Energy-Aware scheduling with Spatial-Temporal reuse, called EAST, which outperforms existing representative MAC protocols in terms of network throughput, delivery success ratio, and energy consumption.
Abstract: The advance of the Internet-of-Things (IoT) has extended its concept to underwater environments. The networks of underwater sensors and smart interconnected underwater objects have become an integral part of the IoT ecosystem as the Internet of Underwater Things (IoUT). This paper focuses on the problem of providing a scheduling service to support the transmission of sensory data from these smart underwater objects with high computation utilization and high energy efficiency. We design and implement a sender-receiver role-based scheduling protocol for Energy-Aware scheduling with Spatial-Temporal reuse, called EAST. Our EAST protocol is unique in three aspects. First, we introduce a probability-based contending model to address the hidden and exposed terminal problems. Second, we explore fine-granularity reuse opportunities by introducing a sender-receiver role-based spatial and temporal reuse optimization and a multifactorial state transition mechanism to regulate the engagement status of each node. Third, and not least, the EAST protocol addresses the known uncertainty problem of packet loss by building a sender-initiated behavior model using Prospect Theory. We evaluate EAST through extensive experiments and show that it outperforms existing representative MAC protocols in terms of network throughput, delivery success ratio, and energy consumption.

Journal ArticleDOI
TL;DR: The challenges and emerging solutions in testing three classes of memories are discussed: 3D stacked memories, resistive memories, and spin-transfer-torque magnetic memories.
Abstract: The research and prototyping of new memory technologies are receiving a lot of attention in order to enable new (computer) architectures and provide new opportunities for today's and future applications. Delivering high-quality and reliable products has been and will remain a crucial step in the introduction of new technologies. Therefore, appropriate fault modelling, test development, and design for testability (DfT) are needed. This paper overviews and discusses the challenges and emerging solutions in testing three classes of memories: 3D stacked memories, resistive memories, and spin-transfer-torque magnetic memories. Defect mechanisms, fault models, and emerging test solutions are discussed.

Journal ArticleDOI
TL;DR: An enhanced one-round blind filter protocol based on the Paillier cryptosystem is proposed to securely filter out redundant records generated by the k-anonymity technique, which is used together with a pseudo-random function to protect the location privacy and the query message privacy of users.
Abstract: To take advantage of location-based services (LBS) while protecting user privacy against untrusted LBS providers, privacy-preserving LBS have attracted increasing attention. Considering that users in an LBS system are often equipped with resource-constrained mobile devices, most existing privacy-preserving LBS methods are based on anonymization techniques. However, these existing schemes still have some privacy and efficiency limitations. In this paper, we propose a novel privacy-preserving LBS scheme, which simultaneously achieves user privacy protection and high query efficiency. Specifically, we utilize the k-anonymity technique and a pseudo-random function to protect the location privacy and the query message privacy of users. We design an enhanced one-round blind filter protocol (ORBFe) based on the Paillier cryptosystem to securely filter out redundant records generated by the k-anonymity technique. Compared with existing solutions, our ORBFe protocol not only ensures that users receive exactly the satisfying results but also incurs a low computation and communication cost on the server side. Through theoretical analysis and extensive experiments, we demonstrate the security and efficiency of our proposed scheme.
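A much-simplified, hedged illustration of blind filtering with the Paillier cryptosystem is given below using the python-paillier (phe) library; the one-hot-selector trick is a toy stand-in for the ORBFe protocol, not the paper's actual construction.

```python
# Much-simplified, hedged illustration of blind filtering with the Paillier
# cryptosystem (python-paillier). The one-hot-selector trick below is a toy
# stand-in for the paper's ORBFe protocol, not its actual construction.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# User side: k = 4 anonymized cells, the real one is index 2. Encrypt a
# one-hot selector so the server cannot tell which cell is real.
k, real = 4, 2
selector = [public_key.encrypt(1 if i == real else 0) for i in range(k)]

# Server side: each candidate record (id, cell) is blinded by the selector of
# its cell; homomorphic scalar multiplication keeps the cell choice hidden.
records = [(101, 0), (202, 1), (303, 2), (404, 3), (305, 2)]
blinded = [selector[cell] * rec_id for rec_id, cell in records]

# User side: decrypt; records from dummy cells collapse to 0 and are dropped.
results = [private_key.decrypt(c) for c in blinded]
print([r for r in results if r != 0])     # only records from the real cell remain
```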

Journal ArticleDOI
TL;DR: Initial tests show that attacking the proposed FF-APUF design requires more effort from the adversary than a conventional APUF design, and the empirical min-entropy of the FF-APUF design across different devices is shown to be more than twice that of the conventional APUF design.
Abstract: The PUF is a physical security primitive that permits the extraction of intrinsic digital identifiers from electronic devices. Owing to its low-cost nature, the PUF is a promising candidate for securing lightweight devices in IoT applications. The Arbiter PUF, or APUF, has been widely studied in the technical literature. However, it often suffers from disadvantages such as poor uniqueness and reliability, particularly when implemented on FPGAs, due to features such as physical layout restrictions. To address these problems, a new design known as the FF-APUF has been proposed; it offers a compact architecture combined with good uniqueness and reliability, and is well suited to FPGA implementation. Many PUF designs have been shown to be vulnerable to machine learning (ML) based modeling attacks. In this paper, it is initially shown that the FF-APUF design requires more effort from the adversary to attack than a conventional APUF design. A comprehensive analysis of the experimental results for the FF-APUF design is also presented. An improved APUF design with a balanced arbiter and an FF-APUF design are proposed and implemented on the Xilinx Artix-7 FPGA in 28 nm technology. The experimental min-entropy of the FF-APUF design across different devices is more than twice that of a conventional APUF design.
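For intuition, the sketch below uses the textbook additive-delay behavioral model of an arbiter PUF together with a simple empirical min-entropy estimate across simulated devices; the model, stage count, and noise-free assumption are illustrative and do not reproduce the FF-APUF circuit or its FPGA measurements.

```python
# Hedged behavioral sketch: the standard additive-delay model of an arbiter
# PUF and a simple empirical min-entropy estimate across simulated devices.
# This is a textbook APUF model, not the paper's FF-APUF circuit.
import numpy as np

rng = np.random.default_rng(1)
n_stages, n_devices, n_challenges = 64, 100, 2000

def parity_features(challenges):
    """Map challenges in {0,1}^n to the +/-1 parity feature vector used by
    the linear additive-delay model (prefix products from each stage to the end)."""
    phi = 1 - 2 * challenges                       # {0,1} -> {+1,-1}
    return np.cumprod(phi[:, ::-1], axis=1)[:, ::-1]

challenges = rng.integers(0, 2, size=(n_challenges, n_stages))
phi = parity_features(challenges)

# Each device gets its own random stage-delay differences (manufacturing variation).
weights = rng.normal(size=(n_devices, n_stages))
responses = (phi @ weights.T > 0).astype(int)      # shape: challenges x devices

# Empirical min-entropy per response bit across devices: -log2(max(p, 1 - p)).
p1 = responses.mean(axis=1)
min_entropy = -np.log2(np.maximum(p1, 1 - p1)).mean()
print(f"mean per-bit min-entropy across devices: {min_entropy:.3f} bits")
```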

Journal ArticleDOI
TL;DR: A hybrid SG communication architecture integrating fiber optic and WiFi-based mesh networks is proposed, and results reveal that both GARA and HGRA achieve near-optimal solutions to the problem of data acquisition under failures and have higher computational efficiency than the benchmark, i.e., OERA.
Abstract: With the increasing deployment of monitoring devices and advanced measurement infrastructures, smart grids (SGs) collect large amounts of data every moment, which gives rise to SG big data. In order to fulfill the diverse communication requirements of various energy-related data in the SG, it is obviously impractical to rely on a single communication technology, and a hybrid communication architecture of low-latency fiber optics and cost-effective wireless technologies could be a promising solution. Note that low-latency data acquisition under failures is of particular importance to SG reliability, considering that SGs are vulnerable to various failures. Toward this end, we provide in this paper a hybrid SG communication architecture integrating fiber optic and WiFi-based mesh networks, i.e., fiber-wireless (FiWi) enhanced SG, and study the problem of data acquisition under failures in the FiWi enhanced SG. The problem is first formulated as a constrained optimization problem, and then three algorithms are proposed as our solutions: an optimal enumeration routing algorithm (OERA), a greedy approximation routing algorithm (GARA), and a heuristic greedy routing algorithm (HGRA). Numerical results reveal that both GARA and HGRA can achieve near-optimal solutions to the problem of data acquisition under failures and have higher computational efficiency compared to our benchmark, i.e., OERA.

Journal ArticleDOI
TL;DR: A novel searchable encryption scheme for the client-server architecture is presented that exploits the properties of the modular inverse to generate a probabilistic trapdoor, which facilitates the search over a secure inverted index table.
Abstract: Searchable encryption is an emerging cryptographic technique that enables search capabilities over encrypted data on the cloud. In this paper, a novel searchable encryption scheme for the client-server architecture is presented. The scheme exploits the properties of the modular inverse to generate a probabilistic trapdoor, which facilitates the search over a secure inverted index table. The scheme achieves trapdoor indistinguishability by virtue of this probabilistic trapdoor. We design and implement a proof-of-concept prototype and test our scheme with a real dataset of files. We analyze the performance of our scheme against our claim that the scheme is lightweight. The security analysis shows that our scheme assures a higher level of security compared to other existing schemes.

Journal ArticleDOI
TL;DR: This article proposes a framework, namely Discovery Information using COmmunity detection (DICO), for identifying overlapped communities of authors from Big Scholarly Data by modeling authors' interactions through a novel graph-based data model that combines document metadata with semantic information.
Abstract: The widespread use of Online Social Networks has also reached the scientific field, in which researchers interact with each other by publishing or citing papers. The huge amount of information about scientific research documents has been described through the term Big Scholarly Data. In this paper we propose a framework, namely Discovery Information using COmmunity detection (DICO), for identifying overlapped communities of authors from Big Scholarly Data by modeling authors' interactions through a novel graph-based data model that combines document metadata with semantic information. In particular, DICO presents three distinctive characteristics: (i) the co-authorship network is built from publication records using a novel approach for estimating the relationship weight between users; (ii) a new community detection algorithm based on Node Location Analysis has been developed to identify overlapped communities; (iii) some built-in queries are provided to browse the generated network, though any graph-traversal query can be implemented through the Cypher query language. An experimental evaluation was carried out to evaluate the efficacy of the proposed community detection algorithm on benchmark networks. Finally, DICO was tested on a real-world Big Scholarly dataset to show its usefulness, working on the DBLP+AMiner dataset, which contains 1.7M+ distinct authors and 3M+ papers, handling 25M+ citation relationships.
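A hedged sketch of the pipeline, building a weighted co-authorship graph from publication metadata and then detecting communities, is shown below; networkx's greedy modularity method stands in for DICO's Node Location Analysis algorithm (and finds disjoint rather than overlapped communities), and the edge-weighting rule is an illustrative assumption.

```python
# Hedged sketch: build a weighted co-authorship graph from publication
# metadata, then detect communities. networkx's greedy modularity method is a
# stand-in for DICO's Node Location Analysis; the weighting is an assumption.
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

papers = [                                   # toy publication records
    {"authors": ["Rossi", "Bianchi", "Verdi"]},
    {"authors": ["Rossi", "Bianchi"]},
    {"authors": ["Smith", "Jones"]},
    {"authors": ["Smith", "Jones", "Verdi"]},
    {"authors": ["Lee", "Kim"]},
]

G = nx.Graph()
for paper in papers:
    # Weight each co-authorship by 1/(#authors - 1), so large collaborations
    # contribute weaker pairwise ties than small ones.
    w = 1.0 / (len(paper["authors"]) - 1)
    for a, b in itertools.combinations(paper["authors"], 2):
        G.add_edge(a, b, weight=G.get_edge_data(a, b, {"weight": 0})["weight"] + w)

communities = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")
```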

Journal ArticleDOI
TL;DR: The algorithmic-level approximate computing method is applied to a software High Efficiency Video Coding (HEVC) video decoder and is shown to offer multiple trade-offs between the quality of the decoded video and the energy required for the decoding process.
Abstract: This paper presents a novel method for applying approximate computing at the level of a complete application. The method decomposes the application into processing blocks whose types define the classes of approximate computing techniques they can tolerate. By applying these approximation techniques to the most computationally intensive blocks, drastic energy reductions can be obtained at a limited cost in terms of Quality of Service. The algorithmic-level approximate computing method is applied to a software High Efficiency Video Coding (HEVC) video decoder. The method is shown to offer multiple trade-offs between the quality of the decoded video and the energy required for the decoding process. The algorithmic-level approximate computing method offers new possibilities in terms of application energy budgeting. Energy reductions of up to 40 percent are demonstrated for a limited degradation of the application's Quality of Service.

Journal ArticleDOI
TL;DR: A resistive content addressable memory (CAM) accelerator, called RCA, which exploits data locality to have an approximate memory-based computation, and shows that RCA can accelerate CPU computation by 12.6× and improve the energy efficiency by 6.6 × as compared to a traditional CPU architecture, while providing acceptable quality of service.
Abstract: The Internet of Things significantly increases the amount of data generated, straining the processing capability of current computing systems. Approximate computing is a promising solution to accelerate computation by trading off energy and accuracy. In this paper, we propose a resistive content addressable memory (CAM) accelerator, called RCA, which exploits data locality to perform approximate memory-based computation. RCA stores high-frequency patterns and performs computation inside the CAM without using processing cores. During execution time, RCA searches for an input operand among all prestored values in the CAM and returns the row with the nearest distance. To manage accuracy, we use a distance metric which considers the impact of each bit index on computation accuracy. We evaluate an application of the proposed RCA to CPU approximation, where RCA can be used as a stand-alone unit or as a hybrid computing unit beside CPU cores for tunable CPU approximation. We evaluate the architecture of the proposed RCA using HSPICE and Multi2Sim by testing our results on an x86 CPU processor. Our evaluation shows that RCA can accelerate CPU computation by 12.6× and improve energy efficiency by 6.6× compared to a traditional CPU architecture, while providing acceptable quality of service.
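A toy of the lookup idea follows: cache frequent operand-result pairs and return the result of the nearest stored operand under a bit-significance-weighted Hamming distance. The weighting, the 8-bit operand width, and the squaring example are assumptions, not the RCA hardware design.

```python
# Hedged toy of the nearest-pattern lookup: cache frequent (operand, result)
# pairs and answer queries with the result of the nearest stored operand under
# a bit-significance weighted Hamming distance. Not the RCA hardware design.
def weighted_hamming(a, b, bits=8):
    """Bits at higher indices (more significant) cost more when they differ."""
    diff = a ^ b
    return sum((1 << i) if (diff >> i) & 1 else 0 for i in range(bits))

# Prestored high-frequency patterns for a squaring "function": operand -> result.
cam = {x: x * x for x in (0, 16, 32, 64, 96, 128, 160, 192, 224, 255)}

def approx_square(x):
    nearest = min(cam, key=lambda row: weighted_hamming(x, row))
    return cam[nearest]

for x in (17, 70, 130):
    print(x, "exact:", x * x, "approx:", approx_square(x))
```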

Journal ArticleDOI
TL;DR: The degree to which energy can be throttled for energy management purposes without affecting the end user's comfort level is described, and the potential of offering ACs as interruptible load into the market without compromising user comfort is investigated.
Abstract: Effective control of air conditioning systems (ACs) has the potential of significant electricity savings and demand response management for an entire power system. In this context, this paper demonstrates some key experimental results on controlling the electricity consumption of ACs. In particular, the degree to which energy can be throttled for energy management purposes without affecting the end user's comfort level is described. The testbed was set up in a residential building, in which the set point temperature of ACs installed within each apartment unit is controllable from a remote server. The main objectives were to reduce the consumption of electricity by the compressors, and to investigate the feasibility of having residential ACs as interruptible loads to participate in the electricity market. The algorithm used for controlling is explained in detail, and the users’ experiences during the experiments are briefly discussed. Extensive data collected throughout the experiment are provided to show the effectiveness of having ACs as flexible loads in reducing the power consumption by the compressors as well as the potential of offering ACs as interruptible load into the market without compromising user comfort.

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed SaEF-AKT framework can outperform several state-of-the-art multi-task optimization algorithms.
Abstract: Multi-task optimization is a hot research topic in the field of evolutionary computation. This paper proposes an efficient surrogate-assisted multi-task evolutionary framework (named SaEF-AKT) with adaptive knowledge transfer for multi-task optimization. In the proposed SaEF-AKT, several computationally expensive tasks are solved jointly in each generation. Surrogate models are built based on the historical search information of each task to reduce the number of fitness evaluations. To improve the search efficiency, a general similarity measure mechanism and an adaptive knowledge transfer mechanism are proposed, which help transfer knowledge among the tasks to be solved. The proposed SaEF-AKT is tested on a number of benchmark problems in the multi-task optimization scenario and on real-world time series regression problems. The experimental results demonstrate that the proposed framework can outperform several state-of-the-art multi-task optimization algorithms.

Journal ArticleDOI
TL;DR: An attack that transforms a scan obfuscated circuit to its logic-locked version and applies the Boolean satisfiability (SAT) based attack, thereby extracting the secret key is proposed, and can break both static and dynamic scan obfuscation schemes.
Abstract: While financially advantageous, outsourcing key steps, such as testing, to potentially untrusted Outsourced Assembly and Test (OSAT) companies may pose a risk of compromising on-chip assets. Obfuscation of scan chains is a technique that hides the actual scan data from untrusted testers; logic inserted between the scan cells, driven by a secret key, hides the transformation functions that map the scan-in stimulus (scan-out response) to the delivered scan pattern (captured response). While static scan obfuscation utilizes the same secret key, and thus the same secret transformation functions, throughout the lifetime of the chip, dynamic scan obfuscation updates the key periodically. In this paper, we propose ScanSAT: an attack that transforms a scan-obfuscated circuit into its logic-locked version and applies the Boolean satisfiability (SAT) based attack, thereby extracting the secret key. We implement our attack, apply it to representative scan obfuscation techniques, and show that ScanSAT can break both static and dynamic scan obfuscation schemes with a 100 percent success rate. Moreover, ScanSAT is effective even for large key sizes and in the presence of scan compression.

Journal ArticleDOI
TL;DR: This work proposes a novel, two-step anomaly detection approach that processes raw PMU data using the MapReduce paradigm, and implements this approach on a multicore system to process a dataset derived from real PMU data containing 4,500 PMUs.
Abstract: The rapid detection of anomalous behavior in SCADA systems such as the U.S. power grid is critical for system resiliency and operator response in cases of power fluctuations due to hazardous weather conditions or other events. Phasor measurement units (PMUs) are time-synchronized devices that provide accurate synchrophasor measurements in power grids. The rapid deployment of PMUs enables improved real-time situational awareness for grid operators through wide area measurement systems. However, the quantity and rate of measurements obtained from PMUs are significantly higher than those of traditional devices, and continue to grow as more are deployed. Efficient algorithms for processing large-scale PMU data and notifying operators of anomalies are critical for real-time system monitoring. In this paper, we propose a novel, two-step anomaly detection approach that processes raw PMU data using the MapReduce paradigm. We implement our approach on a multicore system to process a dataset derived from real PMU data containing 4,500 PMUs (approximately 18 million measurements). Our experimental results indicate the proposed approach detects constraint and temporal anomalies in under three seconds on 8 cores. Our work demonstrates the applicability of MapReduce for designing anomaly detection algorithms for the smart grid, and motivates the creation of novel MapReduce approaches for other SCADA applications.
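The two-step idea can be expressed in MapReduce style as in the hedged sketch below: mappers flag per-sample constraint violations and a reducer merges them and applies a simple temporal check. The thresholds, window size, and the use of Python's built-in map()/reduce() in place of a cluster framework are assumptions.

```python
# Hedged MapReduce-style sketch of two-step PMU anomaly detection: mappers
# flag constraint violations, the reducer merges them and applies a temporal
# check. Thresholds and window size are illustrative assumptions.
from functools import reduce

# (pmu_id, timestamp, frequency) samples, one chunk per mapper.
chunks = [
    [("pmu1", 0, 60.00), ("pmu1", 1, 60.02), ("pmu1", 2, 59.80)],
    [("pmu2", 0, 59.99), ("pmu2", 1, 59.10), ("pmu2", 2, 59.05)],
]
FREQ_MIN, FREQ_MAX, WINDOW = 59.5, 60.5, 2   # constraint bounds, temporal window

def mapper(chunk):
    """Emit {pmu_id: [timestamps of constraint violations]} for one chunk."""
    out = {}
    for pmu, t, f in chunk:
        if not (FREQ_MIN <= f <= FREQ_MAX):
            out.setdefault(pmu, []).append(t)
    return out

def reducer(acc, part):
    """Merge mapper outputs keyed by PMU."""
    for pmu, ts in part.items():
        acc.setdefault(pmu, []).extend(ts)
    return acc

violations = reduce(reducer, map(mapper, chunks), {})
for pmu, ts in violations.items():
    temporal = len(ts) >= WINDOW and max(ts) - min(ts) < WINDOW
    print(pmu, "constraint anomalies at", ts, "| temporal anomaly:", temporal)
```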

Journal ArticleDOI
TL;DR: A redundant transmission strategy is proposed to meet the reliability requirement for state estimation; it combines the ISM channels with opportunistically harvested channels to provide adequate spectrum opportunities for redundant transmissions and increases the sum rate of the WSN.
Abstract: State estimation over wireless sensor networks (WSNs) plays an important role in the ubiquitous monitoring of industrial cyber-physical systems (ICPSs). However, unreliable wireless channels lead to the transmitted measurements arriving at the remote estimator intermittently, which deteriorates the estimation performance. The question of how to improve transmission reliability in the hostile industrial environment to guarantee a pre-defined estimation performance for ICPSs is largely unexplored. This paper is concerned with a redundant transmission strategy to meet the reliability requirement for state estimation. This strategy combines the ISM channels with opportunistically harvested channels to provide adequate spectrum opportunities for redundant transmissions. First, we explore the relationship between the estimation performance and the transmission reliability, based on which a joint optimization of channel allocation and power control is developed to guarantee the estimation performance and maximize the sum rate of the WSN. Second, we formulate the optimization as a mixed-integer nonlinear programming problem, which is solved efficiently by decomposing it into channel allocation and power control subproblems. Ultimately, a simulation study demonstrates that the proposed strategy not only ensures the required state estimation performance, but also increases the sum rate of the WSN.

Journal ArticleDOI
TL;DR: An efficient technique, called Adaptive Way Allocation for Reconfigurable ECCs (AWARE), is proposed to correct write errors in STT-RAM caches by exploiting the asymmetric error rate in cell switching directions, reducing the ECC overheads without compromising the reliability of the cache.
Abstract: Spin-Transfer Torque Random Access Memories (STT-RAMs) are a promising alternative to SRAMs in on-chip caches. STT-RAMs face a high error rate in write operations due to stochastic switching. To alleviate this problem, Error-Correcting Codes (ECCs) are commonly used, which results in significant area and energy consumption overhead. This paper proposes an efficient technique, called Adaptive Way Allocation for Reconfigurable ECCs (AWARE), to correct write errors in STT-RAM caches. AWARE exploits the asymmetric error rate in cell switching directions, which leads to data-dependent write error rates, to reduce the ECC overheads without compromising the reliability of the cache. To this end, instead of protecting all cache lines using strong ECCs, AWARE employs a simple ECC that guarantees a given reliability level for the majority of writes. Meanwhile, when a data block with a high error rate is written, one way in the target set is adaptively configured to store the check bits of a strong ECC for this block. The evaluation results show that, compared with conventional ECCs, AWARE reduces the ECC area by about 81.2 percent and the cache energy consumption by about 9.5 percent. These reductions are achieved while imposing less than 1 percent performance overhead and without compromising reliability.

Journal ArticleDOI
TL;DR: This paper addresses the remote sensing satellite network and proposes a novel beam control technique for coexisting with terrestrial networks in Q band; the technique adaptively controls the satellite's transmitting antenna boresight to maximize the signal-to-noise power ratio at the satellite ground station and minimize interference to terrestrial networks.
Abstract: Earth observation missions have improved their sensor performance, which results in a huge amount of data to be stored on the remote sensing satellite and transmitted to a ground station. Although the satellite-to-ground transmitter has usually used X band, several mitigation techniques for rain attenuation have been studied in recent years to enable migration to Ka band for broadband transmission. Furthermore, Q band is expected to achieve even higher data rates because of its broader bandwidth. However, since Q band is shared with terrestrial networks, a remote sensing satellite has to take potential constraints on them into account. Therefore, this paper addresses the remote sensing satellite network and proposes a novel beam control technique for coexisting with terrestrial networks in Q band. The proposed method adaptively controls the satellite's transmitting antenna boresight to maximize the signal-to-noise power ratio at the satellite ground station and minimize interference to terrestrial networks. The effectiveness of our proposal is verified through simulation results.