
Showing papers in "IEEE Transactions on Emerging Topics in Computing in 2017"


Journal ArticleDOI
TL;DR: This survey attempts to provide a comprehensive list of vulnerabilities and countermeasures against them on the edge-side layer of IoT, which consists of three levels: (i) edge nodes, (ii) communication, and (iii) edge computing.
Abstract: Internet of Things (IoT), also referred to as the Internet of Objects, is envisioned as a transformative approach for providing numerous services. Compact smart devices constitute an essential part of IoT. They range widely in use, size, energy capacity, and computation power. However, the integration of these smart things into the standard Internet introduces several security challenges because the majority of Internet technologies and communication protocols were not designed to support IoT. Moreover, commercialization of IoT has led to public security concerns, including personal privacy issues, threat of cyber attacks, and organized crime. In order to provide a guideline for those who want to investigate IoT security and contribute to its improvement, this survey attempts to provide a comprehensive list of vulnerabilities and countermeasures against them on the edge-side layer of IoT, which consists of three levels: (i) edge nodes, (ii) communication, and (iii) edge computing. To achieve this goal, we first briefly describe three widely-known IoT reference models and define security in the context of IoT. Second, we discuss the possible applications of IoT and potential motivations of the attackers who target this new paradigm. Third, we discuss different attacks and threats. Fourth, we describe possible countermeasures against these attacks. Finally, we introduce two emerging security challenges not yet explained in detail in previous literature.

547 citations


Journal ArticleDOI
TL;DR: Fog computation and MCPS are integrated to build fog computing supported MCPS (FC-MCPS), and an LP-based two-phase heuristic algorithm is proposed that produces near-optimal solutions and significantly outperforms a greedy algorithm.
Abstract: With the recent development in information and communication technology, more and more smart devices are entering people's daily lives to improve their quality of life. As a growing healthcare trend, medical cyber-physical systems (MCPSs) enable seamless and intelligent interaction between the computational elements and the medical devices. To support MCPSs, cloud resources are usually explored to process the sensing data from medical devices. However, the high quality-of-service requirements of MCPSs are challenged by the unstable and long-delay links between cloud data centers and medical devices. To combat this issue, mobile edge cloud computing, or fog computing, which pushes the computation resources onto the network edge (e.g., cellular base stations), emerges as a promising solution. We are thus motivated to integrate fog computing and MCPS to build fog computing supported MCPS (FC-MCPS). In particular, we jointly investigate base station association, task distribution, and virtual machine placement toward cost-efficient FC-MCPS. We first formulate the problem as a mixed-integer nonlinear program and then linearize it into a mixed-integer linear program (MILP). To address the computational complexity, we further propose an LP-based two-phase heuristic algorithm. Extensive experimental results validate the high cost-efficiency of our algorithm by the fact that it produces near-optimal solutions and significantly outperforms a greedy algorithm.
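A hedged note on the linearization step mentioned above (the paper's exact formulation is not reproduced here): a product term z = x*y between a binary association variable x and a continuous load variable y bounded in [0, Y] can be replaced by the standard linear constraints

\[ z \le Y x, \qquad z \le y, \qquad z \ge y - Y(1 - x), \qquad z \ge 0, \]

which force z = x*y at every feasible point; rewriting each such product this way is one common route from a mixed-integer nonlinear program to an MILP that LP-based heuristics can then work on.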

309 citations


Journal ArticleDOI
TL;DR: The case study showed that the proposed approach could generate models with higher accuracy and feasibility than the traditional frequency aggregation approaches, and the four phases in students' learning processes reveal a holiday effect and illustrate at-risk students' behaviors before and after a long holiday break.
Abstract: The purpose of this paper is to identify at-risk online students earlier, more often, and with greater accuracy using time-series clustering. The case study showed that the proposed approach could generate models with higher accuracy and feasibility than the traditional frequency aggregation approaches. The best performing model can start to capture at-risk students from week 10. In addition, the four phases in students' learning processes reveal a holiday effect and illustrate at-risk students' behaviors before and after a long holiday break. The findings also enable online instructors to develop corresponding instructional interventions via course design or student–teacher communications.
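The abstract does not give the clustering algorithm or the activity features, so the following is only a minimal sketch, assuming weekly activity counts per student are clustered with k-means and the lowest-activity cluster is flagged as at-risk; the data and the number of clusters are hypothetical placeholders.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one row per student, one column per week of logged activity
# (clicks, logins, submissions, ...), e.g. 16 weeks of a term.
rng = np.random.default_rng(0)
activity = rng.poisson(lam=20, size=(200, 16)).astype(float)

# Normalise each weekly feature so no single week dominates the distance metric.
X = StandardScaler().fit_transform(activity)

# Cluster the weekly activity time series; in this sketch the cluster with the
# lowest mean activity is treated as the candidate "at-risk" group.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
cluster_means = [activity[kmeans.labels_ == k].mean() for k in range(4)]
at_risk_cluster = int(np.argmin(cluster_means))
at_risk_students = np.where(kmeans.labels_ == at_risk_cluster)[0]
print(f"cluster sizes: {np.bincount(kmeans.labels_)}, at-risk cluster: {at_risk_cluster}")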

77 citations


Journal ArticleDOI
TL;DR: A novel DNU tolerant latch design is proposed that is designed specifically to provide additional reliability when clock gating is used, and a TNU tolerant latch is shown to provide superior soft error resiliency while incurring a 40 percent overhead compared to DNU tolerant designs.
Abstract: As the process feature size continues to scale down, the susceptibility of logic circuits to radiation-induced errors has increased. This trend has led to an increase in the sensitivity of circuits to multi-node upsets. Previously, work has been done to harden latches against single event upsets (SEUs). Currently, there is a concerted effort to design latches that are tolerant to double node upsets (DNUs) and triple node upsets (TNUs). In this paper, we first propose a novel DNU tolerant latch design. The latch is designed specifically to provide additional reliability when clock gating is used. Through experimentation, it is shown that the DNU tolerant latch is 11.3 percent more power efficient than existing latch designs suited for clock gating. In addition to the DNU tolerant design, we propose the first TNU tolerant latch. The TNU tolerant latch is shown to provide superior soft error resiliency while incurring a 40 percent overhead compared to DNU tolerant designs.

69 citations


Journal ArticleDOI
TL;DR: The development phase of the Circuit Warz game is presented to demonstrate how electronic engineering education can be radically reimagined to create immersive, highly engaging learning experiences that are problem-centered and pedagogically sound.
Abstract: In a world where students are increasingly digitally tethered to powerful, always-on mobile devices, new models of engagement and approaches to teaching and learning are required from educators. Serious games (SGs) have proved to have instructional potential, but there is still a lack of methodologies and tools not only for their design but also for their analysis and assessment. This paper explores the use of SGs to increase student engagement and retention. The development phase of the Circuit Warz game is presented to demonstrate how electronic engineering education can be radically reimagined to create immersive, highly engaging learning experiences that are problem-centered and pedagogically sound. The learning mechanics-game mechanics framework for SG analysis is introduced, and its practical use in an educational game design scenario is shown as a case study.

55 citations


Journal ArticleDOI
TL;DR: This paper designs and implements an FPGA-based computation accelerator as part of a Homomorphic Encryption Processing Unit (HEPU) co-processor that improves the practicality of computing on encrypted data by reducing the computational bottlenecks of lattice encryption primitives that support homomorphic encryption schemes.
Abstract: In this paper we report on our advances in designing and implementing an FPGA-based computation accelerator as part of a Homomorphic Encryption Processing Unit (HEPU) co-processor. This hardware accelerator technology improves the practicality of computing on encrypted data by reducing the computational bottlenecks of the lattice encryption primitives that support homomorphic encryption schemes. We focus on accelerating the Chinese Remainder Transform (CRT) and inverse Chinese Remainder Transform (iCRT) for power-of-2 cyclotomic rings, but also accelerate other basic ring arithmetic such as Ring Addition, Ring Subtraction, and Ring Multiplication. We instantiate this capability in a Xilinx Virtex-7 FPGA that can attach to a host computer through either a PCI-Express port or Ethernet. We focus our experimental performance analysis on the NTRU-based LTV Homomorphic Encryption scheme. This is a leveled homomorphic encryption scheme, but our accelerator is compatible with other lattice-based schemes and recent improved bootstrapping designs that support arbitrary-depth computation. We experimentally compare performance with a reference software implementation of the CRT and iCRT bottlenecks, both in isolation and when used in a practical application of encrypted string comparison.
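The FPGA datapath itself is not reproduced here; the following is a minimal software sketch of the underlying math, assuming the CRT for the power-of-2 cyclotomic ring Z_q[x]/(x^n + 1): the polynomial is evaluated at the odd powers of a primitive 2n-th root of unity psi mod q, so that component-wise multiplication in the transformed domain corresponds to negacyclic ring multiplication. A hardware implementation would use an FFT-like (NTT) butterfly structure instead of this O(n^2) loop, and the toy parameters below are illustrative only.

# Naive O(n^2) Chinese Remainder Transform for Z_q[x]/(x^n + 1), n a power of 2.
# psi must be a primitive 2n-th root of unity modulo the prime q.

def crt(a, q, psi):
    n = len(a)
    # Evaluate a(x) at the odd powers psi^(2i+1), the roots of x^n + 1 mod q.
    return [sum(c * pow(psi, (2 * i + 1) * j, q) for j, c in enumerate(a)) % q
            for i in range(n)]

def icrt(A, q, psi):
    n = len(A)
    n_inv = pow(n, -1, q)
    # Interpolate back: a_j = n^-1 * sum_i A_i * psi^(-(2i+1)*j) mod q.
    return [n_inv * sum(Ai * pow(psi, -(2 * i + 1) * j, q) for i, Ai in enumerate(A)) % q
            for j in range(n)]

# Small toy parameters (not cryptographically sized): n = 4, q = 17, psi = 2,
# since 2 has multiplicative order 8 = 2n modulo 17.
q, psi = 17, 2
a = [1, 2, 3, 4]
b = [5, 6, 7, 8]
# Ring multiplication in Z_q[x]/(x^n + 1) becomes component-wise multiplication.
prod = icrt([x * y % q for x, y in zip(crt(a, q, psi), crt(b, q, psi))], q, psi)
print(prod)   # expected: [12, 15, 2, 9], the negacyclic product of a and b mod 17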

52 citations


Journal ArticleDOI
TL;DR: This paper introduces multiple fossil-fuel and multiple renewable energy sources-based utility companies on the supply side, and proposes an end-user oriented utility company selection scheme to minimize user costs.
Abstract: The smart grid is the next generation power grid with bidirectional communications between the electricity users and the providers. Demand response management is vital in the smart grid to reduce power generation costs as well as to lower the users’ electricity bills. In this paper, we introduce multiple fossil-fuel and multiple renewable energy sources-based utility companies on the supply side, and propose an end-user oriented utility company selection scheme to minimize user costs. We formulate the problem as a game, incorporating the uncertainty associated with the power supply of the renewable sources, and prove that there exists a Nash equilibrium for the game. To further reduce users’ costs, we develop a joint scheme by integrating shiftable load scheduling with utility company selection. We model the joint scheme also as a game, and prove the existence of a Nash equilibrium for the game. For both schemes, we propose distributed algorithms for the users to find the equilibrium of the game using only local information. We evaluate our schemes and compare their performances to two other approaches. The results show that our joint utility company selection and shiftable load scheduling scheme incurs the least cost to the users.
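The paper's cost model and equilibrium-seeking algorithm are not reproduced here; the following is a toy best-response sketch in which each user, using only its own demand and the current loads and prices (local information), repeatedly switches to the utility company that minimizes its own cost until no user wants to deviate. The load-dependent price model, parameter names, and values are assumptions for illustration.

import random

def best_response_selection(demands, n_companies, base_price, congestion, iters=100):
    """Toy best-response dynamics for utility company selection.

    Each user pays demand * (base_price[c] + congestion[c] * total load on c).
    Users take turns switching to the company that currently minimises their
    own cost; the loop stops when no user wants to deviate (a Nash equilibrium
    of this toy congestion game)."""
    choice = [random.randrange(n_companies) for _ in demands]
    load = [0.0] * n_companies
    for u, c in enumerate(choice):
        load[c] += demands[u]

    for _ in range(iters):
        changed = False
        for u, d in enumerate(demands):
            cur = choice[u]
            load[cur] -= d      # evaluate alternatives as if user u left its company
            costs = [d * (base_price[c] + congestion[c] * (load[c] + d))
                     for c in range(n_companies)]
            best = min(range(n_companies), key=costs.__getitem__)
            load[best] += d
            if best != cur:
                choice[u], changed = best, True
        if not changed:
            break
    return choice

print(best_response_selection(demands=[2.0, 3.5, 1.0, 4.2, 2.7],
                              n_companies=3,
                              base_price=[1.0, 1.2, 0.9],
                              congestion=[0.05, 0.02, 0.08]))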

45 citations


Journal ArticleDOI
TL;DR: The efficiency and applicability of the proposed approach is demonstrated via two novel applications: i) predictable auto-scaling policy setting which highlights the potential of distribution prediction in consistent definition of cloud elasticity rules; and ii) a distribution based admission controller which is able to efficiently admit or reject incoming queries based on probabilistic service level agreements compliance goals.
Abstract: Resource usage estimation for managing streaming workloads in emerging application domains such as enterprise computing, smart cities, remote healthcare, and astronomy has emerged as a challenging research problem. Such resource estimation for processing continuous queries over streaming data is challenging due to: (i) uncertain stream arrival patterns, (ii) the need to process different mixes of queries, and (iii) varying resource consumption. Existing techniques approximate resource usage for a query as a single point value, which may not be sufficient because it is neither expressive enough nor does it capture the aforementioned nature of streaming workloads. In this paper, we present a novel approach of using mixture density networks to estimate the whole spectrum of resource usage as probability density functions. We have evaluated our technique using the Linear Road benchmark and TPC-H in both private and public clouds. The efficiency and applicability of the proposed approach are demonstrated via two novel applications: i) predictable auto-scaling policy setting, which highlights the potential of distribution prediction in the consistent definition of cloud elasticity rules; and ii) a distribution-based admission controller which is able to efficiently admit or reject incoming queries based on probabilistic service level agreement compliance goals.
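The paper's network architecture, features, and training setup are not given in the abstract, so this is only a minimal sketch of a mixture density network in PyTorch: the model maps workload features to the parameters of a Gaussian mixture over a scalar resource-usage value and is trained with the mixture negative log-likelihood. All sizes and the synthetic data are hypothetical.

import torch
import torch.nn as nn

class MDN(nn.Module):
    """Minimal mixture density network: maps workload features to a Gaussian
    mixture (weights, means, std devs) over a scalar resource-usage value."""
    def __init__(self, n_features, n_hidden=32, n_components=5):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Tanh())
        self.pi = nn.Linear(n_hidden, n_components)          # mixture logits
        self.mu = nn.Linear(n_hidden, n_components)          # component means
        self.log_sigma = nn.Linear(n_hidden, n_components)   # log std devs

    def forward(self, x):
        h = self.hidden(x)
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, y):
    """Negative log-likelihood of targets y under the predicted mixture."""
    log_pi = torch.log_softmax(pi_logits, dim=-1)
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = dist.log_prob(y.unsqueeze(-1))   # per-component log density
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Hypothetical training data: features describing the query mix / stream rate,
# target is the observed CPU usage. Shapes: (batch, n_features) and (batch,).
x = torch.randn(256, 8)
y = torch.rand(256)
model = MDN(n_features=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = mdn_nll(*model(x), y)
    loss.backward()
    opt.step()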

42 citations


Journal ArticleDOI
TL;DR: This paper designs a vehicle-assisted resilient information and network system for disaster management despite Internet unavailability, and proposes online algorithms that schedule mobile stations for disaster management tasks with the objective of maximizing the total weight of finished tasks, without any knowledge of future task arrivals.
Abstract: After big disasters, a damaged area can be out of contact because of severe damage to existing network infrastructures. Meanwhile, high demand for network connections to the disaster area will arise to collect damage information and disseminate rescue instructions. In this paper, we design a vehicle-assisted resilient information and network system for disaster management that works despite Internet unavailability. It contains three main components: (1) smartphone apps that provide functions of SOS reporting, life and medical resource request/provision, and safe road navigation; (2) mobile stations that assist data exchange between smartphone apps and servers; (3) geo-distributed servers that collect user data, conduct distributed data analysis, and make disaster management decisions. Since the vehicle-assisted network is critical to connect isolated smartphones and servers, we further study the scheduling problem of mobile stations. Given a number of disaster management tasks, such as sensing, information collection, and message dissemination, we propose online algorithms that schedule mobile stations for disaster management tasks with the objective of maximizing the total weight of finished tasks, without any knowledge of future task arrivals. We derive the competitive ratio of our proposed algorithms and conduct extensive simulations for performance evaluation.
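The online algorithms and their competitive-ratio analysis are not reproduced here; the sketch below is a simple greedy baseline under assumed task fields (arrival, duration, deadline, weight): whenever a mobile station is idle, it starts the heaviest pending task that can still meet its deadline, and the total weight of tasks finished on time is reported.

def greedy_schedule(tasks, n_stations):
    """tasks: list of (arrival, duration, deadline, weight) tuples, revealed
    online in arrival order. Greedy rule: whenever a station is idle, it starts
    the pending task with the largest weight that can still meet its deadline.
    Returns the total weight of tasks completed on time."""
    tasks = sorted(tasks)                      # by arrival time
    free_at = [0.0] * n_stations               # when each station becomes idle
    pending, total, i, now = [], 0.0, 0, 0.0
    while i < len(tasks) or pending:
        # Admit tasks that have arrived; drop tasks that can no longer finish.
        while i < len(tasks) and tasks[i][0] <= now:
            pending.append(tasks[i])
            i += 1
        pending = [t for t in pending if now + t[1] <= t[2]]
        # Every idle station greedily picks the heaviest feasible pending task.
        for s in range(n_stations):
            if free_at[s] <= now and pending:
                task = max(pending, key=lambda t: t[3])
                pending.remove(task)
                free_at[s] = now + task[1]
                total += task[3]
        # Jump to the next event: a new arrival or a station finishing.
        upcoming = [t for t in free_at if t > now]
        if i < len(tasks):
            upcoming.append(tasks[i][0])
        if not upcoming:
            break
        now = min(upcoming)
    return total

# Toy run: (arrival, duration, deadline, weight); one mobile station.
demo = [(0.0, 3.0, 10.0, 5.0), (1.0, 2.0, 4.0, 8.0), (2.0, 4.0, 6.0, 3.0)]
print(greedy_schedule(demo, n_stations=1))

In this toy run the greedy total is 5, while an offline schedule that waits for the heavier task can collect 13; gaps like this are exactly what a competitive-ratio analysis quantifies.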

42 citations


Journal ArticleDOI
TL;DR: A low-overhead detection technique which inserts malicious logic detection circuitry at netlist sites chosen by an algorithm that employs an intelligent and accurate analysis of fault propagation through logic gates.
Abstract: Hardware Trojan horses (HTHs) have emerged as great threats to modern electronic design and manufacturing practices. Because of their inherently surreptitious nature, test vector generation to detect hardware Trojan horses is a difficult problem. Efficient online detection techniques can be more effective in the detection of hardware Trojan horses. In this paper, we propose a low-overhead detection technique which inserts malicious logic detection circuitry at netlist sites chosen by an algorithm that employs an intelligent and accurate analysis of fault propagation through logic gates. Proactive system-level countermeasures can be activated on detection of malicious logic, thereby avoiding disastrous system failure. Experimental results on benchmark circuits show close to 100 percent HTH detection coverage when our proposed technique is employed, as well as acceptable overheads.

39 citations


Journal ArticleDOI
TL;DR: Experimental results on a light-weight cryptographic circuit, KATAN32, show that TFET-based current mode logic (CML) can both improve DPA resilience and preserve low power consumption in the target design.
Abstract: Emerging devices have been designed and fabricated to extend Moore’s Law. While traditional metrics such as power, energy, delay, and area certainly apply to emerging device technologies, new devices may offer additional benefits in addition to improvements in the aforementioned metrics. In this sense, we consider how new transistor technologies could also have a positive impact on hardware security. More specifically, we consider how tunnel transistors (TFETs) could offer superior protection to integrated circuits and embedded systems that are subjected to hardware-level attacks – e.g., differential power analysis (DPA). Experimental results on a light-weight cryptographic circuit, KATAN32, show that TFET-based current mode logic (CML) can both improve DPA resilience and preserve low power consumption in the target design. Compared to the CMOS-based CML designs, the TFET CML circuit consumes 15 times less power while achieving a similar level of DPA resistance.

Journal ArticleDOI
TL;DR: Both the theoretical and numerical results show that the proposed information caching strategy for cyber social computing can achieve high coverage probability, throughput and EE, and low delay, by optimizing the STF threshold, VIBS coverage and D2D communication radius.
Abstract: Cyber social computing has brought great changes and potential intelligent technologies for wireless networks. Among these technologies, information caching strategies are promising approaches to achieving lower delay, higher throughput and higher energy efficiency (EE) of user equipment (UE) in 5G wireless networks, by deploying intelligent caching and computing at the mobile edge. However, static information caching strategies ignore the relevance of traffic fluctuation among different base stations (BSs) and the variance of users' interests. Thus, in this paper, an information caching strategy for cyber social computing based wireless networks is proposed, taking advantage of a two-layer social cyberspace capturing both the traffic correlation between BSs and the social relationships between UEs. In the first layer, a base station social network (BSSN) is constructed based on the social relationship between BSs, which is defined as the social-tie factor (STF). In the second layer, the Indian Buffet Model (IBM) is used to describe the social influence of one UE on another. To reduce base stations' traffic load, users with similar social interests can share the contents they have cached with each other. Therefore, device-to-device (D2D) communication is taken as the underlay to cellular networks in our proposed information caching strategy. By utilizing the social characteristics of the BSSN, the very important BSs (VIBSs) with higher average STF are selected. Then the normal small cells (NSCs) within a VIBS's coverage are linked to the VIBSs only, and the other unique small cells (USCs) are routed back into the core network (CN) directly. The limited cache and backhaul capacity in the whole network are shared only by VIBSs and USCs. UEs will communicate with each other via D2D links only if they i) have similar interests, ii) have sufficient encounter duration, and iii) are adjacent to each other. Otherwise, the UE shall obtain the required contents via cellular networks. With the tools of stochastic geometry, key performance indicators, e.g., coverage probability, network throughput, power consumption, EE, delay and offloaded traffic, are studied. Both the theoretical and numerical results show that the proposed information caching strategy for cyber social computing can achieve high coverage probability, throughput and EE, and low delay, by optimizing the STF threshold, VIBS coverage and D2D communication radius.

Journal ArticleDOI
TL;DR: This work introduces a fully automatic method to detect the presence of HW Trojans in third-party behavioral IPs (3PBIPs) using formal verification methods, and extends it to encrypted 3PBIPs by performing High-Level Synthesis on these IPs and reconstructing the C code in order to perform the verification on them.
Abstract: This work introduces a fully automatic method to detect the presence of HW Trojans in third-party behavioral IPs (3PBIPs) using formal verification methods, in particular property checking at the behavioral level. Some state-of-the-art High-Level Synthesis (HLS) tools now also include advanced formal verification tools. This work leverages these tools to detect the malicious alteration of 3PIPs when no golden reference IP is available. This work has also been extended to detect HW Trojans built into encrypted 3PBIPs by performing High-Level Synthesis on these IPs and reconstructing the C code in order to perform the verification on them. We present three case studies of two of the most typical HW Trojans with different trigger and payload mechanisms. The first leads to the malfunction of the IP, the second leaks information, while the third leads to denial of service. In all three cases, our proposed method was able to detect the HW Trojan in a fully automatic way.

Journal ArticleDOI
TL;DR: This paper proposes that approximation by reducing bit precision and using an inexact multiplier can save power consumption of a digital multilayer perceptron accelerator during the classification of MNIST (inference) with negligible accuracy degradation.
Abstract: This paper proposes that approximation by reducing bit precision and using an inexact multiplier can save power consumption of a digital multilayer perceptron accelerator during the classification of MNIST (inference) with negligible accuracy degradation. Based on the error sensitivity precomputed during training, synaptic weights with less sensitivity are approximated. Under given bit-precision modes, our proposed algorithm determines the bit precision for all synapses to minimize power consumption for a given target accuracy. Across the entire network, earlier layers can be approximated more aggressively since they have lower error sensitivity. The proposed algorithm can save 57.4 percent of power while accuracy is degraded by about 1.7 percent. After approximation, retraining with a few iterations can improve the accuracy while maintaining power consumption. The impact of different training conditions on the approximation is also studied. Training with small quantization error (less bit precision) allows more power saving in inference. It is also shown that a sufficient number of iterations during training is important for approximation in inference. Networks with more layers are more sensitive to the approximation.
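The paper's sensitivity metric and power model are not specified in the abstract, so the following is only a greedy sketch of the idea: weight groups are sorted by their precomputed error sensitivity and their bit precision is lowered, least sensitive first, as long as an accuracy estimate stays above the target. The accuracy and power callables and all numbers below are hypothetical stand-ins for a validation run of the quantized network and for the accelerator's power model.

def assign_bit_precision(sensitivities, modes, accuracy_of, power_of, target_acc):
    """Greedy precision assignment.

    sensitivities: {group_name: error_sensitivity} precomputed during training.
    modes:         allowed bit widths, e.g. [16, 12, 8, 6, 4], high to low.
    accuracy_of:   callable(assignment) -> estimated accuracy (stand-in for a
                   validation run of the quantised multilayer perceptron).
    power_of:      callable(assignment) -> estimated accelerator power.
    Returns a {group_name: bit_width} assignment meeting target_acc."""
    assignment = {g: modes[0] for g in sensitivities}       # start at full precision
    # Least sensitive groups (typically earlier layers) are approximated first.
    for group in sorted(sensitivities, key=sensitivities.get):
        for bits in modes[1:]:                               # try lower precisions
            trial = dict(assignment, **{group: bits})
            if accuracy_of(trial) >= target_acc:
                assignment = trial                           # keep the power saving
            else:
                break                                        # precision too low, stop
    return assignment, power_of(assignment)

# Toy stand-ins: accuracy drops with (sensitivity * lost bits), power grows with bits.
sens = {"layer1": 0.1, "layer2": 0.4, "layer3": 0.9}
acc = lambda a: 0.99 - sum(sens[g] * (16 - b) * 0.002 for g, b in a.items())
pwr = lambda a: sum(a.values())
print(assign_bit_precision(sens, [16, 12, 8, 6, 4], acc, pwr, target_acc=0.97))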

Journal ArticleDOI
TL;DR: A microfluidic impedance cytometer based on a solid-state micropore is demonstrated for the detection and enumeration of cancer cells among red blood cells; combined with the SVM algorithm, it can provide a promising platform for constructing sensor networks in smart hospitals.
Abstract: Smart hospital is believed to be a promising technology and platform that could tremendously improve healthcare in the future. This technology features front-end sensors that collect relevant biomedical data to help the doctor analyze the patient's condition and make diagnostic decisions. Precise analysis of these data is critical to improving the reliability of diagnostic methods. In this paper, we demonstrate a microfluidic impedance cytometer based on a solid-state micropore for the detection and enumeration of cancer cells among red blood cells. Two important parameters of the signal pulses, the peak amplitude and the pulse bandwidth, were analyzed by the support vector machine (SVM) algorithm to classify all the cell events accurately into two different subpopulations. The proposed microfluidic sensor combined with the SVM algorithm can provide a promising platform, which may be used for the construction of sensor networks in smart hospitals.
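Only the classification step is sketched here, under the assumption that each detected pulse has already been reduced to the two features named above (peak amplitude and pulse bandwidth) and that an RBF-kernel SVM separates cancer-cell events from red-blood-cell events; the data below are synthetic placeholders, not measurements.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic placeholder for the per-pulse features extracted from the micropore
# signal: column 0 = peak amplitude, column 1 = pulse bandwidth. Label 1 = cancer
# cell (larger cells give larger, longer pulses), label 0 = red blood cell.
rng = np.random.default_rng(1)
rbc = rng.normal([1.0, 0.2], [0.15, 0.05], size=(500, 2))
cancer = rng.normal([2.5, 0.5], [0.4, 0.1], size=(100, 2))
X = np.vstack([rbc, cancer])
y = np.array([0] * len(rbc) + [1] * len(cancer))

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")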

Journal ArticleDOI
TL;DR: For the first time in the literature, the Attack Exploiting Static Power (AESP) is formulated as a univariate attack by using the mutual information approach to quantify the information that leaks through the static power side channel independently from the adopted leakage model.
Abstract: In this work we focus on Power Analysis Attacks (PAAs) which exploit the dependence of the static current of sub-50 nm CMOS integrated circuits on the internally processed data. Spice simulations of static power have been carried out to show that the coefficient of variation of nanometer logic gates increases with the scaling of CMOS technology. We demonstrate that it is possible to recover the secret key of a cryptographic core by exploiting this data dependence by means of different statistical distinguishers. For the first time in the literature, we formulate the Attack Exploiting Static Power (AESP) as a univariate attack by using the mutual information approach to quantify the information that leaks through the static power side channel independently from the adopted leakage model. This analysis shows that countermeasures conceived to protect cryptographic hardware from attacks based on dynamic power consumption (e.g., WDDL, MDPL, SABL) still exhibit a leakage through the static power side channel. Finally, we show that the Time Enclosed Logic (TEL) concept does not leak information through the static power and is suitable to be used as a countermeasure against both attacks exploiting dynamic power and attacks exploiting static power.
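The measurement setup is not reproduced here; the sketch below shows the general shape of a univariate mutual-information distinguisher under an assumed Hamming-weight leakage of an S-box output (the leakage-model independence claimed in the abstract is not captured by this simplification): for each key-byte guess, the mutual information between the static-power samples and the hypothetical intermediate values is estimated, and the guess with the largest score is retained. The S-box, noise level, and trace count are illustrative.

import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mi_distinguisher(traces, plaintexts, sbox):
    """traces: one static-power sample per encryption (shape (N,)).
    plaintexts: corresponding known plaintext bytes (shape (N,)).
    Returns the MI score for each of the 256 key-byte guesses."""
    scores = np.zeros(256)
    for guess in range(256):
        # Hypothetical intermediate value: Hamming weight of sbox(p ^ guess).
        hyp = np.array([bin(sbox[p ^ guess]).count("1") for p in plaintexts])
        scores[guess] = mutual_info_regression(hyp.reshape(-1, 1), traces,
                                                random_state=0)[0]
    return scores

# Toy demo with a random "S-box" and simulated leakage for a known key byte.
rng = np.random.default_rng(2)
sbox = rng.permutation(256)
key, n = 0x3C, 2000
plaintexts = rng.integers(0, 256, n)
leak = np.array([bin(sbox[p ^ key]).count("1") for p in plaintexts])
traces = leak + rng.normal(0, 0.8, n)          # noisy static-power proxy
scores = mi_distinguisher(traces, plaintexts, sbox)
print(f"best guess: 0x{int(np.argmax(scores)):02X}")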

Journal ArticleDOI
TL;DR: A curriculum-level, competency-based visualized analytic system, called visualized analytics of core competencies (VACC), was implemented; more than 70% of students reported that VACC helped them reflect on their core competencies, understand the correspondence between their taken courses and core competencies, and set goals regarding taking additional courses.
Abstract: This paper proposes an approach for developing curriculum-level open student models. This approach entails evaluating student core competencies using the correspondence between courses and core competencies in a competency-based curriculum and students' taken courses and grades. On the basis of this approach, a curriculum-level, competency-based visualized analytic system, called visualized analytics of core competencies (VACC), was implemented. Course-competency diagnostic tools, coursework performance radar charts, and peer-based ranking tables were developed as part of the VACC analytic system for student reflection and to position their levels of core competencies. These curriculum-level open student models revealed multiple aspects of students' core competencies by monitoring the quantity and quality of courses taken by students, and evaluating students' ranks regarding core competencies compared with their classmates and graduates. A VACC evaluation was conducted in this paper. The results showed that more than 70% of students reported that VACC helped them reflect on their core competencies, understand the correspondence between their taken courses and core competencies, and set goals regarding taking additional courses. In addition, this paper discusses potential analytics and applications of open student models of core competencies.

Journal ArticleDOI
TL;DR: This work presents a scalable and low-overhead TSV usage and design method for 3D-NoC systems where the TSVs of a router can be utilized by its neighbors to deal with the cluster open defects.
Abstract: 3D-Networks-on-Chip exploit the benefits of Networks-on-Chip and 3D-Integrated Circuits, allowing them to be considered one of the most advanced and auspicious communication methodologies. On the other hand, the reliability of 3D-NoCs, due to the vulnerability of Through Silicon Vias (TSVs), remains a major problem. Most existing techniques rely on correcting TSV defects by using redundancies or employing routing algorithms. Nevertheless, they are not suitable for TSV-cluster defects, as they can either lead to costly area and power consumption overheads or result in non-minimal routing paths, thus posing serious threats to system reliability and overall performance. In this work, we present a scalable and low-overhead TSV usage and design method for 3D-NoC systems where the TSVs of a router can be utilized by its neighbors to deal with cluster open defects. An adaptive online algorithm is also introduced to help the proposed system immediately work around newly detected defects without using redundancies. The experimental results show that the proposal ensures that less than 2 percent of the routers are disabled, even with 50 percent of the TSV clusters being defective. The performance evaluations also demonstrate unchanged performance for real applications under 5 percent cluster defects.

Journal ArticleDOI
TL;DR: A novel and effective clone detection approach, termed double-track detection, is presented for radio frequency identification-enabled supply chains, and has a relatively high clone detection rate when compared with a leading method in this area.
Abstract: To improve the traditional clone detection technique, whose performance may be affected by dynamic changes of supply chains and misreads, we present a novel and effective clone detection approach, termed double-track detection, for radio frequency identification-enabled supply chains. As part of a tag's attributes, verification information is written into tags so that the set of all verification information in the collected tag events forms a time-series sequence. Genuine tags can be differentiated from clone tags due to the discrepancy in their verification sequences, which are constructed as products flow along the supply chain. The verification sequence, together with the sequence formed by business actions performed during the supply chain, yields two tracks which can be assessed to detect the presence of clone tags. Theoretical analysis and experimental results show that our proposed mechanism is effective, reasonable, and has a relatively high clone detection rate when compared with a leading method in this area.

Journal ArticleDOI
TL;DR: A novel design of a reconfigurable Spintronic Threshold Logic Gate (STLG) is introduced, which employs spintronic weight devices to perform current-mode weighted summation of binary inputs, whereas the low-voltage, fast-switching spintronic threshold device carries out the threshold operation in an energy-efficient manner.
Abstract: A Threshold Logic Gate (TLG) performs weighted summation of multiple binary inputs and compares the summation with a threshold. Different logic functions can be implemented by reconfiguring the weights and threshold of the same TLG circuit. This paper introduces a novel design of a reconfigurable Spintronic Threshold Logic Gate (STLG), which employs spintronic weight devices to perform current-mode weighted summation of binary inputs, whereas the low-voltage, fast-switching spintronic threshold device carries out the threshold operation in an energy-efficient manner. The proposed STLG operates at a small terminal voltage of 50 mV, resulting in ultra-low energy consumption. A bottom-up cross-layer simulation framework is developed to synthesize and map large-scale digital logic functions to the proposed STLG circuits. The simulation results on ISCAS-85 benchmarks show that the proposed STLG-based reconfigurable logic hardware can achieve two orders of magnitude lower Energy-Delay Product (EDP) compared with state-of-the-art CMOS Field Programmable Gate Arrays (FPGAs), and smaller EDP compared to large-scale Memristive Threshold Logic (MTL) based FPGAs. Moreover, the ultra-low programming energy of the spintronic weight device also leads to three orders of magnitude lower reconfiguration energy for the STLG compared to the MTL design.

Journal ArticleDOI
TL;DR: This work transforms the classic CMOS time-delay PUF (TD-PUF) using integrated nanoscale ReRAM devices to achieve better performance metrics, including uniqueness and reliability, and exploits the high resistance variability of ReRAMs to design a ReRAM-based delay stage that exhibits excellent uniqueness.
Abstract: Currently the semiconductor industry is in search of a Physically-Unclonable-Function (PUF) implementation which combines high reliability and uniqueness with low area and power consumption. The characteristics of emerging nanoscale Resistive Random Access Memory (ReRAM) devices fulfill most of these properties, as they exhibit inherent variability with low area consumption. Of particular interest is that the resistive states of ReRAM devices show a strong dependence on the distribution of grain boundaries within the device, which leads to variability in total device resistance. In this work we transform the classic CMOS time-delay PUF (TD-PUF) using integrated nanoscale ReRAM devices to achieve better performance metrics, including uniqueness and reliability. The enhanced design exploits the high resistance variability of ReRAMs to design a ReRAM-based delay stage that exhibits excellent uniqueness. Accurate simulation and characterization of the proposed PUF were achieved by extracting resistance values, temperature dependence, and usage stress from ReRAM devices fabricated in-house; their application in the proposed TD-PUF is discussed. A 24-stage time-delay PUF utilizing 48 ReRAM devices was simulated, and the results show excellent reliability with respect to environmental parameters. A temperature range of 0 to 125°C was simulated, and optimum reliability was observed at 0.79 V. A supply voltage noise of ±30 mV had no impact on the uniqueness and reliability. The proposed design was compared against two pure CMOS implementations of a TD-PUF. The comparison was performed with respect to the aforementioned metrics and under the same environmental conditions, showing up to a 5-times increase in performance.

Journal ArticleDOI
TL;DR: A lightweight rule verification and resolution framework is proposed, which mainly includes a rule verification system for content anomaly detection and rule conflict detection using domain knowledge and probability analysis, and a quick resolution strategy for rule conflicts based on conflict-scenario analysis.
Abstract: As an important component of the Internet of Things (IoT), wireless sensor-actuator networks can significantly improve the practicality and flexibility of smart building systems. In smart building systems, services, stored as rules, are achieved by rule analyzing and executing. However, irrational rule contents and conflicts among rules may bring confusion and maloperation to the system. To verify the correctness of rules, we propose a lightweight rule verification and resolution framework which mainly includes: 1) a rule verification system for content anomaly detection and rule conflict detection by using domain knowledge and probability analysis and 2) a quick resolution strategy for rule conflicts based on conflict-scenario analysis. This framework can balance verification quality with speed and guarantee that the rule system performs appropriately. Moreover, in order to reduce the comparisons in rule verification, we apply a tree structure to store the building structure hierarchically and bind every rule to one node in the location tree. This way, the position information of a rule can be extracted, and the verification efficiency can be improved. The experimental results show that our proposed framework and strategies can perform efficiently and flexibly in a smart building system.

Journal ArticleDOI
TL;DR: Preliminary results show that educational effectiveness is highest when a team consists of management and anchor types without leadership types, although the practical usefulness of the results is limited as the experiment targeted only one PBL course at one university.
Abstract: To improve practical IT education, many Japanese universities are implementing project-based learning (PBL). Although a previous study examined the relationship between educational effectiveness and the scatter of personal characteristics, the relationship between educational effectiveness and the combination of personal characteristics in a team, which is important for optimizing team composition for PBL, has yet to be examined. Herein, we use the five-factor and stress theories to measure personal characteristics and classify students enrolled in a PBL class at Waseda University into four types: leadership, management, tugboat, and anchor. Then, knowledge and skills questionnaires are used to measure educational effectiveness. The results show that educational effectiveness is highest when a team consists of management and anchor types without leadership types. The results are preliminary, because their practical usefulness is limited as the experiment targeted only one PBL course at one university. For that reason, we need to collect data from other PBL courses at the same or other universities.

Journal ArticleDOI
TL;DR: An S-IOT architecture and a scheme called MoGaHo-Prox are proposed for a group handoff from a mobile AP to a fixed AP and vice versa in a k-member touring group, so that members in the group benefit from downloading and sharing geo-touring information, e.g., the contents of Points Of Interest (POIs), in a more efficient way.
Abstract: The Internet of Things (IOT) is booming and is already with us everywhere. The things not only communicate with each other without human intervention but also have relationships, called the Social IOT (S-IOT), analogous to the social relationships of human beings. In this work, an S-IOT architecture and a scheme called MoGaHo-Prox are proposed for a group handoff from a mobile AP to a fixed AP and vice versa in a k-member touring group, so that members in the group benefit from downloading and sharing geo-touring information, e.g., the contents of Points Of Interest (POIs), in a more efficient way. This work (1) defines an S-IOT architecture and two functional scenarios called m-AP mode and f-AP mode for group touring, (2) proposes two control schemes called the conservative policy and the aggressive policy to handle the group handoff from the m-AP mode to the f-AP mode, and (3) provides control schemes for group handoff from the m-AP mode to the f-AP mode and vice versa. A real system is developed using the Android system, and the performance is evaluated from the perspectives of expense, power consumption, and service time.

Journal ArticleDOI
TL;DR: This work proposes methods for access point placement and routing to quickly connect users in a middle-scale post-disaster scenario model of Content-Centric Networking, and shows that CCN can bring more efficient routing and a more robust framework to fulfill the urgent demands of post-disaster recovery.
Abstract: Content-Centric Networking (CCN) is now a research hotspot aiming at building a new network architecture compared with the traditional IP-based, host-centric one. In this paper, after learning that CCN's content naming and content-based properties make it suitable for fast network organization in disaster recovery, we propose methods for access point placement and routing to quickly connect users in a middle-scale post-disaster scenario model. Our work includes the design of a placement algorithm using graphic union coverage and a CCN routing strategy based on Breadth-First Search, both exploiting the social attributes of the user node distribution. We use real-world maps for simulation and carry out a comparative analysis with existing ad hoc methods under the same experimental conditions. The simulation results show that CCN can bring more efficient routing and a more robust framework to fulfill the urgent demands of post-disaster recovery.

Journal ArticleDOI
TL;DR: This paper tackles the issue of enabling accurate and robust tracking of off-the-shelf robots endowed with limited sensing capabilities, and proposes a solution that fuses visual tracking data gathered via a fixed camera in a smart environment with odometry data obtained from the robot's on-board sensors.
Abstract: The collaboration between humans and robots is one of the most disruptive and challenging research areas. Given advances in design and artificial intelligence, humans and robots could soon team up to perform a number of different tasks together. Robots could also become new playmates. In fact, an emerging trend is associated with so-called phygital gaming, which builds upon the idea of merging the physical world with a virtual one in order to let physical and virtual entities, such as players, robots, animated characters and other game objects, interact seamlessly as if they were all part of the same reality. This paper specifically focuses on mixed reality gaming environments that can be created by using floor projection, and tackles the issue of enabling accurate and robust tracking of off-the-shelf robots endowed with limited sensing capabilities. The proposed solution is implemented by fusing visual tracking data gathered via a fixed camera in a smart environment with odometry data obtained from the robot's on-board sensors. The solution has been tested within a phygital gaming platform in a real usage scenario, by experimenting with a robotic game that exhibits many challenging situations which would be hard to manage using conventional tracking techniques.
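The paper's actual fusion pipeline is not described in the abstract, so the following is a minimal complementary-filter-style sketch: the robot pose is dead-reckoned from odometry increments and, whenever an absolute fix from the fixed camera is available, the estimate is blended toward it to bound drift. The motion model, blend gain, and message formats are assumptions.

import math

def fuse_pose(odometry, camera_fixes, alpha=0.3):
    """odometry: list of (dt, v, omega) wheel-odometry increments.
    camera_fixes: {step_index: (x, y, theta)} absolute poses from the fixed
    overhead camera (may be sparse or missing when the robot is occluded).
    alpha: blend factor pulling the estimate toward the camera fix.
    Returns the list of fused (x, y, theta) estimates."""
    x = y = theta = 0.0
    fused = []
    for k, (dt, v, omega) in enumerate(odometry):
        # Dead reckoning from on-board sensors (unicycle motion model).
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
        # Correct drift with the visual tracking measurement, when present.
        if k in camera_fixes:
            cx, cy, ct = camera_fixes[k]
            x += alpha * (cx - x)
            y += alpha * (cy - y)
            # Blend angles through the shortest arc to avoid wrap-around jumps.
            theta += alpha * math.atan2(math.sin(ct - theta), math.cos(ct - theta))
        fused.append((x, y, theta))
    return fused

# Toy run: constant forward motion with a slight turn, camera fix every 10 steps.
odo = [(0.1, 0.5, 0.05)] * 50
fixes = {k: (0.05 * k, 0.001 * k ** 2, 0.005 * k) for k in range(0, 50, 10)}
print(fuse_pose(odo, fixes)[-1])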

Journal ArticleDOI
TL;DR: The FORGE toolkit is presented, which leverages experimentation facilities currently deployed in international initiatives for the development of e-learning materials and builds an ecosystem where teaching and educational materials, tools, and experiments are available under open schemes and policies.
Abstract: While more and more services become virtualized and always accessible in our society, laboratories supporting computer science (CS) lectures have mainly remained offline and class-based. This apparent abnormality is due to several limiting factors, discussed in the literature, such as the high cost of deploying and maintaining computer network testbeds and the lack of standardization for the presentation of eLearning platforms. In this paper, we present the FORGE toolkit, which leverages experimentation facilities currently deployed in international initiatives for the development of e-learning materials. Thus, we address the institutional challenge mentioned in the ACM/IEEE 2013 CS curricula concerning the access and maintenance of specialized and heterogeneous hardware thanks to a seamless integration with the networking testbed community. Moreover, this project builds an ecosystem where teaching and educational materials, tools, and experiments are available under open schemes and policies. We demonstrate how it already meets most of the requirements of the network and communication component of CS 2013 and some of the labs of the Cisco academy. Finally, we present experience reports illustrating the potential benefits of this framework based on the first deployments in four post-graduate courses at prestigious institutions around the world.

Journal ArticleDOI
TL;DR: Experimental results of both user study and parameter evaluation demonstrate that the GRA-based method can improve accuracy of video affective analysis and performance of video recommendation.
Abstract: As an important cyber-enabled application, online video recommendation is seeing significant interest from both industry and academia. Effectively recommending video content has become a popular research topic. However, it has been found that existing recommendation methods based on video affective analysis ignore the temporal factor, leading to poor performance, especially when the order of emotion components affects the recommendation quality. This motivates us to study the feature of emotion fluctuation, which we call the Temporal Factor of Emotion (TFE). In this paper, a novel recommendation method based on Grey Relational Analysis (GRA) is proposed to tackle this problem. GRA preserves the temporal factor of objects during analysis and is suitable for analyzing systems with unknown correlation (a set of independent videos). In our work, first, specific video features are extracted and mapped to the well-known Lovheim emotion space through SVMs (Support Vector Machines). Then, GRA is applied to compute the quantitative relation among videos by using the extracted emotions as factors. Finally, a pick-filter pattern and a GRA-based recommendation method under the Fisher model are proposed. To evaluate the performance of our method, an online video recommendation system is developed. Experimental results of both a user study and parameter evaluation demonstrate that the GRA-based method can improve the accuracy of video affective analysis and the performance of video recommendation.
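The feature extraction and the mapping to the Lovheim emotion space are not reproduced here; the sketch below only shows the standard grey relational grade between a reference emotion time series and candidate videos' series, using the usual grey relational coefficient with distinguishing coefficient rho = 0.5. The emotion curves are made-up numbers; the second candidate has the same values as the reference in a different order, illustrating why preserving the temporal factor changes the ranking.

import numpy as np

def grey_relational_grade(reference, candidates, rho=0.5):
    """reference: emotion time series of the query video, shape (T,).
    candidates: matrix of candidate videos' emotion series, shape (M, T).
    Returns one grey relational grade per candidate; higher means more similar,
    with the temporal order of the series preserved in the comparison."""
    reference = np.asarray(reference, dtype=float)
    candidates = np.asarray(candidates, dtype=float)
    delta = np.abs(candidates - reference)                    # point-wise deviations
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)     # relational coefficients
    return coeff.mean(axis=1)                                 # grade = mean over time

# Toy example: three candidates compared against a reference arousal curve.
ref = [0.2, 0.5, 0.9, 0.4, 0.1]
cands = [[0.25, 0.45, 0.85, 0.35, 0.15],   # similar trajectory
         [0.9, 0.4, 0.1, 0.5, 0.2],        # same values, different temporal order
         [0.8, 0.8, 0.8, 0.8, 0.8]]        # flat, dissimilar
grades = grey_relational_grade(ref, cands)
print(np.argsort(grades)[::-1])            # recommend in descending grade order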

Journal ArticleDOI
TL;DR: A sudden power-outage resilient in-processor checkpointing scheme for energy-harvesting nonvolatile processors, designed using hybrid 90 nm CMOS and 70 nm magnetic tunnel junction technologies, achieves a several-order-of-magnitude reduction in rollback error probability.
Abstract: This paper introduces a sudden power-outage resilient in-processor checkpointing scheme for energy-harvesting nonvolatile processors. In energy harvesting applications, the power supply generated from a renewable power source is unstable and may induce frequent sudden power outages, causing inconsistency among distributed nonvolatile flip-flops (NVFFs) and hence rollback failures in conventional nonvolatile processors. To realize continuous operation under frequent sudden power outages, the proposed in-processor checkpointing technique fixes the inconsistency using time-reminding redundant NVFFs (TM-RNVFFs). The TM-RNVFFs store the current and the past few data values together with the timing information of storing. If several NVFFs fail to store the current data due to a sudden power outage, the proposed in-processor checkpointing technique exploits the timing information to find the common newest state among the distributed NVFFs, leading to a correct rollback to a consistent state. The sudden power-outage effect is modeled to perform design space explorations at different configurations, such as redundancy and checkpointing period. Nonvolatile ARM Cortex-M0 processors are designed using hybrid 90 nm CMOS and 70 nm magnetic tunnel junction (MTJ) technologies. Based on the design space explorations, the proposed nonvolatile processor achieves a several-order-of-magnitude reduction in rollback error probability with a power dissipation overhead of 11.6 percent and an area overhead of 52.1 percent in comparison with the conventional nonvolatile processor.
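Circuit-level details are not reproduced here; the following is a behavioral sketch of the recovery rule implied above: each TM-RNVFF keeps its last few stored values tagged with a checkpoint index, and on power-up the processor rolls back to the newest checkpoint index that every flip-flop still holds, so a consistent state is restored even if some flip-flops missed the latest store. The data layout is hypothetical.

def common_newest_checkpoint(nvff_histories):
    """nvff_histories: one dict per flip-flop mapping checkpoint index -> stored
    value (the 'time-reminding' history kept by a TM-RNVFF). Returns the newest
    checkpoint index present in every flip-flop and the consistent state
    restored from it, or None if no common checkpoint survives."""
    common = set(nvff_histories[0])
    for hist in nvff_histories[1:]:
        common &= set(hist)
    if not common:
        return None
    rollback = max(common)                        # newest consistent checkpoint
    return rollback, [hist[rollback] for hist in nvff_histories]

# Toy example: three flip-flops; the third missed the store at checkpoint 7
# because of a sudden power outage, so the processor rolls back to checkpoint 6.
ffs = [{5: 0, 6: 1, 7: 1},
       {5: 1, 6: 0, 7: 1},
       {5: 1, 6: 1}]
print(common_newest_checkpoint(ffs))              # -> (6, [1, 0, 1])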

Journal ArticleDOI
TL;DR: A user-centered handoff scheme for hybrid 5G environments guarantees that, based on limited local information, each user can select a new BS with a high achievable data receiving rate and low block probability in handoff.
Abstract: In this paper, we propose a user-centered handoff scheme for hybrid 5G environments. The handoff problem is formulated as a multi-objective optimization problem which maximizes the achievable data receiving rate and minimizes the block probability simultaneously. When a user needs to select a new Base Station (BS) in handoff, the user calculates the achievable data receiving rate and estimates the block probability for each available BS based on limited local information. By taking the throughput metric into consideration, the formulated multi-objective optimization problem is then transformed into a maximization problem. We solve the transformed maximization problem to calculate the network selection result in a distributed method. The calculated network selection result is proved to be a Pareto-optimal solution of the original multi-objective optimization problem. The proposed scheme guarantees that, based on limited local information, each user can select a new BS with a high achievable data receiving rate and low block probability in handoff. Comprehensive experiments have been conducted, showing that the proposed scheme significantly improves the total throughput and the ratio of users served.
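The exact utility function and the Pareto-optimality argument are not reproduced here; the sketch below is only one plausible scalarization of the two objectives named above: each user scores every candidate BS by its locally estimated rate times the probability of not being blocked, and picks the maximum. The numbers are illustrative.

def select_bs(candidates):
    """candidates: {bs_id: (achievable_rate_mbps, block_probability)} estimated
    by the user from local measurements. Picks the BS maximising the expected
    throughput rate * (1 - block_probability), balancing both objectives."""
    return max(candidates,
               key=lambda bs: candidates[bs][0] * (1.0 - candidates[bs][1]))

# Toy example: a macro cell with low rate but low blocking vs. two small cells.
bss = {"macro": (20.0, 0.05), "small-1": (80.0, 0.60), "small-2": (55.0, 0.20)}
print(select_bs(bss))   # -> "small-2" (expected throughput 44 vs 19 and 32)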