
Showing papers in "IEEE Transactions on Emerging Topics in Computing in 2015"


Journal ArticleDOI
TL;DR: This paper surveys over one hundred IoT smart solutions in the marketplace and examines them closely in order to identify the technologies used, functionalities, and applications, and suggests a number of potentially significant research directions.
Abstract: The Internet of Things (IoT) is a dynamic global information network consisting of Internet-connected objects, such as radio-frequency identification (RFID) tags, sensors, actuators, and other instruments and smart appliances, which are becoming an integral component of the future Internet. Over the last decade, we have seen a large number of IoT solutions developed by start-ups, small and medium enterprises, large corporations, academic research institutes (such as universities), and private and public research organizations making their way into the market. In this paper, we survey over one hundred IoT smart solutions in the marketplace and examine them closely in order to identify the technologies used, functionalities, and applications. Based on the application domain, we classify and discuss these solutions under five different categories: 1) smart wearable; 2) smart home; 3) smart city; 4) smart environment; and 5) smart enterprise. This survey is intended to serve as a guideline and a conceptual framework for future research in the IoT and to motivate and inspire further developments. It also provides a systematic exploration of existing research and suggests a number of potentially significant research directions.

388 citations


Journal ArticleDOI
TL;DR: This paper develops searchable encryption for multi-keyword ranked search over stored data by considering the large number of outsourced documents (data) in the cloud and utilizing the relevance score and k-nearest neighbor techniques to develop an efficient multi-keyword search scheme.
Abstract: In mobile cloud computing, a fundamental application is to outsource mobile data to external cloud servers for scalable data storage. The outsourced data, however, need to be encrypted due to the privacy and confidentiality concerns of their owner. This introduces significant difficulties for accurate search over the encrypted mobile cloud data. To tackle this issue, in this paper, we develop searchable encryption for multi-keyword ranked search over the stored data. Specifically, considering the large number of outsourced documents (data) in the cloud, we utilize the relevance score and k-nearest neighbor techniques to develop an efficient multi-keyword search scheme that returns ranked search results based on their accuracy. Within this framework, we leverage an efficient index to further improve the search efficiency, and adopt a blind storage system to conceal the access pattern of the search user. Security analysis demonstrates that our scheme can achieve confidentiality of documents and index, trapdoor privacy, trapdoor unlinkability, and concealment of the search user's access pattern. Finally, using extensive simulations, we show that our proposal achieves much improved efficiency in terms of search functionality and search time compared with existing proposals.
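For intuition only, here is a minimal plaintext sketch of the ranked-search idea described above: documents are scored against query keywords with TF-IDF and the top-k results are returned. The scoring function, toy documents, and k are illustrative assumptions; the paper's actual scheme performs this ranking over encrypted index vectors via a secure k-nearest-neighbor technique and blind storage.

```python
# Minimal sketch (not the paper's encrypted scheme): plaintext TF-IDF scoring
# and top-k ranking, illustrating the "multi-keyword ranked search" idea.
import math
from collections import Counter

def tf_idf_scores(docs, query_keywords):
    """Score each document against the query keywords with TF-IDF."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # document frequency of each query keyword
    df = {w: sum(1 for toks in tokenized if w in toks) for w in query_keywords}
    scores = []
    for toks in tokenized:
        counts = Counter(toks)
        score = 0.0
        for w in query_keywords:
            if df[w] == 0:
                continue
            tf = counts[w] / len(toks)
            idf = math.log(n / df[w])
            score += tf * idf
        scores.append(score)
    return scores

def ranked_search(docs, query_keywords, k=3):
    scores = tf_idf_scores(docs, query_keywords)
    order = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)
    return order[:k]

docs = ["cloud storage of mobile data", "encrypted search over cloud data",
        "mobile cloud computing survey"]
print(ranked_search(docs, ["cloud", "encrypted"], k=2))
```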

168 citations


Journal ArticleDOI
TL;DR: The proposed method is applied to estimate the slab temperature distribution in a hot rolling process monitoring system, which is a typical industrial CPS, and it is demonstrated that the introduction of RNs improves temperature estimation efficiency and accuracy compared with the homogeneous WSN with SNs only.
Abstract: Ubiquitous monitoring over wireless sensor networks (WSNs) is of increasing interest in industrial cyber-physical systems (CPSs). The question of how to understand the situation of a physical system by estimating its process parameters remains largely unexplored. This paper is concerned with the distributed estimation problem for industrial automation over relay-assisted WSNs. Different from most existing works on WSNs with homogeneous sensor nodes, the network considered in this paper consists of two types of nodes: sensing nodes (SNs), which are capable of sensing and computing, and relay nodes (RNs), which are only capable of simple data aggregation. We first adopt a Kalman filtering (KF) approach to estimate the unknown physical parameters. In order to facilitate the decentralized implementation of the KF algorithm in relay-assisted WSNs, a tree-based broadcasting strategy is provided for distributed sensor fusion. With the fused information, consensus-based estimation algorithms are proposed for SNs and RNs, respectively. The proposed method is applied to estimate the slab temperature distribution in a hot rolling process monitoring system, which is a typical industrial CPS. It is demonstrated that the introduction of RNs improves temperature estimation efficiency and accuracy compared with a homogeneous WSN with SNs only.
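As a point of reference, the sketch below shows one centralized Kalman filter predict/update cycle of the kind the paper's distributed, consensus-based variant decentralizes; the 2-state model matrices are illustrative placeholders, not the hot-rolling temperature model.

```python
# Minimal sketch of a single centralized Kalman filtering step; the model
# matrices below are toy placeholders, not the paper's process model.
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle for x_{k+1} = A x_k + w, z_k = C x_k + v."""
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update with measurement z
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# toy 2-state example
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([1.2]), A, C, Q, R)
print(x)
```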

136 citations


Journal ArticleDOI
TL;DR: An analytical model is developed that adopts preimmunity and immunity to represent the features of mobile nodes when they change their interests and mimics epidemic information dissemination more accurately than existing models.
Abstract: With the advancement of smartphones, mobile social networks (MSNs) have emerged where information can be shared among mobile users via opportunistic peer-to-peer links. Since the social ties and users’ behaviors in MSNs have diverse characteristics, information dissemination in MSNs becomes a new challenge. In particular, the information that mobile users are interested in may vary, which can significantly affect the information dissemination. In this paper, we develop an analytical model to analyze epidemic information dissemination in MSNs. We first adopt preimmunity and immunity to represent the features of mobile nodes when they change their interests. Then, the information dissemination mechanism is introduced with four proposed dissemination rules according to the process of epidemic information dissemination. We develop the analytical model through ordinary differential equations to mimic epidemic information dissemination in MSNs. The trace-driven simulation demonstrates that our analytical model mimics epidemic information dissemination more accurately than existing ones.
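To make the ODE-based modeling concrete, the following sketch integrates an SIR-like system extended with a "pre-immune" compartment using simple Euler steps; the compartments, rates, and rules here are assumptions for illustration, not the paper's four dissemination rules or exact equations.

```python
# Minimal sketch (assumed compartments and rates, not the paper's exact ODEs):
# Euler integration of an SIR-like model extended with a "pre-immune" class P
# whose members have lost interest and cannot be infected by the information.
def simulate(beta=0.3, gamma=0.05, delta=0.02, dt=0.1, steps=2000):
    S, I, R, P = 0.98, 0.02, 0.0, 0.0   # susceptible, infected, recovered, pre-immune
    history = []
    for _ in range(steps):
        new_inf = beta * S * I          # contact-driven dissemination
        lose_int = delta * S            # susceptibles losing interest -> pre-immune
        recover = gamma * I             # infected nodes stop forwarding
        S += dt * (-new_inf - lose_int)
        I += dt * (new_inf - recover)
        R += dt * recover
        P += dt * lose_int
        history.append((S, I, R, P))
    return history

final = simulate()[-1]
print("final fractions S, I, R, P:", [round(v, 3) for v in final])
```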

127 citations


Journal ArticleDOI
TL;DR: This analysis provides a comprehensive understanding of user behavior in the mobile Internet, which may be used by network operators to design appropriate mechanisms in resource provisioning and mobility management for resource consumers based on different categories of applications.
Abstract: Smart devices bring us ubiquitous mobile access to the Internet, making the mobile Internet grow rapidly. Using mobile traffic data collected at core metropolitan 2G and 3G networks of China over a week, this paper studies mobile user behavior from three aspects: 1) data usage; 2) mobility pattern; and 3) application usage. We classify mobile users into different groups to study resource consumption in the mobile Internet. We observe that traffic-heavy users and high-mobility users tend to consume massive data and radio resources simultaneously. Both the data usage and the mobility pattern are closely related to the application access behavior of the users. Users can be clustered through their application usage behavior, and application categories can be identified by the ways they attract users. Our analysis provides a comprehensive understanding of user behavior in the mobile Internet, which may be used by network operators to design appropriate mechanisms in resource provisioning and mobility management for resource consumers based on different categories of applications.

116 citations


Journal ArticleDOI
TL;DR: This work uses lightweight semantics for metadata to enhance rich sensor data acquisition, and heavyweight semantics for top-level W3C Web Ontology Language ontology models describing multilevel knowledge bases and semantically driven decision support and workflow orchestration for semantic EWS deployment.
Abstract: An early warning system (EWS) is a core type of data-driven Internet of Things (IoT) system used for environmental disaster risk and effect management. The potential benefits of using a semantic-type EWS include easier sensor and data source plug-and-play; simpler, richer, and more dynamic metadata-driven data analysis; and easier service interoperability and orchestration. The challenges faced during practical deployments of semantic EWSs are the need for scalable time-sensitive data exchange and processing (especially involving heterogeneous data sources) and the need for resilience to changing ICT resource constraints in crisis zones. We present a novel IoT EWS system framework that addresses these challenges, based upon a multisemantic representation model. We use lightweight semantics for metadata to enhance rich sensor data acquisition. We use heavyweight semantics for top-level W3C Web Ontology Language ontology models describing multilevel knowledge bases and semantically driven decision support and workflow orchestration. This approach is validated through both system-related metrics and a case study involving an advanced prototype of the semantic EWS, integrated with a deployed EWS infrastructure.

103 citations


Journal ArticleDOI
TL;DR: A novel three-layer architecture consisting of wearable devices, mobile devices, and a remote cloud for code offloading, which offloads a portion of computation tasks from wearable devices to local mobile devices or the remote cloud so that even applications with a heavy computation load can still be supported on wearable devices.
Abstract: Wearable computing has become an emerging computing paradigm for various recently developed wearable devices, such as Google Glass and the Samsung Galaxy Smartwatch, which have significantly changed our daily life with new functions. To expand the applications on wearable devices with limited computational capability, storage, and battery capacity, in this paper, we propose a novel three-layer architecture consisting of wearable devices, mobile devices, and a remote cloud for code offloading. In particular, we offload a portion of computation tasks from wearable devices to local mobile devices or the remote cloud such that even applications with a heavy computation load can still be supported on wearable devices. Furthermore, considering the special characteristics and requirements of wearable devices, we investigate a code offloading strategy with a novel just-in-time objective, i.e., maximizing the number of tasks executed on wearable devices while guaranteeing their delay requirements. Because this problem is NP-hard, as we prove, we propose a fast heuristic algorithm based on the genetic algorithm to solve it. Finally, extensive simulations are conducted to show that our proposed algorithm significantly outperforms the other three offloading strategies.
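As a rough illustration of the just-in-time objective (keep as many tasks as possible on the wearable while meeting deadlines), the sketch below makes a greedy per-task placement decision across the three tiers; the delay numbers are invented, and the paper itself solves the problem with a genetic-algorithm-based heuristic rather than this greedy rule.

```python
# Minimal sketch of a delay-aware placement decision over the three-layer
# architecture: keep a task on the wearable if its local execution delay meets
# the deadline, otherwise try the phone, then the cloud. All numbers are
# illustrative assumptions.
def place_tasks(tasks):
    """tasks: list of dicts with per-tier delay estimates and a deadline."""
    placement = []
    for t in tasks:
        for tier in ("wearable", "mobile", "cloud"):
            if t["delay"][tier] <= t["deadline"]:
                placement.append((t["name"], tier))
                break
        else:
            placement.append((t["name"], "rejected"))  # no tier meets the deadline
    return placement

tasks = [
    {"name": "step_count", "deadline": 0.05,
     "delay": {"wearable": 0.01, "mobile": 0.03, "cloud": 0.20}},
    {"name": "speech_recog", "deadline": 0.50,
     "delay": {"wearable": 2.00, "mobile": 0.40, "cloud": 0.30}},
]
print(place_tasks(tasks))
```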

96 citations


Journal ArticleDOI
TL;DR: This paper proposes a genetic algorithm to perform data allocation to different memory units, thereby reducing memory access cost in terms of power consumption and latency, and shows the merits of the heterogeneous scratchpad architecture over the traditional pure memory system and the effectiveness of the proposed algorithms.
Abstract: The gradually widening speed disparity between CPU and memory has become an overwhelming bottleneck for the development of chip multiprocessor systems. In addition, increasing penalties caused by frequent on-chip memory accesses have raised critical challenges in delivering high memory access performance within tight power and latency budgets. To overcome the daunting memory wall and energy wall issues, this paper proposes a new heterogeneous scratchpad memory architecture, which is configured from SRAM, MRAM, and Z-RAM. Based on this architecture, we propose a genetic algorithm to perform data allocation to different memory units, thereby reducing memory access cost in terms of power consumption and latency. Extensive experiments are performed to show the merits of the heterogeneous scratchpad architecture over the traditional pure memory system and the effectiveness of the proposed algorithms.
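The following is a minimal sketch of a genetic algorithm for data-to-memory allocation in the spirit of the approach described; the cost table, capacities, access counts, and GA parameters are illustrative assumptions rather than the paper's model.

```python
# Minimal sketch of a genetic algorithm for data-to-memory allocation; the
# cost table, capacities, and GA settings below are illustrative assumptions.
import random

MEMS = ["SRAM", "MRAM", "ZRAM"]
# (read cost, write cost, capacity in blocks) -- assumed numbers
COST = {"SRAM": (1, 1, 4), "MRAM": (2, 10, 8), "ZRAM": (3, 3, 8)}
# per data object: (reads, writes); each object occupies one block
DATA = [(100, 10), (50, 60), (200, 5), (10, 90), (80, 80), (30, 30)]

def cost(chrom):
    total, used = 0, {m: 0 for m in MEMS}
    for (reads, writes), gene in zip(DATA, chrom):
        m = MEMS[gene]
        r, w, _ = COST[m]
        total += reads * r + writes * w
        used[m] += 1
    # heavy penalty for exceeding a memory's capacity
    penalty = sum(max(0, used[m] - COST[m][2]) * 10_000 for m in MEMS)
    return total + penalty

def evolve(pop_size=40, gens=200, pmut=0.1):
    pop = [[random.randrange(3) for _ in DATA] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(DATA))
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < pmut:                # mutation
                child[random.randrange(len(DATA))] = random.randrange(3)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print([MEMS[g] for g in best], "cost =", cost(best))
```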

86 citations


Journal ArticleDOI
TL;DR: To design an energy-aware offloading strategy, energy models of the WLAN, third-generation, and fourth-generation interfaces of smartphones are developed that make smartphones capable of accurately estimating the energy cost of task offloading.
Abstract: Task offloading from smartphones to the cloud is a promising strategy to enhance the computing capability of smartphones and prolong their battery life. However, task offloading introduces a communication cost for those devices. Therefore, the consideration of the communication cost is crucial for the effectiveness of task offloading. To make task offloading beneficial, one of the challenges is to estimate the energy consumed in communication activities of task offloading. Accurate energy estimation models will enable these devices to make the right decisions as to whether or not to perform task offloading, based on the energy cost of the communication activities. Simply put, if the offloading process consumes less energy than processing the task on the device itself, then the task is offloaded to the cloud. To design an energy-aware offloading strategy, we develop energy models of the WLAN, third-generation, and fourth-generation interfaces of smartphones. These models make smartphones capable of accurately estimating the energy cost of task offloading. We validate the models by conducting an extensive set of experiments on five smartphones from different vendors. The experimental results show that our estimation models accurately estimate the energy required to offload tasks.
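The offloading decision these energy models enable can be illustrated with a simple comparison: offload only if the estimated communication energy is below the local execution energy. The model below (transfer time times transmit power plus a tail term) and all numbers are assumptions, not the measured WLAN/3G/4G models from the paper.

```python
# Minimal sketch of the offload-or-not decision; parameters are illustrative.
def comm_energy(data_bytes, throughput_bps, tx_power_w, tail_energy_j=0.0):
    """Energy to transfer `data_bytes` at `throughput_bps`, plus any tail energy."""
    transfer_time = (data_bytes * 8) / throughput_bps
    return tx_power_w * transfer_time + tail_energy_j

def local_energy(cpu_cycles, joules_per_cycle=1e-9):
    return cpu_cycles * joules_per_cycle

def should_offload(data_bytes, cpu_cycles, throughput_bps, tx_power_w, tail_j):
    # offload only if sending the data costs less energy than computing locally
    return comm_energy(data_bytes, throughput_bps, tx_power_w, tail_j) \
        < local_energy(cpu_cycles)

# 2 MB of input, 5e9 CPU cycles, Wi-Fi-like link vs. cellular-like link
print("WLAN:", should_offload(2_000_000, 5e9, 20e6, 0.7, 0.1))
print("4G:  ", should_offload(2_000_000, 5e9, 5e6, 1.2, 5.0))
```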

81 citations


Journal ArticleDOI
TL;DR: A cooperative downloading algorithm, namely, max-throughput and min-delay cooperative downloading (MMCD), which minimizes an average delivery delay of each user request while maximizing the amount of data packets downloaded from the RSU is proposed.
Abstract: Advances in low-power wireless communications and microelectronics have a great impact on transportation systems, and pervasive deployment of roadside units (RSUs) is promising for providing drive-thru Internet to vehicular users anytime and anywhere. Downloading data packets from the RSU, however, is not always reliable because of the high mobility of vehicles and the high contention among vehicular users. Using intervehicle communication, cooperative downloading can maximize the amount of data packets downloaded per user request. In this paper, we focus on effective data downloading for real-time applications (e.g., video streaming and online games) where each user request is prioritized by its delivery deadline. We propose a cooperative downloading algorithm, namely, max-throughput and min-delay cooperative downloading (MMCD), which minimizes the average delivery delay of each user request while maximizing the amount of data packets downloaded from the RSU. The performance of MMCD is evaluated by extensive simulations, and the results demonstrate that our algorithm can reduce the mean delivery delay while achieving downloading throughput as high as that of a state-of-the-art method, even when vehicles compete heavily for access to the RSU in a conventional highway scenario.

68 citations


Journal ArticleDOI
TL;DR: The crowding or protection effect is considered and a novel model called the improved SIR model is developed, which uses both deterministic and stochastic models to characterize the dynamics of epidemics on social contact networks.
Abstract: Social contact networks and the way people interact with each other are the key factors that impact epidemic spreading. However, it is challenging to model the behavior of epidemics based on social contact networks due to their high dynamics. Traditional models, such as the susceptible-infected-recovered (SIR) model, ignore the crowding or protection effect and thus rely on unrealistic assumptions. In this paper, we consider the crowding or protection effect and develop a novel model called the improved SIR model. Then, we use both deterministic and stochastic models to characterize the dynamics of epidemics on social contact networks. The results from both simulations and a real data set show that epidemics are more likely to break out on social contact networks with a higher average degree. We also present some potential immunization strategies, such as random set immunization, dominating set immunization, and high-degree set immunization, to further support this conclusion.

Journal ArticleDOI
TL;DR: A new dynamic programming algorithm is proposed that inserts the minimum number of FRTUs satisfying the detection rate constraint and can perform FRTU insertion for a large scale power system.
Abstract: In the modern smart home and community, smart meters have been massively deployed as replacements for traditional analog meters. Although this significantly reduces the cost of data collection, as meter readings are transmitted wirelessly, a smart meter is not tamper-resistant. As a consequence, the smart grid infrastructure is under threat of energy theft, by means of attacking a smart meter so that it undercounts the electricity usage. Deployment of feeder remote terminal units (FRTUs) helps narrow the search zone for energy theft in the smart home and community. However, due to budgetary limits, utility companies can only afford to insert the minimum number of FRTUs. This imposes a significant challenge: deploy the minimum number of FRTUs while each smart meter is still effectively monitored. To the best of our knowledge, the only work addressing this problem is [1], which uses stochastic optimization methods. Their algorithm is not very practical, as it cannot handle large distribution networks because of scalability issues. Due to its inherent heuristic and non-deterministic nature, there is no guarantee on the solution quality either. Thus, high-performance energy theft detection is still needed. In order to resolve this challenge, we propose a new dynamic programming algorithm that inserts the minimum number of FRTUs satisfying the detection rate constraint. It evaluates every candidate solution in a bottom-up fashion using an innovative pruning technique. As a deterministic polynomial-time algorithm, it is able to handle large distribution networks. In contrast to [1], which can only handle small systems, our technique can perform FRTU insertion for a large-scale power system. Our experimental results demonstrate that the average number of FRTUs required is only 26% of the number of smart meters in the community. Compared with the previous work, the number of FRTUs is reduced by 18.8% while the solution quality in terms of the anomaly coverage index metric is still improved.

Journal ArticleDOI
TL;DR: A novel WSN-MCC integration scheme named TPSS is proposed, which consists of two main parts: time and priority-based selective data transmission (TPSDT), with which the WSN gateway selectively transmits the sensory data that are more useful to the cloud considering the time and priority features of the data requested by the mobile user, and priority-based sleep scheduling (PSS) for the WSN to reduce energy consumption.
Abstract: The integration of ubiquitous wireless sensor networks (WSNs) and powerful mobile cloud computing (MCC) is a research topic that is attracting growing interest in both academia and industry. In this new paradigm, the WSN provides data to the cloud, and mobile users request data from the cloud. To support applications involving WSN-MCC integration, which need to reliably deliver data that are more useful to the mobile users from the WSN to the cloud, this paper first identifies the critical issues that affect the usefulness of sensory data and the reliability of the WSN, and then proposes a novel WSN-MCC integration scheme named TPSS, which consists of two main parts: 1) time and priority-based selective data transmission (TPSDT), in which the WSN gateway selectively transmits the sensory data that are more useful to the cloud, considering the time and priority features of the data requested by the mobile user, and 2) a priority-based sleep scheduling (PSS) algorithm that reduces the energy consumption of the WSN so that it can gather and transmit data in a more reliable way. Analytical and experimental results demonstrate the effectiveness of TPSS in improving the usefulness of sensory data and the reliability of the WSN for WSN-MCC integration.

Journal ArticleDOI
TL;DR: A tutorial on the development of the smart controller to schedule household appliances, which is also known as smart home scheduling, is presented and results demonstrate that it can reduce the electricity bill by 30.11% while still improving peak-to-average ratio (PAR) in the power grid.
Abstract: The smart home infrastructure features the automatic control of various household appliances in the advanced metering infrastructure, which enables the connection of individual smart home systems to a smart grid. In such an infrastructure, each smart meter receives electricity prices from utilities and uses a smart controller to schedule the household appliances accordingly. This helps shift the heavy energy load from peak hours to nonpeak hours. Such an architecture significantly improves the reliability of the power grid by reducing the peak energy usage, while benefiting the customers by reducing electricity bills. This paper presents a tutorial on the development of the smart controller to schedule household appliances, which is also known as smart home scheduling. For each individual user, a dynamic programming-based algorithm that schedules household appliances with discrete power levels is introduced. Based on it, a game-theoretic framework is designed for multi-user smart home scheduling to mitigate the accumulated energy usage during peak hours. The simulation results demonstrate that it can reduce the electricity bill by 30.11% while still improving the peak-to-average ratio (PAR) in the power grid. Furthermore, the deployment of smart home scheduling techniques in a big city is discussed. In such a context, parallel computation is explored to tackle the large computational complexity, a machine assignment approximation algorithm is proposed to accelerate smart home scheduling, and a new hierarchical framework is proposed to reduce the communication overhead. The simulation results on large test cases demonstrate that the city-level hierarchical smart home scheduling can achieve a bill reduction of 43.04% and a PAR reduction of 47.50% on average.
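For a flavor of the dynamic programming formulation, the sketch below schedules a single appliance with discrete power levels against hourly prices, minimizing cost over states (time slot, energy delivered so far); it is an illustrative toy under assumed prices and levels, not the paper's multi-appliance algorithm or its game-theoretic extension.

```python
# Minimal sketch (illustrative, not the paper's algorithm): DP over
# (slot, delivered energy) choosing a discrete power level per hour so that the
# required total energy is delivered at minimum cost under hourly prices.
def schedule(prices, levels, required_energy):
    """prices: $/kWh per slot; levels: allowed kWh per slot; required_energy: int kWh."""
    INF = float("inf")
    best = [0.0] + [INF] * required_energy          # best[e] = min cost to deliver e kWh
    choice = [[None] * (required_energy + 1) for _ in prices]
    for t, price in enumerate(prices):
        new_best = [INF] * (required_energy + 1)
        for e in range(required_energy + 1):
            if best[e] == INF:
                continue
            for lv in levels:                        # level 0 means "appliance off"
                e2 = min(required_energy, e + lv)
                c = best[e] + price * lv
                if c < new_best[e2]:
                    new_best[e2] = c
                    choice[t][e2] = (e, lv)
        best = new_best
    # backtrack the chosen power levels
    plan, e = [], required_energy
    for t in range(len(prices) - 1, -1, -1):
        e_prev, lv = choice[t][e]
        plan.append(lv)
        e = e_prev
    return best[required_energy], plan[::-1]

prices = [0.30, 0.28, 0.12, 0.10, 0.25]   # peak vs. off-peak $/kWh (assumed)
cost, plan = schedule(prices, levels=[0, 1, 2], required_energy=4)
print(cost, plan)
```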

Journal ArticleDOI
TL;DR: Simulation results show that the faster the nodes move in the network, the more stable the topology that can be found, which makes the scheme quite suitable for UAV applications.
Abstract: In recent years, unmanned aerial vehicles (UAVs) have been widely adopted in military and civilian applications. For small UAVs, cooperation based on communication networks can effectively expand their working area. Although UAV networks are quite similar to traditional mobile ad hoc networks, the special characteristics of the UAV application scenario have not been considered in the literature. In this paper, we propose a distributed gateway selection algorithm with dynamic network partition, taking into account the application characteristics of UAV networks. In the proposed algorithm, the influence of the information asymmetry phenomenon on UAVs’ topology control is weakened by dividing the network into several subareas. During the operation of the network, the partition of the network can be adaptively adjusted to keep the whole network topology stable even though UAVs are moving rapidly. Meanwhile, the number of gateways can be completely controlled according to the system requirements. In particular, we define the stability of UAV networks, build a network partition model, and design a distributed gateway selection algorithm. Simulation results show that, with our proposed scheme, the faster the nodes move in the network, the more stable the topology that can be found, which is quite suitable for UAV applications.

Journal ArticleDOI
TL;DR: This paper introduces a new coverage quality metric, proposes an adaptive framework to deal with energy prediction fluctuations, and formulates a novel coverage quality maximization problem that considers both sensing coverage quality and the connectivity of the network consisting of active sensors and the base station.
Abstract: Sensing coverage is a fundamental problem in wireless sensor networks for event detection, environment monitoring, and surveillance purposes. In this paper, we study the sensing coverage problem in an energy harvesting sensor network deployed for monitoring a set of targets over a given monitoring period, where sensors are powered by renewable energy sources and operate in duty-cycle mode. We first introduce a new coverage quality metric to measure the coverage quality at two different time scales. We then formulate a novel coverage quality maximization problem that considers both sensing coverage quality and the connectivity of the network consisting of active sensors and the base station. Due to the NP-hardness of the problem, we instead devise efficient centralized and distributed algorithms for the problem, assuming that the harvesting energy prediction at each sensor is accurate during the entire monitoring period. Otherwise, we propose an adaptive framework to deal with energy prediction fluctuations, under which we show that the proposed centralized and distributed algorithms are still applicable. We finally evaluate the performance of the proposed algorithms through experimental simulations. Experimental results demonstrate that the proposed solutions are promising.

Journal ArticleDOI
TL;DR: This paper proposes a Wikipedia-based semantic similarity measurement method that is intended for real-world noisy short texts and is effective especially when the short text is semantically noisy, i.e., when it contains meaningless or misleading terms for estimating its main topic.
Abstract: This paper proposes a Wikipedia-based semantic similarity measurement method that is intended for real-world noisy short texts. Our method is a kind of explicit semantic analysis (ESA), which adds a bag of Wikipedia entities (Wikipedia pages) to a text as its semantic representation and uses the vector of entities for computing the semantic similarity. Adding related entities to a text, rather than to a single word or phrase, is a challenging practical problem because it usually consists of several subproblems, e.g., key term extraction from texts, related entity finding for each key term, and weight aggregation of related entities. Our proposed method solves this aggregation problem using extended naive Bayes, a probabilistic weighting mechanism based on Bayes’ theorem. Our method is effective especially when the short texts are semantically noisy, i.e., when they contain meaningless or misleading terms for estimating their main topic. Experimental results on Twitter message and Web snippet clustering revealed that our method outperformed ESA for noisy short texts. We also found that reducing the dimension of the vector to representative Wikipedia entities scarcely affected the performance while decreasing the vector size and hence the storage space and the processing time of computing the cosine similarity.
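The final similarity computation can be illustrated as cosine similarity between weighted bags of Wikipedia entities, as sketched below; the entity weights are hand-made placeholders, whereas in the paper they come from the extended naive Bayes aggregation over key terms.

```python
# Minimal sketch of the final similarity step: each short text is represented
# as a weighted bag of Wikipedia entities and compared by cosine similarity.
# The entity weights here are placeholders, not extended-naive-Bayes outputs.
import math

def cosine(u, v):
    common = set(u) & set(v)
    dot = sum(u[e] * v[e] for e in common)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

text_a = {"Apple Inc.": 0.8, "IPhone": 0.6, "Smartphone": 0.3}
text_b = {"Smartphone": 0.7, "Android (operating system)": 0.6, "Google": 0.4}
text_c = {"Apple": 0.9, "Fruit": 0.5, "Orchard": 0.3}   # the noisy/ambiguous case

print(cosine(text_a, text_b))   # related topic -> relatively high
print(cosine(text_a, text_c))   # different sense of "apple" -> low
```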

Journal ArticleDOI
TL;DR: A practical implementation and experimental evaluations of S-Aframe are presented to demonstrate its reliability and efficiency in terms of computation and communication performance on popular mobile devices and a VSN-based smart ride application is developed to demonstrate the functionality and practical usefulness of the framework.
Abstract: This paper presents S-Aframe, an agent-based multilayer framework with context-aware semantic service (CSS) to support the development and deployment of context-aware applications for vehicular social networks (VSNs) formed by in-vehicle or mobile devices used by drivers, passengers, and pedestrians. The programming model of the framework incorporates features that support collaborations between mobile agents, which provide communication services on behalf of owner applications, and service (or resident) agents, which provide application services on mobile devices. Using this model, different self-adaptive applications and services for VSNs can be effectively developed and deployed. Built on top of the mobile devices’ operating systems, the framework architecture consists of a framework service layer, a software agent layer, and an owner application layer. Integrated with the proposed novel CSS, applications developed on the framework can autonomously and intelligently self-adapt to rapidly changing network connectivity and the dynamic contexts of VSN users. A practical implementation and experimental evaluations of S-Aframe are presented to demonstrate its reliability and efficiency in terms of computation and communication performance on popular mobile devices. In addition, a VSN-based smart ride application is developed to demonstrate the functionality and practical usefulness of S-Aframe.

Journal ArticleDOI
TL;DR: The advantages between legitimate partners are extended by developing novel security codes on top of the proposed cross-layer DFRFT security communication model, aiming to achieve an error-free legitimate channel while preventing the eavesdropper from obtaining any useful information.
Abstract: The discrete fractional Fourier transform (DFRFT) is a generalization of the discrete Fourier transform. There are a number of DFRFT proposals, which are useful for various signal processing applications. This paper investigates practical solutions toward the construction of unconditionally secure communication systems based on DFRFT via a cross-layer approach. By introducing a distort signal parameter, the sender randomly flip-flops between the distort signal parameter and the general signal parameter to confuse the attacker, and the advantages of the legitimate partners are guaranteed. We extend the advantages between legitimate partners by developing novel security codes on top of the proposed cross-layer DFRFT security communication model, aiming to achieve an error-free legitimate channel while preventing the eavesdropper from obtaining any useful information. Thus, a strong cross-layer secure mobile communication model is built.

Journal ArticleDOI
TL;DR: A new strategy for deploying Internet gateways on the roads, together with a novel scheme for data packet routing, in order to allow a vehicle to access the Internet via multihop communications in a VANET.
Abstract: In-vehicle Internet access is one of the main applications of vehicular ad hoc networks (VANETs), which aims at providing the vehicle passengers with a low-cost access to the Internet via on-road gateways. This paper introduces a new strategy for deploying Internet gateways on the roads, together with a novel scheme for data packet routing, in order to allow a vehicle to access the Internet via multihop communications in a VANET. The gateway placement strategy is to minimize the total cost of gateway deployment, while ensuring that a vehicle can connect to an Internet gateway (using multihop communications) with a probability greater than a specified threshold. This cost-minimization problem is formulated using binary integer programming, and applied to a realistic city scenario, consisting of the roads around the University of Waterloo, Waterloo, ON, Canada. To the best of our knowledge, the proposed deployment strategy is the first study to address the probability of multihop connectivity among the vehicles and the deployed gateways. On the other hand, the developed packet routing scheme is based on a multichannel medium access control protocol, known as VeMAC, using time division multiple access. The performance of this cross-layer design is evaluated for a multichannel VANET in a highway scenario, mainly in terms of the end-to-end packet delivery delay. The end-to-end delay is calculated by modeling each relay vehicle as a queuing system, in which the packets are served in batches of no more than a specified maximum batch size. The proposed gateway placement and packet routing schemes represent a step toward providing reliable and ubiquitous in-vehicle Internet connectivity.

Journal ArticleDOI
TL;DR: It is shown that the cell association problem is NP-hard, and several near-optimal solution algorithms for assigning users to BSs are proposed, including a sequential fixing algorithm, a rounding approximation algorithm, a greedy approximation algorithm, and a randomized algorithm.
Abstract: Femtocells are recognized as effective for improving network coverage and capacity and for reducing power consumption due to the reduced range of wireless transmissions. Although highly appealing, a plethora of challenging problems need to be addressed to fully harvest their potential. In this paper, we investigate the problem of cell association and service scheduling in femtocell networks. In addition to the general goal of offloading traffic from the macro base station (BS), we also aim at minimizing the latency of service requested by the users, while considering both open and closed access strategies. We show that the cell association problem is NP-hard, and propose several near-optimal solution algorithms for assigning users to BSs, including a sequential fixing algorithm, a rounding approximation algorithm, a greedy approximation algorithm, and a randomized algorithm. For service scheduling, we develop an optimal algorithm to minimize the average waiting time for the users associated with the same BS. The proposed algorithms are analyzed with respect to performance bounds, approximation ratios, and optimality, and are evaluated with simulations.
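As an illustration of the greedy style of association heuristic mentioned above, the sketch below assigns each user to the feasible base station that currently adds the least estimated latency; the capacity limits, rates, and latency proxy are assumptions, not the paper's formulation.

```python
# Minimal sketch of a greedy user-to-BS association heuristic; capacities,
# rates, and the latency proxy are illustrative assumptions.
def greedy_associate(users, stations):
    """users: list of demands; stations: dict name -> {'rate', 'capacity', 'load'}."""
    assignment = {}
    for uid, demand in enumerate(users):
        best_bs, best_latency = None, float("inf")
        for name, bs in stations.items():
            if len(bs["load"]) >= bs["capacity"]:
                continue                       # cell already at capacity
            # simple latency proxy: (queued demand + this demand) / service rate
            latency = (sum(bs["load"]) + demand) / bs["rate"]
            if latency < best_latency:
                best_bs, best_latency = name, latency
        assignment[uid] = best_bs
        if best_bs is not None:
            stations[best_bs]["load"].append(demand)
    return assignment

stations = {
    "macro":  {"rate": 10.0, "capacity": 100, "load": []},
    "femto1": {"rate": 4.0,  "capacity": 4,   "load": []},
    "femto2": {"rate": 4.0,  "capacity": 4,   "load": []},
}
print(greedy_associate([3, 1, 2, 5, 1, 2], stations))
```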

Journal ArticleDOI
TL;DR: An expert selection system that learns online the best expert to assign to each patient depending on the context of the patient, and it is proved that as the number of patients grows, the proposed context-adaptive algorithm will discover the optimal expert to select for patients with a specific context.
Abstract: In this paper, we propose an expert selection system that learns online the best expert to assign to each patient depending on the context of the patient. In general, the context can include an enormous number and variety of information related to the patient’s health condition, age, gender, previous drug doses, and so forth, but the most relevant information is embedded in only a few contexts. If these most relevant contexts were known in advance, learning would be relatively simple but they are not. Moreover, the relevant contexts may be different for different health conditions. To address these challenges, we develop a new class of algorithms aimed at discovering the most relevant contexts and the best clinic and expert to use to make a diagnosis given a patient’s contexts. We prove that as the number of patients grows, the proposed context-adaptive algorithm will discover the optimal expert to select for patients with a specific context. Moreover, the algorithm also provides confidence bounds on the diagnostic accuracy of the expert it selects, which can be considered by the primary care physician before making the final decision. While our algorithm is general and can be applied in numerous medical scenarios, we illustrate its functionality and performance by applying it to a real-world breast cancer diagnosis data set. Finally, while the application we consider in this paper is medical diagnosis, our proposed algorithm can be applied in other environments where expertise needs to be discovered.
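The online learning flavor can be illustrated with a simple epsilon-greedy selector that keeps an empirical accuracy estimate per (context, expert) pair; this is only a sketch with an assumed single coarse context feature and synthetic feedback, not the paper's context-adaptive algorithm or its confidence bounds.

```python
# Minimal sketch of online expert selection per observed context using
# epsilon-greedy exploration; contexts, experts, and feedback are synthetic.
import random
from collections import defaultdict

class ExpertSelector:
    def __init__(self, experts, epsilon=0.1):
        self.experts = experts
        self.epsilon = epsilon
        self.correct = defaultdict(int)   # (context, expert) -> correct diagnoses
        self.trials = defaultdict(int)    # (context, expert) -> patients seen

    def select(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.experts)      # explore
        def score(e):
            t = self.trials[(context, e)]
            return self.correct[(context, e)] / t if t else 1.0  # optimistic start
        return max(self.experts, key=score)         # exploit

    def update(self, context, expert, was_correct):
        self.trials[(context, expert)] += 1
        self.correct[(context, expert)] += int(was_correct)

sel = ExpertSelector(["clinic_A", "clinic_B"])
for _ in range(1000):
    ctx = random.choice(["age<50", "age>=50"])      # a single coarse context feature
    expert = sel.select(ctx)
    # assumed ground truth: clinic_A is better for younger patients, B for older
    p = 0.9 if (expert == "clinic_A") == (ctx == "age<50") else 0.6
    sel.update(ctx, expert, random.random() < p)
print({k: round(sel.correct[k] / sel.trials[k], 2) for k in sel.trials})
```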

Journal ArticleDOI
TL;DR: The static metrics and switching attributes of graphene nanoribbon field-effect transistors (GNR FETs) for scaling the channel length from 15 nm down to 2.5 nm and GNR width by approaching the ultimate vertical scaling of oxide thickness are investigated.
Abstract: In this paper, we investigate the static metrics and switching attributes of graphene nanoribbon field-effect transistors (GNR FETs) when scaling the channel length from 15 nm down to 2.5 nm and the GNR width, while approaching the ultimate vertical scaling of the oxide thickness. We simulate the double-gate GNR FET by solving a numerical quantum transport model based on the self-consistent solution of the 3D Poisson equation and the 1D Schrodinger equation within the non-equilibrium Green’s function formalism. The narrow armchair GNR, e.g., (7,0), improves the device robustness to short-channel effects, leading to better OFF-state performance in terms of OFF-current, ION/IOFF ratio, subthreshold swing, and drain-induced barrier lowering. The wider armchair GNRs allow the scaling of channel length and supply voltage, resulting in better ON-state performance, such as a larger intrinsic cut-off frequency for channel lengths below 7.5 nm at smaller gate voltage, as well as a smaller intrinsic gate-delay time with a constant slope when scaling the channel length and supply voltage. The wider armchair GNRs, e.g., (13,0), have a smaller power-delay product when scaling the channel length and supply voltage, reaching ~0.18 fJ/μm.

Journal ArticleDOI
TL;DR: A new point of view is presented that makes it possible to give a statistical interpretation of the traditional latent semantic analysis (LSA) paradigm based on the truncated singular value decomposition (TSVD) technique by introducing a figure of merit arising from the Solomonoff approach.
Abstract: The aim of this paper is to present a new point of view that makes it possible to give a statistical interpretation of the traditional latent semantic analysis (LSA) paradigm based on the truncated singular value decomposition (TSVD) technique. We show how the TSVD can be interpreted as a statistical estimator derived from the LSA co-occurrence relationship matrix by mapping probability distributions on Riemannian manifolds. Moreover, the quality of the estimator model can be expressed by introducing a figure of merit arising from the Solomonoff approach. This figure of merit takes into account both the adherence to the sample data and the simplicity of the model. In our model, the simplicity parameter of the proposed figure of merit depends on the number of singular values retained after the truncation process, while the TSVD estimator, according to the Hellinger distance, guarantees the minimal distance between the sample probability distribution and the inferred probabilistic model.
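For readers less familiar with the underlying operation, the sketch below performs the plain TSVD step that LSA relies on, keeping the k largest singular values of a toy term-document matrix; the statistical interpretation proposed in the paper sits on top of this standard decomposition.

```python
# Minimal sketch of the TSVD step underlying LSA; the toy matrix and k are
# illustrative, and the paper's statistical reading is not reproduced here.
import numpy as np

def truncated_svd(X, k):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

# rows = terms, columns = documents (toy co-occurrence counts)
X = np.array([[2., 0., 1., 0.],
              [1., 0., 2., 0.],
              [0., 3., 0., 1.],
              [0., 1., 0., 2.]])
U_k, s_k, Vt_k = truncated_svd(X, k=2)
X_hat = U_k @ np.diag(s_k) @ Vt_k          # rank-2 reconstruction
print("rank-2 approximation error:", round(np.linalg.norm(X - X_hat), 3))
```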

Journal ArticleDOI
TL;DR: It is shown that the CS-based cryptosystem achieves the extended Wyner-sense perfect secrecy, but when the key is used repeatedly, both the plaintext and the key could be conditionally accessed.
Abstract: With the development of cyber-physical systems (CPS), the security analysis of the data therein becomes more and more important. Recently, due to the advantage of joint encryption and compression for data transmission in CPS, the emerging compressed sensing (CS)-based cryptosystem has attracted much attention, and its security is of extreme importance. The existing methods only analyze the security of the plaintext under the assumption that the key is absolutely safe. However, for sparse plaintext, the prior sparsity knowledge of the plaintext could be exploited to partly retrieve the key, and then the plaintext, from the ciphertext. So, the existing methods do not provide a satisfactory security analysis. In this paper, the security analysis is conducted within an information-theoretic framework, where the plaintext sparsity feature and the mutual information of the ciphertext, key, and plaintext are involved. In addition, the perfect secrecy criteria (Shannon-sense and Wyner-sense) are extended to measure the security. While the security level is given, the illegal access risk is also discussed. It is shown that the CS-based cryptosystem achieves the extended Wyner-sense perfect secrecy, but when the key is used repeatedly, both the plaintext and the key could be conditionally accessed.

Journal ArticleDOI
TL;DR: This paper analyzes the post-disaster communication issue, formulates this problem as a Mixed Integer Linear Programming (MILP) problem, and proposes an off-line energy efficient scheme using the expectation of traffic demands.
Abstract: We study the emergency communication problem in the post-disaster scenario. The emergence of the Renewable Energy-enabled Base Station (REBS), which is pre-equipped with energy harvesting devices, provides a new perspective on this problem, since the post-disaster communication scenario happens to be a Wireless Mesh Network (WMN) constituted by REBSs. However, one needs to address the unstable and inadequate power supply, the insufficient capacities of the communication links, and the time-varying traffic demands over a period of time. In this paper, we deal with these problems and focus on optimizing data traffic throughput with the lowest weighted energy consumption based on the expectation of traffic demands. We first analyze the post-disaster communication issue, formulate this problem as a Mixed Integer Linear Programming (MILP) problem, and propose an off-line energy efficient scheme using the expectation of traffic demands. Furthermore, an on-line scheme is put forward that dynamically adjusts the off-line result with knowledge of the current real data traffic demands. The efficiency of our proposal is demonstrated by theoretical analyses and numerical results.

Journal ArticleDOI
TL;DR: Interestingly, the night burst phenomenon of college students is revealed by comparing base station locations with a real-world map, and it is concluded that it is not appropriate to model voice call arrivals as a Poisson process.
Abstract: With the impressive development of wireless devices and growth of mobile users, telecommunication operators are eager to understand the characteristics of mobile network behavior. Based on the big data generated in telecommunication networks, telecommunication operators are able to obtain substantial insights using big data analysis and computing techniques. This paper introduces the important aspects of this topic, including data set information, data analysis techniques, and two case studies. We categorize the data sets in telecommunication networks into two types, user-oriented and network-oriented, and discuss the potential applications. Then, several important data analysis techniques are summarized and reviewed, from temporal and spatial analysis to data mining and statistical tests. Finally, we present two case studies, using Erlang measurements and call detail records, respectively, to understand base station behavior. Interestingly, the night burst phenomenon of college students is revealed by comparing base station locations with a real-world map, and we conclude that it is not appropriate to model voice call arrivals as a Poisson process.
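One simple way to probe the Poisson assumption mentioned in the conclusion is the index of dispersion of per-interval call counts (variance over mean, which is about 1 for a Poisson process and much larger for bursty traffic). The sketch below uses synthetic counts as placeholders for real per-hour call volumes.

```python
# Minimal sketch of a dispersion-index check for the Poisson assumption;
# the synthetic counts stand in for real per-hour call counts.
import numpy as np

def dispersion_index(counts):
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

poisson_like = np.random.poisson(lam=20, size=500)
bursty = np.concatenate([np.random.poisson(5, 250),     # quiet hours
                         np.random.poisson(60, 250)])   # night-burst hours
print("Poisson-like counts:", round(dispersion_index(poisson_like), 2))  # ~1
print("bursty counts:      ", round(dispersion_index(bursty), 2))        # >> 1
```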

Journal ArticleDOI
TL;DR: It is found that the electron mean-free path in GNRs strongly depends on the surrounding dielectric environment, and the mean-free path increases with interlayer insertion of high-k dielectrics, which improves the performance of the proposed GNR interconnects.
Abstract: Due to their higher resistance, single-layer graphene nanoribbons (GNRs) are not suitable for high-speed on-chip interconnect applications. Hence, we use multilayer GNRs (MLGNRs) that offer multiple conduction channels and lower resistance. However, MLGNRs turn into graphite as the number of layers increases, which reduces the mean-free path of each layer. Insertion of a dielectric between GNR layers prevents this conversion into graphite, thereby improving the mean-free path and scattering rate. In this paper, we propose an analytical model for the time-domain analysis of side-contact MLGNRs (SC-MLGNRs) with intermediate dielectric insertion. The proposed model computes the scattering rate, mean-free path, and carrier mobility in GNRs where a dielectric has been inserted between individual layers. Our analytical results for mobility and scattering rate have been verified against existing experimental data and exhibit excellent accuracy (maximum error of 4% for mobility and 16% for scattering time). Based on our analysis, we find that the electron mean-free path in GNRs strongly depends on the surrounding dielectric environment; in particular, the mean-free path increases with interlayer insertion of high-k dielectrics. Equivalent RLC parameters, delay, energy-delay product, and bandwidth density are calculated for our proposed GNR interconnects using our model. We observe that these performance metrics improve significantly due to the presence of the dielectric between GNR layers. When compared with Cu interconnects, insertion of HfO2 between GNR layers reduces both propagation delay and energy-delay product by 2× for interconnect lengths of 1400 μm. In addition, a zigzag SC-MLGNR interconnect with N = 10 and ε2 = 20 gives nearly 35% higher bandwidth density than that of Cu interconnects for all interconnect lengths. In our analysis, we propose a new performance metric, bandwidth density/energy-delay product, to determine the performance limits of our proposed interconnect structure. Finally, we compare the performance of the SC-MLGNR interconnect structure with copper and optical interconnects to exhibit its applicability to local and global interconnects.

Journal ArticleDOI
TL;DR: This paper presents a distributed multiagent-based task allocation model by dispatching a mobile and cooperative agent to each subtask of each complex task, which also addresses the objective of social effectiveness maximization.
Abstract: In many social networks (SNs), social individuals often need to work together to accomplish a complex task (e.g., software product development). In the context of SNs, due to the presence of social connections, complex task allocation must achieve satisfactory social effectiveness; in other words, each complex task should be allocated to socially close individuals to enable them to communicate and collaborate effectively. Although several approaches have been proposed to tackle this so-called social task allocation problem, they either suffer from being centralized or ignore the objective of maximizing social effectiveness. In this paper, we present a distributed multiagent-based task allocation model that dispatches a mobile and cooperative agent to each subtask of each complex task and addresses the objective of social effectiveness maximization. With respect to mobility, each agent can transport itself to a suitable individual that has the relevant capability. With respect to cooperativeness, agents can cooperate with each other by forming teams and moving to a suitable individual jointly if the cooperation is beneficial. Our theoretical analyses provide provable performance guarantees for this model. We also apply this model in a set of static and dynamic network settings to investigate its effectiveness, scalability, and robustness. The experimental results show that our model is effective in improving the system load balance and social effectiveness; it is scalable in reducing the computation time and robust in adapting to system dynamics.

Journal ArticleDOI
TL;DR: A sparse linear integration (SLI) model that focuses on integrating visual content and its associated metadata, which are referred to as the content and the context modalities, respectively, for semantic concept retrieval is proposed.
Abstract: The semantic gap between low-level visual features and high-level semantics is a well-known challenge in content-based multimedia information retrieval. With the rapid popularization of social media, which allows users to assign tags to describe images and videos, attention is naturally drawn to take advantage of these metadata in order to bridge the semantic gap. This paper proposes a sparse linear integration (SLI) model that focuses on integrating visual content and its associated metadata, which are referred to as the content and the context modalities, respectively, for semantic concept retrieval. An optimization problem is formulated to approximate an instance using a sparse linear combination of other instances and minimize the difference between them. The prediction score of a concept for a test instance measures how well it can be reconstructed by the positive instances of that concept. Two benchmark image data sets and their associated tags are used to evaluate the SLI model. Experimental results show promising performance by comparing with the approaches based on a single modality and approaches based on popular fusion methods.
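The core reconstruction step, approximating a test instance as a sparse linear combination of other instances, can be sketched with a plain L1-regularized least-squares solver (ISTA), as below; the data, regularization weight, and iteration count are illustrative, and the paper's concept scoring and content-context integration are not reproduced here.

```python
# Minimal sketch of sparse reconstruction of a test instance x from other
# instances (columns of D) via iterative soft-thresholding (ISTA); all data
# and hyperparameters are illustrative assumptions.
import numpy as np

def sparse_coefficients(D, x, lam=0.1, iters=500):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 over a, via ISTA."""
    step = 1.0 / np.linalg.norm(D.T @ D, 2)       # 1 / Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x)
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 8))                      # 8 reference instances, 20 features
x = 0.7 * D[:, 2] + 0.3 * D[:, 5]                 # x truly uses only two of them
a = sparse_coefficients(D, x)
print(np.round(a, 2))                             # large weights on columns 2 and 5
```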