
Showing papers in "IEEE Access in 2014"


Journal ArticleDOI
TL;DR: A brief overview of deep learning is provided, and current research efforts and the challenges to big data, as well as the future trends are highlighted.
Abstract: Deep learning is currently an extremely active research area in the machine learning and pattern recognition community. It has gained huge success in a broad range of applications such as speech recognition, computer vision, and natural language processing. With the sheer size of data available today, big data brings big opportunities and transformative potential for various sectors; on the other hand, it also presents unprecedented challenges to harnessing data and information. As the data keeps getting bigger, deep learning is coming to play a key role in providing big data predictive analytics solutions. In this paper, we provide a brief overview of deep learning, and highlight current research efforts and the challenges to big data, as well as the future trends.

1,003 citations


Journal ArticleDOI
TL;DR: This paper presents a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics, and presents the prevalent Hadoop framework for addressing big data challenges.
Abstract: Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and requires more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.

1,002 citations


Journal ArticleDOI
TL;DR: The state of the art in motion planning is surveyed and selected planners that tackle current issues in robotics are addressed; for instance, real-life kinodynamic planning, optimal planning, replanning in dynamic environments, and planning under uncertainty are discussed.
Abstract: Motion planning is a fundamental research area in robotics. Sampling-based methods offer an efficient solution to what is otherwise the rather challenging problem of path planning. Consequently, these methods have been extended beyond basic robot planning to more difficult scenarios and diverse applications. A comprehensive survey of the growing body of work in sampling-based planning is given here. Simulations are executed to evaluate some of the proposed planners and highlight implementation details that are often left unspecified. An emphasis is placed on contemporary research directions in this field. We address planners that tackle current issues in robotics; for instance, real-life kinodynamic planning, optimal planning, replanning in dynamic environments, and planning under uncertainty are discussed. The aim of this paper is to survey the state of the art in motion planning, assess selected planners, examine implementation details, and, above all, shed light on the current challenges in motion planning and the promising approaches that will potentially overcome them.
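
As a point of reference for readers new to the area covered by this survey, the following is a minimal 2-D RRT sketch in Python; the workspace bounds, step size, goal bias, and collision check are illustrative assumptions, not details taken from the paper.

import math
import random

def rrt(start, goal, is_free, step=0.5, max_iters=5000, goal_tol=0.5):
    """Minimal 2-D RRT: grow a tree of collision-free states toward random samples."""
    nodes = [start]            # tree vertices
    parent = {0: None}         # index of each vertex's parent
    for _ in range(max_iters):
        # Sample a random configuration, with a small bias toward the goal (assumed 10x10 workspace).
        q_rand = goal if random.random() < 0.05 else (random.uniform(0, 10), random.uniform(0, 10))
        # Find the nearest existing tree node.
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], q_rand))
        q_near = nodes[i_near]
        d = math.dist(q_near, q_rand)
        if d == 0:
            continue
        # Steer a fixed step from q_near toward q_rand.
        q_new = (q_near[0] + step * (q_rand[0] - q_near[0]) / d,
                 q_near[1] + step * (q_rand[1] - q_near[1]) / d)
        if not is_free(q_new):  # discard samples that collide with obstacles
            continue
        parent[len(nodes)] = i_near
        nodes.append(q_new)
        if math.dist(q_new, goal) < goal_tol:  # goal reached: backtrack the path
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None                # no path found within the iteration budget

# Toy example: an empty 10x10 workspace (every configuration is collision-free).
print(rrt((1.0, 1.0), (9.0, 9.0), is_free=lambda q: True))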

602 citations


Journal ArticleDOI
TL;DR: This survey is intended to serve as a guideline and a conceptual framework for context-aware product development and research in the IoT paradigm and provides a systematic exploration of existing IoT products in the marketplace and highlights a number of potentially significant research directions and trends.
Abstract: The Internet of Things (IoT) is a dynamic global information network consisting of Internet-connected objects, such as radio frequency identifications, sensors, and actuators, as well as other instruments and smart appliances that are becoming an integral component of the Internet. Over the last few years, we have seen a plethora of IoT solutions making their way into the industry marketplace. Context-aware communications and computing have played a critical role throughout the last few years of ubiquitous computing and are expected to play a significant role in the IoT paradigm as well. In this paper, we examine a variety of popular and innovative IoT solutions in terms of context-aware technology perspectives. More importantly, we evaluate these IoT solutions using a framework that we built around well-known context-aware computing theories. This survey is intended to serve as a guideline and a conceptual framework for context-aware product development and research in the IoT paradigm. It also provides a systematic exploration of existing IoT products in the marketplace and highlights a number of potentially significant research directions and trends.

547 citations


Journal ArticleDOI
Lei Xu, Chunxiao Jiang, Jian Wang, Jian Yuan, Yong Ren
TL;DR: This paper identifies four different types of users involved in data mining applications, namely, data provider, data collector, data miner, and decision maker, and examines various approaches that can help to protect sensitive information.
Abstract: The growing popularity and development of data mining technologies pose a serious threat to the security of individuals' sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way that data mining algorithms can be performed effectively without compromising the security of sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while in fact, unwanted disclosure of sensitive information may also happen in the process of data collecting, data publishing, and information (i.e., the data mining results) delivering. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help to protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely, data provider, data collector, data miner, and decision maker. For each type of user, we discuss their privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review the game theoretical approaches, which are proposed for analyzing the interactions among different users in a data mining scenario, each of whom has their own valuation of the sensitive information. By differentiating the responsibilities of different users with respect to the security of sensitive information, we would like to provide some useful insights into the study of PPDM.

528 citations


Journal ArticleDOI
TL;DR: The development process used to design a novelty six-sided gaming die is described; the die includes a microprocessor and accelerometer, which together detect motion and, upon halting, identify the top surface through gravity and illuminate light-emitting diodes for a striking effect.
Abstract: In new product development, time to market (TTM) is critical for the success and profitability of next generation products. When these products include sophisticated electronics encased in 3D packaging with complex geometries and intricate detail, TTM can be compromised, resulting in lost opportunity. The use of advanced 3D printing technology enhanced with component placement and electrical interconnect deposition can provide electronic prototypes that can now be rapidly fabricated in time frames comparable to traditional 2D bread-boarded prototypes; however, these 3D prototypes have the advantage of being embedded within more appropriate shapes in order to authentically prototype products earlier in the development cycle. The fabrication freedom offered by 3D printing techniques, such as stereolithography and fused deposition modeling, has recently been explored in the context of 3D electronics integration, referred to as 3D structural electronics or 3D printed electronics. Enhanced 3D printing may eventually be employed to manufacture end-use parts and thus offer unit-level customization with local manufacturing; however, until the materials and dimensional accuracies improve (an eventuality), 3D printing technologies can be employed to reduce development times by providing advanced, geometrically appropriate electronic prototypes. This paper describes the development process used to design a novelty six-sided gaming die. The die includes a microprocessor and accelerometer, which together detect motion and, upon halting, identify the top surface through gravity and illuminate light-emitting diodes for a striking effect. By applying 3D printing of structural electronics to expedite prototyping, the development cycle was reduced from weeks to hours.
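
The abstract above describes firmware that identifies the die's top face from gravity once motion stops; the short sketch below illustrates that idea. The rest test, axis sign convention, and axis-to-face mapping are hypothetical, not taken from the paper.

def is_at_rest(ax, ay, az, tol=0.05):
    """Roughly at rest when the measured acceleration magnitude is close to 1 g."""
    return abs((ax ** 2 + ay ** 2 + az ** 2) ** 0.5 - 1.0) < tol

def top_face(ax, ay, az):
    """Return which face points up, given accelerometer readings in g.
    At rest the sensor measures only gravity, so the axis with the largest
    magnitude indicates which face is up or down."""
    axes = {'x': ax, 'y': ay, 'z': az}
    dominant = max(axes, key=lambda k: abs(axes[k]))
    sign = '+' if axes[dominant] > 0 else '-'
    # Hypothetical mapping from sensor-axis orientation to die faces.
    face_map = {('+', 'z'): 1, ('-', 'z'): 6, ('+', 'y'): 2,
                ('-', 'y'): 5, ('+', 'x'): 3, ('-', 'x'): 4}
    return face_map[(sign, dominant)]

# Example: gravity along +z and the die nearly stationary -> face 1 is up
# (under the assumed mapping), so its LEDs would be lit.
print(is_at_rest(0.02, -0.01, 0.99), top_face(0.02, -0.01, 0.99))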

500 citations


Journal ArticleDOI
TL;DR: The proposed dynamic clustering algorithm can achieve significant performance gain over existing naive clustering schemes and is shown to solve the weighted sum rate maximization problem through a generalized weighted minimum mean square error approach.
Abstract: This paper considers a downlink cloud radio access network (C-RAN) in which all the base-stations (BSs) are connected to a central computing cloud via digital backhaul links with finite capacities. Each user is associated with a user-centric cluster of BSs; the central processor shares the user's data with the BSs in the cluster, which then cooperatively serve the user through joint beamforming. Under this setup, this paper investigates the user scheduling, BS clustering, and beamforming design problem from a network utility maximization perspective. Differing from previous works, this paper explicitly considers the per-BS backhaul capacity constraints. We formulate the network utility maximization problem for the downlink C-RAN under two different models depending on whether the BS clustering for each user is dynamic or static over different user scheduling time slots. In the former case, the user-centric BS cluster is dynamically optimized for each scheduled user along with the beamforming vector in each time-frequency slot, whereas in the latter case, the user-centric BS cluster is fixed for each user and we jointly optimize the user scheduling and the beamforming vector to account for the backhaul constraints. In both cases, the nonconvex per-BS backhaul constraints are approximated using the reweighted ℓ1-norm technique. This approximation allows us to reformulate the per-BS backhaul constraints into weighted per-BS power constraints and solve the weighted sum rate maximization problem through a generalized weighted minimum mean square error approach. This paper shows that the proposed dynamic clustering algorithm can achieve significant performance gain over existing naive clustering schemes. This paper also proposes two heuristic static clustering schemes that can already achieve a substantial portion of the gain.
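
The reweighted ℓ1 step mentioned above follows the standard iterative heuristic for promoting sparsity; a generic form of the idea, in our own notation rather than the paper's exact formulation, is

% Sketch of the generic reweighted-l1 sparsity heuristic (notation ours, not the paper's).
% w_{l,k}: beamformer of BS l for user k; R_k: user rate; C_l: backhaul capacity of BS l.
\[
  \|\mathbf{w}_{l,k}\|_0 \;\approx\; \beta_{l,k}\,\|\mathbf{w}_{l,k}\|_2^2,
  \qquad
  \beta_{l,k}^{(t+1)} \;=\; \frac{1}{\|\mathbf{w}_{l,k}^{(t)}\|_2^2 + \epsilon},
\]
so that a nonconvex per-BS backhaul constraint of the form \(\sum_k \|\mathbf{w}_{l,k}\|_0\, R_k \le C_l\) is replaced, at each iteration, by the weighted per-BS power-style constraint \(\sum_k \beta_{l,k}\,\|\mathbf{w}_{l,k}\|_2^2\, R_k \le C_l\), which is what makes a WMMSE-type solution applicable.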

409 citations


Journal ArticleDOI
TL;DR: The goal of this paper is to provide a comprehensive overview on the work that has been carried out over the last decade in the emerging field of antispoofing, with special attention to the mature and largely deployed face modality.
Abstract: In recent decades, we have witnessed the evolution of biometric technology from the first pioneering works in face and voice recognition to the current state of development wherein a wide spectrum of highly accurate systems may be found, ranging from largely deployed modalities, such as fingerprint, face, or iris, to more marginal ones, such as signature or hand. This path of technological evolution has naturally led to a critical issue that has only started to be addressed recently: the resistance of this rapidly emerging technology to external attacks and, in particular, to spoofing. Spoofing, referred to by the term presentation attack in current standards, is a purely biometric vulnerability that is not shared with other IT security solutions. It refers to the ability to fool a biometric system into recognizing an illegitimate user as a genuine one by means of presenting a synthetic forged version of the original biometric trait to the sensor. The entire biometric community, including researchers, developers, standardizing bodies, and vendors, has thrown itself into the challenging task of proposing and developing efficient protection methods against this threat. The goal of this paper is to provide a comprehensive overview on the work that has been carried out over the last decade in the emerging field of antispoofing, with special attention to the mature and largely deployed face modality. The work covers theories, methodologies, state-of-the-art techniques, and evaluation databases and also aims at providing an outlook into the future of this very active field of research.

366 citations


Journal ArticleDOI
TL;DR: This paper presents the latest progress on cloud RAN (C-RAN) in the areas of centralization and virtualization and demonstrates the viability of various front-haul solutions, including common public radio interface compression, single-fiber bidirectional transmission, and wavelength-division multiplexing.
Abstract: This paper presents the latest progress on cloud RAN (C-RAN) in the areas of centralization and virtualization. A C-RAN system centralizes the baseband processing resources into a pool and virtualizes soft baseband units on demand. The major challenges for C-RAN, including front-haul and virtualization, are analyzed and potential solutions are proposed. Extensive field trials verify the viability of various front-haul solutions, including common public radio interface compression, single-fiber bidirectional transmission, and wavelength-division multiplexing. In addition, C-RAN's facilitation of coordinated multipoint (CoMP) implementation is demonstrated, with 50%-100% uplink CoMP gain observed in field trials. Finally, a test bed is established based on a general-purpose platform with assisted accelerators. It is demonstrated that this test bed can efficiently support multiple RATs, i.e., Time-Division Duplexing Long Term Evolution, Frequency-Division Duplexing Long Term Evolution, and Global System for Mobile Communications, and delivers performance similar to traditional systems.

286 citations


Journal ArticleDOI
TL;DR: This bibliography will aid academic researchers and practicing engineers in adopting appropriate topics and will stimulate utilities toward development and implementation of software packages.
Abstract: Phasor measurement units (PMUs) are rapidly being deployed in electric power networks across the globe. The wide-area measurement system (WAMS), which builds upon PMUs and fast communication links, is consequently emerging as an advanced monitoring and control infrastructure. Rapid adoption of such devices and technologies has led researchers to investigate a multitude of challenges and pursue opportunities in synchrophasor measurement technology, PMU structural design, PMU placement, miscellaneous applications of PMUs from a local perspective, and various WAMS functionalities from the system perspective. Relevant research articles that appeared in IEEE and IET publications from 1983 through 2014 are rigorously surveyed in this paper to represent a panorama of research progress lines. This bibliography will aid academic researchers and practicing engineers in adopting appropriate topics and will stimulate utilities toward the development and implementation of software packages.

239 citations


Journal ArticleDOI
TL;DR: The theory of one-ended and two-ended impedance-based fault location algorithms is presented and what additional information can be gleaned from waveforms recorded by intelligent electronic devices (IEDs) during a fault is assessed.
Abstract: A number of impedance-based fault location algorithms have been developed for estimating the distance to faults in a transmission network. Each algorithm has specific input data requirements and makes certain assumptions that may or may not hold true in a particular fault location scenario. Without a detailed understanding of the principle of each fault-locating method, choosing the most suitable fault location algorithm can be a challenging task. This paper, therefore, presents the theory of one-ended (simple reactance, Takagi, modified Takagi, Eriksson, and Novosel et al.) and two-ended (synchronized, unsynchronized, and current-only) impedance-based fault location algorithms and demonstrates their application in locating real-world faults. The theory details the formulation and input data requirement of each fault-locating algorithm and evaluates the sensitivity of each to the following error sources: 1) load; 2) remote infeed; 3) fault resistance; 4) mutual coupling; 5) inaccurate line impedances; 6) DC offset and CT saturation; 7) three-terminal lines; and 8) tapped radial lines. From the theoretical analysis and field data testing, the following criteria are recommended for choosing the most suitable fault-locating algorithm: 1) data availability and 2) fault location application scenario. Another objective of this paper is to assess what additional information can be gleaned from waveforms recorded by intelligent electronic devices (IEDs) during a fault. Actual fault event data captured in utility networks is exploited to gain valuable feedback about the transmission network upstream from the IED device, and estimate the value of fault resistance.
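
For orientation, the one-ended methods listed above all estimate the per-unit distance m to the fault from local voltage and current phasors. Commonly cited forms of the simple reactance and Takagi estimators, in our notation rather than expressions reproduced from the paper, are

% Assumed notation: V_G, I_G are the local-terminal voltage and current phasors,
% Delta I_G is the superposition ("pure fault") current, Z_1L is the total
% positive-sequence line impedance, and m is the fault distance in per unit of line length.
\[
  m_{\text{reactance}} \;\approx\; \frac{\operatorname{Im}\!\big(V_G / I_G\big)}{\operatorname{Im}\!\big(Z_{1L}\big)},
  \qquad
  m_{\text{Takagi}} \;\approx\; \frac{\operatorname{Im}\!\big(V_G\,\Delta I_G^{*}\big)}{\operatorname{Im}\!\big(Z_{1L}\, I_G\,\Delta I_G^{*}\big)}.
\]
The Takagi form uses the superposition current to reduce the error that fault resistance and load current introduce into the simple reactance estimate, which is among the sensitivities the paper evaluates.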

Journal ArticleDOI
TL;DR: A platform-based methodology is proposed, which enables independent implementation of system topology and control protocol by using a compositional approach and is shown to be effective on a proof-of-concept electric power system design.
Abstract: In an aircraft electric power system, one or more supervisory control units actuate a set of electromechanical switches to dynamically distribute power from generators to loads, while satisfying safety, reliability, and real-time performance requirements. To reduce expensive redesign steps, this control problem is generally addressed by minor incremental changes on top of consolidated solutions. A more systematic approach is hindered by a lack of rigorous design methodologies that allow estimating the impact of earlier design decisions on the final implementation. To achieve an optimal implementation that satisfies a set of requirements, we propose a platform-based methodology for electric power system design, which enables independent implementation of system topology (i.e., interconnection among elements) and control protocol by using a compositional approach. In our flow, design space exploration is carried out as a sequence of refinement steps from the initial specification toward a final implementation by mapping higher level behavioral and performance models into a set of either existing or virtual library components at the lower level of abstraction. Specifications are first expressed using the formalisms of linear temporal logic, signal temporal logic, and arithmetic constraints on Boolean variables. To reason about different requirements, we use specialized analysis and synthesis frameworks and formulate assume-guarantee contracts at the articulation points in the design flow. We show the effectiveness of our approach on a proof-of-concept electric power system design.
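
To make the specification style described above concrete, requirements of this kind are typically written as temporal-logic formulas over generator, contactor, and bus status signals. The two formulas below are invented illustrations in that spirit, not specifications taken from the paper:

% Illustrative (invented) examples of the kind of LTL/STL-style requirements discussed above.
\[
  \mathbf{G}\,\neg\,(c_1 \wedge c_2),
  \qquad
  \mathbf{G}\big(\mathit{fail}_{G_1} \;\rightarrow\; \mathbf{F}_{[0,\,t_{\max}]}\ \mathit{powered}(B_1)\big).
\]
The first is a safety requirement that contactors \(c_1\) and \(c_2\) never simultaneously connect two independent generators to the same bus; the second is a bounded-time requirement that a critical bus \(B_1\) is re-powered within \(t_{\max}\) after generator \(G_1\) fails. Contracts then pair such guarantees with assumptions on the environment in which each component operates.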

Journal ArticleDOI
TL;DR: Three novel similarity measures for user grouping based on weighted likelihood, subspace projection, and Fubini-Study, respectively, as well as two novel clustering methods, including hierarchical and K-medoids clustering, are proposed for FDD massive MIMO systems.
Abstract: The massive multiple-input multiple-output (MIMO) system has drawn increasing attention recently as it is expected to boost the system throughput and result in lower costs. Previous studies mainly focus on time division duplexing (TDD) systems, which are more amenable to practical implementations due to channel reciprocity. However, there are many frequency division duplexing (FDD) systems deployed worldwide. Consequently, it is of great importance to investigate the design and performance of FDD massive MIMO systems. To reduce the overhead of channel estimation in FDD systems, a two-stage precoding scheme was recently proposed to decompose the precoding procedure into intergroup precoding and intragroup precoding. The problem of user grouping and scheduling thus arises. In this paper, we first propose three novel similarity measures for user grouping based on weighted likelihood, subspace projection, and Fubini-Study, respectively, as well as two novel clustering methods, including hierarchical and K-medoids clustering. We then propose a dynamic user scheduling scheme to further enhance the system throughput once the user groups are formed. The load balancing problem is considered when few users are active and solved with an effective algorithm. The efficacy of the proposed schemes is validated with theoretical analysis and simulations.
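
Of the similarity measures mentioned above, the subspace-projection idea is the most direct to sketch: users are grouped when their dominant channel subspaces align. The numpy/scipy sketch below is a generic illustration under our own assumptions (normalization, subspace rank r, toy covariances), not the paper's exact measures or clustering procedure.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def dominant_subspace(R, r):
    """Orthonormal basis of the r dominant eigenvectors of a channel covariance R (M x M)."""
    eigvals, eigvecs = np.linalg.eigh(R)
    return eigvecs[:, np.argsort(eigvals)[::-1][:r]]          # M x r

def subspace_similarity(U_i, U_j):
    """Normalized subspace-projection similarity in [0, 1]: 1 when the two
    dominant subspaces coincide, 0 when they are orthogonal."""
    r = U_i.shape[1]
    return np.linalg.norm(U_i.conj().T @ U_j, 'fro') ** 2 / r

def group_users(covariances, r=4, n_groups=3):
    """Hierarchically cluster users whose dominant channel subspaces are similar."""
    U = [dominant_subspace(R, r) for R in covariances]
    K = len(U)
    D = np.zeros((K, K))
    for i in range(K):
        for j in range(i + 1, K):
            D[i, j] = D[j, i] = 1.0 - subspace_similarity(U[i], U[j])   # dissimilarity
    Z = linkage(squareform(D, checks=False), method='average')
    return fcluster(Z, t=n_groups, criterion='maxclust')               # group labels 1..n_groups

# Toy example: random Hermitian covariances for 6 users with M = 8 BS antennas.
rng = np.random.default_rng(0)
covs = []
for _ in range(6):
    H = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
    covs.append(H @ H.conj().T)
print(group_users(covs))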

Journal ArticleDOI
TL;DR: An overview of the cloud radio access network (C-RAN), which is a key enabler for future mobile networks in order to meet the explosive capacity demand of mobile traffic, and reduce the capital and operating expenditure burden faced by operators, is presented.
Abstract: This paper presents an overview of the cloud radio access network (C-RAN), which is a key enabler for future mobile networks in order to meet the explosive capacity demand of mobile traffic, and reduce the capital and operating expenditure burden faced by operators. We start by reviewing the requirements of future mobile networks, called 5G, followed by a discussion on emerging network concepts for 5G network architecture. Then, an overview of C-RAN and related works is presented. As a significant scenario of a 5G system, the ultra-dense network deployment based on C-RAN is discussed with a focus on flexible backhauling, automated network organization, and advanced mobility management. Another important feature of a 5G system is the long-term coexistence of multiple radio access technologies (multi-RATs). Therefore, we present some directions and preliminary thoughts for future C-RANs supporting multi-RATs, including joint resource allocation, mobility management, as well as traffic steering and service mapping.

Journal ArticleDOI
TL;DR: This work presents a novel semantic level interoperability architecture for pervasive computing and IoTs that conforms to the common IoT-A architecture reference model (ARM), and maps the central components of the architecture to the IoT-ARM.
Abstract: Pervasive computing and Internet of Things (IoTs) paradigms have created a huge potential for new business. To fully realize this potential, there is a need for a common way to abstract the heterogeneity of devices so that their functionality can be represented as a virtual computing platform. To this end, we present a novel semantic-level interoperability architecture for pervasive computing and IoTs. There are two main principles in the proposed architecture. First, information and capabilities of devices are represented with semantic web knowledge representation technologies and interaction with devices and the physical world is achieved by accessing and modifying their virtual representations. Second, the global IoT is divided into numerous local smart spaces managed by a semantic information broker (SIB) that provides a means to monitor and update the virtual representation of the physical world. An integral part of the architecture is a resolution infrastructure that provides a means to resolve the network address of a SIB either using a physical object identifier as a pointer to information or by searching SIBs matching a specification represented with SPARQL. We present several reference implementations and applications that we have developed to evaluate the architecture in practice. The evaluation also includes performance studies that, together with the applications, demonstrate the suitability of the architecture to real-life IoT scenarios. In addition, to validate that the proposed architecture conforms to the common IoT-A architecture reference model (ARM), we map the central components of the architecture to the IoT-ARM.

Journal ArticleDOI
TL;DR: A clustering-procedure-based approach to the design of a system that integrates cellular and ad hoc operation modes depending on the availability of infrastructure nodes is proposed, and system simulations demonstrate the viability of the proposed design.
Abstract: Device-to-device (D2D) communications have been proposed as an underlay to long-term evolution (LTE) networks as a means of harvesting the proximity, reuse, and hop gains. However, D2D communications can also serve as a technology component for providing public protection and disaster relief (PPDR) and national security and public safety (NSPS) services. In the United States, for example, spectrum has been reserved in the 700-MHz band for an LTE-based public safety network. The key requirement for evolving systems capable of broadband PPDR and NSPS services is to provide access to cellular services when the infrastructure is available and to efficiently support local services even if a subset or all of the network nodes become dysfunctional due to public disaster or emergency situations. This paper reviews some of the key requirements, technology challenges, and solution approaches that must be in place in order to enable LTE networks and, in particular, D2D communications, to meet PPDR and NSPS-related requirements. In particular, we propose a clustering-procedure-based approach to the design of a system that integrates cellular and ad hoc operation modes depending on the availability of infrastructure nodes. System simulations demonstrate the viability of the proposed design. The proposed scheme is currently considered as a technology component of the evolving 5G concept developed by the European 5G research project METIS.

Journal ArticleDOI
TL;DR: A new dense dielectric (DD) patch array antenna prototype operating at 28 GHz for future fifth generation (5G) cellular networks is presented and can be considered as a good candidate for 5G communication applications.
Abstract: In this paper, a new dense dielectric (DD) patch array antenna prototype operating at 28 GHz for future fifth generation (5G) cellular networks is presented. This array antenna is proposed and designed with a standard printed circuit board process to be suitable for integration with radio frequency/microwave circuitry. The proposed structure employs four circular-shaped DD patch radiator antenna elements fed by a 1-to-4 Wilkinson power divider. To improve the array radiation characteristics, a ground structure based on a compact uniplanar electromagnetic bandgap unit cell has been used. The DD patch shows better radiation and total efficiencies compared with the metallic patch radiator. For further gain improvement, a dielectric layer of a superstrate is applied above the array antenna. The measured impedance bandwidth of the proposed array antenna ranges from 27 to beyond 32 GHz for a reflection coefficient (S11) of less than -10 dB. The proposed design exhibits stable radiation patterns over the whole frequency band of interest, with a total realized gain more than 16 dBi. Due to the remarkable performance of the proposed array, it can be considered as a good candidate for 5G communication applications.

Journal ArticleDOI
TL;DR: The development of a hurricane power outage prediction model applicable along the full U.S. coastline is described, the use of the model is demonstrated for Hurricane Sandy, and the impacts that a number of historic storms, including Typhoon Haiyan, would have on current U.S. energy infrastructure are estimated.
Abstract: Hurricanes regularly cause widespread and prolonged power outages along the U.S. coastline. These power outages have significant impacts on other infrastructure dependent on electric power and on the population living in the impacted area. Efficient and effective emergency response planning within power utilities, other utilities dependent on electric power, private companies, and local, state, and federal government agencies benefit from accurate estimates of the extent and spatial distribution of power outages in advance of an approaching hurricane. A number of models have been developed for predicting power outages in advance of a hurricane, but these have been specific to a given utility service area, limiting their use to support wider emergency response planning. In this paper, we describe the development of a hurricane power outage prediction model applicable along the full U.S. coastline using only publicly available data, we demonstrate the use of the model for Hurricane Sandy, and we use the model to estimate what the impacts of a number of historic storms, including Typhoon Haiyan, would be on current U.S. energy infrastructure.

Journal ArticleDOI
TL;DR: A cross-layer architecture combining SDR and SDN characteristics is proposed that can effectively use the frequency spectrum and considerably enhance network performance; suggestions are also proposed for follow-up studies on the proposed architecture.
Abstract: Wireless networks have evolved from 1G to 4G networks, allowing smart devices to become important tools in daily life. The 5G network is a revolutionary technology that can change consumers' Internet use habits, as it creates a truly wireless environment. It is faster, with better quality, and is more secure. Most importantly, users can truly use network services anytime, anywhere. With increasing demand, the use of bandwidth and frequency spectrum resources is beyond expectations. This paper finds that the frequency spectrum and network information have considerable relevance; thus, spectrum utilization and channel flow interactions should be considered simultaneously. We consider software-defined radio (SDR) and software-defined networking (SDN) to be the best solution. We propose a cross-layer architecture combining SDR and SDN characteristics. As the simulation evaluation results suggest, the proposed architecture can effectively use the frequency spectrum and considerably enhance network performance. Based on the results, suggestions are proposed for follow-up studies on the proposed architecture.

Journal ArticleDOI
TL;DR: A 3-D ray tracing model is used as a propagation-prediction engine to evaluate performance in a number of simple, reference cases, and ray tracing itself is proposed and evaluated as a real-time prediction tool to assist future BF techniques.
Abstract: The use of large-size antenna arrays to implement pencil-beam forming techniques is becoming a key asset to cope with the very high throughput density requirements and high path-loss of future millimeter-wave (mm-wave) gigabit-wireless applications. Suboptimal beamforming (BF) strategies based on a search over a discrete set of beams (steering vectors) are proposed and implemented in present standards and applications. The potential of fully adaptive advanced BF strategies that will become possible in the future, thanks to the availability of accurate localization and powerful distributed computing, is evaluated in this paper through system simulation. After validation and calibration against mm-wave directional indoor channel measurements, a 3-D ray tracing model is used as a propagation-prediction engine to evaluate performance in a number of simple, reference cases. Ray tracing itself, however, is proposed and evaluated as a real-time prediction tool to assist future BF techniques.
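
The "search over a discrete set of beams" mentioned above reduces, in the simplest narrowband case, to picking the codebook steering vector that maximizes received power for the current channel. The sketch below illustrates this for a uniform linear array; the array size, codebook granularity, and toy channel are our illustrative assumptions, not the paper's simulation setup.

import numpy as np

def steering_vector(n_antennas, angle_rad, spacing=0.5):
    """Array response of an n-element uniform linear array with half-wavelength spacing."""
    n = np.arange(n_antennas)
    return np.exp(1j * 2 * np.pi * spacing * n * np.sin(angle_rad)) / np.sqrt(n_antennas)

def best_beam(h, codebook):
    """Exhaustive beam search: choose the codebook beam maximizing |w^H h|^2."""
    gains = [np.abs(np.vdot(w, h)) ** 2 for w in codebook]
    return int(np.argmax(gains)), max(gains)

# Codebook of 16 beams uniformly covering -60..60 degrees for an 8-element array.
angles = np.deg2rad(np.linspace(-60, 60, 16))
codebook = [steering_vector(8, a) for a in angles]

# Toy channel: a dominant path arriving from 25 degrees plus weak scattering.
rng = np.random.default_rng(1)
h = steering_vector(8, np.deg2rad(25)) + 0.1 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))

idx, gain = best_beam(h, codebook)
print(f"selected beam {idx} pointing at {np.rad2deg(angles[idx]):.1f} deg, gain {gain:.3f}")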

Journal ArticleDOI
TL;DR: Focusing on cases of the adenocarcinoma nonsmall cell lung cancer tumor subtype from a larger data set, it is shown that classifiers can be built to predict survival time, the first known result to make such predictions from CT scans of lung cancer.
Abstract: Nonsmall cell lung cancer is a prevalent disease. It is diagnosed and treated with the help of computed tomography (CT) scans. In this paper, we apply radiomics to select 3-D features from CT images of the lung toward providing prognostic information. Focusing on cases of the adenocarcinoma nonsmall cell lung cancer tumor subtype from a larger data set, we show that classifiers can be built to predict survival time. This is the first known result to make such predictions from CT scans of lung cancer. We compare classifiers and feature selection approaches. The best accuracy when predicting survival was 77.5% using a decision tree in a leave-one-out cross validation and was obtained after selecting five features per fold from 219.
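
As a point of reference, the evaluation protocol described above (leave-one-out cross validation with five features selected inside each training fold, then a decision tree) can be sketched with scikit-learn roughly as follows; the synthetic data and the univariate selection criterion are placeholders, not the paper's actual radiomics features or feature-selection method.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for 219 radiomics features per patient.
X, y = make_classification(n_samples=40, n_features=219, n_informative=10, random_state=0)

# Putting feature selection inside the pipeline means the five features are
# re-selected within every training fold, so no information leaks from the held-out case.
model = make_pipeline(SelectKBest(f_classif, k=5), DecisionTreeClassifier(random_state=0))

scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.3f}")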

Journal ArticleDOI
TL;DR: Solutions for the reliable actuation of a gallium-based, non-toxic liquid-metal alloy (Galinstan) are presented that mitigate the tendency of the alloy to form a surface oxide layer capable of wetting to the channel walls, inhibiting motion.
Abstract: Continuous electrowetting (CEW) is demonstrated to be an effective actuation mechanism for reconfigurable radio frequency (RF) devices that use non-toxic liquid-metal tuning elements. Previous research has shown CEW is an efficient means of electrically inducing motion in a liquid-metal slug, but precise control of the slug's position within fluidic channels has not been demonstrated. Here, the precise positioning of liquid-metal slugs is achieved using CEW actuation in conjunction with channels designed to minimize the liquid-metal surface energy at discrete locations. This approach leverages the high surface tension of liquid metal to control its resting position with submillimeter accuracy. The CEW actuation and fluidic channel design were optimized to create reconfigurable RF devices. In addition, solutions for the reliable actuation of a gallium-based, non-toxic liquid-metal alloy (Galinstan) are presented that mitigate the tendency of the alloy to form a surface oxide layer capable of wetting to the channel walls, inhibiting motion. A reconfigurable slot antenna utilizing these techniques to achieve a 15.2% tunable frequency bandwidth is demonstrated.

Journal ArticleDOI
TL;DR: A novel tree-based diversionary routing scheme is proposed for preserving source location privacy, using a hide-and-seek strategy to create diversionary or decoy routes along the path to the sink from the real source, where the end of each diversionary route is a decoy (fake source node) that periodically emits fake events.
Abstract: Wireless sensor networks (WSNs) have been proliferating due to their wide applications in both military and commercial use. However, one critical challenge to WSN implementation is source location privacy. In this paper, we propose a novel tree-based diversionary routing scheme for preserving source location privacy, using a hide-and-seek strategy to create diversionary or decoy routes along the path to the sink from the real source, where the end of each diversionary route is a decoy (fake source node) that periodically emits fake events. Meanwhile, the proposed scheme is able to maximize the network lifetime of WSNs. The main idea is that the lifetime of a WSN depends on the nodes with high energy consumption, the hotspot, so the proposed scheme minimizes energy consumption in the hotspot and creates redundant diversionary routes in nonhotspot regions with abundant energy. Hence, it achieves not only privacy preservation, but also network lifetime maximization. Furthermore, we systematically analyze the energy consumption in WSNs, and provide guidance on the number of diversionary routes that can be created in different regions away from the sink. In addition, we identify a novel attack against phantom routing, which is widely used for source location privacy preservation, namely, the direction-oriented attack. We also perform a comprehensive analysis on how the direction-oriented attack can be defeated by the proposed scheme. Theoretical and experimental results show that our scheme is very effective in improving privacy protection while maximizing the network lifetime.

Journal ArticleDOI
TL;DR: This paper focuses on comparing these two major paradigms of techniques, namely, homomorphic encryption-based techniques and feature/index randomization-based techniques, for confidentiality-preserving image search, and develops novel and systematic metrics to quantitatively evaluate security strength in this unique type of data and applications.
Abstract: Recent years have seen increasing popularity of storing and managing personal multimedia data using online services. Preserving confidentiality of online personal data while offering efficient functionalities thus becomes an important and pressing research issue. In this paper, we study the problem of content-based search of image data archived online while preserving content confidentiality. The problem has different settings from those typically considered in the secure computation literature, as it deals with data in rank-ordered search, and has a different security-efficiency requirement. Secure computation techniques, such as homomorphic encryption, can potentially be used in this application, at a cost of high computational and communication complexity. Alternatively, efficient techniques based on randomizing visual feature and search indexes have been proposed recently to enable similarity comparison between encrypted images. This paper focuses on comparing these two major paradigms of techniques, namely, homomorphic encryption-based techniques and feature/index randomization-based techniques, for confidentiality-preserving image search. We develop novel and systematic metrics to quantitatively evaluate security strength in this unique type of data and applications. We compare these two paradigms of techniques in terms of their search performance, security strength, and computational efficiency. The insights obtained through this study and comparison will help in designing practical algorithms appropriate for privacy-aware cloud multimedia systems.
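
As a toy illustration of the feature/index randomization paradigm compared above (and explicitly not the paper's specific scheme), a secret random projection lets a server compare distances between protected features without seeing the original descriptors:

import numpy as np

rng = np.random.default_rng(42)
d, k = 512, 128                               # original and projected feature dimensions

# Secret projection matrix known to the data owner; the search server never sees it.
P = rng.standard_normal((k, d)) / np.sqrt(k)

def protect(feature):
    """Randomize a visual feature before uploading it to the search server."""
    return P @ feature

# Two image features: a near-duplicate pair and an unrelated one.
f1 = rng.standard_normal(d)
f2 = f1 + 0.05 * rng.standard_normal(d)
f3 = rng.standard_normal(d)

# Euclidean distances are approximately preserved (Johnson-Lindenstrauss), so the
# server can rank candidates on protected features without access to the originals.
print(np.linalg.norm(f1 - f2), np.linalg.norm(protect(f1) - protect(f2)))
print(np.linalg.norm(f1 - f3), np.linalg.norm(protect(f1) - protect(f3)))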

Journal ArticleDOI
TL;DR: This paper investigates data center networks and provides a general overview and analysis of the literature covering various research areas, including data center network interconnection architectures, network protocols for data center networks, and network resource sharing in multitenant cloud data centers.
Abstract: Large-scale data centers enable the new era of cloud computing and provide the core infrastructure to meet the computing and storage requirements for both enterprise information technology needs and cloud-based services. To support the ever-growing cloud computing needs, the number of servers in today’s data centers is increasing exponentially, which in turn leads to enormous challenges in designing an efficient and cost-effective data center network. With data availability and security at stake, the issues with data center networks are more critical than ever. Motivated by these challenges and critical issues, many novel and creative research works have been proposed in recent years. In this paper, we investigate data center networks and provide a general overview and analysis of the literature covering various research areas, including data center network interconnection architectures, network protocols for data center networks, and network resource sharing in multitenant cloud data centers. We start with an overview of data center networks and their requirements, which guides the subsequent discussion of data center network designs. We then present the research literature related to the aforementioned research topics in the subsequent sections. Finally, we draw the conclusions.

Journal ArticleDOI
TL;DR: A vision of the advantages of the RANaaS is given, its benefits in terms of energy efficiency are presented, and a consistent system-level power model is proposed as a reference for assessing innovative functionalities toward 5G systems.
Abstract: This paper focuses on energy efficiency aspects and related benefits of radio-access-network-as-a-service (RANaaS) implementation (using commodity hardware) as an architectural evolution of LTE-advanced networks toward 5G infrastructure. RANaaS is a novel concept introduced recently, which enables the partial centralization of RAN functionalities depending on the actual needs as well as on network characteristics. In view of the future definition of 5G systems, this cloud-based design is an important solution in terms of efficient usage of network resources. The aim of this paper is to give a vision of the advantages of the RANaaS, to present its benefits in terms of energy efficiency, and to propose a consistent system-level power model as a reference for assessing innovative functionalities toward 5G systems. The incremental benefits through the years are also discussed in perspective, by considering the technological evolution of IT platforms and the increasing match between their capabilities and the need for progressive virtualization of RAN functionalities. The description is complemented by an exemplary evaluation in terms of energy efficiency, analyzing the achievable gains associated with the RANaaS paradigm.
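
For context, system-level energy-efficiency studies of this kind typically build on a linear base-station power model in the style of the EARTH project; the form below is one widely used model, stated here as an assumption on our part rather than the exact power model proposed in the paper.

% Commonly used linear BS power model (assumed for illustration, not reproduced from the paper).
\[
  P_{\mathrm{in}} =
  \begin{cases}
    N_{\mathrm{TRX}}\,\big(P_0 + \Delta_p\, P_{\mathrm{out}}\big), & 0 < P_{\mathrm{out}} \le P_{\max},\\[2pt]
    N_{\mathrm{TRX}}\, P_{\mathrm{sleep}}, & P_{\mathrm{out}} = 0,
  \end{cases}
\]
where \(N_{\mathrm{TRX}}\) is the number of transceiver chains, \(P_0\) the fixed power drawn at zero load, \(\Delta_p\) the slope of the load-dependent term, \(P_{\mathrm{out}}\) the radiated power, and \(P_{\mathrm{sleep}}\) the consumption in sleep mode.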

Journal ArticleDOI
TL;DR: This paper builds two systems, one for music genre classification and another for music emotion estimation, using both SVM and GP models, and compares their performances on two databases of similar size.
Abstract: Gaussian Processes (GPs) are Bayesian nonparametric models that are becoming more and more popular for their superior capabilities to capture highly nonlinear data relationships in various tasks, such as dimensionality reduction, time series analysis, novelty detection, as well as classical regression and classification tasks. In this paper, we investigate the feasibility and applicability of GP models for music genre classification and music emotion estimation. These are two of the main tasks in the music information retrieval (MIR) field. So far, the support vector machine (SVM) has been the dominant model used in MIR systems. Like SVM, GP models are based on kernel functions and Gram matrices; but, in contrast, they produce truly probabilistic outputs with an explicit degree of prediction uncertainty. In addition, there exist algorithms for GP hyperparameter learning, something the SVM framework lacks. In this paper, we built two systems, one for music genre classification and another for music emotion estimation, using both SVM and GP models, and compared their performances on two databases of similar size. In all cases, the music audio signal was processed in the same way, and the effects of different feature extraction methods and their various combinations were also investigated. The evaluation experiments clearly showed that in both music genre classification and music emotion estimation tasks the GP performed consistently better than the SVM. The GP achieved a 13.6% relative genre classification error reduction and up to an 11% absolute increase of the coefficient of determination in the emotion estimation task.
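
A minimal scikit-learn comparison in the spirit of the genre-classification experiments above is sketched below; the synthetic features, default kernels, and 5-fold protocol are placeholders for the audio features, tuned kernels, and evaluation setup actually used in the paper.

from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Placeholder "audio features" for a four-genre classification task.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# GP classifier: kernel hyperparameters are learned by maximizing the marginal
# likelihood, and predictions come with explicit class probabilities.
gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)

# SVM baseline with an RBF kernel; in practice C and gamma would be grid-searched.
svm = SVC(kernel='rbf', C=1.0, gamma='scale')

for name, model in [('GP', gp), ('SVM', svm)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())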

Journal ArticleDOI
TL;DR: A circularly polarized patch antenna for future fifth-generation mobile phones is presented in this paper, along with a parametric study of the effect of the metallic block and the surrounding dielectric substrate on the gain at a low elevation angle and on the axial ratio of the proposed antenna.
Abstract: A circularly polarized patch antenna for future fifth-generation mobile phones is presented in this paper. Miniaturization and beamwidth enhancement of a patch antenna are the two main areas to be discussed. By folding the edge of the radiating patch with loading slots, the size of the patch antenna is 44.8% smaller than a conventional half wavelength patch, which allows it to be accommodated inside handsets easily. Wide beamwidth is obtained by surrounding the patch with a dielectric substrate and supporting the antenna by a metallic block. A measured half power beamwidth of 124° is achieved. The impedance bandwidth of the antenna is over 10%, and the 3-dB axial ratio bandwidth is 3.05%. The proposed antenna covers a wide elevation angle and complete azimuth range. A parametric study of the effect of the metallic block and the surrounding dielectric substrate on the gain at a low elevation angle and on the axial ratio of the proposed antenna is presented.

Journal ArticleDOI
TL;DR: The challenges of building a simulation platform for 5G considering the emerging new technologies and network architectures are analyzed, and a cloud-based two-level framework for a system-level simulator is proposed to validate the candidate technologies and fulfill the promising technology performance identified for 5G.
Abstract: With the evaluation and simulation of long-term evolution/4G cellular networks and the intense discussion of new technologies and network architectures for 5G, simulation and evaluation guidelines for 5G are urgently needed. This paper analyzes the challenges of building a simulation platform for 5G considering the emerging new technologies and network architectures. Based on an overview of the evaluation methodologies issued for 4G candidates, the challenges in 5G evaluation are formulated. Additionally, a cloud-based two-level framework for a system-level simulator is proposed to validate the candidate technologies and fulfill the promising technology performance identified for 5G.

Journal ArticleDOI
TL;DR: The actors of the ODE and their roles in the ecosystem as well as the business model elements and services that are needed in open data based business are defined.
Abstract: Emerging opportunities for open data based business have been recognized around the world. Open data can provide new business opportunities for actors that provide data, for actors that consume data, and for actors that develop innovative services and applications around the data. Open data based business requires business models and a collaborative environment, called an ecosystem, to support businesses based on open data, services, and applications. This paper outlines the open data ecosystem (ODE) from the business viewpoint and then defines the requirements of such an ecosystem. The outline and requirements are based on state-of-the-art knowledge from the literature and on the state of practice in data-based business in industry, collected through interviews. The interviews revealed several motives and advantages of the ODE. However, there are also obstacles that should be carefully considered and solved. This paper defines the actors of the ODE and their roles in the ecosystem, as well as the business model elements and services that are needed in open data based business. According to the interviews, interest in open data and open data ecosystems is high at this moment. However, further research work is required to establish and validate the ODE in the near future.