
Showing papers by "Beijing University of Posts and Telecommunications published in 2017"


Proceedings ArticleDOI
01 Jul 2017
TL;DR: Residual Attention Network, as presented in this paper, is a convolutional neural network using an attention mechanism that can be incorporated into state-of-the-art feed-forward network architectures in an end-to-end training fashion.
Abstract: In this work, we propose Residual Attention Network, a convolutional neural network using an attention mechanism that can be incorporated into state-of-the-art feed-forward network architectures in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules, which generate attention-aware features. The attention-aware features from different modules change adaptively as layers go deeper. Inside each Attention Module, a bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks that can be easily scaled up to hundreds of layers. Extensive analyses are conducted on the CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets: CIFAR-10 (3.90% error), CIFAR-100 (20.45% error), and ImageNet (4.8% single-model, single-crop top-5 error). Notably, our method achieves a 0.6% top-1 accuracy improvement with 46% of the trunk depth and 69% of the forward FLOPs compared to ResNet-200. The experiments also demonstrate that our network is robust against noisy labels.

2,625 citations
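The attention residual learning rule described in the abstract combines a trunk feature T(x) with a soft mask M(x) as H(x) = (1 + M(x)) · T(x), so the mask modulates rather than replaces the trunk features. A minimal numpy sketch of that one formula (function and variable names are my own, not from the paper):

```python
import numpy as np

def attention_residual(trunk, mask_logits):
    """Attention residual learning: H(x) = (1 + M(x)) * T(x).

    `trunk` is the trunk-branch feature map T(x); `mask_logits` is the raw
    mask-branch output, squashed to (0, 1) with a sigmoid to give the soft
    attention mask M(x). When M(x) is near 0 the module reduces to an
    identity mapping of the trunk features, which is what lets very deep
    stacks of Attention Modules train stably.
    """
    mask = 1.0 / (1.0 + np.exp(-mask_logits))   # M(x) in (0, 1)
    return (1.0 + mask) * trunk

# With strongly negative mask logits, M(x) -> 0 and H(x) -> T(x).
features = np.ones((2, 2))
out = attention_residual(features, mask_logits=np.full((2, 2), -50.0))
```

With zero logits the mask is 0.5 everywhere and the output is 1.5 × trunk, i.e. the mask can only amplify, never suppress below identity.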




Proceedings ArticleDOI
21 Jul 2017
TL;DR: This paper reviews the first challenge on single image super-resolution (restoration of rich details in a low-resolution image) with a focus on the proposed solutions and results, and gauges the state of the art in single image super-resolution.
Abstract: This paper reviews the first challenge on single image super-resolution (restoration of rich details in a low-resolution image) with a focus on the proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) that were learnable from low- and high-resolution training images. Each competition had about 100 registered participants, and 20 teams competed in the final testing phase. The results gauge the state of the art in single image super-resolution.

1,243 citations
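Super-resolution challenge entries of this kind are typically ranked by PSNR (and SSIM) between the restored and ground-truth images. A minimal PSNR helper (my own sketch, not the official DIV2K scoring code):

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0            # constant error of 10 gray levels
# MSE = 100, so PSNR = 10 * log10(255^2 / 100) ≈ 28.13 dB
```

Higher is better; a restored image identical to the reference scores infinite PSNR, which is why challenge toolkits usually also report SSIM.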


Journal ArticleDOI
TL;DR: Wang et al. conduct a systematic study of the security threats to blockchain and survey the corresponding real-world attacks by examining popular blockchain systems. They also review security enhancement solutions for blockchain, which could be used in the development of various blockchain systems, and suggest future directions to stir research efforts in this area.

1,071 citations


Journal ArticleDOI
TL;DR: Propagation parameters and channel models for understanding mmWave propagation, such as line-of-sight (LOS) probabilities, large-scale path loss, and building penetration loss, as modeled by various standardization bodies are compared over the 0.5–100 GHz range.
Abstract: This paper provides an overview of the features of fifth generation (5G) wireless communication systems now being developed for use in the millimeter wave (mmWave) frequency bands. Early results and key concepts of 5G networks are presented, and the channel modeling efforts of many international groups for both licensed and unlicensed applications are described here. Propagation parameters and channel models for understanding mmWave propagation, such as line-of-sight (LOS) probabilities, large-scale path loss, and building penetration loss, as modeled by various standardization bodies, are compared over the 0.5–100 GHz range.

943 citations
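The large-scale path loss models compared in surveys like this are often of the close-in (CI) free-space reference form PL(d) = FSPL(1 m) + 10·n·log10(d / 1 m) + shadowing. A hedged sketch of the deterministic part (the frequency, distance, and exponent values below are illustrative, not taken from the paper):

```python
import math

def ci_path_loss_db(freq_ghz, dist_m, n):
    """Close-in free-space reference path loss model, shadowing term omitted.

    FSPL at the 1 m reference distance is 32.4 + 20*log10(f_GHz) dB, and
    n is the path loss exponent (n = 2 in free space).
    """
    fspl_1m = 32.4 + 20.0 * math.log10(freq_ghz)
    return fspl_1m + 10.0 * n * math.log10(dist_m)

# 28 GHz, 100 m, LOS-like exponent n = 2.0 (illustrative values)
pl = ci_path_loss_db(28.0, 100.0, 2.0)
```

With n = 2, doubling the distance adds about 6 dB of loss, a quick sanity check for any implementation of the model.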


Journal ArticleDOI
TL;DR: This survey provides an exhaustive review of state-of-the-art research efforts on mobile edge networks, covering their definition, architecture, and advantages, and presents a comprehensive survey of computing, caching, and communication techniques at the network edge.
Abstract: With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to the heavy burden on backhaul links and long latency. Therefore, new architectures that bring network functions and content to the network edge, i.e., mobile edge computing and caching, have been proposed. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we provide an exhaustive review of state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including their definition, architecture, and advantages. Next, a comprehensive survey of computing, caching, and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are then discussed. Subsequently, key enablers of mobile edge networks, such as cloud technology, SDN/NFV, and smart devices, are discussed. Finally, open research challenges and future directions are presented.

782 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: A new DLP-CNN (Deep Locality-Preserving CNN) method, which aims to enhance the discriminative power of deep features by preserving the locality closeness while maximizing the inter-class scatters, is proposed.
Abstract: Past research on facial expressions has used relatively limited datasets, which makes it unclear whether current methods can be employed in the real world. In this paper, we present a novel database, RAF-DB, which contains about 30000 facial images from thousands of individuals. Each image was independently labeled about 40 times, and an EM algorithm was then used to filter out unreliable labels. Crowdsourcing reveals that real-world faces often express compound or even mixed emotions. To the best of our knowledge, RAF-DB is the first database that contains compound expressions in the wild. Our cross-database study shows that the action units of basic emotions in RAF-DB are much more diverse than, or even deviate from, those of lab-controlled ones. To address this problem, we propose a new DLP-CNN (Deep Locality-Preserving CNN) method, which aims to enhance the discriminative power of deep features by preserving locality closeness while maximizing inter-class scatter. Benchmark experiments on the 7-class basic expressions and 11-class compound expressions, as well as additional experiments on the SFEW and CK+ databases, show that the proposed DLP-CNN outperforms state-of-the-art handcrafted features and deep-learning-based methods for expression recognition in the wild.

746 citations
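The locality-preserving idea can be read as penalizing each deep feature's squared distance to its k nearest same-class neighbours, so same-class features cluster tightly. A toy numpy sketch of that per-sample penalty (my reading of the idea, not the authors' exact loss):

```python
import numpy as np

def lp_penalty(features, k):
    """Locality-preserving penalty: mean squared distance of each feature
    to the centroid of its k nearest neighbours in the same batch.
    `features` is an (n, d) array of deep features from one class."""
    n = len(features)
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude the sample itself
    total = 0.0
    for i in range(n):
        nn = np.argsort(dists[i])[:k]        # indices of the k nearest neighbours
        centre = features[nn].mean(axis=0)
        total += np.sum((features[i] - centre) ** 2)
    return total / n

# A tightly clustered batch incurs a much smaller penalty than a spread-out one.
tight = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
loose = tight * 10.0
```

Minimizing such a term during training pulls features of the same expression class together while a separate term (inter-class scatter in the paper) pushes classes apart.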


Journal ArticleDOI
TL;DR: In this article, the problem of proactive deployment of cache-enabled unmanned aerial vehicles (UAVs) for optimizing the quality of experience (QoE) of wireless devices in a cloud radio access network is studied.
Abstract: In this paper, the problem of proactive deployment of cache-enabled unmanned aerial vehicles (UAVs) for optimizing the quality-of-experience (QoE) of wireless devices in a cloud radio access network is studied. In the considered model, the network can leverage human-centric information, such as users’ visited locations, requested contents, gender, job, and device type to predict the content request distribution, and mobility pattern of each user. Then, given these behavior predictions, the proposed approach seeks to find the user-UAV associations, the optimal UAVs’ locations, and the contents to cache at UAVs. This problem is formulated as an optimization problem whose goal is to maximize the users’ QoE while minimizing the transmit power used by the UAVs. To solve this problem, a novel algorithm based on the machine learning framework of conceptor-based echo state networks (ESNs) is proposed. Using ESNs, the network can effectively predict each user’s content request distribution and its mobility pattern when limited information on the states of users and the network is available. Based on the predictions of the users’ content request distribution and their mobility patterns, we derive the optimal locations of UAVs as well as the content to cache at UAVs. Simulation results using real pedestrian mobility patterns from BUPT and actual content transmission data from Youku show that the proposed algorithm can yield 33.3% and 59.6% gains, respectively, in terms of the average transmit power and the percentage of the users with satisfied QoE compared with a benchmark algorithm without caching and a benchmark solution without UAVs.

732 citations
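The echo state networks at the core of the proposed algorithm keep a fixed random reservoir whose state evolves as x(t+1) = tanh(W x(t) + W_in u(t)); only a linear readout is trained. A minimal reservoir-update sketch (dimensions and scaling values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n_res, n_in = 50, 3                        # reservoir and input sizes (illustrative)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1 for the echo state property

def step(state, u):
    """One reservoir update: x(t+1) = tanh(W x(t) + W_in u(t))."""
    return np.tanh(W @ state + W_in @ u)

state = np.zeros(n_res)
for _ in range(100):                       # drive the reservoir with a constant input
    state = step(state, np.ones(n_in))
```

Because the tanh nonlinearity bounds every unit in (-1, 1) and the spectral radius is below 1, the state stays bounded and gradually forgets old inputs, which is what makes the reservoir usable for prediction with limited user/network state information.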


Journal ArticleDOI
26 Jul 2017
TL;DR: An overview of the requirements and use cases of V2X services in 3GPP is presented, and the up-to-date standardization of LTE V2X in 3GPP is surveyed; the enhanced V2X (eV2X) services and possible 5G solutions are also analyzed.
Abstract: Vehicle-to-everything (V2X), including vehicle- to-vehicle (V2V), vehicle-to-pedestrian (V2P), vehicle-to-infrastructure (V2I), and vehicle-to-network (V2N) communications, improves road safety, traffic efficiency, and the availability of infotainment services. Standardization of Long Term Evolution (LTE)-based V2X has been actively conducted by the Third Generation Partnership Project (3GPP) to provide solutions for V2X communications, and has benefited from the global deployment and fast commercialization of LTE systems. LTE-based V2X was widely used as LTE-V in the Chinese vehicular communications industry, and LTE-based V2X was redefined as LTE V2X in 3GPP standardization progress. In this article, the overview of requirements and use cases in V2X services in 3GPP is presented. The up-to-date standardization of LTE V2X in 3GPP is surveyed. The challenges and detailed design aspects in LTE V2X are also discussed. Meanwhile, the enhanced V2X (eV2X) services and possible 5G solutions are analyzed. Finally, the implementations of LTE V2X are presented with the latest progress in industrial alliances, research, development of prototypes, and field tests.

670 citations


Journal ArticleDOI
TL;DR: A survey of heterogeneous information network analysis can be found in this article, where the authors introduce basic concepts of HIN analysis, examine its developments on different data mining tasks, discuss some advanced topics, and point out some future research directions.
Abstract: Most real systems consist of a large number of interacting, multi-typed components, yet most contemporary research models them as homogeneous information networks, without distinguishing the different types of objects and links in the networks. Recently, more and more researchers have begun to consider these interconnected, multi-typed data as heterogeneous information networks, and to develop structural analysis approaches that leverage the rich semantics of the typed objects and links in the networks. Compared to widely studied homogeneous information networks, heterogeneous information networks contain richer structural and semantic information, which provides many opportunities as well as challenges for data mining. In this paper, we provide a survey of heterogeneous information network analysis. We introduce the basic concepts of heterogeneous information network analysis, examine its developments on different data mining tasks, discuss some advanced topics, and point out future research directions.

571 citations
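A standard primitive in heterogeneous information network analysis is meta-path based similarity, which composes type-wise adjacency matrices. For example, multiplying an author–paper incidence matrix by its transpose walks the author → paper → author (APA) meta-path and counts co-authorships. A toy sketch (the incidence matrix is invented for illustration):

```python
import numpy as np

# Toy author-paper incidence matrix: AP[i, j] = 1 if author i wrote paper j.
AP = np.array([[1, 1, 0],
               [1, 0, 0],
               [0, 1, 1]])

# Composing the relation with its transpose walks the meta-path
# author -> paper -> author; entry (i, k) counts papers shared by i and k.
APA = AP @ AP.T
```

Longer meta-paths (e.g. author → paper → venue → paper → author) are built the same way by chaining more typed adjacency matrices, which is what gives HIN methods their semantic flexibility.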


Proceedings ArticleDOI
Matej Kristan1, Ales Leonardis2, Jiri Matas3, Michael Felsberg4, Roman Pflugfelder5, Luka Čehovin Zajc1, Tomas Vojir3, Gustav Häger4, Alan Lukezic1, Abdelrahman Eldesokey4, Gustavo Fernandez5, Alvaro Garcia-Martin6, Andrej Muhič1, Alfredo Petrosino7, Alireza Memarmoghadam8, Andrea Vedaldi9, Antoine Manzanera10, Antoine Tran10, A. Aydin Alatan11, Bogdan Mocanu, Boyu Chen12, Chang Huang, Changsheng Xu13, Chong Sun12, Dalong Du, David Zhang, Dawei Du13, Deepak Mishra, Erhan Gundogdu14, Erhan Gundogdu11, Erik Velasco-Salido, Fahad Shahbaz Khan4, Francesco Battistone, Gorthi R. K. Sai Subrahmanyam, Goutam Bhat4, Guan Huang, Guilherme Sousa Bastos, Guna Seetharaman15, Hongliang Zhang16, Houqiang Li17, Huchuan Lu12, Isabela Drummond, Jack Valmadre9, Jae-chan Jeong18, Jaeil Cho18, Jae-Yeong Lee18, Jana Noskova, Jianke Zhu19, Jin Gao13, Jingyu Liu13, Ji-Wan Kim18, João F. Henriques9, José M. Martínez, Junfei Zhuang20, Junliang Xing13, Junyu Gao13, Kai Chen21, Kannappan Palaniappan22, Karel Lebeda, Ke Gao22, Kris M. Kitani23, Lei Zhang, Lijun Wang12, Lingxiao Yang, Longyin Wen24, Luca Bertinetto9, Mahdieh Poostchi22, Martin Danelljan4, Matthias Mueller25, Mengdan Zhang13, Ming-Hsuan Yang26, Nianhao Xie16, Ning Wang17, Ondrej Miksik9, Payman Moallem8, Pallavi Venugopal M, Pedro Senna, Philip H. S. Torr9, Qiang Wang13, Qifeng Yu16, Qingming Huang13, Rafael Martin-Nieto, Richard Bowden27, Risheng Liu12, Ruxandra Tapu, Simon Hadfield27, Siwei Lyu28, Stuart Golodetz9, Sunglok Choi18, Tianzhu Zhang13, Titus Zaharia, Vincenzo Santopietro, Wei Zou13, Weiming Hu13, Wenbing Tao21, Wenbo Li28, Wengang Zhou17, Xianguo Yu16, Xiao Bian24, Yang Li19, Yifan Xing23, Yingruo Fan20, Zheng Zhu13, Zhipeng Zhang13, Zhiqun He20 
01 Jul 2017
TL;DR: The Visual Object Tracking challenge VOT2017 is the fifth annual tracker benchmarking activity organized by the VOT initiative; results of 51 trackers are presented; many are state-of-the-art published at major computer vision conferences or journals in recent years.
Abstract: The Visual Object Tracking challenge VOT2017 is the fifth annual tracker benchmarking activity organized by the VOT initiative. Results of 51 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies, and a new "real-time" experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. The performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. VOT2017 goes beyond its predecessors by (i) improving the VOT public dataset and introducing a separate VOT2017 sequestered dataset, (ii) introducing a real-time tracking experiment, and (iii) releasing a redesigned toolkit that supports complex experiments. The dataset, the evaluation kit, and the results are publicly available at the challenge website.
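Tracker accuracy in VOT-style evaluations is based on region overlap (intersection over union) between the predicted and ground-truth regions. A minimal helper for axis-aligned (x, y, w, h) boxes (a simplification — VOT ground truth may use rotated rectangles):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))   # overlap width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))   # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes offset by (1, 1) share a 1x1 region: IoU = 1 / (4 + 4 - 1) = 1/7.
overlap = iou((0, 0, 2, 2), (1, 1, 2, 2))
```

Per-frame overlaps like this are averaged over sequences to produce the accuracy score, while failures (overlap dropping to zero) feed the robustness score.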

Journal ArticleDOI
TL;DR: Owing to its simple geometry and ease of fabrication, the proposed high-figure-of-merit, high-sensitivity sensor is a competitive candidate for sensing and detection applications.
Abstract: A perfect ultra-narrow-band infrared metamaterial absorber based on an all-metal grating structure is proposed. The absorber presents a perfect absorption efficiency of over 98% with an ultra-narrow bandwidth of 0.66 nm at normal incidence. This highly efficient absorption is attributed to surface plasmon resonance. Moreover, the strong surface electric field enhancement induced by the surface plasmon resonance is favorable for applications in biosensing systems. When operated as a plasmonic refractive index sensor, the ultra-narrow-band absorber has a wavelength sensitivity of 2400 nm/RIU and an ultra-high figure of merit of 3640, which are much better than those of most reported similar plasmonic sensors. We also comprehensively investigate the influence of structural parameters on the sensing properties. Owing to its simple geometry and ease of fabrication, the proposed sensor is a competitive candidate for sensing and detection applications.
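The quoted figure of merit follows directly from the two other numbers in the abstract, since for a plasmonic refractive index sensor FOM = sensitivity / FWHM:

```python
sensitivity = 2400.0   # wavelength sensitivity, nm per refractive index unit (from the abstract)
fwhm = 0.66            # ultra-narrow bandwidth, nm (from the abstract)

fom = sensitivity / fwhm   # figure of merit, in 1/RIU
# 2400 / 0.66 ≈ 3636, consistent with the ~3640 the paper reports.
```

This is why an ultra-narrow linewidth matters as much as high sensitivity: halving the FWHM doubles the FOM at fixed sensitivity.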

Journal ArticleDOI
TL;DR: This paper comprehensively presents a tutorial on three typical edge computing technologies, namely mobile edge computing, cloudlets, and fog computing, and the standardization efforts, principles, architectures, and applications of these three technologies are summarized and compared.

Proceedings ArticleDOI
01 Sep 2017
TL;DR: According to their signal transmission characteristics, non-orthogonal multiple access (NOMA) schemes are classified into four categories: scrambling-based NOMA, interleaving-based NOMA, spreading-based NOMA, and coding-based NOMA; the scheme with superior performance is identified.
Abstract: Compared to traditional orthogonal multiple access (MA), non-orthogonal multiple access (NOMA) technology can achieve higher capacity gain and higher spectrum efficiency, and support massive connectivity. In this article, according to their signal transmission characteristics, NOMA schemes are classified into four categories: scrambling-based NOMA, interleaving-based NOMA, spreading-based NOMA, and coding-based NOMA. Furthermore, the process and characteristics of the different schemes are summarized, and the performance of the NOMA schemes is evaluated. According to the evaluation results, the scheme with superior performance is identified. By analyzing and comparing the features of these technologies, research guidance is given for future 5G multiple access.

Proceedings ArticleDOI
30 Mar 2017
TL;DR: In this paper, a Tube Convolutional Neural Network (T-CNN) is proposed that recognizes and localizes actions based on 3D convolution features, for action detection in videos.
Abstract: Deep learning has been demonstrated to achieve excellent results for image classification and object detection. However, the impact of deep learning on video analysis has been limited due to the complexity of video data and the lack of annotations. Previous convolutional neural network (CNN) based video action detection approaches usually consist of two major steps: frame-level action proposal generation and association of proposals across frames. Also, most of these methods employ a two-stream CNN framework to handle spatial and temporal features separately. In this paper, we propose an end-to-end deep network called Tube Convolutional Neural Network (T-CNN) for action detection in videos. The proposed architecture is a unified deep network that is able to recognize and localize actions based on 3D convolution features. A video is first divided into equal-length clips, and for each clip a set of tube proposals is generated based on 3D Convolutional Network (ConvNet) features. Finally, the tube proposals of different clips are linked together using network flow, and spatio-temporal action detection is performed using these linked video proposals. Extensive experiments on several video datasets demonstrate the superior performance of T-CNN for classifying and localizing actions in both trimmed and untrimmed videos compared to the state of the art.
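The first T-CNN step, dividing a video into equal-length clips, can be sketched in a few lines (the clip length of 8 frames below is the kind of value 3D-CNN pipelines use, not necessarily the paper's):

```python
def split_into_clips(num_frames, clip_len):
    """Split frame indices 0..num_frames-1 into consecutive equal-length
    clips; a trailing partial clip is dropped, as most 3D-CNN pipelines do."""
    return [list(range(s, s + clip_len))
            for s in range(0, num_frames - clip_len + 1, clip_len)]

clips = split_into_clips(num_frames=20, clip_len=8)   # two full 8-frame clips
```

Each clip then yields tube proposals from 3D ConvNet features, and proposals from consecutive clips are linked across clip boundaries to form whole-video action tubes.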

Journal ArticleDOI
TL;DR: System-level simulation results show that PDMA can support six times more simultaneous connections than conventional orthogonal frequency division multiple access, with at least a 30% improvement in spectrum efficiency.
Abstract: In this paper, pattern division multiple access (PDMA), a novel nonorthogonal multiple access scheme, is proposed for fifth-generation (5G) radio networks. The PDMA pattern defines the mapping of transmitted data to a resource group that can consist of time, frequency, and spatial resources, or any combination of these. The pattern is introduced to differentiate the signals of users sharing the same resources, and is designed with disparate diversity order and sparsity so that PDMA can take advantage of the joint design of transmitter and receiver to improve system performance while keeping detection complexity at a reasonable level. System-level simulation results show that PDMA can support six times more simultaneous connections than conventional schemes, with at least a 30% improvement in spectrum efficiency over orthogonal frequency division multiple access.
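The "disparate diversity order and sparsity" can be illustrated with a PDMA pattern matrix: columns are users, rows are resource elements, and a 1 means the user transmits on that resource, so a column's weight is that user's diversity order. A toy example (this particular matrix is invented for illustration, not taken from the paper):

```python
import numpy as np

# Toy PDMA pattern: 3 resource elements carrying 6 users (200% overload).
# Each column is one user's pattern; its weight is that user's diversity order.
pattern = np.array([[1, 1, 1, 0, 0, 0],
                    [1, 1, 0, 1, 1, 0],
                    [1, 0, 1, 1, 0, 1]])

diversity_orders = pattern.sum(axis=0)          # per-user diversity order
overload = pattern.shape[1] / pattern.shape[0]  # users per resource element
```

The unequal column weights (here 3, 2, 2, 2, 1, 1) are the "disparate diversity order": users with heavier columns enjoy more diversity, while the sparsity keeps multi-user detection tractable.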

Journal ArticleDOI
TL;DR: This paper proposes a heuristic offloading decision algorithm (HODA), which is semidistributed and jointly optimizes the offload decision, and communication and computation resources to maximize system utility, a measure of quality of experience based on task completion time and energy consumption of a mobile device.
Abstract: Proximate cloud computing enables computationally intensive applications on mobile devices, providing a rich user experience. However, remote resource bottlenecks limit the scalability of offloading, requiring optimization of the offloading decision and resource utilization. To this end, in this paper, we leverage the variability in capabilities of mobile devices and user preferences. Our system utility metric is a measure of quality of experience (QoE) based on task completion time and energy consumption of a mobile device. We propose a heuristic offloading decision algorithm (HODA), which is semidistributed and jointly optimizes the offloading decision, and communication and computation resources to maximize system utility. Our main contribution is to reduce the problem to a submodular maximization problem and prove its NP-hardness by decomposing it into two subproblems: 1) optimization of communication and computation resources solved by quasiconvex and convex optimization and 2) offloading decision solved by submodular set function optimization. HODA reduces the complexity of finding the local optimum to $O(K^{3})$ , where $K$ is the number of mobile users. Simulation results show that HODA performs within 5% of the optimal on average. Compared with other solutions, HODA's performance is significantly superior as the number of users increases.
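The second subproblem above is solved by submodular set function optimization, where greedy selection is the classic workhorse (it carries a (1 - 1/e) approximation guarantee for monotone submodular functions under a cardinality constraint). A generic sketch with a stand-in coverage utility — not HODA's actual utility function:

```python
def greedy_maximize(candidates, utility, budget):
    """Greedy maximization of a monotone set function: repeatedly add the
    element with the largest marginal gain until the budget is exhausted."""
    chosen = set()
    while len(chosen) < budget:
        remaining = [c for c in candidates if c not in chosen]
        if not remaining:
            break
        best = max(remaining, key=lambda c: utility(chosen | {c}) - utility(chosen))
        if utility(chosen | {best}) <= utility(chosen):
            break                       # no positive marginal gain left
        chosen.add(best)
    return chosen

# Toy coverage utility: how many distinct items the chosen sets cover.
coverage = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
util = lambda S: len(set().union(*(coverage[s] for s in S))) if S else 0
picked = greedy_maximize(list(coverage), util, budget=2)
```

In the paper's setting, "elements" would be users selected for offloading and the utility their contribution to system QoE; the greedy structure is what keeps the local-optimum search polynomial.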

Journal ArticleDOI
TL;DR: The probabilistically shaped star-shaped 16-ary quadrature amplitude modulation scheme (PS-Star-16QAM) shows superiority over PS-Square-16QAM in terms of BER improvement.
Abstract: We investigate and compare the performance of star-shaped 16-ary quadrature amplitude modulation (Star-16QAM) and square-shaped 16QAM (Square-16QAM) in the probabilistic shaping (PS) and uniform schemes with coherent detection. With the help of PS technology, the bit error ratio (BER) improvement achieved in the PS-Star-16QAM scheme is greater than that of the PS-Square-16QAM when compared with the uniform schemes in our numerical simulation and experiment. Therefore, the PS-Star-16QAM shows superiority over the PS-Square-16QAM in terms of the BER improvement.
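Star-16QAM places its 16 symbols on two concentric 8-point phase rings, unlike the square grid of Square-16QAM. A minimal constellation generator (the ring-radius ratio is a free design parameter, not a value from the paper):

```python
import numpy as np

def star_16qam(ring_ratio=2.0):
    """Star-16QAM constellation: two concentric rings of 8 equally spaced
    phases each. `ring_ratio` is the outer/inner radius ratio."""
    phases = np.exp(1j * 2 * np.pi * np.arange(8) / 8)
    return np.concatenate([phases, ring_ratio * phases])

points = star_16qam()
```

The two-ring geometry is also what makes probabilistic shaping natural here: weighting the inner ring more heavily lowers the average transmit power for the same peak amplitude.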

Journal ArticleDOI
TL;DR: In this article, the effect of nanoparticle fraction on the microstructure and dielectric properties of composite films is investigated, which confirms that these ultimate-sized nanocrystals can perform as superior high-permittivity fillers in nanocomposites for energy storage applications.

Journal ArticleDOI
TL;DR: A facile approach is demonstrated that generates atomically dispersed platinum via ultraviolet photochemical reduction of a frozen chloroplatinic acid solution, yielding single-atom platinum catalysts with high electrocatalytic performance.
Abstract: Photochemical solution-phase reactions have been widely applied for the syntheses of nanocrystals. In particular, tuning of the nucleation and growth of solids has been a major area of focus. Here we demonstrate a facile approach to generate atomically dispersed platinum via photochemical reduction of frozen chloroplatinic acid solution using ultraviolet light. Using this iced-photochemical reduction, the aggregation of atoms is prevented, and single atoms are successfully stabilized. The platinum atoms are deposited on various substrates, including mesoporous carbon, graphene, carbon nanotubes, titanium dioxide nanoparticles, and zinc oxide nanowires. The atomically dispersed platinum on mesoporous carbon exhibits efficient catalytic activity for the electrochemical hydrogen evolution reaction, with an overpotential of only 65 mV at a current density of 100 mA cm−2 and long-time durability (>10 h), superior to state-of-the-art platinum/carbon. This iced-photochemical reduction may be extended to other single atoms, for example gold and silver, as demonstrated in this study. Photochemical synthesis is a popular approach to fabricate metallic nanoparticles, however stabilizing individually-dispersed atoms by this method remains challenging. Here, the authors freeze their precursor solution prior to UV irradiation to obtain atomically-dispersed platinum catalysts with high electrocatalytic performance.

Journal ArticleDOI
TL;DR: The results strongly suggest that a photodetector based on β-Ga2O3 thin-film heterojunction structure can be practically used to detect weak solar-blind signals because of its high photoconductive gain.
Abstract: A solar-blind photodetector based on β-Ga2O3/NSTO (NSTO = Nb:SrTiO3) heterojunctions was fabricated for the first time, and its photoelectric properties were investigated. The device presents a typical positive rectification in the dark, while under 254 nm UV light illumination it shows a negative rectification, which might be caused by the generation of photoinduced electron–hole pairs in the β-Ga2O3 film layer. At zero bias, that is, zero power consumption, the photodetector shows a fast photoresponse time (decay time τd = 0.07 s) and a ratio Iphoto/Idark ≈ 20 under 254 nm light illumination with a light intensity of 45 μW/cm2. Such behaviors are attributed to the separation of photogenerated electron–hole pairs driven by the built-in electric field in the depletion region of the β-Ga2O3/NSTO interface, and their subsequent transport toward the corresponding electrodes. The photocurrent increases linearly with increasing light intensity and applied bias, while the response time decreases with th...

Journal ArticleDOI
TL;DR: A nonheated roll-to-roll process is developed for the continuous production of flexible, extra-large, and transparent silver nanofiber (AgNF) network electrodes whose properties are comparable with those of AgNF networks produced via high-temperature sintering.
Abstract: Electrochromic smart windows (ECSWs) are considered the most promising alternative to traditional dimming devices. However, the electrode technology in ECSWs remains stagnant, with inflexible indium tin oxide and fluorine-doped tin oxide as the main materials in use. Although various complicated production methods, such as high-temperature calcination and sputtering, have been reported, the mass production of flexible and transparent electrodes remains challenging. Here, a nonheated roll-to-roll process is developed for the continuous production of flexible, extra-large, and transparent silver nanofiber (AgNF) network electrodes. The optical and mechanical properties, as well as the electrical conductivity of these products (i.e., 12 Ω sq-1 at 95% transmittance), are comparable with those of AgNF networks produced via high-temperature sintering. Moreover, the as-prepared AgNF network is successfully assembled into an A4-sized ECSW with short switching time, good coloration efficiency, and flexibility.

Proceedings ArticleDOI
01 May 2017
TL;DR: A hybrid deep learning model for spatiotemporal prediction, which includes a novel autoencoder-based deep model for spatial modeling and Long Short-Term Memory units (LSTMs) for temporal modeling is presented.
Abstract: In this paper, we propose to leverage the emerging deep learning techniques for spatiotemporal modeling and prediction in cellular networks, based on big system data. First, we perform a preliminary analysis for a big dataset from China Mobile, and use traffic load as an example to show non-zero temporal autocorrelation and non-zero spatial correlation among neighboring Base Stations (BSs), which motivate us to discover both temporal and spatial dependencies in our study. Then we present a hybrid deep learning model for spatiotemporal prediction, which includes a novel autoencoder-based deep model for spatial modeling and Long Short-Term Memory units (LSTMs) for temporal modeling. The autoencoder-based model consists of a Global Stacked AutoEncoder (GSAE) and multiple Local SAEs (LSAEs), which can offer good representations for input data, reduced model size, and support for parallel and application-aware training. Moreover, we present a new algorithm for training the proposed spatial model. We conducted extensive experiments to evaluate the performance of the proposed model using the China Mobile dataset. The results show that the proposed deep model significantly improves prediction accuracy compared to two commonly used baseline methods, ARIMA and SVR. We also present some results to justify effectiveness of the autoencoder-based spatial model.

Posted Content
TL;DR: In this article, the authors provide a comprehensive tutorial on the main concepts of machine learning, in general, and artificial neural networks (ANNs), in particular, and their potential applications in wireless communications.
Abstract: Next-generation wireless networks must support ultra-reliable, low-latency communication and intelligently manage a massive number of Internet of Things (IoT) devices in real-time, within a highly dynamic environment. This need for stringent communication quality-of-service (QoS) requirements as well as mobile edge and core intelligence can only be realized by integrating fundamental notions of artificial intelligence (AI) and machine learning across the wireless infrastructure and end-user devices. In this context, this paper provides a comprehensive tutorial that introduces the main concepts of machine learning, in general, and artificial neural networks (ANNs), in particular, and their potential applications in wireless communications. For this purpose, we present a comprehensive overview on a number of key types of neural networks that include feed-forward, recurrent, spiking, and deep neural networks. For each type of neural network, we present the basic architecture and training procedure, as well as the associated challenges and opportunities. Then, we provide an in-depth overview on the variety of wireless communication problems that can be addressed using ANNs, ranging from communication using unmanned aerial vehicles to virtual reality and edge caching. For each individual application, we present the main motivation for using ANNs along with the associated challenges while also providing a detailed example for a use case scenario and outlining future works that can be addressed using ANNs. In a nutshell, this article constitutes one of the first holistic tutorials on the development of machine learning techniques tailored to the needs of future wireless networks.

Journal ArticleDOI
TL;DR: This paper forms the energy consumption minimization problem as a mixed interger nonlinear programming (MINLP) problem, which is subject to specific application latency constraints, and proposes a reformulation-linearization-technique-based Branch-and-Bound (RLTBB) method, which can obtain the optimal result or a suboptimal result by setting the solving accuracy.
Abstract: Mobile edge computing (MEC), providing information technology and cloud-computing capabilities within the radio access network, is an emerging technique in fifth-generation networks. MEC can extend the computational capacity of smart mobile devices (SMDs) and economize SMDs' energy consumption by migrating computation-intensive tasks to the MEC server. In this paper, we consider a multi-mobile-user MEC system, where multiple SMDs ask for computation offloading to a MEC server. In order to minimize the energy consumption on SMDs, we jointly optimize the offloading selection, radio resource allocation, and computational resource allocation. We formulate the energy consumption minimization problem as a mixed integer nonlinear programming (MINLP) problem, which is subject to specific application latency constraints. In order to solve the problem, we propose a reformulation-linearization-technique-based Branch-and-Bound (RLTBB) method, which can obtain the optimal result or a suboptimal result by setting the solving accuracy. Since the complexity of RLTBB cannot be guaranteed, we further design a Gini coefficient-based greedy heuristic (GCGH) to solve the MINLP problem in polynomial complexity by degrading the MINLP problem into a convex problem. Extensive simulation results demonstrate the energy saving enhancements of RLTBB and GCGH.
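The core offloading trade-off described above can be sketched per device: execute locally (pay CPU energy) or offload (pay transmit energy plus upload and remote-execution time), subject to a latency budget. The constants below are illustrative placeholders, not the paper's system parameters, and the joint radio/computational resource allocation across users is omitted:

```python
# Toy per-device offloading choice under a latency budget.
# All default parameters are illustrative, not taken from the paper.
def offload_decision(cycles, data_bits, latency_budget,
                     f_local=1e9, kappa=1e-27,   # local CPU freq (Hz), switched capacitance
                     rate=5e6, p_tx=0.5,         # uplink rate (bit/s), transmit power (W)
                     f_mec=10e9):                # MEC server CPU freq (Hz)
    t_local = cycles / f_local
    e_local = kappa * f_local ** 2 * cycles      # dynamic energy of local execution
    t_off = data_bits / rate + cycles / f_mec    # upload + remote execution
    e_off = p_tx * (data_bits / rate)            # device only pays for transmission
    options = []
    if t_local <= latency_budget:
        options.append(("local", e_local))
    if t_off <= latency_budget:
        options.append(("offload", e_off))
    if not options:
        return None                              # infeasible under this budget
    return min(options, key=lambda kv: kv[1])    # pick the energy-cheaper option
```

For example, a task of 10^9 cycles with a 10^6-bit input and a 1.5 s budget is feasible both ways under these placeholder numbers, and offloading wins on energy.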

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the many-body effect, carrier mobility, and device performance of monolayer (ML) hexagonal arsenene and antimonene based on accurate ab initio methods.
Abstract: Two-dimensional (2D) semiconductors are very promising channel materials in next-generation field effect transistors (FETs) due to the enhanced gate electrostatics and smooth surface. Two new 2D materials, arsenene and antimonene (As and Sb analogues of graphene), have been fabricated very recently. Here, we provide the first investigation of the many-body effect, carrier mobility, and device performance of monolayer (ML) hexagonal arsenene and antimonene based on accurate ab initio methods. The quasi-particle band gaps of ML arsenene and antimonene by using the GW approximation are 2.47 and 2.38 eV, respectively. The optical band gaps of ML arsenene and antimonene from the GW-Bethe–Salpeter equation are 1.6 and 1.5 eV, with exciton binding energies of 0.9 and 0.8 eV, respectively. The carrier mobility is found to be considerably low in ML arsenene (21/66 cm2/V·s for electron/hole) and moderate in ML antimonene (150/510 cm2/V·s for electron/hole). In terms of the ab initio quantum transport simulations, t...
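The quoted quasi-particle gaps, optical gaps, and exciton binding energies are tied together by the standard relation (shown here for the arsenene numbers from the abstract, to within rounding):

```latex
E_{\mathrm{opt}} = E_{\mathrm{QP}} - E_b,
\qquad \text{e.g. ML arsenene: } 2.47\,\text{eV} - 0.9\,\text{eV} \approx 1.6\,\text{eV}.
```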

Journal ArticleDOI
TL;DR: This paper generates asymptotically optimal schedules tolerant to out-of-date network knowledge, thereby relieving stringent requirements on feedbacks and able to dramatically reduce feedbacks at no cost of optimality.
Abstract: Mobile edge computing is of particular interest to Internet of Things (IoT), where inexpensive simple devices can get complex tasks offloaded to and processed at powerful infrastructure. Scheduling is challenging due to stochastic task arrivals and wireless channels, congested air interface, and more prominently, prohibitive feedbacks from thousands of devices. In this paper, we generate asymptotically optimal schedules tolerant to out-of-date network knowledge, thereby relieving stringent requirements on feedbacks. A perturbed Lyapunov function is designed to stochastically maximize a network utility balancing throughput and fairness. A knapsack problem is solved per slot for the optimal schedule, provided up-to-date knowledge on the data and energy backlogs of all devices. The knapsack problem is relaxed to accommodate out-of-date network states. Encapsulating the optimal schedule under up-to-date network knowledge, the solution under partial out-of-date knowledge preserves asymptotic optimality, and allows devices to self-nominate for feedback. Corroborated by simulations, our approach is able to dramatically reduce feedbacks at no cost of optimality. The number of devices that need to feed back is reduced to less than 60 out of a total of 5000 IoT devices.
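The per-slot schedule mentioned above reduces to a 0/1 knapsack: pick a subset of devices whose resource demands fit the air-interface capacity while maximizing a backlog-weighted priority. A minimal dynamic-programming sketch, where the weights merely stand in for the paper's Lyapunov drift terms (not reproduced here):

```python
def schedule_slot(devices, capacity):
    """0/1 knapsack over devices for one slot.

    devices: list of (name, resource_blocks, weight), where weight is a
    backlog-derived priority (illustrative, not the paper's exact drift term).
    Returns (total_weight, selected_names).
    """
    # dp[c] = (best total weight, chosen names) using at most c resource blocks
    dp = [(0, [])] * (capacity + 1)
    for name, cost, weight in devices:
        for c in range(capacity, cost - 1, -1):  # descending: each device used once
            cand = (dp[c - cost][0] + weight, dp[c - cost][1] + [name])
            if cand[0] > dp[c][0]:
                dp[c] = cand
    return dp[capacity]
```

With capacity 5 and devices ("a", 2 blocks, weight 3), ("b", 3, 4), ("c", 4, 5), the schedule picks a and b for a total weight of 7.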

Journal ArticleDOI
TL;DR: In this paper, the authors provide an overview of the features of 5G wireless communication systems for use in the mmWave frequency bands, and the channel modeling efforts of many international groups for both licensed and unlicensed applications are described.
Abstract: This paper provides an overview of the features of fifth generation (5G) wireless communication systems now being developed for use in the millimeter wave (mmWave) frequency bands. Early results and key concepts of 5G networks are presented, and the channel modeling efforts of many international groups for both licensed and unlicensed applications are described here. Propagation parameters and channel models for understanding mmWave propagation, such as line-of-sight (LOS) probabilities, large-scale path loss, and building penetration loss, as modeled by various standardization bodies, are compared over the 0.5-100 GHz range.
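Large-scale path loss in these bands is commonly captured by the close-in (CI) free-space-reference-distance model used by several of the standardization groups surveyed. A minimal sketch with a 1 m anchor; the parameter values in the usage note are generic, not the paper's measured fits:

```python
import math

def ci_path_loss_db(f_ghz, d_m, n, shadow_db=0.0):
    """Close-in (CI) free-space-reference path loss model, 1 m anchor.

    PL(f, d) = FSPL(f, 1 m) + 10 n log10(d) + X_sigma   [dB]
    FSPL(f, 1 m) = 32.4 + 20 log10(f_GHz)               [dB]
    n: path loss exponent (~2 in free space, larger for NLOS).
    shadow_db: lognormal shadowing sample X_sigma in dB.
    """
    fspl_1m = 32.4 + 20.0 * math.log10(f_ghz)
    return fspl_1m + 10.0 * n * math.log10(d_m) + shadow_db
```

For instance, at 28 GHz with n = 2, the model adds 20 dB per decade of distance on top of the ~61.3 dB free-space loss at 1 m.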

Proceedings ArticleDOI
19 Aug 2017
TL;DR: A multimodal depressive dictionary learning model is proposed to detect the depressed users on Twitter and a series of experiments are conducted to validate this model, which outperforms (+3% to +10%) several baselines.
Abstract: Depression is a major contributor to the overall global burden of diseases. Traditionally, doctors diagnose depressed people face to face by referring to clinical depression criteria. However, more than 70% of patients would not consult doctors at early stages of depression, which leads to further deterioration of their conditions. Meanwhile, people are increasingly relying on social media to disclose emotions and share their daily lives; thus social media have successfully been leveraged to help detect physical and mental diseases. Inspired by these observations, our work aims at timely depression detection via harvesting social media data. We construct a well-labeled depression and non-depression dataset on Twitter, and extract six depression-related feature groups covering not only the clinical depression criteria, but also online behaviors on social media. With these feature groups, we propose a multimodal depressive dictionary learning model to detect depressed users on Twitter. A series of experiments are conducted to validate this model, which outperforms (+3% to +10%) several baselines. Finally, we analyze a large-scale dataset on Twitter to reveal the underlying online behaviors between depressed and non-depressed users.

Journal ArticleDOI
TL;DR: Policy recommendations for improving ICT are suggested, with a focus on economic growth, trade openness, and facilitation of foreign investment in BRICS countries, and the long-run elasticities between ICT and economic growth are estimated.