Author

Wen Chen

Bio: Wen Chen is an academic researcher at Shanghai Jiao Tong University. He has contributed to research topics including relay networks and MIMO, has an h-index of 33, and has co-authored 292 publications receiving 3,824 citations. His previous affiliations include the University of New South Wales and the University of Alberta.


Papers
Journal ArticleDOI
TL;DR: The proposed distributed BP algorithm has a near-optimal delay performance, approaching that of the high-complexity exhaustive search method; the modified BP offers a good delay performance at low communication complexity; both the average degree distribution and the outage upper bound analysis relying on stochastic geometry match well with the authors' Monte-Carlo simulations.
Abstract: Heterogeneous cellular networks (HCNs) with embedded small cells are considered, where multiple mobile users wish to download network content of different popularity. By caching data in the small-cell base stations, we design distributed caching optimization algorithms via belief propagation (BP) for minimizing the downloading latency. First, we derive the delay-minimization objective function and formulate an optimization problem. Then, we develop a framework for modeling the underlying HCN topology with the aid of a factor graph. Furthermore, a distributed BP algorithm is proposed based on the network's factor graph. Next, we prove that a fixed point of convergence exists for our distributed BP algorithm. In order to reduce the complexity of the BP, we propose a heuristic BP algorithm. Furthermore, we evaluate the average downloading performance of our HCN for different numbers and locations of the base stations and mobile users, with the aid of stochastic geometry theory. By modeling the node distributions using a Poisson point process, we derive expressions for the average factor graph degree distribution, as well as an upper bound on the outage probability for random caching schemes. We also improve the performance of random caching. Our simulations show that 1) the proposed distributed BP algorithm has a near-optimal delay performance, approaching that of the high-complexity exhaustive search method; 2) the modified BP offers a good delay performance at low communication complexity; 3) both the average degree distribution and the outage upper bound analysis relying on stochastic geometry match well with our Monte-Carlo simulations; and 4) the optimization based on the upper bound provides both a better outage and a better delay performance than the benchmarks.

142 citations
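The stochastic-geometry analysis above models node locations as a Poisson point process (PPP). As a hedged illustration of that modeling step only (the density and window size below are arbitrary choices, not values from the paper), a homogeneous PPP on a rectangular window can be sampled like this:

```python
import numpy as np

def sample_ppp(density, width, height, rng):
    """Draw one realization of a homogeneous Poisson point process on a
    width x height window: the point count is Poisson with mean
    density * area, and positions are independently uniform."""
    n = rng.poisson(density * width * height)
    xs = rng.uniform(0.0, width, n)
    ys = rng.uniform(0.0, height, n)
    return np.column_stack((xs, ys))

rng = np.random.default_rng(0)
# Illustrative density: 5 small-cell base stations per unit area.
points = sample_ppp(5.0, 10.0, 10.0, rng)
```

Averaging quantities such as the factor-graph degree over many such realizations is the Monte-Carlo counterpart of the closed-form expressions derived in the paper.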

Posted Content
TL;DR: In this article, a modified artificial bee colony (ABC-PTS) algorithm is proposed to search for a better combination of phase factors; it can significantly reduce the computational complexity for larger numbers of PTS sub-blocks while offering a lower peak-to-average power ratio at the same time.
Abstract: One of the major drawbacks of orthogonal frequency division multiplexing (OFDM) signals is the high peak-to-average power ratio (PAPR) of the transmitted signal. Many PAPR reduction techniques have been proposed in the literature, among which the partial transmit sequence (PTS) technique has received considerable investigation. However, the PTS technique requires an exhaustive search over all combinations of allowed phase factors, whose complexity increases exponentially with the number of sub-blocks. In this paper, a new suboptimal method based on a modified artificial bee colony (ABC-PTS) algorithm is proposed to search for a better combination of phase factors. The ABC-PTS algorithm can significantly reduce the computational complexity for larger numbers of PTS sub-blocks while offering lower PAPR at the same time. Simulation results show that the ABC-PTS algorithm is an efficient method to achieve significant PAPR reduction.

126 citations
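For context, the exponential-complexity exhaustive PTS search that ABC-PTS is designed to replace can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes adjacent partitioning into sub-blocks, binary phase factors, and a random QPSK symbol.

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_exhaustive(X, n_blocks, phases=(1, -1)):
    """Exhaustive-search PTS: split the frequency-domain symbol X into
    adjacent disjoint sub-blocks, try every combination of phase
    factors, and keep the one minimizing the time-domain PAPR."""
    N = len(X)
    parts = [np.zeros(N, complex) for _ in range(n_blocks)]
    for i in range(n_blocks):
        sl = slice(i * N // n_blocks, (i + 1) * N // n_blocks)
        parts[i][sl] = X[sl]
    time_parts = [np.fft.ifft(p) for p in parts]  # IFFT once per sub-block
    best = None
    for combo in product(phases, repeat=n_blocks):  # 2^n_blocks candidates
        x = sum(b * tp for b, tp in zip(combo, time_parts))
        val = papr_db(x)
        if best is None or val < best[0]:
            best = (val, combo)
    return best

rng = np.random.default_rng(1)
X = rng.choice([1, -1], 64) + 1j * rng.choice([1, -1], 64)  # random QPSK symbol
papr_plain = papr_db(np.fft.ifft(X))
papr_opt, combo = pts_exhaustive(X, 4)
```

The all-ones phase combination reproduces the unmodified signal, so the searched PAPR can never exceed the plain one; ABC-PTS explores this same search space with far fewer candidate evaluations.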

Journal ArticleDOI
TL;DR: By exploring the lattice structure of SCMA code words, a low-complexity decoding algorithm based on list sphere decoding (LSD) is proposed that can reduce the decoding complexity substantially while the performance loss compared with the existing algorithm is negligible.
Abstract: Sparse code multiple access (SCMA) is one of the most promising non-orthogonal multiple access techniques for future 5G communications. Compared with some other non-orthogonal multiple access techniques, such as low density signature, SCMA can achieve better performance due to the shaping gain of the SCMA code words. However, despite the sparsity of the code words, the decoding complexity of the current message passing algorithm utilized by SCMA is still prohibitively high. In this paper, by exploring the lattice structure of SCMA code words, we propose a low-complexity decoding algorithm based on list sphere decoding (LSD). The LSD avoids the exhaustive search over all possible hypotheses and only considers signals within a hypersphere. As LSD can be viewed as a depth-first tree search algorithm, we further propose several methods to prune redundant visited nodes in order to reduce the size of the search tree. Simulation results show that the proposed algorithm can reduce the decoding complexity substantially while the performance loss compared with the existing algorithm is negligible.

123 citations
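The depth-first, radius-pruned tree search underlying sphere decoding can be sketched on a generic small linear model. This is a hedged toy illustration of the search principle only, not the paper's SCMA-specific list decoder: it assumes a square complex channel matrix and a two-point alphabet.

```python
import numpy as np

def sphere_decode(y, H, alphabet, radius=np.inf):
    """Depth-first sphere decoding of min ||y - H x||^2 with entries of x
    drawn from a finite alphabet. QR-decompose H so the metric accumulates
    level by level; branches whose partial distance already exceeds the
    best full-length candidate are pruned."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    best = {"x": None, "d": radius}

    def search(level, partial, dist):
        if dist >= best["d"]:
            return  # prune: this branch cannot beat the best leaf found
        if level < 0:
            best["x"], best["d"] = partial.copy(), dist
            return
        for s in alphabet:
            partial[level] = s
            inc = abs(z[level] - R[level, level:] @ partial[level:]) ** 2
            search(level - 1, partial, dist + inc)

    search(n - 1, np.zeros(n, complex), 0.0)
    return best["x"], best["d"]

rng = np.random.default_rng(2)
H = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
x_true = np.array([1, -1, 1], complex)
x_hat, dist = sphere_decode(H @ x_true, H, [1 + 0j, -1 + 0j])  # noiseless case
```

In the noiseless case the search recovers the transmitted vector with zero residual; the paper's pruning rules further shrink the visited tree for SCMA code words.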

Journal ArticleDOI
TL;DR: This paper establishes a holistic power dissipation model for OFDMA systems, including the transmission power, signal processing power, and circuit power from both the transmitter and receiver sides, whereas existing works consider power consumption on only one side and fail to capture the impact of subcarriers and users on the system EE.
Abstract: This paper investigates the joint transmitter and receiver optimization for energy efficiency (EE) in orthogonal frequency-division multiple-access (OFDMA) systems. We first establish a holistic power dissipation model for OFDMA systems, including the transmission power, signal processing power, and circuit power from both the transmitter and receiver sides, whereas existing works consider power consumption on only one side and fail to capture the impact of subcarriers and users on the system EE. The EE maximization problem is formulated as a combinatorial fractional problem that is NP-hard. To make it tractable, we transform the fractional-form problem into a subtractive-form one by using the Dinkelbach transformation and then propose a joint optimization method, which leads to the asymptotically optimal solution. To reduce the computational complexity, we decompose the joint optimization into two consecutive steps, where the key idea lies in exploring the inherent fractional structure of the introduced individual EE and the system EE. In addition, we provide a sufficient condition under which our proposed two-step method is optimal. Numerical results demonstrate the effectiveness of the proposed methods, and the effect of imperfect channel state information is also characterized.

108 citations
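The fractional-to-subtractive transformation mentioned above is Dinkelbach's method: a ratio maximization max f(x)/g(x) is solved by repeatedly maximizing f(x) - lam*g(x) and updating lam with the achieved ratio. A minimal sketch on a toy scalar energy-efficiency problem (the circuit power value and the grid-search inner solver below are illustrative assumptions, not the paper's setup):

```python
import math

def dinkelbach(f, g, argmax_sub, tol=1e-9, max_iter=100):
    """Dinkelbach's method for max f(x)/g(x) with g > 0: solve the
    subtractive-form problem max_x f(x) - lam*g(x), then update
    lam = f(x*)/g(x*) until the subtractive optimum is ~0."""
    lam = 0.0
    for _ in range(max_iter):
        x = argmax_sub(lam)
        if f(x) - lam * g(x) < tol:
            return x, f(x) / g(x)
        lam = f(x) / g(x)
    return x, f(x) / g(x)

# Toy EE example: rate log2(1 + p) over total power p + pc.
pc = 1.0  # illustrative circuit power
f = lambda p: math.log2(1 + p)
g = lambda p: p + pc
# Inner subtractive problem solved by a simple grid search over power.
grid = [i * 1e-3 for i in range(1, 10001)]
argmax_sub = lambda lam: max(grid, key=lambda p: f(p) - lam * g(p))
p_opt, ee = dinkelbach(f, g, argmax_sub)
```

For this toy problem the optimum is known in closed form (the ratio log2(u)/u with u = 1 + p peaks at u = e), which makes the sketch easy to sanity-check.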

Journal ArticleDOI
TL;DR: This paper considers wireless powered communication networks where multiple users harvest energy from a dedicated power station and then communicate with an information receiving station in a time-division manner and transforms the resulting non-convex optimization problem into a two-layer subtractive-form optimization problem, which leads to an efficient approach for obtaining the optimal solution.
Abstract: In this paper, we consider wireless powered communication networks (WPCNs) in which multiple users harvest energy from a dedicated power station and then communicate with an information receiving station in a time-division manner. Our goal is to maximize the weighted sum of the user energy efficiencies (WSUEE). In contrast to existing system-centric approaches, the choice of the weights provides flexibility for balancing the individual user EEs via joint time allocation and power control. We first investigate the WSUEE maximization problem without quality-of-service constraints. Closed-form expressions for the WSUEE as well as the optimal time allocation and power control are derived. Based on this result, we characterize the EE tradeoff between the users in the WPCN. Subsequently, we study the WSUEE maximization problem in a generalized WPCN where each user is equipped with an initial amount of energy and also has a minimum throughput requirement. By exploiting the sum-of-ratios structure of the objective function, we transform the resulting non-convex optimization problem into a two-layer subtractive-form optimization problem, which leads to an efficient approach for obtaining the optimal solution. The simulation results verify our theoretical findings and demonstrate the effectiveness of the proposed approach.

105 citations


Cited by
Journal ArticleDOI
01 May 1975
TL;DR: The Fundamentals of Queueing Theory, Fourth Edition as discussed by the authors provides a comprehensive overview of simple and more advanced queuing models, with a self-contained presentation of key concepts and formulae.
Abstract: Praise for the Third Edition: "This is one of the best books available. Its excellent organizational structure allows quick reference to specific models and its clear presentation . . . solidifies the understanding of the concepts being presented." (IIE Transactions on Operations Engineering) Thoroughly revised and expanded to reflect the latest developments in the field, Fundamentals of Queueing Theory, Fourth Edition continues to present the basic statistical principles that are necessary to analyze the probabilistic nature of queues. Rather than presenting a narrow focus on the subject, this update illustrates the wide-reaching, fundamental concepts in queueing theory and its applications to diverse areas such as computer science, engineering, business, and operations research. This update takes a numerical approach to understanding and making probable estimations relating to queues, with a comprehensive outline of simple and more advanced queueing models. Newly featured topics of the Fourth Edition include: retrial queues; approximations for queueing networks; numerical inversion of transforms; and determining the appropriate number of servers to balance quality and cost of service. Each chapter provides a self-contained presentation of key concepts and formulae, allowing readers to work with each section independently, while a summary table at the end of the book outlines the types of queues that have been discussed and their results. In addition, two new appendices have been added, discussing transforms and generating functions as well as the fundamentals of differential and difference equations. New examples are now included along with problems that incorporate QtsPlus software, which is freely available via the book's related Web site. With its accessible style and wealth of real-world examples, Fundamentals of Queueing Theory, Fourth Edition is an ideal book for courses on queueing theory at the upper-undergraduate and graduate levels.
It is also a valuable resource for researchers and practitioners who analyze congestion in the fields of telecommunications, transportation, aviation, and management science.

2,562 citations
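As a small taste of the simple models the book covers, the standard steady-state formulas for the M/M/1 queue (Poisson arrivals, exponential service, single server) can be computed directly; the rates below are arbitrary example values, and Little's law L = lam * W ties the metrics together.

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics of the M/M/1 queue with arrival rate lam and
    service rate mu; requires utilization rho = lam/mu < 1 for stability."""
    rho = lam / mu
    assert rho < 1, "queue is unstable"
    return {
        "rho": rho,
        "L": rho / (1 - rho),        # mean number in system
        "W": 1 / (mu - lam),         # mean time in system
        "Lq": rho ** 2 / (1 - rho),  # mean number waiting in queue
        "Wq": rho / (mu - lam),      # mean waiting time in queue
    }

m = mm1_metrics(lam=2.0, mu=3.0)  # rho = 2/3, L = 2, W = 1
```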

Book
01 Jan 1996
TL;DR: This paper reviews Collected Works of John Tate, Parts I and II, edited by Barry Mazur and Jean-Pierre Serre, two volumes collecting the work of the 2010 Abel Prize laureate in number theory.
Abstract: This is a review of Collected Works of John Tate. Parts I, II, edited by Barry Mazur and Jean-Pierre Serre. American Mathematical Society, Providence, Rhode Island, 2016. For several decades it has been clear to the friends and colleagues of John Tate that a “Collected Works” was merited. The award of the Abel Prize to Tate in 2010 added impetus, and finally, in Tate’s ninety-second year we have these two magnificent volumes, edited by Barry Mazur and Jean-Pierre Serre. Beyond Tate’s published articles, they include five unpublished articles and a selection of his letters, most accompanied by Tate’s comments, and a collection of photographs of Tate. For an overview of Tate’s work, the editors refer the reader to [4]. Before discussing the volumes, I describe some of Tate’s work. 1. Hecke L-series and Tate’s thesis Like many budding number theorists, Tate’s favorite theorem when young was Gauss’s law of quadratic reciprocity. When he arrived at Princeton as a graduate student in 1946, he was fortunate to find there the person, Emil Artin, who had discovered the most general reciprocity law, so solving Hilbert’s ninth problem. By 1920, the German school of algebraic number theorists (Hilbert, Weber, . . .) together with its brilliant student Takagi had succeeded in classifying the abelian extensions of a number field K: to each group I of ideal classes in K, there is attached an extension L of K (the class field of I); the group I determines the arithmetic of the extension L/K, and the Galois group of L/K is isomorphic to I. Artin’s contribution was to prove (in 1927) that there is a natural isomorphism from I to the Galois group of L/K. When the base field contains an appropriate root of 1, Artin’s isomorphism gives a reciprocity law, and all possible reciprocity laws arise this way. In the 1930s, Chevalley reworked abelian class field theory. 
In particular, he replaced "ideals" with his "idèles", which greatly clarified the relation between the local and global aspects of the theory. For his thesis, Artin suggested that Tate do the same for Hecke L-series. When Hecke proved that the abelian L-functions of number fields (generalizations of Dirichlet's L-functions) have an analytic continuation throughout the plane with a functional equation of the expected type, he saw that his methods applied even to a new kind of L-function, now named after him. Once Tate had developed his harmonic analysis of local fields and of the idèle group, he was able to prove analytic continuation and functional equations for all the relevant L-series without Hecke's complicated theta-formulas.

2,014 citations

Journal ArticleDOI
TL;DR: This work presents a comprehensive survey of the advances with ABC and its applications, and it is hoped that this survey will be beneficial for researchers studying SI, particularly the ABC algorithm.
Abstract: Swarm intelligence (SI) is briefly defined as the collective behaviour of decentralized and self-organized swarms. Well-known examples of such swarms are bird flocks, fish schools, and colonies of social insects such as termites, ants, and bees. In the 1990s, two approaches in particular, based on ant colonies and on fish schooling/bird flocking, attracted great interest from researchers. Although the self-organization features required by SI are strongly and clearly present in honey bee colonies, researchers only began studying the behaviour of these swarm systems to devise new intelligent approaches from the beginning of the 2000s. Over the past decade, several algorithms have been developed based on different intelligent behaviours of honey bee swarms. Among those, the artificial bee colony (ABC) algorithm is the one that has been most widely studied and applied to solve real-world problems so far. The number of researchers interested in the ABC algorithm is increasing rapidly. This work presents a comprehensive survey of the advances with ABC and its applications. It is hoped that this survey will be beneficial for researchers studying SI, particularly the ABC algorithm.

1,645 citations
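The three bee roles the survey describes (employed bees, onlookers, scouts) can be sketched in a minimal minimizer. This is a hedged toy version of the classical ABC scheme, not any specific surveyed variant; the colony size, abandonment limit, and test function are illustrative choices.

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=0):
    """Minimal artificial-bee-colony sketch: employed bees perturb food
    sources, onlookers revisit sources in proportion to fitness, and
    scouts replace sources abandoned after `limit` failed improvements."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    vals = [f(x) for x in foods]
    trials = [0] * n_food
    best_x, best_v = None, float("inf")

    def try_neighbour(i):
        j = rng.randrange(dim)  # perturb one coordinate toward/away a peer
        k = rng.choice([p for p in range(n_food) if p != i])
        cand = foods[i][:]
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = min(max(cand[j], lo), hi)
        v = f(cand)
        if v < vals[i]:
            foods[i], vals[i], trials[i] = cand, v, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                    # employed-bee phase
            try_neighbour(i)
        fits = [1 / (1 + v) for v in vals]         # assumes f >= 0
        total = sum(fits)
        for _ in range(n_food):                    # onlooker phase
            r, acc = rng.uniform(0, total), 0.0
            for i, ft in enumerate(fits):
                acc += ft
                if acc >= r:
                    try_neighbour(i)
                    break
        for i in range(n_food):                    # scout phase
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                vals[i], trials[i] = f(foods[i]), 0
        i = min(range(n_food), key=lambda i: vals[i])
        if vals[i] < best_v:                       # track global best
            best_x, best_v = foods[i][:], vals[i]

    return best_x, best_v

x_best, v_best = abc_minimize(lambda x: sum(t * t for t in x), 3, (-5, 5))
```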

Journal ArticleDOI
01 Apr 2000
TL;DR: The standard sampling paradigm is extended for a representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets, and variations of sampling that can be understood from the same unifying perspective are reviewed.
Abstract: This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefitted from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we reinterpret Shannon's sampling procedure as an orthogonal projection onto the subspace of band-limited functions. We then extend the standard sampling paradigm for a representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler, and possibly more realistic, interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) prefilters that are not necessarily ideal low-pass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary, e.g., nonbandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multiwavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned.

1,461 citations
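Shannon's reconstruction formula, the starting point the paper reinterprets as an orthogonal projection, interpolates uniform samples with shifted sinc kernels: x(t) = sum_n x(nT) * sinc((t - nT)/T). A minimal sketch (the tone frequency, sampling rate, and evaluation instant are illustrative; a finite sum only approximates the infinite series):

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Truncated Shannon reconstruction from uniform samples x(nT):
    x(t) ~ sum_n samples[n] * sinc((t - n*T) / T), using numpy's
    normalized sinc(x) = sin(pi*x)/(pi*x)."""
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc((t - n * T) / T))

# Bandlimited test signal: a 1 Hz tone sampled well above Nyquist.
T = 0.1                                   # fs = 10 Hz, Nyquist 5 Hz
f0 = 1.0
n = np.arange(200)
samples = np.sin(2 * np.pi * f0 * n * T)  # covers t in [0, 19.9]
x_hat = sinc_reconstruct(samples, T, 7.53)  # off-grid instant mid-record
```

Evaluating away from the record edges keeps the truncation error of the finite sum small; at the sample instants themselves the formula returns the samples (almost) exactly.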

Journal ArticleDOI
TL;DR: A comprehensive review of the domain of physical layer security in multiuser wireless networks, with an overview of the foundations dating back to the pioneering work of Shannon and Wyner on information-theoretic security and observations on potential research directions in this area.
Abstract: This paper provides a comprehensive review of the domain of physical layer security in multiuser wireless networks. The essential premise of physical layer security is to enable the exchange of confidential messages over a wireless medium in the presence of unauthorized eavesdroppers, without relying on higher-layer encryption. This can be achieved primarily in two ways: without the need for a secret key by intelligently designing transmit coding strategies, or by exploiting the wireless communication medium to develop secret keys over public channels. The survey begins with an overview of the foundations dating back to the pioneering work of Shannon and Wyner on information-theoretic security. We then describe the evolution of secure transmission strategies from point-to-point channels to multiple-antenna systems, followed by generalizations to multiuser broadcast, multiple-access, interference, and relay networks. Secret-key generation and establishment protocols based on physical layer mechanisms are subsequently covered. Approaches for secrecy based on channel coding design are then examined, along with a description of inter-disciplinary approaches based on game theory and stochastic geometry. The associated problem of physical layer message authentication is also briefly introduced. The survey concludes with observations on potential research directions in this area.

1,294 citations
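A concrete quantity from the Wyner line of work the survey builds on is the secrecy capacity of the degraded Gaussian wiretap channel: the main-channel capacity minus the eavesdropper-channel capacity, floored at zero. A minimal sketch (the SNR values are illustrative):

```python
import math

def gaussian_secrecy_capacity(snr_main, snr_eve):
    """Secrecy capacity (bits/channel use) of the degraded Gaussian
    wiretap channel: [log2(1 + SNR_main) - log2(1 + SNR_eve)]^+."""
    cs = math.log2(1 + snr_main) - math.log2(1 + snr_eve)
    return max(cs, 0.0)

c = gaussian_secrecy_capacity(snr_main=15.0, snr_eve=3.0)  # log2(16) - log2(4)
```

When the eavesdropper's channel is at least as good as the legitimate one, the secrecy capacity is zero, which is what motivates the transmit-design and key-generation strategies the survey covers.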