
Showing papers on "Base station" published in 2015


Journal ArticleDOI
TL;DR: This paper surveys the state-of-the-art literature on C-RAN and can serve as a starting point for anyone willing to understand C-RAN architecture and advance the research on the network.
Abstract: Cloud Radio Access Network (C-RAN) is a novel mobile network architecture which can address a number of challenges operators face while trying to support growing end-users' needs. The main idea behind C-RAN is to pool the Baseband Units (BBUs) from multiple base stations into a centralized BBU pool for statistical multiplexing gain, while shifting the burden to the high-speed wireline transmission of In-phase and Quadrature (IQ) data. C-RAN enables energy-efficient network operation and possible cost savings on baseband resources. Furthermore, it improves network capacity by performing load balancing and cooperative processing of signals originating from several base stations. This paper surveys the state-of-the-art literature on C-RAN. It can serve as a starting point for anyone willing to understand C-RAN architecture and advance the research on C-RAN.
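
The statistical multiplexing gain from BBU pooling can be illustrated with a toy calculation: instead of dimensioning every site for its own peak load, the pool is dimensioned for the peak of the aggregate load. A minimal sketch with synthetic per-site traffic traces (not measurements from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-site traffic traces (arbitrary load units per hour), 10 sites over one week.
num_sites, num_hours = 10, 24 * 7
traffic = rng.gamma(shape=2.0, scale=5.0, size=(num_sites, num_hours))

# Per-site provisioning: every site is dimensioned for its own peak.
per_site_capacity = traffic.max(axis=1).sum()

# Pooled provisioning: the BBU pool is dimensioned for the peak of the aggregate load.
pooled_capacity = traffic.sum(axis=0).max()

print(f"Per-site provisioning: {per_site_capacity:.1f} units")
print(f"Pooled provisioning:   {pooled_capacity:.1f} units")
print(f"Statistical multiplexing gain: {per_site_capacity / pooled_capacity:.2f}x")
```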

1,516 citations


Proceedings ArticleDOI
08 Jun 2015
TL;DR: In this article, the authors proposed a novel association algorithm and proved its superiority w.r.t. prior art by means of simulations that are based on Vodafone's small cell trial network and employ a high-resolution pathloss prediction and realistic user distributions.
Abstract: Up to the 4th Generation (4G) 3GPP cellular systems, a user equipment's (UE) cell association has been based on the downlink received power from the strongest base station. Recent work has shown that - with an increasing degree of heterogeneity in emerging 5G systems - such an approach is dramatically suboptimal, advocating for an independent association of the downlink and uplink, where the downlink is served by the macro cell and the uplink by the nearest small cell. In this paper, we advance prior art by explicitly considering the cell load as well as the available backhaul capacity during the association process. We introduce a novel association algorithm and prove its superiority w.r.t. prior art by means of simulations that are based on Vodafone's small cell trial network and employ a high-resolution pathloss prediction and realistic user distributions. We also study the effect that different power control settings have on the performance of our algorithm.
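
The shift from strongest-cell association toward a load- and backhaul-aware rule can be caricatured as a scoring function over candidate cells. The sketch below uses hypothetical cells, weights and a demand figure purely for illustration; it is not the algorithm evaluated on the Vodafone trial network.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    name: str
    rx_power_dbm: float            # downlink received power seen by the UE
    load: float                    # fraction of resources already in use, 0..1
    backhaul_capacity_mbps: float  # remaining backhaul capacity

def association_score(cell: Cell, demand_mbps: float = 10.0) -> float:
    """Toy score: received power discounted by cell load and rewarded for backhaul headroom."""
    headroom = max(cell.backhaul_capacity_mbps - demand_mbps, 1e-3)
    return (cell.rx_power_dbm
            + 30.0 * (1.0 - cell.load)                  # prefer lightly loaded cells
            + 5.0 * min(headroom / demand_mbps, 3.0))   # prefer cells with backhaul headroom

cells = [
    Cell("macro",   rx_power_dbm=-80.0,  load=0.9, backhaul_capacity_mbps=500.0),
    Cell("small-1", rx_power_dbm=-95.0,  load=0.2, backhaul_capacity_mbps=80.0),
    Cell("small-2", rx_power_dbm=-100.0, load=0.1, backhaul_capacity_mbps=20.0),
]

print("Strongest-cell rule picks:      ", max(cells, key=lambda c: c.rx_power_dbm).name)
print("Load/backhaul-aware rule picks: ", max(cells, key=association_score).name)
```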

756 citations


Journal ArticleDOI
TL;DR: Analytical results demonstrate that the use of SWIPT will not jeopardize the diversity gain compared to the conventional NOMA and confirm that the opportunistic use of node locations for user selection can achieve low outage probability and deliver superior throughput in comparison to the random selection scheme.
Abstract: In this paper, the application of simultaneous wireless information and power transfer (SWIPT) to non-orthogonal multiple access (NOMA) networks in which users are spatially randomly located is investigated. A new cooperative SWIPT NOMA protocol is proposed, in which near NOMA users that are close to the source act as energy harvesting relays to help far NOMA users. Since the locations of users have a significant impact on the performance, three user selection schemes based on the user distances from the base station are proposed. To characterize the performance of the proposed selection schemes, closed-form expressions for the outage probability and system throughput are derived. These analytical results demonstrate that the use of SWIPT will not jeopardize the diversity gain compared to the conventional NOMA. The proposed results confirm that the opportunistic use of node locations for user selection can achieve low outage probability and deliver superior throughput in comparison to the random selection scheme.

595 citations


Journal ArticleDOI
TL;DR: NOMA can be expected to efficiently exploit the near-far effect experienced in cellular environments and offer a better tradeoff between system efficiency and user fairness than orthogonal multiple access (OMA), which is widely used in 3.9 and 4G mobile communication systems.
Abstract: This paper presents our investigation of non-orthogonal multiple access (NOMA) as a novel and promising power-domain user multiplexing scheme for future radio access. Based on information theory, we can expect that NOMA with a successive interference canceller (SIC) applied to the receiver side will offer a better tradeoff between system efficiency and user fairness than orthogonal multiple access (OMA), which is widely used in 3.9 and 4G mobile communication systems. This improvement becomes especially significant when the channel conditions among the non-orthogonally multiplexed users are significantly different. Thus, NOMA can be expected to efficiently exploit the near-far effect experienced in cellular environments. In this paper, we describe the basic principle of NOMA in both the downlink and uplink and then present our proposed NOMA scheme for the scenario where the base station is equipped with multiple antennas. Simulation results show the potential system-level throughput gains of NOMA.
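
For a two-user downlink, the power-domain multiplexing with SIC can be made concrete with the standard rate expressions: the far (weak) user decodes its signal treating the near user's signal as noise, while the near (strong) user cancels the far user's signal before decoding its own. A small sketch under hypothetical channel gains and power split, not the paper's simulation assumptions:

```python
import numpy as np

def noma_rates(g_strong, g_weak, p_total, alpha, noise=1.0):
    """Two-user downlink NOMA with SIC at the strong user.

    alpha: fraction of power given to the weak (far) user, typically alpha > 0.5.
    Returns (rate_strong, rate_weak) in bit/s/Hz.
    """
    p_weak, p_strong = alpha * p_total, (1 - alpha) * p_total
    # Weak user decodes its own signal, treating the strong user's signal as interference.
    r_weak = np.log2(1 + p_weak * g_weak / (p_strong * g_weak + noise))
    # Strong user first cancels the weak user's signal (SIC), then decodes its own.
    r_strong = np.log2(1 + p_strong * g_strong / noise)
    return r_strong, r_weak

def oma_rates(g_strong, g_weak, p_total, noise=1.0):
    """Orthogonal baseline: each user gets half the bandwidth with full power on it."""
    return (0.5 * np.log2(1 + p_total * g_strong / noise),
            0.5 * np.log2(1 + p_total * g_weak / noise))

rn = noma_rates(g_strong=10.0, g_weak=0.5, p_total=10.0, alpha=0.8)
ro = oma_rates(g_strong=10.0, g_weak=0.5, p_total=10.0)
print(f"NOMA rates (strong, weak): {rn[0]:.2f}, {rn[1]:.2f} bit/s/Hz")
print(f"OMA  rates (strong, weak): {ro[0]:.2f}, {ro[1]:.2f} bit/s/Hz")
```

With these example gains the NOMA pair outperforms the orthogonal split for both users, which is exactly the near-far effect the abstract refers to.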

518 citations


Journal ArticleDOI
TL;DR: This paper provides an integrated view on MAC layer issues for cellular networks, identifies new challenges and tradeoffs, and provides novel insights and solution approaches.
Abstract: The millimeter-wave (mmWave) frequency band is seen as a key enabler of multigigabit wireless access in future cellular networks. In order to overcome the propagation challenges, mmWave systems use a large number of antenna elements both at the base station and at the user equipment, which leads to high directivity gains, fully directional communications, and possible noise-limited operations. The fundamental differences between mmWave networks and traditional ones challenge the classical design constraints, objectives, and available degrees of freedom. This paper addresses the implications that highly directional communication has on the design of an efficient medium access control (MAC) layer. The paper discusses key MAC layer issues, such as synchronization, random access, handover, channelization, interference management, scheduling, and association. This paper provides an integrated view on MAC layer issues for cellular networks, identifies new challenges and tradeoffs, and provides novel insights and solution approaches.

430 citations


Journal ArticleDOI
TL;DR: A novel logical structure of C-RAN that consists of a physical plane, a control plane, and a service plane is presented that facilitates the utilization of new communication and computer techniques.
Abstract: In the era of mobile Internet, mobile operators are facing pressure from ever-increasing capital expenditures and operating expenses with much less growth of income. Cloud Radio Access Network (C-RAN) is expected to be a candidate next-generation access network technique that can solve the operators' dilemma. In this article, on the basis of a general survey of C-RAN, we present a novel logical structure of C-RAN that consists of a physical plane, a control plane, and a service plane. Compared to the traditional architecture, the proposed C-RAN architecture emphasizes the notions of a service cloud and service-oriented resource scheduling and management, and thus facilitates the utilization of new communication and computing techniques. With the extensive computation resources offered by the cloud platform, a coordinated user scheduling algorithm and a parallel optimum precoding scheme are proposed, which can achieve better performance. The proposed scheme opens another door to designing new algorithms that match the C-RAN architecture well, instead of merely migrating existing algorithms from the traditional architecture to C-RAN.

426 citations


Journal ArticleDOI
TL;DR: A new software-defined architecture, called SoftAir, for next generation (5G) wireless systems, is introduced, where the novel ideas of network function cloudification and network virtualization are exploited to provide a scalable, flexible and resilient network architecture.

269 citations


Patent
Irwin Gerszberg
16 Sep 2015
TL;DR: In this article, the authors describe a wireless communication node that receives instructions in a control channel directing it to utilize a spectral segment at a first carrier frequency to communicate with a mobile communication device.
Abstract: Aspects of the subject disclosure may include, for example, a wireless communication node that receives instructions in a control channel directing it to utilize a spectral segment at a first carrier frequency to communicate with a mobile communication device. Responsive to the instructions, the wireless communication node receives a modulated signal in the spectral segment at a second carrier frequency from the base station, the modulated signal including communications data provided by the base station. The wireless communication node down-shifts the modulated signal at the second carrier frequency to the first carrier frequency, and wirelessly transmits the modulated signal at the first carrier frequency to the mobile communication device. Other embodiments are disclosed.

250 citations


Journal ArticleDOI
TL;DR: This paper addresses the problem of energy-efficient resource allocation in the downlink of a cellular orthogonal frequency division multiple access system and shows that the maximization of the energy efficiency is approximately equivalent to the maximization of the spectral efficiency for small values of the maximum transmit power.
Abstract: This paper addresses the problem of energy-efficient resource allocation in the downlink of a cellular orthogonal frequency division multiple access system. Three definitions of energy efficiency are considered for system design, accounting for both the radiated and the circuit power. User scheduling and power allocation are optimized across a cluster of coordinated base stations with a constraint on the maximum transmit power (either per subcarrier or per base station). The asymptotic noise-limited regime is discussed as a special case. Results show that the maximization of the energy efficiency is approximately equivalent to the maximization of the spectral efficiency for small values of the maximum transmit power, while there is a wide range of values of the maximum transmit power for which a moderate reduction of the data rate provides large savings in terms of dissipated energy. In addition, the performance gap among the considered resource allocation strategies is reduced as the out-of-cluster interference increases.
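
The reported behaviour, with energy-efficiency maximization tracking spectral-efficiency maximization only at low transmit power, can be reproduced qualitatively on a single link with the usual bits-per-Joule definition. The sketch below uses made-up circuit power and channel values, not the paper's multi-cell OFDMA model:

```python
import numpy as np

bandwidth_hz = 10e6
noise_psd_w_per_hz = 4e-21        # roughly thermal noise density
circuit_power_w = 1.0             # hypothetical static/circuit power
channel_gain = 1e-10              # hypothetical path gain

p_grid = np.logspace(-3, 2, 500)  # radiated power from 1 mW to 100 W
rate_bps = bandwidth_hz * np.log2(1 + p_grid * channel_gain / (noise_psd_w_per_hz * bandwidth_hz))
spectral_eff = rate_bps / bandwidth_hz              # bit/s/Hz, keeps growing with power
energy_eff = rate_bps / (circuit_power_w + p_grid)  # bit/J, peaks at a finite power

p_star = p_grid[np.argmax(energy_eff)]
print(f"SE at 100 W: {spectral_eff[-1]:.2f} bit/s/Hz (still increasing)")
print(f"EE maximized at p* = {p_star:.2f} W, giving {energy_eff.max() / 1e6:.1f} Mbit/J")
```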

245 citations


Posted Content
TL;DR: In this paper, the cache-based content delivery in a three-tier heterogeneous network (HetNet), where base stations (BSs), relays and device-to-device (D2D) pairs are included, is investigated.
Abstract: Caching popular multimedia content is a promising way to unleash the ultimate potential of wireless networks. In this paper, we contribute to proposing and analyzing cache-based content delivery in a three-tier heterogeneous network (HetNet), where base stations (BSs), relays and device-to-device (D2D) pairs are included. We advocate proactively caching the popular contents in the relays and in part of the users with caching ability when the network is off-peak. The cached contents can be reused for frequent access to offload the cellular network traffic. The node locations are first modeled as mutually independent Poisson Point Processes (PPPs) and the corresponding content access protocol is developed. The average ergodic rate and outage probability in the downlink are then analyzed theoretically. We further derive the throughput and the delay based on the multiclass processor-sharing queue model and the continuous-time Markov process. According to the critical condition of the steady state in the HetNet, the maximum traffic load and the global throughput gain are investigated. Moreover, the impacts of some key network characteristics, e.g., the heterogeneity of multimedia contents, node densities and the limited caching capacities, on the system performance are elaborated to provide valuable insights.
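
The spatial model, mutually independent PPPs for the BS, relay and D2D tiers, can be sampled in a few lines; the densities below are arbitrary placeholders, not the ones analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_ppp(density_per_km2: float, area_km: float = 2.0) -> np.ndarray:
    """Sample a homogeneous Poisson Point Process on an area_km x area_km square."""
    num_points = rng.poisson(density_per_km2 * area_km ** 2)
    return rng.uniform(0.0, area_km, size=(num_points, 2))

bs_xy    = sample_ppp(density_per_km2=4)    # base stations
relay_xy = sample_ppp(density_per_km2=12)   # caching relays
d2d_xy   = sample_ppp(density_per_km2=40)   # D2D transmitters with caching ability

print(len(bs_xy), "BSs,", len(relay_xy), "relays,", len(d2d_xy), "D2D transmitters sampled")
```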

240 citations


Journal ArticleDOI
TL;DR: This paper proposes a directional cell discovery procedure where base stations periodically transmit synchronization signals, potentially in time-varying random directions, to scan the angular space and reveals two key findings: 1) digital beamforming can significantly outperform analog beamforming even when digital beamforming uses very low quantization to compensate for the additional power requirements and 2) omnidirectional transmissions of the synchronization signals from the base station generally outperform random directional scanning.
Abstract: The acute disparity between increasing bandwidth demand and available spectrum has brought millimeter wave (mmWave) bands to the forefront of candidate solutions for the next-generation cellular networks. Highly directional transmissions are essential for cellular communication in these frequencies to compensate for higher isotropic path loss. This reliance on directional beamforming, however, complicates initial cell search since mobiles and base stations must jointly search over a potentially large angular directional space to locate a suitable path to initiate communication. To address this problem, this paper proposes a directional cell discovery procedure where base stations periodically transmit synchronization signals, potentially in time-varying random directions, to scan the angular space. Detectors for these signals are derived based on a Generalized Likelihood Ratio Test (GLRT) under various signal and receiver assumptions. The detectors are then simulated under realistic design parameters and channels based on actual experimental measurements at 28 GHz in New York City. The study reveals two key findings: 1) digital beamforming can significantly outperform analog beamforming even when digital beamforming uses very low quantization to compensate for the additional power requirements and 2) omnidirectional transmissions of the synchronization signals from the base station generally outperform random directional scanning.

Journal ArticleDOI
TL;DR: A new algorithm for optimizing the traffic offloading process in D2D communications is developed and the Chernoff bound and approximated cumulative distribution function (cdf) of the offloaded traffic are derived and the validity of the bound and cdf is proven.
Abstract: Device-to-device (D2D) communication is seen as a major technology to overcome the imminent wireless capacity crunch and to enable new application services. In this paper, a novel social-aware approach for optimizing D2D communication by exploiting two layers, namely the social network layer and the physical wireless network layer, is proposed. In particular, the physical layer D2D network is captured via the users' encounter histories. Subsequently, an approach, based on the so-called Indian Buffet Process, is proposed to model the distribution of contents in the users' online social networks. Given the social relations collected by the base station, a new algorithm for optimizing the traffic offloading process in D2D communications is developed. In addition, the Chernoff bound and approximated cumulative distribution function (cdf) of the offloaded traffic are derived and the validity of the bound and cdf is proven. Simulation results based on real traces demonstrate the effectiveness of our model and show that the proposed approach can offload the network's traffic successfully.

Proceedings ArticleDOI
11 May 2015
TL;DR: An advanced power model which supports a broad range of network scenarios and base station types, features and configurations is presented, and the power consumption evolution over different technology generations is quantified.
Abstract: The power efficiency of cellular base stations is a crucial element to maintain sustainability of future mobile networks. To investigate future network concepts, a good power model is required which is highly flexible to evaluate the diversity of power saving options. This paper presents an advanced power model which supports a broad range of network scenarios and base station types, features and configurations. In addition to the power consumption, the model also provides values on the hardware sleep capabilities (sleep depths, transition times, power savings). The paper also discusses the technology trends and scaling factors which are used to predict the power consumption of base stations up to the year 2020. Two use cases are described, illustrating the power savings over different sleep depths, and quantifying the power consumption evolution over different technology generations.
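
Base-station power models of this kind are commonly written as an affine function of the RF output power per transceiver chain, plus discrete sleep states with lower fixed consumption. A minimal sketch with illustrative macro-BS parameter values (not the values fitted in the paper):

```python
def bs_power_w(load: float, n_trx: int = 6, p0_w: float = 130.0,
               delta_p: float = 4.7, p_max_out_w: float = 20.0,
               sleep_power_w: float = 75.0) -> float:
    """Affine load-dependent power model with a single sleep state.

    load is the fraction of the maximum RF output power in use (0..1).
    Parameter values are illustrative macro-BS numbers, not taken from the paper.
    """
    if load <= 0.0:
        return n_trx * sleep_power_w                      # idle hardware put to sleep
    return n_trx * (p0_w + delta_p * load * p_max_out_w)  # static part + load-dependent part

for load in (0.0, 0.1, 0.5, 1.0):
    print(f"load = {load:.1f} -> {bs_power_w(load):7.1f} W")
```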

Patent
Feng-Seng Chu
09 Jan 2015
TL;DR: In this article, a method of payment for wireless charging service for a mobile device includes detecting wireless power supplied by a power base station, sending a first message to the power base station to initiate an electronic payment (e-payment) procedure if the wireless power supplied by the power base station is detected, receiving a second message from the power base station to indicate that the power base station is ready for the e-payment procedure, performing the e-payment procedure with the power base station, and receiving the wireless power from the power base station if the e-payment procedure succeeds, or not receiving it if the procedure fails.
Abstract: A method of payment for wireless charging service for a mobile device includes detecting wireless power supplied by a power base station; sending a first message to the power base station to initiate an electronic payment (e-payment) procedure if the wireless power supplied by the power base station is detected; receiving a second message from the power base station to indicate that the power base station is ready for the e-payment procedure; performing the e-payment procedure with the power base station; and receiving the wireless power from the power base station if the e-payment procedure succeeds, or not receiving the wireless power from the power base station if the e-payment procedure fails.

Posted Content
TL;DR: It is demonstrated that decoupling can lead to significant gains in network throughput, outage, and power consumption at a much lower cost compared to other solutions that provide comparable or lower gains.
Abstract: Ever since the inception of mobile telephony, the downlink and uplink of cellular networks have been coupled, i.e. mobile terminals have been constrained to associate with the same base station (BS) in both the downlink and uplink directions. New trends in network densification and mobile data usage increase the drawbacks of this constraint, and suggest that it should be revisited. In this paper we identify and explain five key arguments in favor of Downlink/Uplink Decoupling (DUDe) based on a blend of theoretical, experimental, and logical arguments. We then overview the changes needed in current (LTE-A) mobile systems to enable this decoupling, and then look ahead to fifth generation (5G) cellular standards. We believe the introduced paradigm will lead to significant gains in network throughput, outage and power consumption at a much lower cost compared to other solutions providing comparable or lower gains.

Journal ArticleDOI
TL;DR: This work proposes and investigates a cross-layer resource allocation model for C-RAN to minimize the overall system power consumption in the BBU pool, fiber links and the remote radio heads (RRHs), and proposes a low-complexity Shaping-and-Pruning algorithm to obtain a sparse solution for the active RRH set.
Abstract: Cloud radio access network (C-RAN) aims to improve spectrum and energy efficiency of wireless networks by migrating conventional distributed base station functionalities into a centralized cloud baseband unit (BBU) pool. We propose and investigate a cross-layer resource allocation model for C-RAN to minimize the overall system power consumption in the BBU pool, fiber links and the remote radio heads (RRHs). We characterize the cross-layer resource allocation problem as a mixed-integer nonlinear programming (MINLP), which jointly considers elastic service scaling, RRH selection, and joint beamforming. The MINLP is however a combinatorial optimization problem and NP-hard. We relax the original MINLP problem into an extended sum-utility maximization (ESUM) problem, and propose two different solution approaches. We also propose a low-complexity Shaping-and-Pruning (SP) algorithm to obtain a sparse solution for the active RRH set. Simulation results suggest that the average sparsity of the solution given by our SP algorithm is close to that obtained by a recently proposed greedy selection algorithm, which has higher computational complexity. Furthermore, our proposed cross-layer resource allocation is more energy efficient than the greedy selection and successive selection algorithms.

Patent
09 Nov 2015
TL;DR: In this paper, a method for a self-calibrating and self-adjusting network is presented, comprising of obtaining a signal strength parameter for a mobile device at a base station, obtaining a position of the mobile device, and associating the position and the signal strength parameters in a database.
Abstract: Systems and methods for a self-calibrating and self-adjusting network are disclosed. In one embodiment, a method is disclosed, comprising: obtaining a signal strength parameter for a mobile device at a base station; obtaining a position of the mobile device at the base station; and associating the position and the signal strength parameter in a database. The method may further comprise one or more of: adjusting transmission power for the mobile device at the base station based on the associated position and signal strength parameter; computing the position of the mobile device at the base station; calculating an average of the signal strength parameter over a time window, and storing the average associated with the position. The signal strength parameter may include at least one of a block error rate (BLER) and a radio signal strength indicator (RSSI), and the position may be a global positioning system (GPS) position.
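
The core of the claim, associating a signal-strength parameter with a device position and averaging it over a time window, can be sketched as a small in-memory store. Class and method names below are hypothetical, for illustration only:

```python
from collections import defaultdict, deque
from statistics import mean

class CoverageDatabase:
    """Toy store that associates (quantized) positions with recent RSSI samples."""

    def __init__(self, window: int = 20, grid_deg: float = 0.001):
        self.window = window
        self.grid_deg = grid_deg
        self._samples = defaultdict(lambda: deque(maxlen=self.window))

    def _key(self, lat: float, lon: float):
        # Quantize the GPS position so nearby reports share a bin.
        return (round(lat / self.grid_deg), round(lon / self.grid_deg))

    def record(self, lat: float, lon: float, rssi_dbm: float) -> None:
        self._samples[self._key(lat, lon)].append(rssi_dbm)

    def average_rssi(self, lat: float, lon: float):
        samples = self._samples.get(self._key(lat, lon))
        return mean(samples) if samples else None

db = CoverageDatabase()
for rssi in (-92.0, -90.5, -95.0):
    db.record(39.9042, 116.4074, rssi)
print("Average RSSI at position:", db.average_rssi(39.9042, 116.4074))
```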

Patent
Shin Cheolkyu, Hyojin Lee, Yong-Jun Kwak, Younsun Kim, Young-Bum Kim, Ju-Ho Lee, Ji Hyoung Ju
20 Mar 2015
TL;DR: In this paper, a method is presented for transmitting interference-related control information in order to improve the reception performance of a terminal that receives the downlink in a cellular mobile communication system based on the LTE-A system.
Abstract: A method for transmitting interference-related control information, in order to improve the reception performance of a terminal that receives the downlink in a cellular mobile communication system based on the LTE-A system, includes receiving from a base station a higher-layer control message including probability information of a modulation scheme for an interference signal, and performing error-correcting coding using a probability value of the modulation scheme for the interference signal, which is included in the higher-layer control message. A base station in the mobile communication system includes a controller configured to generate probability information of a modulation scheme for an interference signal, and to transmit, to the terminal, a higher-layer control message comprising the probability information of the modulation scheme for the interference signal.

Journal ArticleDOI
TL;DR: This work studies a two-tier heterogeneous cellular network where the macro tier and small cell tier operate according to a dynamic TDD scheme on orthogonal frequency bands and provides guidelines for the optimal design of D2D network access.
Abstract: Over the last decade, the growing amount of uplink (UL) and downlink (DL) mobile data traffic has been characterized by substantial asymmetry and time variations. Dynamic time-division duplex (TDD) has the capability to accommodate to the traffic asymmetry by adapting the UL/DL configuration to the current traffic demands. In this work, we study a two-tier heterogeneous cellular network (HCN) where the macro tier and small cell tier operate according to a dynamic TDD scheme on orthogonal frequency bands. To offload the network infrastructure, mobile users in proximity can engage in device-to-device (D2D) communications, whose activity is determined by a carrier sensing multiple access (CSMA) scheme to protect the ongoing infrastructure-based and D2D transmissions. We present an analytical framework for evaluating the network performance in terms of load-aware coverage probability and network throughput. The proposed framework allows quantification of the effect on the coverage probability of the most important TDD system parameters, such as the UL/DL configuration, the base station density, and the bias factor. In addition, we evaluate how the bandwidth partition and the D2D network access scheme affect the total network throughput. Through the study of the tradeoff between coverage probability and D2D user activity, we provide guidelines for the optimal design of D2D network access.

Proceedings ArticleDOI
Navid Nikaein
11 Sep 2015
TL;DR: This paper investigates three critical issues for the cloudification of the current LTE/LTE-A radio access network and proposes an accurate model to compute the total uplink and downlink processing load as a function of bandwidth, modulation and coding scheme, and virtualization platforms.
Abstract: Commoditization and virtualization of wireless networks are changing the economics of mobile networks to help network providers (e.g., MNO, MVNO) move from proprietary and bespoke hardware and software platforms toward an open, cost-effective, and flexible cellular ecosystem. Cloud radio access network is a novel architecture that performs the required baseband and protocol processing on centralized computing resources or a cloud infrastructure. This replaces traditional base stations with distributed (passive) radio elements with much smaller footprints than the traditional base station, plus a remote pool of baseband units, allowing for simpler network densification. This paper investigates three critical issues for the cloudification of the current LTE/LTE-A radio access network. Extensive experiments have been performed based on the OpenAirInterface simulators to characterize the baseband processing time under different conditions. Based on the results, an accurate model is proposed to compute the total uplink and downlink processing load as a function of bandwidth, modulation and coding scheme, and virtualization platform. The results also reveal the feasible virtualization approach towards a cloud-native radio access network.
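
A model of this kind relates per-subframe baseband processing time to the bandwidth (number of resource blocks) and the MCS. The sketch below uses made-up coefficients purely to show the shape of such a model; it does not reproduce the values fitted from the OpenAirInterface measurements:

```python
def subframe_processing_us(prb: int, mcs: int, uplink: bool,
                           base_us: float = 200.0) -> float:
    """Toy processing-time model per 1 ms subframe.

    prb: number of physical resource blocks (25/50/100 for 5/10/20 MHz).
    mcs: modulation and coding scheme index, 0..28.
    Coefficients are illustrative, not the paper's fitted values.
    """
    # Decoding (uplink) is typically costlier than encoding (downlink).
    per_prb = 2.5 if uplink else 1.5
    per_mcs = 12.0 if uplink else 7.0
    return base_us + per_prb * prb + per_mcs * mcs

for bw_mhz, prb in ((5, 25), (10, 50), (20, 100)):
    ul = subframe_processing_us(prb, mcs=27, uplink=True)
    dl = subframe_processing_us(prb, mcs=27, uplink=False)
    print(f"{bw_mhz:>2} MHz, MCS 27: UL ~ {ul:5.0f} us, DL ~ {dl:5.0f} us per subframe")
```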

Journal ArticleDOI
TL;DR: A cost-effective hybrid RF/free-space optical (FSO) solution is proposed to combine the advantages of RF backhaul (low cost, NLOS operation) and FSO backhaul (high rate, low latency).
Abstract: The rapid pace of demand for mobile data services and the limited supply of capacity in the current wireless access network infrastructure are leading network operators to increase the density of base station deployments to improve network performance. This densification, made possible by small-cell deployment, also brings a novel set of challenges, specifically related to the cost of ownership, in which backhaul is of primary concern. This article proposes a cost-effective hybrid RF/free-space optical (FSO) solution to combine the advantages of RF backhauls (low cost, NLOS applications) and FSO backhauls (high rate, low latency). To first illustrate the cost advantages of the RF backhaul solution, the first part of this article presents a business case of NLOS wireless RF backhaul, which has a low cost of ownership as compared to other backhaul candidates. RF backhaul, however, is limited by latency problems. On the other hand, an FSO solution, which offers better latency and a higher data rate than RF backhaul, remains sensitive to weather conditions (e.g., rain, fog). To combine RF and FSO advantages, the second part of this article proposes a low-cost hybrid RF/FSO solution, wherein base stations are connected to each other using either optical fiber or hybrid RF/FSO links. This part addresses the problem of minimizing the cost of backhaul planning under reliability, connectivity, and data rate constraints, and proposes choosing the appropriate cost-effective backhaul connection between BSs (i.e., either optical fiber or hybrid RF/FSO) using graph theory techniques.
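
The planning step, choosing either optical fiber or a hybrid RF/FSO link between base stations so that the network stays connected at minimum cost, can be illustrated with a plain minimum-spanning-tree heuristic over per-edge costs; the reliability and data-rate constraints of the article's actual formulation are omitted. Link costs below are hypothetical:

```python
# Kruskal's algorithm, implemented manually so the sketch stays self-contained.
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def plan_backhaul(num_bs, candidate_links):
    """candidate_links: (bs_a, bs_b, technology, cost). Returns a min-cost connected plan."""
    parent = list(range(num_bs))
    chosen = []
    for a, b, tech, cost in sorted(candidate_links, key=lambda e: e[3]):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:                   # only keep links that connect new components
            parent[ra] = rb
            chosen.append((a, b, tech, cost))
    return chosen

# Hypothetical candidates: each BS pair offers a fiber option and a hybrid RF/FSO option.
links = [
    (0, 1, "fiber", 10.0), (0, 1, "rf/fso", 4.0),
    (1, 2, "fiber", 12.0), (1, 2, "rf/fso", 5.0),
    (0, 2, "fiber", 9.0),  (0, 2, "rf/fso", 6.0),
    (2, 3, "fiber", 8.0),  (2, 3, "rf/fso", 7.5),
]
for a, b, tech, cost in plan_backhaul(4, links):
    print(f"BS{a} -- BS{b}: {tech} (cost {cost})")
```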

Journal ArticleDOI
TL;DR: A new fronthaul functional division is proposed which can alleviate the most demanding bit-rate requirements by transport of baseband signals instead of sampled radio waveforms, and enable statistical multiplexing gains.

Patent
Sangkyu Baek, Chang Young Bin, Kwon Sang Wook, Mok Young Joong, Hwang June
25 Nov 2015
TL;DR: In this paper, a communication method and apparatus using beamforming are provided for supporting higher data rates beyond 4th-generation (4G) communication systems such as long term evolution (LTE).
Abstract: The present disclosure relates to a pre-5th-generation (5G) or 5G communication system to be provided for supporting higher data rates beyond 4th-generation (4G) communication systems such as long term evolution (LTE). A communication method and apparatus using beamforming are provided. The method includes acquiring transmission-beam-specific measurement information of a base station (BS) and measuring a reference signal (RS) transmitted through the transmission beams of the BS according to the transmission-beam-specific measurement information. The measurement information on each transmission beam is determined according to at least one of an elevation angle of the corresponding transmission beam, an azimuth of the corresponding transmission beam, a handover urgency, information on a handover failure, and information on a radio link failure (RLF). A mobile station (MS) may perform a measurement report or a handover process according to a result of the measurement.

Journal ArticleDOI
TL;DR: In this paper, a digitally-controlled phase-shifter network (DPSN) based hybrid precoding/combining scheme for mmWave massive MIMO is proposed to reduce the required cost and complexity of the transceiver with a negligible performance loss.
Abstract: The ultra-dense network (UDN) has been considered a promising candidate for the future 5G network to meet the explosive data demand. To realize UDN, a reliable, Gigahertz-bandwidth, and cost-effective backhaul connecting ultra-dense small-cell base stations (BSs) and the macro-cell BS is a prerequisite. Millimeter-wave (mmWave) can provide the potential Gbps traffic for wireless backhaul. Moreover, mmWave can be easily integrated with massive MIMO for improved link reliability. In this article, we discuss the feasibility of mmWave massive MIMO based wireless backhaul for 5G UDN, and the benefits and challenges are also addressed. In particular, we propose a digitally-controlled phase-shifter network (DPSN) based hybrid precoding/combining scheme for mmWave massive MIMO, whereby the low-rank property of the mmWave massive MIMO channel matrix is leveraged to reduce the required cost and complexity of the transceiver with a negligible performance loss. One key feature of the proposed scheme is that the macro-cell BS can simultaneously support multiple small-cell BSs with multiple streams for each small-cell BS, which is essentially different from conventional hybrid precoding/combining schemes typically limited to single-user MIMO with multiple streams or multi-user MIMO with a single stream for each user. Based on the proposed scheme, we further explore the fundamental issues of developing mmWave massive MIMO for wireless backhaul, and the associated challenges, insights, and prospects to enable mmWave massive MIMO based wireless backhaul for 5G UDN are discussed.

Journal ArticleDOI
TL;DR: Heterogeneous routing protocols for WSNs are categorized based on predefined parameters, giving users insight into selecting a protocol from the different categories based on its merits over the others.

Journal ArticleDOI
TL;DR: A new system model is formulated that couples a cellular network in licensed bands and a device-to-device (D2D) network in unlicensed bands and demonstrates that assisted offloading of cellular user sessions onto the D2D links improves the degree of spatial reuse and reduces the impact of interference.
Abstract: For the past several years, analysts have been predicting a tremendous and continuous increase in mobile traffic, causing much of industry and academia to seek out any and all methods to increase wireless network capacity. In this paper, we investigate one such method, cellular data offloading onto direct connections between proximate user devices, which has been shown to provide significant wireless capacity gains. To do so, we formulate a new system model that couples a cellular network in licensed bands and a device-to-device (D2D) network in unlicensed bands. We propose that devices be continually associated with the cellular base station and use this connectivity to help manage their direct connections in unlicensed spectrum. In particular, we demonstrate that assisted offloading of cellular user sessions onto the D2D links improves the degree of spatial reuse and reduces the impact of interference. In this study, a session is a real-time flow of data from one user to another, which adheres to a Poisson point process (PPP). By contrast to a throughput- or capacity-centric system view, the application of PPP enables formulations where entire user sessions, rather than singular data packets, are arriving at random and leaving the system after being served. The proposed methodology is flexible enough to accommodate practical offloading scenarios, network selection algorithms, quality of service measures, and advanced wireless technologies. In this study, we are primarily interested in evaluating the data session blocking probability in dynamically loaded cellular and D2D networks, but given the importance of energy efficiency for mobile devices, we are also interested in characterizing the energy expenditure of a typical data session in these different networks. First with our advanced analytical methodology and then with our detailed system-level simulator, we evaluate the performance of network-assisted data session offloading from cellular to D2D connections under a variety of conditions. This analysis represents a useful tool in the development of practical offloading schemes and ongoing standardization efforts.
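
With sessions arriving as a Poisson process and departing after service, the blocking probability of a cell that can carry only a limited number of simultaneous sessions follows the classical Erlang-B formula, a useful baseline for the session-level view taken here; the paper's coupled cellular/D2D model is considerably richer. A short sketch:

```python
def erlang_b(offered_load_erlangs: float, channels: int) -> float:
    """Blocking probability for Poisson session arrivals on `channels` servers (Erlang B).

    Computed with the standard recursion to avoid large factorials.
    """
    b = 1.0
    for n in range(1, channels + 1):
        b = (offered_load_erlangs * b) / (n + offered_load_erlangs * b)
    return b

# Hypothetical cell with 20 simultaneous session 'slots' under increasing offered load.
for load in (5, 10, 15, 20, 30):
    print(f"offered load {load:>2} E, 20 slots -> blocking ~ {erlang_b(load, 20):.3f}")
```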

Journal ArticleDOI
TL;DR: This paper develops an iterative solution that achieves local Pareto optimality in typical scenarios and empirically achieves near-optimal performance and outperforms other resource allocation schemes designed for half-duplex networks.
Abstract: Recent advances in the physical layer have demonstrated the feasibility of in-band wireless full-duplex which enables a node to transmit and receive simultaneously on the same frequency band. While the full-duplex operation can ideally double the spectral efficiency, the network-level gain of full-duplex in large-scale networks remains unclear due to the complicated resource allocation in multi-carrier and multi-user environments. In this paper, we consider a single-cell full-duplex OFDMA network which consists of one full-duplex base station (BS) and multiple full-duplex mobile nodes. Our goal is to maximize the sum-rate performance by jointly optimizing subcarrier assignment and power allocation considering the characteristics of full-duplex transmissions. We develop an iterative solution that achieves local Pareto optimality in typical scenarios. Through extensive simulations, we demonstrate that our solution empirically achieves near-optimal performance and outperforms other resource allocation schemes designed for half-duplex networks. Also, we reveal the impact of various factors such as the channel correlation, the residual self-interference, and the distance between the BS and nodes on the full-duplex gain.
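
A heavily simplified version of the allocation problem, assigning each subcarrier to the uplink/downlink user pair with the best instantaneous sum-rate under residual self-interference, already shows why the full-duplex gain hinges on self-interference cancellation. The sketch below uses random channel draws and hypothetical power levels, not the paper's iterative Pareto-optimal method:

```python
import numpy as np

rng = np.random.default_rng(1)
num_subcarriers, num_users = 16, 4
p_bs, p_ue, noise = 1.0, 0.5, 0.01
self_interference = 0.02       # residual self-interference power at the BS (hypothetical)

# Rayleigh-faded channel gains: BS->user (DL) and user->BS (UL) per subcarrier.
g_dl = rng.exponential(1.0, size=(num_users, num_subcarriers))
g_ul = rng.exponential(1.0, size=(num_users, num_subcarriers))

sum_rate = 0.0
for k in range(num_subcarriers):
    best = 0.0
    for d in range(num_users):          # candidate DL user
        for u in range(num_users):      # candidate UL user transmitting simultaneously
            if d == u:
                continue
            # UE-to-UE inter-node interference is neglected here for brevity;
            # BS reception suffers residual self-interference from its own DL transmission.
            r_dl = np.log2(1 + p_bs * g_dl[d, k] / noise)
            r_ul = np.log2(1 + p_ue * g_ul[u, k] / (noise + self_interference))
            best = max(best, r_dl + r_ul)
    sum_rate += best

print(f"Greedy full-duplex sum-rate over {num_subcarriers} subcarriers: {sum_rate:.1f} bit/s/Hz")
```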

Journal ArticleDOI
TL;DR: A traffic semantic concept is proposed to extract commuters' origin and destination information from mobile phone CDR data and use the extracted data for traffic zone division, and a traffic zone attribute index is proposed to measure the tendency of traffic zones to be residential or working areas.
Abstract: Call detail record (CDR) data from mobile communication carriers offer an emerging and promising source of information for the analysis of traffic problems. To date, research on insights and information to be gleaned from CDR data for transportation analysis has been slow, and there has been little progress on the development of specific applications. This paper proposes a traffic semantic concept to extract commuters' origin and destination information from mobile phone CDR data and then use the extracted data for traffic zone division. A K-means clustering method was used to classify a cell area (the area covered by a base station) and tag it with a land use category or traffic semantic attribute (such as working, residential, or urban road) based on four features (real-time user volume, inflow, outflow, and incremental flow) extracted from the CDR data. By combining the geographic information of mobile phone base stations, the roadway network within Beijing's Sixth Ring Road was divided into a total of 73 traffic zones using another K-means clustering algorithm. Additionally, we propose a traffic zone attribute index to measure the tendency of traffic zones to be residential or working areas. The calculated attribute-index values of the 73 traffic zones in Beijing were consistent with the actual traffic and land-use data. The case study demonstrates that effective traffic and travel data can be obtained from mobile phones as portable sensors and base stations as fixed sensors, providing an opportunity to improve the analysis of complex travel patterns and behaviors for travel demand modeling and transportation planning.
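
The clustering step, labeling each cell area by a feature vector of real-time user volume, inflow, outflow, and incremental flow, can be sketched with an off-the-shelf K-means (scikit-learn here). The feature values below are synthetic, and mapping clusters to land-use tags would in practice rely on ground truth as in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Synthetic feature vectors per cell area: [user volume, inflow, outflow, incremental flow].
residential = rng.normal([800,  50, 300, -250], 40, size=(30, 4))
working     = rng.normal([900, 320,  60,  260], 40, size=(30, 4))
roads       = rng.normal([200, 150, 150,    0], 40, size=(30, 4))
features = np.vstack([residential, working, roads])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

# Inspect cluster centroids to decide which land-use tag each cluster corresponds to.
for i, c in enumerate(kmeans.cluster_centers_):
    print(f"cluster {i}: volume={c[0]:.0f}, inflow={c[1]:.0f}, "
          f"outflow={c[2]:.0f}, incremental={c[3]:.0f}")
```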

Journal ArticleDOI
TL;DR: This paper models the interference relationships among different D2D and cellular communication links as a novel interference graph with unique attributes and proposes a corresponding joint resource-allocation scheme that can effectively lead to a near-optimal solution at the base station, with low computational complexity.
Abstract: Device-to-device (D2D) communications underlaying cellular networks have been recently considered as a promising means to enhance resource utilization of the cellular network and local user throughput among devices in proximity to each other. In this paper, we investigate the joint resource block assignment and transmit power allocation problem to optimize the network performance in such a scenario. Specifically, we model the interference relationships among different D2D and cellular communication links as a novel interference graph with unique attributes and propose a corresponding joint resource-allocation scheme that can effectively lead to a near-optimal solution at the base station, with low computational complexity. Simulation results confirm that, with markedly reduced complexity, our proposed scheme achieves a network throughput that approaches the one corresponding to the optimal resource-sharing scheme obtained via exhaustive search.
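
The idea of building an interference graph and assigning resource blocks so that strongly interfering links never share one can be illustrated with a greedy graph-coloring pass; the paper's joint scheme also allocates transmit power, which is omitted here. Links and conflict pairs below are hypothetical:

```python
def assign_resource_blocks(links, conflicts, num_rbs):
    """Greedy coloring: give each link the lowest-indexed RB not used by an interfering neighbor.

    links: list of link names; conflicts: set of frozenset({a, b}) pairs that interfere.
    Returns {link: rb_index}, or raises if num_rbs is insufficient.
    """
    assignment = {}
    degree = {l: sum(1 for c in conflicts if l in c) for l in links}
    # Color the most-constrained (highest-degree) links first.
    for link in sorted(links, key=lambda l: -degree[l]):
        busy = {assignment[n] for c in conflicts if link in c
                for n in c if n != link and n in assignment}
        free = next((rb for rb in range(num_rbs) if rb not in busy), None)
        if free is None:
            raise RuntimeError(f"not enough RBs for {link}")
        assignment[link] = free
    return assignment

links = ["cell-ue1", "cell-ue2", "d2d-a", "d2d-b", "d2d-c"]
conflicts = {frozenset(p) for p in [("cell-ue1", "d2d-a"), ("cell-ue1", "d2d-b"),
                                    ("d2d-a", "d2d-b"), ("cell-ue2", "d2d-c")]}
print(assign_resource_blocks(links, conflicts, num_rbs=3))
```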

Patent
23 Jun 2015
TL;DR: In this paper, a base station for automated battery pack or payload exchange and methods for using the same are presented. The base station provides a landing surface for receiving a mobile platform and includes a manipulator controlled by a manipulator compartment for accessing resource storage.
Abstract: A base station for automated battery pack or payload exchange and methods for using the same. The base station provides a landing surface for receiving a mobile platform and includes a manipulator controlled by a manipulator compartment for accessing resource storage. The base station is operable to ascertain a location of the mobile platform on the landing surface and move the manipulator to the mobile platform. Thereby, the base station system advantageously accommodates low-accuracy landing of the mobile platform and further enables extended and autonomous operation of the mobile platform without the need for user intervention for exchanging battery packs and payloads.