Book

3G Evolution : HSPA and LTE for Mobile Broadband

TL;DR: The authors, engineers working closely in 3GPP, present a very up-to-date and practical book that gives insight into the newest technologies and standards adopted by 3GPP, with detailed explanations of the specific solutions chosen and their implementation in HSPA and LTE.
Abstract: This very up-to-date and practical book, written by engineers working closely in 3GPP, gives insight into the newest technologies and standards adopted by 3GPP, with detailed explanations of the specific solutions chosen and their implementation in HSPA and LTE. The key technologies presented include multi-carrier transmission, advanced single-carrier transmission, advanced receivers, OFDM, MIMO and adaptive antenna solutions, advanced radio resource management and protocols, and different radio network architectures. Their role and use in the context of mobile broadband access in general is explained. Both a high-level overview and more detailed step-by-step explanations of HSPA and LTE implementation are given. An overview of other related systems such as TD-SCDMA, CDMA2000, and WiMAX is also provided. This is a 'must-have' resource for engineers and other professionals working with cellular or wireless broadband technologies who need to know how to utilize the new technology to stay ahead of the competition. The authors all work at Ericsson Research and have been deeply involved in 3G development and standardisation since the early days of 3G research. They are leading experts in the field and are today still actively contributing to the standardisation of both HSPA and LTE within 3GPP.
  • Gives the first explanation of the radio access technologies and key international standards for moving to the next stage of 3G evolution: fully operational mobile broadband
  • Describes the new technologies selected by 3GPP to realise High Speed Packet Access (HSPA) and Long Term Evolution (LTE) for mobile broadband
  • Gives both higher-level overviews and detailed explanations of HSPA and LTE as specified by 3GPP
Citations
Proceedings ArticleDOI
01 Dec 2011
TL;DR: It is very clear from the trial that MIMO works very well and gives a substantial performance improvement at the tested carrier frequency if the antenna design of the handheld is well made with respect to MIMO.
Abstract: MIMO is one of the techniques used in LTE Release 8 to achieve very high data rates. A field trial was performed in a pre-commercial LTE network. The objective was to investigate how well MIMO works with realistically designed handhelds in band 13 (746-756 MHz in downlink). In total, three different handheld designs were tested using antenna mockups. In addition to the mockups, a reference antenna design with less stringent restrictions on physical size and excellent properties for MIMO was used. The trial comprised test drives in areas with different characteristics and with different network load levels. The effects of hands holding the devices and the effect of using the device inside a test vehicle were also investigated. In general, it is very clear from the trial that MIMO works very well and gives a substantial performance improvement at the tested carrier frequency if the antenna design of the handheld is well made with respect to MIMO. In fact, the best of the handhelds performed similarly to the reference antenna.

17 citations

Proceedings ArticleDOI
04 Mar 2010
TL;DR: In this paper, a general comparison of conventional OFDMA and Single-Carrier Frequency Division Multiple Access (SC-FDMA) is presented, and important conclusions are drawn from this comparison.
Abstract: New evolving standards for cellular systems, e.g. Long Term Evolution (LTE), WiMax and LTE-Advanced, consider Orthogonal Frequency Division Multiple Access (OFDMA) a mature and suitable solution to cope with the Inter-Symbol Interference (ISI) due to multi-path propagation. However, a major drawback of OFDMA, which has largely limited its application in real environments, is its large envelope fluctuation, which results in strong nonlinear distortion due to the nonlinear characteristic of the power amplifier (PA). One possible solution to this problem has been introduced by the 3rd Generation Partnership Project (3GPP) consortium. It is based on spreading the baseband-modulated signal with a Discrete Fourier Transform (DFT) before the application of OFDMA, which eventually leads to lower envelope fluctuation than that of plain OFDMA. The resulting method is widely recognized as Single-Carrier Frequency Division Multiple Access (SC-FDMA). This paper aims to provide a general comparison of conventional OFDMA and SC-FDMA and draws important conclusions from this comparison.
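The PAPR advantage of DFT spreading can be illustrated numerically. Below is a minimal sketch (not from the paper; all sizes are illustrative): QPSK blocks are either mapped directly onto subcarriers (OFDMA) or DFT-spread first (localized SC-FDMA), and the peak-to-average power ratio of the resulting time-domain signals is compared.

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

n_sc = 64    # subcarriers allocated to the user (illustrative)
n_fft = 512  # overall IFFT size; oversampling reveals the true signal peaks
n_sym = 200  # number of symbols to simulate

# One block of QPSK symbols per OFDM symbol
bits = rng.integers(0, 2, size=(n_sym, n_sc, 2))
qpsk = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)

def ofdma_tx(d):
    """Map symbols straight onto subcarriers (plain OFDMA)."""
    grid = np.zeros((d.shape[0], n_fft), dtype=complex)
    grid[:, :n_sc] = d  # localized subcarrier mapping
    return np.fft.ifft(grid, axis=1)

def scfdma_tx(d):
    """DFT-spread each block first (SC-FDMA / DFT-s-OFDM)."""
    return ofdma_tx(np.fft.fft(d, axis=1))

papr_ofdma = np.array([papr_db(s) for s in ofdma_tx(qpsk)])
papr_scfdma = np.array([papr_db(s) for s in scfdma_tx(qpsk)])

print(f"mean PAPR, OFDMA  : {papr_ofdma.mean():.1f} dB")
print(f"mean PAPR, SC-FDMA: {papr_scfdma.mean():.1f} dB")
```

With parameters like these, the DFT-spread signal typically comes out a few dB lower in PAPR than plain OFDMA, which is what relaxes the linearity requirement on the PA.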

17 citations


Cites background or methods from "3G Evolution : HSPA and LTE for Mob..."

  • ...LTE employs OFDMA in the downlink, an important resemblance to WiMax; however, SC-FDMA is used in the uplink [3]....


  • ...The latter approach has been introduced by 3GPP that firmed up specifications for Long-Term Evolution (LTE) [3]....


  • ...Commonly used baseband modulation schemes in the upcoming LTE standard include QPSK, 16-QAM and 64-QAM....


  • ...These modulations are used in LTE and are characterised by large sensitivity to nonlinear distortion....


  • ...Since only the LFDMA concept is proposed for use in the 3GPP LTE specifications, we will focus exclusively on this approach in the remainder of this paper....


Proceedings ArticleDOI
26 Apr 2009
TL;DR: The link-level impact of ACK/NACK bundling on downlink performance is studied for LTE TDD systems, both SIMO and MIMO, and it is found that the expected loss is rather small, so bundling can be a good solution in many scenarios.
Abstract: In LTE TDD there is typically no one-to-one association between UL and DL subframes, and for DL-heavy asymmetries with more DL than UL subframes, ACK/NACK reports for multiple DL subframes need to be transmitted in a single UL subframe. To improve uplink control channel performance, ACK/NACK bundling, where multiple ACK/NACKs are combined into a single ACK/NACK response for several DL subframes, is supported. In this paper, we study the link-level impact of ACK/NACK bundling on downlink performance for LTE TDD systems, for both SIMO and MIMO. It is found that the expected loss due to ACK/NACK bundling is rather small and hence bundling can be a good solution in many scenarios.
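The bundling operation itself is just a logical AND across the bundling window: the UE reports a single ACK only if every DL transport block was decoded correctly. A minimal sketch (illustrative, not 3GPP pseudocode):

```python
def bundle_ack_nack(dl_acks):
    """Combine per-subframe ACK/NACKs into a single bundled report.

    Returns True (ACK) only if every DL transport block in the bundling
    window was decoded correctly; any single failure turns the whole
    bundle into a NACK.
    """
    return all(dl_acks)

# DL-heavy TDD asymmetry: 4 DL subframes reported in one UL subframe.
print(bundle_ack_nack([True, True, True, True]))   # -> True  (single ACK)
print(bundle_ack_nack([True, False, True, True]))  # -> False (single NACK)
```

The downlink cost quantified in the paper follows directly from this AND: one lost transport block turns the whole bundle into a NACK, so all subframes in the window are retransmitted even though most were received correctly.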

17 citations


Cites background from "3G Evolution : HSPA and LTE for Mob..."

  • ...LTE as defined in 3GPP is a flexible radio interface which already in its first release offers a peak rate of 300 Mbit/s in the downlink, low delays, and increased spectral efficiency [1], [2], [3]....


DissertationDOI
01 Jan 2012
TL;DR: Experimental evaluation of PRRT in real Internet scenarios demonstrates that predictably reliable transport meets the strict QoS constraints of high-quality, audio-visual streaming applications.
Abstract: Reliable transport layer Internet protocols do not satisfy the requirements of packetized, real-time multimedia streams. First, major limitations result from their primary design objective of serving total reliability without tolerating residual packet loss. This property leads to unpredictable delivery delay on lossy network paths and conflicts with the strict rendering deadlines of multimedia services that explicitly prefer timeliness over reliability. Second, the strict layering of the ISO/OSI network stack prevents applications from communicating their specific quality of service (QoS) requirements to the transport layer. Consequently, transport protocols do not provide an interface for the negotiation of constraints on packet loss and delivery delay. Third, as the provision of scalable one-to-many transport requires careful design, especially in combination with error control, it is insufficiently supported by reliable protocols. Yet broadcast or multicast distribution of digital media is efficient and not unusual. As of today these issues are clearly unsolved in the prevalently HTTP/TCP-based media streaming, such that the available Internet bandwidth is significantly underutilized and the presentation quality suffers severely. This thesis motivates and defines predictable reliability as a novel, capacity-approaching transport paradigm, supporting an application-specific level of reliability under a strict delay constraint. This paradigm is implemented in a new protocol design: the Predictably Reliable Real-time Transport protocol (PRRT). The protocol combines the fundamental concepts of proactive and reactive packet-level error control into an adaptive hybrid error coding architecture. The flexibility of the hybrid scheme enables the protocol to adaptively follow the dynamic capacity of the packet-erasure channels generated by a wide range of Internet protocol infrastructures.
Combined with packet loss notifications via negative acknowledgments, it provides capacity-approaching coding efficiency in point-to-point as well as one-to-many transmission scenarios. In order to predictably achieve the desired level of reliability, proactive and reactive error control must be optimized under the application’s delay constraint. Hence, predictably reliable error control relies on stochastic modeling of the protocol response to the network path’s packet loss behavior. A block-erasure model captures the characteristics of the packet loss process. Further, a protocol performance model is developed that predicts the protocol’s residual packet loss rate as well as its coding overhead based on the statistical representation of the network state. The performance model reflects the efficiency of one-to-many error control and incorporates the impact of unreliable delivery of the negative acknowledgments. The result of the joint modeling is periodically evaluated by a reliability control policy that validates the protocol configuration under the application constraints, taking the available network bandwidth into account. The adaptation of the protocol parameters is formulated as a combinatorial optimization problem that is solved by a fast search algorithm incorporating explicit knowledge about the search space. Experimental evaluation of PRRT in real Internet scenarios demonstrates that predictably reliable transport meets the strict QoS constraints of high-quality, audio-visual streaming applications. In particular, broadcast services over Internet Protocol require packet streams to be delivered at a residual loss rate of 10⁻⁶ to 10⁻⁵ under a delay constraint of a few hundred milliseconds, depending on their degree of interactivity. Within different experiments, the protocol implementation is evaluated on the transport of high-quality broadcast TV via Internet Protocol. Especially wired wide area network paths
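The idea of predicting a configuration's residual loss rate in advance can be illustrated with a much-simplified model. The sketch below is my illustration, not the thesis's model (which uses a block-erasure channel capturing loss bursts plus hybrid proactive/reactive coding): it computes the residual block-loss probability of a pure (n, k) packet-erasure code under i.i.d. packet loss.

```python
from math import comb

def residual_loss_prob(n, k, p):
    """Probability that an (n, k) packet-erasure code fails to recover a
    block: decoding fails when more than n - k of the n packets are lost,
    assuming i.i.d. loss with probability p per packet."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

# Unprotected stream vs. a little proactive FEC on a 1% lossy path
print(f"no FEC      : {residual_loss_prob(1, 1, 0.01):.2e}")
print(f"(24,20) FEC : {residual_loss_prob(24, 20, 0.01):.2e}")
```

Even this crude model shows why a modest amount of proactive redundancy can pull a 1% path loss down toward the 10⁻⁶ to 10⁻⁵ regime required for broadcast services; PRRT's contribution is making that prediction accurate for bursty channels and choosing the coding and retransmission parameters under the delay constraint.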

17 citations


Cites background from "3G Evolution : HSPA and LTE for Mob..."

  • ...The physical layer of the High Speed Downlink Packet Access (HSDPA) standard for data transmission in third generation mobile networks implements incremental redundancy based on convolutional codes [44]....


Journal ArticleDOI
TL;DR: A cross-layer error control scheme that exploits priority-aware block interleaving (PBI) in the MAC layer for video broadcasting in CDMA2000 systems is proposed and analyzed, and the extent to which it can improve the perceived quality of scalable video is demonstrated.
Abstract: Scalable video transmission over a network is easily adaptable to different types of mobile device experiencing different network conditions. However, the transmission of differentiated video packets in an error-prone wireless environment remains problematic. We propose and analyze a cross-layer error control scheme that exploits priority-aware block interleaving (PBI) in the MAC layer for video broadcasting in CDMA2000 systems. The PBI scheme allocates a higher priority to protecting the data which are more critical to the decoding of a video stream and which therefore have more effect on picture quality in the application layer. The use of Reed-Solomon coding in conjunction with PBI in the MAC layer can handle error bursts more effectively if its implementation takes account of the underlying error distributions in the physical layer and differentiates between the types of video packets in the application layer. We also calculate the maximum jitter from the variability of the Reed-Solomon decoding delay and determine the size of jitter buffer needed to prevent interruptions due to buffer underrun. Simulations demonstrate the extent to which we can improve the perceived quality of scalable video.
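The burst-spreading effect of block interleaving, on which PBI builds, is easy to demonstrate. A minimal sketch with illustrative sizes (the priority-aware weighting that distinguishes PBI from plain block interleaving is not modeled here):

```python
import numpy as np

def block_interleave(packets, rows, cols):
    """Write row-by-row, read column-by-column: a burst of consecutive
    channel losses is spread across many codewords (rows)."""
    return np.asarray(packets).reshape(rows, cols).T.reshape(-1)

def block_deinterleave(packets, rows, cols):
    """Inverse operation: restore the original row-by-row order."""
    return np.asarray(packets).reshape(cols, rows).T.reshape(-1)

pkts = np.arange(12)                  # 3 RS codewords of 4 symbols each
tx = block_interleave(pkts, rows=3, cols=4)
tx_lost = tx.astype(float)
tx_lost[4:7] = np.nan                 # channel burst: 3 consecutive losses
rx = block_deinterleave(tx_lost, rows=3, cols=4)

# The burst is spread out: each codeword (row of 4) loses at most one
# symbol, which an RS code with one parity symbol per row could repair.
losses_per_codeword = np.isnan(rx.reshape(3, 4)).sum(axis=1)
print(losses_per_codeword)            # -> [1 1 1]
```

Without interleaving, the same burst would wipe out three of the four symbols of a single codeword, far beyond the correction capability of a lightly protected RS code; the Reed-Solomon decoding delay this buffering introduces is exactly the jitter the paper budgets for.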

17 citations

