
Showing papers by "NTT DoCoMo published in 2010"


Proceedings ArticleDOI
20 Oct 2010
TL;DR: The greatest value of Mininet will be supporting collaborative network research, by enabling self-contained SDN prototypes which anyone with a PC can download, run, evaluate, explore, tweak, and build upon.
Abstract: Mininet is a system for rapidly prototyping large networks on the constrained resources of a single laptop. The lightweight approach of using OS-level virtualization features, including processes and network namespaces, allows it to scale to hundreds of nodes. Experiences with our initial implementation suggest that the ability to run, poke, and debug in real time represents a qualitative change in workflow. We share supporting case studies culled from over 100 users, at 18 institutions, who have developed Software-Defined Networks (SDN). Ultimately, we think the greatest value of Mininet will be supporting collaborative network research, by enabling self-contained SDN prototypes which anyone with a PC can download, run, evaluate, explore, tweak, and build upon.

1,890 citations


Journal ArticleDOI
TL;DR: System-level simulation evaluations show that the CoMP transmission and reception schemes have a significant effect in terms of improving the cell edge user throughput based on LTE-Advanced simulation conditions.
Abstract: This article presents an elaborate coordination technique among multiple cell sites called coordinated multipoint transmission and reception in the Third Generation Partnership Project for LTE-Advanced. After addressing major radio access techniques in the LTE Release 8 specifications, system requirements and applied radio access techniques that satisfy the requirements for LTE-Advanced are described including CoMP transmission and reception. Then CoMP transmission and reception schemes and the related radio interface, which were agreed upon or are currently being discussed in the 3GPP, are presented. Finally, system-level simulation evaluations show that the CoMP transmission and reception schemes have a significant effect in terms of improving the cell edge user throughput based on LTE-Advanced simulation conditions.
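The cell-edge benefit of CoMP joint transmission can be sketched with a toy two-cell SINR calculation (power values hypothetical, not taken from the paper's system-level simulations): a coordinated neighbor cell stops acting as interference and instead contributes useful signal.

```python
import math

def sinr_db(signal_mw, interference_mw, noise_mw):
    """Linear SINR expressed in dB."""
    return 10 * math.log10(signal_mw / (interference_mw + noise_mw))

# Hypothetical cell-edge situation: serving and neighbor cells are
# received at comparable power, as is typical at the cell border.
serving, neighbor, noise = 1.0, 0.8, 0.1

# Without coordination, the neighbor cell is pure interference.
sinr_single = sinr_db(serving, neighbor, noise)

# With CoMP joint transmission, both cells carry the user's data,
# so the neighbor's power adds to the signal instead.
sinr_comp = sinr_db(serving + neighbor, 0.0, noise)

# Shannon spectral efficiency (bit/s/Hz) implied by each SINR.
def rate(s_db):
    return math.log2(1 + 10 ** (s_db / 10))
```

With these numbers the user goes from roughly 0 dB to about 12.6 dB, illustrating why the gains concentrate at the cell edge, where neighbor-cell interference dominates.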

694 citations


Journal ArticleDOI
TL;DR: In this article, a holistic approach for energy efficient mobile radio networks is presented and the matter of having appropriate metrics and evaluation methods that allow assessing the energy efficiency of the entire system is discussed.
Abstract: Mobile communications are increasingly contributing to global energy consumption. In this article, a holistic approach for energy efficient mobile radio networks is presented. The matter of having appropriate metrics and evaluation methods that allow assessing the energy efficiency of the entire system is discussed. The mutually complementary saving concepts span the component, link, and network levels. At the component level, the power amplifier, complemented by a transceiver and a digital platform supporting advanced power management, is key to efficient radio implementations. Discontinuous transmission by base stations, where hardware components are switched off, facilitates energy efficient operation at the link level. At the network level, the potential for reducing energy consumption lies in the layout of networks and their management, which take into account slowly changing daily load patterns as well as highly dynamic traffic fluctuations. Moreover, research has to analyze new disruptive architectural approaches, including multi-hop transmission, ad-hoc meshed networks, terminal-to-terminal communications, and cooperative multipoint architectures.
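The link-level saving from discontinuous transmission can be illustrated with a stylized base-station power model (all wattages and the daily load pattern below are hypothetical, not figures from the article): during empty subframes the station drops to a deep-sleep power instead of burning its full idle power.

```python
# Illustrative power model: an active subframe costs P0 plus a
# load-proportional part; with cell DTX, empty subframes fall back
# to a deep-sleep power instead of the idle power P0.
P0, P_MAX_DELTA, P_SLEEP = 130.0, 70.0, 15.0   # watts (hypothetical)

def hourly_power(load, dtx):
    """Average power draw (W) at a given load in [0, 1]."""
    active = P0 + P_MAX_DELTA * load      # power while transmitting
    idle = P_SLEEP if dtx else P0         # power during empty subframes
    return load * active + (1 - load) * idle

def daily_energy_kwh(loads, dtx):
    return sum(hourly_power(l, dtx) for l in loads) / 1000.0

# Stylized 24-hour load pattern: quiet at night, busy in the evening.
loads = [0.05] * 6 + [0.3] * 4 + [0.5] * 4 + [0.7] * 6 + [0.3] * 4
```

Because the slowly changing daily pattern leaves many hours lightly loaded, most of the saving in this toy model comes from the night hours, matching the article's point that network management should exploit load patterns.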

621 citations


Patent
02 Feb 2010
TL;DR: In this article, different methods for determining whether or not a synchronization state is established in the uplink are used, depending on whether or not an uplink dedicated resource is set up.
Abstract: In a mobile communication method by which a mobile station (UE) transmits a control signal to a radio base station (eNB) in an uplink by using an uplink dedicated resource, different methods for determining whether or not a synchronization state is established in the uplink, are used depending on whether or not the uplink dedicated resource is set up.

553 citations


Journal ArticleDOI
Mikio Iwamura, Kamran Etemad, Mo-Han Fong, R Nory, R Love
TL;DR: Support for carrier aggregation requires enhancement to the LTE Release 8/9 PHY, MAC, and RRC layers while ensuring that LTE Release 10 maintains backward compatibility to LTE Release8/9.
Abstract: Carrier aggregation is one of the most distinct features of 4G systems including LTE-Advanced, which is being standardized in 3GPP as part of LTE Release 10. This feature allows scalable expansion of effective bandwidth delivered to a user terminal through concurrent utilization of radio resources across multiple carriers. These carriers may be of different bandwidths, and may be in the same or different bands to provide maximum flexibility in utilizing the scarce radio spectrum available to operators. Support for this feature requires enhancement to the LTE Release 8/9 PHY, MAC, and RRC layers while ensuring that LTE Release 10 maintains backward compatibility to LTE Release 8/9. This article provides an overview of carrier aggregation use cases and the framework, and their impact on LTE Release 8/9 protocol layers.

382 citations


Journal ArticleDOI
TL;DR: The authors developed various prototypes to explore novel ways for human-computer interaction enabled by the Internet of Things and related technologies and derive a set of guidelines for embedding interfaces into people's daily lives.
Abstract: The Internet of Things assumes that objects have digital functionality and can be identified and tracked automatically. The main goal of embedded interaction is to look at new opportunities that arise for interactive systems and the immediate value users gain. The authors developed various prototypes to explore novel ways for human-computer interaction (HCI), enabled by the Internet of Things and related technologies. Based on these experiences, they derive a set of guidelines for embedding interfaces into people's daily lives.

312 citations


Proceedings Article
22 Jun 2010
TL;DR: A propagation model is proposed that predicts which users are likely to mention which URLs in the social network of Twitter, a popular microblogging site, and correctly accounts for more than half of the URL mentions in the data set.
Abstract: Microblogging sites are a unique and dynamic Web 2.0 communication medium. Understanding the information flow in these systems can not only provide better insights into the underlying sociology, but is also crucial for applications such as content ranking, recommendation and filtering, spam detection and viral marketing. In this paper, we characterize the propagation of URLs in the social network of Twitter, a popular microblogging site. We track 15 million URLs exchanged among 2.7 million users over a 300 hour period. Data analysis uncovers several statistical regularities in the user activity, the social graph, the structure of the URL cascades and the communication dynamics. Based on these results we propose a propagation model that predicts which users are likely to mention which URLs. The model correctly accounts for more than half of the URL mentions in our data set, while maintaining a false positive rate lower than 15%.
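The flavor of a URL propagation model can be sketched as an independent-cascade simulation over a follower graph (this toy graph and per-exposure mention probability are hypothetical, and much simpler than the paper's fitted model): each user who sees a URL re-mentions it with some probability, and the cascade spreads hop by hop.

```python
import random

def simulate_cascade(followers, seed_user, p_mention, rng):
    """Spread one URL through a follower graph.

    followers[u] lists the users who follow u (and so see u's posts).
    Each newly exposed user re-mentions the URL with probability
    p_mention. Returns the set of users who mentioned the URL.
    """
    mentioned = {seed_user}
    frontier = [seed_user]
    while frontier:
        nxt = []
        for u in frontier:
            for v in followers.get(u, []):
                if v not in mentioned and rng.random() < p_mention:
                    mentioned.add(v)
                    nxt.append(v)
        frontier = nxt
    return mentioned

# Toy follower graph (hypothetical): a is followed by b and c, etc.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": []}
cascade = simulate_cascade(graph, "a", 0.8, random.Random(42))
```

A predictive model like the paper's would replace the constant p_mention with per-user, per-URL probabilities learned from past activity.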

285 citations


Proceedings ArticleDOI
19 Apr 2010
TL;DR: This work uses the recently developed technique of Lyapunov Optimization to design an online admission control, routing, and resource allocation algorithm for a virtualized data center that maximizes a joint utility of the average application throughput and energy costs of the data center.
Abstract: We investigate optimal resource allocation and power management in virtualized data centers with time-varying workloads and heterogeneous applications. Prior work in this area uses prediction based approaches for resource provisioning. In this work, we take an alternate approach that makes use of the queueing information available in the system to make online control decisions. Specifically, we use the recently developed technique of Lyapunov Optimization to design an online admission control, routing, and resource allocation algorithm for a virtualized data center. This algorithm maximizes a joint utility of the average application throughput and energy costs of the data center. Our approach is adaptive to unpredictable changes in the workload and does not require estimation and prediction of its statistics.
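The drift-plus-penalty idea behind Lyapunov Optimization can be sketched in a few lines (a minimal caricature, not the paper's full joint admission/routing/allocation algorithm): each slot, a request is admitted only if its weighted utility outweighs the current queue backlog, so no workload statistics are ever estimated.

```python
def run_admission(arrivals, service_rate, V):
    """Drift-plus-penalty admission control sketch.

    arrivals: per-slot list of (workload, utility) requests.
    V trades utility against backlog: larger V admits more
    aggressively; smaller V keeps queues (hence delay) small.
    """
    queue, admitted_utility = 0.0, 0.0
    for workload, utility in arrivals:
        # Admit iff the scaled utility beats the backlog cost.
        if V * utility >= queue * workload:
            queue += workload
            admitted_utility += utility
        queue = max(0.0, queue - service_rate)  # server drains the queue
    return queue, admitted_utility
```

Running the same arrival trace with a large and a small V shows the tradeoff: high V collects all the utility but builds a longer queue, low V throttles admissions to keep the backlog bounded.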

262 citations


Proceedings ArticleDOI
03 Sep 2010
TL;DR: In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers, crippling the virtual infrastructures that contained those virtual servers; this can be circumvented if backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure.
Abstract: In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers, crippling the virtual infrastructures that contained those virtual servers. In the worst case, more failures may cascade from overloading the remaining servers. To guarantee some level of reliability, each virtual infrastructure, at instantiation, should be augmented with backup virtual nodes and links that have sufficient capacities. This ensures that, when physical failures occur, sufficient computing resources are available and the virtual network topology is preserved. However, in doing so, the utilization of the physical infrastructure may be greatly reduced. This can be circumvented if backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure. These techniques can reduce the physical footprint of virtual backups while guaranteeing reliability.
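The footprint saving from pooling can be seen in a back-of-the-envelope comparison (numbers hypothetical, and the single-failure assumption is a deliberate simplification of the paper's reliability model): with dedicated redundancy every virtual infrastructure (VI) reserves its own backups, while a shared pool only needs to cover the worst single VI if at most one VI is hit at a time.

```python
def dedicated_backup_nodes(vi_backup_needs):
    """Each VI reserves its own backup nodes."""
    return sum(vi_backup_needs)

def pooled_backup_nodes(vi_backup_needs):
    """One shared pool, sized for the largest single-VI need,
    assuming failures do not strike two VIs simultaneously."""
    return max(vi_backup_needs)

needs = [3, 2, 4, 1]   # backup nodes each VI would reserve alone
saved = dedicated_backup_nodes(needs) - pooled_backup_nodes(needs)
```

Here pooling cuts the reserved footprint from 10 nodes to 4; the embedding problem the paper studies is then where in the physical substrate to place that shared pool.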

144 citations


Proceedings ArticleDOI
30 Dec 2010
TL;DR: This paper shows how the existing Web infrastructure can be leveraged to support publishing of sensor and entity data and presents a real-time search engine for the Web of Things.
Abstract: The increasing penetration of the real world with embedded and globally networked sensors leads to the formation of the Internet of Things, offering global online access to the current state of the real world. We argue that on top of this realtime data, a Web of Things is needed, a software infrastructure that allows the construction of applications involving sensor-equipped real-world entities living in the Internet of Things. A key service for such an infrastructure is a search engine that supports lookup of real-world entities that exhibit a certain current state as perceived by sensors. In contrast to existing Web search engines, such a real-world search engine has to support searching for rapidly changing state information generated by sensors. In this paper, we show how the existing Web infrastructure can be leveraged to support publishing of sensor and entity data. Based on this we present a real-time search engine for the Web of Things.

143 citations


Posted Content
TL;DR: In this paper, backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure to reduce the physical footprint of virtual backups while guaranteeing reliability.
Abstract: In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers, crippling the virtual infrastructures that contained those virtual servers. In the worst case, more failures may cascade from overloading the remaining servers. To guarantee some level of reliability, each virtual infrastructure, at instantiation, should be augmented with backup virtual nodes and links that have sufficient capacities. This ensures that, when physical failures occur, sufficient computing resources are available and the virtual network topology is preserved. However, in doing so, the utilization of the physical infrastructure may be greatly reduced. This can be circumvented if backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure. These techniques can reduce the physical footprint of virtual backups while guaranteeing reliability.

Journal ArticleDOI
TL;DR: This article motivates future research in this emerging field by presenting a ringside view of the recent developments and trends favoring this technology and the challenges facing the next generation of telemedicine.
Abstract: The paradigm of wellness mobiles will enable health-care professionals to have access to comprehensive real-time patient data at the point of care and anywhere there is cellular network coverage. More importantly, users can continuously and frequently track their health on the go and receive real-time user assistance when needed to alter their lifestyles. Recently, there has been a growing interest in developing proactive wellness products and health-related smartphone applications. However, developing quantifiable measures of wellness for continuous tracking and designing compliant-monitoring systems is quite challenging. This article motivates future research in this emerging field by presenting a ringside view of the recent developments and trends favoring this technology and the challenges facing the next generation of telemedicine.

Proceedings ArticleDOI
Sadayuki Abeta
01 Nov 2010
TL;DR: The plan for the LTE commercial launch at NTT DOCOMO and future plans for LTE Rel. 9 and LTE-Advanced are presented.
Abstract: As a promising radio access technology for next generation mobile communication systems, LTE (Long-Term Evolution) is being standardized by the 3rd Generation Partnership Project (3GPP) international standardization organization. LTE Release 8 has many advantages over other systems: e.g., the peak throughput is 300 Mbps in Downlink (DL) and 75 Mbps in Uplink (UL), spectrum efficiency is 2-3 times higher than Rel. 6 HSPA (High Speed Packet Access), and latency is very low, around 5 msec in the RAN (Radio Access Network) and 100 msec for connection setup. With Release 8, the first version of the LTE specification, completed in March 2009, the LTE standard is now being developed towards commercialization in various countries around the world. This paper addresses the plan for the LTE commercial launch at NTT DOCOMO and future plans for LTE Rel. 9 and LTE-Advanced (LTE Rel. 10 and beyond).

Patent
24 Mar 2010
TL;DR: In this article, a resource assignment unit is configured to assign the resource candidate for transmitting a semi-persistent scheduling transmission acknowledgement signal to the first mobile station based on a number of assignments of predetermined resources formed by a combination of a frequency direction resource and a code direction resource.
Abstract: A radio base station includes a resource assignment unit for assigning a resource candidate for transmitting a semi-persistent scheduling transmission acknowledgement signal to a first mobile station during a semi-persistent scheduling bearer setting process. The resource candidate for transmitting a semi-persistent scheduling transmission acknowledgement signal is formed by a combination of a frequency direction resource and a code direction resource by which the first mobile station transmits a transmission acknowledgement signal after a predetermined timing from a timing of receiving downlink data, to the downlink data that has been scheduled by semi-persistent scheduling and has been transmitted via a downlink data channel. The resource assignment unit is configured to assign the resource candidate for transmitting a semi-persistent scheduling transmission acknowledgement signal to the first mobile station based on a number of assignments of predetermined resources formed by a combination of a frequency direction resource and a code direction resource.

Proceedings ArticleDOI
26 Apr 2010
TL;DR: This analysis, in agreement with a number of recent simulation results, shows that conventional MU-MIMO cellular architectures may outperform schemes based on coordinated transmission from base stations, at the negligible cost of a few extra antennas per station.
Abstract: We compare the downlink throughput of various cellular architectures with multi-antenna base stations and multiple single-antenna users per cell, by considering a number of inherent physical layer issues such as path-loss and time and frequency selective fading. In particular, we focus on Multiuser MIMO (MU-MIMO) downlink techniques that require channel state information at the transmitter (CSIT). Our analysis takes explicit account of the cost of CSIT estimation and illuminates the tradeoffs between CSIT, estimation error, and system resource dedicated to training. This tradeoff shows that the number of antennas that can be jointly coordinated (either on the same base station or across multiple base stations) is intrinsically limited not just by “external factors,” such as complexity and rate of the backbone wired network, but by the inherent time and frequency variability of the fading channels. Our analysis, in agreement with a number of recent simulation results, shows that conventional MU-MIMO cellular architectures may outperform schemes based on coordinated transmission from base stations (referred to as Network MIMO schemes, NW-MIMO), at the negligible cost of a few extra antennas per station. In light of these results, it appears that the inherent bottleneck of NW-MIMO systems is not the backbone network (which here is assumed ideal with infinite capacity) but the intrinsic dimensional limitation of estimating the channels.

Proceedings Article
23 Jun 2010
TL;DR: LiteGreen is presented, a system to save desktop energy by virtualizing the user's desktop computing environment as a virtual machine (VM) and then migrating it between theuser's physical desktop machine and a VM server, depending on whether the desktop Computing environment is being actively used or is idle.
Abstract: To reduce energy wastage by idle desktop computers in enterprise environments, the typical approach is to put a computer to sleep during long idle periods (e.g., overnight), with a proxy employed to reduce user disruption by maintaining the computer's network presence at some minimal level. However, the Achilles' heel of the proxy-based approach is the inherent trade-off between the functionality of maintaining network presence and the complexity of application-specific customization. We present LiteGreen, a system to save desktop energy by virtualizing the user's desktop computing environment as a virtual machine (VM) and then migrating it between the user's physical desktop machine and a VM server, depending on whether the desktop computing environment is being actively used or is idle. Thus, the user's desktop environment is "always on", maintaining its network presence fully even when the user's physical desktop machine is switched off and thereby saving energy. This seamless operation allows LiteGreen to save energy during short idle periods as well (e.g., coffee breaks), which is shown to be significant according to our analysis of over 65,000 hours of data gathered from 120 desktop machines. We have prototyped LiteGreen on the Microsoft Hyper-V hypervisor. Our findings from a small-scale deployment comprising over 3200 user-hours of the system as well as from laboratory experiments and simulation analysis are very promising, with energy savings of 72-74% with LiteGreen compared to 32% with existing Windows and manual power management.
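The core placement policy of a LiteGreen-style system can be sketched as a small state machine (the threshold and per-minute granularity are hypothetical; the real system also handles migration cost and server capacity): the desktop VM lives on the physical desktop while the user is active, and moves to the consolidation server once the environment has been idle long enough, letting the desktop sleep.

```python
def plan_vm_placement(activity, idle_threshold):
    """Decide where the desktop VM runs each minute.

    activity: per-minute booleans (True = user active).
    Returns a per-minute list of 'desktop' or 'server'.
    """
    placement, idle_run = [], 0
    location = "desktop"
    for active in activity:
        if active:
            idle_run = 0
            location = "desktop"        # migrate back on user activity
        else:
            idle_run += 1
            if idle_run >= idle_threshold:
                location = "server"     # idle long enough: desktop sleeps
        placement.append(location)
    return placement
```

Because migration is cheap relative to a full sleep/wake cycle, even short gaps such as coffee breaks become savings opportunities, which is the paper's key advantage over proxy-based sleep schemes.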

Journal ArticleDOI
TL;DR: In this article, the authors considered a multicell scenario with partial cell cooperation, where each cell optimizes its transmitter by taking into account interference constraints on specific users in adjacent cells.
Abstract: The transmitter optimization (i.e., steering vectors and power allocation) for a MISO broadcast channel subject to general linear constraints is considered. Such constraints include, as special cases, the sum-power, per-antenna or per-group-of-antennas power, and “forbidden interference direction” constraints. We consider both the optimal dirty-paper coding and simple suboptimal linear zero-forcing beamforming strategies, and provide numerically efficient algorithms that solve the problem in its most general form. As an application, we consider a multicell scenario with partial cell cooperation, where each cell optimizes its transmitter by taking into account interference constraints on specific users in adjacent cells. The effectiveness of the proposed method is evaluated in a simple system scenario including two adjacent cells and distance-dependent pathloss, under different fairness criteria that emphasize the bottleneck effect of users near the cell “edge.” Our results show that this “active” Intercell Interference (ICI) mitigation outperforms the conventional “static” ICI mitigation based on fractional frequency reuse.
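For the sum-power special case of the constraints considered here, the classic power-allocation answer is waterfilling; the sketch below implements that textbook special case (it is not the paper's general algorithm for arbitrary linear constraints): strong channels get more power, and channels below the water level get none.

```python
def waterfill(gains, total_power):
    """Sum-power waterfilling: p_i = max(0, mu - 1/g_i), with the
    water level mu chosen so the powers sum to total_power."""
    inv = sorted(1.0 / g for g in gains)     # inverse gains, ascending
    mu = 0.0
    for k in range(len(inv), 0, -1):
        # Tentatively keep the k strongest channels and level water.
        mu = (total_power + sum(inv[:k])) / k
        if mu > inv[k - 1]:                  # weakest kept channel is
            break                            # still above the water line
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

With gains [1.0, 0.5] and power 3, the water level is 3, giving powers [2, 1]; with a very weak second channel ([1.0, 0.1], power 1) all power goes to the strong channel.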

Journal ArticleDOI
TL;DR: The model was developed using data from high encoding-rate videos, and designed for high-quality video transported over a mostly reliable network; however, the experiments show the model is applicable to different encoding rates.
Abstract: In this paper, we propose a generalized linear model for video packet loss visibility that is applicable to different group-of-picture structures. We develop the model using three subjective experiment data sets that span various encoding standards (H.264 and MPEG-2), group-of-picture structures, and decoder error concealment choices. We consider factors not only within a packet, but also in its vicinity, to account for possible temporal and spatial masking effects. We discover that the factors of scene cuts, camera motion, and reference distance are highly significant to the packet loss visibility. We apply our visibility model to packet prioritization for a video stream; when the network gets congested at an intermediate router, the router is able to decide which packets to drop such that visual quality of the video is minimally impacted. To show the effectiveness of our visibility model and its corresponding packet prioritization method, experiments are done to compare our perceptual-quality-based packet prioritization approach with existing Drop-Tail and Hint-Track-inspired cumulative-MSE-based prioritization methods. The result shows that our prioritization method produces videos of higher perceptual quality for different network conditions and group-of-picture structures. Our model was developed using data from high encoding-rate videos, and designed for high-quality video transported over a mostly reliable network; however, the experiments show the model is applicable to different encoding rates.
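The shape of such a generalized linear visibility model, and its use for drop decisions, can be sketched with a logistic function (the coefficients below are invented for illustration; only the choice of factors follows the paper): packets lost at a scene cut, under camera motion, or far from their reference score as more visible, and a congested router drops the least visible first.

```python
import math

# Hypothetical coefficients, chosen only to give the factor signs
# the paper reports as significant (scene cuts, camera motion,
# reference distance all increase visibility).
COEF = {"intercept": -2.0, "scene_cut": 2.5,
        "camera_motion": 1.2, "ref_distance": 0.4}

def visibility(packet):
    """P(loss of this packet is visible), logistic link."""
    z = (COEF["intercept"]
         + COEF["scene_cut"] * packet["scene_cut"]
         + COEF["camera_motion"] * packet["camera_motion"]
         + COEF["ref_distance"] * packet["ref_distance"])
    return 1.0 / (1.0 + math.exp(-z))

def drop_order(packets):
    """Congested-router policy: drop least-visible packets first."""
    return sorted(packets, key=visibility)
```

A real deployment would fit the coefficients to subjective-test data, as the paper does across its three data sets.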

Journal ArticleDOI
TL;DR: This paper focuses on mitigating downlink femto-cell to macro-cell interference through dynamic resource partitioning, in the way that HeNBs are denied access to downlink resources that are assigned to macro UEs in their vicinity.
Abstract: Femto-cells consist of user-deployed Home Evolved NodeBs (HeNBs) that promise substantial gains in system spectral efficiency, coverage, and data rates due to an enhanced reuse of radio resources. However, reusing radio resources in an uncoordinated, random fashion introduces potentially destructive interference to the system, both, in the femto and macro layers. An especially critical scenario is a closed-access femto-cell, cochannel deployed with a macro-cell, which imposes strong downlink interference to nearby macro user equipments (UEs) that are not permitted to hand over to the femto-cell. In order to maintain reliable service of macro-cells, it is imperative to mitigate the destructive femto-cell to macro-cell interference. The contribution in this paper focuses on mitigating downlink femto-cell to macro-cell interference through dynamic resource partitioning, in the way that HeNBs are denied access to downlink resources that are assigned to macro UEs in their vicinity. By doing so, interference to the most vulnerable macro UEs is effectively controlled at the expense of a modest degradation in femto-cell capacity. The necessary signaling is conveyed through downlink high interference indicator (DL-HII) messages over the wired backbone. Extensive system level simulations demonstrate that by using resource partitioning, for a sacrifice of 4% of overall femto downlink capacity, macro UEs exposed to high HeNB interference experience a tenfold boost in capacity.
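The partitioning rule itself is simple to sketch (the message structure below is hypothetical; only the DL-HII name and the deny-access idea come from the abstract): an HeNB collects the resource blocks protected for nearby macro UEs and excludes them from its own downlink scheduling.

```python
def henb_allowed_rbs(all_rbs, dl_hii_messages):
    """Resource blocks the HeNB may schedule on the downlink.

    Each DL-HII message (received over the wired backbone) names
    the RBs assigned to a vulnerable macro UE in the HeNB's
    vicinity; those RBs are simply removed from the HeNB's set.
    """
    protected = set()
    for msg in dl_hii_messages:
        protected |= set(msg["protected_rbs"])
    return sorted(set(all_rbs) - protected)

rbs = range(10)                         # say, 10 RBs in the carrier
hii = [{"macro_ue": 7, "protected_rbs": [0, 1]},
       {"macro_ue": 9, "protected_rbs": [1, 2]}]
```

The capacity tradeoff the paper quantifies falls out directly: the femto-cell loses exactly the protected fraction of RBs, while the flagged macro UEs see the HeNB interference vanish on their assignments.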

Journal ArticleDOI
TL;DR: The five papers in this special section were among those submitted in response to the joint call for proposals on high efficiency video coding (HEVC) standardization and cover most of the promising tools and technologies that seem likely to be included in the standard.
Abstract: The five papers in this special section were among those submitted in response to the joint call for proposals on high efficiency video coding (HEVC) standardization. Although at this point of development it is still unclear which specific elements the final HEVC standard will contain, the selection of the papers was made such that together they would cover most of the promising tools and technologies that seem likely to be included in the standard.

Journal ArticleDOI
TL;DR: This paper presents a biologically inspired approach for distributed slot synchronization in wireless networks by modifying and extending a synchronization model based on the theory of pulse-coupled oscillators, which multiplexes synchronization words with data packets and adapts local clocks upon the reception of synchronization words from neighboring nodes.
Abstract: This paper presents a biologically inspired approach for distributed slot synchronization in wireless networks. This is facilitated by modifying and extending a synchronization model based on the theory of pulse-coupled oscillators. The proposed Meshed Emergent Firefly Synchronization (MEMFIS) multiplexes synchronization words with data packets and adapts local clocks upon the reception of synchronization words from neighboring nodes. In this way, a dedicated synchronization phase is mitigated, as a network-wide slot structure emerges seamlessly over time as nodes exchange data packets. Simulation results demonstrate that synchronization is accomplished regardless of the arbitrary initial situation. There is no need for the selection of master nodes, as all nodes cooperate in a completely self-organized manner to achieve slot synchrony. Moreover, the algorithm is shown to scale with the number of nodes, works in meshed networks, and is robust against interference and collisions in dense networks.
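The pulse-coupled-oscillator mechanism underneath such schemes can be sketched with an event-driven toy simulation (coupling constants hypothetical, Mirollo-Strogatz style; MEMFIS itself adds the data-packet multiplexing): each node's phase grows linearly to 1 and it fires; listeners apply a concave phase jump, nodes pushed to 1 fire in the same instant and merge with the cluster, and firing instants align over time.

```python
def simulate(phases, firings=40):
    """Event-driven pulse-coupled oscillators with all-to-all coupling.
    On each firing, listeners jump p -> min(1, 1.3*p + 0.05); anyone
    reaching 1 joins the firing cluster (chain absorption)."""
    phases = list(phases)
    for _ in range(firings):
        dt = 1.0 - max(phases)                 # time until next firing
        phases = [p + dt for p in phases]
        fired = [i for i, p in enumerate(phases) if p >= 1.0 - 1e-12]
        while True:                            # resolve chained firings
            for i, p in enumerate(phases):
                if i not in fired:
                    phases[i] = min(1.0, 1.3 * p + 0.05)
            new = [i for i, p in enumerate(phases)
                   if p >= 1.0 - 1e-12 and i not in fired]
            if not new:
                break
            fired += new
        for i in fired:                        # firing nodes reset
            phases[i] = 0.0
    return phases
```

Starting from arbitrary phases, the clusters merge within a handful of firing rounds, mirroring the paper's observation that no master node or dedicated synchronization phase is needed.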

Patent
18 Mar 2010
TL;DR: In this paper, a mobile terminal checks a priority level of a traffic and judges a type of the traffic, and transmits a reservation signal for a transmission request to the base station when the type of traffic is a high priority level or real-time type, and does not transmits it when the traffic is low-priority level or non-realtime type.
Abstract: In packet communications between a mobile terminal and a base station, the mobile terminal checks a priority level of a traffic and judges a type of the traffic, and transmits a reservation signal for a transmission request to the base station when the type of the traffic is a high priority level or realtime type, and does not transmits it when the type of the traffic is a low priority level or non-realtime type, while the base station determines a resource amount to be reserved for packet transmission according to a resource utilization state and the reservation signal for the traffic of the high priority level or realtime type, or an average transmission interval or transmission rate for the traffic of the low priority level or non-realtime type according to margins in remaining resources, and notifies the resource amount or the average transmission interval or transmission rate to the mobile terminal.

Journal ArticleDOI
13 Sep 2010
TL;DR: The underlying problem is defined, the design space of possible solutions are outlined, and relevant existing approaches are surveyed by classifying them according to their design space.
Abstract: We are observing an increasing trend of connecting embedded sensors and sensor networks to the Internet and publishing their output on the Web. We believe that this development is a precursor of a Web of Things, which gives real-world objects and places a Web presence that not only contains a static description of these entities, but also their real-time state. Just as document searches have become one of the most popular services on the Web, we argue that the search for real-world entities (i.e., people, places, and things) will become equally important. However, in contrast to the mostly static documents on the current Web, the state of real-world entities as captured by sensors is highly dynamic. Thus, searching for real-world entities with a certain state is a challenging problem. In this paper, we define the underlying problem, outline the design space of possible solutions, and survey relevant existing approaches by classifying them according to their design space. We also present a case study of a real-world search engine called Dyser designed by the authors.
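The sensor-ranking idea used by engines like Dyser can be sketched as follows (the room/state data are hypothetical; the principle of ranking by the historical probability of matching the queried state, then verifying live, follows the approach the paper describes): because sensor state changes too fast to index, the index stores only slowly changing statistics.

```python
def rank_entities(history, wanted_state):
    """Rank entities by the historical fraction of time they were
    observed in wanted_state (a proxy for the probability that
    they match the query right now)."""
    def p_match(entity):
        states = history[entity]
        return states.count(wanted_state) / len(states)
    return sorted(history, key=p_match, reverse=True)

def search(history, live_reading, wanted_state, max_polls):
    """Poll live sensors in rank order, up to max_polls entities,
    and return those that currently match."""
    hits = []
    for entity in rank_entities(history, wanted_state)[:max_polls]:
        if live_reading[entity] == wanted_state:
            hits.append(entity)
    return hits
```

Good ranking means few live polls are wasted on entities unlikely to match, which is what makes real-world search scale despite highly dynamic state.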

Proceedings ArticleDOI
14 Mar 2010
TL;DR: This paper takes a fresh and comprehensive approach that addresses simultaneously three aspects: security, scalability and adaptability to changing network conditions and achieves up to two times higher packet delivery rates, particularly in large and highly volatile networks, while incurring no or only limited additional overhead.
Abstract: Wireless ad hoc networks are inherently vulnerable, as any node can disrupt the communication of potentially any other node in the network. Many solutions to this problem have been proposed. In this paper, we take a fresh and comprehensive approach that addresses simultaneously three aspects: security, scalability and adaptability to changing network conditions. Our communication protocol, Castor, occupies a unique point in the design space: it does not use any control messages except simple packet acknowledgments, and each node makes routing decisions locally and independently without exchanging any routing state with other nodes. Its novel design makes Castor resilient to a wide range of attacks and allows the protocol to scale to large network sizes and to remain efficient under high mobility. We compare Castor against four representative protocols from the literature. Our protocol achieves up to two times higher packet delivery rates, particularly in large and highly volatile networks, while incurring no or only limited additional overhead. At the same time, Castor is able to survive more severe attacks and recovers from them faster.
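The local, state-exchange-free routing decision at Castor's core can be sketched like this (the averaging constant and class shape are hypothetical; the idea of per-neighbor reliability estimates updated only by acknowledgments follows the abstract): each node scores its neighbors from its own ACK history and forwards to the best one.

```python
ALPHA = 0.8   # weight on past observations (hypothetical value)

class CastorNode:
    """Keeps a per-neighbor reliability estimate toward one
    destination; no routing state is exchanged with other nodes."""

    def __init__(self, neighbors):
        # Start optimistic so every neighbor gets tried.
        self.reliability = {n: 1.0 for n in neighbors}

    def choose_next_hop(self):
        return max(self.reliability, key=self.reliability.get)

    def feedback(self, neighbor, acked):
        """Exponential averaging on ACK (1.0) or timeout (0.0)."""
        old = self.reliability[neighbor]
        self.reliability[neighbor] = ALPHA * old + (1 - ALPHA) * (1.0 if acked else 0.0)
```

A neighbor that drops or misroutes packets sees its estimate decay and stops being chosen, which is how the protocol routes around attackers and recovers when they stop.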

Proceedings ArticleDOI
14 Mar 2010
TL;DR: It is proved that there exists one Nash equilibrium in the conjectural prices and there are enough incentives for NO to advertise such a conjectural price and SPs to follow this advice, and this Nash equilibrium results in efficient rate allocation in the virtualized wireless network.
Abstract: We propose a virtualization framework to separate the network operator (NO) who focuses on wireless resource management and service providers (SP) who target distinct objectives with different constraints. Within the proposed framework, we model the interactions among SPs and NO as a stochastic game, each stage of which is played by SPs (on behalf of the end users) and is regulated by the NO through the Vickrey-Clarke-Groves (VCG) mechanism. Due to the strong coupling between the future decisions of SPs and lack of global information at each SP, the stochastic game is notoriously hard. Instead, we introduce conjectural prices to represent the future congestion levels the end users potentially will experience, via which the future interactions between SPs are decoupled. Then, the policy to play the dynamic rate allocation game becomes selecting the conjectural prices and announcing a strategic value function (i.e., the preference on the rate) at each time. We prove that there exists one Nash equilibrium in the conjectural prices and, given the conjectural prices, the SPs have to truthfully reveal their own value function. We further prove that this Nash equilibrium results in efficient rate allocation in our virtualized wireless network. In other words, there are enough incentives for NO to advertise such a conjectural price and SPs to follow this advice.
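The VCG regulation at each stage can be illustrated with its single-item special case, the Vickrey auction (the SP names and bids are hypothetical; the paper's mechanism allocates divisible rates rather than one item): the winner pays the externality it imposes on the others, which here is just the second-highest bid.

```python
def vcg_single_item(bids):
    """Single-item VCG (Vickrey) auction.

    bids: SP name -> reported value for the resource.
    Returns (winner, payment), where the payment equals the value
    the runner-up loses by not winning -- the winner's externality.
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    payment = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, payment
```

Because the payment does not depend on the winner's own bid, reporting the true value is a dominant strategy, which is the incentive property the paper relies on when it shows SPs truthfully reveal their value functions.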

Proceedings ArticleDOI
Juejia Zhou1, Mingju Li1, Liu Liu1, Xiaoming She1, Lan Chen1 
18 Dec 2010
TL;DR: The research focuses on guiding more power consumption toward green energy sources, which implies that the UEs (User Equipment), especially cell edge UEs, will have preferential access to the BSs (Base Stations) with a natural energy supply.
Abstract: The spread of mobile connectivity is generating major social and economic benefits around the world. At the same time, with the rapid growth of new telecommunication technologies such as mobile broadband communication and M2M (Machine-to-Machine) networks, a larger number of varied base stations will be deployed in the network, which will greatly increase power expense and CO2 emissions. In order to reduce system power expense, a variety of research efforts on new energy sources and novel transmission technologies are under way. In this paper, instead of reducing the absolute power expense, the research focuses on guiding more power consumption toward green energy sources, which implies that the UEs (User Equipment), especially cell edge UEs, will have preferential access to the BSs (Base Stations) with a natural energy supply. To realize this preferential connection, two detailed approaches are proposed: HO (Hand Over) parameter tuning for target cell selection, and power control for coverage optimization. The system evaluation shows that, with proper parameter settings for HO and power control, both approaches achieve a good balance between energy-saving effect and system throughput impact.
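The first approach, HO parameter tuning, amounts to biasing the cell-selection measurement in favor of green-powered base stations. A minimal sketch of that idea is below; the function name and the specific offset and hysteresis values are our assumptions, not parameters from the paper.

```python
def select_cell(rsrp_dbm, green_cells, green_bias_db=3.0,
                hysteresis_db=1.0, serving=None):
    """Pick a target cell after biasing measurements toward BSs
    with a natural (green) energy supply.

    rsrp_dbm      : dict cell_id -> measured RSRP in dBm
    green_cells   : set of cell ids powered by renewable energy
    green_bias_db : offset added to green cells; a larger value
                    steers more (especially cell edge) UEs to them
    """
    def biased(cell):
        return rsrp_dbm[cell] + (green_bias_db if cell in green_cells else 0.0)

    best = max(rsrp_dbm, key=biased)
    # Standard HO hysteresis: leave the serving cell only if the
    # biased measurement beats it by the hysteresis margin.
    if serving is not None and best != serving:
        if biased(best) < biased(serving) + hysteresis_db:
            return serving
    return best
```

Tuning `green_bias_db` trades off exactly what the paper evaluates: a larger bias shifts more traffic onto green energy but can hand UEs to cells with weaker radio conditions, reducing throughput.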

Patent
24 Sep 2010
TL;DR: In this paper, a method and apparatus for motion vector prediction and coding is described, which consists of deriving N motion vector predictors for a first block that has N motion vectors corresponding to N lists of reference frames and a current frame, including constructing one of the predictors when a second block that neighbors the first block and is used for prediction has at least one invalid motion vector, where N is an integer greater than 1.
Abstract: A method and apparatus is disclosed herein for motion vector prediction and coding. In one embodiment, the method comprises: deriving N motion vector predictors for a first block that has N motion vectors corresponding to N lists of reference frames and a current frame, including constructing one of the N motion vector predictors when a second block that neighbors the first block and is used for prediction has at least one invalid motion vector, where N is an integer greater than 1; generating N differential motion vectors based on the N motion vectors and N motion vector predictors; and encoding the N differential motion vectors.
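The predictor-construction step for an invalid neighbor can be illustrated per reference list as below. The patent abstract does not disclose the exact substitution rule; the median prediction with a zero-vector fallback shown here follows common H.264/AVC-style practice and is only an assumed concrete instance.

```python
def mv_predictor(neighbor_mvs):
    """Componentwise median over the spatial neighbors' motion
    vectors, substituting (0, 0) for any neighbor whose MV is
    invalid (None), e.g. an intra-coded block or a block with no
    MV for this reference list."""
    filled = [mv if mv is not None else (0, 0) for mv in neighbor_mvs]
    xs = sorted(mv[0] for mv in filled)
    ys = sorted(mv[1] for mv in filled)
    mid = len(filled) // 2
    return (xs[mid], ys[mid])

def encode_differential(mv, predictor):
    # Only the difference from the predictor is coded in the
    # bitstream; this is repeated once per reference list (N times).
    return (mv[0] - predictor[0], mv[1] - predictor[1])
```

Running this once per list yields the N differential motion vectors the abstract refers to; the decoder mirrors the same predictor derivation to reconstruct the N motion vectors.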

Proceedings ArticleDOI
16 May 2010
TL;DR: An enhanced DCS-with-muting method is proposed here to further improve frequency and power efficiency, using adaptive muting mode selection based on capacity calculation and flexible power allocation based on the muting mode selection status.
Abstract: CoMP JP (Coordinated Multi-Point Joint Processing) is regarded as a promising technique to improve both cell edge user throughput and cell average user throughput in the LTE-A downlink. CoMP JP can be categorized as CoMP JT (Joint Transmission) and CoMP DCS (Dynamic Cell Selection). Compared with CoMP JT schemes, CoMP DCS provides a good trade-off among transmission algorithm complexity, backhaul overhead and system performance. The conventional DCS scheme benefits cell edge users by instantaneously selecting the best serving eNB for them based on limited signaling overhead in the uplink radio interface. The DCS-with-muting scheme further applies muting to the strongest neighbor cell to decrease interference to cell edge users. However, system spectrum efficiency may be degraded, since frequency reuse of one is not maintained and radio resources (resource blocks and power) may not be fully used. An enhanced DCS-with-muting method is proposed here to further improve frequency and power efficiency, using adaptive muting mode selection based on capacity calculation and flexible power allocation based on the muting mode selection status. Performance evaluation shows that the proposed algorithm provides about 5.5% cell average throughput gain and about 10% cell edge throughput gain compared with conventional DCS schemes.
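The capacity-based muting decision can be sketched as a simple Shannon-capacity comparison: mute the strongest interferer only when the edge user's capacity gain exceeds the capacity the muted cell gives up. This is our own toy formulation of the trade-off (the `muted_cell_loss_bps_hz` parameter and all numbers are assumptions), not the paper's algorithm.

```python
from math import log2

def choose_muting(signal_mw, interferers_mw, muted_cell_loss_bps_hz,
                  noise_mw=1e-9):
    """Adaptive muting mode selection for one edge user.

    signal_mw / interferers_mw  : received powers in linear mW
    muted_cell_loss_bps_hz      : capacity (bit/s/Hz) the strongest
                                  neighbor would have delivered to its
                                  own user on this resource block
    """
    strongest = max(interferers_mw)
    sinr_no_mute = signal_mw / (sum(interferers_mw) + noise_mw)
    sinr_mute = signal_mw / (sum(interferers_mw) - strongest + noise_mw)
    gain = log2(1 + sinr_mute) - log2(1 + sinr_no_mute)
    # Mute only if the edge user's gain outweighs the muted cell's loss;
    # otherwise keep frequency reuse one.
    return "mute" if gain > muted_cell_loss_bps_hz else "no_mute"
```

Making this decision per resource block is what lets the scheme avoid the blanket efficiency loss of always-on muting while still protecting edge users when the interference is dominant.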

Journal ArticleDOI
TL;DR: In this article, a path shadowing model for indoor populated environments was developed based on computer simulations, in which the propagation paths between the transmitting and receiving points in an empty rectangular space were determined using the ray-tracing method, and intersections of the paths with the bodies were counted.
Abstract: This paper presents a path-shadowing model for indoor populated environments that has been developed based on computer simulations. The propagation paths between the transmitting and receiving points in an empty rectangular space are determined using the ray-tracing method, in which moving quasi-human bodies that are modeled as cylinders with a finite height are generated in the space, and intersections of the paths with the bodies are counted. From the results, the shadowing probabilities, durations, and intervals are evaluated for each propagation path, and this shadowing process is characterized as a Markov process. This paper proposes a method that individually generates the shadowing effects on each propagation path. The measurement results of the path-shadowing characteristics using a 5.2-GHz high-resolution channel sounder are presented, and the validity of this model is confirmed. Similar measurement results using a photoelectric sensor are also presented to reinforce the channel-sounding measurement results.
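The Markov characterization above lends itself to a simple per-path generator: a two-state chain whose transition probabilities set the mean shadowing duration and interval. The sketch below is a generic two-state Markov simulator under assumed parameter names, not the paper's fitted model.

```python
import random

def simulate_shadowing(p_shadow_given_clear, p_clear_given_shadow,
                       n_steps, seed=0):
    """Generate one propagation path's shadowing sequence as a
    two-state Markov process (True = path blocked by a body).

    Mean shadowing duration ~ 1 / p_clear_given_shadow steps;
    mean clear interval     ~ 1 / p_shadow_given_clear steps;
    stationary shadowing probability
        = p_shadow_given_clear / (p_shadow_given_clear + p_clear_given_shadow).
    """
    rng = random.Random(seed)
    state = False          # start unshadowed
    seq = []
    for _ in range(n_steps):
        if state:
            if rng.random() < p_clear_given_shadow:
                state = False
        else:
            if rng.random() < p_shadow_given_clear:
                state = True
        seq.append(state)
    return seq
```

Running an independent chain per propagation path, as the paper's method proposes, then gives each ray its own on/off shadowing process whose statistics can be matched to the measured probabilities, durations, and intervals.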

Patent
02 Aug 2010
TL;DR: In this paper, a mobile terminal apparatus performs spreading processing for multiplying a data signal by a code varying for each user, and transmits the spreading-processed data signal to the radio base station apparatus on the uplink shared channel.
Abstract: To support user multiplexing methods that enable more users to be efficiently multiplexed on an uplink shared channel, a mobile terminal apparatus, radio base station apparatus and radio communication method are provided. The mobile terminal apparatus performs spreading processing that multiplies a data signal by a code varying for each user, and transmits the spread data signal to the radio base station apparatus on the uplink shared channel. The radio base station apparatus receives the signal and separates the mixed reception signal, in which a plurality of users are multiplied by user-specific spreading codes, into the desired data signals for each user.
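The spreading and user-separation steps are standard code-division multiplexing, which a short sketch makes concrete. The Walsh codes and the two-user setup below are illustrative assumptions, not taken from the patent.

```python
def spread(bits, code):
    """Multiply each data symbol (+1/-1) by the user's spreading code."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate the received chip stream with one user's code.

    With orthogonal codes, the other users' contributions cancel
    over each symbol period, leaving that user's symbols."""
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(chips[i + j] * code[j] for j in range(n))
        out.append(1 if corr > 0 else -1)
    return out

# Two users share the uplink channel with orthogonal Walsh codes;
# the base station sees the superposition of their chip streams.
code_a, code_b = [1, 1, 1, 1], [1, -1, 1, -1]
rx = [x + y for x, y in zip(spread([1, -1], code_a),
                            spread([-1, -1], code_b))]
```

Despreading `rx` with `code_a` recovers the first user's symbols and with `code_b` the second user's, which is exactly the user-separation step the abstract describes at the base station.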