
Showing papers by "NTT DoCoMo" published in 2006


Journal ArticleDOI
10 Jan 2006
TL;DR: This paper explains what network coding does and how it does it, discusses the implications of theoretical results on network coding for realistic settings, and shows how network coding can be used in practice.
Abstract: Network coding is a new research area that may have interesting applications in practical networking systems. With network coding, intermediate nodes may send out packets that are linear combinations of previously received information. There are two main benefits of this approach: potential throughput improvements and a high degree of robustness. Robustness translates into loss resilience and facilitates the design of simple distributed algorithms that perform well, even if decisions are based only on partial information. This paper is an instant primer on network coding: we explain what network coding does and how it does it. We also discuss the implications of theoretical results on network coding for realistic settings and show how network coding can be used in practice.
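The packet-level mechanism the abstract describes can be sketched in a few lines. The toy example below is my own illustration, not the paper's code: packets are encoded as random linear combinations over GF(2) (bitwise XOR under random binary coefficients) and decoded by Gaussian elimination; practical systems typically work over GF(2^8) instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(packets, n_coded):
    """Emit n_coded random linear combinations of the source packets
    over GF(2): each payload is the XOR of the packets selected by a
    random nonzero binary coefficient vector."""
    k = len(packets)
    coded = []
    while len(coded) < n_coded:
        coeffs = rng.integers(0, 2, size=k, dtype=np.uint8)
        if not coeffs.any():          # skip the useless all-zero combination
            continue
        payload = np.zeros_like(packets[0])
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Recover the k source packets from k innovative coded packets by
    Gaussian elimination over GF(2) (XOR row operations)."""
    A = np.array([c for c, _ in coded], dtype=np.uint8)
    B = np.array([p for _, p in coded], dtype=np.uint8)
    for col in range(k):
        pivot = next(r for r in range(col, len(A)) if A[r, col])
        A[[col, pivot]], B[[col, pivot]] = A[[pivot, col]], B[[pivot, col]]
        for r in range(len(A)):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                B[r] ^= B[col]
    return [B[i] for i in range(k)]
```

Any intermediate node can re-encode received coded packets the same way, which is what yields the robustness to loss the abstract mentions.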

858 citations


Journal ArticleDOI
TL;DR: It is shown that the maximum likelihood estimator (MLE) using only LOS estimates and the maximum a posteriori probability (MAP) estimator using both LOS and NLOS data can asymptotically achieve the CRLB and the G-CRLB, respectively.
Abstract: We present an analysis of the time-of-arrival (TOA), time-difference-of-arrival (TDOA), angle-of-arrival (AOA) and signal strength (SS) based positioning methods in a non-line-of-sight (NLOS) environment. Single path (line-of-sight (LOS) or NLOS) propagation is assumed. The best geolocation accuracy is evaluated in terms of the Cramer-Rao lower bound (CRLB) or the generalized CRLB (G-CRLB), depending on whether prior statistics of NLOS induced errors are unavailable or available. We then show that the maximum likelihood estimator (MLE) using only LOS estimates and the maximum a posteriori probability (MAP) estimator using both LOS and NLOS data can asymptotically achieve the CRLB and the G-CRLB, respectively. Hybrid schemes that adopt more than one type of position-pertaining data and the relationship among the four methods in terms of their positioning accuracy are also investigated.
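As a concrete illustration of TOA-based geolocation (a sketch of the general setting, not the paper's estimators or bounds), the classic linearized least-squares fix recovers a position from range estimates to known anchors; with noiseless LOS ranges it is exact.

```python
import numpy as np

def toa_position(anchors, ranges):
    """Linearized least-squares TOA fix: subtracting the first range
    equation ||x - a_i||^2 = r_i^2 from the others cancels the
    quadratic term in x, leaving the linear system A x = b."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With NLOS bias on some ranges this estimator degrades, which is exactly the regime where the MAP estimator with NLOS error statistics discussed in the abstract pays off.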

428 citations


Journal ArticleDOI
TL;DR: This letter investigates a new method for sidelobe suppression characterized by the insertion of a few so-called cancellation carriers at both sides of the OFDM spectrum that achieves a significant reduction of out-of-band radiation.
Abstract: Orthogonal frequency-division multiplexing (OFDM) suffers from high out-of-band radiation. In this letter, we investigate a new method for sidelobe suppression characterized by the insertion of a few so-called cancellation carriers (CCs) at both sides of the OFDM spectrum. These special carriers are modulated with complex weighting factors which are optimized such that the sidelobes of the CCs cancel the sidelobes of the transmit signal. With this technique a significant reduction of out-of-band radiation is achieved at the cost of a small degradation in system performance.
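The optimization of the CC weights can be illustrated with a simple least-squares sketch (my own toy model with a sinc-shaped per-carrier spectrum; the paper's exact optimization and constraints differ): choose the CC weights g to minimize the out-of-band power ||D d + C g||^2, where D and C hold the sampled sidelobe spectra of the data carriers and the CCs.

```python
import numpy as np

def sidelobe_matrix(carrier_idx, freqs):
    # sinc-shaped spectrum of each (rectangular-pulse) subcarrier,
    # sampled at out-of-band frequencies normalized to the spacing
    return np.sinc(np.asarray(freqs)[:, None] - np.asarray(carrier_idx)[None, :])

def cancellation_weights(data_idx, data_syms, cc_idx, freqs):
    """Least-squares CC weights g minimizing ||D d + C g||^2, so the
    CC sidelobes cancel the data sidelobes as far as possible."""
    D = sidelobe_matrix(data_idx, freqs)
    C = sidelobe_matrix(cc_idx, freqs)
    g, *_ = np.linalg.lstsq(C, -(D @ data_syms), rcond=None)
    return g

def oob_power(data_idx, data_syms, cc_idx, g, freqs):
    D = sidelobe_matrix(data_idx, freqs)
    C = sidelobe_matrix(cc_idx, freqs)
    return float(np.sum(np.abs(D @ data_syms + C @ g) ** 2))
```

Because the weights depend on the transmitted data symbols, they must be recomputed per OFDM symbol, which is where the small performance cost noted in the abstract comes from.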

313 citations


Book ChapterDOI
24 Apr 2006
TL;DR: In this paper, identity-based aggregate signature schemes are developed that are secure in the random oracle model under the computational Diffie-Hellman assumption over pairing-friendly groups against an adversary that chooses its messages and its target identities adaptively.
Abstract: An aggregate signature is a single short string that convinces any verifier that, for all 1 ≤ i ≤ n, signer Si signed message Mi, where the n signers and n messages may all be distinct. The main motivation of aggregate signatures is compactness. However, while the aggregate signature itself may be compact, aggregate signature verification might require potentially lengthy additional information – namely, the (at most) n distinct signer public keys and the (at most) n distinct messages being signed. If the verifier must obtain and/or store this additional information, the primary benefit of aggregate signatures is largely negated. This paper initiates a line of research whose ultimate objective is to find a signature scheme in which the total information needed to verify is minimized. In particular, the verification information should preferably be as close as possible to the theoretical minimum: the complexity of describing which signer(s) signed what message(s). We move toward this objective by developing identity-based aggregate signature schemes. In our schemes, the verifier does not need to obtain and/or store various signer public keys to verify; instead, the verifier only needs a description of who signed what, along with two constant-length “tags”: the short aggregate signature and the single public key of a Private Key Generator. Our scheme is secure in the random oracle model under the computational Diffie-Hellman assumption over pairing-friendly groups against an adversary that chooses its messages and its target identities adaptively.

307 citations


Journal ArticleDOI
TL;DR: This article proposes a cross-layer optimization strategy that jointly optimizes the application layer, data link layer, and physical layer of the protocol stack using an application-oriented objective function in order to maximize user satisfaction.
Abstract: Mobile multimedia applications require networks that optimally allocate resources and adapt to dynamically changing environments. Cross-layer design (CLD) is a new paradigm that addresses this challenge by optimizing communication network architectures across traditional layer boundaries. In this article we discuss the relevant technical challenges of CLD and focus on application-driven CLD for video streaming over wireless networks. We propose a cross-layer optimization strategy that jointly optimizes the application layer, data link layer, and physical layer of the protocol stack using an application-oriented objective function in order to maximize user satisfaction. In our experiments we demonstrate the performance gain achievable with this approach. We also explore the trade-off between performance gain and additional computation and communication cost introduced by cross-layer optimization. Finally, we outline future research challenges in CLD.

282 citations


Journal ArticleDOI
TL;DR: The proposed method is based on the multiplication of the used subcarriers with subcarrier weights, which reduces OFDM sidelobes by more than 10 dB on average without requiring the transmission of any side information.
Abstract: In this letter, a method for sidelobe suppression in OFDM systems is proposed and investigated. The proposed method is based on the multiplication of the used subcarriers with subcarrier weights. The subcarrier weights are determined in such a way that the sidelobes of the transmission signal are minimized according to an optimization algorithm which allows several optimization constraints. As a result, sidelobe suppression by subcarrier weighting reduces OFDM sidelobes by more than 10 dB on average without requiring the transmission of any side information.

271 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: In this paper, a new method for sample predictor creation by template matching in a region of reconstructed pixels is presented; improvements in coding efficiency of more than 11% in bitrate were achieved.
Abstract: Intra prediction is an effective method for reducing the coded information of an image or an intra frame within a video sequence. The conventional method today is to create a sample predictor block by extrapolating the reconstructed pixels surrounding the target block to be coded. The sample predictor block is subtracted from the target block and the resulting residual is coded using transformation, quantization and entropy coding. This is an effective method for sample predictor block creation in most sequences. However, the extrapolation method is not able to represent sample prediction blocks with complex texture. Furthermore, pixels that are far from the surrounding pixels are usually badly predicted. In this paper, a new method for sample predictor creation by template matching in a region of reconstructed pixels is presented. Improvements in coding efficiency of more than 11% in bitrate were achieved.
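A minimal sketch of the template-matching idea (illustrative only, not the paper's codec integration): the predictor for a target block is the block whose inverse-L template of reconstructed neighbours best matches the target's own template.

```python
import numpy as np

def template_match_predict(rec, y, x, B=4, T=2):
    """Predict the BxB block at (y, x) from reconstructed pixels 'rec':
    take the inverse-L template (T rows above, T columns left), scan
    earlier positions for the template with minimum SSD, and use the
    block attached to the best-matching template as the predictor."""
    def templ(ry, rx):
        top = rec[ry - T:ry, rx - T:rx + B]
        left = rec[ry:ry + B, rx - T:rx]
        return np.concatenate([top.ravel(), left.ravel()]).astype(float)

    target = templ(y, x)
    best, best_cost = None, np.inf
    for ry in range(T, y - B + 1):      # candidate blocks fully above the target
        for rx in range(T, rec.shape[1] - B + 1):
            cost = np.sum((templ(ry, rx) - target) ** 2)
            if cost < best_cost:
                best_cost, best = cost, (ry, rx)
    ry, rx = best
    return rec[ry:ry + B, rx:rx + B]
```

Since the search uses only reconstructed pixels, the decoder can repeat it and no motion-vector-like side information needs to be transmitted.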

246 citations


Proceedings ArticleDOI
23 Apr 2006
TL;DR: It is shown that network coding enables significant energy savings in a wireless ad-hoc network when each node of the network is a source that wants to transmit information to all other nodes, and an implementable method for performing network coding is proposed.
Abstract: We show that network coding enables significant energy savings in a wireless ad-hoc network when each node of the network is a source that wants to transmit information to all other nodes. Energy efficiency directly affects battery life and thus is a critical design parameter for wireless ad-hoc networks. We propose an implementable method for performing network coding in such a setting. We analyze theoretical cases in detail, and use the insights gained to propose a practical, fully distributed method for realistic wireless ad-hoc scenarios. We address practical issues such as setting the forwarding factor, managing generations, and the impact of transmission range and mobility. We use theoretical analysis and packet-level simulation.

232 citations


Journal ArticleDOI
Onur G. Guleryuz1
TL;DR: This paper studies the robust estimation of missing regions in images and video using adaptive, sparse reconstructions, and shows how the constructed estimators relate to the utilized transform and its sparsity over regions of interest.
Abstract: We study the robust estimation of missing regions in images and video using adaptive, sparse reconstructions. Our primary application is on missing regions of pixels containing textures, edges, and other image features that are not readily handled by prevalent estimation and recovery algorithms. We assume that we are given a linear transform that is expected to provide sparse decompositions over missing regions such that a portion of the transform coefficients over missing regions are zero or close to zero. We adaptively determine these small magnitude coefficients through thresholding, establish sparsity constraints, and estimate missing regions in images using information surrounding these regions. Unlike prevalent algorithms, our approach does not necessitate any complex preconditioning, segmentation, or edge detection steps, and it can be written as a sequence of denoising operations. We show that the region types we can effectively estimate in a mean-squared error sense are those for which the given transform provides a close approximation using sparse nonlinear approximants. We show the nature of the constructed estimators and how these estimators relate to the utilized transform and its sparsity over regions of interest. The developed estimation framework is general, and can readily be applied to other nonstationary signals with a suitable choice of linear transforms. Part I discusses fundamental issues, and Part II is devoted to adaptive algorithms with extensive simulation examples that demonstrate the power of the proposed techniques.
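The "sequence of denoising operations" can be sketched as iterated hard thresholding in a fixed transform domain. The toy example below is my own illustration using an orthonormal 2-D DCT and a linearly decaying threshold (the paper uses adaptive thresholding and more general transforms); it fills a missing block of a square image.

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis (rows are frequencies)
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def inpaint(img, mask, iters=60, t0=150.0):
    """Estimate missing pixels (mask == False) of a square image by
    iterated hard thresholding in the 2-D DCT domain: threshold the
    coefficients, invert, re-insert the known pixels, and lower the
    threshold linearly toward zero."""
    n = img.shape[0]
    D = dct_matrix(n)
    x = np.where(mask, img, img[mask].mean())
    for i in range(iters):
        t = t0 * (1.0 - i / iters)
        c = D @ x @ D.T                 # forward 2-D DCT
        c[np.abs(c) < t] = 0.0          # keep only the large coefficients
        x = D.T @ c @ D                 # inverse 2-D DCT
        x = np.where(mask, img, x)      # known pixels stay fixed
    return x
```

Each pass is literally a denoising step (threshold small coefficients, reconstruct), which is why the estimate inherits whatever structure the transform represents sparsely.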

227 citations


Journal ArticleDOI
TL;DR: In this article, specific absorption rates (SAR) determined computationally in the specific anthropomorphic mannequin (SAM) and anatomically correct models of the human head when exposed to a mobile phone model are compared as part of a study organized by IEEE Standards Coordinating Committee 34, Sub-Committee 2, and Working Group 2.
Abstract: The specific absorption rates (SAR) determined computationally in the specific anthropomorphic mannequin (SAM) and anatomically correct models of the human head when exposed to a mobile phone model are compared as part of a study organized by IEEE Standards Coordinating Committee 34, Sub-Committee 2, and Working Group 2, and carried out by an international task force comprising 14 government, academic, and industrial research institutions. The detailed study protocol defined the computational head and mobile phone models. The participants used different finite-difference time-domain software and independently positioned the mobile phone and head models in accordance with the protocol. The results show that when the pinna SAR is calculated separately from the head SAR, SAM produced a higher SAR in the head than the anatomically correct head models. Also the larger (adult) head produced a statistically significant higher peak SAR for both the 1- and 10-g averages than did the smaller (child) head for all conditions of frequency and position.

207 citations


Journal ArticleDOI
TL;DR: It is shown that prior statistics of non-line-of-sight (NLOS) induced errors are critical to the accuracy improvement when the multipath delays are processed, and that the degree of accuracy enhancement depends on two major factors: the strength of multipath components and the variance of NLOS induced errors.
Abstract: Wireless geolocation in a multipath environment is of particular interest for wideband communications and fast-developing ultrawideband (UWB) technologies. Conventional methods are solely based on first arriving signals. In this paper, it is investigated whether and under what conditions processing delay estimates of multipath components in addition to first arrivals can enhance the positioning accuracy. It is shown that the enhancement depends on two principal factors: the strength of multipath components and the variance of non-line-of-sight (NLOS) delays. Analytical results, which are derived as an extension of a single-path propagation case (IEEE Trans. Wireless Commun., Feb. 2006), are presented first. Their practical implications are then discussed by examining several numerical examples. Finally, modified schemes of practical interest in a multipath environment are proposed.

Proceedings ArticleDOI
Markus Herdin1
11 Dec 2006
TL;DR: Simulations show that the proposed relaying scheme achieves significant SNR gains over conventional OFDM relaying and a signaling scheme is developed that allows for an efficient transfer of the necessary information.
Abstract: Amplify-and-Forward (AF) is a simple but effective relaying concept for multihop networks that combines transparency regarding modulation format and coding scheme with ease of implementation. Conventional AF, however, does not take into account the transfer function of the first and the second hop channels. For OFDM based systems, this appears to be sub-optimum. In this paper an AF relaying scheme is proposed that adapts to the transfer functions of both channels. The relay estimates the transfer functions and rearranges the subcarriers in each OFDM packet such that an optimum coupling between subcarriers of the first and the second hop channels occurs. Additionally, a signaling scheme is developed that allows for an efficient transfer of the necessary information. Simulations show that the proposed relaying scheme achieves significant SNR gains over conventional OFDM relaying.
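The "optimum coupling between subcarriers" can be illustrated with the standard sorted pairing (a sketch under the common best-to-best matching rule, not necessarily the paper's exact algorithm): since the AF end-to-end SNR is supermodular in the two per-subcarrier SNRs, relaying the k-th strongest first-hop subcarrier on the k-th strongest second-hop subcarrier maximizes the sum of end-to-end SNRs.

```python
import numpy as np

def pair_subcarriers(h1, h2):
    """Best-to-best pairing: returns perm such that the signal received
    on first-hop subcarrier i is forwarded on second-hop subcarrier
    perm[i]."""
    order1 = np.argsort(-np.abs(np.asarray(h1)))
    order2 = np.argsort(-np.abs(np.asarray(h2)))
    perm = np.empty_like(order1)
    perm[order1] = order2
    return perm

def e2e_snr(h1, h2, perm, snr1=10.0, snr2=10.0):
    # per-subcarrier amplify-and-forward end-to-end SNR
    g1 = snr1 * np.abs(np.asarray(h1)) ** 2
    g2 = snr2 * np.abs(np.asarray(h2)[perm]) ** 2
    return g1 * g2 / (g1 + g2 + 1.0)
```

The permutation (or the channel estimates needed to recompute it) is what the paper's signalling scheme transfers efficiently to the destination.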

Book ChapterDOI
03 Nov 2006
TL;DR: This paper proposes an RFID security method that achieves all requirements based on a hash function and a symmetric key cryptosystem and provides not only high security but also high efficiency.
Abstract: Radio Frequency Identification (RFID) has come under the spotlight as a technology supporting the ubiquitous society. But we now face several security problems and challenges in RFID systems. Recent papers have reported that RFID systems have to achieve the following requirements: (1) indistinguishability, (2) forward security, (3) resistance to replay attacks, (4) resistance to tag killing, and (5) ownership transfer. We have to design an RFID system that achieves all of the above-mentioned requirements. Previous methods achieve only some of them individually, and no RFID system has been constructed that achieves all requirements. In this paper, we propose an RFID security method that achieves all requirements based on a hash function and a symmetric key cryptosystem. In addition, our proposed method provides not only high security but also high efficiency.
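One classic construction meeting the indistinguishability and forward-security requirements is a hash-chain tag in the spirit of Ohkubo-Suzuki-Kinoshita (a sketch of that prior idea, not this paper's exact method, which additionally uses a symmetric key cryptosystem): the tag replies g(s) and irreversibly updates its state s <- h(s).

```python
import hashlib

def h(s: bytes) -> bytes:               # state-update hash
    return hashlib.sha256(b"state:" + s).digest()

def g(s: bytes) -> bytes:               # reply hash
    return hashlib.sha256(b"reply:" + s).digest()

class Tag:
    """Replies g(s) and overwrites s <- h(s): a compromised current
    state cannot be rolled back (forward security), and replies look
    random without the secret (indistinguishability)."""
    def __init__(self, secret: bytes):
        self.s = secret

    def respond(self) -> bytes:
        reply = g(self.s)
        self.s = h(self.s)
        return reply

class Reader:
    def __init__(self, secrets, max_chain=100):
        self.db = dict(secrets)         # tag_id -> initial secret
        self.max_chain = max_chain

    def identify(self, reply: bytes):
        """Walk each tag's hash chain; returns (tag_id, step) or None.
        A reply whose step is older than the last accepted one reveals
        a replay attack."""
        for tag_id, s in self.db.items():
            cur = s
            for step in range(self.max_chain):
                if g(cur) == reply:
                    return tag_id, step
                cur = h(cur)
        return None
```

The reader-side chain walk is the efficiency bottleneck such schemes have, which is the kind of cost the paper's combined hash/symmetric-key design aims to reduce.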

Journal ArticleDOI
TL;DR: It is shown in this paper that a greedy user can substantially increase his share of bandwidth, at the expense of the other users, by slightly modifying the driver of his network adapter.
Abstract: IEEE 802.11 works properly only if the stations respect the MAC protocol. We show in this paper that a greedy user can substantially increase his share of bandwidth, at the expense of the other users, by slightly modifying the driver of his network adapter. We explain how easily this can be performed, in particular, with the new generation of adapters. We then present DOMINO (detection of greedy behavior in the MAC layer of IEEE 802.11 public networks), a piece of software to be installed in or near the access point. DOMINO can detect and identify greedy stations without requiring any modification of the standard protocol. We illustrate these concepts by simulation results and by the description of a prototype that we have recently implemented.
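One of DOMINO's anomaly tests can be sketched in a few lines (a simplified illustration; the real system combines several tests with repeated-offense accounting): a station whose observed average backoff falls well below the nominal mean of CW_min / 2 slots is flagged as greedy.

```python
def greedy_stations(backoff_samples, cw_min=31, gamma=0.6):
    """Flag stations whose average observed backoff (in slots) is below
    gamma times the nominal mean CW_min / 2; gamma trades off false
    positives against detection speed."""
    expected = cw_min / 2.0
    return [sta for sta, samples in backoff_samples.items()
            if sum(samples) / len(samples) < gamma * expected]
```

Because the monitor only observes idle slots between a station's transmissions, no change to the stations or the standard protocol is needed, matching the deployment model in the abstract.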

Journal ArticleDOI
Onur G. Guleryuz1
TL;DR: This work shows that constructing estimates based on nonlinear approximants is fundamentally a nonconvex problem and proposes a progressive algorithm that is designed to deal with this issue directly and is applied to images through an extensive set of simulation examples.
Abstract: We combine the main ideas introduced in Part I with adaptive techniques to arrive at a powerful algorithm that estimates missing data in nonstationary signals. The proposed approach operates automatically based on a chosen linear transform that is expected to provide sparse decompositions over missing regions such that a portion of the transform coefficients over missing regions are zero or close to zero. Unlike prevalent algorithms, our method does not necessitate any complex preconditioning, segmentation, or edge detection steps, and it can be written as a progression of denoising operations. We show that constructing estimates based on nonlinear approximants is fundamentally a nonconvex problem and we propose a progressive algorithm that is designed to deal with this issue directly. The algorithm is applied to images through an extensive set of simulation examples, primarily on missing regions containing textures, edges, and other image features that are not readily handled by established estimation and recovery methods. We discuss the properties required of good transforms, and in conjunction, show the types of regions over which well-known transforms provide good predictors. We further discuss extensions of the algorithm where the utilized transforms are also chosen adaptively, where unpredictable signal components in the progressions are identified and not predicted, and where the prediction scenario is more general.

Patent
13 Jun 2006
TL;DR: In this paper, a disclosed transmission apparatus includes a multiplexing portion that multiplexes a common pilot channel, a shared control channel, and a shared data channel; a symbol generation portion that performs an inverse Fourier transformation on the multiplexed signal so as to generate a symbol; and a transmission portion that transmits the generated symbol.
Abstract: A disclosed transmission apparatus includes a multiplexing portion that multiplexes a common pilot channel, a shared control channel, and a shared data channel; a symbol generation portion that performs an inverse Fourier transformation on the multiplexed signal so as to generate a symbol; and a transmission portion that transmits the generated symbol. The multiplexing portion multiplexes the shared control channel including control information necessary for demodulation of the shared data channel including a payload and the common pilot channel to be used by plural users in a frequency direction, and the shared data channel in a time direction with respect to the common pilot channel and the shared control channel. Even when the number of symbols composing a transmission time interval (TTI) is reduced, transmission efficiency of channels excluding the common pilot channel can be maintained by reducing insertion intervals of the common pilot channel accordingly.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a technique to convert a large class of existing honest-verifier zero-knowledge protocols into ones with stronger properties in the common reference string model, such as non-malleability and universal composability.
Abstract: Recently there has been an interest in zero-knowledge protocols with stronger properties, such as concurrency, simulation soundness, non-malleability, and universal composability. In this paper we show a novel technique to convert a large class of existing honest-verifier zero-knowledge protocols into ones with these stronger properties in the common reference string model. More precisely, our technique utilizes a signature scheme existentially unforgeable against adaptive chosen-message attacks, and transforms any Σ-protocol (which is honest-verifier zero-knowledge) into a simulation sound concurrent zero-knowledge protocol. We also introduce Ω-protocols, a variant of Σ-protocols for which our technique further achieves the properties of non-malleability and/or universal composability. In addition to its conceptual simplicity, a main advantage of this new technique over previous ones is that it avoids the Cook-Levin theorem, which tends to be rather inefficient. Indeed, our technique allows for very efficient instantiation based on the security of some efficient signature schemes and standard number-theoretic assumptions. For instance, one instantiation of our technique yields a universally composable zero-knowledge protocol under the Strong RSA assumption, incurring an overhead of a small constant number of exponentiations, plus the generation of two signatures.
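The canonical Σ-protocol the transformation applies to is Schnorr's three-move (commit, challenge, response) proof of knowledge of a discrete logarithm. A toy sketch with deliberately tiny group parameters (far too small for real use):

```python
import secrets

# Toy Schnorr group: p = 2q + 1 with p, q prime; g = 4 generates the
# order-q subgroup. (Illustrative sizes only -- hopelessly insecure.)
P, Q, G = 2039, 1019, 4

def keygen():
    x = secrets.randbelow(Q - 1) + 1    # secret witness
    return x, pow(G, x, P)              # (x, public y = g^x mod p)

def commit():
    r = secrets.randbelow(Q)
    return r, pow(G, r, P)              # (prover state r, commitment a)

def respond(r, x, e):
    return (r + e * x) % Q              # response z = r + e*x mod q

def verify(y, a, e, z):
    # accept iff g^z == a * y^e (mod p)
    return pow(G, z, P) == (a * pow(y, e, P)) % P
```

The paper's transformation wraps such a protocol with a signature scheme to obtain simulation soundness and, via Ω-protocols, non-malleability and universal composability.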

Patent
Ajay Chander1, Dachuan Yu1
08 Nov 2006
TL;DR: In this article, a static analysis of a script program based on a first safety policy is proposed to detect unsafe behavior of the scrip program and prevent execution of the program if a violation of the safety policy would occur when the script program is executed.
Abstract: A method and apparatus is disclosed herein for detecting and preventing unsafe behavior of script programs. In one embodiment, a method comprises performing static analysis of a script program based on a first safety policy to detect unsafe behavior of the script program, and preventing execution of the script program if a violation of the safety policy would occur when the script program is executed.

Patent
23 Jun 2006
TL;DR: In this paper, a method and apparatus for video encoding and decoding using adaptive interpolation is described, in which the decoding method comprises decoding a reference index, decoding a motion vector, selecting a reference frame according to the reference index; selecting a filter, and filtering a set of samples of the reference frame using the filter to obtain the predicted block.
Abstract: A method and apparatus for video encoding and/or decoding using adaptive interpolation is disclosed herein. In one embodiment, the decoding method comprises decoding a reference index; decoding a motion vector; selecting a reference frame according to the reference index; selecting a filter according to the reference index; and filtering a set of samples of the reference frame using the filter to obtain the predicted block, wherein the set of samples of the reference frame is determined by the motion vector.

Journal ArticleDOI
TL;DR: The design, implementation, and evaluation of EScheduler are presented, an energy-efficient soft real-time CPU scheduler for multimedia applications running on a mobile device that delivers soft performance guarantees to these codecs by bounding their deadline miss ratio under the application-specific performance requirements.
Abstract: This article presents the design, implementation, and evaluation of EScheduler, an energy-efficient soft real-time CPU scheduler for multimedia applications running on a mobile device. EScheduler seeks to minimize the total energy consumed by the device while meeting multimedia timing requirements. To achieve this goal, EScheduler integrates dynamic voltage scaling into traditional soft real-time CPU scheduling: it decides at what CPU speed to execute applications in addition to when to execute which applications. EScheduler makes these scheduling decisions based on the probability distribution of the cycle demand of multimedia applications and obtains their demand distribution via online profiling. We have implemented EScheduler in the Linux kernel and evaluated it on a laptop with a variable-speed CPU and typical multimedia codecs. Our experimental results show four findings: first, the cycle demand distribution of the studied codecs is stable or changes slowly. This stability implies the feasibility of performing our proposed energy-efficient scheduling with low overhead. Second, EScheduler delivers soft performance guarantees to these codecs by bounding their deadline miss ratio under the application-specific performance requirements. Third, EScheduler reduces the total energy of the laptop by 14.4% to 37.2% relative to the scheduling algorithm without voltage scaling and by 2% to 10.5% relative to voltage scaling algorithms that do not consider the demand distribution. Finally, EScheduler saves energy by 2% to 5% by explicitly considering the discrete CPU speeds and the corresponding total power of the whole laptop, rather than assuming continuous speeds and a cubic speed-power relationship.
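The statistical core, choosing the slowest speed that meets the deadline with high probability from the profiled demand distribution, can be sketched as follows (my own simplification; EScheduler additionally varies speed within a job and accounts for whole-laptop power at each discrete speed):

```python
def pick_speed(demand_hist, speeds, deadline_s, rho=0.95):
    """Return the slowest CPU speed (cycles/s) that finishes a job
    within the deadline with probability >= rho, using the rho-quantile
    of the profiled cycle-demand histogram {cycles: count}."""
    total = sum(demand_hist.values())
    acc, quantile = 0, max(demand_hist)
    for cycles in sorted(demand_hist):
        acc += demand_hist[cycles]
        if acc / total >= rho:          # rho-quantile of cycle demand
            quantile = cycles
            break
    for s in sorted(speeds):
        if quantile / s <= deadline_s:  # slowest speed meeting the deadline
            return s
    return max(speeds)                  # deadline unreachable: run flat out
```

Running at the slowest adequate speed is what converts the stable demand distribution observed in profiling into energy savings, since power grows superlinearly with speed.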

Journal ArticleDOI
TL;DR: This paper proposes adaptive control of the number of surviving symbol replica candidates, Sm, based on the minimum accumulated branch metric of each stage in maximum-likelihood detection employing QR decomposition and the M-algorithm in orthogonal frequency-division multiplexing with multiple-input multiple-output (MIMO) multiplexing.
Abstract: This paper proposes adaptive control of the number of surviving symbol replica candidates, Sm (m denotes the stage index), based on the minimum accumulated branch metric of each stage in maximum-likelihood detection employing QR decomposition and the M-algorithm (QRM-MLD) in orthogonal frequency-division multiplexing with multiple-input multiple-output (MIMO) multiplexing. In the proposed algorithm, Sm at the m-th stage (1 ≤ m ≤ Nt, where Nt is the number of transmission antenna branches) is independently controlled using a threshold value calculated from the minimum accumulated branch metric at that stage and the estimated noise power. We compared the computational complexity of QRM-MLD employing the proposed algorithm with that of conventional methods at the same average packet error rate, assuming an information bit rate of 1.048 Gb/s in a 100-MHz channel bandwidth (i.e., a frequency efficiency of approximately 10 bit/s/Hz) using 16QAM modulation and turbo coding with a coding rate of 8/9 in 4-by-4 MIMO multiplexing. Computer simulation results show that the average computational complexity of the branch metrics, i.e., squared Euclidean distances, of the proposed adaptive independent Sm control method is decreased to approximately 38% of that of the conventional adaptive common Sm control and to approximately 30% of that of the fixed Sm method (Sm = M = 16), assuming fair conditions such that the maximum number of surviving symbol replicas at each stage is set to M̂ = 16.

Proceedings ArticleDOI
Libo Song1, U. Deshpande1, Ulas C. Kozat2, David Kotz1, Ravi Jain2 
23 Apr 2006
TL;DR: This work evaluates a series of predictors that reflect possible dependencies across time and space while benefiting from either individual or group mobility behaviors, and examines voice applications and the use of handoff prediction for advance bandwidth reservation.
Abstract: Wireless local area networks (WLANs) are emerging as a popular technology for access to the Internet and enterprise networks. In the long term, the success of WLANs depends on services that support mobile network clients. Although other researchers have explored mobility prediction in hypothetical scenarios, evaluating their predictors analytically or with synthetic data, few studies have been able to evaluate their predictors with real user mobility data. As a first step towards filling this fundamental gap, we work with a large data set collected from the Dartmouth College campus-wide wireless network that hosts more than 500 access points and 6,000 users. Extending our earlier work that focuses on predicting the next-visited access point (i.e., location), in this work we explore the predictability of the time of user mobility. Indeed, our contributions are two-fold. First, we evaluate a series of predictors that reflect possible dependencies across time and space while benefiting from either individual or group mobility behaviors. Second, as a case study we examine voice applications and the use of handoff prediction for advance bandwidth reservation. Using application-specific performance metrics such as call drop and call block rates, we provide a picture of the potential gains of prediction. Our results indicate that it is difficult to predict handoff time accurately, when applied to real campus WLAN data. However, the findings of our case study also suggest that application performance can be improved significantly even with predictors that are only moderately accurate. The gains depend on the applications’ ability to use predictions and tolerate inaccurate predictions. In the case study, we combine the real mobility data with synthesized traffic data. 
The results show that intelligent prediction can lead to significant reductions in the rate at which active calls are dropped due to handoffs with marginal increments in the rate at which new calls are blocked.
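The simplest member of the location-predictor families evaluated in such trace studies is an order-1 Markov predictor over AP transitions (a generic sketch, not the paper's exact predictor set, which also includes order-k and time-of-handoff predictors):

```python
from collections import defaultdict, Counter

class NextAPPredictor:
    """Order-1 Markov predictor: count how often each AP-to-AP handoff
    occurred and predict the most frequent successor."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def update(self, prev_ap, next_ap):
        self.counts[prev_ap][next_ap] += 1

    def predict(self, current_ap):
        c = self.counts.get(current_ap)
        return c.most_common(1)[0][0] if c else None

def train(predictor, trace):
    # feed a chronological sequence of visited APs
    for prev_ap, next_ap in zip(trace, trace[1:]):
        predictor.update(prev_ap, next_ap)
```

Even a predictor this simple can drive advance bandwidth reservation: reserve at the predicted next AP, and fall back to normal admission when the prediction misses, which matches the paper's finding that moderately accurate predictors already help.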

Proceedings ArticleDOI
12 Sep 2006
TL;DR: A fuss-free gait analyzer based on a single three-axis accelerometer mounted on a cell phone is proposed for health care and presence services; it can identify gaits such as walking, running, going up/down stairs, and walking fast with an accuracy of about 80%.
Abstract: We propose a fuss-free gait analyzer based on a single three-axis accelerometer mounted on a cell phone for health care and presence services. Users do not need to wear sensors on any part of their bodies; all they need to do is carry the cell phone. Our algorithm has two main functions: one is to extract feature vectors by analyzing sensor data in detail using wavelet packet decomposition; the other is to flexibly cluster personal gaits by combining a self-organizing algorithm with Bayesian theory. Not only does the three-axis accelerometer enable low-cost personal devices, but we can also track aging or situation changes through on-line learning. A prototype that implements the algorithm was constructed. Experiments on the prototype show that the algorithm can identify gaits such as walking, running, going up/down stairs, and walking fast with an accuracy of about 80%.

Journal ArticleDOI
TL;DR: A driver identification method based on the driving behavior signals that are observed while the driver is following another vehicle is proposed, and the driver's operation signals were found to be better than road environment signals and car behavior signals.
Abstract: In this paper, we propose a driver identification method that is based on the driving behavior signals that are observed while the driver is following another vehicle. Driving behavior signals, such as the use of the accelerator pedal, brake pedal, vehicle velocity, and distance from the vehicle in front, were measured using a driving simulator. We compared the identification rates obtained using different identification models. As a result, we found the Gaussian Mixture Model to be superior to the Helly model and the optimal velocity model. Also, the driver's operation signals were found to be better than road environment signals and car behavior signals for the Gaussian Mixture Model. The identification rate for thirty drivers using actual vehicle driving in a city area was 73%.

Journal ArticleDOI
Gerhard Bauch, J.S. Malik
TL;DR: This work addresses the problem of choosing the cyclic delays and proposes a new robust design rule that makes it possible to exploit the full spatial and frequency diversity inherent in a frequency-selective MIMO channel.
Abstract: We consider cyclic delay diversity in OFDMA. Cyclic delay diversity is an elegant way to obtain spatial diversity in an FEC-coded OFDM system without exceeding the guard interval. We first address the problem of choosing the cyclic delays and propose a new robust design rule that makes it possible to exploit the full spatial and frequency diversity inherent in a frequency-selective MIMO channel. Our choice of cyclic delays has consequences for the interleaving and multiple access scheme, since the spatial diversity is effectively transformed into frequency diversity between neighbouring subcarriers. Therefore, a system with a conventional block frequency interleaver will fail to exploit the spatial diversity. We propose an interleaving and multiple access strategy which guarantees that all users obtain the maximum possible diversity advantage using FEC codes with a limited constraint length. Furthermore, we provide a performance comparison to transmit diversity from orthogonal designs.
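The mechanism by which cyclic delay diversity stays within the guard interval rests on a DFT identity: a cyclic shift of the time-domain OFDM symbol is exactly a per-subcarrier phase ramp, so each extra antenna only reshapes the effective frequency-domain channel. The sketch below verifies the identity numerically; the symbol size and delay are arbitrary illustrative values.

```python
import numpy as np

# In CDD, antenna m transmits a cyclically shifted copy of the IFFT output.
# The receiver then sees one effective channel
#   H_eff[k] = sum_m H_m[k] * exp(-j*2*pi*k*delta_m / N),
# i.e. spatial diversity reappears as frequency selectivity across k.
N = 64                                              # subcarriers (illustrative)
rng = np.random.default_rng(1)
x = rng.standard_normal(N) + 1j*rng.standard_normal(N)   # time-domain symbol

delta = 5                                           # cyclic delay in samples
X = np.fft.fft(x)
X_shifted = np.fft.fft(np.roll(x, delta))
k = np.arange(N)
phase_ramp = np.exp(-2j * np.pi * k * delta / N)

# Cyclic shift in time == per-subcarrier phase rotation in frequency
print(np.allclose(X_shifted, X * phase_ramp))       # True
```

This is also why the abstract notes that a conventional block frequency interleaver can fail: the diversity gain now lives in rapid channel variation between neighbouring subcarriers.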

Journal ArticleDOI
TL;DR: Numerical results show that with the MCS approach, OFDM sidelobes can be reduced significantly while requiring only a small amount of signalling information to be sent from transmitter to receiver.
Abstract: In this paper, we consider the problem of out-of-band radiation in orthogonal frequency-division multiplexing (OFDM) systems caused by high sidelobes of the OFDM transmission signal. Suppressing these sidelobes enables higher spectral efficiency and/or co-existence with legacy systems in the case of OFDM spectrum sharing systems. To reduce sidelobes, we propose a method termed multiple-choice sequences (MCS). It is based on the idea that transforming the original transmit sequence into a set of sequences, and choosing the sequence in the set with the lowest sidelobe power, makes it possible to reduce the out-of-band radiation. We describe the general principle of MCS and from it derive and compare several practical MCS algorithms. In addition, we briefly consider combining the MCS method with existing sidelobe suppression methods. Numerical results show that with the MCS approach, OFDM sidelobes can be reduced significantly while requiring only a small amount of signalling information to be sent from transmitter to receiver. For example, in an OFDM overlay scenario, sidelobe power is reduced by around 10 dB with a signalling overhead of only 14%. Copyright © 2006 AEIT.
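The selection principle can be sketched for a single OFDM symbol: generate a candidate set, measure each candidate's out-of-band power on an oversampled spectrum, and transmit the best one. Random per-subcarrier phase rotations are used here as an assumed, simplified candidate-generation rule; the paper derives several concrete MCS algorithms, and all sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, N_USED, OS = 64, 16, 8            # FFT size, allocated subcarriers, oversampling
data = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N_USED)   # QPSK payload

def sidelobe_power(symbols):
    """Mean power of the oversampled spectrum outside the allocated band."""
    X = np.zeros(N, complex)
    X[np.arange(-N_USED//2, N_USED//2) % N] = symbols   # carriers around DC
    s = np.fft.ifft(X)                       # rectangular-windowed OFDM symbol
    spec = np.fft.fftshift(np.fft.fft(s, N * OS))
    c, hw = N*OS//2, (N_USED//2)*OS          # centre and half-width of in-band bins
    return np.mean(np.abs(np.r_[spec[:c-hw], spec[c+hw:]])**2)

# Candidate set: original sequence plus 15 phase-rotated variants
candidates = [data] + [data * np.exp(2j*np.pi*rng.random(N_USED))
                       for _ in range(15)]
powers = [sidelobe_power(c) for c in candidates]
best = int(np.argmin(powers))                # only this index must be signalled
print(powers[best] <= powers[0])             # never worse than the original
```

Since only the index of the chosen candidate must reach the receiver, the signalling overhead grows with log2 of the set size, matching the small-overhead claim above.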

Patent
01 Mar 2006
TL;DR: In this article, the authors describe a device authentication apparatus, including a device identification information acquisition unit, a connection protection unit, and an identifier generation unit, which combines all or some of the device-specific identification information.
Abstract: A device authentication apparatus (30), including: a device identification information acquisition unit (31) configured to acquire identification information specific to a device; a connection protection unit (34) configured to protect a connection with the device; and an identifier generation unit (33) configured to combine all or some of the device-specific identification information, a device identification information type representing a type of the device-specific identification information, and a protection method type representing a type of a protection method used by the connection protection unit (34) to generate an identifier for a pair of the connected device and a connection environment.
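The identifier generation claimed above combines three inputs: the device-specific identification information, its type, and the protection method type. One way to realize that combination is a hash over the concatenated fields, sketched below; hashing, the field separator, and the example values are all assumptions, since the patent only requires that the inputs be combined.

```python
import hashlib

def generate_identifier(device_id: bytes, id_type: str,
                        protection_type: str) -> str:
    """Combine device-specific ID, its type tag (e.g. a MAC address type),
    and the connection protection method type into one identifier for the
    pair (connected device, connection environment)."""
    material = b"|".join([device_id, id_type.encode(), protection_type.encode()])
    return hashlib.sha256(material).hexdigest()

# Hypothetical example values for illustration only
ident = generate_identifier(b"00:1A:2B:3C:4D:5E", "MAC_ADDRESS", "TLS")
print(len(ident))   # 64 hex characters
```

Note that the same device reached over a different protection method yields a different identifier, which matches the claim that the identifier covers the device together with its connection environment.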

Proceedings ArticleDOI
01 Aug 2006
TL;DR: In this article, the potential deployment of ultra-wideband (UWB) radio technology for next generation wireless communications is discussed and the current status of worldwide regulatory efforts and industrial standardization activities is discussed.
Abstract: This paper discusses the potential deployment of ultra-wideband (UWB) radio technology for next-generation wireless communications. First, the state of the art in UWB technology is reviewed. Then, the current status of worldwide regulatory efforts and industrial standardization activities is discussed. Various technical challenges that remain to be solved prior to the successful deployment of UWB systems, as well as possible technical approaches, are also reported. Specifically, we envision the potential of location-awareness capabilities to enable new applications and usage models for future mobile terminals. An overview of existing ranging and localization techniques is presented, and some technical aspects as well as design trade-offs in terms of device complexity and ranging accuracy are highlighted. Finally, since UWB systems operate as overlay systems, issues of coexistence and interference with existing narrowband systems are presented.

Journal ArticleDOI
TL;DR: GRACE-1 is a cross-layer adaptation framework that coordinates the adaptation of the CPU hardware, OS scheduling, and multimedia quality based on users' preferences, balancing the benefits and overhead of cross-layer adaptation.
Abstract: Mobile devices that primarily process multimedia data need to sustain multimedia quality with limited battery energy. To address this challenging problem, researchers have introduced adaptation into multiple system layers, ranging from hardware to applications. Given these adaptive layers, a new challenge is how to coordinate them to fully exploit the adaptation benefits. This paper presents a novel cross-layer adaptation framework, called GRACE-1, that coordinates the adaptation of the CPU hardware, OS scheduling, and multimedia quality based on users' preferences. To balance the benefits and overhead of cross-layer adaptation, GRACE-1 takes a hierarchical approach: it globally adapts all three layers to large system changes, such as application entry or exit, and internally adapts individual layers to small changes in the processed multimedia data. We have implemented GRACE-1 on an HP laptop with an adaptive Athlon CPU, a Linux-based OS, and video codecs. Our experimental results show that, compared to schemes that adapt only some layers or adapt only to large changes, GRACE-1 reduces the laptop's energy consumption by up to 31.4 percent while providing better or the same video quality.
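The hierarchical split described above — heavyweight global coordination on large events, lightweight per-layer adjustment in between — can be sketched as follows. The frequency table, demand model, and smoothing policy are invented illustrations of the structure, not GRACE-1's actual algorithms.

```python
# Two-level adaptation sketch: global coordination across layers on app
# entry/exit, internal per-layer tracking per frame. All numbers are
# illustrative assumptions.
CPU_FREQS = [600, 1200, 1800]          # MHz settings the hardware layer offers

class CrossLayerCoordinator:
    def __init__(self):
        self.apps = []
        self.freq = CPU_FREQS[-1]

    def global_adapt(self):
        """Large change (app entry/exit): re-coordinate all layers, here by
        picking the slowest CPU frequency that still covers total demand."""
        demand = sum(a["cycles_per_sec"] for a in self.apps)
        self.freq = next((f for f in CPU_FREQS if f * 1e6 >= demand),
                         CPU_FREQS[-1])

    def app_enter(self, app):
        self.apps.append(app)
        self.global_adapt()

    def internal_adapt(self, app, frame_cycles):
        """Small change: one layer tracks the demand of the data actually
        being processed (~30 fps assumed) without re-coordinating others."""
        app["cycles_per_sec"] = 0.9*app["cycles_per_sec"] + 0.1*frame_cycles*30

c = CrossLayerCoordinator()
c.app_enter({"name": "video", "cycles_per_sec": 5e8})
print(c.freq)   # 600
```

The point of the hierarchy is visible in the costs: `global_adapt` touches every layer, while `internal_adapt` touches only one application's state.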

Patent
14 Apr 2006
TL;DR: An apparatus for controlling the operation of a plurality of communication layers in a layered communication system comprises means for providing a property of the communication channel, a storage element for storing pluralities of parameter sets defining different operation modes of a first and a second communication layer, and a selector for selecting one set of parameters from each plurality in dependence on the channel property and an optimization goal.
Abstract: An apparatus for controlling an operation of a plurality of communication layers in a layered communication system comprises means for providing a property of the communication channel, a storage element for storing a first plurality of sets of parameters defining different operation modes of a first communication layer of the plurality of communication layers and for providing a second plurality of sets of parameters defining different operation modes of a second communication layer of the plurality of communication layers, a selector for selecting a first set of parameters from the first plurality of sets of parameters and for selecting a second set of parameters from the second plurality of sets of parameters in dependence on the channel property and an optimization goal and means for providing the first set of parameters to the first communication layer and the second set of parameters to the second communication layer. Therefore, an efficient exploitation of communication resources can be achieved.
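The claimed structure — stored per-layer mode tables plus a selector driven by a channel property and an optimization goal — can be sketched as below. The mode tables, the SNR threshold, and the goal names are invented examples standing in for the patent's unspecified parameter sets.

```python
# Per-layer operation modes stored by the "storage element" (invented values)
PHY_MODES  = {"robust": {"modulation": "QPSK",  "code_rate": 0.5},
              "fast":   {"modulation": "64QAM", "code_rate": 0.75}}
LINK_MODES = {"robust": {"max_retx": 6},
              "fast":   {"max_retx": 2}}

def select_modes(snr_db: float, goal: str):
    """Selector: pick one parameter set per layer from the channel property
    (SNR here) and the optimization goal, keeping the layers consistent."""
    if goal == "throughput" and snr_db >= 20.0:
        key = "fast"
    else:                      # low SNR or a reliability goal: be conservative
        key = "robust"
    return PHY_MODES[key], LINK_MODES[key]

phy, link = select_modes(snr_db=25.0, goal="throughput")
print(phy["modulation"], link["max_retx"])   # 64QAM 2
```

Selecting both layers' parameter sets from one decision is what distinguishes this from independent per-layer adaptation: the physical and link layers cannot end up in mismatched modes.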