
Showing papers by "Alcatel-Lucent published in 2006"


Proceedings ArticleDOI
30 Oct 2006
TL;DR: In this paper, the authors present two searchable symmetric encryption (SSE) constructions that are more efficient and achieve stronger security than previous constant-round schemes, introduce the notion of adaptive SSE security, in which queries to the server can be chosen adaptively during the execution of the search, and extend SSE to the multi-user setting.
Abstract: Searchable symmetric encryption (SSE) allows a party to outsource the storage of its data to another party (a server) in a private manner, while maintaining the ability to selectively search over it. This problem has been the focus of active research in recent years. In this paper we show two solutions to SSE that simultaneously enjoy the following properties: Both solutions are more efficient than all previous constant-round schemes. In particular, the work performed by the server per returned document is constant as opposed to linear in the size of the data. Both solutions enjoy stronger security guarantees than previous constant-round schemes. In fact, we point out subtle but serious problems with previous notions of security for SSE, and show how to design constructions which avoid these pitfalls. Further, our second solution also achieves what we call adaptive SSE security, where queries to the server can be chosen adaptively (by the adversary) during the execution of the search; this notion is both important in practice and has not been previously considered. Surprisingly, despite being more secure and more efficient, our SSE schemes are remarkably simple. We consider the simplicity of both solutions as an important step towards the deployment of SSE technologies. As an additional contribution, we also consider multi-user SSE. All prior work on SSE studied the setting where only the owner of the data is capable of submitting search queries. We consider the natural extension where an arbitrary group of parties other than the owner can submit search queries. We formally define SSE in the multi-user setting, and present an efficient construction that achieves better performance than simply using access control mechanisms.
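To make the constant-work-per-result property concrete, here is a toy encrypted-index sketch (illustrative only, not the paper's construction; the HMAC-based keyword tokens and the plain dictionary index are assumptions): the server answers a query with one dictionary lookup per keyword, independent of the total data size.

```python
import hmac, hashlib, secrets

def keyword_token(key: bytes, word: str) -> bytes:
    # Deterministic search token: lets the server locate matches
    # without learning the keyword itself.
    return hmac.new(key, word.encode(), hashlib.sha256).digest()

def build_index(key: bytes, docs: dict[int, set[str]]) -> dict[bytes, set[int]]:
    # Client-side: map each keyword token to the set of matching doc ids.
    index: dict[bytes, set[int]] = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(keyword_token(key, w), set()).add(doc_id)
    return index

def search(index: dict[bytes, set[int]], key: bytes, word: str) -> set[int]:
    # Server-side: a single lookup, so the work per returned document
    # is constant rather than linear in the size of the data.
    return index.get(keyword_token(key, word), set())

key = secrets.token_bytes(32)
docs = {1: {"alpha", "beta"}, 2: {"beta"}, 3: {"gamma"}}
idx = build_index(key, docs)
print(search(idx, key, "beta"))  # {1, 2}
```

Note that a deterministic token leaks which queries repeat; the paper's security definitions are precisely about pinning down and minimizing such leakage.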

1,673 citations


Journal ArticleDOI
TL;DR: The optimal detector in the Neyman–Pearson sense is developed and analyzed for the statistical MIMO radar and it is shown that the optimal detector consists of noncoherent processing of the receiver sensors' outputs and that for cases of practical interest, detection performance is superior to that obtained through coherent processing.
Abstract: Inspired by recent advances in multiple-input multiple-output (MIMO) communications, this paper introduces the statistical MIMO radar concept. To the authors' knowledge, this is the first time that statistical MIMO is being proposed for radar. The fundamental difference between statistical MIMO and other radar array systems is that the latter seek to maximize the coherent processing gain, while statistical MIMO radar capitalizes on the diversity of target scattering to improve radar performance. Coherent processing is made possible by highly correlated signals at the receiver array, whereas in statistical MIMO radar, the signals received by the array elements are uncorrelated. Radar targets generally consist of many small elemental scatterers that are fused by the radar waveform and the processing at the receiver to result in echoes with fluctuating amplitude and phase. It is well known that in conventional radar, slow fluctuations of the target radar cross section (RCS) result in target fades that degrade radar performance. By spacing the antenna elements at the transmitter and at the receiver such that the target angular spread is manifested, the MIMO radar can exploit the spatial diversity of target scatterers, opening the way to a variety of new techniques that can improve radar performance. This paper focuses on the application of the target spatial diversity to improve detection performance. The optimal detector in the Neyman–Pearson sense is developed and analyzed for the statistical MIMO radar. It is shown that the optimal detector consists of noncoherent processing of the receiver sensors' outputs and that for cases of practical interest, detection performance is superior to that obtained through coherent processing. An optimal detector invariant to the signal and noise levels is also developed and analyzed. In this case as well, statistical MIMO radar provides great improvements over other types of array radars.
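The noncoherent detector described above can be sketched with a small Monte Carlo experiment (illustrative only; the sensor count, SNR, and independent complex-Gaussian scattering model are assumptions, not the paper's exact setup): per-sensor energies are summed and compared to a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4           # widely spaced receive sensors (assumed)
snr = 4.0       # average per-sensor SNR, linear scale (assumed)
trials = 20000

def cn(shape, var):
    # Circularly symmetric complex Gaussian samples with the given variance.
    return np.sqrt(var / 2) * (rng.normal(size=shape) + 1j * rng.normal(size=shape))

# Uncorrelated target returns at each sensor model the diversity of
# target scattering that statistical MIMO radar exploits.
target = cn((trials, M), snr)
noise_h1 = cn((trials, M), 1.0)
noise_h0 = cn((trials, M), 1.0)

# Noncoherent detector: sum of per-sensor energies, compared to a threshold.
stat_h1 = np.sum(np.abs(target + noise_h1) ** 2, axis=1)
stat_h0 = np.sum(np.abs(noise_h0) ** 2, axis=1)

thr = np.quantile(stat_h0, 0.99)   # threshold for ~1% empirical false alarms
pd = np.mean(stat_h1 > thr)
print(f"empirical detection probability at 1% false-alarm rate: {pd:.2f}")
```

Because each sensor sees an independent fade, summing energies averages out deep fades of any single sensor, which is the diversity gain the paper quantifies.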

1,413 citations


Journal ArticleDOI
TL;DR: This paper relates the general Volterra representation to the classical Wiener, Hammerstein, Wiener-Hammerstein, and parallel Wiener structures, and describes some state-of-the-art predistortion models based on memory polynomials, and proposes a new generalizedMemory polynomial that achieves the best performance to date.
Abstract: Conventional radio-frequency (RF) power amplifiers operating with wideband signals, such as wideband code-division multiple access (WCDMA) in the Universal Mobile Telecommunications System (UMTS), must be backed off considerably from their peak power level in order to control out-of-band spurious emissions, also known as "spectral regrowth." Adapting these amplifiers to wideband operation therefore entails larger size and higher cost than would otherwise be required for the same power output. An alternative solution, which is gaining widespread popularity, is to employ digital baseband predistortion ahead of the amplifier to compensate for the nonlinearity effects, hence allowing it to run closer to its maximum output power while maintaining low spectral regrowth. Recent improvements to the technique have included memory effects in the predistortion model, which are essential as the bandwidth increases. In this paper, we relate the general Volterra representation to the classical Wiener, Hammerstein, Wiener-Hammerstein, and parallel Wiener structures, and go on to describe some state-of-the-art predistortion models based on memory polynomials. We then propose a new generalized memory polynomial that achieves the best performance to date, as demonstrated herein with experimental results obtained from a testbed using an actual 30-W, 2-GHz power amplifier.
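A minimal sketch of memory-polynomial predistortion via indirect learning, under toy assumptions (a memoryless cubic amplifier model, a plain memory polynomial, and least-squares fitting; the paper's generalized memory polynomial adds cross-terms with lagged envelopes, omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)

def pa_model(x):
    # Toy power-amplifier model (assumed): mild cubic compression.
    return x - 0.1 * x * np.abs(x) ** 2

def mp_basis(x, K=3, Q=2):
    # Memory-polynomial regressors x(n-q) * |x(n-q)|^k
    # for k = 0..K-1 and delays q = 0..Q-1.
    cols = []
    for q in range(Q):
        xq = np.roll(x, q)
        for k in range(K):
            cols.append(xq * np.abs(xq) ** k)
    return np.stack(cols, axis=1)

# Indirect learning: fit a post-inverse of the PA by least squares,
# then use the fitted polynomial as the predistorter.
x = (rng.normal(size=4000) + 1j * rng.normal(size=4000)) * 0.4
y = pa_model(x)
coef, *_ = np.linalg.lstsq(mp_basis(y), x, rcond=None)

x_pd = mp_basis(x) @ coef                          # predistorted input
lin_err = np.mean(np.abs(pa_model(x_pd) - x) ** 2)  # error with predistortion
raw_err = np.mean(np.abs(y - x) ** 2)               # error without it
print(lin_err < raw_err)
```

The cascade of predistorter and amplifier tracks the desired linear response far more closely than the bare amplifier, which is the effect the paper measures on a real 30-W, 2-GHz device.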

1,305 citations


Proceedings ArticleDOI
23 Apr 2006
TL;DR: This paper presents an interference-aware channel assignment algorithm and protocol for multi-radio wireless mesh networks that addresses this interference problem and demonstrates its practicality through the evaluation of a prototype implementation in an IEEE 802.11 testbed.
Abstract: The capacity problem in wireless mesh networks can be alleviated by equipping the mesh routers with multiple radios tuned to non-overlapping channels. However, channel assignment presents a challenge because co-located wireless networks are likely to be tuned to the same channels. The resulting increase in interference can adversely affect performance. This paper presents an interference-aware channel assignment algorithm and protocol for multi-radio wireless mesh networks that addresses this interference problem. The proposed solution intelligently assigns channels to radios to minimize interference within the mesh network and between the mesh network and co-located wireless networks. It utilizes a novel interference estimation technique implemented at each mesh router. An extension to the conflict graph model, the multi-radio conflict graph, is used to model the interference between the routers. We demonstrate our solution's practicality through the evaluation of a prototype implementation in an IEEE 802.11 testbed. We also report on an extensive evaluation via simulations. In a sample multi-radio scenario, our solution yields performance gains in excess of 40% compared to a static assignment of channels.

861 citations


Journal ArticleDOI
TL;DR: A solution is developed that optimizes the overall network throughput subject to fairness constraints on allocation of scarce wireless capacity among mobile clients, and the performance of the algorithms is within a constant factor of that of any optimal algorithm for the joint channel assignment and routing problem.
Abstract: Multihop infrastructure wireless mesh networks offer increased reliability, coverage, and reduced equipment costs over their single-hop counterpart, wireless local area networks. Equipping wireless routers with multiple radios further improves the capacity by transmitting over multiple radios simultaneously using orthogonal channels. Efficient channel assignment and routing is essential for throughput optimization of mesh clients. Efficient channel assignment schemes can greatly relieve the interference effect of close-by transmissions; effective routing schemes can alleviate potential congestion on any gateways to the Internet, thereby improving per-client throughput. Unlike previous heuristic approaches, we mathematically formulate the joint channel assignment and routing problem, taking into account the interference constraints, the number of channels in the network, and the number of radios available at each mesh router. We then use this formulation to develop a solution for our problem that optimizes the overall network throughput subject to fairness constraints on allocation of scarce wireless capacity among mobile clients. We show that the performance of our algorithms is within a constant factor of that of any optimal algorithm for the joint channel assignment and routing problem. Our evaluation demonstrates that our algorithm can effectively exploit the increased number of channels and radios, and it performs much better than the theoretical worst-case bounds.

679 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the CDW-to-superconductivity transition in a layered dichalcogenide and showed that on controlled intercalation of TiSe2 with Cu, a new superconducting state emerges near x=0.04, with a maximum transition temperature Tc of 4.15 K at x=0.08.
Abstract: Charge density waves (CDWs) are periodic modulations of the density of conduction electrons in solids. They are collective states that arise from intrinsic instabilities often present in low-dimensional electronic systems. The most well-studied examples are the layered dichalcogenides–an example of which is TiSe2, one of the first CDW-bearing materials to be discovered. At low temperatures, a widely held belief is that the CDW competes with another collective electronic state, superconductivity. But despite much exploration, a detailed study of this competition is lacking. Here we report how, on controlled intercalation of TiSe2 with Cu to yield CuxTiSe2, the CDW transition can be continuously suppressed, and a new superconducting state emerges near x=0.04, with a maximum transition temperature Tc of 4.15 K at x=0.08. CuxTiSe2 thus provides the first opportunity to study the CDW to superconductivity transition in detail through an easily controllable chemical parameter, and will provide fundamental insight into the behaviour of correlated electron systems.

678 citations


Journal ArticleDOI
TL;DR: The quantum Hall (QH) effect in two-dimensional electrons and holes in high quality graphene samples is studied in strong magnetic fields up to 45 T and can be attributed to lifting of the spin degeneracy of the n = 1 Landau level.
Abstract: The quantum Hall (QH) effect in two-dimensional electrons and holes in high quality graphene samples is studied in strong magnetic fields up to 45 T. QH plateaus at filling factors ν = 0, ±1, ±4 are discovered at magnetic fields B > 20 T, indicating the lifting of the fourfold degeneracy of the previously observed QH states at ν = ±4(|n| + 1/2), where n is the Landau-level index. In particular, the presence of the ν = 0, ±1 QH plateaus indicates that the Landau level at the charge-neutral Dirac point splits into four sublevels, lifting sublattice and spin degeneracy. The QH effect at ν = ±4 is investigated in a tilted magnetic field and can be attributed to lifting of the spin degeneracy of the n = 1 Landau level.

636 citations


Proceedings ArticleDOI
01 Jan 2006
TL;DR: A new interference-aware routing metric - iAWARE - is presented that aids in finding paths with reduced inter-flow and intra-flow interference and that delivers increased throughput in single-radio and two-radio mesh networks compared to a similar protocol with the WCETT and MIC routing metrics.
Abstract: We address the problem of interference-aware routing in multi-radio infrastructure mesh networks wherein each mesh node is equipped with multiple radio interfaces and a subset of nodes serve as Internet gateways. We present a new interference-aware routing metric - iAWARE - that aids in finding paths that are better in terms of reduced inter-flow and intra-flow interference. We incorporate this metric and new support for multi-radio networks in the well-known AODV routing protocol to design an enhanced AODV-MR routing protocol. We study the performance of our new routing metric by implementing it in our wireless testbed consisting of 12 mesh nodes. We show that iAWARE tracks changes in interfering traffic far better than existing well-known link metrics such as ETT and IRU. We also demonstrate that our AODV-MR protocol delivers increased throughput in single-radio and two-radio mesh networks compared to a similar protocol with the WCETT and MIC routing metrics. We also show that in the case of two-radio mesh networks, our metric achieves good intra-path channel diversity.

569 citations


Journal ArticleDOI
TL;DR: This paper studies the quantitative performance behavior of the Wiener filter in the context of noise reduction and shows that in the single-channel case the a posteriori signal-to-noise ratio (SNR) is greater than or equal to the a priori SNR (defined before the Wiener filter), indicating that the Wiener filter is always able to achieve noise reduction.
Abstract: The problem of noise reduction has attracted a considerable amount of research attention over the past several decades. Among the numerous techniques that were developed, the optimal Wiener filter can be considered as one of the most fundamental noise reduction approaches, which has been delineated in different forms and adopted in various applications. Although it is not a secret that the Wiener filter may cause some detrimental effects to the speech signal (appreciable or even significant degradation in quality or intelligibility), few efforts have been reported to show the inherent relationship between noise reduction and speech distortion. By defining a speech-distortion index to measure the degree to which the speech signal is deformed and two noise-reduction factors to quantify the amount of noise being attenuated, this paper studies the quantitative performance behavior of the Wiener filter in the context of noise reduction. We show that in the single-channel case the a posteriori signal-to-noise ratio (SNR) (defined after the Wiener filter) is greater than or equal to the a priori SNR (defined before the Wiener filter), indicating that the Wiener filter is always able to achieve noise reduction. However, the amount of noise reduction is in general proportional to the amount of speech degradation. This may seem discouraging as we always expect an algorithm to have maximal noise reduction without much speech distortion. Fortunately, we show that speech distortion can be better managed in three different ways. If we have some a priori knowledge (such as the linear prediction coefficients) of the clean speech signal, this a priori knowledge can be exploited to achieve noise reduction while maintaining a low level of speech distortion. When no a priori knowledge is available, we can still achieve a better control of noise reduction and speech distortion by properly manipulating the Wiener filter, resulting in a suboptimal Wiener filter. In the case of multiple microphone sensors, the multiple observations of the speech signal can be used to reduce noise with less or even no speech distortion.
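The central SNR claim can be checked numerically with the per-frequency Wiener gain H = Ps/(Ps + Pn); the power spectra below are synthetic stand-ins for speech and noise, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-frequency speech and noise power spectra (assumed).
Ps = rng.uniform(0.1, 4.0, size=256)   # clean-speech PSD
Pn = rng.uniform(0.5, 1.5, size=256)   # noise PSD

H = Ps / (Ps + Pn)   # Wiener gain in each frequency bin

# Fullband SNRs before (a priori) and after (a posteriori) the filter.
snr_prior = Ps.sum() / Pn.sum()
snr_post = (H**2 * Ps).sum() / (H**2 * Pn).sum()

print(snr_post >= snr_prior)  # the Wiener filter never lowers the SNR
```

Intuitively, H² weights high-SNR bins more heavily than low-SNR bins, so the filtered fullband SNR can only move upward, at the price of attenuating (distorting) the speech in noisy bins.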

563 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe the CryoSat satellite mission, due for launch in 2005, whose aim is to accurately determine the trends in Earth's continental and marine ice fields.

539 citations


Proceedings ArticleDOI
03 Dec 2006
TL;DR: In this article, the authors proposed regular expression rewrite techniques that can effectively reduce memory usage and developed a grouping scheme that can strategically compile a set of regular expressions into several engines, resulting in remarkable improvement of regular expression matching speed without much increase in memory usage.
Abstract: Packet content scanning at high speed has become extremely important due to its applications in network security, network monitoring, HTTP load balancing, etc. In content scanning, the packet payload is compared against a set of patterns specified as regular expressions. In this paper, we first show that memory requirements using traditional methods are prohibitively high for many patterns used in packet scanning applications. We then propose regular expression rewrite techniques that can effectively reduce memory usage. Further, we develop a grouping scheme that can strategically compile a set of regular expressions into several engines, resulting in remarkable improvement of regular expression matching speed without much increase in memory usage. We implement a new DFA-based packet scanner using the above techniques. Our experimental results using real-world traffic and patterns show that our implementation achieves a factor of 12 to 42 performance improvement over a commonly used DFA-based scanner. Compared to the state-of-the-art NFA-based implementation, our DFA-based packet scanner achieves 50 to 700 times speedup.
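A toy version of the grouping idea (illustrative, not the paper's algorithm; Python's `re` engine is backtracking-based rather than a true DFA, and the signatures and size budget below are made up): patterns are packed into several compiled engines under a per-engine budget, and a payload is scanned against each engine in turn.

```python
import re

def group_patterns(patterns, budget=40):
    # Greedy packing: start a new engine once the combined pattern
    # length would exceed the budget, bounding each compiled engine.
    groups, current, size = [], [], 0
    for p in patterns:
        if current and size + len(p) > budget:
            groups.append(current)
            current, size = [], 0
        current.append(p)
        size += len(p)
    if current:
        groups.append(current)
    return [re.compile("|".join(f"(?:{p})" for p in g)) for g in groups]

def scan(engines, payload):
    # A payload matches if any engine reports a hit.
    return any(e.search(payload) for e in engines)

sigs = [r"GET /admin", r"\.\./\.\.", r"cmd\.exe", r"SELECT .* FROM"]
engines = group_patterns(sigs)
print(scan(engines, "GET /admin HTTP/1.1"))  # True
print(scan(engines, "GET /index.html"))      # False
```

In a real DFA-based scanner the budget would be measured in automaton states rather than pattern characters; splitting patterns across engines is what prevents the multiplicative state blow-up of combining interacting patterns into one DFA.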

Proceedings ArticleDOI
29 Sep 2006
TL;DR: The time needed to estimate the number of tags in the system for a given accuracy is much smaller than that of schemes presented in related work, and it is shown that one can estimate the cardinality of tag sets of any size in near-constant time.
Abstract: RFID tags are being used in many diverse applications in increasingly large numbers. The capabilities of these tags span from very dumb passive tags to smart active tags, with the cost of these tags correspondingly ranging from a few pennies to many dollars. One of the common problems that arise in any RFID deployment is the problem of quick estimation of the number of tags in the field up to a desired level of accuracy. Prior work in this area has focused on the identification of tags, which needs more time and is unsuitable for many situations, especially where the tag set is dense. We take a different, more practical approach, and provide very fast and reliable estimation mechanisms. In particular, we analyze our estimation schemes and show that the time needed to estimate the number of tags in the system for a given accuracy is much smaller than that of schemes presented in related work. We show that one can estimate the cardinality of tag-sets of any size in near-constant time, for a given accuracy of estimation.
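A common estimator of this flavor (an assumption here, not necessarily the authors' scheme) infers the tag count from the number of empty slots in a framed-slotted-ALOHA round, without identifying any tag:

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate_tags(n_true, frame_size, rounds=50):
    # Each tag picks a random slot; the reader only counts empty slots.
    # With n tags and L slots, P(slot empty) = (1 - 1/L)^n ~ exp(-n/L),
    # so n can be estimated as L * ln(L / empty_count).
    estimates = []
    for _ in range(rounds):
        slots = rng.integers(0, frame_size, size=n_true)
        empty = frame_size - len(np.unique(slots))
        if empty > 0:
            estimates.append(frame_size * np.log(frame_size / empty))
    return float(np.mean(estimates))

n_hat = estimate_tags(n_true=500, frame_size=512)
print(round(n_hat))  # close to 500
```

Because only slot occupancy is observed, the probe time depends on the frame size and the target accuracy rather than on reading out every tag, which is the source of the near-constant-time behavior claimed above.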

Proceedings ArticleDOI
23 Jul 2006
TL;DR: k-resilient Nash equilibria, joint strategies where no member of a coalition C of size up to k can do better even if the whole coalition defects, exist for secret sharing and multiparty computation, provided that players prefer getting the information to not getting it.
Abstract: We study k-resilient Nash equilibria, joint strategies where no member of a coalition C of size up to k can do better, even if the whole coalition defects. We show that such k-resilient Nash equilibria exist for secret sharing and multiparty computation, provided that players prefer getting the information to not getting it. Our results hold even if there are only 2 players, so we can do multiparty computation with only two rational agents. We extend our results so that they hold even in the presence of up to t players with "unexpected" utilities. Finally, we show that our techniques can be used to simulate games with mediators by games without mediators.

Journal ArticleDOI
TL;DR: The proposed centralized algorithm uses the dual decomposition method to optimize spectra in an efficient and computationally tractable way and shows significant performance gains over existing dynamic spectrum management techniques.
Abstract: Crosstalk is a major issue in modern digital subscriber line (DSL) systems such as ADSL and VDSL. Static spectrum management, which is the traditional way of ensuring spectral compatibility, employs spectral masks that can be overly conservative and lead to poor performance. This paper presents a centralized algorithm for optimal spectrum balancing in DSL. The algorithm uses the dual decomposition method to optimize spectra in an efficient and computationally tractable way. The algorithm shows significant performance gains over existing dynamic spectrum management (DSM) techniques; e.g., in one of the cases studied, the proposed centralized algorithm leads to a factor-of-four increase in data rate over the distributed DSM algorithm, iterative waterfilling.

Journal ArticleDOI
TL;DR: A systematic overview of the state-of-the-art of time-delay-estimation algorithms ranging from the simple cross-correlation method to the advanced blind channel identification based techniques is presented.
Abstract: Time delay estimation has been a research topic of significant practical importance in many fields (radar, sonar, seismology, geophysics, ultrasonics, hands-free communications, etc.). It is a first stage that feeds into subsequent processing blocks for identifying, localizing, and tracking radiating sources. This area has made remarkable advances in the past few decades, and is continuing to progress, with an aim to create processors that are tolerant to both noise and reverberation. This paper presents a systematic overview of the state-of-the-art of time-delay-estimation algorithms ranging from the simple cross-correlation method to the advanced blind channel identification based techniques. We discuss the pros and cons of each individual algorithm, and outline their inherent relationships. We also provide experimental results to illustrate their performance differences in room acoustic environments where reverberation and noise are commonly encountered.
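The simple cross-correlation method that the survey starts from can be sketched in a few lines; the white source signal, delay, and noise level below are synthetic assumptions for illustration (real room acoustics add reverberation, which is what the blind channel identification methods address).

```python
import numpy as np

rng = np.random.default_rng(4)

true_delay = 25                      # delay in samples (assumed)
s = rng.normal(size=4000)            # white source signal (illustrative)

x1 = s + 0.05 * rng.normal(size=len(s))                       # sensor 1
x2 = np.roll(s, true_delay) + 0.05 * rng.normal(size=len(s))  # sensor 2, delayed

# Cross-correlation method: the delay estimate is the lag that
# maximizes the cross-correlation between the two sensor signals.
corr = np.correlate(x2, x1, mode="full")
lag = int(np.argmax(corr)) - (len(x1) - 1)
print(f"estimated delay: {lag} samples")
```

The generalized cross-correlation (GCC) family mentioned in the survey replaces the raw correlation with a frequency-weighted version (e.g., the PHAT weighting) to sharpen the peak under reverberation.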

Journal ArticleDOI
TL;DR: In this article, a frustrated triangular antiferromagnetic lattice was investigated and it was shown that the noncollinear spin structure plays an important role in inducing electric polarization.
Abstract: Magnetoelectric and magnetoelastic phenomena have been investigated on a frustrated triangular antiferromagnetic lattice in CuFeO2. Inversion-symmetry breaking, manifested as a finite electric polarization, was observed in noncollinear (helical) magnetic phases and not in collinear magnetic phases. This result demonstrates that the noncollinear spin structure plays an important role in inducing electric polarization. Based on these results we suggest that frustrated magnets (often favoring noncollinear configurations) are favorable candidates for a new class of magnetoelectric materials.

Journal ArticleDOI
TL;DR: It is shown that a random linear coding (RLC) based protocol disseminates all messages to all nodes in time ck + O(√k·ln(k)·ln(n)), where c < 3.46 using pull-based dissemination and c < 5.96 using push-based dissemination; simulations suggest that c < 2 might be a tighter bound.
Abstract: The problem of simultaneously disseminating k messages in a large network of n nodes, in a decentralized and distributed manner, where nodes only have knowledge about their own contents, is studied. In every discrete time-step, each node selects a communication partner randomly, uniformly among all nodes, and only one message can be transmitted. The goal is to disseminate rapidly, with high probability, all messages to all nodes. It is shown that a random linear coding (RLC) based protocol disseminates all messages to all nodes in time ck + O(√k·ln(k)·ln(n)), where c < 3.46 using pull-based dissemination and c < 5.96 using push-based dissemination. Simulations suggest that c < 2 might be a tighter bound. Thus, if k ≫ (ln(n))³, the time for simultaneous dissemination by RLC is asymptotically at most ck, versus the Ω(k·log₂(n)) time of sequential dissemination. Furthermore, when k ≫ (ln(n))³, the dissemination time is order optimal. When k ≪ (ln(n))², RLC reduces dissemination time by a factor of Ω(√k/ln k) over sequential dissemination. The overhead of the RLC protocol is negligible for messages of reasonable size. A store-and-forward mechanism without coding is also considered. It is shown that this approach performs no better than a sequential approach when k ∝ n. Owing to the distributed nature of the system, the proof requires analysis of an appropriate time-varying Bernoulli process.
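A small simulation of pull-based dissemination with random linear coding, simplified relative to the paper: the field (GF(2)), the network size, and the combination rule are illustrative assumptions. A node can decode once the coefficient vectors it holds reach full rank.

```python
import numpy as np

rng = np.random.default_rng(5)

def gf2_rank(vectors):
    # Gaussian elimination over GF(2); vectors are integer bitmasks.
    basis = {}  # pivot bit position -> basis vector
    rank = 0
    for v in vectors:
        while v:
            top = v.bit_length() - 1
            if top not in basis:
                basis[top] = v
                rank += 1
                break
            v ^= basis[top]
    return rank

def rlc_pull(n=16, k=8):
    # Node i < k starts with message i (a unit coefficient vector).
    # Each round every node pulls one random GF(2) combination of the
    # packets held by a uniformly chosen partner.
    store = [[1 << i] if i < k else [] for i in range(n)]
    rounds = 0
    while any(gf2_rank(s) < k for s in store):
        rounds += 1
        pulled = []
        for _ in range(n):
            partner = store[rng.integers(n)]
            combo = 0
            for pkt in partner:
                if rng.integers(2):
                    combo ^= pkt
            pulled.append(combo)
        for i, c in enumerate(pulled):
            if c:
                store[i].append(c)
    return rounds

rounds_taken = rlc_pull()
print(f"all 16 nodes can decode all 8 messages after {rounds_taken} rounds")
```

The key point, mirrored in the paper's analysis, is that a random coded packet is useful to a receiver whenever it lies outside the receiver's current span, which happens with probability at least 1/2 over GF(2), so no coordination about which message to send is needed.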

Journal ArticleDOI
TL;DR: In this paper, the authors pointed out that optical networks will have to carry vastly increased amounts of Internet traffic, foretelling the end of the so-called "Optical Moore's Law."
Abstract: Ten to 20 years from now, optical networks will have to carry vastly increased amounts of Internet traffic. Today's knowledge (2006) already points to ultimate technology limits in the physical layer, foretelling the end of the so-called "Optical Moore's Law." Such an observation is discordant with the generic and optimistic view of a "virtually infinite" optical bandwidth combined with unlimited Internet-traffic growth. In order to meet long-term needs and challenges, therefore, basic research in wideband optical components and subsystems must be urgently revived today.

Journal ArticleDOI
TL;DR: An analytical model for the mean mutual information (MI) of large systems is presented, along with the impact of the elevation spectrum on MI, and a composite channel impulse model is proposed for the cross-polarized channel that takes into account both the azimuth and elevation spectra.
Abstract: Fourth-generation (4G) systems are expected to support data rates of the order of 100 Mb/s in the outdoor environment and 1 Gb/s in the indoor/stationary environment. In order to support such large payloads, the radio physical layer must employ receiver algorithms that provide a significant increase in spectrum efficiency (and, hence, capacity) over current wireless systems. Recently, an explosion of multiple-input-multiple-output (MIMO) studies have appeared with many journals presenting special issues on this subject. This has occurred due to the potential of MIMO to provide a linear increase in capacity with antenna numbers. Environmental considerations and tower loads will often restrict the placing of large antenna spans on base stations (BSs). Similarly, customer device form factors also place a limit on the antenna numbers that can be placed with a mutual spacing of 0.5 wavelength. The use of cross-polarized antennas is widely used in modern cellular installations as it reduces spacing needs and tower loads on BSs. Hence, this approach is also receiving considerable attention in MIMO systems. In order to study and compare various receiver architectures that are based on MIMO techniques, one needs to have an accurate knowledge of the MIMO channel. However, very few studies have appeared that characterize the cross-polarized MIMO channel. Recently, the third-generation partnership standards bodies (3GPP/3GPP2) have defined a cross-polarized channel model for MIMO systems but this model neglects the elevation spectrum. In this paper, we provide a deeper understanding of the channel model for cross-polarized systems for different environments and propose a composite channel impulse model for the cross-polarized channel that takes into account both azimuth and elevation spectrum. We use the resulting channel impulse response to derive closed-form expressions for the spatial correlation. 
We also present models to describe the dependence of cross-polarization discrimination (XPD) on distance, azimuth and elevation, and delay spread. In addition, we study the impact of array width, signal-to-noise ratio, and antenna slant angle on the mutual information (MI) of the system. In particular, we present an analytical model for large-system mean mutual information values and consider the impact of the elevation spectrum on MI. Finally, the impact of multipath delays on XPD and MI is also explored.

Patent
Emanuele Jones1
17 Jul 2006
TL;DR: In this paper, a DNS-based enforcement system for confinement and detection of network malicious activities requires that every connection toward a resource located outside the local network is blocked by default by the local enforcement box, e.g. a firewall or a proxy.
Abstract: Malicious network activities do not make use of the Domain Name System (DNS) protocol to reach remote targets outside a local network. This DNS-based enforcement system for confinement and detection of network malicious activities requires that every connection toward a resource located outside the local network is blocked by default by the local enforcement box, e.g. a firewall or a proxy. Outbound connections are allowed to leave the local network only when authorized directly by an entity called the DNS Gatekeeper.

Journal ArticleDOI
TL;DR: The capacity-achieving input covariance for multi-antenna channels known instantaneously at the receiver and in distribution at the transmitter is characterized and an iterative algorithm that exhibits remarkable properties is presented: universal applicability, robustness and rapid convergence.
Abstract: We characterize the capacity-achieving input covariance for multi-antenna channels known instantaneously at the receiver and in distribution at the transmitter. Our characterization, valid for arbitrary numbers of antennas, encompasses both the eigenvectors and the eigenvalues. The eigenvectors are found for zero-mean channels with arbitrary fading profiles and a wide range of correlation and keyhole structures. For the eigenvalues, in turn, we present necessary and sufficient conditions as well as an iterative algorithm that exhibits remarkable properties: universal applicability, robustness and rapid convergence. In addition, we identify channel structures for which an isotropic input achieves capacity.

Journal ArticleDOI
TL;DR: In this paper, a geometry-driven facial expression synthesis system is proposed to automatically synthesize a corresponding expression image that includes photorealistic and natural looking expression details such as wrinkles due to skin deformation.
Abstract: Expression mapping (also called performance driven animation) has been a popular method for generating facial animations. A shortcoming of this method is that it does not generate expression details such as the wrinkles due to skin deformations. In this paper, we provide a solution to this problem. We have developed a geometry-driven facial expression synthesis system. Given feature point positions (the geometry) of a facial expression, our system automatically synthesizes a corresponding expression image that includes photorealistic and natural looking expression details. Due to the difficulty of point tracking, the number of feature points required by the synthesis system is, in general, more than what is directly available from a performance sequence. We have developed a technique to infer the missing feature point motions from the tracked subset by using an example-based approach. Another application of our system is expression editing where the user drags feature points while the system interactively generates facial expressions with skin deformation details.

Journal ArticleDOI
TL;DR: Deterministic algorithms are proposed to specify the coding operations at network nodes without knowledge of the overall network topology, and the smallest code alphabet size sufficient to code any network configuration with two sources is derived as a function of the number of receivers in the network.
Abstract: We propose a method to identify structural properties of multicast network configurations, by decomposing networks into regions through which the same information flows. This decomposition allows us to show that very different networks are equivalent from a coding point of view, and offers a means to identify such equivalence classes. It also allows us to divide the network coding problem into two almost independent tasks: one of graph theory and the other of classical channel coding theory. This approach to network coding enables us to derive the smallest code alphabet size sufficient to code any network configuration with two sources as a function of the number of receivers in the network. But perhaps the most significant strength of our approach concerns future network coding practice. Namely, we propose deterministic algorithms to specify the coding operations at network nodes without the knowledge of the overall network topology. Such decentralized designs facilitate the construction of codes that can easily accommodate future changes in the network, e.g., addition of receivers and loss of links.
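The classical two-source butterfly network gives a minimal feel for coding at a node: the bottleneck node forwards the XOR of the two source messages, and each receiver combines it with the message it hears directly. This is the textbook illustration only, not the paper's decomposition or decentralized design algorithm:

```python
def butterfly(a, b):
    """Two sources a and b; the bottleneck node forwards a XOR b.
    Receiver 1 also hears a directly, receiver 2 also hears b."""
    coded = a ^ b                 # the single coding operation
    r1 = (a, coded ^ a)           # receiver 1 recovers b from a and a^b
    r2 = (coded ^ b, b)           # receiver 2 recovers a from b and a^b
    return r1, r2

r1, r2 = butterfly(0b1011, 0b0110)
print(r1, r2)   # both receivers obtain (a, b) = (11, 6)
```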

Journal ArticleDOI
TL;DR: A breadboard of a three-layer printed reflectarray for dual polarization with a different coverage in each polarization, consisting of three layers of rectangular patch arrays separated by a honeycomb and backed by a ground plane, has been designed, manufactured, and tested.
Abstract: A breadboard of a three-layer printed reflectarray for dual polarization with a different coverage in each polarization has been designed, manufactured, and tested. The reflectarray consists of three layers of rectangular patch arrays separated by a honeycomb and backed by a ground plane. The beam shaping for each polarization is achieved by adjusting the phase of the reflection coefficient at each reflective element independently for each linear polarization. The phase shift for each polarization is controlled by varying either the x or y patch dimensions. The dimensions of the rectangular patches are optimized to achieve the required phase shift for each beam at central and extreme frequencies in the working band. The reflectarray has been designed to produce a contoured beam for a European coverage in H-polarization in a 10% bandwidth, and a pencil beam to illuminate the East Coast in North America in V-polarization. The measured radiation patterns show that gain requirements are practically fulfilled in a 10% bandwidth for both coverages, and the electrical performances of the breadboard are close to those of a classical dual gridded reflector.
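The standard reflectarray design relation (not specific to this paper) fixes the reflection phase each element must provide to collimate the feed's spherical wave into a pencil beam toward (theta0, phi0): phi_i = k0 (d_i - sin(theta0) (x_i cos(phi0) + y_i sin(phi0))) mod 2*pi, with d_i the feed-to-element distance. A small numerical sketch with an invented 3x3 geometry:

```python
import numpy as np

def required_phases(xy, feed, theta0, phi0, freq):
    """Reflection phase (rad, mod 2*pi) each element must add so the
    reflected field forms a pencil beam toward (theta0, phi0)."""
    k0 = 2 * np.pi * freq / 3e8
    elems = np.c_[xy, np.zeros(len(xy))]          # elements lie in z = 0
    d = np.linalg.norm(elems - feed, axis=1)      # feed-to-element path
    prog = np.sin(theta0) * (xy[:, 0] * np.cos(phi0) + xy[:, 1] * np.sin(phi0))
    return np.mod(k0 * (d - prog), 2 * np.pi)

f = 12e9
lam = 3e8 / f
g = np.arange(-1, 2) * lam / 2                    # 3x3 grid, half-wave spacing
xy = np.array([(x, y) for x in g for y in g])
feed = np.array([0.0, 0.0, 0.2])                  # feed 20 cm above the centre
ph = required_phases(xy, feed, np.deg2rad(20), 0.0, f)
print(np.round(np.rad2deg(ph), 1))
```

In the paper this per-element phase is realized by tuning the x or y patch dimensions, one polarization at a time.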

01 Jan 2006
TL;DR: An in-depth measurement study of one of the most popular IPTV systems, namely PPLive, is conducted using a dedicated PPLive crawler, which enables the study of the global characteristics of the mesh-pull P2P IPTV system.
Abstract: With over 100,000 simultaneous users (typically), PPLive is the most popular IPTV application today. PPLive uses a peer-to-peer design, in which peers download and redistribute live television content from and to other peers. Although PPLive is paving the way for an important new class of bandwidth intensive applications, little is known about it due to the proprietary nature of its protocol. In this paper we undertake a preliminary measurement study of PPLive, reporting results from passive packet sniffing of residential and campus peers. We report results for streaming performance, workload characteristics, and overlay properties.

Journal ArticleDOI
01 Nov 2006
TL;DR: A model-based method is proposed for assessing the level of discriminability of a system, given a set of sensors, the number of faults that can be discriminated, and its degree of diagnosability, i.e., the discriminability level related to the total number of anticipated faults.
Abstract: It is commonly accepted that the requirements for maintenance and diagnosis should be considered at the earliest stages of design. For this reason, methods for analyzing the diagnosability of a system and determining which sensors are needed to achieve the desired degree of diagnosability are highly valued. This paper clarifies the different diagnosability properties of a system and proposes a model-based method for: 1) assessing the level of discriminability of a system, i.e., given a set of sensors, the number of faults that can be discriminated, and its degree of diagnosability, i.e., the discriminability level related to the total number of anticipated faults; and 2) characterizing and determining the minimal additional sensors that guarantee a specified degree of diagnosability. The method takes advantage of the concept of component-supported analytical redundancy relation, which considers recent results crossing over the fault detection and isolation and diagnosis communities. It uses a model of the system to analyze in an exhaustive manner the analytical redundancies associated with the availability of sensors and performs from that a full diagnosability assessment. The method is applied to an industrial smart actuator that was used as a benchmark in the Development and Application of Methods for Actuator Diagnosis in Industrial Control Systems European project.
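The bookkeeping behind discriminability can be illustrated with a boolean fault signature matrix (rows: residuals/ARRs available for a given sensor set; columns: faults). Two faults are discriminable exactly when their signature columns differ, so the discriminability level is the number of distinct columns. The signatures below are invented, and the paper's component-supported ARR construction is not reproduced:

```python
def discriminability(signatures):
    """signatures: fault -> tuple of 0/1 over the available residuals.
    Faults with identical signatures cannot be told apart, so the
    discriminability level is the number of distinct signatures."""
    return len(set(signatures.values()))

sig = {                       # residuals r1..r3; values are invented
    "f1": (1, 0, 1),
    "f2": (1, 0, 1),          # same signature as f1: not discriminable
    "f3": (0, 1, 0),
    "f4": (1, 1, 0),
}
level = discriminability(sig)
degree = level / len(sig)     # degree of diagnosability
print(level, degree)          # 3 0.75
```

Adding a sensor adds residual rows, which can split identical columns apart; the minimal-sensor question is which rows to add so that all columns become distinct.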

Journal ArticleDOI
TL;DR: In this paper, it is shown that a net acceptor density of around 10^17 cm^-3 is required to ensure suppression of short-channel effects.
Abstract: Short-channel punch-through effects are demonstrated in 0.17 μm gate length AlGaN/GaN single heterojunction field-effect transistors. These take the form of a high output conductance and the strong dependence of pinch-off voltage on drain voltage. It is shown by simulation that they can be explained by poor confinement of charge at the AlGaN/GaN interface resulting in current flow within the bulk of the GaN layer. This is caused by there being a concentration of only ~1.5×10^16 cm^-3 deep levels in the insulating GaN buffer layer. It is found that a net acceptor density of around 10^17 cm^-3 is required to ensure suppression of short-channel effects.

Journal ArticleDOI
TL;DR: In this paper, the design of a directive antenna using the electromagnetic resonances of a Fabry-Perot cavity is reported; the resonance is excited by a patch antenna placed in the cavity in the vicinity of the ground plane, and only one excitation point is needed.
Abstract: We report the design of a directive antenna using the electromagnetic resonances of a Fabry-Perot cavity. The Fabry-Perot cavity is made of a ground plane and a single metallic grid. The resonance is excited by a patch antenna placed in the cavity in the vicinity of the ground plane. The two remarkable features of Fabry-Perot cavity antennas are, first, that they are very thin and, second, that only one excitation point is needed. A directivity of about 600 is measured at f = 14.80 GHz, which is to our knowledge one of the highest directivities reported for an antenna using Fabry-Perot resonances.
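The resonance condition of such a cavity (Trentini's ray argument, not specific to this paper) relates the cavity height h to the reflection phases of the grid (phi_prs) and ground plane (phi_ground): phi_prs + phi_ground - 2*k0*h = -2*pi*N. A sketch with an assumed operating frequency and an ideal grid whose reflection phase is pi:

```python
import numpy as np

def cavity_height(freq, phi_prs, phi_ground=np.pi, N=0):
    """Resonant height from phi_prs + phi_ground - 2*k0*h = -2*pi*N:
    h = (phi_prs + phi_ground) * lam / (4*pi) + N * lam / 2."""
    lam = 3e8 / freq
    return (phi_prs + phi_ground) * lam / (4 * np.pi) + N * lam / 2

f = 14.8e9                              # assumed operating frequency
h = cavity_height(f, phi_prs=np.pi)     # ideal grid: reflection phase ~ pi
print(h / (3e8 / f))                    # cavity is about half a wavelength thick
```

A grid whose reflection phase deviates from pi shifts the resonant height, which is one way such cavities can be made thinner than half a wavelength.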

Patent
Zlatko Krstulich1
27 Dec 2006
TL;DR: In this paper, a method is provided for ensuring that specific traffic flows are adequately prioritized in a public packet communication network even when the network is heavily congested.
Abstract: A method is provided for ensuring that specific traffic flows are adequately prioritized in a public packet communication network even when the network is heavily congested. Per-flow QoS capability is added to VPN tunnels. Connection requests are routed through a specific port in an access provider's network to a designated VPN gateway. Deep packet inspection is performed on traffic through the port in an attempt to determine whether the connection request was accepted. If the connection request was accepted, the traffic flows associated with that session may be given a specific priority or QoS level when transiting a packet access network.
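A toy illustration of the decision the patent describes: watch packets on the designated port and, once deep packet inspection indicates the connection request was accepted, mark that flow's subsequent traffic with an elevated priority. The flow/packet fields and the ACCEPT marker are invented stand-ins for the real signalling exchange:

```python
HIGH, BEST_EFFORT = 5, 0

def classify(flows, packets):
    """flows: flow_id -> priority. Start every flow at best effort and
    upgrade it once inspection of its payload shows the connection
    request was accepted (toy stand-in for parsing real signalling)."""
    for pkt in packets:
        flows.setdefault(pkt["flow"], BEST_EFFORT)
        if b"ACCEPT" in pkt["payload"]:       # the deep-packet-inspection match
            flows[pkt["flow"]] = HIGH         # prioritize the whole session
    return flows

pkts = [
    {"flow": "A", "payload": b"SYN"},
    {"flow": "A", "payload": b"ACCEPT session=42"},
    {"flow": "B", "payload": b"SYN"},
]
print(classify({}, pkts))   # {'A': 5, 'B': 0}
```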

Proceedings ArticleDOI
11 Dec 2006
TL;DR: The ultimate performance limits of inter-cell coordination in a cellular downlink network are quantified, and a simple upper bound on the max-min rate of any scheme is obtained.
Abstract: We quantify the ultimate performance limits of inter-cell coordination in a cellular downlink network. The goal is to achieve fairness by maximizing the minimum rate in the network subject to per-base power constraints. We first solve the max-min rate problem for a particular zero-forcing dirty paper coding scheme so as to obtain an achievable max-min rate, which serves as a lower bound on the ultimate limit. We then obtain a simple upper bound on the max-min rate of any scheme, and show that the rate achievable by the zero-forcing dirty paper coding scheme is close to this upper bound. We also extend our analysis to coordinated networks with multiple antennas.
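The max-min structure can be sketched for the simplest post-zero-forcing model: each user sees an interference-free link with gain g, a user served at rate r needs power (2^r - 1)/g, and each base's users must fit within its per-base budget P. Bisection on the common rate then finds the max-min rate. The gains and budget below are illustrative, and this is a toy stand-in for the paper's zero-forcing dirty paper coding analysis:

```python
def max_min_rate(gains_per_base, P, tol=1e-9):
    """Largest common rate r feasible for every base: a user with gain g
    needs power (2**r - 1) / g, and each base's users must fit within
    its per-base power budget P (orthogonal, zero-forced links)."""
    def feasible(r):
        need = 2 ** r - 1
        return all(sum(need / g for g in gains) <= P for gains in gains_per_base)
    lo, hi = 0.0, 30.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

# Two bases with two users each; gains and budget are illustrative
r = max_min_rate([[1.0, 0.5], [2.0, 0.25]], P=10.0)
print(round(r, 3))   # the weaker base's users limit the common rate
```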