Institution
Alcatel-Lucent
Stuttgart, Germany
About: Alcatel-Lucent is a company based in Stuttgart, Germany. It is known for its research contributions in the topics: Signal & Network packet. The organization has 37003 authors who have published 53332 publications receiving 1430547 citations. The organization is also known as: Alcatel-Lucent S.A. & Alcatel.
Papers published on a yearly basis
Papers
TL;DR: The accuracy of coherence estimation is investigated as a function of the coherence map resolution and it is established that the magnitude of the averaged sample coherence estimate is slightly biased for high-resolution coherence maps and that the bias reduces with coarser resolution.
Abstract: In dual- or multiple-channel synthetic aperture radar (SAR) imaging modes, cross-channel correlation is a potential source of information. The sample coherence magnitude is calculated over a moving window to generate a coherence magnitude map. High-resolution coherence maps may be useful to discriminate fine structures. Coarser resolution is needed for a more accurate estimation of the coherence magnitude. In this study, the accuracy of coherence estimation is investigated as a function of the coherence map resolution. It is shown that the space-averaged coherence magnitude is biased toward higher values. The accuracy of the coherence magnitude estimate obtained is a function of the number of pixels averaged and the number of independent samples per pixel (i.e., the coherence map resolution). A method is proposed to remove the bias from the space-averaged sample coherence magnitude. Coherence magnitude estimation from complex (magnitude and phase) coherence maps is also considered. It is established that the magnitude of the averaged sample coherence estimate is slightly biased for high-resolution coherence maps and that the bias reduces with coarser resolution. Finally, coherence estimation for nonstationary targets is discussed. It is shown that the averaged sample coherence obtained from complex coherence maps or coherence magnitude maps is suitable for estimation of nonstationary coherence. The averaged sample (complex) coherence permits the calculation of an unbiased coherence estimate, provided that the original signals can be assumed to be locally stationary over a sufficiently coarse resolution cell.
676 citations
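The moving-window sample coherence magnitude described in this abstract is easy to reproduce numerically. Below is a minimal numpy/scipy sketch, not code from the paper; the window size, the edge handling, and the synthetic uncorrelated test channels are illustrative assumptions. Running it on independent noise shows the bias toward higher values that the paper analyzes, and shows the bias shrinking as the estimation window grows.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sample_coherence_map(s1, s2, win=5):
    """Moving-window sample coherence magnitude of two complex SAR channels.

    s1, s2 : 2-D single-look complex images of the same shape.
    win    : side length of the square estimation window; averaging more
             pixels lowers the bias of the magnitude estimate.
    """
    def local_mean(a):
        # uniform_filter expects real arrays, so filter real/imag parts separately.
        if np.iscomplexobj(a):
            return uniform_filter(a.real, win) + 1j * uniform_filter(a.imag, win)
        return uniform_filter(a, win)

    cross = local_mean(s1 * np.conj(s2))
    p1 = local_mean(np.abs(s1) ** 2)
    p2 = local_mean(np.abs(s2) ** 2)
    return np.abs(cross) / np.sqrt(p1 * p2 + 1e-12)

# Two independent noise channels: the true coherence is zero, yet the
# estimate is biased toward higher values, less so with a larger window.
rng = np.random.default_rng(0)
shape = (128, 128)
s1 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
s2 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
print(sample_coherence_map(s1, s2, win=3).mean())   # noticeably above 0
print(sample_coherence_map(s1, s2, win=11).mean())  # much closer to 0
```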
TL;DR: GexSi1−x films are grown on Si by molecular beam epitaxy and analyzed by Nomarski optical interference microscopy, Rutherford ion backscattering and channeling, x-ray diffraction, and transmission electron microscopy as discussed by the authors.
Abstract: GexSi1−x films are grown on Si by molecular beam epitaxy and analyzed by Nomarski optical interference microscopy, Rutherford ion backscattering and channeling, x‐ray diffraction, and transmission electron microscopy. The full range of alloy compositions will grow smoothly on silicon. GexSi1−x films with x≤0.5 can be grown free of dislocations by means of strained‐layer epitaxy where lattice mismatch is accommodated by tetragonal strain. Critical thickness and composition values are tabulated for strained‐layer growth. Multiple strained layers are combined to form a GexSi1−x/Si strained‐layer superlattice.
675 citations
12 Jan 2005
TL;DR: A new approach to partial-order reduction for model checking software is presented, based on initially exploring an arbitrary interleaving of the various concurrent processes/threads, and dynamically tracking interactions between these to identify backtracking points where alternative paths in the state space need to be explored.
Abstract: We present a new approach to partial-order reduction for model checking software. This approach is based on initially exploring an arbitrary interleaving of the various concurrent processes/threads, and dynamically tracking interactions between these to identify backtracking points where alternative paths in the state space need to be explored. We present examples of multi-threaded programs where our new dynamic partial-order reduction technique significantly reduces the search space, even though traditional partial-order algorithms are helpless.
669 citations
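The gain behind partial-order reduction comes from the fact that interleavings differing only in the order of independent (non-conflicting) operations reach the same state. The toy Python sketch below illustrates that redundancy by brute force; it is not the paper's dynamic backtracking algorithm, and the two straight-line example threads are made up for illustration.

```python
import itertools

# Two straight-line threads, each a list of (shared_variable, value_to_write).
# Writes to different variables are independent: swapping them cannot change
# the final state, which is the redundancy partial-order reduction exploits.
THREAD_A = [("x", 1), ("x", 2), ("y", 10)]
THREAD_B = [("z", 7), ("y", 20)]

def interleavings(a, b):
    """Yield every interleaving of the two operation sequences."""
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def run(schedule):
    """Execute one interleaving and return the resulting shared state."""
    state = {}
    for var, val in schedule:
        state[var] = val
    return tuple(sorted(state.items()))

schedules = list(interleavings(THREAD_A, THREAD_B))
final_states = {run(s) for s in schedules}
print(f"{len(schedules)} interleavings, {len(final_states)} distinct final states")
```

A model checker only needs one representative interleaving per distinct outcome; the paper's contribution is to discover the necessary backtracking points dynamically during the search rather than from a static analysis.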
01 Jul 1999
TL;DR: Digital watermarking techniques are described, known as perceptually based watermarks, that are designed to exploit aspects of the human visual system in order to provide a transparent (invisible), yet robust watermark.
Abstract: The growth of new imaging technologies has created a need for techniques that can be used for copyright protection of digital images and video. One approach for copyright protection is to introduce an invisible signal, known as a digital watermark, into an image or video sequence. In this paper, we describe digital watermarking techniques, known as perceptually based watermarks, that are designed to exploit aspects of the human visual system in order to provide a transparent (invisible), yet robust watermark. In the most general sense, any watermarking technique that attempts to incorporate an invisible mark into an image is perceptually based. However, in order to provide transparency and robustness to attack, two conflicting requirements from a signal processing perspective, more sophisticated use of perceptual information in the watermarking process is required. We describe watermarking techniques ranging from simple schemes which incorporate common-sense rules in using perceptual information in the watermarking process, to more elaborate schemes which adapt to local image characteristics based on more formal perceptual models. This review is not meant to be exhaustive; its aim is to provide the reader with an understanding of how the techniques have been evolving as the requirements and applications become better defined.
668 citations
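As a rough illustration of the "common-sense rule" end of the spectrum this survey describes, the sketch below adds a pseudo-random ±1 pattern scaled by local image activity, so the mark is stronger where texture can visually hide it, and detects it by correlating the embedding residual with the pattern. The function names, block size, and strength parameters are illustrative assumptions, not a scheme from the paper.

```python
import numpy as np

def embed_watermark(image, key=0, base_strength=2.0, win=8):
    """Toy perceptually weighted additive watermark (illustration only).

    A pseudo-random +/-1 pattern derived from `key` is added to the image,
    scaled by the standard deviation of each win x win block: textured
    regions hide a stronger mark, smooth regions receive almost none.
    """
    rng = np.random.default_rng(key)
    mark = rng.choice([-1.0, 1.0], size=image.shape)
    strength = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(0, h, win):
        for j in range(0, w, win):
            block = image[i:i + win, j:j + win]
            strength[i:i + win, j:j + win] = block.std()
    weight = strength / (strength.max() + 1e-12)
    watermarked = np.clip(image + base_strength * weight * mark, 0, 255)
    return watermarked, mark

def detect_watermark(watermarked, original, mark):
    """Correlation detector: values near 1 indicate the mark is present."""
    residual = watermarked.astype(float) - original.astype(float)
    denom = np.linalg.norm(residual) * np.linalg.norm(mark) + 1e-12
    return float(np.sum(residual * mark) / denom)

img = np.tile(np.arange(64, dtype=float), (64, 1))  # simple gradient test image
wm_img, mark = embed_watermark(img, key=42)
print(detect_watermark(wm_img, img, mark))          # close to 1.0 when the mark is intact
```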
TL;DR: Non-linear Independent Component Estimation (NICE) as discussed by the authors is a deep learning framework for modeling complex high-dimensional densities based on the idea that a good representation is one in which the data has a distribution that is easy to model.
Abstract: We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. For this purpose, a non-linear deterministic transformation of the data is learned that maps it to a latent space so as to make the transformed data conform to a factorized distribution, i.e., resulting in independent latent variables. We parametrize this transformation so that computing the Jacobian determinant and inverse transform is trivial, yet we maintain the ability to learn complex non-linear transformations, via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood, which is tractable. Unbiased ancestral sampling is also easy. We show that this approach yields good generative models on four image datasets and can be used for inpainting.
667 citations
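The core building block described in this abstract, a transformation whose Jacobian determinant and inverse are trivial to compute, is easy to sketch. The toy numpy class below implements one additive coupling layer in the style of NICE; the small untrained random network standing in for the coupling function, and all sizes, are illustrative assumptions. In the actual model such layers are stacked and trained by maximizing the exact log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

class AdditiveCoupling:
    """One additive coupling layer (untrained sketch).

    Half of the coordinates pass through unchanged; the other half are shifted
    by a function m(.) of the first half. The Jacobian is triangular with a
    unit diagonal, so its log-determinant is 0 and inversion is a subtraction.
    """
    def __init__(self, dim, hidden=32):
        half = dim // 2
        # m(.) is a tiny random two-layer net here; in NICE it would be trained.
        self.W1 = rng.standard_normal((half, hidden)) * 0.1
        self.W2 = rng.standard_normal((hidden, dim - half)) * 0.1
        self.half = half

    def m(self, x1):
        return np.tanh(x1 @ self.W1) @ self.W2

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        return np.concatenate([x1, x2 + self.m(x1)], axis=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        return np.concatenate([y1, y2 - self.m(y1)], axis=1)

x = rng.standard_normal((4, 6))
layer = AdditiveCoupling(dim=6)
y = layer.forward(x)
print(np.allclose(layer.inverse(y), x))  # True: the transform is exactly invertible
```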
Authors
Showing all 37011 results
Name | H-index | Papers | Citations |
---|---|---|---|
George M. Whitesides | 240 | 1739 | 269833 |
Yoshua Bengio | 202 | 1033 | 420313 |
John A. Rogers | 177 | 1341 | 127390 |
Zhenan Bao | 169 | 865 | 106571 |
Thomas S. Huang | 146 | 1299 | 101564 |
Federico Capasso | 134 | 1189 | 76957 |
Robert S. Brown | 130 | 1243 | 65822 |
Christos Faloutsos | 127 | 789 | 77746 |
Robert J. Cava | 125 | 1042 | 71819 |
Ramamoorthy Ramesh | 122 | 649 | 67418 |
Yann LeCun | 121 | 369 | 171211 |
Kamil Ugurbil | 120 | 536 | 59053 |
Don Towsley | 119 | 883 | 56671 |
Steven P. DenBaars | 118 | 1366 | 60343 |
Robert E. Tarjan | 114 | 400 | 67305 |