Institution
Dolby Laboratories
Company • Amsterdam, Netherlands
About: Dolby Laboratories is a company based in Amsterdam, Netherlands. It is known for its research contributions in topics such as Audio signal and Audio signal flow. The organization has 956 authors who have published 1726 publications receiving 29456 citations.
Papers published on a yearly basis
Papers
TL;DR: Spatial sound intensity vectors are formulated in the spherical harmonic domain so that the vectors contain energy and directivity information over continuous spatial regions, enabling ease of implementation.
Abstract: Sound intensity is a fundamental quantity describing acoustic wave fields and it contains both energy and directivity information. It is used in a variety of applications such as source localization, reproduction, and power measurement. Until now, intensity is defined at a point in space, however given sound propagates over space, knowing its spatial distribution could be more powerful. This paper formulates spatial sound intensity vectors in spherical harmonic domain such that the vectors contain energy and directivity information over continuous spatial regions. These representations are derived with finite sets of closed form coefficients enabling ease of implementation.
10 citations
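The paper above generalizes the classical point-wise sound intensity to continuous regions. As a minimal sketch of the point-wise quantity it builds on (the spherical-harmonic formulation itself is not reproduced here), the active intensity vector is I = ½ Re{P · conj(V)}, which for a plane wave points along the propagation direction. The amplitudes and medium constants below are illustrative:

```python
import numpy as np

# Point-wise active sound intensity: I = 0.5 * Re{ P * conj(V) } per axis.
# This is the classical quantity the paper extends to continuous spatial
# regions via spherical harmonics (not reproduced here).

rho0, c = 1.204, 343.0                  # air density (kg/m^3), speed of sound (m/s)
direction = np.array([1.0, 0.0, 0.0])   # plane-wave propagation along +x

P = 1.0 + 0j                            # complex pressure amplitude at the point
# For a plane wave, particle velocity is in phase with pressure:
V = (P / (rho0 * c)) * direction

I = 0.5 * np.real(P * np.conj(V))       # active intensity vector (W/m^2)
unit = I / np.linalg.norm(I)            # its direction recovers `direction`
```

The direction of the intensity vector is what source-localization applications exploit.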
29 Dec 2011
TL;DR: Experimental results indicate that the proposed algorithm is able to reduce the energy consumption by 4.25% on average, while achieving the same or better subjective image quality as conventional color quantization.
Abstract: In this paper we present a novel perceptually-based algorithm for color quantization that produces images that consume less energy than conventionally quantized images when displayed on modern energy-adaptive displays. To evaluate the performance of the proposed algorithm, we performed a subjective study on a standard Kodak color image database. Experimental results indicate that the proposed algorithm is able to reduce the energy consumption by 4.25% on average, while achieving the same or better subjective image quality as conventional color quantization.
10 citations
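For context on the entry above, conventional color quantization (the baseline the paper compares against) is often done with k-means over pixel colors; the paper's contribution is a perceptual, energy-aware variant (not reproduced here). A plain k-means baseline on stand-in pixel data might look like:

```python
import numpy as np

# Plain k-means color quantization as a baseline; the paper's method adds a
# perceptual, energy-aware objective (not shown) so the resulting palette
# draws less power on energy-adaptive displays.

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(1000, 3)).astype(float)  # stand-in image

k = 8                                                # palette size
centers = pixels[rng.choice(len(pixels), k, replace=False)]
for _ in range(10):                                  # a few Lloyd iterations
    d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
    labels = d.argmin(axis=1)                        # nearest palette entry
    for j in range(k):
        if np.any(labels == j):
            centers[j] = pixels[labels == j].mean(axis=0)

quantized = centers[labels]      # every pixel mapped to its palette color
```

The 4.25% average energy saving quoted above comes from biasing this kind of palette selection, not from changing the quantization mechanics.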
15 Aug 2005
TL;DR: In this article, upmixing and mixing components are disclosed that allow audio signals to be delivered to all output channels regardless of the configuration of the audio sources and the number of channels that are provided by those audio sources.
Abstract: Audio sources in typical computer systems provide different numbers of channels of audio signals to a mixing component of the operating system. This conventional arrangement usually prevents the audio signals from all sources from being played back through all output channels. Novel arrangements of upmixing and mixing components are disclosed that allow audio signals to be delivered to all output channels regardless of the configuration of the audio sources and the number of channels that are provided by those audio sources.
10 citations
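The simplest form of the upmixing the entry above describes is a fixed matrix that maps fewer input channels to more output channels. The 2-to-5.1 matrix below is an illustrative passive upmix, not the patent's specific arrangement; the channel ordering and gains are assumptions:

```python
import numpy as np

# A minimal passive stereo -> 5.1 upmix matrix (illustrative only).
# Assumed channel order: [FL, FR, C, LFE, Ls, Rs].

g = 1.0 / np.sqrt(2.0)
upmix = np.array([
    [1.0, 0.0],   # FL <- L
    [0.0, 1.0],   # FR <- R
    [g,   g  ],   # C  <- (L + R) / sqrt(2)
    [0.0, 0.0],   # LFE left silent in this sketch
    [g,  -g  ],   # Ls <- (L - R) / sqrt(2)
    [-g,  g  ],   # Rs <- (R - L) / sqrt(2)
])

stereo = np.array([[0.5], [0.3]])   # one sample of (L, R)
five_one = upmix @ stereo           # six output channels
```

A mixing component can then sum such upmixed streams from every source, which is how every source reaches every output channel regardless of its native channel count.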
22 Jun 2012
TL;DR: In this article, a convolutive blind source separation method is described, in which each of a plurality of input signals is transformed into the frequency domain and source signals are estimated by filtering the transformed input signals through the respective unmixing filter configured with calculated values of the coefficients.
Abstract: Methods and apparatuses for convolutive blind source separation are described. Each of a plurality of input signals is transformed into frequency domain. Values of coefficients of unmixing filter corresponding to frequency bins are calculated by performing a gradient descent process on a cost function at least dependent on the coefficients of the unmixing filters. In each iteration of the gradient descent process, gradient terms for calculating the values of the same coefficient of the unmixing filters are adjusted to improve smoothness of gradient terms across the frequency bins. With respect to each of the frequency bins, source signals are estimated by filtering the transformed input signals through the respective unmixing filter configured with the calculated values of the coefficients. The estimated source signals on the respective frequency bins are transformed into time domain. The cost function is adapted to evaluate decorrelation between the estimated source signals.
10 citations
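To make the per-bin step concrete: within one frequency bin, the estimated sources are Y = W X, and W is updated by gradient descent on a decorrelation cost. The sketch below shows only that per-bin decorrelation update on synthetic real-valued data; the paper's actual contribution, smoothing the gradient terms across frequency bins, is omitted, and the update rule here is a generic decorrelation rule rather than the patent's exact one:

```python
import numpy as np

# Per-bin sketch: estimate sources Y = W @ X and nudge W toward making the
# source covariance diagonal (decorrelation). The cross-bin gradient
# smoothing that is the paper's contribution is not reproduced here.

rng = np.random.default_rng(1)
S = rng.standard_normal((2, 4096))            # two uncorrelated "sources"
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # mixing within one bin
X = A @ S

W = np.eye(2)
for _ in range(500):
    Y = W @ X
    C = (Y @ Y.T) / Y.shape[1]                # estimated-source covariance
    off = C - np.diag(np.diag(C))             # off-diagonal penalty term
    W -= 0.01 * (off @ W)                     # decorrelation-style update

resid = np.abs((W @ X) @ (W @ X).T / X.shape[1])
# After the loop, the cross term resid[0, 1] is far below resid[0, 0].
```

Decorrelation alone identifies sources only up to a rotation; the full method applies this in every bin and then transforms the estimates back to the time domain.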
01 Nov 2017
TL;DR: FusionGAN as discussed by the authors is a novel genre fusion framework for music generation that integrates the strengths of generative adversarial networks and dual learning, which can effectively integrate the styles of the given domains.
Abstract: FusionGAN is a novel genre fusion framework for music generation that integrates the strengths of generative adversarial networks and dual learning. In particular, the proposed method offers a dual learning extension that can effectively integrate the styles of the given domains. To efficiently quantify the difference among diverse domains and avoid the vanishing gradient issue, FusionGAN provides a Wasserstein based metric to approximate the distance between the target domain and the existing domains. Adopting the Wasserstein distance, a new domain is created by combining the patterns of the existing domains using adversarial learning. Experimental results on public music datasets demonstrated that our approach could effectively merge two genres.
10 citations
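The Wasserstein metric mentioned above measures how far apart two distributions are. In the one-dimensional case with equally many samples it has a simple closed form, the mean absolute difference of sorted samples; this toy version illustrates the idea, whereas FusionGAN approximates the distance between high-dimensional music domains with learned critics:

```python
import numpy as np

# 1-D Wasserstein-1 distance between two empirical distributions with the
# same sample count: mean absolute difference of the sorted samples. A toy
# stand-in for the critic-based approximation used in FusionGAN.

def wasserstein_1d(a, b):
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    return np.mean(np.abs(a - b))

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 10000)
y = rng.normal(0.5, 1.0, 10000)
d = wasserstein_1d(x, y)   # close to 0.5: the distributions differ by a shift
```

Unlike divergences that saturate when supports barely overlap, this distance stays informative, which is why it helps avoid vanishing gradients in adversarial training.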
Authors
Showing all 959 results
Name | H-index | Papers | Citations
---|---|---|---
Wolfgang Heidrich | 64 | 312 | 15854 |
Rabab K. Ward | 56 | 549 | 14364 |
Lorne A. Whitehead | 42 | 232 | 6661 |
Scott J. Daly | 41 | 230 | 5543 |
Michael E. Miller | 40 | 225 | 5264 |
Alireza Marandi | 39 | 140 | 6116 |
Wolfgang Stuerzlinger | 35 | 230 | 5192 |
Lars Villemoes | 33 | 180 | 2815 |
Joan Serrà | 31 | 139 | 4046 |
Dong Tian | 31 | 116 | 3621 |
Peng Yin | 30 | 133 | 2454 |
Ning Xu | 28 | 117 | 2705 |
Nicolas R. Tsingos | 28 | 110 | 2749 |
Panos Nasiopoulos | 27 | 271 | 3706 |
Zhibo Chen | 27 | 344 | 3385 |