Institution

Dolby Laboratories

Company · Amsterdam, Netherlands

About: Dolby Laboratories is a company based in Amsterdam, Netherlands. It is known for research contributions in the topics Audio signal & Audio signal flow. The organization has 956 authors who have published 1726 publications receiving 29456 citations.


Papers
Journal ArticleDOI
TL;DR: Spatial sound intensity vectors in the spherical harmonic domain are formulated such that the vectors contain energy and directivity information over continuous spatial regions, enabling ease of implementation.
Abstract: Sound intensity is a fundamental quantity describing acoustic wave fields, containing both energy and directivity information. It is used in a variety of applications such as source localization, reproduction, and power measurement. Until now, intensity has been defined at a point in space; however, since sound propagates over space, knowing its spatial distribution could be more powerful. This paper formulates spatial sound intensity vectors in the spherical harmonic domain such that the vectors contain energy and directivity information over continuous spatial regions. These representations are derived with finite sets of closed-form coefficients, enabling ease of implementation.
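The formulation builds on the standard pointwise definition of time-averaged sound intensity, I = ½ Re{p v*}, computed from sound pressure and particle velocity. Below is a minimal numpy sketch of that pointwise quantity; the signal shapes and test values are assumptions, and the paper's spherical-harmonic, region-based extension is not reproduced here.

```python
import numpy as np

def time_averaged_intensity(p, v):
    """Time-averaged sound intensity vector at a single point.

    p : complex pressure spectrum, shape (n_freqs,)
    v : complex particle-velocity spectra, shape (3, n_freqs), one row per axis.
    Returns the 3-D intensity vector (summed over frequency) from the
    standard pointwise definition I = 1/2 Re{p v*}; the paper's
    spherical-harmonic, region-based formulation is not reproduced here.
    """
    return 0.5 * np.real(np.sum(p[None, :] * np.conj(v), axis=1))

# Illustrative use: a single-frequency plane wave travelling along +x,
# for which particle velocity is in phase with pressure (v = p / (rho * c)).
rho, c = 1.21, 343.0                       # air density (kg/m^3), speed of sound (m/s)
p = np.array([1.0 + 0.0j])                 # unit pressure amplitude (Pa)
v = np.stack([p / (rho * c), 0 * p, 0 * p])
print(time_averaged_intensity(p, v))       # intensity vector points along +x
```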

10 citations

Proceedings ArticleDOI
29 Dec 2011
TL;DR: Experimental results indicate that the proposed algorithm is able to reduce the energy consumption by 4.25% on average, while achieving the same or better subjective image quality as conventional color quantization.
Abstract: In this paper we present a novel perceptually-based algorithm for color quantization that produces images that consume less energy than conventionally quantized images when displayed on modern energy-adaptive displays. To evaluate the performance of the proposed algorithm, we performed a subjective study on a standard Kodak color image database. Experimental results indicate that the proposed algorithm is able to reduce the energy consumption by 4.25% on average, while achieving the same or better subjective image quality as conventional color quantization.
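For context, conventional color quantization is often implemented as k-means clustering over RGB pixels; the sketch below shows only that baseline (the function name, palette size, and random test image are placeholders), not the paper's perceptual, energy-aware algorithm, which additionally biases the palette toward lower-power colors for energy-adaptive displays.

```python
import numpy as np

def kmeans_quantize(img, n_colors=16, n_iter=20, seed=0):
    """Conventional k-means color quantization (baseline only).

    img : float array in [0, 1], shape (H, W, 3).
    Returns the quantized image and the palette. The paper's perceptual,
    energy-aware variant is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3)
    palette = pixels[rng.choice(len(pixels), n_colors, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest palette color.
        d = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each palette color to the mean of its assigned pixels.
        for k in range(n_colors):
            members = pixels[labels == k]
            if len(members):
                palette[k] = members.mean(axis=0)
    return palette[labels].reshape(img.shape), palette

# Illustrative use on a random image.
img = np.random.default_rng(1).random((64, 64, 3))
quantized, palette = kmeans_quantize(img, n_colors=8)
```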

10 citations

Patent
15 Aug 2005
TL;DR: In this article, upmixing and mixing components are disclosed that allow audio signals to be delivered to all output channels regardless of the configuration of the audio sources and the number of channels that are provided by those audio sources.
Abstract: Audio sources in typical computer systems provide different numbers of channels of audio signals to a mixing component of the operating system. This conventional arrangement usually prevents the audio signals from all sources from being played back through all output channels. Novel arrangements of upmixing and mixing components are disclosed that allow audio signals to be delivered to all output channels regardless of the configuration of the audio sources and the number of channels that are provided by those audio sources.
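As one illustration of the general idea, a fixed passive matrix can spread a stereo source across a five-channel layout so that, after mixing, every source contributes to every output channel. The sketch below is a generic example with illustrative gains, not the specific arrangement claimed in the patent.

```python
import numpy as np

def passive_upmix_stereo_to_5ch(stereo):
    """Map a stereo signal to five output channels with a fixed matrix.

    stereo : array of shape (n_samples, 2) with columns (L, R).
    Returns shape (n_samples, 5) ordered (L, R, C, Ls, Rs).
    A generic passive matrix with illustrative gains, not the patented arrangement.
    """
    L, R = stereo[:, 0], stereo[:, 1]
    center = (L + R) * 0.5          # sum signal feeds the center channel
    surround = (L - R) * 0.5        # difference signal feeds the surrounds
    return np.stack([L, R, center, surround, -surround], axis=1)

# Mixing stage: sources with different channel counts are first upmixed
# to the output layout, then summed so every source reaches every speaker.
mono = np.random.default_rng(0).standard_normal((48000, 1))
stereo = np.random.default_rng(1).standard_normal((48000, 2))
mono_5ch = np.repeat(mono, 5, axis=1) / 5.0            # trivial mono upmix
mix = mono_5ch + passive_upmix_stereo_to_5ch(stereo)   # both sources on all channels
```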

10 citations

Patent
Xuejing Sun
22 Jun 2012
TL;DR: In this article, a convolutive blind source separation method is described, in which each of a plurality of input signals is transformed into the frequency domain and source signals are estimated by filtering the transformed input signals through the respective unmixing filter configured with the calculated values of the coefficients.
Abstract: Methods and apparatuses for convolutive blind source separation are described. Each of a plurality of input signals is transformed into the frequency domain. Values of the coefficients of the unmixing filters corresponding to the frequency bins are calculated by performing a gradient descent process on a cost function that depends at least on the coefficients of the unmixing filters. In each iteration of the gradient descent process, the gradient terms for calculating the values of the same coefficient of the unmixing filters are adjusted to improve the smoothness of the gradient terms across the frequency bins. With respect to each of the frequency bins, source signals are estimated by filtering the transformed input signals through the respective unmixing filter configured with the calculated values of the coefficients. The estimated source signals in the respective frequency bins are transformed into the time domain. The cost function is adapted to evaluate decorrelation between the estimated source signals.
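A rough sketch of the frequency-domain pipeline the abstract describes is shown below: per-bin unmixing matrices updated by gradient descent on a simple decorrelation cost (squared off-diagonal output covariance), with gradients smoothed across neighbouring bins. The cost, step size, and smoothing rule are placeholders rather than the patent's exact formulation.

```python
import numpy as np

def fd_bss_sketch(X, n_iter=200, mu=0.05, smooth=0.5):
    """Frequency-domain blind source separation (illustrative sketch).

    X : STFTs of the mixtures, shape (n_bins, n_chan, n_frames), complex.
    Per-bin unmixing matrices W[k] are updated by gradient descent on a
    decorrelation cost, with gradients smoothed across neighbouring bins;
    the patent's exact cost, normalisation and smoothing are not reproduced.
    """
    n_bins, n_chan, n_frames = X.shape
    Rx = X @ X.conj().transpose(0, 2, 1) / n_frames           # per-bin mixture covariance
    W = np.tile(np.eye(n_chan, dtype=complex), (n_bins, 1, 1))
    eye = np.eye(n_chan, dtype=bool)
    for _ in range(n_iter):
        C = W @ Rx @ W.conj().transpose(0, 2, 1)              # output covariance per bin
        off = np.where(eye, 0.0, C)                           # keep only off-diagonal terms
        grad = off @ W @ Rx                                    # gradient of the decorrelation cost
        # Smooth gradients across frequency bins before the update.
        grad = smooth * grad + (1 - smooth) * 0.5 * (
            np.roll(grad, 1, axis=0) + np.roll(grad, -1, axis=0))
        W = W - mu * grad                                      # small, illustrative step size
    return W @ X                                               # estimated source STFTs per bin

# Illustrative use: two channels mixed by a random instantaneous matrix.
rng = np.random.default_rng(0)
S = rng.standard_normal((257, 2, 100)) + 1j * rng.standard_normal((257, 2, 100))
A = rng.standard_normal((2, 2))
Y = fd_bss_sketch(A @ S)
```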

10 citations

Proceedings ArticleDOI
01 Nov 2017
TL;DR: FusionGAN as discussed by the authors is a novel genre fusion framework for music generation that integrates the strengths of generative adversarial networks and dual learning, which can effectively integrate the styles of the given domains.
Abstract: FusionGAN is a novel genre fusion framework for music generation that integrates the strengths of generative adversarial networks and dual learning. In particular, the proposed method offers a dual learning extension that can effectively integrate the styles of the given domains. To efficiently quantify the difference among diverse domains and avoid the vanishing gradient issue, FusionGAN provides a Wasserstein based metric to approximate the distance between the target domain and the existing domains. Adopting the Wasserstein distance, a new domain is created by combining the patterns of the existing domains using adversarial learning. Experimental results on public music datasets demonstrated that our approach could effectively merge two genres.
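The Wasserstein-based metric mentioned in the abstract is typically approximated with a critic network trained on the WGAN objective. The PyTorch sketch below shows that generic critic step with weight clipping; the feature dimension, network, and data are placeholders, and it is not the FusionGAN model itself.

```python
import torch
from torch import nn

# A minimal WGAN-style critic loop approximating the Wasserstein distance
# between samples of an existing genre and generated "fusion" samples.
# Feature dimension, network size, and data are placeholders, not the
# architecture from the paper.
critic = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

def critic_step(real, fake, clip=0.01):
    """One critic update: maximise E[D(real)] - E[D(fake)] with weight clipping."""
    opt.zero_grad()
    loss = critic(fake).mean() - critic(real).mean()   # negated objective to minimise
    loss.backward()
    opt.step()
    for p in critic.parameters():                      # enforce the Lipschitz constraint
        p.data.clamp_(-clip, clip)
    return -loss.item()                                # Wasserstein estimate (up to scale)

real = torch.randn(32, 128)     # placeholder features of an existing genre
fake = torch.randn(32, 128)     # placeholder features from the fusion generator
for _ in range(5):
    w_estimate = critic_step(real, fake)
```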

10 citations


Authors

Showing all 959 results

Name                    H-index  Papers  Citations
Wolfgang Heidrich       64       312     15854
Rabab K. Ward           56       549     14364
Lorne A. Whitehead      42       232     6661
Scott J. Daly           41       230     5543
Michael E. Miller       40       225     5264
Alireza Marandi         39       140     6116
Wolfgang Stuerzlinger   35       230     5192
Lars Villemoes          33       180     2815
Joan Serrà              31       139     4046
Dong Tian               31       116     3621
Peng Yin                30       133     2454
Ning Xu                 28       117     2705
Nicolas R. Tsingos      28       110     2749
Panos Nasiopoulos       27       271     3706
Zhibo Chen              27       344     3385
Network Information
Related Institutions (5)
Canon Inc.: 84.1K papers, 901K citations (76% related)
Nokia: 28.3K papers, 695.7K citations (73% related)
Samsung: 163.6K papers, 2M citations (71% related)
Ericsson: 35.3K papers, 584.5K citations (70% related)
Qualcomm: 38.4K papers, 804.6K citations (70% related)

Performance Metrics
No. of papers from the institution in previous years
Year  Papers
2023  1
2022  3
2021  26
2020  82
2019  89
2018  69