
Showing papers on "Hadamard transform published in 2013"


Proceedings Article
16 Jun 2013
TL;DR: Fastfood, an approximation that accelerates kernel methods significantly and achieves similar accuracy to full kernel expansions and Random Kitchen Sinks while being 100x faster and using 1000x less memory, makes kernel methods more practical for applications that have large training sets and/or require real-time prediction.
Abstract: Despite their successes, what makes kernel methods difficult to use in many large scale problems is the fact that computing the decision function is typically expensive, especially at prediction time. In this paper, we overcome this difficulty by proposing Fastfood, an approximation that accelerates such computation significantly. Key to Fastfood is the observation that Hadamard matrices, when combined with diagonal Gaussian matrices, exhibit properties similar to dense Gaussian random matrices. Yet unlike the latter, Hadamard and diagonal matrices are inexpensive to multiply and store. These two matrices can be used in lieu of Gaussian matrices in Random Kitchen Sinks (Rahimi & Recht, 2007), thereby speeding up the computation for a large range of kernel functions. Specifically, Fastfood requires O(n log d) time and O(n) storage to compute n non-linear basis functions in d dimensions, a significant improvement from O(nd) computation and storage, without sacrificing accuracy. We prove that the approximation is unbiased and has low variance. Extensive experiments show that we achieve similar accuracy to full kernel expansions and Random Kitchen Sinks while being 100x faster and using 1000x less memory. These improvements, especially in terms of memory usage, make kernel methods more practical for applications that have large training sets and/or require real-time prediction.
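
For readers who want to see the construction concretely, below is a minimal sketch of a Fastfood-style feature map. It assumes a power-of-two input dimension, a single block of d features, and the Gaussian (RBF) kernel; the function names and exact scaling constants are illustrative rather than the authors' code.

```python
# Minimal sketch of a Fastfood-style feature map (illustrative, not the paper's code).
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform of a length-2^k vector, O(d log d)."""
    x = x.copy()
    d, h = x.shape[0], 1
    while h < d:
        for i in range(0, d, 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

def fastfood_features(x, sigma=1.0, seed=0):
    """Random Fourier features for the RBF kernel built from Hadamard/diagonal matrices."""
    d = x.shape[0]                        # must be a power of two
    rng = np.random.default_rng(seed)
    B = rng.choice([-1.0, 1.0], size=d)   # random sign diagonal
    P = rng.permutation(d)                # random permutation
    G = rng.normal(size=d)                # diagonal Gaussian
    S = np.sqrt(rng.chisquare(d, size=d)) / np.linalg.norm(G)  # row-norm rescaling (approximate)
    v = fwht(B * x)                       # H B x
    v = G * v[P]                          # G Pi H B x
    v = S * fwht(v) / (sigma * np.sqrt(d))
    return np.concatenate([np.cos(v), np.sin(v)]) / np.sqrt(d)
```

The two fwht calls stand in for the dense Gaussian matrix-vector product of Random Kitchen Sinks, which is where the O(n log d) cost quoted above comes from.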

446 citations



Journal ArticleDOI
TL;DR: This study uses Hadamard matrices to construct the first explicit two-parity MDS storage code with optimal repair properties for all single node failures, including the parities, and generalizes this construction to design high-rate maximum-distance separable codes that achieve the optimum repair communication for single systematic node failures.
Abstract: In distributed storage systems that employ erasure coding, the issue of minimizing the total communication required to exactly rebuild a storage node after a failure arises. This repair bandwidth depends on the structure of the storage code and the repair strategies used to restore the lost data. Designing high-rate maximum-distance separable (MDS) codes that achieve the optimum repair communication has been a well-known open problem. Our work resolves, in part, this open problem. In this study, we use Hadamard matrices to construct the first explicit two-parity MDS storage code with optimal repair properties for all single node failures, including the parities. Our construction relies on a novel method of achieving perfect interference alignment over finite fields with a finite number of symbol extensions. We generalize this construction to design m-parity MDS codes that achieve the optimum repair communication for single systematic node failures.

180 citations


Proceedings Article
05 Dec 2013
TL;DR: The algorithm Subsampled Randomized Hadamard Transform Dual Ridge Regression (SRHT-DRR) runs in time O(np log(n)) and works by preconditioning the design matrix with a Randomized Walsh-Hadamard Transform followed by a subsampling of features.
Abstract: We propose a fast algorithm for ridge regression when the number of features is much larger than the number of observations (p ≫ n). The standard way to solve ridge regression in this setting works in the dual space and gives a running time of O(n²p). Our algorithm, Subsampled Randomized Hadamard Transform Dual Ridge Regression (SRHT-DRR), runs in time O(np log(n)) and works by preconditioning the design matrix with a Randomized Walsh-Hadamard Transform followed by a subsampling of features. We provide risk bounds for our SRHT-DRR algorithm in the fixed design setting and show experimental results on synthetic and real datasets.
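
As a rough illustration of the preconditioning step, here is a sketch under simplifying assumptions: p is a power of two, a dense Hadamard matrix is used for clarity where a real implementation would apply a fast O(p log p) Walsh-Hadamard transform, and the function names are illustrative.

```python
# Illustrative SRHT-style sketching of features followed by dual ridge regression.
import numpy as np
from scipy.linalg import hadamard

def srht_features(X, p_sub, seed=0):
    """Map the p features of X down to p_sub via random signs, Hadamard, and column sampling."""
    n, p = X.shape
    rng = np.random.default_rng(seed)
    D = rng.choice([-1.0, 1.0], size=p)            # random sign flip per feature
    H = hadamard(p) / np.sqrt(p)                   # orthonormal Walsh-Hadamard matrix
    cols = rng.choice(p, size=p_sub, replace=False)
    return np.sqrt(p / p_sub) * (X * D) @ H[:, cols]

def dual_ridge(X_sub, y, lam):
    """Dual-space ridge regression on the n x p_sub sketched design matrix."""
    n = X_sub.shape[0]
    alpha = np.linalg.solve(X_sub @ X_sub.T + lam * np.eye(n), y)
    return X_sub.T @ alpha                          # weights in the sketched feature space
```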

128 citations


Journal ArticleDOI
TL;DR: This article addresses the efficacy, in the Frobenius and spectral norms, of an SRHT-based low-rank matrix approximation technique introduced by Woolfe, Liberty, Rokhlin, and Tygert, and produces several results on matrix operations with SRHTs that may be of independent interest.
Abstract: Several recent randomized linear algebra algorithms rely upon fast dimension reduction methods. A popular choice is the subsampled randomized Hadamard transform (SRHT). In this article, we address the efficacy, in the Frobenius and spectral norms, of an SRHT-based low-rank matrix approximation technique introduced by Woolfe, Liberty, Rokhlin, and Tygert. We establish a slightly better Frobenius norm error bound than is currently available, and a much sharper spectral norm error bound (in the presence of reasonable decay of the singular values). Along the way, we produce several results on matrix operations with SRHTs (such as approximate matrix multiplication) that may be of independent interest. Our approach builds upon Tropp's analysis in “Improved Analysis of the Subsampled Randomized Hadamard Transform” [Adv. Adaptive Data Anal., 3 (2011), pp. 115--126].

126 citations


Book ChapterDOI
01 Jan 2013
TL;DR: As previously mentioned, for problems in mathematical physics Hadamard postulated three requirements: a solution should exist, the solution should be unique, and the solution should depend continuously on the data.
Abstract: As previously mentioned, for problems in mathematical physics Hadamard [95] postulated three requirements: a solution should exist, the solution should be unique, and the solution should depend continuously on the data. The third postulate is motivated by the fact that in all applications the data will be measured quantities. Therefore, one wants to make sure that small errors in the data will cause only small errors in the solution. A problem satisfying all three requirements is called well-posed. Otherwise, it is called ill-posed. As shown in the previous chapter, the direct obstacle scattering problem is well-posed.

125 citations


Posted Content
TL;DR: Hermite–Hadamard inequalities for harmonically convex functions via fractional integrals are established, and some Hermite–Hadamard type inequalities for these classes of functions are obtained.
Abstract: In this paper, the author established Hermite-Hadamard's inequalities for harmonically convex functions via fractional integrals and obtained some Hermite-Hadamard type inequalities of these classes of functions.
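
For reference, the classical inequality that such harmonically convex and fractional variants extend is the Hermite-Hadamard inequality: for a convex function f on [a, b],

```latex
\[
  f\!\left(\frac{a+b}{2}\right)
  \;\le\; \frac{1}{b-a}\int_a^b f(x)\,\mathrm{d}x
  \;\le\; \frac{f(a)+f(b)}{2}.
\]
```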

94 citations


Journal ArticleDOI
TL;DR: A proposed Hadamard encoding of labeling that speeds the imaging and improves the signal‐to‐noise ratio efficiency is implemented and evaluated.
Abstract: Creating images of the transit delay from the labeling location to image tissue can aid the optimization and quantification of arterial spin labeling perfusion measurements and may provide diagnostic information independent of perfusion. Unfortunately, measuring transit delay requires acquiring a series of images with different labeling timing that adds to the time cost and increases the noise of the arterial spin labeling study. Here, we implement and evaluate a proposed Hadamard encoding of labeling that speeds the imaging and improves the signal-to-noise ratio efficiency. Volumetric images in human volunteers confirmed the theoretical advantages of Hadamard encoding over sequential acquisition of images with multiple labeling timing. Perfusion images calculated from Hadamard encoded acquisition had reduced signal-to-noise ratio relative to a dedicated perfusion acquisition with either assumed or separately measured transit delays, however.
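
The efficiency gain comes from every acquisition contributing to every decoded labeling delay. A toy numerical illustration of the encode/decode step (not the paper's implementation; the signal values and names below are made up):

```python
# Toy illustration of Hadamard-encoded labeling: n acquisitions decoded in one multiply.
import numpy as np
from scipy.linalg import hadamard

n = 8                                        # number of encoded acquisitions
H = hadamard(n)                              # each row is one +/- labeling pattern
true_signal = np.array([0.0, 5.0, 4.0, 3.0, 2.5, 2.0, 1.5, 1.0])  # hypothetical per-delay signals
noise = np.random.default_rng(0).normal(scale=0.5, size=n)

acquired = H @ true_signal + noise           # every scan mixes all sub-bolus contributions
decoded = H.T @ acquired / n                 # Hadamard decoding separates the delays
print(np.round(decoded, 2))                  # close to true_signal, noise averaged down by sqrt(n)
```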

91 citations


Journal ArticleDOI
TL;DR: It is shown that typical spatially homogeneous QWs with ballistic spreading belong to the universality class, and it is found that the walk treated here with one defect also belongs to the class.
Abstract: We treat three types of measures of the quantum walk (QW) with the spatial perturbation at the origin, which was introduced by Konno (Quantum Inf Proc 9:405, 2010): time averaged limit measure, weak limit measure, and stationary measure. From the first two measures, we see a coexistence of the ballistic and localized behaviors in the walk as a sequential result following (Konno in Quantum Inf Proc 9:405, 2010; Quantum Inf Proc 8:387-399, 2009). We propose a universality class of QWs with respect to weak limit measure. It is shown that typical spatially homogeneous QWs with ballistic spreading belong to the universality class. We find that the walk treated here with one defect also belongs to the class. We mainly consider the walk starting from the origin. However, when we remove this restriction, we obtain a stationary measure of the walk. As a consequence, by choosing parameters in the stationary measure, we get the uniform measure as a stationary measure of the Hadamard walk and a time averaged limit measure of the walk with one defect, respectively.
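
For intuition about the model, here is a minimal simulation sketch of a discrete-time Hadamard walk on the line with an optional phase defect at the origin; the defect parameterization and names are illustrative, a toy stand-in for the one-defect walk analyzed above rather than its exact definition.

```python
# Toy discrete-time Hadamard walk with an optional phase defect at the origin.
import numpy as np

def hadamard_walk(steps, omega=0.0):
    size = 2 * steps + 1
    psi = np.zeros((size, 2), dtype=complex)
    psi[steps] = np.array([1.0, 1.0j]) / np.sqrt(2)   # symmetric initial coin state at x = 0
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    for _ in range(steps):
        coin = psi @ H.T                              # apply the Hadamard coin at every site
        coin[steps] *= np.exp(1j * omega)             # phase defect at the origin (toy choice)
        new = np.zeros_like(psi)
        new[:-1, 0] = coin[1:, 0]                     # coin component 0 steps left
        new[1:, 1] = coin[:-1, 1]                     # coin component 1 steps right
        psi = new
    return (np.abs(psi) ** 2).sum(axis=1)             # position distribution after `steps`
```

With omega = 0 this is the standard Hadamard walk with ballistic spreading; a nonzero defect can leave a localized component near the origin, the kind of coexistence described above.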

89 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the Wick squares of all time derivatives of the quantized Klein-Gordon field have finite fluctuations only if the Wick-ordering is defined with respect to a Hadamard state.
Abstract: Hadamard states are generally considered as the physical states for linear quantized fields on curved spacetimes, for several good reasons. Here, we provide a new motivation for the Hadamard condition: for “ultrastatic slab spacetimes” with compact Cauchy surface, we show that the Wick squares of all time derivatives of the quantized Klein-Gordon field have finite fluctuations only if the Wick-ordering is defined with respect to a Hadamard state. This provides a converse to an important result of Brunetti and Fredenhagen. The recently proposed “S-J (Sorkin-Johnston) states” are shown, generically, to give infinite fluctuations for the Wick square of the time derivative of the field, further limiting their utility as reasonable states. Motivated by the S-J construction, we also study the general question of extending states that are pure (or given by density matrices relative to a pure state) on a double-cone region of Minkowski space. We prove a result for general quantum field theories showing that such states cannot be extended to any larger double-cone without encountering singular behaviour at the spacelike boundary of the inner region. In the context of the Klein-Gordon field this shows that even if an S-J state is Hadamard within the double cone, this must fail at the boundary.

85 citations


Journal ArticleDOI
TL;DR: A systematic method for developing a binary version of a given transform by using the Walsh-Hadamard transform (WHT) is proposed and it is shown that the resulting BDCT corresponds to the well-known sequency-ordered WHT, whereas the BDHT can be considered as a new Hartley-ordering WHT.
Abstract: In this paper, a systematic method for developing a binary version of a given transform by using the Walsh-Hadamard transform (WHT) is proposed. The resulting transform approximates the underlying transform very well, while maintaining all the advantages and properties of WHT. The method is successfully applied for developing a binary discrete cosine transform (BDCT) and a binary discrete Hartley transform (BDHT). It is shown that the resulting BDCT corresponds to the well-known sequency-ordered WHT, whereas the BDHT can be considered as a new Hartley-ordered WHT. Specifically, the properties of the proposed Hartley-ordering are discussed and a shift-copy scheme is proposed for a simple and direct generation of the Hartley-ordering functions. For software and hardware implementation purposes, a unified structure for the computation of the WHT, BDCT, and BDHT is proposed by establishing an elegant relationship between the three transform matrices. In addition, a spiral-ordering is proposed to graphically obtain the BDHT from the BDCT and vice versa. The application of these binary transforms in image compression, encryption and spectral analysis clearly shows the ability of the BDCT (BDHT) in approximating the DCT (DHT) very well.
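
As a concrete reminder of the ordering involved, the sketch below (illustrative, not the paper's construction) turns the naturally ordered Walsh-Hadamard matrix into its sequency-ordered form, the ordering identified above with the binary DCT; the Hartley ordering itself is not shown.

```python
# Reorder the rows of a Hadamard matrix by sequency (number of sign changes).
import numpy as np
from scipy.linalg import hadamard

def sequency_ordered_wht(n):
    H = hadamard(n)
    sign_changes = (np.diff(H, axis=1) != 0).sum(axis=1)   # sequency of each row
    return H[np.argsort(sign_changes)]

W = sequency_ordered_wht(8)
# Row k of W now has exactly k sign changes, mirroring how DCT basis vectors
# are ordered by increasing frequency.
```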

Journal ArticleDOI
TL;DR: It is shown that a strong converse theorem holds for the classical capacity of all entanglement-breaking channels and all Hadamard channels (the complementary channels of the former): the probability of correctly decoding a classical message converges exponentially fast to zero in the limit of many channel uses if the rate of communication exceeds the classical capacity.
Abstract: A strong converse theorem for the classical capacity of a quantum channel states that the probability of correctly decoding a classical message converges exponentially fast to zero in the limit of many channel uses if the rate of communication exceeds the classical capacity of the channel. Along with a corresponding achievability statement for rates below the capacity, such a strong converse theorem enhances our understanding of the capacity as a very sharp dividing line between achievable and unachievable rates of communication. Here, we show that such a strong converse theorem holds for the classical capacity of all entanglement-breaking channels and all Hadamard channels (the complementary channels of the former). These results follow by bounding the success probability in terms of a "sandwiched" Renyi relative entropy, by showing that this quantity is subadditive for all entanglement-breaking and Hadamard channels, and by relating this quantity to the Holevo capacity. Prior results regarding strong converse theorems for particular covariant channels emerge as a special case of our results.

Proceedings Article
05 Dec 2013
TL;DR: This work proposes three methods for fast estimation of ordinary least squares from large amounts of data, which solve the big data problem by subsampling the covariance matrix using either single- or two-stage estimation, with an error bound of O(√p/n).
Abstract: We address the problem of fast estimation of ordinary least squares (OLS) from large amounts of data (n ≫ p). We propose three methods which solve the big data problem by subsampling the covariance matrix using either single- or two-stage estimation. All three run in time of the order of the input size, i.e. O(np), and our best method, Uluru, gives an error bound of O(√p/n) which is independent of the amount of subsampling as long as it is above a threshold. We provide theoretical bounds for our algorithms in the fixed design setting (with Randomized Hadamard preconditioning) as well as in the sub-Gaussian random design setting. We also compare the performance of our methods on synthetic and real-world datasets and show that if observations are i.i.d. sub-Gaussian, then one can directly subsample without the expensive Randomized Hadamard preconditioning and without loss of accuracy.
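
A generic sketch of the covariance-subsampling idea described above; this shows the single-stage idea only, not necessarily the paper's exact estimators (the Uluru correction step is omitted and the names are illustrative).

```python
# Single-stage OLS with a subsampled covariance matrix: O(n_sub p^2 + np) instead of O(np^2).
import numpy as np

def subsampled_ols(X, y, n_sub, seed=0):
    n, p = X.shape
    rows = np.random.default_rng(seed).choice(n, size=n_sub, replace=False)
    Xs = X[rows]
    cov_hat = Xs.T @ Xs / n_sub        # p x p covariance estimated from the subsample
    xty = X.T @ y / n                  # uses all n observations, O(np)
    return np.linalg.solve(cov_hat, xty)
```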

Journal ArticleDOI
TL;DR: This paper proposes a novel framework called DCast for distributed video coding and transmission over wireless networks, which is different from existing distributed schemes in three aspects, and proposes a power distortion optimization algorithm to replace the traditional rate distortion optimization.
Abstract: This paper proposes a novel framework called DCast for distributed video coding and transmission over wireless networks, which is different from existing distributed schemes in three aspects. First, coset quantized DCT coefficients and motion data are directly delivered to the channel coding layer without syndrome or entropy coding. Second, transmission power is directly allocated to coset data and motion data according to their distributions and magnitudes without forward error correction. Third, these data are transformed by Hadamard and then directly mapped using a dense constellation (64K-QAM) for transmission without Gray coding. One of the most important properties in this framework is that the coding and transmission rate is fixed and distortion is minimized by allocating the transmission power. Thus, we further propose a power distortion optimization algorithm to replace the traditional rate distortion optimization. This framework avoids the annoying cliff effect caused by the mismatch between transmission rate and channel condition. In multicast, each user can get approximately the best quality matching its channel condition. Our experimental results show that the proposed DCast outperforms the typical solution using H.264 over 802.11 by up to 8 dB in video PSNR in video broadcast. Even in video unicast, the proposed DCast is still comparable to the typical solution.

Journal ArticleDOI
01 Jun 2013-Analysis
TL;DR: In this article, the authors introduce a notion of geometric-arithmetically s-convex functions, establish some inequalities of Hermite-Hadamard type for geometric-arithmetically s-convex functions, and apply these inequalities to construct inequalities for special means.
Abstract: In the paper, the authors introduce a notion “geometric-arithmetically s-convex function”, establish some inequalities of Hermite–Hadamard type for geometric-arithmetically s-convex functions, and apply these inequalities to construct inequalities for special means.

Journal ArticleDOI
TL;DR: In this paper, a quantization scheme for the vector potential on globally hyperbolic spacetimes is developed, which realizes it as a locally covariant conformal quantum field theory, and employs a bulk-to-boundary correspondence procedure in order to identify for the underlying field algebra a distinguished ground state which is of Hadamard form.
Abstract: We develop a quantization scheme for the vector potential on globally hyperbolic spacetimes which realizes it as a locally covariant conformal quantum field theory. This result allows us to employ on a large class of backgrounds, which are asymptotically flat at null infinity, a bulk-to-boundary correspondence procedure in order to identify for the underlying field algebra a distinguished ground state which is of Hadamard form.

Journal ArticleDOI
TL;DR: In this article, a Mittag-Leffler-type function is introduced and its properties in relation to some integro-differential operators involving Hadamard fractional derivatives or hyper-Bessel-type operators.
Abstract: In this paper, we introduce a novel Mittag–Leffler-type function and study its properties in relation to some integro-differential operators involving Hadamard fractional derivatives or hyper-Bessel-type operators. We discuss then the utility of these results to solve some integro-differential equations involving these operators by means of operational methods. We show the advantage of our approach through some examples. Among these, an application to a modified Lamb–Bateman integral equation is presented.

Journal ArticleDOI
TL;DR: In this paper, Hermite-Hadamard type inequalities involving Hadamard fractional integrals for the functions satisfying monotonicity, convexity and s-e-condition are studied.
Abstract: In this paper, Hermite-Hadamard type inequalities involving Hadamard fractional integrals for the functions satisfying monotonicity, convexity and s-e-condition are studied. Three classes of left-type Hadamard fractional integral identities including the first-order derivative are firstly established. Some interesting Hermite-Hadamard type integral inequalities involving Hadamard fractional integrals are also presented by using the established integral identities. Finally, some applications to special means of real numbers are given. MSC: 26A33; 26A51; 26D15
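
For readers unfamiliar with the operator that recurs throughout these entries, the left-sided Hadamard fractional integral of order α > 0 is, in one common notation,

```latex
\[
  \bigl({}_{H}J^{\alpha}_{a+} f\bigr)(x)
  \;=\; \frac{1}{\Gamma(\alpha)} \int_a^x
        \left(\ln\frac{x}{t}\right)^{\alpha-1} f(t)\,\frac{\mathrm{d}t}{t},
  \qquad x > a > 0,
\]
```

with a logarithmic kernel in place of the power-law kernel of the Riemann-Liouville integral.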

Journal Article
TL;DR: In this paper, the authors studied nonlinear fractional differential equations with Hadamard derivative and Ulam stability in the weighted space of continuous functions and derived sufficient conditions for the existence of solutions and a sufficient condition for the nonexistence of blowing-up solutions.
Abstract: In this paper, we study nonlinear fractional differential equations with Hadamard derivative and Ulam stability in the weighted space of continuous functions. Firstly, some new nonlinear integral inequalities with Hadamard type singular kernel are established, which can be used in the theory of certain classes of fractional differential equations. Secondly, some sufficient conditions for existence of solutions are given by using fixed point theorems via an a priori estimate in the weighted space of continuous functions. Meanwhile, a sufficient condition for nonexistence of blowing-up solutions is derived. Thirdly, four types of Ulam-Hyers stability definitions for fractional differential equations with Hadamard derivative are introduced, and Ulam-Hyers stability and generalized Ulam-Hyers-Rassias stability results are presented. Finally, some examples and counterexamples on Ulam-Hyers stability are given.

Journal ArticleDOI
TL;DR: The proximal point algorithm (PPA for short) for variational inequalities with pseudomonotone vector fields on Hadamard manifolds is investigated, and it is proved that the sequence generated by PPA is well defined and converges to a solution of the variational inequality, whenever it exists.
Abstract: In this paper, we investigate the proximal point algorithm (PPA for short) for variational inequalities with pseudomonotone vector fields on Hadamard manifolds. Under weaker assumptions than monotonicity, we show that the sequence generated by PPA is well defined and prove that the sequence converges to a solution of the variational inequality, whenever it exists. The results presented in this paper generalize and improve some corresponding known results in the literature.

Journal ArticleDOI
TL;DR: It is shown empirically that after proper randomization, the structure of the operators does not significantly affect the performance of the solver, and for some specially designed spatially coupled operators, this allows a computationally fast and memory-efficient reconstruction in compressed sensing up to the information-theoretical limit.
Abstract: We study the behavior of Approximate Message-Passing, a solver for linear sparse estimation problems such as compressed sensing, when the i.i.d. matrices, for which it has been specifically designed, are replaced by structured operators, such as Fourier and Hadamard ones. We show empirically that after proper randomization, the structure of the operators does not significantly affect the performance of the solver. Furthermore, for some specially designed spatially coupled operators, this allows a computationally fast and memory-efficient reconstruction in compressed sensing up to the information-theoretical limit. We also show how this approach can be applied to sparse superposition codes, allowing the Approximate Message-Passing decoder to perform at large rates for moderate block length.
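
The kind of randomized structured operator described above can be sketched as a random sign diagonal, a Hadamard transform, and a random subset of rows; the version below builds the matrix densely for clarity (a fast Walsh-Hadamard transform would be used in practice), and the spatially coupled construction is not shown.

```python
# Randomized, subsampled Hadamard operator as a stand-in for an i.i.d. measurement matrix.
import numpy as np
from scipy.linalg import hadamard

def randomized_hadamard_operator(N, M, seed=0):
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=N)      # random sign diagonal
    rows = rng.choice(N, size=M, replace=False)  # random row subset (M measurements)
    H = hadamard(N) / np.sqrt(N)

    def apply(x):                                # y = A x
        return H[rows] @ (signs * x)

    def apply_T(y):                              # A^T y, needed by the AMP iterations
        return signs * (H[rows].T @ y)

    return apply, apply_T
```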

Proceedings ArticleDOI
19 May 2013
TL;DR: A unified architecture for IDCT and DCT is devised through algorithm optimization; one proposed engine provides the throughput for 8K-UHDTV real-time decoding and also fully supports real-time encoding of HDTV 1080p@20fps at a 311 MHz clock speed.
Abstract: A great number of two-dimensional (2D) discrete cosine transforms and Hadamard transforms are executed in HEVC. To support a real-time UHDTV codec, a fully pipelined variable-block-size 2D transform engine with efficient hardware utilization is proposed to handle the DCT/IDCT and Hadamard transforms. The efficiency comes from two aspects. First, the hardware for small-size transforms is fully reused by other larger-size transform processing. Second, we devise a unified architecture for IDCT and DCT through algorithm optimization. The maximum clock speed of our design is 311 MHz under 90nm technology. Experiments demonstrate that, at a 47 MHz clock frequency, one proposed engine provides the throughput for 8K-UHDTV real-time decoding, and it also fully supports the real-time encoding of HDTV 1080p@20fps at a 311 MHz clock speed.

Journal ArticleDOI
TL;DR: Among the many predefined modes, the intra-prediction mode is chosen using the difference between the minimum and second minimum of the Hadamard-transform-based rate-distortion cost estimation, achieving nearly the same coding performance as HEVC test model 2.1.
Abstract: A fast intra-prediction method is proposed for High Efficiency Video Coding (HEVC) using a fast intra-mode decision and a fast coding unit (CU) size decision. HEVC supports very sophisticated intra modes and a recursive quadtree-based CU structure. To provide a high coding efficiency, the mode and CU size are selected in a rate-distortion optimized manner. This causes a high computational complexity in the encoder, and, for practical applications, the complexity should be significantly reduced. In this paper, among the many predefined modes, the intra-prediction mode is chosen without rate-distortion optimization processes, instead using the difference between the minimum and second minimum of the rate-distortion cost estimation based on the Hadamard transform. The experimental results show that the proposed method achieves a 49.04% reduction in the intra-prediction time and a 32.74% reduction in the total encoding time with nearly the same coding performance as that of HEVC test model 2.1.
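
The Hadamard-based cost referred to here is typically a sum of absolute transformed differences (SATD) of the prediction residual; a minimal sketch for an 8x8 block follows (a simplified stand-in, not the paper's implementation; normalization conventions differ between encoders).

```python
# SATD: L1 norm of the 2D Hadamard transform of the prediction residual.
import numpy as np
from scipy.linalg import hadamard

def satd8x8(original, predicted):
    H = hadamard(8)
    residual = original.astype(float) - predicted.astype(float)
    return np.abs(H @ residual @ H.T).sum() / 8   # scaling convention varies
```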

Journal ArticleDOI
01 Dec 2013-Analysis
TL;DR: In this article, the authors introduce the notion of log-h-convex functions and establish several Hermite-Hadamard type integral inequalities for these kinds of functions.
Abstract: In the paper, the authors introduce a notion “log-h-convex functions” and establish several Hermite–Hadamard type integral inequalities for this kind of function.

Journal ArticleDOI
TL;DR: In this article, Hermite-Hadamard type inequalities involving Hadamard fractional integrals via convex functions are studied, an important integral identity and new Hermite-Hadamard type integral inequalities are presented, and some applications to special means of real numbers are given.
Abstract: In this paper, Hermite-Hadamard type inequalities involving Hadamard fractional integrals via convex functions are studied. An important integral identity and new Hermite-Hadamard type integral inequalities involving Hadamard fractional integrals are also presented. Some applications to special means of real numbers are given.

Journal ArticleDOI
TL;DR: In this paper, a modification of the Sorkin-Johnston states for scalar free quantum fields on a class of globally hyperbolic spacetimes possessing compact Cauchy hypersurfaces is presented.
Abstract: We present a modification of the recently proposed Sorkin-Johnston states for scalar free quantum fields on a class of globally hyperbolic spacetimes possessing compact Cauchy hypersurfaces. The modification relies on a smooth cutoff of the commutator function and leads always to Hadamard states, in contrast to the original Sorkin-Johnston states. The modified Sorkin-Johnston states are, however, due to the smoothing no longer uniquely associated to the spacetime.

Journal ArticleDOI
TL;DR: A class of functions from Niho exponents with four-valued Walsh transform is obtained for any prime by a uniform method, and the distribution of the Walsh transform values is also completely determined.
Abstract: In this paper, a class of functions from Niho exponents with four-valued Walsh transform is obtained for any prime by a uniform method, and the distribution of the Walsh transform values is also completely determined. In particular, this class of functions is proven to be bent for a special case. Although it is shown that the obtained bent functions are equivalent to the Leander-Kholosha's class of bent functions, a direct and much simpler proof for the bentness of this kind of Niho functions is provided.
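
As background for the bentness claim: a function is bent exactly when its Walsh spectrum is flat, with every value of absolute value 2^(n/2) in the binary case. A small generic sketch of that check for a Boolean function (the p = 2 case only, not the Niho construction itself, which lives over larger fields):

```python
# Walsh-Hadamard spectrum of a Boolean function and a flat-spectrum (bentness) check.
import numpy as np
from scipy.linalg import hadamard

def walsh_spectrum(truth_table):
    """Walsh-Hadamard spectrum of a Boolean function given as its 2^n truth table."""
    signs = 1 - 2 * np.asarray(truth_table)   # (-1)^f(x), natural index order
    return hadamard(len(signs)) @ signs       # row a gives sum_x (-1)^(f(x) + a.x)

f = [0, 0, 0, 1]                              # f(x1, x2) = x1 AND x2, n = 2
W = walsh_spectrum(f)
print(W, np.all(np.abs(W) == 2))              # flat spectrum of magnitude 2^(n/2): bent
```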

Journal ArticleDOI
01 Dec 2013
TL;DR: A novel and adaptive visible/invisible watermarking scheme is proposed for embedding and extracting a digital watermark into/from an image, using an adaptive procedure for calculating the scaling factor, or scaling strength, with a sigmoid function in the Hadamard transform domain.
Abstract: In this work, a novel and adaptive visible/invisible watermarking scheme for embedding and extracting a digital watermark into/from an image is proposed. The proposed method uses an adaptive procedure for calculating the scaling factor, or scaling strength, using a sigmoid function in the Hadamard transform domain. The value of the scaling factor is governed by a control parameter. The control parameter can be adjusted to make the watermarking scheme either visible or invisible. The proposed methodology facilitates preserving ownership rights and preventing the piracy of digital data, which are considered to be the basic needs of digital watermarking. As the proposed watermarking process is carried out in the Hadamard transform domain, it is more robust to image/signal processing attacks. The experimental results and performance analysis confirm the efficiency of the proposed scheme.
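
A very rough sketch of what sigmoid-weighted embedding in the Hadamard domain can look like; the block statistic, embedding rule, and parameter names here are illustrative guesses rather than the paper's scheme.

```python
# Illustrative block-wise watermark embedding in the Hadamard transform domain.
import numpy as np
from scipy.linalg import hadamard

def embed_block(block, wm_block, k=0.02):
    """Embed an 8x8 watermark block into an 8x8 image block in the Hadamard domain."""
    H = hadamard(8) / np.sqrt(8)                   # orthonormal, symmetric
    B, W = H @ block @ H.T, H @ wm_block @ H.T
    alpha = 1.0 / (1.0 + np.exp(-k * np.abs(B)))   # sigmoid scaling per coefficient
    return H @ (B + alpha * W) @ H.T               # inverse transform of the marked block
```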

Proceedings ArticleDOI
Jia Zhu, Zhenyu Liu, Dongsheng Wang, Qingrui Han, Yang Song
01 Sep 2013
TL;DR: Simplified rate-distortion (RD) cost estimation algorithms, based on the Hadamard transform and on distortion evaluation without signal reconstruction, are proposed, and one proposed engine fulfills the throughput requirement for 4K-UHDTV@28fps real-time encoding.
Abstract: The emerging video coding standard, High Efficiency Video Coding (HEVC), aims at doubling coding efficiency of H.264/AVC. In the intra encoding, Rate-Distortion Optimization (RDO) processes are employed to determine the best prediction mode. RDO based mode decision accounts for 35-39% coding time, because the large scale two-dimensional(2D) DCT/IDCT introduces a plethora of computation- and hardware-consuming multiplications. In this paper, the simplified Rate-Distortion (RD) cost estimation algorithms, which are based on the Hadamard transform and the distortion evaluation without the signal reconstruction, are proposed. When embedded to HEVC test Model(HM-5.2), our methods averagely achieve 16.1% time saving at the price of 0.055dB BD-PSNR loss, or equivalently 1.27% BD-BR increasing in intra coding. The corresponding VLSI design of the proposed algorithms is implemented with TSMC 90nm 1P9M technology. The maximum clock speed is 418 MHz under the worst work conditions (125°C, 0.9V). As compared with the primitive design, 64.9% hardware cost can be saved by our schemes. One proposed engine fulfills the throughput for 4K-UHDTV@28fps real-time encoding.