
Showing papers on "Bilateral filter published in 2006"


Book ChapterDOI
07 May 2006
TL;DR: A new signal-processing analysis of the bilateral filter is proposed, which complements the recent studies that analyzed it as a PDE or as a robust statistics estimator and allows for a novel bilateral filtering acceleration using a downsampling in space and intensity.
Abstract: The bilateral filter is a nonlinear filter that smoothes a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and a fast version has been proposed. Unfortunately, little is known about the accuracy of such acceleration. In this paper, we propose a new signal-processing analysis of the bilateral filter, which complements the recent studies that analyzed it as a PDE or as a robust statistics estimator. Importantly, this signal-processing perspective allows us to develop a novel bilateral filtering acceleration using a downsampling in space and intensity. This affords a principled expression of the accuracy in terms of bandwidth and sampling. The key to our analysis is to express the filter in a higher-dimensional space where the signal intensity is added to the original domain dimensions. The bilateral filter can then be expressed as simple linear convolutions in this augmented space followed by two simple nonlinearities. This allows us to derive simple criteria for downsampling the key operations and to achieve important acceleration of the bilateral filter. We show that, for the same running time, our method is significantly more accurate than previous acceleration techniques.
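As a reference point for the acceleration discussed above, the exact bilateral filter can be sketched as a brute-force double loop over a spatial window, weighting each neighbor by a spatial and a range Gaussian. This is an illustrative sketch (function name and parameter defaults are assumptions, not the paper's implementation); its per-pixel cost grows quadratically with the window radius, which is the cost the downsampling scheme avoids.

```python
import math

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.25, radius=4):
    """Brute-force bilateral filter on a 2D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = norm = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))  # spatial weight
                        dr = img[ny][nx] - img[y][x]
                        wr = math.exp(-(dr * dr) / (2 * sigma_r ** 2))            # range weight
                        acc += ws * wr * img[ny][nx]
                        norm += ws * wr
            out[y][x] = acc / norm
    return out
```

On a step edge the range weight suppresses contributions from across the edge, so the edge survives while flat regions are smoothed.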

675 citations


Journal ArticleDOI
TL;DR: A novel adaptive and patch-based approach is proposed for image denoising and representation based on a pointwise selection of small image patches of fixed size in the variable neighborhood of each pixel to associate with each pixel the weighted sum of data points within an adaptive neighborhood.
Abstract: A novel adaptive and patch-based approach is proposed for image denoising and representation. The method is based on a pointwise selection of small image patches of fixed size in the variable neighborhood of each pixel. Our contribution is to associate with each pixel the weighted sum of data points within an adaptive neighborhood, in a manner that balances the accuracy of approximation and the stochastic error at each spatial position. This method is general and can be applied under the assumption that there exist repetitive patterns in a local neighborhood of a point. By introducing spatial adaptivity, we extend the work described earlier by Buades et al., which can be considered an extension of bilateral filtering to image patches. Finally, we propose a nearly parameter-free algorithm for image denoising. The method is applied to both artificially corrupted (white Gaussian noise) and real images, and the performance is very close to, and in some cases even surpasses, that of the already published denoising methods.
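The patch-based weighting this work builds on (the Buades et al. scheme it extends) can be sketched for a single pixel as follows. The adaptive neighborhood selection that constitutes the paper's contribution is omitted here, and all names and parameter values are illustrative.

```python
import math

def nl_means_pixel(img, y, x, patch=1, search=3, h=0.3):
    """Estimate one pixel as a weighted average of nearby pixels,
    weighted by the similarity of the patches around them (NL-means idea)."""
    H, W = len(img), len(img[0])

    def patch_dist(y0, x0, y1, x1):
        # mean squared difference between the two patches, over valid positions
        d, n = 0.0, 0
        for dy in range(-patch, patch + 1):
            for dx in range(-patch, patch + 1):
                a, b, c, e = y0 + dy, x0 + dx, y1 + dy, x1 + dx
                if 0 <= a < H and 0 <= b < W and 0 <= c < H and 0 <= e < W:
                    d += (img[a][b] - img[c][e]) ** 2
                    n += 1
        return d / max(n, 1)

    acc = norm = 0.0
    for ny in range(max(0, y - search), min(H, y + search + 1)):
        for nx in range(max(0, x - search), min(W, x + search + 1)):
            w = math.exp(-patch_dist(y, x, ny, nx) / (h * h))
            acc += w * img[ny][nx]
            norm += w
    return acc / norm
```

Pixels whose surrounding patch resembles the center patch contribute strongly; an isolated outlier is pulled toward the values of its repetitive context.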

486 citations


Journal ArticleDOI
Ben Weiss1
01 Jul 2006
TL;DR: This work introduces a CPU-based, vectorizable O(log r) algorithm for median filtering, to the authors' knowledge the most efficient yet developed; it extends to images of any bit-depth and can also be adapted to perform bilateral filtering.
Abstract: Median filtering is a cornerstone of modern image processing and is used extensively in smoothing and de-noising applications. The fastest commercial implementations (e.g. in Adobe® Photoshop® CS2) exhibit O(r) runtime in the radius of the filter, which limits their usefulness in realtime or resolution-independent contexts. We introduce a CPU-based, vectorizable O(log r) algorithm for median filtering, to our knowledge the most efficient yet developed. Our algorithm extends to images of any bit-depth, and can also be adapted to perform bilateral filtering. On 8-bit data our median filter outperforms Photoshop's implementation by up to a factor of fifty.
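Weiss's O(log r) hierarchy of partial histograms is intricate, but the incrementally updated histogram technique such filters build on (Huang's classic method) conveys the core idea and can be sketched in 1D for 8-bit data. Function name and structure here are illustrative, not the paper's algorithm.

```python
def sliding_median(values, radius):
    """1D median filter on 8-bit values using an incrementally updated
    histogram: slide the window by one bin update per step instead of
    re-sorting, then scan the histogram for the median."""
    n = len(values)
    out = []
    hist = [0] * 256
    # initialise the window around index 0 (clipped at the borders)
    count = min(n, radius + 1)
    for i in range(count):
        hist[values[i]] += 1
    for i in range(n):
        # median = smallest bin where the cumulative count reaches half
        target = (count + 1) // 2
        c = 0
        for v in range(256):
            c += hist[v]
            if c >= target:
                out.append(v)
                break
        # slide the window: add the entering sample, drop the leaving one
        if i + radius + 1 < n:
            hist[values[i + radius + 1]] += 1
            count += 1
        if i - radius >= 0:
            hist[values[i - radius]] -= 1
            count -= 1
    return out
```

A single impulse never survives the median, which is why the filter is a de-noising workhorse.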

380 citations


Journal Article
TL;DR: In this paper, a multi-cue driven adaptive bilateral filter is proposed to regularize the flow computation, which is able to achieve a smoothly varying optical flow field with highly desirable motion discontinuities.
Abstract: Using variational approaches to estimate optical flow between two frames, the flow discontinuities between different motion fields are usually not distinguished even when an anisotropic diffusion operator is applied. In this paper, we propose a multi-cue driven adaptive bilateral filter to regularize the flow computation, which is able to achieve a smoothly varying optical flow field with highly desirable motion discontinuities. First, we separate the traditional one-step variational updating model into a two-step filtering-based updating model. Then, employing our occlusion detector, we reformulate the energy functional of optical flow estimation by explicitly introducing an occlusion term to balance the energy loss due to occlusions or mismatches. Furthermore, based on the two-step updating framework, a novel multi-cue driven bilateral filter is proposed to substitute the original anisotropic diffusion process, and it is able to adaptively control the diffusion process according to the occlusion detection, image intensity dissimilarity, and motion dissimilarity. After applying our approach to various video sources (movie and TV) in the presence of occlusion, motion blurring, non-rigid deformation, and weak texture, we generate a spatially coherent flow field between each pair of input frames and detect more accurate flow discontinuities along the motion boundaries.

244 citations


Book ChapterDOI
07 May 2006
TL;DR: A novel multi-cue driven adaptive bilateral filter is proposed to substitute the original anisotropic diffusion process, and it is able to adaptively control the diffusion process according to the occlusion detection, image intensity dissimilarity, and motion dissimilarity.
Abstract: Using variational approaches to estimate optical flow between two frames, the flow discontinuities between different motion fields are usually not distinguished even when an anisotropic diffusion operator is applied. In this paper, we propose a multi-cue driven adaptive bilateral filter to regularize the flow computation, which is able to achieve a smoothly varying optical flow field with highly desirable motion discontinuities. First, we separate the traditional one-step variational updating model into a two-step filtering-based updating model. Then, employing our occlusion detector, we reformulate the energy functional of optical flow estimation by explicitly introducing an occlusion term to balance the energy loss due to occlusions or mismatches. Furthermore, based on the two-step updating framework, a novel multi-cue driven bilateral filter is proposed to substitute the original anisotropic diffusion process, and it is able to adaptively control the diffusion process according to the occlusion detection, image intensity dissimilarity, and motion dissimilarity. After applying our approach to various video sources (movie and TV) in the presence of occlusion, motion blurring, non-rigid deformation, and weak texture, we generate a spatially coherent flow field between each pair of input frames and detect more accurate flow discontinuities along the motion boundaries.

237 citations


Journal ArticleDOI
TL;DR: This paper first explains the staircase effect by finding the subjacent partial differential equation (PDE) of the neighborhood filter, and shows that this ill-posed PDE is a variant of another famous image processing model, the Perona-Malik equation, which suffers the same artifacts.
Abstract: Many classical image denoising methods are based on a local averaging of the color, which increases the signal/noise ratio. One of the most used algorithms is the neighborhood filter by Yaroslavsky or sigma filter by Lee, also called in a variant "SUSAN" by Smith and Brady or "Bilateral filter" by Tomasi and Manduchi. These filters replace the actual value of the color at a point by an average of all values of points which are simultaneously close in space and in color. Unfortunately, these filters show a "staircase effect," that is, the creation in the image of flat regions separated by artifact boundaries. In this paper, we first explain the staircase effect by finding the subjacent partial differential equation (PDE) of the filter. We show that this ill-posed PDE is a variant of another famous image processing model, the Perona-Malik equation, which suffers the same artifacts. As we prove, a simple variant of the neighborhood filter solves the problem. We find the subjacent stable PDE of this variant. Finally, we apply the same correction to the recently introduced NL-means algorithm which had the same staircase effect, for the same reason.
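For concreteness, the neighborhood filter under analysis can be sketched in 1D: each sample is replaced by an average of nearby samples, weighted only by how close their values are. The parameter rho plays the role of the color-distance scale; the function name and defaults are illustrative, and the staircase correction the paper derives is not included.

```python
import math

def neighborhood_filter(signal, rho=0.3, radius=5):
    """Yaroslavsky/sigma filter on a 1D signal: average the nearby samples,
    weighting each by how close its value lies to the centre value."""
    n = len(signal)
    out = []
    for i in range(n):
        acc = norm = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = math.exp(-((signal[j] - signal[i]) ** 2) / (rho * rho))
            acc += w * signal[j]
            norm += w
        out.append(acc / norm)
    return out
```

A step edge is preserved because samples on the other side of the edge receive negligible weight; the staircase artifact the paper explains appears when this filter is iterated on smooth ramps.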

231 citations


Journal ArticleDOI
TL;DR: In this article, B-spline channel smoothing (BSS) is proposed for robust smoothing of low-level signal features, which consists of three steps: encoding of the signal features into channels, averaging of channels, and decoding of the channels.
Abstract: In this paper, we present a new and efficient method to implement robust smoothing of low-level signal features: B-spline channel smoothing. This method consists of three steps: encoding of the signal features into channels, averaging of the channels, and decoding of the channels. We show that linear smoothing of channels is equivalent to robust smoothing of the signal features if we make use of quadratic B-splines to generate the channels. The linear decoding from B-spline channels allows the derivation of a robust error norm, which is very similar to Tukey's biweight error norm. We compare channel smoothing with three other robust smoothing techniques: nonlinear diffusion, bilateral filtering, and mean-shift filtering, both theoretically and on a 2D orientation-data smoothing task. Channel smoothing is found to be superior in four respects: it has a lower computational complexity, it is easy to implement, it chooses the global minimum error instead of the nearest local minimum, and it can also be used on nonlinear spaces, such as orientation space.

110 citations


Book ChapterDOI
01 Jan 2006
TL;DR: In this paper, a unified framework of functional minimisation combining nonlocal data and nonlocal smoothness terms is presented, which can be used for combining the advantages of known filters.
Abstract: This paper deals with establishing relations between a number of widely-used nonlinear filters for digital image processing. We cover robust statistical estimation with (local) M-estimators, local mode filtering in image or histogram space, bilateral filtering, nonlinear diffusion, and regularisation approaches. Although these methods originate in different mathematical theories, we show that their implementation reveals a highly similar structure. We demonstrate that all these methods can be cast into a unified framework of functional minimisation combining nonlocal data and nonlocal smoothness terms. This unification contributes to a better understanding of the individual methods, and it opens the way to new techniques combining the advantages of known filters.

106 citations


Journal ArticleDOI
TL;DR: This paper presents a robust general approach that applies bilateral filters to recover sharp edges on feature-insensitive sampled triangular meshes, and shows that the proposed method can robustly reconstruct sharp edges on such meshes.
Abstract: A variety of computer graphics applications sample surfaces of 3D shapes in a regular grid without making the sampling rate adaptive to the surface curvature or sharp features. Triangular meshes that interpolate or approximate these samples usually exhibit relatively large error around the insensitively sampled sharp features. This paper presents a robust general approach that applies bilateral filters to recover sharp edges on such feature-insensitive sampled triangular meshes. Motivated by the impressive results of bilateral filtering for mesh smoothing and denoising, we adopt it to govern the sharpening of triangular meshes. After recognizing the regions that embed sharp features, we recover the sharpness geometry through bilateral filtering, followed by iteratively modifying the given mesh's connectivity to form single-wide sharp edges that can be easily detected by their dihedral angles. We show that the proposed method can robustly reconstruct sharp edges on feature-insensitive sampled meshes.

63 citations


Journal ArticleDOI
TL;DR: A novel image-hiding scheme based on a dynamic programming strategy is proposed that offers better stego-image quality than a number of well-accepted schemes.

59 citations


Patent
30 Nov 2006
TL;DR: In this paper, a method of processing an array of pixels captured by an image capture device is proposed, including providing a first two-dimensional array having first and second groups of pixels.
Abstract: A method of processing an array of pixels captured by an image capture device, includes providing a first two-dimensional array having first and second groups of pixels wherein pixels from the first group of pixels have narrower spectral photoresponses than pixels from the second group of pixels and wherein the first group of pixels has individual pixels that have spectral photoresponses that correspond to a set of at least two colors and the placement of the first and second groups of pixels define a pattern that has a minimal repeating unit arranged to permit the reproduction of a captured color image under different lighting conditions; determining, in response to ambient lighting conditions, whether panchromatic pixels are to be combined with color pixels; combining pixels to produce a second two-dimensional array of pixels which has fewer pixels than the first two-dimensional array of pixels; and correcting the color pixels.

Patent
Noboru Yamaguchi1, Tomoya Kodama1
08 Nov 2006
TL;DR: In this article, a decoding apparatus has a de-ringing filter to filter image data decoded from encoded image data by orthogonal transformation encoding, where a subtracter generates an absolute value of difference between a value of a filter object pixel and a value selected from pixels surrounding the filter object pixels on the image data.
Abstract: A decoding apparatus having a de-ringing filter to filter image data decoded from encoded image data by orthogonal transformation encoding. In the de-ringing filter, a subtracter generates an absolute value of difference between a value of a filter object pixel and a value of at least one pixel selected from pixels surrounding the filter object pixel on the image data. A comparator compares the absolute value with a threshold. A selector outputs the value of the at least one pixel if the absolute value is less than the threshold, and outputs the value of the filter object pixel if the absolute value is not less than the threshold. A convolution operator convolutes a filter coefficient with the value output from the selector, and outputs a convolution result as a filtered value of the filter object pixel.
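The subtracter/comparator/selector logic described above can be sketched for a single output pixel. This is a hedged illustration of the patented scheme: the function name, the coefficient layout (centre coefficient first), and the parameter values are assumptions.

```python
def dering_pixel(center, neighbors, threshold, coeffs):
    """De-ringing idea: any neighbor whose difference from the centre value
    meets or exceeds the threshold is replaced by the centre value before
    the convolution, so strong edges are not blurred."""
    assert len(neighbors) == len(coeffs) - 1  # coeffs[0] belongs to the centre
    vals = [center]
    for v in neighbors:
        # selector: keep similar neighbors, substitute the centre for outliers
        vals.append(v if abs(v - center) < threshold else center)
    # convolution step: weighted sum of the selected values
    return sum(c * v for c, v in zip(coeffs, vals))
```

With the outlier substituted, the filtered value stays on the centre's side of an edge while ringing oscillations within the threshold are averaged away.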

Patent
30 Nov 2006
TL;DR: In this article, a method of processing an array of pixels captured by an image capture device, having a first two-dimensional array of pixel from the image capturing device, some of which are color pixels, and others are panchromatic pixels, was proposed.
Abstract: A method of processing an array of pixels captured by an image capture device, having a first two-dimensional array of pixels from the image capture device, some of which are color pixels, and some of which are panchromatic pixels; determining in response to ambient lighting conditions, whether panchromatic pixels are to be combined with color pixels; combining pixels to produce a second two-dimensional array of pixels which has fewer pixels than the first two-dimensional array of pixels; and correcting the color pixels.

Journal Article
TL;DR: In this article, a particular class of fuzzy metrics is used to represent the spatial and photometric relations between the color pixels adapting the classical bilateral filtering, which is more appropriate than the classical measures used.
Abstract: Bilateral filtering is a well-known technique for smoothing gray-scale and color images while preserving edges and image details by means of an appropriate nonlinear combination of the color vectors in a neighborhood. The pixel colors are combined based on their spatial closeness and photometric similarity. In this paper, a particular class of fuzzy metrics is used to represent the spatial and photometric relations between the color pixels adapting the classical bilateral filtering. It is shown that the use of these fuzzy metrics is more appropriate than the classical measures used.

19 Oct 2006
TL;DR: In this article, the authors derived the Cramer-Rao lower bound of image registration, showed that iterative gradient-based estimators achieve this performance limit, and proposed a robust image restoration method using a Gaussian error norm rather than a quadratic error norm.
Abstract: This thesis concerns the use of spatial and tonal adaptivity in improving the resolution of aliased image sequences under scene or camera motion. Each of the five content chapters focuses on a different subtopic of super-resolution: image registration (chapter 2), image fusion (chapter 3 and 4), super-resolution restoration (chapter 5), and super-resolution synthesis (chapter 6). Chapter 2 derives the Cramer-Rao lower bound of image registration and shows that iterative gradient-based estimators achieve this performance limit. Chapter 3 presents an algorithm for image fusion of irregularly sampled and uncertain data using robust normalized convolution. The size and shape of the fusion kernel is adapted to local curvilinear structures in the image. Each data sample is assigned an intensity-related certainty value to limit the influence of outliers. Chapter 4 presents two fast implementations of the signal-adaptive bilateral filter. The xy-separable implementation filters the image along sampling axes, while the uv-separable implementation filters the image along gauge coordinates. Chapter 5 presents a robust image restoration method using Gaussian error norm rather than quadratic error norm. The robust solution resembles the maximum-likelihood solution under Gaussian noise, and it is not susceptible to outliers. A series of objective quality measures confirms the superiority of this solution to current super-resolution algorithms in the literature. Chapter 6 proposes a super-resolution synthesis algorithm in the DCT domain. The algorithm requires a high-resolution image of similar content to be used as texture source for the low-resolution input image.
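The xy-separable idea of chapter 4 can be sketched by running a 1D bilateral pass along the rows and then along the columns. This is only an approximation of the full 2D bilateral filter (the true filter is not separable), and all names and parameter defaults here are illustrative.

```python
import math

def bilateral_1d(line, sigma_s=2.0, sigma_r=0.2, radius=3):
    """One bilateral pass along a 1D line of samples."""
    out = []
    for i in range(len(line)):
        acc = norm = 0.0
        for j in range(max(0, i - radius), min(len(line), i + radius + 1)):
            w = math.exp(-((j - i) ** 2) / (2 * sigma_s ** 2)
                         - ((line[j] - line[i]) ** 2) / (2 * sigma_r ** 2))
            acc += w * line[j]
            norm += w
        out.append(acc / norm)
    return out

def separable_bilateral(img, **kw):
    """Approximate 2D bilateral: filter rows, then columns (xy-separable)."""
    rows = [bilateral_1d(r, **kw) for r in img]
    cols = [bilateral_1d(list(c), **kw) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

The separable form costs O(r) per pixel instead of O(r squared), which is the point of the fast implementations the chapter describes.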

Journal ArticleDOI
TL;DR: Comparison of Gaussian and bilateral filtering suggests that the modified technique more accurately locates brain activation and increases the significance of activation bordering sharp transitions in the pre-operative assessment of brain lesions.

Patent
13 Jan 2006
TL;DR: In this paper, a bilateral high-pass filtering kernel is determined based at least in part upon the target pixel and the surrounding pixels, and the resulting bilateral high-pass filtering kernel is thereafter applied to the target pixel and the surrounding pixels to provide a filtered pixel.
Abstract: A target pixel and surrounding pixels corresponding to the target pixel are obtained from a digitally represented image. A bilateral high pass filtering kernel is determined based at least in part upon the target pixel and the surrounding pixels. A high pass spatial filtering kernel is provided and multiplied with the high pass photometric filtering kernel to provide a bilateral high pass filtering kernel. The resulting bilateral high pass filtering kernel is thereafter applied to the target pixel and the surrounding pixels to provide a filtered pixel. When it is desirable to combine noise filtering capabilities with sharpening capabilities, the bilateral high pass filter of the present invention may be combined with a bilateral low pass filtering kernel to provide a combined noise reduction and edge sharpening filter. The present invention may be advantageously applied to a variety of devices, including cellular telephones that employ image sensing technology.

Patent
09 Aug 2006
TL;DR: In this article, a noise reduction block 4′ performs a second-order differentiation process and a symmetry process to decide adjacent pixels with which noise reduction is preformed for an attention pixel, with the pixel level of the attention pixel in the detection range and the pixel levels of adjacent pixels used for noise reduction.
Abstract: Noise reduction is performed on the basis of characteristics of an image in a detection range. A noise reduction block 4′ performs a second-order differentiation process and a symmetry process to decide the adjacent pixels with which noise reduction is performed for an attention pixel. With the pixel level of the attention pixel in the detection range and the pixel levels of adjacent pixels used for noise reduction, an arithmetic mean processing section 16 calculates a mean value. A median filter 17 selects a median value. With the number of pixels used for noise reduction, it is determined whether the image in the detection range contains a flat portion, a ramp portion, or an edge. The mean value and the median value are added with weighting coefficients that are changed on the basis of characteristics of the image. The result is substituted for the level of the attention pixel. When the attention pixel is an isolated point, an all-pixel median filter section 31 selects the median value of the levels of all the pixels in the detection range including the attention pixel and substitutes the median value for the level of the attention pixel.

Journal ArticleDOI
TL;DR: The DBL filter effectively reduces noise in low-SNR single particle data as well as in cellular tomograms of stained plastic sections; its usefulness for single particle analysis and for pre-processing cellular tomograms ahead of image segmentation is discussed.

Patent
05 Jun 2006
TL;DR: In this paper, the luminance intensity of pixels of an input digital image is corrected for generating a corrected digital image using a bilateral filtering technique, where a mask of the image to be corrected is generated according to a bilateral filter technique.
Abstract: A luminance intensity of pixels of an input digital image is corrected for generating a corrected digital image. A luminance of each pixel is calculated as a function of the luminance of a corresponding pixel in an original image according to a parametric function. A mask of the input digital image to be corrected is generated according to a bilateral filtering technique. For each pixel of the input digital image, a respective value of at least one parameter of the parametric function is established based upon the luminance of a corresponding pixel of the mask.

Journal Article
TL;DR: A new signal-processing analysis of the bilateral filter is proposed which complements the recent studies that analyzed it as a PDE or as a robust statistical estimator and develops a novel bilateral filtering acceleration using downsampling in space and intensity.
Abstract: The bilateral filter is a nonlinear filter that smoothes a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and a fast version has been proposed. Unfortunately, little is known about the accuracy of such acceleration. In this paper, we propose a new signal-processing analysis of the bilateral filter, which complements the recent studies that analyzed it as a PDE or as a robust statistics estimator. Importantly, this signal-processing perspective allows us to develop a novel bilateral filtering acceleration using a downsampling in space and intensity. This affords a principled expression of the accuracy in terms of bandwidth and sampling. The key to our analysis is to express the filter in a higher-dimensional space where the signal intensity is added to the original domain dimensions. The bilateral filter can then be expressed as simple linear convolutions in this augmented space followed by two simple nonlinearities. This allows us to derive simple criteria for downsampling the key operations and to achieve important acceleration of the bilateral filter. We show that, for the same running time, our method is significantly more accurate than previous acceleration techniques.

Journal ArticleDOI
TL;DR: In this paper, two modifications to this technique are introduced, which allow further acceleration and a significant increase in quality, and they are used for image processing tasks, including noise reduction and dynamic range compression.
Abstract: Edge-preserving lowpass filters are a valuable tool in several image processing tasks, including noise reduction and dynamic range compression. A high-quality algorithm is the bilateral filter, but its computational cost is very high. A fast but approximate implementation was introduced by Durand and Dorsey. We introduce two modifications to this technique that allow further acceleration and a significant increase in quality.
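The Durand and Dorsey piecewise-linear scheme that these modifications build on can be sketched in 1D: evaluate the filter exactly at a few intensity levels using plain Gaussian blurs, then linearly interpolate between the two nearest levels at each sample. This is an illustrative sketch (names and defaults are assumptions, and it assumes a non-constant signal).

```python
import math

def gaussian_blur_1d(sig, sigma):
    """Plain 1D Gaussian blur with edge clamping."""
    r = int(3 * sigma)
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-r, r + 1)]
    s = sum(k)
    k = [v / s for v in k]
    n = len(sig)
    return [sum(k[d + r] * sig[min(max(i + d, 0), n - 1)] for d in range(-r, r + 1))
            for i in range(n)]

def fast_bilateral_1d(sig, sigma_s=2.0, sigma_r=0.2, levels=8):
    """Piecewise-linear bilateral approximation: one linear blur per
    intensity level, then per-sample interpolation between levels."""
    lo, hi = min(sig), max(sig)
    step = (hi - lo) / (levels - 1)  # assumes hi > lo
    layers = []
    for l in range(levels):
        val = lo + l * step
        # range weights are fixed per level, so the filter becomes linear
        w = [math.exp(-((v - val) ** 2) / (2 * sigma_r ** 2)) for v in sig]
        num = gaussian_blur_1d([wi * vi for wi, vi in zip(w, sig)], sigma_s)
        den = gaussian_blur_1d(w, sigma_s)
        layers.append([a / b for a, b in zip(num, den)])
    out = []
    for i, v in enumerate(sig):
        t = (v - lo) / step
        l = min(int(t), levels - 2)
        f = t - l
        out.append((1 - f) * layers[l][i] + f * layers[l + 1][i])
    return out
```

Each level costs only a linear convolution, which is what makes the approximation fast; the number of levels trades speed for accuracy.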

Proceedings ArticleDOI
11 Sep 2006
TL;DR: A semi-automatic mapping methodology is proposed for the generation of hardware accelerators for a generic class of adaptive filtering applications in image processing; the final architecture delivers synthesis results similar to those of a hand-tuned design.
Abstract: Massively parallel processor array architectures can be used as hardware accelerators for a variety of dataflow-dominant applications. Bilateral filtering is an example of a state-of-the-art algorithm in medical imaging, which falls in the class of 2D adaptive filter algorithms. In this paper, we propose a semi-automatic mapping methodology for the generation of hardware accelerators for such a generic class of adaptive filtering applications in image processing. The final architecture delivers synthesis results similar to those of a hand-tuned design.

Proceedings ArticleDOI
TL;DR: This modified bilateral filter uses geometrical and photometric distance to select pixels for combined low and high pass filtering and uses a simple window filter to reduce computational complexity.
Abstract: The classical bilateral filter smoothes images and preserves edges using a nonlinear combination of surrounding pixels. Our modified bilateral filter advances this approach by sharpening edges as well. This method uses geometrical and photometric distance to select pixels for combined low and high pass filtering. It also uses a simple window filter to reduce computational complexity.
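The combined low- and high-pass behaviour can be illustrated with a bilateral variant of unsharp masking in 1D: a sketch of the general idea, not the authors' specific window filter, with names and defaults as assumptions.

```python
import math

def bilateral_sharpen_1d(sig, sigma_s=2.0, sigma_r=0.2, radius=3, amount=1.0):
    """Sharpen by amplifying the detail layer an edge-preserving (bilateral)
    low pass removes; the bilateral base avoids halos around strong edges."""
    n = len(sig)
    base = []
    for i in range(n):
        acc = norm = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = math.exp(-((j - i) ** 2) / (2 * sigma_s ** 2)
                         - ((sig[j] - sig[i]) ** 2) / (2 * sigma_r ** 2))
            acc += w * sig[j]
            norm += w
        base.append(acc / norm)
    # high-pass detail = signal minus base; boost it and add it back
    return [s + amount * (s - b) for s, b in zip(sig, base)]
```

Flat regions pass through unchanged while small details are amplified, which is the low-plus-high-pass combination the abstract describes.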

Book ChapterDOI
18 Sep 2006
TL;DR: This paper uses a particular class of fuzzy metrics to represent the spatial and photometric relations between the color pixels adapting the classical bilateral filtering and shows that the use of these fuzzy metrics is more appropriate than the classical measures used.
Abstract: Bilateral filtering is a well-known technique for smoothing gray-scale and color images while preserving edges and image details by means of an appropriate nonlinear combination of the color vectors in a neighborhood. The pixel colors are combined based on their spatial closeness and photometric similarity. In this paper, a particular class of fuzzy metrics is used to represent the spatial and photometric relations between the color pixels adapting the classical bilateral filtering. It is shown that the use of these fuzzy metrics is more appropriate than the classical measures used.
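A commonly used fuzzy metric has the form M(a, b) = t / (t + d(a, b)). Assuming that form (the particular class used in the paper may differ), the adapted bilateral weighting can be sketched per pixel; all names and the choice of distance d are illustrative.

```python
def fuzzy_bilateral_pixel(center_pos, center_val, neighbors, t=1.0):
    """Bilateral weighting where both the spatial and the photometric
    kernels are fuzzy metrics t / (t + d) instead of Gaussians.
    neighbors is a list of ((row, col), value) pairs."""
    acc = norm = 0.0
    for pos, val in neighbors:
        # spatial closeness via a fuzzy metric on the city-block distance
        spatial = t / (t + abs(pos[0] - center_pos[0]) + abs(pos[1] - center_pos[1]))
        # photometric similarity via a fuzzy metric on the value difference
        photometric = t / (t + abs(val - center_val))
        w = spatial * photometric
        acc += w * val
        norm += w
    return acc / norm
```

As with the Gaussian kernels, weights decay with spatial and photometric distance, so dissimilar pixels contribute little to the average.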

Patent
Eugene Fainstain1
27 Apr 2006
TL;DR: In this article, a method of displaying a captured image using an array of pixels to capture an image was proposed, which includes a first plurality of pixels of a first color, a second plurality of pixel of a second color, and a third plurality ofpixel of a third color.
Abstract: A method of displaying a captured image includes using an array of pixels to capture an image. The array of pixels includes a first plurality of pixels of a first color, a second plurality of pixels of a second color, and a third plurality of pixels of a third color. The pixels are arranged into rows and columns and the pixels of the third plurality of pixels have two different arrangements within the array of pixels with respect to neighboring pixels. The method includes, for each pixel in the third plurality of pixels, normalizing the pixel's value as a function of the pixel values of at least six other pixels in the third plurality of pixels. The method also includes displaying the captured image using a normalized value for the pixel value of each pixel in the third plurality of pixels.

Patent
13 Oct 2006
TL;DR: In this paper, a method of implementing high-performance color filter mosaic arrays (CFA) using luminance pixels was proposed, which greatly improves the accuracy of the image acquisition process for a given pixel and image sensor size.
Abstract: A method of implementing high-performance color filter mosaic arrays (CFA) using luminance pixels. The introduction of luminance pixels greatly improves the accuracy of the image acquisition process for a given pixel and image sensor size.

Patent
Yi-Jen Chiu1
28 Sep 2006
TL;DR: In this paper, a block of pixels containing an edge pixel is selected among the blocks of pixels, and an edge-preserved filter is applied if the first pixel is not a ringing noise pixel.
Abstract: A method can include selecting a block of pixels. It may be determined whether the block of pixels contains an edge pixel. If the block of pixels contains an edge pixel, a first pixel may be selected among the block of pixels. If it is determined that the first pixel is a ringing noise pixel, a ringing filter may be applied. An edge-preserved filter may be applied if the first pixel is not a ringing noise pixel.

Journal ArticleDOI
TL;DR: Analytic and experimental results confirm the robustness and computational efficiency of the proposed method, and edges are preserved while blockiness and ringing are alleviated.
Abstract: Images are often coded using block-based discrete cosine transform (DCT), where blocking and ringing artifacts are the most common visual distortion. In this letter, a fast algorithm is proposed to alleviate the said artifacts in the DCT domain. The new concept is to decompose a row or column image vector to a gradually changed signal and a fast variational signal, which correspond to low-frequency (LF) and high-frequency (HF) DCT subbands, respectively. Blocking artifacts between adjacent LF blocks are suppressed by smoothing LF components and discarding invalid HF ones, and ringing artifacts inside HF vectors are reduced by a simplified bilateral filter. With such a process, edges are preserved while blockiness and ringing are alleviated. Analytic and experimental results confirm the robustness and computational efficiency of the proposed method

Book ChapterDOI
06 Nov 2006
TL;DR: In this article, the edge detection method based on bilateral filtering is presented, where both spatial closeness and intensity similarity of pixels are considered in order to preserve important visual cues provided by edges and reduce the sharpness of transitions in intensity values.
Abstract: Edge detection plays an important role in the image processing area. This paper presents an edge detection method based on bilateral filtering which achieves better performance than single Gaussian filtering. In this form of filtering, both spatial closeness and intensity similarity of pixels are considered in order to preserve important visual cues provided by edges and reduce the sharpness of transitions in intensity values as well. In addition, the edge detection method proposed in this paper is performed on sampled images represented on a newly developed virtual hexagonal structure. Due to the compact and circular nature of the hexagonal lattice, a better quality edge map is obtained on the hexagonal structure than with common edge detection on a square structure. Experimental results using the proposed methods exhibit encouraging performance.
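The pipeline described above, bilateral smoothing followed by gradient-based edge marking, can be sketched on an ordinary square lattice (the hexagonal resampling that the paper adds is omitted, and names, defaults, and the threshold are illustrative).

```python
import math

def edges_after_bilateral(img, sigma_s=1.5, sigma_r=0.2, radius=2, thresh=0.3):
    """Bilaterally smooth a grayscale image, then mark pixels whose
    central-difference gradient magnitude exceeds a threshold."""
    h, w = len(img), len(img[0])
    sm = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = norm = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        wgt = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)
                                       - (img[ny][nx] - img[y][x]) ** 2 / (2 * sigma_r ** 2))
                        acc += wgt * img[ny][nx]
                        norm += wgt
            sm[y][x] = acc / norm
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = sm[y][min(x + 1, w - 1)] - sm[y][max(x - 1, 0)]
            gy = sm[min(y + 1, h - 1)][x] - sm[max(y - 1, 0)][x]
            edges[y][x] = math.hypot(gx, gy) > thresh
    return edges
```

Because the bilateral pre-filter suppresses noise without blurring edges, the gradient stays large exactly at the true boundaries, which is the advantage over a single Gaussian pre-filter.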