Author

# Punya Thitimajshima

Other affiliations: École Normale Supérieure

Bio: Punya Thitimajshima is an academic researcher from King Mongkut's Institute of Technology Ladkrabang. The author has contributed to research in topics such as the wavelet transform. The author has an h-index of 7 and has co-authored 33 publications receiving 6,164 citations. Previous affiliations of Punya Thitimajshima include École Normale Supérieure.

##### Papers



23 May 1993

TL;DR: In this article, a new class of convolutional codes called turbo-codes, whose performance in terms of bit error rate (BER) is close to the Shannon limit, is discussed.

Abstract: A new class of convolutional codes called turbo-codes, whose performance in terms of bit error rate (BER) is close to the Shannon limit, is discussed. The turbo-code encoder is built using a parallel concatenation of two recursive systematic convolutional codes, and the associated decoder, using a feedback decoding rule, is implemented as P pipelined identical elementary decoders.

5,963 citations
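The encoder structure described in the abstract, a parallel concatenation of two recursive systematic convolutional (RSC) encoders, can be sketched in a few lines. This is a minimal illustration only: the memory-2 generator polynomials (7, 5) in octal and the toy interleaver are common textbook choices, not parameters taken from the paper.

```python
def rsc_parity(bits):
    """Parity stream of a rate-1/2 RSC encoder with generators (7, 5) octal:
    feedback 1 + D + D^2, feedforward 1 + D^2."""
    s1 = s2 = 0
    parity = []
    for u in bits:
        fb = u ^ s1 ^ s2        # recursive feedback bit
        parity.append(fb ^ s2)  # feedforward output
        s1, s2 = fb, s1         # shift the register
    return parity

def turbo_encode(bits, interleaver):
    """Parallel concatenation: the systematic bits plus two parity streams,
    the second produced from an interleaved copy of the input (rate ~1/3)."""
    p1 = rsc_parity(bits)
    p2 = rsc_parity([bits[i] for i in interleaver])
    return bits, p1, p2
```

In the actual scheme, the decoder runs two soft-in/soft-out component decoders that exchange extrinsic information across the interleaver; that iterative exchange is what the abstract's "feedback decoding rule" refers to.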


01 Apr 1988

TL;DR: In this paper, an electronically tunable second-generation current conveyor (ECCII), whose current-transfer ratio can be varied by electronic means, is introduced, and a method is proposed for realising the ECCII in monolithic integrated form, providing both a positive and a negative ECCII in the same circuit.

Abstract: A generalised current conveyor, termed an electronically tunable second-generation current conveyor (ECCII), whose current-transfer ratio can be varied by electronic means, is introduced. A method is proposed for realising the ECCII in monolithic integrated form, providing both a positive and a negative ECCII in the same circuit. Experimental and simulation results demonstrating the circuit performance are included. Some simple applications showing that circuit properties can be varied by electronic means are also presented.

99 citations


24 Nov 1998

TL;DR: A novel method based on the kFill algorithm is proposed that can be accomplished in a single-pass scan over the image, simultaneously removing both salt and pepper noise of any size smaller than the document objects.

Abstract: Documents containing text and graphics components are usually acquired as binary images for computer processing purposes. Salt-and-pepper noise is a prevalent artifact in such images. Removing this noise usually requires iterative or multiple-pass processing, and some techniques even cause distortions in document components. In this paper, we propose a novel method based on the kFill algorithm that can be accomplished in a single-pass scan over the image. The algorithm is capable of simultaneously removing both salt and pepper noise of any size smaller than the document objects. Results of the proposed method are given in comparison with well-known morphological operations.

54 citations
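Single-pass removal of isolated noise pixels can be illustrated as below. This is a simplified sketch, not the paper's full kFill-based method (which handles noise up to the window size): here only size-1 salt/pepper pixels, i.e. pixels whose eight neighbours all carry the opposite value, are flipped, reading from the input and writing to a copy so that one scan suffices.

```python
def remove_salt_pepper(img):
    """Flip any interior pixel of a binary image whose 8 neighbours all have
    the opposite value; one scan over the image, output written to a copy."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nbrs = [img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)]
            if all(n != img[y][x] for n in nbrs):
                out[y][x] = 1 - img[y][x]   # isolated noise pixel: flip it
    return out
```

Because the scan reads only from the original image, earlier flips cannot trigger later ones, which is what keeps the pass from distorting document components larger than the noise.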


24 Jul 2000

TL;DR: A modified fuzzy c-means classification algorithm is used to provide a fuzzy partition that is less sensitive to noise, as it filters the image while clustering it, based on treating the neighbors of a pixel as factors that attract it into their cluster.

Abstract: The fuzzy c-means algorithm has proved very well suited to remote sensing image segmentation, but it is sensitive to the initial guess, with regard to both speed and stability, and it is also sensitive to noise. This paper proposes a fully automatic technique to obtain image clusters. A modified fuzzy c-means classification algorithm is used to provide a fuzzy partition. This method is less sensitive to noise, as it filters the image while clustering it, based on treating the neighbors of a pixel as factors that attract it into their cluster. Experimental results on a JERS-1 synthetic aperture radar (SAR) image demonstrate its potential usefulness.

32 citations
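The neighbour-attraction idea can be sketched on a single membership update. This is an illustrative reading, not the paper's exact formulation: the pull strength `alpha` and the way neighbour memberships shrink the effective distance are assumptions made for the example.

```python
def memberships(sq_dists, m=2.0):
    """Standard FCM membership update for one pixel, given its squared
    distances to the c cluster centres."""
    p = 1.0 / (m - 1.0)
    return [1.0 / sum((di / dj) ** p for dj in sq_dists) for di in sq_dists]

def attracted_memberships(sq_dists, nbr_u, alpha=0.7, m=2.0):
    """Neighbour-attraction variant (illustrative): each distance is shrunk
    in proportion to how strongly the neighbours already belong to that
    cluster, so neighbours 'attract' the pixel into their cluster."""
    adj = [d * (1.0 - alpha * h) for d, h in zip(sq_dists, nbr_u)]
    return memberships(adj, m)
```

A pixel equidistant from two centres gets a 50/50 membership under the standard update; with neighbours firmly in cluster 0, the attracted update pulls it toward cluster 0, which is the smoothing effect that suppresses isolated noisy pixels.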


24 Jul 2000

TL;DR: A method to speed up the fuzzy c-means clustering algorithm by reducing the number of numeric operations performed in each iteration, while producing exactly the same result as the standard algorithm, is presented.

Abstract: The purpose of cluster analysis is to partition a data set into a number of disjoint groups or clusters, such that the members within a cluster are more similar to each other than to members of different clusters. Fuzzy c-means (FCM) clustering is an iterative partitioning method that produces optimal c-partitions. The standard FCM algorithm takes a long time to partition a large data set, because the FCM program must read the entire data set into memory for processing. This paper presents a method to speed up the FCM algorithm by reducing the number of numeric operations performed in each iteration, while keeping exactly the same result as the standard algorithm. The application of this method to multispectral satellite images has been evaluated, and a time saving of about 40% was obtained.

28 citations
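For reference, the standard FCM iteration that the paper accelerates looks as follows on 1-D data; the speedup reorganises these arithmetic steps without changing them, so its output stays exactly that of the standard algorithm. The data and initial centres below are made up for illustration.

```python
def fcm(data, centers, m=2.0, iters=40):
    """Minimal standard fuzzy c-means on 1-D data: alternate the membership
    update and the centre update for a fixed number of iterations."""
    for _ in range(iters):
        U = []
        for x in data:
            d = [max((x - v) ** 2, 1e-12) for v in centers]  # squared distances
            U.append([1.0 / sum((di / dj) ** (1.0 / (m - 1.0)) for dj in d)
                      for di in d])
        centers = [sum(U[k][i] ** m * data[k] for k in range(len(data))) /
                   sum(U[k][i] ** m for k in range(len(data)))
                   for i in range(len(centers))]
    return centers, U
```

Every pixel contributes to every centre on every iteration, which is where the cost of reading and re-processing a large data set comes from.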

##### Cited by



Alcatel-Lucent

TL;DR: In this article, the authors examined the performance of using multi-element array (MEA) technology to improve the bit rate of digital wireless communications and showed that with high probability extraordinary capacity is available.

Abstract: This paper is motivated by the need for fundamental understanding of the ultimate limits of bandwidth-efficient delivery of higher bit rates in digital wireless communications, and to begin to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is, processing the spatial dimension (not just the time dimension) to improve wireless capacities in certain applications. Specifically, we present some basic information-theory results that promise great advantages of using MEAs in wireless LANs and building-to-building wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic, which is subject to Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, n, of antenna elements at both transmitter and receiver.
We investigate the case of independent Rayleigh-faded paths between antenna elements and find that with high probability extraordinary capacity is available. Compared to the baseline n = 1 case, which by Shannon's classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, with MEAs the scaling is remarkably almost n more bits/cycle for each 3 dB increase in SNR. To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB. For over 99% of the channels the capacity is about 7, 19 and 88 bits/cycle respectively, while if n = 1 there is only about 1.2 bits/cycle at the 99% level. For, say, a symbol rate equal to the channel bandwidth, since it is the bits/symbol/dimension that is relevant for signal constellations, these higher capacities are not unreasonable. The 19 bits/cycle for n = 4 amounts to 4.75 bits/symbol/dimension, while 88 bits/cycle for n = 16 amounts to 5.5 bits/symbol/dimension. Standard approaches such as selection and optimum combining are seen to be deficient when compared to what will ultimately be possible. New codecs need to be invented to realize a hefty portion of the great capacity promised.

10,526 citations
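The capacity expression behind these numbers, C = log2 det(I + (SNR/n) H H†) for an i.i.d. Rayleigh channel matrix H known at the receiver, is easy to estimate by Monte Carlo. The sketch below is restricted to n ≤ 2 so the determinant stays explicit; the trial count and seed are arbitrary choices for the example.

```python
import math
import random

def mea_capacity(n, snr_db, trials=3000, seed=1):
    """Monte-Carlo average of C = log2 det(I + (SNR/n) H H^H) for an n x n
    i.i.d. complex-Gaussian (Rayleigh) channel matrix, n in {1, 2}."""
    rho = 10.0 ** (snr_db / 10.0)
    rng = random.Random(seed)

    def cn():  # unit-variance circularly symmetric complex Gaussian
        return complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))

    total = 0.0
    for _ in range(trials):
        H = [[cn() for _ in range(n)] for _ in range(n)]
        # G = I + (rho / n) * H H^H
        G = [[(1.0 if i == j else 0.0) + (rho / n) *
              sum(H[i][k] * H[j][k].conjugate() for k in range(n))
              for j in range(n)] for i in range(n)]
        det = G[0][0] if n == 1 else G[0][0] * G[1][1] - G[0][1] * G[1][0]
        total += math.log2(abs(det))  # det of this Hermitian PD matrix is real
    return total / trials
```

At the abstract's 21 dB operating point the n = 2 average comes out well above 1.5 times the n = 1 average, illustrating the near-linear-in-n scaling the paper describes.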


TL;DR: A generic message-passing algorithm, the sum-product algorithm, that operates in a factor graph and computes, either exactly or approximately, various marginal functions derived from the global function.

Abstract: Algorithms that must deal with complicated global functions of many variables often exploit the manner in which the given functions factor as a product of "local" functions, each of which depends on a subset of the variables. Such a factorization can be visualized with a bipartite graph that we call a factor graph. In this tutorial paper, we present a generic message-passing algorithm, the sum-product algorithm, that operates in a factor graph. Following a single, simple computational rule, the sum-product algorithm computes, either exactly or approximately, various marginal functions derived from the global function. A wide variety of algorithms developed in artificial intelligence, signal processing, and digital communications can be derived as specific instances of the sum-product algorithm, including the forward/backward algorithm, the Viterbi algorithm, the iterative "turbo" decoding algorithm, Pearl's (1988) belief propagation algorithm for Bayesian networks, the Kalman filter, and certain fast Fourier transform (FFT) algorithms.

6,637 citations
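On a cycle-free factor graph the sum-product algorithm reproduces exact marginals, which a toy chain makes easy to check. The two factor tables below are made-up numbers; the point is only that the product of the two messages arriving at x2 equals the brute-force marginal.

```python
# Chain factor graph over binary variables: x1 -- f12 -- x2 -- f23 -- x3.
F12 = {(a, b): [[1.0, 0.2], [0.3, 0.9]][a][b] for a in (0, 1) for b in (0, 1)}
F23 = {(b, c): [[0.8, 0.5], [0.4, 1.0]][b][c] for b in (0, 1) for c in (0, 1)}

def marginal_x2_sum_product():
    """Leaf variables send the all-ones message, each factor sums it against
    its table, and x2 multiplies the two incoming messages."""
    msg_f12 = [sum(F12[(a, b)] for a in (0, 1)) for b in (0, 1)]
    msg_f23 = [sum(F23[(b, c)] for c in (0, 1)) for b in (0, 1)]
    unnorm = [m1 * m2 for m1, m2 in zip(msg_f12, msg_f23)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def marginal_x2_brute_force():
    """Direct summation over x1 and x3, for comparison."""
    unnorm = [sum(F12[(a, b)] * F23[(b, c)] for a in (0, 1) for c in (0, 1))
              for b in (0, 1)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

On graphs with cycles, such as those arising in turbo decoding, the same message updates are simply iterated, giving the approximate marginals the abstract mentions.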


29 Jun 1997

TL;DR: It is proved that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit, and experimental results for binary-symmetric channels and Gaussian channels demonstrate that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved.

Abstract: We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal (1995)) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good", in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.

3,842 citations
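The paper decodes with the sum-product algorithm; a hard-decision relative, Gallager's bit-flipping decoder, fits in a few lines and conveys the same sparse-parity-check idea. The (7,4) Hamming parity checks used in the test are just a convenient small example, not a code from the paper.

```python
def bit_flip_decode(checks, word, iters=20):
    """Hard-decision bit-flipping: while some parity checks fail, flip the
    bit that participates in the most unsatisfied checks.
    `checks` is a list of parity checks, each a list of bit positions."""
    word = word[:]
    n = len(word)
    for _ in range(iters):
        unsat = [row for row in checks if sum(word[j] for j in row) % 2 == 1]
        if not unsat:
            return word                      # all checks satisfied
        counts = [sum(j in row for row in unsat) for j in range(n)]
        word[counts.index(max(counts))] ^= 1  # flip the most-blamed bit
    return word
```

For the sparse matrices these papers study, each check touches only a few bits, so every iteration is cheap even at large block lengths.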


TL;DR: This work designs low-density parity-check codes that perform at rates extremely close to the Shannon capacity and proves a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution.

Abstract: We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on the work of Richardson and Urbanke (see ibid., vol.47, no.2, p.599-618, 2000). Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds.

3,520 citations
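For the binary erasure channel the degree-distribution machinery reduces to a one-line recursion, which makes both the density-evolution iteration and the stability condition easy to state. The edge-perspective polynomials λ and ρ used in the test are toy choices for illustration, not degree distributions from the paper.

```python
def poly(coeffs, x):
    """Edge-perspective degree polynomial: coeffs[i] is the fraction of
    edges on degree-(i+1) nodes, value is sum_i coeffs[i] * x**i."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

def de_fixed_point(eps, lam, rho, iters=2000):
    """BEC density evolution for an irregular LDPC ensemble:
    x <- eps * lambda(1 - rho(1 - x)); a limit of 0 means decoding succeeds."""
    x = eps
    for _ in range(iters):
        x = eps * poly(lam, 1.0 - poly(rho, 1.0 - x))
    return x

def stable(eps, lam, rho):
    """Stability of the x = 0 fixed point: eps * lambda'(0) * rho'(1) < 1."""
    lam_prime_0 = lam[1] if len(lam) > 1 else 0.0
    rho_prime_1 = sum(i * c for i, c in enumerate(rho))
    return eps * lam_prime_0 * rho_prime_1 < 1.0
```

The stability condition caps the fraction of degree-2 variable nodes, which is one of the constraints the paper's degree-structure optimization must respect.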


Bell Labs

TL;DR: The results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon.

Abstract: We present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.

3,393 citations
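The ensemble "capacity" the abstract refers to is exactly the kind of threshold the recursion below locates. For the binary erasure channel and a regular (dv, dc) ensemble, density evolution has a closed form and the threshold can be found by bisection; the (3,6) ensemble and tolerances here are standard illustrative choices (its threshold is known to be about 0.429).

```python
def erasure_fixed_point(eps, dv=3, dc=6, iters=2000):
    """Density evolution for a regular (dv, dc) LDPC ensemble on the BEC:
    x <- eps * (1 - (1 - x)**(dc - 1))**(dv - 1).
    Returns the limit; 0 means the decoder clears all erasures."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

def threshold(dv=3, dc=6, tol=1e-3):
    """Bisect for the largest channel erasure probability that still drives
    the recursion to zero."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if erasure_fixed_point(mid, dv, dc) < 1e-6:
            lo = mid
        else:
            hi = mid
    return lo
```

Below the threshold a randomly chosen code from the ensemble decodes with probability approaching one; above it, the recursion stalls at a positive erasure fraction, mirroring the converse stated in the abstract.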