scispace - formally typeset
Author

Jos H. Weber

Other affiliations: University of Johannesburg
Bio: Jos H. Weber is an academic researcher from Delft University of Technology. The author has contributed to research in the topics Offset (computer science) and Decoding methods. The author has an h-index of 4 and has co-authored 16 publications receiving 56 citations. Previous affiliations of Jos H. Weber include University of Johannesburg.

Papers
Journal ArticleDOI
TL;DR: The Pearson distance, advocated for improving the error performance of noisy channels with unknown gain and offset, can only fruitfully be used for sets of $q$-ary codewords, called Pearson codes, that satisfy specific properties; this paper analyzes constructions and properties of optimal Pearson codes.
Abstract: The Pearson distance has been advocated for improving the error performance of noisy channels with unknown gain and offset. The Pearson distance can only fruitfully be used for sets of $q$-ary codewords, called Pearson codes, that satisfy specific properties. We will analyze constructions and properties of optimal Pearson codes. We will compare the redundancy of optimal Pearson codes with the redundancy of prior art $T$-constrained codes, which consist of $q$-ary sequences in which $T$ pre-determined reference symbols appear at least once. In particular, it will be shown that for $q\le 3$ the $2$-constrained codes are optimal Pearson codes, while for $q\ge 4$ these codes are not optimal.
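As an illustration of why the Pearson distance suits channels with unknown gain and offset, the sketch below (our own minimal code with hypothetical names, not from the paper) shows that the distance is unchanged when the received vector is scaled and shifted. It is undefined for codewords with zero variance, one reason Pearson codes must satisfy specific properties:

```python
def pearson_distance(r, c):
    """Pearson distance 1 - rho(r, c), where rho is the Pearson
    correlation coefficient between received vector r and codeword c."""
    n = len(r)
    mr, mc = sum(r) / n, sum(c) / n
    num = sum((ri - mr) * (ci - mc) for ri, ci in zip(r, c))
    den = (sum((ri - mr) ** 2 for ri in r) ** 0.5
           * sum((ci - mc) ** 2 for ci in c) ** 0.5)
    return 1 - num / den

# Invariance: a positive gain a and an offset b applied to the received
# vector leave the Pearson distance to any codeword unchanged.
c = [0, 1, 2, 1]                       # a q-ary codeword (q = 3)
r = [0.1, 0.9, 2.05, 1.0]              # noisy received vector
r_mismatched = [2 * x + 5 for x in r]  # same vector with gain 2, offset 5
assert abs(pearson_distance(r, c) - pearson_distance(r_mismatched, c)) < 1e-9
```

A minimum Pearson distance detector simply picks the codeword minimizing this distance; all-constant codewords must be excluded from the code, since the denominator then vanishes.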

16 citations

Journal Article
TL;DR: It will be shown that for q ≤ 3, the two-constrained codes are optimal Pearson codes, while for q ≥ 4 these codes are not optimal.
Abstract: The Pearson distance has been advocated for improving the error performance of noisy channels with unknown gain and offset. The Pearson distance can only fruitfully be used for sets of $q$-ary codewords, called Pearson codes, that satisfy specific properties. We will analyze constructions and properties of optimal Pearson codes. We will compare the redundancy of optimal Pearson codes with the redundancy of prior art $T$-constrained codes, which consist of $q$-ary sequences in which $T$ pre-determined reference symbols appear at least once. In particular, it will be shown that for $q\le 3$, the two-constrained codes are optimal Pearson codes, while for $q\ge 4$ these codes are not optimal.

12 citations

Journal ArticleDOI
TL;DR: This work considers the transmission and storage of encoded strings of symbols over a noisy channel, where dynamic threshold detection is proposed for achieving resilience against unknown scaling and offset of the received signal.
Abstract: We consider the transmission and storage of encoded strings of symbols over a noisy channel, where dynamic threshold detection is proposed for achieving resilience against unknown scaling and offset of the received signal. We derive simple rules for dynamically estimating the unknown scale (gain) and offset. The estimates of the actual gain and offset so obtained are used to adjust the threshold levels or to re-scale the received signal within its regular range. Then, the re-scaled signal, brought into its standard range, can be forwarded to the final detection/decoding system, where optimum use can be made of the distance properties of the code by applying, for example, the Chase algorithm. A worked example of a spin-torque transfer magnetic random access memory with an application to an extended (72, 64) Hamming code is described, where the retrieved signal is perturbed by additive Gaussian noise and unknown gain or offset.
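A deliberately simplified version of the rescaling idea (not the paper's derived estimation rules): if a received block can be assumed to contain both the smallest and the largest channel symbol at least once, the gain and offset can be estimated from its extremes and the signal brought back into its standard range:

```python
def rescale(r, q=2):
    """Toy gain/offset estimation from block extremes. Assumption for
    illustration only: the block contains both the smallest and the
    largest of the q channel symbols at least once."""
    lo, hi = min(r), max(r)
    gain = (hi - lo) / (q - 1)   # estimate of the unknown gain a
    offset = lo                  # estimate of the unknown offset b
    return [(x - offset) / gain for x in r]

# Binary data transmitted as a*x + b (here a = 2, b = 1.1) plus noise:
received = [1.2, 3.0, 1.1, 3.1]
rescaled = rescale(received)     # back in the standard range [0, 1]
```

After rescaling, the signal can be forwarded to a standard detection/decoding stage, such as the Chase algorithm mentioned above.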

11 citations

Journal ArticleDOI
TL;DR: Maximum likelihood decision criteria are derived for Gaussian noise channels suffering from either an unknown offset or an unknown gain, as well as for a Gaussian- or uniformly-distributed offset in the absence of gain mismatch.
Abstract: Besides the omnipresent noise, other important inconveniences in communication and storage systems are formed by gain and/or offset mismatches. In the prior art, a maximum likelihood (ML) decision criterion has already been developed for Gaussian noise channels suffering from unknown gain and offset mismatches. Here, such criteria are considered for Gaussian noise channels suffering from either an unknown offset or an unknown gain. Furthermore, ML decision criteria are derived when assuming a Gaussian or uniform distribution for the offset in the absence of gain mismatch.
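For the offset-only case, the criterion has a compact closed form: minimizing $\sum_i (r_i - c_i - b)^2$ over an unknown, uninformative offset $b$ gives $b = \bar{r} - \bar{c}$, so the ML decision reduces to a mean-removed Euclidean distance. A minimal sketch (our own naming, assuming that offset-only setting):

```python
def ml_decide_offset(r, codebook):
    """ML decision under Gaussian noise and an unknown, uninformative
    offset: pick the codeword minimizing the Euclidean distance after
    removing the per-word means of r and c."""
    def score(c):
        n = len(r)
        mr, mc = sum(r) / n, sum(c) / n
        return sum(((ri - mr) - (ci - mc)) ** 2 for ri, ci in zip(r, c))
    return min(codebook, key=score)

# The received word is shifted by an offset of roughly 5; mean removal
# makes the decision immune to that shift.
decided = ml_decide_offset([5.1, 5.0, 5.9], [[0, 0, 1], [1, 0, 0]])
```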

7 citations

Journal ArticleDOI
TL;DR: A systematic variable-to-fixed length scheme encoding binary information sequences into binary balanced sequences, whose biggest advantage is its simplicity: encoding only requires keeping track of the sequence weight, while decoding requires only one extremely simple step, irrespective of the sequence length.
Abstract: We present a systematic variable-to-fixed length scheme encoding binary information sequences into binary balanced sequences. The redundancy of the proposed scheme is larger than the redundancy of the best fixed-to-fixed length schemes in the case of long codes, but it is smaller in the case of short codes. The biggest advantage comes from the simplicity of the scheme: encoding only requires one to keep track of the sequence weight, while decoding requires only one extremely simple step, irrespective of the sequence length.
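One natural realization of such a weight-tracking scheme (a sketch under our own assumptions, not necessarily the authors' exact construction): copy information bits into a fixed-length word, and once either symbol has been used n/2 times, force the remaining positions to the other symbol. The decoder rediscovers the forced positions by replaying the same weight count:

```python
def encode_balanced(bits, n):
    """Greedy variable-to-fixed balancing sketch: copy information bits
    into a length-n word; once either symbol count reaches n/2, the
    remaining positions are forced to the other symbol. Assumes enough
    information bits are available. Returns (codeword, bits_consumed)."""
    word, ones, used = [], 0, 0
    for i in range(n):
        zeros = i - ones
        if ones == n // 2:        # 1s exhausted: remaining bits forced to 0
            word.append(0)
        elif zeros == n // 2:     # 0s exhausted: remaining bits forced to 1
            word.append(1)
        else:                     # position still free: copy an info bit
            word.append(bits[used])
            used += 1
        ones += word[-1]
    return word, used

def decode_balanced(word):
    """Inverse scan: a position carries an information bit iff, before
    it, neither symbol count had yet reached n/2."""
    n, ones, info = len(word), 0, []
    for i, b in enumerate(word):
        zeros = i - ones
        if ones < n // 2 and zeros < n // 2:
            info.append(b)
        ones += b
    return info
```

Encoding maintains only the running weight, and decoding is a single pass over the codeword, matching the simplicity claim above.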

6 citations


Cited by
Journal ArticleDOI
TL;DR: This work uses problem transform methods to convert chronic disease prediction into a multi-label classification problem and proposes a novel convolutional neural network (CNN) architecture named GroupNet to solve the multi-label chronic disease classification problem.
Abstract: Chronic diseases are one of the biggest threats to human life. It is clinically significant to predict chronic diseases ahead of diagnosis time and to begin effective therapy as early as possible. In this work, we use problem transform methods to convert chronic disease prediction into a multi-label classification problem and propose a novel convolutional neural network (CNN) architecture named GroupNet to solve it. Binary Relevance (BR) and Label Powerset (LP) methods are adopted to transform the multiple chronic disease labels. We present the correlated loss as the loss function used in GroupNet, which integrates the correlation coefficients between different diseases. The experiments are conducted on physical examination datasets collected from a local medical center. In the experiments, we compare GroupNet with other methods and models. GroupNet outperforms the others and achieves the best accuracy of 81.13%.
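The two problem transform methods named above can be sketched in a few lines (toy code with hypothetical names; the CNN training the paper builds on top of these transforms is omitted):

```python
def binary_relevance(Y):
    """Binary Relevance (BR): split an L-label problem into L
    independent binary problems, one target column per label."""
    return [[row[j] for row in Y] for j in range(len(Y[0]))]

def label_powerset(Y):
    """Label Powerset (LP): map each distinct label combination that
    occurs to a single multi-class label."""
    classes = {}
    return [classes.setdefault(tuple(row), len(classes)) for row in Y]

# Three patients, three disease labels each:
Y = [[1, 0, 1],
     [1, 0, 1],
     [0, 1, 0]]
br_targets = binary_relevance(Y)   # three binary target vectors
lp_targets = label_powerset(Y)     # one multi-class target vector
```

BR trains one classifier per disease and ignores label correlations; LP captures co-occurring diseases as joint classes, which is why the paper complements these transforms with a correlation-aware loss.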

33 citations

Journal ArticleDOI
01 Jan 1974

32 citations

Journal ArticleDOI
23 Feb 2021
TL;DR: A discretized Gaussian mixture likelihood is proposed to model the latent code parameters, attaining a more flexible and accurate entropy model and achieving better rate-distortion performance.
Abstract: Many compression standards have been developed during the past few decades, and technological advances have introduced many methodologies with promising results. As far as the PSNR metric is concerned, there is a performance gap between the reigning compression standards and learned compression algorithms. We experimented with an accurate entropy model on learned compression algorithms to determine the rate-distortion performance. In this paper, a discretized Gaussian mixture likelihood is proposed to model the latent code parameters in order to attain a more flexible and accurate entropy model. Moreover, we have enhanced the performance of the work by introducing recent attention modules into the network architecture. Simulation results indicate that, compared with previously existing techniques on high-resolution and Kodak datasets, the proposed work achieves better rate-distortion performance. When MS-SSIM is used for optimization, our work generates a more visually pleasant image.
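The discretized Gaussian mixture idea can be sketched as follows (a minimal illustration with hypothetical parameters, not the paper's network): the probability of an integer-quantized latent value is each mixture component's mass over the unit-width bin around it, and its negative log gives the coding cost in bits:

```python
import math

def gaussian_cdf(x, mu, sigma):
    """CDF of a Gaussian with mean mu and standard deviation sigma."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def discretized_gmm_likelihood(y, weights, mus, sigmas):
    """Likelihood of an integer-quantized latent y under a Gaussian
    mixture: each component integrated over the bin [y-0.5, y+0.5]."""
    return sum(w * (gaussian_cdf(y + 0.5, m, s) - gaussian_cdf(y - 0.5, m, s))
               for w, m, s in zip(weights, mus, sigmas))

# Hypothetical 2-component mixture predicted by the entropy model:
p = discretized_gmm_likelihood(0, [0.6, 0.4], [0.0, 3.0], [1.0, 2.0])
bits = -math.log2(p)   # coding cost of this latent value in bits
```

Because the bins tile the integers, these probabilities sum to one over the latent alphabet, so they can drive an entropy coder directly.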

17 citations