Author

Wei Zhang

Bio: Wei Zhang is an academic researcher from Tianjin University. The author has contributed to research in the topics of decoding methods and Reed–Solomon error correction, has an h-index of 8, and has co-authored 55 publications receiving 229 citations. Previous affiliations of Wei Zhang include Kunming University of Science and Technology.


Papers
Journal ArticleDOI
TL;DR: Modifications are made to the lifting scheme, and the intermediate results are recombined and stored to reduce the number of pipelining stages to achieve a critical path with only one multiplier.
Abstract: A high-speed and reduced-area 2-D discrete wavelet transform (2-D DWT) architecture is proposed. Previous DWT architectures are mostly based on the modified lifting scheme or the flipping structure. In order to achieve a critical path with only one multiplier, at least four pipelining stages are required for one lifting step, or a large temporal buffer is needed. In this brief, modifications are made to the lifting scheme, and the intermediate results are recombined and stored to reduce the number of pipelining stages. As a result, the number of registers can be reduced to 18 without extending the critical path. In addition, the two-input/two-output parallel scanning architecture is adopted in our design. For a 2-D DWT with the size of , the proposed architecture only requires three registers between the row and column filters as the transposing buffer, and a higher efficiency can be achieved.
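The paper's modifications target hardware pipelining, but the underlying lifting idea (split, predict, update) is easy to show in software. Below is a minimal sketch of one level of a lifting-based DWT using the classic LeGall 5/3 predict/update steps, not the authors' recombined pipeline, and assuming an even-length input:

```python
def lifting_53(x):
    """One level of the LeGall 5/3 lifting DWT (float version).

    Splits x into even and odd samples, then applies a predict step
    (detail = odd sample minus the average of its even neighbours)
    and an update step (approximation = even sample plus a quarter
    of the neighbouring details). Boundaries are handled by clamping.
    """
    even = x[0::2]
    odd = x[1::2]
    n = len(odd)
    # predict: d[i] = odd[i] - (even[i] + even[i+1]) / 2
    d = [odd[i] - (even[i] + even[min(i + 1, len(even) - 1)]) / 2
         for i in range(n)]
    # update: s[i] = even[i] + (d[i-1] + d[i]) / 4
    s = [even[i] + (d[max(i - 1, 0)] + d[min(i, n - 1)]) / 4
         for i in range(len(even))]
    return s, d
```

For a constant signal every detail coefficient vanishes and the approximation reproduces the signal, which is the decorrelating property lifting-based transforms exploit.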

71 citations

Journal ArticleDOI
Yang Wang, Wei Zhang, Yanyan Liu, Lingyu Wang, Yu Liang
TL;DR: An improved Reed–Solomon (RS)-polar code concatenation scheme with a threshold, applying successive cancellation list decoding and hard-decision decoding based on the low-complexity Chase algorithm, achieves high decoding performance, low decoding complexity, and short latency.
Abstract: An improved Reed–Solomon (RS)-polar code concatenation scheme with a threshold is proposed. It applies successive cancellation list and hard-decision decoding based on the low-complexity chase algorithm. A new multiplicity assignment module is proposed to make it possible to realize the RS-polar code concatenation and facilitate hardware decoder design. The proposed scheme achieves high decoding performance, low decoding complexity, and short latency. The simulation results show that the improved concatenation scheme gives a coding gain reaching about 0.4 dB compared with the traditional RS-polar scheme. Moreover, it can reduce the average number of RS codewords to be decoded in a concatenated code by 65% and decrease the decoding latency by 50% when the signal-to-noise ratio is 1.75 dB.
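The threshold mechanism the abstract describes, running the outer RS decoding only when the inner polar decoding looks unreliable, can be sketched at a high level. The function names and the confidence interface below are illustrative stand-ins, not the authors' design:

```python
def concatenated_decode(blocks, polar_decode, rs_decode, threshold):
    """Illustrative threshold dispatch for a concatenated decoder.

    polar_decode(block) is assumed to return (codeword, confidence);
    the outer rs_decode is invoked only when the confidence falls
    below `threshold`, which is where the saving in average RS
    decoding work and latency comes from.
    """
    out = []
    for blk in blocks:
        word, confidence = polar_decode(blk)
        if confidence < threshold:
            word = rs_decode(word)
        out.append(word)
    return out
```

Skipping the outer decoder for confidently decoded inner blocks is what reduces the average number of RS codewords that must be decoded.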

19 citations

Journal ArticleDOI
TL;DR: A unified syndrome computation algorithm and the corresponding architecture are proposed for a low-complexity RS decoder that is 57% faster, with the area reduced to 62% of the original design for η = 3.
Abstract: Reed-Solomon (RS) codes are widely used in digital communication and storage systems. Algebraic soft-decision decoding (ASD) of RS codes can obtain significant coding gain over hard-decision decoding (HDD). Compared with other ASD algorithms, the low-complexity Chase (LCC) decoding algorithm needs less computational complexity with similar or higher coding gain. Besides employing a complicated interpolation algorithm, LCC decoding can also be implemented based on the HDD. However, the previous syndrome computation for the 2^η test vectors and the key equation solver (KES) in the HDD require long latency and considerable hardware. In this brief, a unified syndrome computation algorithm and the corresponding architecture are proposed. Cooperating with the KES in the reduced inversion-free Berlekamp-Massey algorithm, the reduced-complexity LCC RS decoder is 57% faster, and its area is reduced to 62% of the original design for η = 3.
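At its core, the syndrome computation being optimized is polynomial evaluation of the received word at consecutive powers of the field's primitive element. A plain software sketch over GF(2^4) for a 2-error-correcting RS(15, 11) code (not the unified hardware-shared version from the brief):

```python
# GF(2^4) arithmetic with primitive polynomial x^4 + x + 1
EXP = [0] * 30
LOG = [0] * 16
v = 1
for i in range(15):
    EXP[i] = v
    LOG[v] = i
    v <<= 1
    if v & 0x10:          # reduce modulo x^4 + x + 1
        v ^= 0b10011
for i in range(15, 30):   # duplicate table so gf_mul needs no modulo
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):       # polynomial product, coefficients constant-first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gf_mul(a, b)
    return out

def poly_eval(p, x):      # Horner evaluation, p[0] is the constant term
    y = 0
    for c in reversed(p):
        y = gf_mul(y, x) ^ c
    return y

def syndromes(r, num):
    # S_j = r(alpha^j), j = 1..num; all zero iff r is a codeword
    return [poly_eval(r, EXP[j]) for j in range(1, num + 1)]

# generator polynomial with roots alpha^1..alpha^4
g = [1]
for j in range(1, 5):
    g = poly_mul(g, [EXP[j], 1])   # multiply in the factor (x + alpha^j)
```

Any multiple of g evaluates to zero at all four roots, so its syndromes vanish; corrupting a single symbol makes at least one syndrome nonzero, which is exactly the test HDD builds on.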

17 citations

Journal ArticleDOI
01 Jan 2012
TL;DR: A modified LCC (MLCC) decoding is proposed by adding erasures to the test vectors to achieve much better performance than the original LCC decoding, and a prioritized interpolation scheme to test a small proportion of the vectors at a time, starting with the ones with higher reliabilities.
Abstract: Reed–Solomon (RS) codes are widely used as error-correcting codes in digital communication and storage systems. Algebraic soft-decision decoding (ASD) of RS codes can achieve substantial coding gain with polynomial complexity. Among practical ASD algorithms, the low-complexity Chase (LCC) algorithm that tests 2^η vectors can achieve similar or higher coding gain with lower complexity. For applications such as magnetic recording, the performance of the LCC decoding is degraded by the inter-symbol interference from the channel. Improving the performance of the LCC decoding requires a larger η, which leads to higher complexity. In this paper, a modified LCC (MLCC) decoding is proposed by adding erasures to the test vectors. With the same η, the proposed algorithm can achieve much better performance than the original LCC decoding. One major step of the LCC and MLCC decoding is the interpolation. To reduce the complexity of the interpolation, this paper also proposes a prioritized interpolation scheme that tests a small proportion of the vectors at a time, starting with the ones with higher reliabilities. For a (458, 410) RS code, by testing 1/8 of the vectors at a time, the area requirement of the MLCC decoder with η = 8 can be reduced to 57%, and the average decoding latency is reduced to 73%.
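The 2^η test vectors central to LCC decoding are formed by toggling the η least reliable symbol positions between their two most likely candidates. A minimal sketch of that construction (the MLCC erasure extension is not shown):

```python
from itertools import product

def lcc_test_vectors(hard, second, reliabilities, eta):
    """Build the 2**eta LCC test vectors.

    hard[i]         : most likely symbol at position i
    second[i]       : second most likely symbol at position i
    reliabilities[i]: confidence of the hard decision at position i
    The eta least reliable positions are toggled between their two
    candidate symbols, yielding 2**eta test vectors.
    """
    unreliable = sorted(range(len(hard)), key=lambda i: reliabilities[i])[:eta]
    vectors = []
    for choice in product([0, 1], repeat=eta):
        v = list(hard)
        for pos, bit in zip(unreliable, choice):
            if bit:
                v[pos] = second[pos]
        vectors.append(v)
    return vectors
```

The prioritized interpolation scheme in the paper amounts to ordering these vectors by reliability and decoding only a fraction of them at a time.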

14 citations

Journal ArticleDOI
TL;DR: A deep learning based iterative soft-decision decoding algorithm for Reed–Solomon codes that takes advantage of a deep neural network and stochastic shifting based iterative decoding (SSID); by changing the SSID architecture, it achieves better decoding performance.
Abstract: In this letter, a deep learning based iterative soft-decision decoding algorithm for Reed–Solomon codes is proposed. The algorithm takes advantage of a deep neural network and stochastic shifting based iterative decoding (SSID). By assigning weights to every edge in the Tanner graph and changing the architecture of the SSID method, the proposed neural network decoder achieves better decoding performance. Compared with BM decoding, HDD-LCC, and traditional SSID, simulation results show that for RS(15, 11) this algorithm provides coding gains of up to 1.5 dB, 1.1 dB, and 0.5 dB, respectively, at FER = 10^−2.

14 citations


Cited by
Journal ArticleDOI
01 Sep 1986
R. E. Blahut, Theory and Practice of Error Control Codes, Addison-Wesley, 1983.

597 citations

01 Jan 2013
TL;DR: An efficient beam alignment technique using adaptive subspace sampling and hierarchical beam codebooks is proposed, addressing spectrum reusability through millimeter-wave wireless backhaul and flexible prototyping on a software-defined radio (SDR) platform.
Abstract: Mobile data traffic will continue its tremendous growth in some markets and has already resulted in an apparent radio spectrum scarcity. There is a strong need for more efficient methods to use spectrum resources, leading to extensive research on increasing spectrum reusability on flexible radio platforms. This study addresses the problem in two subtopics: millimeter-wave communication on wireless backhaul for spectrum reusability, and a flexible prototyping radio platform using software-defined radio (SDR). Wireless backhaul has received significant attention as a key technology affecting the development of future wireless cellular networks because it helps to easily deploy many small cells, an essential part of a high-capacity system. Millimeter wave is considered a possible candidate for cost-effective wireless backhaul. In outdoor deployments using millimeter wave, beamforming methods are key techniques to establish wireless links in the 60 GHz to 80 GHz band and to overcome path-loss constraints (i.e., rainfall effects and oxygen absorption). The millimeter-wave communication system cannot directly access channel knowledge; to overcome this, a beamforming method based on a codebook search is considered. In the first part, we propose an efficient beam alignment technique using adaptive subspace sampling and hierarchical beam codebooks. A wind sway analysis is presented to establish a notion of beam coherence time. This highlights a previously unexplored tradeoff between array size and wind-induced movement. Generally, it is not possible to use larger arrays without risking a performance loss from wind-induced beam misalignment. The performance of the proposed alignment technique is analyzed and compared with other search and alignment methods.
Results show significant performance improvement with reduced search time. In the second part of this study, SDR is discussed as an approach toward flexible wireless communication systems. Most layers of an SDR are implemented in software, so only a software change is needed to transform the type of radio system. Translating the signal processing into software run on a regular computer opens up a huge number of possibilities at a reasonable price and effort, and SDR systems are widely used to build prototypes, saving time and money. In this project, a robust wireless communication system for high-interference environments was developed. For the physical layer (PHY) of the system, we implemented a channel sub-banding method that utilizes frequency division multiplexing to avoid interference. Then, to handle a more severely interfered channel, direct-sequence spread spectrum (DSSS) was considered and implemented. These prototyped testbeds were evaluated for system performance in the interference environment.
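A hierarchical beam codebook search can be sketched as bisection over the angular sector: each level probes the two halves of the current best beam and descends into the stronger one, so a final beam out of 2**depth candidates is found with only 2*depth measurements instead of an exhaustive sweep. The measure(lo, hi) received-power interface below is an assumption for illustration:

```python
def hierarchical_beam_search(measure, depth):
    """Bisection search over a normalized angular sector [0, 1).

    measure(lo, hi) is assumed to return the received power when a
    wide beam covering the sector [lo, hi) is transmitted; each level
    keeps the half-sector with the stronger measurement.
    """
    lo, hi = 0.0, 1.0
    for _ in range(depth):
        mid = (lo + hi) / 2
        if measure(lo, mid) >= measure(mid, hi):
            hi = mid     # left half is stronger
        else:
            lo = mid     # right half is stronger
    return lo, hi
```

The logarithmic number of probes is what makes codebook-based alignment practical when the beam must be re-acquired within the beam coherence time.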

103 citations

Journal ArticleDOI
TL;DR: In this paper, the influence of ten groups of mixtures on the performance of an organic Rankine cycle (ORC) was analyzed through theoretical analysis and detailed numerical simulation; the results showed that the maximum net power output of the ORC using each mixture corresponds to an optimal mixture ratio that gives the highest temperature glide.

67 citations

Journal ArticleDOI
TL;DR: A detailed analysis of currently used short-packet coding schemes in industrial wireless sensor networks, covering seven coding schemes and their possible variants, with four proposed principles to filter out the most promising schemes.
Abstract: High-performance wireless (WirelessHP) networks for industrial control applications demand channel coding that is carefully chosen and optimized. However, the coding schemes adopted in modern wireless communication standards are not sufficient for WirelessHP applications in terms of both low latency and high reliability. Starting from the essential characteristics of WirelessHP regarding channel coding, this paper gives a detailed analysis of currently used short-packet coding schemes in industrial wireless sensor networks, covering seven coding schemes and their possible variants. The metrics employed for evaluation are bit error rate, packet error rate, and throughput. To find suitable coding schemes from a large number of options, we propose four principles to filter out the most promising ones. Based on an overall comparison from the perspective of practical implementation, the challenges of the available coding schemes are analyzed, and directions are recommended for future research. Some reflections on how to construct specially designed coding schemes for short packets that meet the high-reliability and low-latency constraints of WirelessHP are also provided.
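Two of the evaluation metrics are linked by a standard identity: assuming independent bit errors, the packet error rate follows from the bit error rate as PER = 1 - (1 - BER)^n for an n-bit packet. A one-function sketch (uncoded case, not any specific scheme from the paper):

```python
def per_from_ber(ber, packet_bits):
    # PER = 1 - (1 - BER)**n, assuming independent bit errors (uncoded)
    return 1.0 - (1.0 - ber) ** packet_bits
```

This identity is one reason short packets sit at the heart of the latency/reliability tradeoff: for a fixed BER, shrinking the packet length directly lowers the PER.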

52 citations

Journal ArticleDOI
Jonghoon Kim1
TL;DR: Experimental results clearly demonstrate the proposed DWT-based approach for cell discrimination; using wavelet decomposition implementing multiresolution analysis (MRA), it is possible to discriminate Li-ion cells with similar electrochemical characteristics based on information extracted from the ECDVS over wide frequency ranges.
Abstract: Differences in electrochemical characteristics among the Li-ion cells in a battery pack inevitably result in voltage and state-of-charge (SOC) imbalances caused by cell-to-cell variation. Therefore, in this approach, with lower requirements on active and passive balancing circuits, a novel approach based on the discrete wavelet transform (DWT), which is well suited for analyzing and evaluating an experimental charging/discharging voltage signal (ECDVS), is introduced to minimize the aforementioned problem. The ECDVS can be applied as source data in the DWT-based analysis because of the DWT's ability to extract variable information on electrochemical characteristics from nonstationary and transient phenomena simultaneously in both the time and frequency domains. By using wavelet decomposition implementing multiresolution analysis (MRA), it is possible to discriminate Li-ion cells that have similar electrochemical characteristics based on the information extracted from the ECDVS over wide frequency ranges. Consequently, experimental results clearly demonstrate the proposed DWT-based approach for cell discrimination.

41 citations