Proceedings ArticleDOI

Data compression using Shannon-fano algorithm implemented by VHDL

TL;DR: This paper implements the Shannon-Fano algorithm for data compression through VHDL coding; the algorithm is also used in the implode compression method found in the ZIP and RAR file formats.
Abstract: In digital communication it is desirable that the number of transmitted data bits be as small as possible, and several techniques are used to compress the data. In this paper we have implemented the Shannon-Fano algorithm for data compression through VHDL coding. Using the VHDL implementation we can easily observe how many bits are saved, i.e. how much the data is compressed during transmission, and we can also see the encoding of each transmitted symbol. The Shannon-Fano algorithm is used in the field of data compression; it is also used in the implode compression method employed in the ZIP and RAR file formats. To implement the algorithm in VHDL we used the ModelSim SE 64 simulator, and the code was synthesized with the Quartus-II tool.
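As a behavioral reference for the coding step described above, the following Python sketch shows the classic Shannon-Fano procedure: sort the symbols by frequency, recursively split the list into two groups of near-equal total weight, and prepend 0 or 1 to each group's codes. It is only an illustrative software model with assumed input data; the paper's actual design is written in VHDL, simulated in ModelSim and synthesized with Quartus-II, and its architecture is not reproduced here.

# Illustrative Shannon-Fano coder (software sketch, not the paper's VHDL design)
from collections import Counter

def shannon_fano(pairs):
    """Assign Shannon-Fano codes to (symbol, frequency) pairs sorted by descending frequency."""
    codes = {sym: "" for sym, _ in pairs}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(f for _, f in group)
        running, best_i, best_diff = 0, 1, float("inf")
        # choose the split point that makes the two halves' weights most nearly equal
        for i in range(1, len(group)):
            running += group[i - 1][1]
            diff = abs((total - running) - running)
            if diff < best_diff:
                best_i, best_diff = i, diff
        left, right = group[:best_i], group[best_i:]
        for sym, _ in left:
            codes[sym] += "0"
        for sym, _ in right:
            codes[sym] += "1"
        split(left)
        split(right)

    split(pairs)
    return codes

data = "SHANNON FANO EXAMPLE"                                   # assumed sample message
pairs = sorted(Counter(data).items(), key=lambda kv: kv[1], reverse=True)
codes = shannon_fano(pairs)
encoded = "".join(codes[c] for c in data)
print(codes)
print(f"original bits: {len(data) * 8}, compressed bits: {len(encoded)}")

Running the sketch on the sample string reports exactly the kind of figure the abstract refers to: the bit count of the encoded stream versus the 8 bits per symbol of the uncompressed input.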
Citations
Journal ArticleDOI
TL;DR: In this paper, the authors developed a method for compressing NMEA data so that GNSS information can be transmitted to an autonomous vehicle or other infrastructure, such as a satellite, with maximum accuracy and efficiency.
Abstract: Autonomous vehicles contain many sensors that enable them to drive by themselves. They need to communicate wirelessly with other vehicles (V2V) and with infrastructure (V2I) such as satellites over diverse connections to achieve safety, reliability, and efficiency. Information transfer from remote communication devices is a critical task and should be accomplished quickly, in real time, and with maximum reliability; a message that arrives late, arrives with errors, or does not arrive at all can create an unsafe situation. This study employs data compression to transmit GNSS information efficiently to an autonomous vehicle or other infrastructure, such as a satellite, with maximum accuracy and efficiency. We developed a method for compressing NMEA data, and our results were better than those reported in current studies while also supporting error tolerance and data omission.
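The abstract above does not detail the compression scheme itself, so the snippet below is only a hypothetical illustration of the redundancy NMEA data carries: consecutive $GPGGA sentences differ mainly in a few fields, so delta-encoding positions against the previous fix already shrinks the payload considerably. The sentences, field indices, and function names are assumptions, not the authors' method.

# Hypothetical delta encoding of consecutive NMEA GGA fixes (illustration only)
def parse_gga(sentence):
    """Pull latitude and longitude (raw ddmm.mmmm values) out of a $GPGGA sentence."""
    fields = sentence.split(",")
    return float(fields[2]), float(fields[4])

def delta_encode(fixes):
    """Keep the first fix in full and transmit only the differences for later fixes."""
    deltas = [(round(cur[0] - prev[0], 4), round(cur[1] - prev[1], 4))
              for prev, cur in zip(fixes, fixes[1:])]
    return fixes[0], deltas

sentences = [                                            # assumed example sentences
    "$GPGGA,123519,4807.0380,N,01131.0000,E,1,08,0.9,545.4,M,46.9,M,,*47",
    "$GPGGA,123520,4807.0412,N,01131.0023,E,1,08,0.9,545.6,M,46.9,M,,*4A",
    "$GPGGA,123521,4807.0447,N,01131.0051,E,1,08,0.9,545.7,M,46.9,M,,*4B",
]
base, deltas = delta_encode([parse_gga(s) for s in sentences])
print(base)    # first position sent in full
print(deltas)  # small per-fix differences, far cheaper to transmit than full sentences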

3 citations

DOI
28 Aug 2019
TL;DR: In this article, the Shannon-Fano compression algorithm was applied to homogeneous digital images using an application created with MATLAB, and the results showed that the compressed images reached about half the size of the originals.
Abstract: Compression is a field that is needed today, as increasingly capable digital and computing technology makes it possible to process large data such as multimedia. Compression is needed to reduce the storage consumed by data and information kept on computer media. The Shannon-Fano compression algorithm is one of the well-known compression algorithms and is useful for saving data storage space; it can be applied to text and to digital images. In this study, the Shannon-Fano method is applied to homogeneous digital images using an application created with MATLAB. The application is tested with several images taken with a webcam to find out whether the coding used is correct or contains errors; it is considered successful if the compressed image is smaller than the original. Homogeneous images tested with the Shannon-Fano image compression application have an average compression ratio of 52%, so the compressed results reach about half the size of the original image. The Shannon-Fano compression algorithm counts how many times each character appears in each experiment and then encodes the characters as binary codes according to their frequencies.
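For readers unfamiliar with the ratio quoted above, a compression ratio of 52% simply means the compressed output is about 52% of the original size. The arithmetic below uses assumed byte counts; the study's images and MATLAB application are not reproduced.

# Compression ratio = compressed size / original size (byte counts assumed for illustration)
def compression_ratio(original_bytes, compressed_bytes):
    return compressed_bytes / original_bytes

original = 90_000       # e.g. a small grayscale webcam frame, assumed
compressed = 46_800     # assumed size of the Shannon-Fano output
ratio = compression_ratio(original, compressed)
print(f"compression ratio: {ratio:.0%}")    # 52%: the result is about half the original
print(f"space saving:      {1 - ratio:.0%}")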

1 citation

DOI
09 Aug 2022
TL;DR: This paper proposes the use of the Shannon-Fano compression technique to increase the PSNR (Peak Signal-to-Noise Ratio) value of the steganography image; the StegoCrypto algorithm used is AES (Advanced Encryption Standard).
Abstract: One way to secure information is to use cryptography and steganography. Cryptography converts an information message into a form that can no longer be understood, whereas steganography inserts a secret message into a container medium such as an image or sound so that no one is aware of the secret message's existence. Combining cryptography and steganography aims to provide a high level of security for information. In this paper, we propose the use of the Shannon-Fano compression technique to increase the PSNR (Peak Signal-to-Noise Ratio) value of the steganography image. The StegoCrypto algorithm used is the AES (Advanced Encryption Standard) cryptographic algorithm, the current standard for symmetric key encryption, together with the LSB (Least Significant Bit) steganography method. Compressing with the Shannon-Fano method reduces the amount of embedded data and therefore increases the PSNR of the steganography image. In the tests, the combination of methods produces a PSNR value above 40, which still meets the criterion for a good steganography image and is up to 2.785 dB, or 89%, better than a system that does not use compression.
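To make the PSNR argument concrete, the sketch below embeds a bit string into the least significant bits of a grayscale image and measures the PSNR against the original: a shorter (for example, compressed) payload flips fewer LSBs and therefore yields a higher PSNR. The cover image, payload sizes, and helper names are assumptions; the paper's AES and Shannon-Fano stages are omitted here.

# LSB embedding plus PSNR, showing why a compressed payload preserves more image quality
import numpy as np

def embed_lsb(image, bits):
    """Write each payload bit into the least significant bit of successive pixels."""
    flat = image.flatten()                      # flatten() returns a copy, so the cover is untouched
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b
    return flat.reshape(image.shape)

def psnr(original, stego):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)   # assumed cover image
payload = rng.integers(0, 2, size=4000)                         # uncompressed payload bits
short_payload = payload[:2000]                                  # e.g. after Shannon-Fano compression

print(f"PSNR, full payload:       {psnr(cover, embed_lsb(cover, payload)):.2f} dB")
print(f"PSNR, compressed payload: {psnr(cover, embed_lsb(cover, short_payload)):.2f} dB")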
Book ChapterDOI
04 May 2019
TL;DR: This paper focuses on the coefficient selection of the random measurement matrix to find the relationship between the image structural similarity coefficient and other characteristic indexes, and an algorithm for the dimension design of the measurement matrix is proposed.
Abstract: This paper is based on the sparse representation of signals in an orthogonal space. Data acquisition and compression are combined through compressed sensing theory: the image signal can be reconstructed from fewer observations obtained with a measurement matrix. Compressive sensing theory breaks through the limitations of conventional data sampling. In compressed sensing, the selection of the measurement matrix plays a key role in whether the compressed signal can be reconstructed. In this paper, different measurement matrices are selected to perform compressive sensing, and their similarity coefficient matrices are analyzed to compare performance. The paper focuses on the coefficient selection of the random measurement matrix in order to find the relationship between the image structural similarity coefficient and other characteristic indexes. An algorithm for the dimension design of the measurement matrix is proposed, and a high-performance algorithm for image compression sensing and image restoration is implemented.
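As a minimal, self-contained illustration of measuring with a random matrix and reconstructing a sparse signal (not the chapter's specific dimension-design algorithm), the sketch below senses a sparse vector with a Gaussian measurement matrix and recovers it with a basic orthogonal matching pursuit; the signal length, sparsity, and measurement count are assumed.

# Compressed sensing toy example: random Gaussian measurement matrix + orthogonal matching pursuit
import numpy as np

def omp(Phi, y, sparsity):
    """Greedily recover a 'sparsity'-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))   # most correlated column
        subset = Phi[:, support]
        coeffs, *_ = np.linalg.lstsq(subset, y, rcond=None)        # least-squares fit on the support
        residual = y - subset @ coeffs
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                                          # signal length, measurements, sparsity (assumed)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse test signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)                # random measurement matrix
y = Phi @ x                                                   # m << n observations

x_rec = omp(Phi, y, k)
print(f"relative reconstruction error: {np.linalg.norm(x - x_rec) / np.linalg.norm(x):.2e}")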
References
Book
01 Jan 2004
TL;DR: This book's highly original approach of teaching through extensive system examples, as well as its unique integration of VHDL and design, makes it suitable for students in both computer science and electrical engineering.
Abstract: This textbook teaches VHDL using system examples combined with programmable logic and supported by laboratory exercises. While other textbooks concentrate only on language features, Circuit Design with VHDL offers a fully integrated presentation of VHDL and design concepts by including a large number of complete design examples, illustrative circuit diagrams, a review of fundamental design concepts, fully explained solutions, and simulation results. The text presents the information concisely yet completely, discussing in detail all indispensable features of the VHDL synthesis. The book is organized in a clear progression, with the first part covering the circuit level, treating foundations of VHDL and fundamental coding, and the second part covering the system level (units that might be located in a library for code sharing, reuse, and partitioning), expanding upon the earlier chapters to discuss system coding. Part I, "Circuit Design," examines in detail the background and coding techniques of VHDL, including code structure, data types, operators and attributes, concurrent and sequential statements and code, objects (signals, variables, and constants), design of finite state machines, and examples of additional circuit designs. Part II, "System Design," builds on the material already presented, adding elements intended mainly for library allocation; it examines packages and components, functions and procedures, and additional examples of system design. Appendixes on programmable logic devices (PLDs/FPGAs) and synthesis tools follow Part II. The book's highly original approach of teaching through extensive system examples as well as its unique integration of VHDL and design make it suitable both for use by students in computer science and electrical engineering.

281 citations

Book
01 Sep 1993
TL;DR: This book covers Behavioral Modeling, Sequential Processing, Subprograms and Packages, and At-Speed Debugging Techniques.
Abstract: Table of contents:
Foreword
Preface
Acknowledgments
Chapter 1: Introduction to VHDL
Chapter 2: Behavioral Modeling
Chapter 3: Sequential Processing
Chapter 4: Data Types
Chapter 5: Subprograms and Packages
Chapter 6: Predefined Attributes
Chapter 7: Configurations
Chapter 8: Advanced Topics
Chapter 9: Synthesis
Chapter 10: VHDL Systems
Chapter 11: High Level Design Flow
Chapter 12: Top-Level System Design
Chapter 13: CPU: Synthesis Description
Chapter 14: CPU: RTL Simulation
Chapter 15: CPU Design: Synthesis Results
Chapter 16: Place and Route
Chapter 17: CPU: VITAL Simulation
Chapter 18: At Speed Debugging Techniques
Appendix A: Standard Logic Package
Appendix B: VHDL Reference Tables
Appendix C: Reading VHDL BNF
Appendix D: VHDL93 Updates
Index
About the Author

186 citations