
Showing papers on "Lossless compression published in 2022"


Journal ArticleDOI
TL;DR: G-TADOC as mentioned in this paper proposes a fine-grained thread-level workload scheduling strategy for GPU threads, which partitions heavily-dependent loads adaptively.
Abstract: With the development of computer architecture, even for embedded systems, GPU devices can be integrated, providing outstanding performance and energy efficiency to meet the requirements of different industries, applications, and deployment environments. Data analytics is an important application scenario for embedded systems. Unfortunately, due to the limited capacity of embedded devices, the scale of problems an embedded system can handle is constrained. In this paper, we propose a novel data analytics method, called G-TADOC, for efficient text analytics directly on compression on embedded GPU systems. A large amount of data can be compressed and stored in embedded systems, and can be processed directly in the compressed state, which greatly enhances the processing capabilities of the systems. Particularly, G-TADOC has three innovations. First, a novel fine-grained thread-level workload scheduling strategy for GPU threads has been developed, which partitions heavily-dependent loads adaptively in a fine-grained manner. Second, a GPU thread-safe memory pool has been developed to handle inconsistency with low synchronization overheads. Third, a sequence-support strategy is provided to maintain high GPU parallelism while ensuring sequence information for lossless compression. Moreover, G-TADOC involves special optimizations for embedded GPUs, such as utilizing the CPU-GPU shared unified memory. Experiments show that G-TADOC provides 13.2× average speedup compared to the state-of-the-art TADOC. G-TADOC also improves performance-per-cost by 2.6× and energy efficiency by 32.5× over TADOC.
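To make the idea of "analytics directly on compression" concrete, here is a minimal, CPU-only Python sketch of the TADOC-style computation: word counts are evaluated directly on a grammar-compressed representation, with per-rule memoization so repeated rules are processed only once. The toy rule format and the word-count task are illustrative assumptions; the paper's GPU scheduling, memory pool, and sequence support are not shown.

```python
from collections import Counter
from functools import lru_cache

# Toy grammar-compressed text: rule 0 is the root; integers reference other
# rules, strings are literal words (a simplified stand-in for a TADOC grammar).
RULES = {
    0: [1, "the", 1, "end"],
    1: ["the", "quick", "fox", 2, 2],
    2: ["jumps", "again"],
}

@lru_cache(maxsize=None)
def word_counts(rule_id):
    """Count words under one rule; repeated rules are processed only once."""
    counts = Counter()
    for symbol in RULES[rule_id]:
        if isinstance(symbol, int):        # nested rule: reuse its cached counts
            counts.update(word_counts(symbol))
        else:                              # literal word
            counts[symbol] += 1
    return counts

# Word counts for the whole document, obtained without decompressing the text.
print(word_counts(0))
```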

23 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new end-to-end optimized image compression scheme, in which iWave, a trained wavelet-like transform, converts images into coefficients without any information loss, and then the coefficients are optionally quantized and encoded into bits.
Abstract: Built on deep networks, end-to-end optimized image compression has made impressive progress in the past few years. Previous studies usually adopt a compressive auto-encoder, where the encoder part first converts the image into latent features, and then quantizes the features before encoding them into bits. Both the conversion and the quantization incur information loss, making it difficult to optimally achieve an arbitrary compression ratio. We propose iWave++ as a new end-to-end optimized image compression scheme, in which iWave, a trained wavelet-like transform, converts images into coefficients without any information loss. Then the coefficients are optionally quantized and encoded into bits. Different from the previous schemes, iWave++ is versatile: a single model supports both lossless and lossy compression, and also achieves arbitrary compression ratio by simply adjusting the quantization scale. iWave++ also features a carefully designed entropy coding engine to encode the coefficients progressively, and a de-quantization module for lossy compression. Experimental results show that lossy iWave++ achieves state-of-the-art compression efficiency compared with deep network-based methods; on the Kodak dataset, lossy iWave++ leads to 17.34 percent bit savings over BPG; lossless iWave++ achieves comparable or better performance than FLIF. Our code and models are available at https://github.com/mahaichuan/Versatile-Image-Compression.
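The lossless property rests on the transform being exactly invertible in integer arithmetic. iWave's trained transform is not reproduced here; as a hedged stand-in, the sketch below shows a one-level reversible integer 5/3 lifting wavelet in 1-D (with simplified periodic boundary handling), which has the same perfect-reconstruction property the lossless mode relies on.

```python
import numpy as np

def lift_53_forward(x):
    """One level of the reversible integer 5/3 lifting transform (1-D, even length,
    periodic boundary handling for simplicity)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    odd -= (even + np.roll(even, -1)) // 2          # predict: high-pass band
    even += (odd + np.roll(odd, 1) + 2) // 4        # update: low-pass band
    return even, odd

def lift_53_inverse(low, high):
    low = low - (high + np.roll(high, 1) + 2) // 4  # undo update
    high = high + (low + np.roll(low, -1)) // 2     # undo predict
    x = np.empty(low.size + high.size, dtype=np.int64)
    x[0::2], x[1::2] = low, high
    return x

signal = np.random.randint(0, 256, size=16)
low, high = lift_53_forward(signal)
assert np.array_equal(lift_53_inverse(low, high), signal)   # perfect reconstruction
```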

22 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors combined log-polar transform (LPT) and discrete cosine transform (DCT) for medical images, and realized the lossless embedding of patient information into medical images.
Abstract: In the information age, network security has gradually become a potentially huge problem. Especially in the medical field, it is essential to ensure the accuracy and safety of images, and patient information needs to be included with minimal change. Combining the log-polar transform (LPT) and the discrete cosine transform (DCT), a novel robust watermarking algorithm for medical images is proposed. It realizes the lossless embedding of patient information into medical images. In the process of feature extraction and watermark embedding, the proposed algorithm reflects the characteristics of LPT, namely scale invariance and rotation invariance, and retains the advantages of DCT, namely its ability to resist conventional attacks and its robustness. As it adopts zero-watermarking technology, it avoids the defects caused by traditional watermark embedding, which modifies the original image data, and thus guarantees the quality of medical images. Experimental results demonstrate the effectiveness of the algorithm.
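As an illustration of the zero-watermarking idea referenced above (the host image is never modified), here is a minimal Python sketch that derives a binary feature from the signs of low-frequency DCT coefficients and XORs it with the watermark to form an ownership share. This is a deliberate simplification: the paper additionally uses the log-polar transform for scale and rotation invariance, and its exact feature construction is not reproduced.

```python
import numpy as np
from scipy.fft import dctn

def make_ownership_share(image, watermark_bits):
    """Zero-watermarking: derive a binary feature from low-frequency DCT signs
    and XOR it with the watermark. The host image is never modified."""
    coeffs = dctn(image.astype(float), norm="ortho")
    feature = (coeffs[:8, :8].ravel()[: watermark_bits.size] >= 0).astype(np.uint8)
    return feature ^ watermark_bits          # stored externally as the ownership share

def extract_watermark(image, share):
    coeffs = dctn(image.astype(float), norm="ortho")
    feature = (coeffs[:8, :8].ravel()[: share.size] >= 0).astype(np.uint8)
    return feature ^ share

rng = np.random.default_rng(0)
medical_image = rng.integers(0, 256, size=(128, 128))
watermark = rng.integers(0, 2, size=32, dtype=np.uint8)
share = make_ownership_share(medical_image, watermark)
assert np.array_equal(extract_watermark(medical_image, share), watermark)
```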

20 citations




Journal ArticleDOI
TL;DR: An edge-fog computing-enabled lossless electroencephalogram (EEG) data compression with epileptic seizure detection in IoMT networks is proposed and the proposed ESDNB outperforms the other methods in terms of accuracy.
Abstract: The need to improve smart health systems to monitor the health situation of patients has grown as a result of the spread of epidemic diseases, the ageing of the population, the increase in the number of patients, and the lack of facilities to treat them. This led to an increased demand for remote healthcare systems using biosensors. These biosensors produce a large volume of sensed data that will be received by the edge of the Internet of Medical Things (IoMT) to be forwarded to the data centers of the cloud for further treatment. An edge-fog computing-enabled lossless electroencephalogram (EEG) data compression scheme with epileptic seizure detection in IoMT networks is proposed in this article. The proposed approach achieves three functionalities. First, it reduces the amount of sent data from the edge to the fog gateway using lossless EEG data compression based on a hybrid approach of k-means clustering and Huffman encoding (KCHE) at the edge gateway. Second, it decides the epileptic seizure situation of the patient at the fog gateway based on the epileptic seizure detector-based Naive Bayes (ESDNB) algorithm. Third, it reduces the size of IoMT EEG data delivered to the cloud using the same lossless compression algorithm as in the first step. Various measures were implemented to show the effectiveness of the suggested approach, and the comparison results confirm that the KCHE reduces the amount of EEG data transmitted to the fog and cloud platform and produces a suitable detection of an epileptic seizure. The average compression power of the proposed KCHE is four times that of the other methods for all EEG records (Z, F, N, O, and S). Furthermore, the proposed ESDNB outperforms the other methods in terms of accuracy, where it provides accuracy from 99.53% up to 99.99% using the Bonn University data set.
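As a minimal illustration of the entropy-coding half of KCHE, the Python sketch below builds a Huffman code from a toy sample stream and verifies an exact round trip. The k-means clustering stage, the edge/fog split, and the Naive Bayes detector are not reproduced, and building the code table directly from the data being encoded is an assumption made for brevity.

```python
import heapq
from collections import Counter

def huffman_table(symbols):
    """Build a Huffman code table (symbol -> bit string) from a symbol stream."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate stream with one symbol
        return {next(iter(freq)): "0"}
    heap = [[weight, i, [sym, ""]] for i, (sym, weight) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)                          # unique tie-breaker so lists never compare
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], tie] + lo[2:] + hi[2:])
        tie += 1
    return dict(heap[0][2:])

# Lossless round trip on a toy quantized "EEG" sample stream.
samples = [3, 3, 3, 2, 2, 1, 3, 0, 2, 3]
table = huffman_table(samples)
bitstream = "".join(table[s] for s in samples)
decoder = {code: sym for sym, code in table.items()}
decoded, buffer = [], ""
for bit in bitstream:
    buffer += bit
    if buffer in decoder:                    # prefix-free code: unambiguous decoding
        decoded.append(decoder[buffer])
        buffer = ""
assert decoded == samples                    # exact reconstruction
```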

16 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors designed a drop-tolerant secure aggregation algorithm, FTSA, which ensures the confidentiality of local updates, and proposed a lossless model perturbation mechanism, PTSP, to protect sensitive data in the global model parameters.

12 citations


Journal ArticleDOI
20 Jan 2022-Optica
TL;DR: In this article, the authors proposed a concept for simultaneous amplification and noise mitigation of temporal waveforms, which is shown to work well on optical signals with bandwidths spanning several orders of magnitude, from the kHz to GHz scale.
Abstract: Mitigating the stochastic noise introduced during the generation, transmission, and detection of temporal optical waveforms remains a significant challenge across many applications, including radio-frequency photonics, light-based telecommunications, and spectroscopy. The problem is particularly difficult for the weak-intensity signals often found in practice. Active amplification worsens the signal-to-noise ratio, whereas noise mitigation based on optical bandpass filtering further attenuates the waveform of interest. Additionally, current optical filtering approaches are not optimal for signal bandwidths narrower than just a few GHz. We propose a versatile concept for simultaneous amplification and noise mitigation of temporal waveforms, here successfully demonstrated on optical signals with bandwidths spanning several orders of magnitude, from the kHz to GHz scale. The concept is based on lossless temporal sampling of the incoming coherent waveform through Talbot processing. By reaching high gain factors (>100), we show the recovery of ultra-weak optical signals, with power levels below the detector threshold, additionally buried under a much stronger noise background. The method is inherently self-tracking, a capability demonstrated by simultaneously denoising four data signals in a dense wavelength division multiplexing scheme.

12 citations


Journal ArticleDOI
TL;DR: This paper presents a bibliometric analysis and literature survey of Deep Learning (DL) methods used in video compression in recent years, and provides information on DL-based approaches for video compression, as well as the advantages, disadvantages, and challenges of using them.
Abstract: Every kind of data needs physical storage. There has been an explosion in the volume of images, video, and other similar data types circulated over the internet. Internet users expect intelligible data, even under multiple resource constraints such as bandwidth bottlenecks and noisy channels. Therefore, data compression is becoming a fundamental problem in wider engineering communities. There has been some related work on data compression using neural networks. Various machine learning approaches are currently applied in data compression techniques and tested to obtain better lossy and lossless compression results. A large variety of efficient research is already available for image compression; however, this is not the case for video compression. Because of the explosion of big data and the widespread use of cameras globally, around 82% of the data generated involves video. Proposed approaches have used Deep Neural Networks (DNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and various variants of Autoencoders (AEs). All newly proposed methods aim to increase performance (reducing bitrate by up to 50% at the same data quality and complexity). This paper presents a bibliometric analysis and literature survey of Deep Learning (DL) methods used in video compression in recent years. Scopus and Web of Science are well-known research databases, and the results retrieved from them are used for this analytical study. Two types of analysis, quantitative and qualitative, are performed on the extracted documents. In the quantitative analysis, records are analyzed based on their citations, keywords, source of publication, and country of publication. The qualitative analysis provides information on DL-based approaches for video compression, as well as the advantages, disadvantages, and challenges of using them.

12 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed an end-to-end optimized learning framework for losslessly compressing 3D volumetric data, evaluated on 3D Medical Images and Hyper-Spectral Images.
Abstract: 3D volumetric image processing has attracted increasing attention in the last decades, in which one major research area is to develop efficient lossless volumetric image compression techniques to better store and transmit such images with massive amount of information. In this work, we propose the first end-to-end optimized learning framework for losslessly compressing 3D volumetric data. Our approach builds upon a hierarchical compression scheme by additionally introducing the intra-slice auxiliary features and estimating the entropy model based on both intra-slice and inter-slice latent priors. Specifically, we first extract the hierarchical intra-slice auxiliary features through multi-scale feature extraction modules. Then, an Intra-slice and Inter-slice Conditional Entropy Coding module is proposed to fuse the intra-slice and inter-slice information from different scales as the context information. Based on such context information, we can predict the distributions for both intra-slice auxiliary features and the slice images. To further improve the lossless compression performance, we also introduce two new gating mechanisms called Intra-Gate and Inter-Gate to generate the optimal feature representations for better information fusion. Eventually, we can produce the bitstream for losslessly compressing volumetric images based on the estimated entropy model. Different from the existing lossless volumetric image codecs, our end-to-end optimized framework jointly learns both intra-slice auxiliary features at different scales for each slice and inter-slice latent features from previously encoded slices for better entropy estimation. The extensive experimental results indicate that our framework outperforms the state-of-the-art hand-crafted lossless volumetric image codecs (e.g., JP3D) and the learning-based lossless image compression method on four volumetric image benchmarks for losslessly compressing both 3D Medical Images and Hyper-Spectral Images.
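As a rough, hypothetical reading of the gating mechanisms mentioned above, the PyTorch sketch below fuses intra-slice features with inter-slice context through a learned sigmoid gate. The layer sizes, the 3x3 convolution, and the mixing formula are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FusionGate(nn.Module):
    """Hypothetical gating block in the spirit of the Intra-/Inter-Gate idea:
    a learned sigmoid mask decides, per channel and position, how much
    inter-slice context to mix into the intra-slice features."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, intra_feat, inter_feat):
        g = self.gate(torch.cat([intra_feat, inter_feat], dim=1))
        return g * intra_feat + (1 - g) * inter_feat

fuse = FusionGate(channels=16)
intra = torch.randn(1, 16, 32, 32)   # features of the slice being encoded
inter = torch.randn(1, 16, 32, 32)   # latent context from previously coded slices
print(fuse(intra, inter).shape)      # torch.Size([1, 16, 32, 32])
```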

12 citations


Journal ArticleDOI
01 May 2022-Sensors
TL;DR: A lossless image compression algorithm based on the discrete atomic transform (DAT), which is built on a special class of atomic functions generalizing the well-known up-function of V.A. Rvachev, is developed, and its performance is studied for different structures of DAT.
Abstract: Digital images are used in various technological, financial, economic, and social processes. Huge datasets of high-resolution images require protected storage and low resource-intensive processing, especially when applying edge computing (EC) for designing Internet of Things (IoT) systems for industrial domains such as autonomous transport systems. For this reason, the problem of the development of image representation, which provides compression and protection features in combination with the ability to perform low complexity analysis, is relevant for EC-based systems. Security and privacy issues are important for image processing considering IoT and cloud architectures as well. To solve this problem, we propose to apply discrete atomic transform (DAT) that is based on a special class of atomic functions generalizing the well-known up-function of V.A. Rvachev. A lossless image compression algorithm based on DAT is developed, and its performance is studied for different structures of DAT. This algorithm, which combines low computational complexity, efficient lossless compression, and reliable protection features with convenient image representation, is the main contribution of the paper. It is shown that a sufficient reduction of memory expenses can be obtained. Additionally, a dependence of compression efficiency measured by compression ratio (CR) on the structure of DAT applied is investigated. It is established that the variation of DAT structure produces a minor variation of CR. A possibility to apply this feature to data protection and security assurance is grounded and discussed. In addition, a structure or file for storing the compressed and protected data is proposed, and its properties are considered. Multi-level structure for the application of atomic functions in image processing and protection for EC in IoT systems is suggested and analyzed.

Journal ArticleDOI
TL;DR: A novel Compression-Based Data Reduction (CBDR) technology and an effective data transmission strategy derived from data correlation are developed at the sensor node level, designed to compress data readings from IoT devices more efficiently.

Journal ArticleDOI
TL;DR: In this paper , the authors characterize operationally meaningful quantum gains in a paradigmatic model of lossless multiple-phase interferometry and stress the insufficiency of the analysis based solely on the concept of quantum Fisher information.
Abstract: We characterize operationally meaningful quantum gains in a paradigmatic model of lossless multiple-phase interferometry and stress the insufficiency of the analysis based solely on the concept of quantum Fisher information. We show that the advantage of the optimal simultaneous estimation scheme amounts to a constant factor improvement when compared with schemes where each phase is estimated separately, which is contrary to widely cited results claiming a better precision scaling in terms of the number of phases involved.

Journal ArticleDOI
TL;DR: In this article, the authors proposed compression techniques using the optimized tunable-Q wavelet transform (TQWT) for ECG signals, which are stored in a digitized format at a high number of bits per sample that requires ample storage space.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the lossless coupling method of photovoltaic and thermoelectric devices to address the issue of large electrical coupling losses under different incident power densities.

Journal ArticleDOI
TL;DR: In this paper, the effects of incident angle, metamaterial thickness, and electromagnetic damping on the EM wave transmission were analyzed with the help of a simple transfer matrix method, and it was shown that the splitting frequency increases with increasing incident angle, exhibiting a blue shift.

Journal ArticleDOI
TL;DR: Liu et al. as mentioned in this paper proposed a top-down approach to estimate the just noticeable difference (JND) of natural images, which refers to the maximum pixel intensity change magnitude that the typical human visual system (HVS) cannot perceive.
Abstract: Just noticeable difference (JND) of natural images refers to the maximum pixel intensity change magnitude that the typical human visual system (HVS) cannot perceive. Existing efforts on JND estimation are mainly dedicated to modeling the diverse masking effects in the spatial and/or frequency domains, and then fusing them into an overall JND estimate. In this work, we turn to a dramatically different way to address this problem with a top-down design philosophy. Instead of explicitly formulating and fusing different masking effects in a bottom-up way, the proposed JND estimation model first predicts a critical perceptual lossless (CPL) counterpart of the original image and then calculates the difference map between the original image and the predicted CPL image as the JND map. We conduct subjective experiments to determine the critical points of 500 images and find that the distribution of cumulative normalized KLT coefficient energy values over all 500 images at these critical points can be well characterized by a Weibull distribution. Given a testing image, its corresponding critical point is determined by a simple weighted average scheme where the weights are determined by a fitted Weibull distribution function. The performance of the proposed JND model is evaluated explicitly with direct JND prediction and implicitly with three applications including JND-guided noise injection, JND-guided image compression, and distortion detection and discrimination. Experimental results have demonstrated the promising performance of the proposed JND model. The data and code of this work are available at https://github.com/Zhentao-Liu/KLT-JND.
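To make the quantity at the center of this model concrete, here is a small Python sketch that computes the cumulative normalized KLT (PCA) coefficient energy of an image's 8x8 patches, the curve whose critical point the authors characterize with a Weibull fit. The patch size, the synthetic test image, and the PCA-on-patches construction are illustrative assumptions; the subjective calibration and the CPL prediction network are not reproduced.

```python
import numpy as np

def cumulative_klt_energy(image, patch=8):
    """Cumulative normalized KLT (PCA) coefficient energy over an image's
    non-overlapping patches: the curve whose critical point is modelled above."""
    h, w = image.shape
    blocks = (image[: h - h % patch, : w - w % patch]
              .reshape(h // patch, patch, w // patch, patch)
              .swapaxes(1, 2)
              .reshape(-1, patch * patch)
              .astype(float))
    blocks -= blocks.mean(axis=0)
    _, eigvecs = np.linalg.eigh(np.cov(blocks, rowvar=False))
    coeffs = blocks @ eigvecs[:, ::-1]              # KLT basis, descending variance
    energy = (coeffs ** 2).sum(axis=0)
    return np.cumsum(energy) / energy.sum()

rng = np.random.default_rng(1)
test_image = rng.integers(0, 256, size=(64, 64)).astype(float)
curve = cumulative_klt_energy(test_image)
print(curve[0], curve[-1])                          # increases monotonically to 1.0
```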

Journal ArticleDOI
TL;DR: In this article, the authors proposed an algorithm for fitting distributed linear mixed models (DLMMs) without sharing IPD across sites, which achieves results identical to those achieved using pooled IPD from multiple sites (i.e., the same effect size and standard error estimates).
Abstract: Linear mixed models are commonly used in healthcare-based association analyses for analyzing multi-site data with heterogeneous site-specific random effects. Due to regulations for protecting patients’ privacy, sensitive individual patient data (IPD) typically cannot be shared across sites. We propose an algorithm for fitting distributed linear mixed models (DLMMs) without sharing IPD across sites. This algorithm achieves results identical to those achieved using pooled IPD from multiple sites (i.e., the same effect size and standard error estimates), hence demonstrating the lossless property. The algorithm requires each site to contribute minimal aggregated data in only one round of communication. We demonstrate the lossless property of the proposed DLMM algorithm by investigating the associations between demographic and clinical characteristics and length of hospital stay in COVID-19 patients using administrative claims from the UnitedHealth Group Clinical Discovery Database. We extend this association study by incorporating 120,609 COVID-19 patients from 11 collaborative data sources worldwide.
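The "lossless from aggregates" idea can be seen in miniature for ordinary linear regression: pooled estimates are recoverable exactly from per-site X'X and X'y summaries. The Python sketch below demonstrates that equivalence on synthetic data; it is a simplification under stated assumptions (fixed effects only, no site-specific random effects), whereas the DLMM algorithm above requires additional aggregated statistics to handle random effects.

```python
import numpy as np

rng = np.random.default_rng(7)
beta_true = np.array([1.0, -2.0, 0.5])

# Three "sites", each holding its own individual patient data (IPD).
sites = []
for n in (50, 80, 65):
    X = rng.normal(size=(n, 3))
    y = X @ beta_true + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

# Each site shares only aggregated statistics (X'X and X'y), never raw rows.
XtX = sum(X.T @ X for X, _ in sites)
Xty = sum(X.T @ y for X, y in sites)
beta_distributed = np.linalg.solve(XtX, Xty)

# Pooled analysis for comparison (what we could do if IPD sharing were allowed).
X_pool = np.vstack([X for X, _ in sites])
y_pool = np.concatenate([y for _, y in sites])
beta_pooled, *_ = np.linalg.lstsq(X_pool, y_pool, rcond=None)

assert np.allclose(beta_distributed, beta_pooled)   # lossless: identical estimates
```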

Journal ArticleDOI
TL;DR: The results demonstrate that the balancing-based LDC can reduce compression time and improve dependability, and that the proposed model can enhance data compression capabilities compared with existing methodologies.
Abstract: Telemetric data is large in volume, requiring considerable storage space and transmission time, which is a significant obstacle to storing or sending it. Lossless data compression (LDC) algorithms have evolved to process telemetric data effectively and efficiently with a high compression ratio and a short processing time. Telemetric data can be compressed to limit the required storage space and transmission bandwidth. Although various studies on the compression of telemetric data have been conducted, the nature of telemetric data makes compression extremely difficult. The purpose of this study is to offer a subsampled and balanced recurrent neural lossless data compression (SB-RNLDC) approach for increasing the compression rate while decreasing the compression time. This is accomplished through the development of two models: one for subsampled averaged telemetry data preprocessing and another for BRN-LDC. Subsampling and averaging are conducted at the preprocessing stage using an adjustable sampling factor. A balanced compression interval (BCI) is used to encode the data depending on the probability measurement during the LDC stage. The aim of this research work is to compare differential compression techniques directly. The results demonstrate that the balancing-based LDC can reduce compression time and improve dependability. The final experimental results show that the proposed model can enhance computing capabilities in data compression compared to existing methodologies.

Proceedings ArticleDOI
09 Mar 2022
TL;DR: A lossless compression scheme based on run-length encoding is proposed that translates into a lower memory footprint and better energy efficiency compared to the original Tsetlin Machine algorithm, and provides promising trade-offs when compared against binary neural networks.
Abstract: The emergence of embedded machine learning has enabled the migration of intelligence from the cloud to the edge and to the sensors. To explore the practicalities of wide-spread deployments of these intelligent sensors, we look beyond traditional arithmetic-based neural networks (NNs) to the logic-based learning algorithm called the Tsetlin Machine (TM). TMs have not yet been implemented and explored on general-purpose microcontrollers, especially ones that are intermittently powered. In this paper, we argue that their simple architecture makes them a promising candidate for batteryless ML systems. However, in their current form, they are not suitable to be deployed on resource-constrained sensors because of the substantial memory footprint of trained models. To tackle this issue, we propose a lossless compression scheme based on run-length encoding and evaluate it against standard TMs for vision and acoustic workloads. We show that our encoding can compress the model by up to 99% without accuracy loss. This translates into a lower memory footprint and better energy efficiency (up to 4.9x) compared to the original Tsetlin Machine algorithm, and provides promising trade-offs when compared against binary neural networks.
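The compression primitive itself is simple; as an illustration, here is a minimal Python run-length encoder/decoder round trip on a toy bit sequence standing in for sparse trained-model state. The paper's exact serialization of TM clause states is not reproduced.

```python
def run_length_encode(bits):
    """Encode a 0/1 sequence as (value, run_length) pairs."""
    runs, prev, count = [], bits[0], 0
    for b in bits:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

def run_length_decode(runs):
    return [value for value, length in runs for _ in range(length)]

# A trained TM's include/exclude bits are typically very sparse, so long runs
# of zeros compress extremely well (toy model state shown here).
model_bits = [0] * 500 + [1, 1, 0, 1] + [0] * 500
encoded = run_length_encode(model_bits)
assert run_length_decode(encoded) == model_bits      # lossless round trip
print(f"{len(model_bits)} bits -> {len(encoded)} runs")
```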

Posted ContentDOI
05 Sep 2022-bioRxiv
TL;DR: Using entropy compression, it is shown that the SBWT can support membership queries on the k-spectrum of a single string in O(k) time and (n + k)(log σ + 1/ln 2) + o((n + k)σ) bits of space, where n is the number of distinct substrings of length k in the input and σ is the size of the alphabet.
Abstract: The k-spectrum of a string is the set of all distinct substrings of length k occurring in the string. This is a lossy but computationally convenient representation of the information in the string, with many applications in high-throughput bioinformatics. In this work, we define the notion of the Spectral Burrows-Wheeler Transform (SBWT), which is a sequence of subsets of the alphabet of the string encoding the k-spectrum of the string. The SBWT is a distillation of the ideas found in the BOSS and Wheeler graph data structures. We explore multiple different approaches to index the SBWT for membership queries on the underlying k-spectrum. We identify subset rank queries as the essential subproblem, and propose four succinct index structures to solve it. One of the approaches essentially leads to the known BOSS data structure, while the other three offer attractive time-space trade-offs and support simpler query algorithms that rely only on fast rank queries. The most general approach involves a novel data structure we call the subset wavelet tree, which we find to be of independent interest. All of the approaches are also amenable to entropy compression, which leads to good space bounds on the sizes of the data structures. Using entropy compression, we show that the SBWT can support membership queries on the k-spectrum of a single string in O(k) time and (n + k)(log σ + 1/ln 2) + o((n + k)σ) bits of space, where n is the number of distinct substrings of length k in the input and σ is the size of the alphabet. This improves from the time O(k log σ) achieved by the BOSS data structure. We show, via experiments on a range of genomic data sets, that the simplicity of our new indexes translates into large performance gains in practice over prior art.
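For readers unfamiliar with the object being indexed, the short Python sketch below builds the k-spectrum of a string and answers membership queries naively with a hash set; the SBWT and its succinct subset-rank indexes, which are the paper's contribution, are not reproduced here.

```python
def k_spectrum(text, k):
    """All distinct length-k substrings (k-mers) of text: the set the SBWT encodes."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

sequence = "ACGTACGTGACG"
spectrum = k_spectrum(sequence, k=4)
print(sorted(spectrum))
# Membership queries, answered naively here with a hash set; the SBWT answers
# the same queries in O(k) time within succinct space via subset rank queries.
print("GTAC" in spectrum, "AAAA" in spectrum)
```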

Journal ArticleDOI
TL;DR: In this article, the authors presented an efficient compression method with good numerical accuracy that preserves the topology of ring-linear polymer blends, which enables efficient archiving of molecular dynamics (MD) trajectories.
Abstract: To effectively archive configuration data during molecular dynamics (MD) simulations of polymer systems, we present an efficient compression method with good numerical accuracy that preserves the topology of ring-linear polymer blends. To compress the fraction part of the floating-point data, we used the Jointed Hierarchical Precision Compression Number - Data Format (JHPCN-DF) method to apply zero padding to the trailing fraction bits, which did not affect the numerical accuracy, and then compressed the data with Huffman coding. We also provided a dataset of well-equilibrated configurations of MD simulations for ring-linear polymer blends with various lengths of linear and ring polymers, including ring complexes composed of multiple rings such as polycatenane. We executed 10⁹ MD steps to obtain 150 equilibrated configurations. The combination of JHPCN-DF and SZ compression achieved the best compression ratio for all cases. Therefore, the proposed method enables efficient archiving of MD trajectories. Moreover, the publicly available dataset of ring-linear polymer blends can be employed for studies of mathematical methods, including topology analysis and data compression, as well as MD simulations.
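The zero-padding idea can be illustrated in a few lines of Python/NumPy: mask the trailing mantissa bits of float64 values and hand the bytes to a general-purpose entropy coder. The number of kept bits, the synthetic data, and the use of zlib's DEFLATE (which includes Huffman coding) as a stand-in for the paper's Huffman stage are all assumptions made for this sketch; JHPCN-DF's actual data format is not reproduced.

```python
import zlib
import numpy as np

def truncate_mantissa(values, keep_bits):
    """Zero the trailing (52 - keep_bits) fraction bits of float64 values,
    mimicking the zero-padding step described above (simplified)."""
    mask = ~np.uint64((1 << (52 - keep_bits)) - 1)
    return (values.view(np.uint64) & mask).view(np.float64)

rng = np.random.default_rng(3)
coordinates = rng.normal(size=100_000)            # stand-in for MD configuration data
raw = coordinates.tobytes()
truncated = truncate_mantissa(coordinates, keep_bits=24).tobytes()

# DEFLATE (LZ77 + Huffman) stands in for the Huffman stage of the pipeline.
print("raw      :", len(zlib.compress(raw)) / len(raw))        # compresses poorly
print("truncated:", len(zlib.compress(truncated)) / len(raw))  # noticeably smaller
```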

Journal ArticleDOI
TL;DR: A new visibility threshold method is designed by incorporating blur sensitivity and oblique correction effects, and an end-to-end mapping between the visibility threshold and the quality control factor is learned and represented as a deep convolutional neural network.
Abstract: Screen content data, such as computer-generated photographs, desktop sharing, remote education, video game streaming and screenshots, is one of the most popular visual information carriers in the Internet of Video Things. Although lossless compression can guarantee high quality of service for these screen-content-based industrial applications, it also causes considerable storage space and transmission bandwidth issues. To alleviate these challenges, in this article, we present a visually quasi-lossless coding approach to keep the compression distortion below the visibility threshold of the human visual system. Specifically, to better quantify the visual redundancy for screen content data, a new visibility threshold method is designed by incorporating blur sensitivity and oblique correction effects. Then, an end-to-end mapping between the visibility threshold and the quality control factor is learned and represented as a deep convolutional neural network. The experimental results demonstrate that the proposed method saves up to 23.15% of the encoding bits on average compared with the latest scheme at the same perceptual quality.

Journal ArticleDOI
TL;DR: The letter introduces an efficient context-based lossless image codec for encoding event camera frames, where up to eight event frames (EFs) are represented as a pair of an event map image (EMI), containing the spatial information, and a vector, containing the polarity information.
Abstract: The letter introduces an efficient context-based lossless image codec for encoding event camera frames. The asynchronous events acquired by an event camera in a spatio-temporal neighborhood are collected by an Event Frame (EF). A more efficient way to store the event data is introduced, where up to eight EFs are represented as a pair of an event map image (EMI), containing the spatial information, and a vector, containing the polarity information. The proposed codec encodes the EMI using: (i) a binary map, which signals the positions where at least one event occurs in the EFs; (ii) the number of events for each signalled position; and (iii) their EF index. Template context modelling and adaptive Markov modelling are employed to encode these three types of data. The experimental evaluation for EFs generated at different time intervals demonstrates that the proposed method achieves an improved performance compared with both video coding standards, HEVC and VVC, and state-of-the-art lossless image coding methods, CALIC and FLIF. When all events are collected by EFs, a relative compression of up to 5.8 is achieved compared with the raw event encapsulation provided by the event camera.
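As a rough illustration of the representation described above, the Python sketch below packs up to eight binary event frames into a single uint8 event map image (one bit per frame) together with a flat polarity vector. This packing is a plausible reading for illustration only; the paper's exact EMI definition, context modelling, and entropy coding are not reproduced.

```python
import numpy as np

def pack_event_frames(event_frames, polarities):
    """Pack up to eight binary event frames (EFs) into one uint8 event map image
    (one bit per frame) plus a flat polarity vector in scan order."""
    assert len(event_frames) <= 8
    emi = np.zeros(event_frames[0].shape, dtype=np.uint8)
    polarity_vector = []
    for i, (frame, polarity) in enumerate(zip(event_frames, polarities)):
        emi |= frame.astype(np.uint8) << np.uint8(i)      # set bit i where frame has events
        polarity_vector.extend(polarity[frame > 0].tolist())
    return emi, polarity_vector

rng = np.random.default_rng(5)
frames = [(rng.random((4, 6)) < 0.1).astype(np.uint8) for _ in range(8)]
polarities = [rng.choice([-1, 1], size=(4, 6)) for _ in range(8)]
emi, polarity_vector = pack_event_frames(frames, polarities)
print(np.count_nonzero(emi), "active pixels,", len(polarity_vector), "polarities")
```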

Journal ArticleDOI
TL;DR: In this article, a hybrid reversible-zero watermarking (HRZW) scheme was proposed to combine the complementary advantages of reversible and zero-watermarking for medical images, and the generated ownership share was embedded reversibly based on SLT-SVD-QIM.
Abstract: The verification of copyright and authenticity for medical images is critical in telemedical applications. Watermarking is a key technique for protecting medical images and can be mainly divided into three categories: region of interest (ROI) lossless watermarking, reversible watermarking and zero-watermarking. However, ROI lossless watermarking causes biases on diagnosis. Reversible watermarking can hardly provide a continuous verification function and may face verification disputes after image recovering. Zero-watermarking requires third-party storage which may cause additional security problems. To address these issues, a hybrid reversible-zero watermarking (HRZW) is proposed in this paper to effectively combine the complementary advantages of reversible watermarking and zero-watermarking. In our scheme, a novel hybrid structure is designed including a zero-watermarking component and a reversible watermarking component. In the first component, ownership share is generated by mapping nearest neighbor grayscale residual (NNGR) based features and watermark information. In the second component, the generated ownership share is embedded reversibly based on Slantlet Transform, Singular Value Decomposition and Quantization Index Modulation (SLT-SVD-QIM). Experimental results demonstrate that our proposed scheme not only yields remarkable watermarking imperceptibility, distinguishability and robustness, but also provides continuous verification function without any dispute or third-party storage, which outperforms existing watermarking schemes for medical images.
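One building block named above, Quantization Index Modulation (QIM), can be shown on a single scalar: the value is snapped to one of two interleaved quantization lattices depending on the bit to embed, and the bit is read back by finding the nearer lattice. The step size and scalar host value are illustrative; the paper applies QIM to singular values of Slantlet-transform blocks, which is not reproduced here.

```python
import numpy as np

def qim_embed(value, bit, step):
    """Quantization Index Modulation: snap value onto the lattice selected by bit."""
    return step * np.round((value - bit * step / 2) / step) + bit * step / 2

def qim_extract(value, step):
    distances = [abs(value - qim_embed(value, b, step)) for b in (0, 1)]
    return int(np.argmin(distances))

step = 8.0
for bit in (0, 1):
    watermarked = qim_embed(123.4, bit, step)
    assert qim_extract(watermarked, step) == bit
print("QIM round trip OK")
```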

Journal ArticleDOI
TL;DR: Tests show that the proposed algorithm is close to the theoretical value in terms of information entropy, correlation coefficient, mean square error of the reconstructed image, and other related indicators, and has high security and lossless compression performance.
Abstract: In order to satisfy the requirements of high quality and security during image transmission and storage, this paper proposes an image lossless compression encryption algorithm based on 1D chaotic map and Set Partitioned Embedded block encoder (SPECK). Initially, this paper proposes a new 1D chaotic map, and applies the chaotic sequences generated by it to each stage of the compression encryption algorithm. In addition, according to the feature that the degree of energy concentration in the wavelet coefficient matrix gradually decreases from low frequency to high frequency, this paper proposes a wavelet coefficient encryption algorithm, which can balance security and compression performance. Furthermore, multiple encryption points are introduced in the SPECK encoding process, and a secure SPECK encoding algorithm is proposed. Finally, theoretical analysis and simulation results show that the proposed algorithm is close to the theoretical value in terms of information entropy, correlation coefficient, mean square error of reconstructed image and other related indicators. Therefore, the algorithm has high security and lossless compression performance.
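To illustrate how a 1D chaotic map can drive the encryption stages, the sketch below uses the classic logistic map to generate a keystream that is XORed with a byte stream and exactly removed on decryption. The logistic map is only a stand-in (the paper proposes its own new 1D map), and the wavelet coefficient encryption and SPECK coding stages are not reproduced.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Keystream bytes from the classic logistic map x -> r*x*(1-x).
    (A stand-in only: the paper proposes its own, new 1D chaotic map.)"""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256
    return out

data = np.frombuffer(b"wavelet coefficient bitstream", dtype=np.uint8)
keystream = logistic_keystream(x0=0.3141592, r=3.99, n=data.size)
cipher = data ^ keystream
assert np.array_equal(cipher ^ keystream, data)   # decryption is exact (lossless)
```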


Journal ArticleDOI
TL;DR: This paper proposes a lossless and generic federated recommendation framework via fake marks and secret sharing (FMSS), which can not only protect the two types of users’ privacy, without sacrificing the recommendation performance, but can also be applied to most recommendation algorithms for rating prediction, item ranking, and sequential recommendation.
Abstract: With the implementation of privacy protection laws such as GDPR, it is increasingly difficult for organizations to legally collect users’ data. However, a typical machine learning-based recommendation algorithm requires the data to learn users’ preferences. Some recent works thus turn to develop federated learning-based recommendation algorithms, but most of them either cannot protect the users’ privacy well, or sacrifice the model accuracy. In this article, we propose a lossless and generic federated recommendation framework via fake marks and secret sharing (FMSS). Our FMSS can not only protect the two types of users’ privacy, i.e., rating values and rating behaviors, without sacrificing the recommendation performance, but can also be applied to most recommendation algorithms for rating prediction, item ranking, and sequential recommendation. Specifically, we extend existing fake items to fake marks, and combine it with secret sharing to perturb the data uploaded by the clients to a server. We then apply our FMSS to six representative recommendation algorithms, i.e., MF-MPC and NeuMF for rating prediction, eALS and VAE-CF for item ranking, and Fossil and GRU4Rec for sequential recommendation. The experimental results demonstrate that our FMSS is a lossless and generic framework, which is able to federate a series of different recommendation algorithms in a lossless and privacy-aware manner.
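The secret-sharing half of FMSS rests on a standard primitive: additive secret sharing, where a client's vector is split into random-looking shares whose sum reconstructs it exactly, so aggregation stays lossless. The sketch below shows that primitive on a toy integer rating vector; the modulus, share count, and rating data are illustrative assumptions, and the fake-mark mechanism and its integration with the six recommenders are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(11)
P = 2**31 - 1                          # work modulo a large prime so shares leak nothing

def additive_shares(secret_vec, n_parties):
    """Split an integer vector into n_parties additive shares that sum to it mod P."""
    shares = [rng.integers(0, P, size=secret_vec.shape) for _ in range(n_parties - 1)]
    shares.append((secret_vec - sum(shares)) % P)
    return shares

ratings = np.array([5, 0, 3, 0, 1])    # one client's (integer) ratings
shares = additive_shares(ratings, n_parties=3)
recovered = sum(shares) % P
assert np.array_equal(recovered, ratings)          # reconstruction is exact: lossless
```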


Journal ArticleDOI
TL;DR: A method using a principal component analysis (PCA) and a deep neural network (DNN) to predict the entropy of data to be compressed is proposed; it achieves a good compression ratio without trying to compress the entire amount of data at once.
Abstract: When we compress a large amount of data, we face the problem of the time it takes to compress it. Moreover, we cannot predict how effective the compression performance will be. Therefore, we are not able to choose the best algorithm to compress the data to its minimum size. According to Kolmogorov complexity, the compression performance of the algorithms implemented in the compression programs available in the system differs. Thus, it is impossible to deliberately select the best compression program before we try the compression operation. From this background, this paper proposes a method with a principal component analysis (PCA) and a deep neural network (DNN) to predict the entropy of data to be compressed. The method infers an appropriate compression program in the system for each data block of the input data and achieves a good compression ratio without trying to compress the entire amount of data at once. This paper especially focuses on lossless compression of image data at the level of image blocks. Through experimental evaluation, this paper shows that the proposed method achieves reasonable compression performance compared with applying a randomly selected compression program to the entire dataset.
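To ground the per-block selection idea, here is a small Python sketch that measures each block's empirical byte entropy and skips compression for blocks judged incompressible. Computing the entropy directly is a simplification: the paper's point is to predict it cheaply with PCA and a DNN instead; the 7.5 bits-per-byte threshold and the choice of zlib are illustrative assumptions.

```python
import math
import os
import zlib
from collections import Counter

def byte_entropy(block):
    """Empirical Shannon entropy of a byte block, in bits per byte."""
    counts = Counter(block)
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def compress_block(block):
    """Skip compression for blocks judged incompressible. The paper predicts the
    entropy with PCA + a DNN; here it is computed directly for simplicity."""
    if byte_entropy(block) > 7.5:        # nearly random bytes: store raw
        return b"R" + block
    return b"Z" + zlib.compress(block)

blocks = [os.urandom(4096),              # high-entropy block: left uncompressed
          bytes([0, 1] * 2048)]          # low-entropy block: worth compressing
for block in blocks:
    print(len(block), "->", len(compress_block(block)))
```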