Author

Lina Sha

Bio: Lina Sha is an academic researcher from Xidian University. The author has contributed to research in the topics of data compression and motion compensation. The author has an h-index of 1 and has co-authored 2 publications receiving 4 citations.

Papers
Journal ArticleDOI
TL;DR: This paper presents a novel algorithm for image set compression using multiple reference images, and proposes a rate-distortion optimized multiple reference image selection method.
Abstract: Image set compression has recently become an active research topic due to the explosion of digital photographs. To efficiently compress sets of similar images containing moving objects, in this paper we propose a novel algorithm for image set compression using multiple reference images. First, for an image set, its depth-constrained minimum arborescence is generated. We then present a reference image candidate determination method to build the reference image candidates for the images of the set. Furthermore, we propose a rate-distortion optimized multiple reference image selection method, which compares the correlation between every image and each of its reference image candidates to produce its multiple reference images. Finally, the compressed image data are obtained by block-based motion compensation and residue coding. In addition, we provide a new way of accessing images that keeps the access delay the same as in single-reference-image schemes. Experimental results show that, compared with state-of-the-art image compression algorithms, our proposed algorithm significantly improves image compression performance.
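As an illustration of the selection step, here is a minimal Python sketch, assuming grayscale images stored as equally sized NumPy arrays and using plain normalized cross-correlation as the similarity measure; the threshold, the cap on the reference count, and the function names are hypothetical stand-ins for the paper's rate-distortion optimized criterion.

import numpy as np

def correlation(a, b):
    # Normalized cross-correlation between two equally sized images.
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_references(image, candidates, threshold=0.6, max_refs=3):
    # Keep the most correlated candidates as the image's multiple
    # references; a stand-in for the RD-optimized selection method.
    scored = sorted(candidates, key=lambda c: correlation(image, c), reverse=True)
    return [c for c in scored[:max_refs] if correlation(image, c) >= threshold]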

6 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed algorithm can effectively compress similar images with priorities, achieving rate-distortion performance comparable to the state-of-the-art image set compression algorithm that has no depth or priority constraints.
Abstract: This Letter proposes an algorithm for compressing sets of similar images with priorities. The proposed algorithm consists of four modules: priority assignment, depth- and priority-constrained minimum arborescence generation, disparity reduction, and residue coding. Experimental results demonstrate that the proposed algorithm can effectively compress similar images with priorities, and achieves rate-distortion performance comparable to the state-of-the-art image set compression algorithm that has no constraints on depth and priority.
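The construction below is a minimal sketch of how a depth- and priority-constrained arborescence might be built: vertices are attached in priority order to the cheapest already-attached parent whose depth stays within the limit. The greedy rule and all names are illustrative assumptions, not the Letter's exact procedure.

def constrained_arborescence(cost, priority, root=0, max_depth=3):
    # cost[i][j]: prediction cost of coding image j from image i.
    # priority[v]: lower value means image v is attached earlier.
    n = len(cost)
    parent = [-1] * n
    depth = {root: 0}
    for v in sorted((u for u in range(n) if u != root), key=lambda u: priority[u]):
        # Choose the cheapest admissible parent among attached vertices.
        admissible = [p for p in depth if depth[p] + 1 <= max_depth]
        best = min(admissible, key=lambda p: cost[p][v])
        parent[v] = best
        depth[v] = depth[best] + 1
    return parent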

1 citation


Cited by
Journal ArticleDOI
23 Feb 2021
TL;DR: The proposed scheme is secure, efficient, and suitable for providing sensor-based IoT services and applications, and it resists hello flood, DoS, and man-in-the-middle attacks, among others.
Abstract: Wireless Sensor Networks (WSNs) are emerging as a potential computing platform in diverse areas such as weather forecasting, industrial automation, medical health care, and military systems. Since the sensors constantly gather information from the physical world and communicate with one another over wireless links, maintaining the security and privacy of WSN communication is a prerequisite. In this paper, a secure authentication and key agreement scheme based on Elliptic Curve Cryptography (ECC) is proposed to protect data/image transmission in WSNs. The proposed scheme is secure, efficient, and suitable for providing sensor-based IoT services and applications. The protocol provides all the desired security features, including mutual authentication, confidentiality, data integrity, perfect forward secrecy, and fair key agreement, and it is secure against hello flood, DoS, and man-in-the-middle attacks, among others. The simulation tool AVISPA has confirmed the safety of the protocol against known attacks. The performance analysis shows the superiority of the proposed scheme over existing schemes. Index Terms – WSN, Security, Elliptic Curve Cryptography (ECC), Automated Validation of Internet Security Protocols and Applications (AVISPA).
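The ECC primitive underlying such schemes can be illustrated with the pyca/cryptography library; the sketch below performs a plain ECDH exchange and key derivation between a hypothetical sensor and gateway, while the paper's full protocol adds mutual authentication and WSN-specific message flows that are not reproduced here.

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each party holds an elliptic-curve key pair on NIST P-256.
sensor_key = ec.generate_private_key(ec.SECP256R1())
gateway_key = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key; the
# gateway computes the same secret symmetrically.
shared = sensor_key.exchange(ec.ECDH(), gateway_key.public_key())

# Derive a symmetric session key from the shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"wsn-session").derive(shared)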

6 citations

Journal ArticleDOI
TL;DR: The proposed criterion is more robust in terms of subset-guided consistency enhancement assessment, and can effectively identify an optimal-consistency or non-consistency enhancement algorithm for the rest of an imageset.
Abstract: In a new era for machine vision, image enhancement algorithms can be used to improve the quality of an imageset without reference. To assess the performance of imageset enhancement, the existing average criterion uses a no-reference image quality metric to calculate the quality score of each enhanced image and quantifies the performance of an enhancement algorithm by the mean of all scores. If the quality scores of some images fluctuate greatly, their mean hardly reflects degradation or worst cases during imageset enhancement. This paper therefore analyzes and illustrates the need for and significance of consistency enhancement assessment, and then proposes a subset-guided consistency enhancement assessment criterion for an imageset without reference. Measuring only a subset of an imageset, the proposed criterion first calculates the difference in the quality score of each image before and after enhancement, then filters out outliers falling outside a confidence interval, and finally quantifies the consistency enhancement performance of an enhancement algorithm according to its consistency enhancement degree. When a small subset is used to guide its large imageset, the average criterion judges consistency or non-consistency enhancement algorithms with a 16.7% false identification ratio and also misjudges the optimal-consistency algorithm once, while the proposed criterion always judges the non-consistency or optimal-consistency enhancement algorithms correctly. This paper can help the scientific community select a robust enhancement algorithm for degradation or worst cases. Compared with the average criterion, the proposed criterion is more robust in subset-guided consistency enhancement assessment, and can effectively identify an optimal-consistency or non-consistency enhancement algorithm for the rest of an imageset.
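The following Python sketch mirrors the described pipeline on a guiding subset, assuming the quality scores were already computed by some no-reference metric (not implemented here); the confidence-interval filter follows the description above, while the returned mean/spread pair is a guessed stand-in for the paper's consistency enhancement degree.

import numpy as np

def consistency_degree(scores_before, scores_after, z=1.96):
    # Per-image change in quality score on the guiding subset.
    diff = np.asarray(scores_after, dtype=float) - np.asarray(scores_before, dtype=float)
    mu, sigma = diff.mean(), diff.std(ddof=1)
    # Drop outliers falling outside the z-sigma confidence interval.
    kept = diff[np.abs(diff - mu) <= z * sigma]
    # A positive mean gain with a small spread indicates consistent
    # enhancement across the subset.
    return kept.mean(), kept.std(ddof=1)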

2 citations

Journal ArticleDOI
TL;DR: This letter proposes a novel paradigm of compression algorithms, aimed at minimizing the information loss perceived by the final user instead of the actual source quality loss, under compression rate constraints, and proposes an algorithm to solve this compression problem.
Abstract: Lossy compression algorithms trade bits for quality, aiming to reduce as much as possible the bitrate needed to represent the original source (or set of sources) while preserving source quality. In this letter, we propose a novel paradigm of compression algorithms aimed at minimizing the information loss perceived by the final user, rather than the actual source quality loss, under compression rate constraints. As our main contributions, we first introduce the concept of perceived information (PI), which reflects the information perceived by a given user experiencing a data collection and is evaluated as the volume spanned by the sources' features in a personalized latent space. We then formalize the rate-PI optimization problem and propose an algorithm to solve it. Finally, we validate our algorithm against benchmark solutions with simulation results, showing the gain obtained by taking users' preferences into account while maximizing the perceived information in the feature domain.
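To make the rate-PI idea concrete, here is a sketch that models perceived information as the log-volume (log-determinant of the Gram matrix) spanned by the selected sources' feature vectors and greedily keeps sources under a bit budget; both the volume proxy and the greedy rule are illustrative assumptions, not the letter's exact algorithm.

import numpy as np

def perceived_information(features):
    # log det(F F^T + eps I): a volume proxy for the span of the kept
    # sources' feature vectors in the latent space.
    F = np.atleast_2d(np.asarray(features, dtype=float))
    gram = F @ F.T + 1e-9 * np.eye(F.shape[0])
    return float(np.linalg.slogdet(gram)[1])

def greedy_rate_pi(features, bit_costs, budget):
    # Repeatedly add the source with the best PI gain per bit until the
    # rate budget is exhausted.
    chosen, spent = [], 0.0
    remaining = set(range(len(features)))
    while remaining:
        base = perceived_information([features[j] for j in chosen]) if chosen else 0.0
        def gain(i):
            return perceived_information([features[j] for j in chosen] + [features[i]]) - base
        best = max(remaining, key=lambda i: gain(i) / bit_costs[i])
        if spent + bit_costs[best] > budget:
            break
        chosen.append(best)
        spent += bit_costs[best]
        remaining.remove(best)
    return chosen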

2 citations

Proceedings ArticleDOI
20 Jun 2021
TL;DR: In this paper, the authors proposed a new method for efficient lossless compression of image sets by combining a minimum spanning tree (MST) and the Free Lossless Image Format (FLIF).
Abstract: Many image datasets are available on the Internet, contributing to the development of computer vision. While huge datasets are useful for research, they are time-consuming to transfer due to their large data volume; in particular, lossless compression yields a worse compression ratio than lossy compression. A higher compression ratio can be achieved by encoding multiple images together, exploiting shared features within the dataset, rather than encoding each image individually. In this paper, we propose a new method for efficient lossless compression of image sets that combines a minimum spanning tree (MST) with the Free Lossless Image Format (FLIF). Experimental results show that the compression ratio of the proposed method is better than that of an HEVC-based method. We also show that the compression ratio can be further improved by extending the entropy coder of FLIF, although the size of this improvement depends on the characteristics of the images in the set.
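A sketch of the MST construction step is given below, using the mean absolute difference between images as the edge weight and SciPy's MST routine; actually coding the root image and the residuals with FLIF (e.g. through the flif command-line tool) is only indicated in a comment, and the paper's entropy-coder extension is not reproduced.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def build_mst(images):
    # images: list of equally sized NumPy arrays. Returns a sparse MST
    # whose nonzero (i, j) entries mean "predict image j from image i".
    n = len(images)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            w[i, j] = np.abs(images[i].astype(np.int32)
                             - images[j].astype(np.int32)).mean()
    return minimum_spanning_tree(w)  # then FLIF-encode root + residuals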

1 citation

Journal ArticleDOI
TL;DR: Wang et al. propose a low-complexity and high-coding-efficiency image deletion algorithm that can effectively remove any image, whether it is the root vertex, an internal vertex, or a leaf vertex.
Abstract: Image deletion refers to removing images from a compressed image set on cloud servers, a problem that has always received much attention. However, existing methods sometimes fail to delete images successfully, and their coding performance still has room for improvement. In this paper, we propose a low-complexity, high-coding-efficiency image deletion algorithm. First, all the images are classified into to-be-deleted images, images that need no processing, and images that need processing; the last category is further divided into images that only need decoding and images that need re-encoding. Second, we propose a depth- and subtree-constrained minimum spanning tree (DSCMST) heuristic to produce the DSCMST of the images that need processing. Third, each image that needs no processing is attached to the newly obtained DSCMST as a child of the vertex that was already its parent in the compressed image set. Finally, after the images that need re-encoding are encoded, a new compressed image set is constructed, completing the image deletion. Experimental results show that under various circumstances our proposed algorithm can effectively remove any image, whether it is the root vertex, an internal vertex, or a leaf vertex. Moreover, compared with state-of-the-art methods, the proposed algorithm achieves higher coding efficiency with the lowest complexity.
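The classification step can be pictured with the following sketch, which assumes the compressed set is stored as a parent array numbered root-first; the rule used here (images whose parent is deleted are re-encoded, images below any touched vertex are only decoded for checking, and the rest are untouched) is one plausible reading of the paper's four categories, not its exact criterion.

def classify(parent, delete):
    # parent[v]: predecessor of v in the compressed set's tree (-1 = root);
    # delete: set of vertices to remove. Assumes vertices are numbered
    # root-first, i.e. every parent precedes its children.
    re_encode, decode_only, untouched = [], [], []
    touched = set(delete)  # vertices whose reconstruction may change
    for v in range(len(parent)):
        if v in delete:
            continue
        if parent[v] in delete:
            re_encode.append(v)    # must be predicted from a new parent
            touched.add(v)
        elif parent[v] in touched:
            decode_only.append(v)  # prediction chain must be re-checked
            touched.add(v)
        else:
            untouched.append(v)
    return re_encode, decode_only, untouched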