
Showing papers on "Codebook published in 2013"


Proceedings ArticleDOI
Hosein Nikopour1, Hadi Baligh1
25 Nov 2013
TL;DR: A new multiple access scheme, sparse code multiple access (SCMA), is proposed that retains the low-complexity reception technique of LDS while achieving better performance, allowing a near-optimal ML receiver with practically feasible complexity.
Abstract: Multicarrier CDMA is a multiplexing approach in which modulated QAM symbols are spread over multiple OFDMA tones by using a generally complex spreading sequence. Effectively, a QAM symbol is repeated over multiple tones. Low density signature (LDS) is a version of CDMA with a low-density spreading sequence, allowing us to take advantage of a near-optimal ML receiver with practically feasible complexity. In this paper, we propose a new multiple access scheme, called sparse code multiple access (SCMA), which still enjoys the low-complexity reception technique but with better performance compared to LDS. In SCMA, the procedures of bit-to-QAM-symbol mapping and spreading are combined, and incoming bits are directly mapped to a multidimensional codeword of an SCMA codebook set. Each layer or user has its dedicated codebook. The shaping gain of a multidimensional constellation is the main source of the performance improvement in comparison to the simple repetition of QAM symbols in LDS. In general, SCMA codebook design is an optimization problem. A systematic sub-optimal approach to SCMA codebook design is proposed here.

1,202 citations
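The direct bit-to-codeword mapping described above can be sketched in a few lines. The 4-point codebook below is a made-up toy (the paper's codebooks come from the proposed optimization), chosen only to show the sparse structure:

```python
# Hypothetical codebook for one user/layer: each pair of incoming bits maps
# directly to a codeword spread over 4 OFDMA tones, with only 2 nonzero
# entries -- the sparsity that enables low-complexity reception.
CODEBOOK = {
    (0, 0): [1 + 1j, 0, -1 + 1j, 0],
    (0, 1): [1 - 1j, 0, 1 + 1j, 0],
    (1, 0): [-1 + 1j, 0, -1 - 1j, 0],
    (1, 1): [-1 - 1j, 0, 1 - 1j, 0],
}

def scma_encode(bits):
    """Map incoming bits directly to multidimensional codewords."""
    assert len(bits) % 2 == 0
    return [CODEBOOK[tuple(bits[i:i + 2])] for i in range(0, len(bits), 2)]

codewords = scma_encode([0, 1, 1, 0])
```

Note how, unlike LDS, there is no intermediate QAM symbol: the bits index the multidimensional codeword directly.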


Proceedings ArticleDOI
23 Jun 2013
TL;DR: New models with a compositional parameterization of cluster centers are developed, so representational capacity increases super-linearly in the number of parameters, allowing one to effectively quantize data using billions or trillions of centers.
Abstract: A fundamental limitation of quantization techniques like the k-means clustering algorithm is the storage and run-time cost associated with the large numbers of clusters required to keep quantization errors small and model fidelity high. We develop new models with a compositional parameterization of cluster centers, so representational capacity increases super-linearly in the number of parameters. This allows one to effectively quantize data using billions or trillions of centers. We formulate two such models, Orthogonal k-means and Cartesian k-means. They are closely related to one another, to k-means, to methods for binary hash function optimization like ITQ (Gong and Lazebnik, 2011), and to Product Quantization for vector quantization (Jegou et al., 2011). The models are tested on large-scale ANN retrieval tasks (1M GIST, 1B SIFT features), and on codebook learning for object recognition (CIFAR-10).

335 citations
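The compositional idea can be illustrated with a product-quantization-style sketch (illustrative code, not the paper's Orthogonal/Cartesian k-means optimization): m subspaces with k sub-centers each represent k**m centers while storing only m*k vectors.

```python
def nearest(sub, centers):
    """Index of the sub-center closest to the subvector `sub`."""
    return min(range(len(centers)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(sub, centers[i])))

def encode(x, subcodebooks):
    """Quantize x by choosing one sub-center per subspace."""
    m = len(subcodebooks)
    d = len(x) // m
    return tuple(nearest(x[j * d:(j + 1) * d], subcodebooks[j]) for j in range(m))

def decode(code, subcodebooks):
    """Reconstruct the implied center by concatenating the chosen sub-centers."""
    out = []
    for j, idx in enumerate(code):
        out.extend(subcodebooks[j][idx])
    return out

# 2 subspaces x 4 sub-centers -> 16 implied centers from only 8 stored vectors.
subcodebooks = [[[0, 0], [0, 1], [1, 0], [1, 1]] for _ in range(2)]
code = encode([0.9, 0.1, 0.2, 0.8], subcodebooks)
```

With, say, m = 8 subspaces of 256 sub-centers each, the same scheme represents 256**8 ≈ 1.8e19 centers, which is the "billions or trillions of centers" regime the abstract refers to.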


Journal ArticleDOI
TL;DR: A framework to classify time series based on a bag-of-features representation (TSBF) is presented, providing a feature-based approach that can handle warping (although differently from DTW); experimental results show that TSBF outperforms competitive methods on benchmark datasets from the UCR time series database.
Abstract: Time series classification is an important task with many challenging applications. A nearest neighbor (NN) classifier with dynamic time warping (DTW) distance is a strong solution in this context. On the other hand, feature-based approaches have been proposed both as classifiers and as a means to gain insight into the series, but these approaches have problems handling translations and dilations in local patterns. Considering these shortcomings, we present a framework to classify time series based on a bag-of-features representation (TSBF). Multiple subsequences selected from random locations and of random lengths are partitioned into shorter intervals to capture the local information. Consequently, features computed from these subsequences measure properties at different locations and dilations when viewed from the original series. This provides a feature-based approach that can handle warping (although differently from DTW). Moreover, a supervised learner (that handles mixed data types, different units, etc.) integrates location information into a compact codebook through class probability estimates. Additionally, relevant global features can easily supplement the codebook. TSBF is compared to NN classifiers and other alternatives (bag-of-words strategies, sparse spatial sample kernels, shapelets). Our experimental results show that TSBF provides better results than competitive methods on benchmark datasets from the UCR time series database.

320 citations
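The subsequence-and-interval feature extraction can be roughly sketched as follows; the statistics below (mean, variance, location) are simplified stand-ins for the paper's actual feature set:

```python
import random

def interval_features(series, start, length, n_intervals=3):
    """Split one subsequence into intervals; each interval contributes
    simple value statistics plus its location in the original series."""
    feats = []
    step = length // n_intervals
    for k in range(n_intervals):
        seg = series[start + k * step: start + (k + 1) * step]
        mean = sum(seg) / len(seg)
        var = sum((v - mean) ** 2 for v in seg) / len(seg)
        feats.append((mean, var, start + k * step))
    return feats

# Subsequences are drawn from random locations of the series.
random.seed(0)
series = [float(i % 5) for i in range(60)]
start = random.randrange(0, len(series) - 30)
feats = interval_features(series, start, 30)
```

Because each interval records its own location and scale, the downstream learner can compare local patterns that appear at different positions and dilations, which is how the approach handles warping without DTW.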


Proceedings ArticleDOI
23 Jun 2013
TL;DR: Multipath Hierarchical Matching Pursuit (M-HMP) is proposed: a novel feature learning architecture that combines a collection of hierarchical sparse features to capture multiple aspects of discriminative structures for image classification.
Abstract: Complex real-world signals, such as images, contain discriminative structures that differ in many aspects including scale, invariance, and data channel. While progress in deep learning shows the importance of learning features through multiple layers, it is equally important to learn features through multiple paths. We propose Multipath Hierarchical Matching Pursuit (M-HMP), a novel feature learning architecture that combines a collection of hierarchical sparse features for image classification to capture multiple aspects of discriminative structures. Our building blocks are MI-KSVD, a codebook learning algorithm that balances the reconstruction error and the mutual incoherence of the codebook, and batch orthogonal matching pursuit (OMP), which we apply recursively at varying layers and scales. The result is a highly discriminative image representation that leads to large improvements over the state of the art on many standard benchmarks, e.g., Caltech-101, Caltech-256, MITScenes, Oxford-IIIT Pet and Caltech-UCSD Bird-200.

226 citations
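One of the two named building blocks, orthogonal matching pursuit, can be sketched for a single signal (the paper uses a batch variant inside the multipath hierarchy; this is the plain textbook algorithm):

```python
import numpy as np

def omp(D, x, sparsity):
    """Greedily select `sparsity` atoms of dictionary D to approximate x."""
    residual = x.copy()
    support = []
    coeffs = None
    for _ in range(sparsity):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the selected support (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    return support, coeffs

D = np.eye(4)                       # trivial orthonormal dictionary
x = np.array([0.0, 3.0, 0.0, 1.0])
support, coeffs = omp(D, x, sparsity=2)
```

In M-HMP the dictionary D would come from MI-KSVD rather than being fixed, and the sparse codes at one layer become the inputs to the next.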


Proceedings ArticleDOI
23 Jun 2013
TL;DR: This paper aims to minimize the distribution divergence between the labeled and unlabeled images, and incorporates this criterion into the objective function of sparse coding to make the new representations robust to the distribution difference.
Abstract: Sparse coding learns a set of basis functions such that each input signal can be well approximated by a linear combination of just a few of the bases. It has attracted increasing interest due to its state-of-the-art performance in BoW based image representation. However, when labeled and unlabeled images are sampled from different distributions, they may be quantized into different visual words of the codebook and encoded with different representations, which may severely degrade classification performance. In this paper, we propose a Transfer Sparse Coding (TSC) approach to construct robust sparse representations for classifying cross-distribution images accurately. Specifically, we aim to minimize the distribution divergence between the labeled and unlabeled images, and incorporate this criterion into the objective function of sparse coding to make the new representations robust to the distribution difference. Experiments show that TSC can significantly outperform state-of-the-art methods on three types of computer vision datasets.

221 citations
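The divergence criterion can be illustrated with a simple empirical mean discrepancy between the sparse codes of the two domains. This linear-kernel form is an assumption for illustration; the paper's exact regularizer may differ:

```python
def mmd(source_codes, target_codes):
    """Squared distance between the mean codes of the two domains --
    the quantity a transfer regularizer would drive toward zero."""
    d = len(source_codes[0])
    mean_s = [sum(c[i] for c in source_codes) / len(source_codes) for i in range(d)]
    mean_t = [sum(c[i] for c in target_codes) / len(target_codes) for i in range(d)]
    return sum((a - b) ** 2 for a, b in zip(mean_s, mean_t))

src = [[1.0, 0.0], [1.0, 2.0]]   # codes of labeled (source) images
tgt = [[0.0, 1.0], [0.0, 1.0]]   # codes of unlabeled (target) images
gap = mmd(src, tgt)
```

Adding such a term to the sparse coding objective penalizes codebooks under which source and target images land on systematically different visual words.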


Journal ArticleDOI
TL;DR: BossaNova, a novel representation for content-based concept detection in images and videos, is proposed; it enriches the Bag-of-Words model while remaining compact and simple to compute.

202 citations


Patent
27 Sep 2013
TL;DR: In this article, a WTRU may send channel state information (CSI) feedback for each component codebook to the base station for consideration when performing communications with the WTRU.
Abstract: Communications may be performed in a communications system using multi-dimensional antenna configurations. A WTRU may receive communications from a base station via one or more channels. The communications may be performed using multiple component codebooks. The WTRU may send channel state information (CSI) feedback for each component codebook to the base station for consideration when performing communications with the WTRU. The WTRU may determine the CSI feedback for each component codebook based on channel measurements. The component codebooks may include a horizontal component codebook and/or a vertical component codebook. The WTRU may send the CSI feedback for each component codebook to the base station independently or in the form of a composite codebook. The WTRU may determine a composite codebook as a function of the component codebooks.

187 citations


Patent
13 May 2013
TL;DR: In this paper, a channel codebook is generated by identifying a subset of antenna configurations from a plurality of antenna configurations of an antenna associated with a transmitter: a sequence of symbols is transmitted from the transmitter to a receiver using the plurality of antenna configurations, wherein each antenna configuration provides a unique transmission characteristic to the receiver.
Abstract: Generating a channel codebook by identifying a subset of antenna configurations from a plurality of antenna configurations of an antenna associated with a transmitter by: transmitting a sequence of symbols from the transmitter to a receiver using the plurality of antenna configurations, wherein each antenna configuration provides a unique transmission characteristic to the receiver; receiving feedback from the receiver that identifies the subset of antenna configurations; and, generating channel codebook entries corresponding to the subset of antenna configurations; and, transmitting data from the transmitter to the receiver using the channel codebook.

155 citations


Journal ArticleDOI
TL;DR: This paper presents a simple but effective scene classification approach based on the incorporation of a multi-resolution representation into a bag-of-features model and shows that the proposed approach performs competitively against previous methods across all data sets.

138 citations


Journal ArticleDOI
TL;DR: A simple yet effective bag-of-words representation, originally developed for text document analysis, is extended to biomedical time series representation; it captures high-level structural information because both local and global structural information are well utilized.

133 citations


Journal ArticleDOI
TL;DR: Pixel-based classification is adopted to refine the results of the block-based background subtraction, further classifying pixels as foreground, shadows, and highlights; the scheme provides high precision and efficient processing speed to meet the requirements of real-time moving object detection.
Abstract: Moving object detection is an important and fundamental step for intelligent video surveillance systems because it provides a focus of attention for post-processing. A multilayer codebook-based background subtraction (MCBS) model is proposed for video sequences to detect moving objects. Combining the multilayer block-based strategy and the adaptive feature extraction from blocks of various sizes, the proposed method can remove most of the nonstationary (dynamic) background and significantly increase the processing efficiency. Moreover, the pixel-based classification is adopted for refining the results from the block-based background subtraction, which can further classify pixels as foreground, shadows, and highlights. As a result, the proposed scheme can provide a high precision and efficient processing speed to meet the requirements of real-time moving object detection.
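A minimal pixel-level codebook background model conveys the core idea (illustrative only; MCBS adds the multilayer block-based stage and the shadow/highlight classes): each pixel keeps a list of intensity codewords learned from training frames, and a new pixel is foreground if it matches none of them.

```python
def train_codebook(samples, tol=10):
    """Build intensity codewords [lo, hi] for one pixel from training frames.
    A sample within `tol` of an existing codeword extends it; otherwise a
    new codeword is created (this is how multimodal backgrounds are kept)."""
    codebook = []
    for v in samples:
        for cw in codebook:
            if cw[0] - tol <= v <= cw[1] + tol:
                cw[0] = min(cw[0], v)
                cw[1] = max(cw[1], v)
                break
        else:
            codebook.append([v, v])
    return codebook

def is_foreground(v, codebook, tol=10):
    """A pixel value matching no codeword is declared moving foreground."""
    return not any(cw[0] - tol <= v <= cw[1] + tol for cw in codebook)

cb = train_codebook([100, 104, 98, 200, 203])  # bimodal background pixel
fg = is_foreground(150, cb)                    # matches neither codeword
bg = is_foreground(101, cb)
```

The two codewords learned here show why codebook models handle dynamic backgrounds (e.g., flickering lights) that a single-Gaussian model would misclassify.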

Patent
27 Sep 2013
TL;DR: Codebook-based beamforming feedback signaling and sounding mechanisms for use in wireless communications are described; a preamble structure to enable the use of smoothing methods for improved channel estimation, codebook designs that may be used for codebook-based feedback, and multi-resolution explicit feedback are disclosed as well.
Abstract: Methods for WiFi beamforming, feedback, and sounding (WiBEAM) are described. Codebook based beamforming feedback signaling and sounding mechanisms for use in wireless communications are disclosed. The methods described herein improve the feedback efficiency by using Givens rotation based decompositions and quantizing the resulting angles of the Givens rotation based decompositions using a range from a subset of [0, 2π]. Feedback may also be divided into multiple components to improve feedback efficiency/accuracy. Time domain beamforming reports for taking advantage of channel reciprocity while still taking into account practical radio frequency (RF) channel impairments are also described. Beamforming feedback that prioritizes the feedback bits in accordance with the significance of the bits is also disclosed. A preamble structure to enable the use of smoothing methods for improved channel estimation, codebook designs that may be used for codebook based beamforming feedback, and multi-resolution explicit feedback are disclosed as well.

Patent
11 Jul 2013
TL;DR: In this article, the authors proposed a codebook sampling method for a wireless network having two-dimensional antenna systems, which includes receiving from an eNodeB (eNB) an indication of a restricted subset M of vertical precoding matrices, wherein M is less than the total number of vertical precoding matrices N in the codebook.
Abstract: A user equipment (UE) in a wireless network having two-dimensional antenna systems performs a method of codebook sampling. The method includes receiving from an eNodeB (eNB) an indication of a restricted subset M of vertical precoding matrices, wherein M is less than a total number of vertical precoding matrices N in a codebook, the codebook comprising a plurality of vertical precoding matrices and horizontal precoding matrices. The method also includes feeding back vertical precoding matrix indicators (V-PMI) to the eNB based on the restricted subset of vertical precoding matrices.

Journal ArticleDOI
TL;DR: An efficient codebook-based method for text-independent writer identification is proposed, which uses the occurrence histogram of the shapes in a codebook to create a feature vector for each specific manuscript.
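The occurrence-histogram construction is generic and easy to sketch; the codebook and features below are invented for illustration (in the paper they would be learned shape fragments and extracted contour features):

```python
def nearest_codeword(feat, codebook):
    """Index of the codeword closest (squared Euclidean) to the feature."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(feat, codebook[i])))

def occurrence_histogram(features, codebook):
    """Assign each local feature to its nearest codeword and return the
    normalized histogram of codeword counts -- the manuscript's feature vector."""
    hist = [0.0] * len(codebook)
    for f in features:
        hist[nearest_codeword(f, codebook)] += 1
    n = sum(hist)
    return [h / n for h in hist]

codebook = [[0.0, 0.0], [1.0, 1.0]]
features = [[0.1, 0.2], [0.9, 1.1], [1.0, 0.8], [0.0, 0.1]]
hist = occurrence_histogram(features, codebook)
```

The same histogram-over-a-codebook recipe underlies most of the bag-of-words and bag-of-features entries on this page.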

Proceedings ArticleDOI
Arnold Wiliem1, Yongkang Wong1, Conrad Sanderson1, Peter Hobson1, Shaokang Chen1, Brian C. Lovell1 
TL;DR: Experiments show that the proposed cell classification system has consistently high performance and is more robust than two recent CAD systems; this is the first time codebook-based descriptors are applied and studied in this domain.
Abstract: The Anti-Nuclear Antibody (ANA) clinical pathology test is commonly used to identify the existence of various diseases. A hallmark method for identifying the presence of ANAs is the Indirect Immunofluorescence method on Human Epithelial (HEp-2) cells, due to its high sensitivity and the large range of antigens that can be detected. However, the method suffers from numerous shortcomings, such as being subjective as well as time and labour intensive. Computer Aided Diagnostic (CAD) systems have been developed to address these problems, which automatically classify a HEp-2 cell image into one of its known patterns (e.g., speckled, homogeneous). Most of the existing CAD systems use handpicked features to represent a HEp-2 cell image, which may only work in limited scenarios. In this paper, we propose a cell classification system comprised of a dual-region codebook-based descriptor, combined with the Nearest Convex Hull Classifier. We evaluate the performance of several variants of the descriptor on two publicly available datasets: the ICPR HEp-2 cell classification contest dataset and the new SNPHEp-2 dataset. To our knowledge, this is the first time codebook-based descriptors are applied and studied in this domain. Experiments show that the proposed system has consistently high performance and is more robust than two recent CAD systems.

Patent
29 Jul 2013
TL;DR: In this article, a method for determining the Hybrid Automatic Repeat reQuest-ACKnowledge (HARQ-ACK) codebook size for inter-band time division duplex (TDD) carrier aggregation (CA) is presented.
Abstract: Technology to determine a Hybrid Automatic Repeat reQuest-ACKnowledge (HARQ-ACK) codebook size for inter-band time division duplex (TDD) carrier aggregation (CA) is disclosed. In an example, a user equipment (UE) operable to determine a HARQ-ACK codebook size for inter-band TDD CA can include computer circuitry configured to: Determine a HARQ bundling window for inter-band TDD CA including a number of downlink (DL) subframes using HARQ-ACK feedback; divide the HARQ bundling window into a first part and a second part; and calculate the HARQ-ACK codebook size based on the first part and the second part. The first part can include DL subframes of configured serving cells that occur no later than the DL subframe where a downlink control information (DCI) transmission for uplink scheduling on a serving cell is conveyed, and the second part can include physical downlink shared channel (PDSCH) subframes occurring after the DCI transmission of the serving cells.


Journal ArticleDOI
TL;DR: It is shown that when the number of feedback bits scales appropriately with SNR, the sum degrees of freedom of the network are preserved, and that the value of the scaling coefficient can be significantly reduced in networks with asymmetric interference topology.
Abstract: Interference alignment is degree of freedom optimal on $K$ -user MIMO interference channels and many previous works have studied the transceiver designs. However, these works predominantly focus on networks with perfect channel state information at the transmitters and symmetrical interference topology. In this paper, we consider a limited feedback system with heterogeneous path loss and spatial correlations and investigate how the dynamics of the interference topology can be exploited to improve the feedback efficiency. We propose a novel spatial codebook design and perform dynamic quantization via bit allocations to adapt to the asymmetry of the interference topology. We bound the system throughput under the proposed dynamic scheme in terms of the transmit SNR, feedback bits, and the interference topology parameters. It is shown that when the number of feedback bits scales with SNR as $C_{s}\cdot \log {\hbox{SNR}}+ {\cal O}(1)$ , the sum degrees of freedom of the network are preserved. Moreover, the value of scaling coefficient $C_{s}$ can be significantly reduced in networks with asymmetric interference topology.

Patent
22 Oct 2013
TL;DR: In this paper, a method is presented for receiving a reference CSI configuration and a following CSI configuration which is configured to report the same RI (Rank Indicator) as the reference CSI configuration.
Abstract: The present invention relates to a method comprising: receiving reference CSI configuration information and following CSI configuration information which is configured to report the same RI (Rank Indicator) as the reference CSI configuration information; receiving first precoding codebook subset information for the reference CSI configuration information and second precoding codebook subset information for the following CSI configuration information, wherein the set of RIs according to the second precoding codebook subset information is the same as the set of RIs according to the first precoding codebook subset information; and transmitting CSI determined based on at least one of the first precoding codebook subset information and the second precoding codebook subset information.

Proceedings ArticleDOI
25 Nov 2013
TL;DR: Simulation shows that with properly clustered codewords, the proposed 3D MU-MIMO feedback scheme has a significant throughput gain over the 2D MU-MIMO feedback scheme.
Abstract: This paper proposes a new codebook structure called Kronecker-product based codebook (KPC), where each codeword is the Kronecker product of two oversampled DFT codewords in the horizontal and vertical domains. The KPC is especially suitable for three-dimensional (3D) multiuser multi-input multi-output (MU-MIMO) systems. Besides, channel state information feedback based on the best companion cluster scheme is investigated. Since all codewords have been grouped into several clusters, each user feeds back its best precoding matrix index, best interference cluster index and channel quality information, then the BS pairs and schedules users according to the received feedback. Different codeword clustering methods affect the performance of the limited feedback schemes. We propose two kinds of codeword clustering methods based on 3D beam patterns, a symmetric and an asymmetric one. Simulation shows that with properly clustered codewords, our proposed 3D MU-MIMO feedback scheme has a significant throughput gain over the 2D MU-MIMO feedback scheme.
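The KPC structure itself can be sketched directly; antenna counts and beam indices below are arbitrary illustrative values, not the paper's configuration:

```python
import cmath

def dft_codeword(n_ant, index, oversample):
    """One column of an oversampled DFT codebook for a uniform linear array."""
    n = n_ant * oversample
    return [cmath.exp(2j * cmath.pi * index * k / n) for k in range(n_ant)]

def kron(a, b):
    """Kronecker product of two vectors."""
    return [x * y for x in a for y in b]

vert = dft_codeword(n_ant=2, index=1, oversample=4)   # vertical-domain beam
horiz = dft_codeword(n_ant=4, index=3, oversample=4)  # horizontal-domain beam
codeword = kron(vert, horiz)  # one entry per port of the 2x4 planar array
```

Each 3D codeword thus steers a vertical beam and a horizontal beam jointly, which is why the structure suits 2D (planar) antenna arrays.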

Journal ArticleDOI
01 Nov 2013
TL;DR: Experimental results on five commonly used benchmarks demonstrate that the time-consuming clustering is not necessary for the codebook construction of the CM-BOF approach, and the methods are superior or comparable to the state of the art in applications of both rigid and non-rigid 3D shape retrieval.
Abstract: Content-based 3D object retrieval has become an active topic in many research communities. In this paper, we propose a novel visual similarity-based 3D shape retrieval method (CM-BOF) using Clock Matching and Bag-of-Features. Specifically, pose normalization is first applied to each object to generate its canonical pose, and then the normalized object is represented by a set of depth-buffer images captured on the vertices of a given geodesic sphere. Afterwards, each image is described as a word histogram obtained by the vector quantization of the image's salient local features. Finally, an efficient multi-view shape matching scheme (i.e., Clock Matching) is employed to measure the dissimilarity between two models. When applying the CM-BOF method in non-rigid 3D shape retrieval, multidimensional scaling (MDS) should be utilized before pose normalization to calculate the canonical form for each object. This paper also investigates several critical issues for the CM-BOF method, including the influence of the number of views, codebook, training data, and distance function. Experimental results on five commonly used benchmarks demonstrate that: (1) In contrast to the traditional Bag-of-Features, the time-consuming clustering is not necessary for the codebook construction of the CM-BOF approach; (2) Our methods are superior or comparable to the state of the art in applications of both rigid and non-rigid 3D shape retrieval.

Proceedings ArticleDOI
23 Jun 2013
TL;DR: This paper proposes a new technique for learning a discriminative codebook for local feature descriptors, specifically designed for scalable landmark classification; the approach significantly outperforms the state of the art in landmark classification.
Abstract: In this paper we propose a new technique for learning a discriminative codebook for local feature descriptors, specifically designed for scalable landmark classification. The key contribution lies in exploiting the knowledge of correspondences within sets of feature descriptors during codebook learning. Feature correspondences are obtained using structure from motion (SfM) computation on Internet photo collections which serve as the training data. Our codebook is defined by a random forest that is trained to map corresponding feature descriptors into identical codes. Unlike prior forest-based codebook learning methods, we utilize fine-grained descriptor labels and address the challenge of training a forest with an extremely large number of labels. Our codebook is used with various existing feature encoding schemes and also a variant we propose for importance-weighted aggregation of local features. We evaluate our approach on a public dataset of 25 landmarks and our new dataset of 620 landmarks (614K images). Our approach significantly outperforms the state of the art in landmark classification. Furthermore, our method is memory efficient and scalable.

Patent
Hao Xu1, Stefan Geirhofer1, Peter Gaal1, Wanshi Chen1, Yongbin Wei1 
16 May 2013
TL;DR: In this article, the authors present a method for wireless communications that may be performed by a base station and generally includes mapping N physical antennas arranged in at least two dimensions to K virtual antennas, wherein K is less than N, transmitting reference signals (RS) via the K virtual antenna, and receiving, from a user equipment, feedback based on the RS transmitted on the virtual antennas.
Abstract: Aspects of the present disclosure relate to techniques that may be utilized in networks with base stations and/or mobile devices that use large number of antennas or multi-dimensional arrays of antennas. According to certain aspects, a method for wireless communications is provided. The method may be performed, for example, by a base station and generally includes mapping N physical antennas arranged in at least two dimensions to K virtual antennas, wherein K is less than N, transmitting reference signals (RS) via the K virtual antennas, and receiving, from a user equipment, feedback based on the RS transmitted on the K virtual antennas.

Journal ArticleDOI
TL;DR: The proposed method is based on the bag of video words (BOV) representation and does not require prior knowledge about actions, background subtraction, motion estimation or tracking, and is robust to spatial and temporal scale changes, as well as some deformations.

Journal ArticleDOI
TL;DR: Novel probability density function (PDF) models, based on beta and wrapped Cauchy distributions, are proposed for Givens rotations in correlated MIMO channels and precoding using the proposed codebooks achieves significant performance improvement, in terms of mean square error and sum rate, as compared to using uniform codebooks.
Abstract: Parametrization of unitary matrices using Givens rotations has been used for limited feedback in multiple-input multiple-output (MIMO) systems. Feedback based on these rotations has been adopted in IEEE 802.11n and other upcoming standards. However, the probability distributions of Givens rotations are not known for correlated channels, forcing the use of uniform quantization. In this paper, novel probability density function (PDF) models, based on beta and wrapped Cauchy distributions, are proposed for Givens rotations in correlated MIMO channels. Empirical distributions and goodness-of-fit tests show that the proposed distributions characterize the spatial correlation behavior with good accuracy. Moreover, it is shown that the distributions known in the literature for uncorrelated MIMO channels are only special cases. Distributions of Givens rotations are useful for understanding the behavior of singular vectors of correlated channels. In this paper, the PDF models are utilized for bit allocation and optimized codebook design. Simulations show that precoding using the proposed codebooks achieves significant performance improvement, in terms of mean square error and sum rate, as compared to using uniform codebooks. It is also shown that the bit allocations proposed in this paper reduce to those of the IEEE 802.11n standard when the MIMO channel is not spatially correlated.
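A toy real-valued example shows the Givens parametrization that these feedback schemes quantize (the MIMO case applies the same idea to complex unitary matrices, angle by angle):

```python
import math

def givens_angle(v):
    """Angle theta such that rotating v by -theta zeroes its second entry."""
    return math.atan2(v[1], v[0])

def apply_givens(theta, v):
    """Apply the 2x2 Givens rotation G(theta) to vector v."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * v[0] + s * v[1], -s * v[0] + c * v[1]]

v = [0.6, 0.8]                      # a unit vector (e.g., a singular vector)
theta = givens_angle(v)
rotated = apply_givens(theta, v)    # second entry driven to zero

def quantize(theta, bits=3):
    """Uniform quantization of the angle over [0, 2*pi) with B feedback bits --
    the baseline the paper improves on for correlated channels."""
    step = 2 * math.pi / (1 << bits)
    return round((theta % (2 * math.pi)) / step) * step
```

Only the angles need to be fed back; the paper's contribution is replacing the uniform quantizer above with one matched to the (beta / wrapped Cauchy) angle distributions of correlated channels.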

Journal ArticleDOI
TL;DR: A new image classification method by spatial pyramid robust sparse coding (SP-RSC) is proposed, which tries to find the maximum likelihood estimation solution by alternately optimizing over the codebook and local feature coding parameters, and hence is more robust to outliers than traditional sparse coding based methods.

Journal ArticleDOI
TL;DR: This work considers MIMO (Multiple Input Multiple Output) wiretap channels, where a legitimate transmitter Alice communicates with a legitimate receiver Bob in the presence of an eavesdropper Eve, and derives a design criterion for MIMO lattice wiretap codes.
Abstract: We consider MIMO (Multiple Input Multiple Output) wiretap channels, where a legitimate transmitter Alice is communicating with a legitimate receiver Bob in the presence of an eavesdropper Eve, and communication is done via MIMO channels. We suppose that Alice's strategy is to use an infinite lattice codebook, which then allows her to perform coset encoding. We analyze Eve's probability of correctly decoding the message Alice intended for Bob, and from minimizing this probability, we derive a code design criterion for MIMO lattice wiretap codes. The case of block fading channels is treated similarly, and fast fading channels are derived as a particular case. The Alamouti code is carefully studied as an illustration of the analysis provided.

Journal ArticleDOI
TL;DR: Optimal block codes (in the sense of minimum average error probability) with a small number of codewords are investigated for the binary asymmetric channel, including the two special cases of the binary symmetric channel (BSC) and the Z-channel (ZC), both with arbitrary cross-over probabilities.
Abstract: Optimal block-codes (in the sense of minimum average error probability, using maximum likelihood decoding) with a small number of codewords are investigated for the binary asymmetric channel (BAC), including the two special cases of the binary symmetric channel (BSC) and the Z-channel (ZC), both with arbitrary cross-over probabilities. For the ZC, the optimal code structure for an arbitrary finite blocklength is derived in the cases of two, three, and four codewords and conjectured in the case of five codewords. For the BSC, the optimal code structure for an arbitrary finite blocklength is derived in the cases of two and three codewords and conjectured in the case of four codewords. For a general BAC, the best codebooks under the assumption of a threshold decoder are derived for the case of two codewords. The derivation of these optimal codes relies on a new approach of constructing and analyzing the codebook matrix not rowwise (codewords), but columnwise. This new tool leads to an elegant definition of interesting code families that is recursive in the blocklength n and admits their exact analysis of error performance. This allows for a comparison of the average error probability between all possible codebooks.
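The setting can be reproduced by brute force for the ZC (a search sketch, feasible only for very small blocklengths; the paper's contribution is the analytical, columnwise code structure, not this search):

```python
from itertools import product

def zc_prob(y, x, p):
    """P(y | x) on the Z-channel: a 1 flips to 0 with probability p,
    a 0 is always received as 0."""
    pr = 1.0
    for yi, xi in zip(y, x):
        if xi == 0:
            pr *= 1.0 if yi == 0 else 0.0
        else:
            pr *= p if yi == 0 else 1.0 - p
    return pr

def avg_success(codebook, p, n):
    """Average success probability under ML decoding, equiprobable messages."""
    total = 0.0
    for y in product((0, 1), repeat=n):
        total += max(zc_prob(y, x, p) for x in codebook)
    return total / len(codebook)

def best_two_codeword_code(n, p):
    """Exhaustively search all two-codeword codebooks of blocklength n."""
    return max((c for c in product(product((0, 1), repeat=n), repeat=2)
                if c[0] != c[1]),
               key=lambda c: avg_success(c, p, n))

code = best_two_codeword_code(n=2, p=0.1)
```

For n = 2 the search recovers the repetition code {00, 11}, consistent with the intuition that the two codewords should be as separated as possible.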

Journal ArticleDOI
TL;DR: This work designs a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach, and develops a simulated-annealing-based algorithm to solve the problem.
Abstract: Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematical model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous methods.
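The column-vector encryption idea can be sketched for a simple (2,2) scheme without pixel expansion; this is a probabilistic toy illustrating the principle, not the paper's simulated-annealing-optimized construction:

```python
import random

def share_pixel(secret_bit, rng):
    """Encrypt one secret pixel with a randomly chosen column vector:
    each share gets exactly one subpixel, so there is no pixel expansion.
    White (0): shares agree; black (1): shares differ."""
    if secret_bit == 0:
        return rng.choice([(0, 0), (1, 1)])
    return rng.choice([(0, 1), (1, 0)])

def stack(a, b):
    """Stacking transparencies acts as a logical OR of the shares."""
    return a | b

rng = random.Random(0)
secret = [0, 1, 1, 0, 1]
shares = [share_pixel(s, rng) for s in secret]
recovered = [stack(a, b) for a, b in shares]
```

Black secret pixels are always recovered as black, while white pixels come out black only half the time; this contrast gap is what makes the secret visible when the shares are stacked, and each share alone is uniformly random.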

Journal Article
TL;DR: A watermark hiding scheme for copyright protection of sensitive images is proposed that aims at improving the security of related schemes; simulation results reveal good robustness to a range of image processing attacks.
Abstract: In this paper, a watermark hiding scheme for copyright protection of sensitive images is proposed. The concept of visual cryptography is used, so that the original host image is not altered. The proposed scheme aims at improving the security of the related schemes. The scheme also reduces the size of codebook and size of shares, to be used in watermark hiding process. This is achieved by adapting the concept of Pair-Wise Visual Cryptography (PWVC). The simulation results reveal that the proposed scheme has good robustness to a range of image processing attacks.