
Showing papers on "Codebook" published in 2015


Journal ArticleDOI
TL;DR: This paper devises an efficient hierarchical codebook by jointly exploiting sub-array and deactivation (turning-off) antenna processing techniques, with closed-form expressions provided to generate the codebook.
Abstract: In millimeter-wave communication, large antenna arrays are required to achieve high power gain by steering towards each other with narrow beams, which poses the problem of efficiently searching for the best beam direction in the angle domain at both Tx and Rx sides. As exhaustive search is time-consuming, hierarchical search has been widely accepted to reduce the complexity, and its performance is highly dependent on the codebook design. In this paper, we propose two basic criteria for hierarchical codebook design and devise an efficient hierarchical codebook by jointly exploiting sub-array and deactivation (turning-off) antenna processing techniques, where closed-form expressions are provided to generate the codebook. Performance evaluations are conducted under different system and channel models. Results show the superiority of the proposed codebook over existing alternatives.
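For intuition on how a hierarchical codebook cuts the beam search from linear to logarithmic, here is a minimal sketch of a binary-tree beam search over a half-wavelength uniform linear array. The wide codewords are formed by simply averaging steering vectors over each node's angle range (a placeholder, not the paper's sub-array/deactivation construction), and measure_power stands in for an over-the-air measurement.

```python
import numpy as np

N = 32                      # number of antennas (ULA); assumed for illustration
LAYERS = int(np.log2(N))    # depth of the binary codebook tree

def steering(angle, n=N):
    """Steering vector of an n-element half-wavelength ULA."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(angle)) / np.sqrt(n)

def node_codeword(lo, hi, n=N):
    """Wide beam covering [lo, hi): plain average of steering vectors."""
    w = sum(steering(a, n) for a in np.linspace(lo, hi, 8, endpoint=False))
    return w / np.linalg.norm(w)

def hierarchical_search(measure_power):
    """Descend the codebook tree, keeping the child beam with more power."""
    lo, hi = -np.pi / 2, np.pi / 2
    for _ in range(LAYERS):                 # 2 * LAYERS measurements in total
        mid = (lo + hi) / 2
        if measure_power(node_codeword(lo, mid)) >= measure_power(node_codeword(mid, hi)):
            hi = mid
        else:
            lo = mid
    return node_codeword(lo, hi)            # narrowest beam on the winning path

# Toy single-path channel at 20 degrees: 10 measurements instead of 32.
h = steering(np.deg2rad(20))
best = hierarchical_search(lambda w: abs(np.vdot(w, h)) ** 2)
```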

368 citations



Journal ArticleDOI
TL;DR: A classifier based on a new symbolic representation for MTS (denoted SMTS) is provided, which considers all attributes of MTS simultaneously, rather than separately, to extract the information contained in their relationships.
Abstract: Multivariate time series (MTS) classification has gained importance with the increase in the number of temporal datasets in different domains (such as medicine, finance, multimedia, etc.). Similarity-based approaches, such as nearest-neighbor classifiers, are often used for univariate time series, but MTS are characterized not only by individual attributes, but also by their relationships. Here we provide a classifier based on a new symbolic representation for MTS (denoted SMTS) with several important elements. SMTS considers all attributes of MTS simultaneously, rather than separately, to extract the information contained in their relationships. Symbols are learned with a supervised algorithm that requires neither pre-defined intervals nor features. An elementary representation is used that consists of the time index and the values (and first differences for numerical attributes) of the individual time series as columns. That is, there is essentially no feature extraction (aside from first differences), and the local series values are fused to time position through the time index. The initial representation of raw data is quite simple conceptually and operationally. Still, a tree-based ensemble can detect interactions in the space of the time index and time values, and this is exploited to generate a high-dimensional codebook from the terminal nodes of the trees. Because the time index is included as an attribute, each MTS is learned to be segmented by time, or by the value of one of its attributes. The codebook is processed with a second ensemble, where implicit feature selection is now exploited to handle the high-dimensional input. The constituent properties produce a distinctly different algorithm. Moreover, MTS with nominal and missing values are handled efficiently with tree learners. Experiments demonstrate the effectiveness of the proposed approach in terms of accuracy and computation times on a large collection of multivariate (and univariate) datasets.
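To make the two-stage construction concrete, the sketch below mimics its shape with scikit-learn random forests: rows of (time index, values, first differences) train a supervised forest whose leaf indices act as the learned symbols, each series becomes a bag-of-leaves histogram over those terminal nodes, and a second ensemble classifies the histograms. Ensemble sizes and row layout are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rows_of(series):
    """(T, d) multivariate series -> (T-1, 1+2d) rows: time index, values, first diffs."""
    t = np.arange(1, len(series))[:, None]
    return np.hstack([t, series[1:], np.diff(series, axis=0)])

def fit_smts(train_series, labels, n_trees=25):
    # Stage 1: supervised forest over individual rows learns the "symbols".
    X = np.vstack([rows_of(s) for s in train_series])
    y = np.concatenate([[lab] * (len(s) - 1) for s, lab in zip(train_series, labels)])
    forest = RandomForestClassifier(n_estimators=n_trees).fit(X, y)

    def encode(series):
        """Bag-of-leaves histogram: the codebook entries are terminal nodes."""
        leaves = forest.apply(rows_of(series))          # (T-1, n_trees) leaf ids
        parts = [np.bincount(leaves[:, t], minlength=est.tree_.node_count)
                 for t, est in enumerate(forest.estimators_)]
        h = np.concatenate(parts).astype(float)
        return h / h.sum()

    # Stage 2: a second ensemble classifies the high-dimensional codes.
    codes = np.vstack([encode(s) for s in train_series])
    clf = RandomForestClassifier(n_estimators=n_trees).fit(codes, labels)
    return encode, clf                                  # predict via clf.predict([encode(s)])
```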

143 citations


Proceedings ArticleDOI
Bichai Wang1, Kun Wang, Lu Zhaohua, Tian Xie1, Jinguo Quan1 
17 Jun 2015
TL;DR: Simulation results show that in typical Rayleigh fading channels, SCMA has the best performance, while the BER performance of MUSA and PDMA is very close; an analysis of PDMA using the same factor graph as SCMA indicates that the performance gain of SCMA over PDMA comes from both the difference in factor graph and the codebook optimization.
Abstract: With the development of the mobile Internet and the Internet of Things (IoT), 5th generation (5G) wireless communications will see an explosive increase in mobile traffic. To address challenges in 5G such as higher spectral efficiency, massive connectivity, and lower latency, some non-orthogonal multiple access (NOMA) schemes have recently been actively investigated, including power-domain NOMA, multiple access with low-density spreading (LDS), sparse code multiple access (SCMA), multiuser shared access (MUSA), pattern division multiple access (PDMA), etc. Different from conventional orthogonal multiple access (OMA) schemes, NOMA can realize overloading by introducing some controllable interference at the cost of slightly increased receiver complexity, which can achieve significant gains in spectral efficiency and accommodate many more users. In this paper, we discuss basic principles and key features of three typical NOMA schemes, i.e., SCMA, MUSA, and PDMA. Moreover, their performance in terms of uplink bit error rate (BER) is compared. Simulation results show that in typical Rayleigh fading channels, SCMA has the best performance, while the BER performance of MUSA and PDMA is very close. In addition, we also analyze the performance of PDMA using the same factor graph as SCMA, which indicates that the performance gain of SCMA over PDMA comes from both the difference in factor graph and the codebook optimization.

130 citations


Proceedings ArticleDOI
Chuen-Kai Shie1, Chung-Hisang Chuang1, Chun-Nan Chou1, Meng-Hsi Wu1, Edward Y. Chang1 
05 Nov 2015
TL;DR: This work first learns a codebook in an unsupervised way from 15 million images collected from ImageNet, then uses the resulting weighting vectors as the feature vectors of the OM images, and employs a traditional supervised learning algorithm to train an OM classifier.
Abstract: There are two major challenges to overcome when developing a classifier to perform automatic disease diagnosis. First, the amount of labeled medical data is typically very limited, and a classifier cannot be effectively trained to attain high disease-detection accuracy. Second, medical domain knowledge is required to identify representative features in data for detecting a target disease. Most computer scientists and statisticians do not have such domain knowledge. In this work, we show that employing transfer learning can remedy both problems. We use Otitis Media (OM) to conduct our case study. Instead of using domain knowledge to extract features from labeled OM images, we construct features from a dataset that is entirely OM-irrelevant. More specifically, we first learn a codebook in an unsupervised way from 15 million images collected from ImageNet. The codebook gives us what the encoders consider to be the fundamental elements of those 15 million images. We then encode OM images using the codebook and obtain a weighting vector for each OM image. Using the resulting weighting vectors as the feature vectors of the OM images, we employ a traditional supervised learning algorithm to train an OM classifier. The achieved detection accuracy is 88.5% (89.63% in sensitivity and 86.9% in specificity), markedly higher than all previous attempts, which relied on domain experts to help extract features.
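The pipeline's shape is easy to sketch. Below, a k-means patch codebook learned from unlabeled, task-irrelevant images stands in for the paper's unsupervised encoder trained on 15 million ImageNet images (which is, of course, far larger); all sizes, the random placeholder images, and the labels are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

PATCH, K = 8, 256                 # patch side and codebook size (illustrative)

def patches(img, step=4):
    """Collect flattened PATCH x PATCH patches on a grid."""
    return np.array([img[i:i + PATCH, j:j + PATCH].ravel()
                     for i in range(0, img.shape[0] - PATCH + 1, step)
                     for j in range(0, img.shape[1] - PATCH + 1, step)])

# Step 1: unsupervised codebook from unlabeled, task-irrelevant images.
generic = [np.random.rand(64, 64) for _ in range(50)]        # placeholder corpus
codebook = KMeans(n_clusters=K, n_init=3).fit(np.vstack([patches(g) for g in generic]))

# Step 2: encode each labeled medical image as a codeword-occurrence vector.
def weight_vector(img):
    ids = codebook.predict(patches(img))
    return np.bincount(ids, minlength=K) / len(ids)

# Step 3: ordinary supervised learning on the transferred features.
X = np.array([weight_vector(np.random.rand(64, 64)) for _ in range(40)])
y = np.random.randint(0, 2, size=40)                         # placeholder labels
clf = SVC().fit(X, y)
```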

129 citations


Proceedings ArticleDOI
03 Dec 2015
TL;DR: An improved method based on star-QAM signaling constellations is proposed here for designing the SCMA codebooks and it is demonstrated that the new method can greatly improve the BER performance without sacrificing the low detection complexity.
Abstract: In this paper, an optimized codebook design for a non-orthogonal multiple access scheme, called sparse code multiple access (SCMA), is presented. Unlike low density signature (LDS) systems, in SCMA systems the procedures of bit-to-QAM-symbol mapping and spreading are combined, and the incoming bits are directly mapped to the codewords of the SCMA codebook sets. Each layer or user has its dedicated codebook, and the codebooks are all different. An improved method based on star-QAM signaling constellations is proposed here for designing the SCMA codebooks. It is demonstrated that the new method can greatly improve the BER performance without sacrificing the low detection complexity, compared to the existing codebooks and LDS.
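As a point of reference, a star-QAM mother constellation is just concentric rings of equally spaced phase points, as the sketch below shows; the ring radii and the mapping of these points onto sparse multi-dimensional SCMA codewords are precisely the design parameters the paper optimizes, and the values here are placeholders.

```python
import numpy as np

def star_qam(ring_radii, points_per_ring):
    """Concentric-ring (star-QAM) constellation: equally spaced phases per ring."""
    points = []
    for r in ring_radii:
        phases = 2 * np.pi * np.arange(points_per_ring) / points_per_ring
        points.extend(r * np.exp(1j * phases))
    return np.array(points)

# 8-point mother constellation: two rings of four points (radii are placeholders).
mother = star_qam(ring_radii=[1.0, 2.0], points_per_ring=4)
```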

109 citations


Proceedings ArticleDOI
01 Sep 2015
TL;DR: These findings suggest that it is possible to treat leaf counting as a regression problem, requiring as input only the total leaf count per training image, and relate image-based descriptors learned in an unsupervised fashion to leaf counts using a supervised regression model.
Abstract: Counting the number of leaves in plants is important for plant phenotyping, since it can be used to assess plant growth stages. We propose a learning-based approach for counting leaves in rosette (model) plants. We relate image-based descriptors learned in an unsupervised fashion to leaf counts using a supervised regression model. To take advantage of the circular and coplanar arrangement of leaves, and also to introduce scale and rotation invariance, we learn features in a log-polar representation. Image patches extracted in this log-polar domain are provided to K-means, which builds a codebook in an unsupervised manner. Feature codes are obtained by projecting patches on the codebook using the triangle encoding, introducing both sparsity and a specifically designed representation. A global, per-plant image descriptor is obtained by pooling local features in specific regions of the image. Finally, we provide the global descriptors to a support vector regression framework to estimate the number of leaves in a plant. We evaluate our method on datasets of the Leaf Counting Challenge (LCC), containing images of Arabidopsis and tobacco plants. Experimental results show that on average we reduce the absolute counting error by 40% with respect to the winner of the 2014 edition of the challenge (a counting-via-segmentation method). When compared to state-of-the-art density-based approaches to counting, about 75% fewer counting errors are observed on Arabidopsis image data. Our findings suggest that it is possible to treat leaf counting as a regression problem, requiring as input only the total leaf count per training image.
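The triangle encoding step is compact enough to show directly. The sketch below follows the standard formulation (activation equals mean distance minus distance to each centroid, clipped at zero), which is presumably what the paper applies to its log-polar patches; the log-polar sampling and pooling stages are not shown.

```python
import numpy as np

def triangle_encode(patches, centroids):
    """patches: (n, d); centroids: (K, d) from K-means.
    Activation max(0, mean distance - distance) zeroes roughly half the
    codes per patch, giving a sparse soft assignment to the codebook."""
    d = np.linalg.norm(patches[:, None, :] - centroids[None, :, :], axis=2)  # (n, K)
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)
```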

107 citations


Journal ArticleDOI
TL;DR: This paper proposes an optimized Cartesian K-means (ock-means) method to better encode the data points for more accurate approximate nearest neighbor search, though the method requires a larger number of sub-codewords.
Abstract: Product quantization-based approaches are effective for encoding high-dimensional data points for approximate nearest neighbor search. The space is decomposed into a Cartesian product of low-dimensional subspaces, each of which generates a sub-codebook. Data points are encoded as compact binary codes using these sub-codebooks, and the distance between two data points can be approximated efficiently from their codes by precomputed lookup tables. Traditionally, to encode a subvector of a data point in a subspace, only one sub-codeword in the corresponding sub-codebook is selected, which may impose strict restrictions on the search accuracy. In this paper, we propose a novel approach, named optimized Cartesian K-means (ock-means), to better encode the data points for more accurate approximate nearest neighbor search. In ock-means, multiple sub-codewords are used to encode the subvector of a data point in a subspace. Each sub-codeword stems from a different sub-codebook in each subspace, and the sub-codebooks are optimally generated with regard to the minimization of the distortion errors. The high-dimensional data point is then encoded as the concatenation of the indices of multiple sub-codewords from all the subspaces. This provides more flexibility and lower distortion errors than traditional methods. Experimental results on standard real-life data sets demonstrate the superiority over state-of-the-art approaches for approximate nearest neighbor search.
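The key departure from plain product quantization is that each subvector gets several codeword indices rather than one. The sketch below illustrates that encoding step with a simple greedy residual fit over a list of sub-codebooks; the paper's joint optimization of the sub-codebooks themselves is omitted.

```python
import numpy as np

def encode_subvector(x, sub_codebooks):
    """x: (d,) subvector of one subspace; sub_codebooks: list of (K, d) arrays.
    Returns one index per sub-codebook, fitted greedily on the residual."""
    residual, code = x.astype(float).copy(), []
    for C in sub_codebooks:
        idx = int(np.argmin(np.linalg.norm(residual - C, axis=1)))
        code.append(idx)
        residual = residual - C[idx]      # explain what is left with the next book
    return code                           # concatenated over subspaces -> final code
```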

104 citations


Proceedings ArticleDOI
Alireza Bayesteh1, Hosein Nikopour1, Mahmoud Taherzadeh1, Hadi Baligh1, Jianglei Ma1 
01 Dec 2015
TL;DR: It is shown that a significant amount of complexity reduction is possible using the proposed techniques with negligible performance penalty, which paves the way for supporting various applications in future 5G systems using SCMA.
Abstract: Sparse code multiple access (SCMA) is a codebook-based non-orthogonal multiplexing technique. In SCMA, the procedures of bit-to-QAM-symbol mapping and the spreading of CDMA are combined, and incoming bits are directly mapped to multi-dimensional codewords of SCMA codebook sets. Due to the sparse nature of the codewords, SCMA enjoys low-complexity reception, taking advantage of a near-optimal message passing algorithm (MPA). This makes SCMA a candidate for supporting massive connectivity in future 5G networks, where the number of users can potentially be higher than the codeword length (spreading factor). To this end, more efficient reception techniques are needed on top of what MPA delivers. In this paper, some complexity reduction techniques are presented to further reduce the SCMA decoding complexity. These techniques are considered from two perspectives: i) a transmitter-side technique, designing SCMA codebooks with a specific structure that enables low-complexity detection, and ii) low-complexity decoding techniques that take advantage of the SCMA codebook structure. The proposed techniques are evaluated in terms of both complexity and performance. It is shown that a significant amount of complexity reduction is possible using the proposed techniques with negligible performance penalty, which paves the way for supporting various applications in future 5G systems using SCMA.

103 citations


Journal ArticleDOI
TL;DR: In this article, an approximate message passing decoder for sparse superposition codes was proposed, whose decoding complexity scales linearly with the size of the design matrix, and it was shown to asymptotically achieve the AWGN capacity with an appropriate power allocation.
Abstract: Sparse superposition codes were recently introduced by Barron and Joseph for reliable communication over the AWGN channel at rates approaching the channel capacity. The codebook is defined in terms of a Gaussian design matrix, and codewords are sparse linear combinations of columns of the matrix. In this paper, we propose an approximate message passing decoder for sparse superposition codes, whose decoding complexity scales linearly with the size of the design matrix. The performance of the decoder is rigorously analyzed and it is shown to asymptotically achieve the AWGN capacity with an appropriate power allocation. Simulation results are provided to demonstrate the performance of the decoder at finite blocklengths. We introduce a power allocation scheme to improve the empirical performance, and demonstrate how the decoding complexity can be significantly reduced by using Hadamard design matrices.
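The decoder's per-iteration structure can be sketched in a few lines. Below is a minimal AMP loop for sparse superposition codes under a flat power allocation, with the effective noise variance estimated from the residual; it follows the standard SPARC-AMP recursion (section-wise softmax denoiser plus Onsager-corrected residual) and is not claimed to be the authors' exact implementation.

```python
import numpy as np

def amp_decode(y, A, L, M, n_iter=25):
    """y = A @ beta + noise, with beta having L sections of length M and one
    nonzero entry (value sqrt(n*P/L), flat power allocation) per section."""
    n = len(y)
    P = 1.0                                    # total codeword power (assumed)
    c = np.sqrt(n * P / L)                     # nonzero amplitude per section
    beta = np.zeros(L * M)
    z = y.copy()
    tau2 = z @ z / n                           # effective noise variance estimate
    for _ in range(n_iter):
        s = beta + A.T @ z                     # effective AWGN observation of beta
        for l in range(L):                     # section-wise softmax denoiser
            sec = slice(l * M, (l + 1) * M)
            w = s[sec] * c / tau2
            w = np.exp(w - w.max())            # numerically stabilized
            beta[sec] = c * w / w.sum()
        # Residual with the Onsager correction term.
        z = y - A @ beta + (z / tau2) * (P - beta @ beta / n)
        tau2 = z @ z / n
    # Hard decision: pick the largest entry in each section.
    return [int(np.argmax(beta[l * M:(l + 1) * M])) for l in range(L)]
```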

89 citations


Journal ArticleDOI
TL;DR: This paper proposes a grapheme-based approach to offline Arabic writer identification and verification that shows better properties than most of the surveyed techniques in terms of supported corpus size and identification rates, and demonstrates the wide representativeness and good generalization capability of synthetic codebooks.

Posted Content
TL;DR: In this article, the authors proposed a Turbo-TS beamforming scheme for millimeter-wave (mmWave) massive MIMO systems, which is composed of the following two key components: 1) based on the iterative information exchange between the base station and the user, a Turbo-like joint search scheme to find the near-optimal pair of analog precoder and analog combiner; and 2) inspired by the tabu search (TS) algorithm developed in artificial intelligence, a TS-based precoding/combining scheme to intelligently search for the best precoder/combiner in each iteration of the Turbo-like joint search.
Abstract: For millimeter-wave (mmWave) massive MIMO systems, codebook-based analog beamforming (including transmit precoding and receive combining) is usually used to compensate for the severe attenuation of mmWave signals. However, conventional beamforming schemes involve a complicated search among pre-defined codebooks to find the optimal pair of analog precoder and analog combiner. To solve this problem, by exploring the idea of the turbo equalizer together with the tabu search (TS) algorithm, we propose a Turbo-like beamforming scheme based on TS, called Turbo-TS beamforming in this paper, to achieve near-optimal performance with low complexity. Specifically, the proposed Turbo-TS beamforming scheme is composed of the following two key components: 1) based on the iterative information exchange between the base station and the user, we design a Turbo-like joint search scheme to find the near-optimal pair of analog precoder and analog combiner; 2) inspired by the idea of the TS algorithm developed in artificial intelligence, we propose a TS-based precoding/combining scheme to intelligently search for the best precoder/combiner in each iteration of the Turbo-like joint search with low complexity. Analysis shows that the proposed Turbo-TS beamforming can considerably reduce the searching complexity, and simulation results verify that it can achieve near-optimal performance.
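As a flavor of the TS component, here is a minimal tabu search over codebook indices: each iteration moves to the best non-tabu neighbor and remembers recent moves to escape local optima. The neighborhood (adjacent indices), tabu length, and the gain placeholder are all assumptions; the paper's Turbo-like alternation between precoder and combiner is not shown.

```python
import numpy as np

def tabu_search(gain, n_codewords, n_iter=20, tabu_len=5):
    """Maximize gain(index) over a codebook, avoiding recently visited indices."""
    current = int(np.random.randint(n_codewords))
    best, best_gain = current, gain(current)
    tabu = [current]                         # short-term memory of recent moves
    for _ in range(n_iter):
        neighbors = [(current - 1) % n_codewords, (current + 1) % n_codewords]
        candidates = [c for c in neighbors if c not in tabu] or neighbors
        current = max(candidates, key=gain)  # best admissible neighbor
        tabu = (tabu + [current])[-tabu_len:]
        if gain(current) > best_gain:
            best, best_gain = current, gain(current)
    return best

# Toy link-gain table for a 64-codeword beam codebook (placeholder values).
gains = np.random.rand(64)
print(tabu_search(lambda i: gains[i], n_codewords=64))
```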

Journal ArticleDOI
TL;DR: A trellis-extended codebook (TEC) that can be easily harmonized with current wireless standards, such as LTE or LTE-Advanced, because it allows standardized codebooks designed for two, four, or eight antennas to be extended to larger arrays by using a trellis structure, and can solve both the complexity and the feedback overhead issues of CSI quantization in massive MIMO systems.
Abstract: It is of great interest to develop efficient ways to acquire accurate channel state information (CSI) for massive multiple-input–multiple-output (MIMO) systems using frequency division duplexing (FDD). It is theoretically well known that the codebook size (in bits) for CSI quantization should be increased as the number of transmit antennas becomes larger, and 3GPP Long Term Evolution (LTE) and LTE-Advanced codebooks have sizes that scale according to this rule. It is hard to apply the conventional approach of using unstructured and predefined vector quantization codebooks for CSI quantization in massive MIMO because of the codeword search complexity. In this paper, we propose a trellis-extended codebook (TEC) that can be easily harmonized with current wireless standards, such as LTE or LTE-Advanced, because it can allow standardized codebooks designed for two, four, or eight antennas to be extended to larger arrays by using a trellis structure. TEC exploits a Viterbi decoder for CSI quantization and a convolutional encoder for CSI reconstruction. By quantizing multiple channel entries simultaneously using standardized codebooks in a state transition of a trellis search, TEC can achieve a fractional number of bits per channel entry quantization and a practical feedback overhead. Thus, TEC can solve both the complexity and the feedback overhead issues of CSI quantization in massive MIMO systems. We also develop trellis-extended successive phase adjustment (TE-SPA), which works as a differential codebook for TEC. This is similar to the dual codebook concept of LTE-Advanced. TE-SPA can reduce CSI quantization error with lower feedback overhead in temporally and spatially correlated channels. Numerical results verify the effectiveness of the proposed schemes in FDD massive MIMO systems.

Journal ArticleDOI
TL;DR: A visual secret image sharing threshold scheme based on random grids and Boolean operations with the abilities of AND and XOR decryptions is proposed; it has several superior properties, such as a (k, n) threshold, no codebook design, avoidance of the pixel expansion problem, and the same color representation as digital images (digital color).
Abstract: In this paper, a visual secret image sharing threshold scheme based on random grids and Boolean operations with the abilities of AND and XOR decryptions is proposed. When no lightweight computation device is available, the secret can be revealed by the human visual system with no cryptographic computation, based on the Boolean AND operation (stacking). On the other hand, if a lightweight computation device is available, the secret can be revealed with better visual quality based on the Boolean AND or XOR operation, and can be losslessly revealed when sufficient shadow images are collected in a general k-out-of-n scheme. Furthermore, the proposed scheme has several superior properties, such as a (k, n) threshold, no codebook design, avoidance of the pixel expansion problem, and the same color representation as digital images (digital color). Experiments are conducted to show the security and efficiency of the proposed scheme. Comparisons with previous approaches show the advantages of the proposed scheme.
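For the XOR-decryption flavor, the degenerate k = n case (all shadow images required) is a two-line construction, sketched below: n−1 shares are random grids and the last is the secret XOR-ed with them, so XOR-stacking all n shares recovers the secret losslessly. The paper's general (k, n) construction is more involved and is not reproduced here.

```python
import numpy as np

def make_shares(secret_bits, n):
    """n-1 random grids plus one XOR-masked grid; any n-1 shares reveal nothing."""
    shares = [np.random.randint(0, 2, secret_bits.shape) for _ in range(n - 1)]
    last = secret_bits.copy()
    for s in shares:
        last ^= s
    return shares + [last]

def reveal(shares):
    """XOR-stacking all shares losslessly recovers the secret."""
    out = np.zeros_like(shares[0])
    for s in shares:
        out ^= s
    return out

secret = np.random.randint(0, 2, (64, 64))
assert np.array_equal(reveal(make_shares(secret, 4)), secret)
```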

Proceedings ArticleDOI
08 Jun 2015
TL;DR: A codebook design algorithm for beamforming that considers the directional characteristics of mmWave links is proposed, and the proposed codebook is shown to outperform previously reported codebooks for mmWave systems.
Abstract: Small cell networks utilizing millimeter wave (mmWave) links are expected to enhance system throughput, since the wide bandwidths at mmWave frequencies can afford high data rates. Due to mmWave's unfavorable channel conditions, it is necessary for mmWave communication systems to use beamforming with a large number of antennas to generate sharp and strong beams. In this paper, we propose a codebook design algorithm for beamforming that considers the directional characteristics of mmWave links. The proposed codebook design algorithm can be easily adapted to different kinds of antenna arrays. Simulation results show that the proposed codebook outperforms previously reported codebooks for mmWave systems.

Journal ArticleDOI
TL;DR: A novel feature quantization scheme is proposed to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT), which is independent of image collections.
Abstract: The Bag-of-Words (BoW) model based on the Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model: it generates visual words from the high-dimensional SIFT features, so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid these problems, in this paper a novel feature quantization scheme is proposed to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of BSIFT as the code word, the generated BSIFT naturally adapts to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query-sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied to image search in some resource-limited scenarios. We evaluate the proposed algorithm for large-scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
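The codebook-free spirit of BSIFT can be illustrated with a simple median-threshold binarization, sketched below: each descriptor is compared against its own median, so no training on an image collection is needed. The paper's exact bit layout may differ; packing the first 32 bits into an integer key mirrors its use of BSIFT prefixes as inverted-file code words.

```python
import numpy as np

def binarize_sift(desc):
    """desc: (128,) SIFT descriptor -> (128,) bit vector, no codebook needed."""
    return (desc > np.median(desc)).astype(np.uint8)

def code_word(bits):
    """Pack the first 32 bits into an int usable as an inverted-file key."""
    return int(np.packbits(bits[:32]).view('>u4')[0])
```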

Journal ArticleDOI
Tong He1, Zhenyu Xiao1
TL;DR: In order to improve the efficiency of beamforming for millimeter-wave communications, a codebook is designed and a corresponding beam search algorithm is proposed in this paper, which dramatically decreases the search time.
Abstract: In order to improve the efficiency of beamforming for millimeter-wave communications, a codebook is designed and a corresponding beam search algorithm is proposed in this paper. The codebook is designed with a hierarchical structure and can be organized into a binary tree, which makes the implementation of binary search possible. Meanwhile, based on the designed codebook, a suboptimal binary-search-like (BSL) algorithm is proposed for beamforming. Theoretical analysis and simulation results show that, compared with state-of-the-art search schemes, the proposed scheme dramatically decreases the search time.

Journal ArticleDOI
TL;DR: A novel approach is proposed for off-line writer identification that utilizes an ensemble of codebook grapheme features; kernel discriminant analysis using spectral regression (SR-KDA) is used as a dimensionality-reduction technique in order to avoid over-fitting.

Proceedings ArticleDOI
08 Jun 2015
TL;DR: An effective codebook is designed, using a genetic algorithm, that achieves a near-optimal array gain in all directions, and a low-complexity channel estimation scheme is proposed that requires less signalling overhead and is effective with low-resolution phase shifters.
Abstract: Millimeter wave (mmWave) technologies can enable current mobile communication systems to achieve higher data rates. However, wireless channels at mmWave frequencies experience higher isotropic path loss. Therefore, employing a suitable beamforming algorithm is an indispensable element of any mmWave system. Traditional multiple-input multiple-output (MIMO) systems employ digital beamforming, where each antenna element is equipped with one RF chain. In the case of mmWave systems, however, the power consumption, signalling, and hardware cost force designers to deploy analog or hybrid beamforming strategies. This paper addresses two key problems in beamforming for millimeter wave communication systems. First, an effective codebook is designed, using a genetic algorithm, that achieves a near-optimal array gain in all directions. This RF codebook is shown to perform better than state-of-the-art RF codebooks with fewer RF chains and lower-resolution phase shifters. Second, a low-complexity channel estimation scheme is proposed that requires less signalling overhead and is effective with low-resolution phase shifters. Finally, the performance of the proposed RF codebook and channel estimation scheme is thoroughly investigated in terms of spectral efficiency.

Journal ArticleDOI
TL;DR: Experimental results on three 3-D image quality assessment databases demonstrate that, in comparison with the most related existing methods, the devised algorithm achieves high consistency with subjective assessment and low-complexity pooling.
Abstract: The field of assessing three-dimensional (3-D) visual experience is challenging. In this paper, we propose a new blind image quality assessment for stereoscopic images that uses binocular guided quality lookup and a visual codebook. To be more specific, in the training stage, we construct a phase-tuned quality lookup (PTQL) and a phase-tuned visual codebook (PTVC) from the binocular energy responses, based on stimuli of different spatial frequencies, orientations, and phase shifts. In the test stage, blind quality pooling can be easily achieved by searching the PTQL and PTVC, and the quality score is obtained by averaging the largest quality values over all patches. Experimental results on three 3-D image quality assessment databases demonstrate that, in comparison with the most related existing methods, the devised algorithm achieves high consistency with subjective assessment and low-complexity pooling.

Proceedings ArticleDOI
01 Dec 2015
TL;DR: A beam training algorithm is proposed that efficiently designs the beamforming vectors with low training overhead and achieves a comparable rate to that obtained by exhaustive search solutions while requiring lower training overhead when compared to prior work.
Abstract: Millimeter wave (mmWave) communication is one solution to provide more spectrum than available at lower carrier frequencies. To provide sufficient link budget, mmWave systems will use beamforming with large antenna arrays at both the transmitter and receiver. Training these large arrays using conventional approaches taken at lower carrier frequencies, however, results in high overhead. In this paper, we propose a beam training algorithm that efficiently designs the beamforming vectors with low training overhead. Exploiting mmWave channel reciprocity, the proposed algorithm relaxes the need for an explicit feedback channel, and opportunistically terminates the training process when a desired quality of service is achieved. To construct the training beamforming vectors, a new multi-resolution codebook is developed for hybrid analog/digital architectures. Simulation results show that the proposed algorithm achieves a comparable rate to that obtained by exhaustive search solutions while requiring lower training overhead when compared to prior work.

Journal ArticleDOI
TL;DR: It is demonstrated that the underlying imaging modality and the irrelevance of illumination and scale invariance within the transmission imagery context considered here result in the favourable performance of simpler density histogram descriptors over 3D extensions of the well-established SIFT and RIFT feature descriptor approaches.

Journal ArticleDOI
TL;DR: A reversible data-hiding scheme for vector quantization (VQ)-compressed images that can achieve a high embedding capacity and a high compression bit rate is presented.

Proceedings ArticleDOI
19 Apr 2015
TL;DR: This work proposes to split each row vector of weight matrices into sub-vectors, and quantize them into a set of codewords using a split vector quantization (split-VQ) algorithm, and demonstrates that this method can further reduce the model size and save 10% to 50% computation on top of an already very compact SVD-DNN without a noticeable performance degradation.
Abstract: Due to the large number of parameters in deep neural networks (DNNs), it is challenging to design a small-footprint DNN-based speech recognition system while maintaining high recognition performance. Even with a singular value matrix decomposition (SVD) method and scalar quantization, the DNN model is still too large to be deployed on many mobile devices. Common practices like reducing the number of hidden nodes often result in significant accuracy loss. In this work, we propose to split each row vector of the weight matrices into sub-vectors and quantize them into a set of codewords using a split vector quantization (split-VQ) algorithm. The codebook can be fine-tuned using back-propagation when an aggressive quantization is performed. Experimental results demonstrate that the proposed method can further reduce the model size by 75% to 80% and save 10% to 50% of the computation on top of an already very compact SVD-DNN without a noticeable performance degradation. This results in a 3.2 MB-footprint DNN giving recognition performance similar to what a 59.1 MB standard DNN can achieve.
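The split-VQ step itself is small, as the sketch below shows: each weight-matrix row is cut into sub-vectors, all sub-vectors are clustered with k-means, and the matrix is stored as one small codebook plus one byte of index per sub-vector. The sub-vector and codebook sizes are illustrative, and the paper's back-propagation fine-tuning of the codebook is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_vq(W, sub_dim=4, n_codewords=256):
    """Quantize weight matrix W: every row is split into sub_dim-long pieces."""
    rows, cols = W.shape
    assert cols % sub_dim == 0
    subs = W.reshape(rows * cols // sub_dim, sub_dim)         # all sub-vectors
    km = KMeans(n_clusters=n_codewords, n_init=3).fit(subs)
    return km.cluster_centers_, km.labels_.astype(np.uint8)   # codebook + 1-byte codes

def reconstruct(codebook, indices, shape):
    return codebook[indices].reshape(shape)

W = np.random.randn(512, 512).astype(np.float32)
codebook, indices = split_vq(W)
W_hat = reconstruct(codebook, indices, W.shape)
# Storage: 512*512*4 B -> 256*4*4 B (codebook) + 512*512/4 B (indices), ~15x smaller.
```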

Journal ArticleDOI
TL;DR: A set of steganalysis features based on the probability of identical pulse positions is proposed for detecting adaptive multirate (AMR) audio steganography schemes; a support vector machine is applied to the proposed features and used as the steganalyzer.
Abstract: This paper presents a method for the detection of adaptive multirate (AMR) audio steganography. The AMR audio codec is an audio data compression scheme optimized for speech coding and widely used in some mobile telecommunication systems. AMR audio steganography schemes have emerged recently; they embed secret messages by modifying the nonzero pulse positions that are determined by the fixed codebook search in the AMR compression procedure. These methods have high embedding capacity and good imperceptibility. We have observed that these steganography schemes increase the probability of identical pulse positions in the same track. Based on this phenomenon, this paper presents a set of steganalysis features built on the probability of identical pulse positions. A support vector machine is applied to the proposed features and used as the steganalyzer. The performance of the scheme is tested on a database containing about 140,714 audio samples. Experimental results show that the correct detection rate of the proposed method is above 90% when the embedding bit rate is 30% or higher, and reaches above 85% for cover audios.

Patent
08 Oct 2015
TL;DR: In this article, the authors propose a method for efficiently calculating inner products between a query item and a database of items: the search items are represented as vectors of elements, a subspace is a block of elements from each search item that occur at the same vector positions, and a codebook is generated for each subspace within soft constraints based on example queries.
Abstract: Implementations provide an improved system for efficiently calculating inner products between a query item and a database of items. An example method includes generating a plurality of subspaces from search items in a database, the search items being represented as vectors of elements, a subspace being a block of elements from each search item that occur at the same vector position, generating a codebook for each subspace within soft constraints that are based on example queries, assigning each subspace of each search item an entry in the codebook for the subspace, the assignments for all subspaces of a search item representing a quantized search item, and storing the codebooks and the quantized search items. Generating a codebook for a particular subspace can include clustering the search item subspaces that correspond to the particular subspace, finding a cluster center for each cluster, and storing the cluster center as the codebook entry.
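The query-time trick described here is that, once per query, its dot products against every codebook entry are tabulated per subspace, so scoring a database item becomes a handful of table lookups. The sketch below uses plain k-means per subspace in place of the patent's soft-constrained codebook training; all sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def build(db, n_subspaces=8, k=16):
    """Quantize each item subspace-by-subspace (plain k-means codebooks)."""
    subs = np.split(db, n_subspaces, axis=1)               # blocks of columns
    books = [KMeans(n_clusters=k, n_init=3).fit(s) for s in subs]
    codes = np.stack([b.predict(s) for b, s in zip(books, subs)], axis=1)
    return books, codes                                    # codes: (n_items, n_subspaces)

def search(query, books, codes):
    """Score all items with table lookups instead of full dot products."""
    q_subs = np.split(query, len(books))
    tables = np.stack([b.cluster_centers_ @ q for b, q in zip(books, q_subs)])
    scores = tables[np.arange(len(books)), codes].sum(axis=1)  # lookup + sum
    return np.argsort(-scores)                             # best inner products first

db = np.random.randn(1000, 64).astype(np.float32)
books, codes = build(db)
ranking = search(np.random.randn(64), books, codes)
```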

Journal ArticleDOI
TL;DR: This paper proposes downlink multi-user MIMO LTE-Advanced networks using SINR approximation and hierarchical CSI feedback, and shows that the proposed technique improves the throughput and reduces the overhead.
Abstract: In multi-user MIMO LTE-Advanced networks, the main issue is related to signal-to-interference-plus-noise ratio (SINR) mismatch, which can result in reduced throughput and performance. Also, conventional codebooks may result in feedback overhead when the channels change slowly. Hence, in this paper, we propose downlink multi-user MIMO LTE-Advanced networks using SINR approximation and hierarchical CSI feedback. Initially, the signal and spatially correlated flat fading channel models of MU-MIMO are defined. Then an SINR approximation technique is employed which utilises the channel state information at the base station. An advanced structural codebook and the idea of hierarchical feedback are introduced. The main idea of hierarchical feedback is that if the channel changes slowly, the channel state information feedback can be aggregated over multiple feedback intervals so that the aggregated bits index a larger codebook. There is a pre-defined number of levels in a hierarchical codebook tree. This increased codebook size can effectively improve the performance of MU-MIMO. Simulation results show that the proposed technique improves the throughput and reduces the overhead.

Proceedings ArticleDOI
19 Oct 2015
TL;DR: This work proposes that accelerometer and magnetometer sensors, which are commonly available on mobile devices, can be used to better account for mobility and to perform near-real-time beam-width adaptation and beam switching to address the mobility challenges in 60 GHz WLANs.
Abstract: The potential to provide multi-Gbps throughput has made 60 GHz communication an attractive choice for next-generation WLANs. Due to the highly directional nature of the communication, a 60 GHz link faces frequent outages in the presence of mobility. In this work, we present sensor-assisted, multi-level codebook-based beam-width adaptation and beam switching to address the mobility challenges in 60 GHz WLANs. First, we show that by combining antenna element selection with codebook design, it is possible to generate a multi-level codebook that can cover different beamforming directions with many possible beam widths and directive gains. Second, we propose that accelerometer and magnetometer sensors, which are commonly available on mobile devices, can be used to better account for mobility and to perform near-real-time beam-width adaptation and beam switching. We evaluate the sensor-assisted multi-level codebook-based beamforming with trace-driven simulations using real mobility traces. Numeric evaluation shows that such beamforming can maintain connectivity over 84% of the time even in the presence of high device mobility.

Journal ArticleDOI
TL;DR: This work presents a novel reversible data hiding scheme based on the search-order coding (SOC) algorithm and side match vector quantization (SMVQ) that yields a higher embedding rate than the schemes of Yang and Lin and Yang et al.

Proceedings ArticleDOI
23 Aug 2015
TL;DR: Experimental results show that the proposed method outperforms state-of-the-art algorithms and achieves the best performance.
Abstract: This paper presents a method for text-independent writer identification using the SIFT descriptor and a contour-directional feature (CDF). The proposed method contains two stages. In the first stage, a codebook of local texture patterns is constructed by clustering a set of SIFT descriptors extracted from images. Using this codebook, occurrence histograms are calculated to determine the similarities between different images. For each image, we obtain a candidate list of reference images. The next stage is to refine the candidate list using the contour-directional feature and the SIFT descriptor. The proposed method is evaluated on two datasets: the ICFHR2012-Latin dataset and the ICDAR2013 dataset. Experimental results show that the proposed method outperforms state-of-the-art algorithms and achieves the best performance.