Showing papers in "IEEE Transactions on Information Forensics and Security in 2014"
TL;DR: This paper presents a robust face alignment technique, which explicitly considers the uncertainties of facial feature detectors, and describes the dropout-support vector machine approach used by the system for face attribute estimation, in order to avoid over-fitting.
Abstract: This paper concerns the estimation of facial attributes—namely, age and gender—from images of faces acquired in challenging, in-the-wild conditions. This problem has received far less attention than the related problem of face recognition, and in particular has not enjoyed the same dramatic improvement in capabilities demonstrated by contemporary face recognition systems. Here, we address this problem by making the following contributions. First, in answer to one of the key problems of age estimation research—the absence of data—we offer a unique data set of face images, labeled for age and gender, acquired by smartphones and other mobile devices and uploaded without manual filtering to online image repositories. We show the images in our collection to be more challenging than those offered by other face-photo benchmarks. Second, we describe the dropout-support vector machine approach used by our system for face attribute estimation, designed to avoid over-fitting. This method, inspired by the dropout learning techniques now popular with deep belief networks, is applied here, to the best of our knowledge, for the first time to the training of support vector machines. Finally, we present a robust face alignment technique that explicitly considers the uncertainties of facial feature detectors. We report extensive tests analyzing both the difficulty levels of contemporary benchmarks and the capabilities of our own system. These show our method to outperform the state of the art by a wide margin.
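The dropout-SVM training described above can be sketched as a linear SVM fit by sub-gradient descent in which input features are randomly zeroed at each update. This is an illustrative reconstruction under assumed details (Pegasos-style updates, inverted-dropout scaling, the parameter names), not the paper's implementation.

```python
import random

def train_dropout_svm(X, y, epochs=100, lam=0.01, drop=0.3, seed=0):
    """Linear SVM via sub-gradient descent with feature dropout.

    At each update every input feature is zeroed with probability
    `drop` (and the survivors rescaled by 1/(1-drop)), mimicking the
    dropout regularization idea. Labels y must be in {-1, +1}.
    """
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)
            # apply a fresh dropout mask to the input features
            xd = [0.0 if rng.random() < drop else v / (1 - drop) for v in xi]
            margin = yi * sum(wj * xj for wj, xj in zip(w, xd))
            # sub-gradient step for lam/2*||w||^2 + hinge loss
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * yi * xj for wj, xj in zip(w, xd)]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

On a toy linearly separable set the trained weights still recover the separating direction despite the per-step feature dropout.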
TL;DR: This paper thoroughly analyzes the permission-induced risk in Android apps on three levels in a systematic manner, and evaluates the usefulness of risky permissions for malapp detection with support vector machine, decision trees, as well as random forest.
Abstract: Android has been a major target of malicious applications (malapps). How to detect and keep malapps out of the app markets is an ongoing challenge. One of the central design points of the Android security mechanism is permission control, which restricts the access of apps to core facilities of devices. However, it imparts a significant responsibility to the app developers, with regard to accurately specifying the requested permissions, and to the users, with regard to fully understanding the risk of granting certain combinations of permissions. The Android permissions requested by an app depict the app’s behavioral patterns. To aid the understanding of Android permissions, in this paper, we explore the permission-induced risk in Android apps on three levels in a systematic manner. First, we thoroughly analyze the risk of an individual permission and the risk of a group of collaborative permissions. We employ three feature ranking methods, namely, mutual information, correlation coefficient, and T-test, to rank individual Android permissions with respect to their risk. We then use sequential forward selection as well as principal component analysis to identify risky permission subsets. Second, we evaluate the usefulness of risky permissions for malapp detection with support vector machines, decision trees, and random forests. Third, we analyze the detection results in depth and discuss the feasibility as well as the limitations of malapp detection based on permission requests. We evaluate our methods on a very large official app set consisting of 310,926 benign apps and 4,868 real-world malapps, and on third-party app sets. The empirical results show that our malapp detectors built on risky permissions achieve satisfactory performance (a detection rate of 94.62% with a false positive rate of 0.6%), capture the malapps’ essential patterns of violating permission access regulations, and are universally applicable to unknown malapps (a detection rate of 74.03%).
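The permission-ranking step can be illustrated with mutual information, one of the three ranking methods named above. The toy permission matrix and permission names below are made up for the example; the paper's feature set and data are far larger.

```python
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits for two paired discrete sequences."""
    n = len(xs)
    counts = {}
    for x, y in zip(xs, ys):
        counts[(x, y)] = counts.get((x, y), 0) + 1
    px = {v: sum(1 for x in xs if x == v) / n for v in set(xs)}
    py = {v: sum(1 for y in ys if y == v) / n for v in set(ys)}
    mi = 0.0
    for (x, y), c in counts.items():
        pxy = c / n
        mi += pxy * log2(pxy / (px[x] * py[y]))
    return mi

def rank_permissions(perm_matrix, labels, names):
    """Rank permissions by mutual information with the malapp label
    (rows = apps, columns = requested-permission indicators)."""
    scored = [(mutual_information([row[j] for row in perm_matrix], labels), name)
              for j, name in enumerate(names)]
    return [name for score, name in sorted(scored, reverse=True)]
```

A permission requested only by malapps ranks highest; a permission requested by every app carries no information and ranks last.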
TL;DR: This paper inspects the spoofing potential of subject-specific 3D facial masks for different recognition systems and addresses the detection problem of this more complex attack type.
Abstract: Spoofing is the act of masquerading as a valid user by falsifying data to gain illegitimate access. The vulnerability of recognition systems to spoofing attacks (presentation attacks) is still an open security issue in the biometrics domain, and among all biometric traits, face is exposed to the most serious threat, since it is particularly easy to access and reproduce. In the literature, many different types of face spoofing attacks have been examined and various algorithms have been proposed to detect them. Mainly focusing on 2D attacks forged by displaying printed photos or replaying recorded videos on mobile devices, a significant portion of these studies ground their arguments on the flatness of the spoofing material in front of the sensor. However, with the advancements in 3D reconstruction and printing technologies, this assumption can no longer be maintained. In this paper, we inspect the spoofing potential of subject-specific 3D facial masks for different recognition systems and address the detection problem of this more complex attack type. In order to assess the spoofing performance of 3D masks against 2D, 2.5D, and 3D face recognition, and to analyze various texture-based countermeasures using both 2D and 2.5D data, a parallel study with comprehensive experiments is performed on two data sets: the Morpho database, which is not publicly available, and the newly distributed 3D mask attack database.
TL;DR: A class of new distortion functions known as the uniform embedding distortion function (UED) is presented for both side-informed and non-side-informed secure JPEG steganography, which tries to spread the embedding modifications uniformly across quantized discrete cosine transform (DCT) coefficients of all possible magnitudes.
Abstract: Steganography is the science and art of covert communication, which aims to hide secret messages in a cover medium while achieving the least possible statistical detectability. To this end, the framework of minimal-distortion embedding is widely adopted in the development of steganographic systems, in which a well-designed distortion function is of vital importance. In this paper, a class of new distortion functions known as the uniform embedding distortion function (UED) is presented for both side-informed and non-side-informed secure JPEG steganography. By incorporating syndrome trellis coding, the best codeword with minimal distortion for a given message is determined with UED, which, instead of random modification, tries to spread the embedding modifications uniformly across quantized discrete cosine transform (DCT) coefficients of all possible magnitudes. In this way, lower statistical detectability is achieved, owing to the reduction of the average changes of the first- and second-order statistics of the DCT coefficients as a whole. The effectiveness of the proposed scheme is verified with evidence obtained from exhaustive experiments using popular steganalyzers with various feature sets on the BOSSbase database. Compared with prior art, the proposed scheme achieves favorable performance in terms of secure embedding capacity against steganalysis.
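A rough sketch of the uniform-embedding intuition: make the cost of modifying a DCT coefficient depend on the magnitudes in its neighborhood, so that modifications spread across coefficients of all magnitudes rather than concentrating in one range. The neighborhood and cost formula below are simplified stand-ins, not the paper's UED definition, and the syndrome trellis coding step is omitted entirely.

```python
def embedding_cost(coeffs, i):
    """Simplified per-coefficient embedding cost: cheaper where the
    local magnitudes are larger. Illustrative formula only; the
    paper's UED uses specific intra/inter-block neighborhoods."""
    neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(coeffs)]
    return sum(1.0 / (abs(coeffs[i]) + abs(coeffs[j]) + 1.0)
               for j in neighbors)

def cheapest_change(coeffs):
    """Index of the coefficient whose +-1 change costs least."""
    return min(range(len(coeffs)), key=lambda i: embedding_cost(coeffs, i))
```

In a real embedder these costs would feed a syndrome trellis coder, which picks the minimal-cost stego object carrying the message.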
TL;DR: It is shown that the proposed approach boosts the likelihood of correctly identifying the person of interest through the use of different fusion schemes, 3-D face models, and incorporation of quality measures for fusion and video frame selection.
Abstract: As face recognition applications progress from constrained sensing and cooperative subjects scenarios (e.g., driver’s license and passport photos) to unconstrained scenarios with uncooperative subjects (e.g., video surveillance), new challenges are encountered. These challenges are due to variations in ambient illumination, image resolution, background clutter, facial pose, expression, and occlusion. In forensic investigations where the goal is to identify a person of interest, often based on low quality face images and videos, we need to utilize whatever source of information is available about the person. This could include one or more video tracks, multiple still images captured by bystanders (using, for example, their mobile phones), 3-D face models constructed from image(s) and video(s), and verbal descriptions of the subject provided by witnesses. These verbal descriptions can be used to generate a face sketch and provide ancillary information about the person of interest (e.g., gender, race, and age). While traditional face matching methods generally take a single media (i.e., a still face image, video track, or face sketch) as input, this paper considers using the entire gamut of media as a probe to generate a single candidate list for the person of interest. We show that the proposed approach boosts the likelihood of correctly identifying the person of interest through the use of different fusion schemes, 3-D face models, and incorporation of quality measures for fusion and video frame selection.
TL;DR: This paper proposes two novel algorithms to detect the contrast enhancement involved manipulations in digital images, focusing on the detection of global contrast enhancement applied to the previously JPEG-compressed images, which are widespread in real applications.
Abstract: As a retouching manipulation, contrast enhancement is typically used to adjust the global brightness and contrast of digital images. Malicious users may also perform contrast enhancement locally to create a realistic composite image. As such, it is important to detect contrast enhancement blindly when verifying the originality and authenticity of digital images. In this paper, we propose two novel algorithms to detect contrast-enhancement-based manipulations in digital images. First, we focus on the detection of global contrast enhancement applied to previously JPEG-compressed images, which are widespread in real applications. The histogram peak/gap artifacts incurred by JPEG compression and pixel value mappings are analyzed theoretically and distinguished by identifying the zero-height gap fingerprints. Second, we propose to identify composite images created by enforcing contrast adjustment on either one or both source regions. The positions of detected blockwise peak/gap bins are clustered to recognize the contrast enhancement mappings applied to different source regions. The consistency between regional artifacts is checked to discover image forgeries and locate the composition boundary. Extensive experiments have verified the effectiveness and efficacy of the proposed techniques.
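The zero-height gap fingerprint lends itself to a very small sketch: look for empty histogram bins flanked by populated ones, which pixel-value mappings leave behind in otherwise smooth histograms. The threshold and decision rule below are illustrative assumptions, not the paper's detector.

```python
def zero_height_gaps(hist):
    """Indices of zero-height gap bins: empty bins whose immediate
    neighbors are both populated (a simplified fingerprint of a
    pixel-value mapping applied to a JPEG-compressed image)."""
    return [i for i in range(1, len(hist) - 1)
            if hist[i] == 0 and hist[i - 1] > 0 and hist[i + 1] > 0]

def looks_contrast_enhanced(hist, min_gaps=2):
    """Toy decision rule: flag the image if enough gap bins appear.
    The threshold is an arbitrary illustration."""
    return len(zero_height_gaps(hist)) >= min_gaps
```

A histogram with isolated empty bins trips the detector; a smooth histogram does not.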
TL;DR: This paper examines the issues that represent an obstacle to deploying biometric systems based on the analysis of brain activity in real-life applications, and provides a critical and comprehensive review of state-of-the-art methods for electroencephalogram-based automatic user recognition.
Abstract: Brain signals have been investigated within the medical field for more than a century to study brain diseases such as epilepsy, spinal cord injuries, Alzheimer's, Parkinson's, schizophrenia, and stroke, among others. They are also used in both brain-computer and brain-machine interface systems with assistive, rehabilitative, and entertainment applications. Despite the broad interest in clinical applications, brain signals have only recently been investigated by the scientific community as a biometric characteristic to be used in automatic people recognition systems. However, brain signals present some peculiarities, not shared by the most commonly used biometrics such as face, iris, and fingerprints, with reference to privacy compliance, robustness against spoofing attacks, the possibility to perform continuous identification, intrinsic liveness detection, and universality. These peculiarities make the use of brain signals appealing. On the other hand, there are many challenges which need to be properly addressed. Understanding the level of uniqueness and permanence of brain responses, designing elicitation protocols, and the invasiveness of the acquisition process are only a few of the challenges that need to be tackled. In this paper, we elaborate on the issues that represent an obstacle toward the deployment of biometric systems based on the analysis of brain activity in real-life applications, and provide a critical and comprehensive review of state-of-the-art methods for electroencephalogram-based automatic user recognition, also reporting neurophysiological evidence related to the reported claims.
TL;DR: Experimental results show the effectiveness of the proposed discriminative multimetric learning method for kinship verification via facial image analysis over existing single-metric and multimetric learning methods.
Abstract: In this paper, we propose a new discriminative multimetric learning method for kinship verification via facial image analysis. Given each face image, we first extract multiple features using different face descriptors to characterize face images from different aspects, because different feature descriptors can provide complementary information. Then, we jointly learn multiple distance metrics with these extracted features, under which the probability that a pair of face images with a kinship relation has a smaller distance than a pair without one is maximized, and the correlation of different features of the same face sample is simultaneously maximized, so that complementary and discriminative information is exploited for verification. Experimental results on four face kinship data sets show the effectiveness of our proposed method over existing single-metric and multimetric learning methods.
TL;DR: A sparse reconstruction based metric learning method is proposed to learn a distance metric that minimizes the intra-class sparse reconstruction errors and maximizes the inter-class sparse reconstruction errors simultaneously, so that discriminative information can be exploited for recognition.
Abstract: We investigate the problem of human identity and gender recognition from gait sequences with arbitrary walking directions. Most current approaches make the unrealistic assumption that persons walk along a fixed direction or a pre-defined path. Given a gait sequence collected from arbitrary walking directions, we first obtain human silhouettes by background subtraction and cluster them into several clusters. For each cluster, we compute the cluster-based averaged gait image as features. Then, we propose a sparse reconstruction based metric learning method to learn a distance metric to minimize the intra-class sparse reconstruction errors and maximize the inter-class sparse reconstruction errors simultaneously, so that discriminative information can be exploited for recognition. The experimental results show the efficacy of our approach.
TL;DR: This paper evaluates the state-of-the-art commercial mobile anti-malware products for Android and test how resistant they are against various common obfuscation techniques (even with known malware), and proposes possible remedies for improving the current state of malware detection on mobile devices.
Abstract: Mobile malware threats (e.g., on Android) have recently become a real concern. In this paper, we evaluate state-of-the-art commercial mobile anti-malware products for Android and test how resistant they are against various common obfuscation techniques (even with known malware). Such an evaluation is important not only for measuring the available defenses against mobile malware threats, but also for proposing effective, next-generation solutions. We developed DroidChameleon, a systematic framework with various transformation techniques, and used it for our study. Our results on 10 popular commercial anti-malware applications for Android are worrisome: none of these tools is resistant against common malware transformation techniques. In addition, a majority of them can be trivially defeated by applying slight transformations to known malware, with little effort for malware authors. Finally, in light of our results, we propose possible remedies for improving the current state of malware detection on mobile devices.
TL;DR: The results show that the performance of the proposed technique is comparable and often superior to state-of-the-art algorithms despite its simplicity and efficiency.
Abstract: This paper studies online signature verification on touch interface-based mobile devices. A simple and effective method for signature verification is developed. An online signature is represented with a discriminative feature vector derived from attributes of several histograms that can be computed in linear time. The resulting signature template is compact and requires constant space. The algorithm was first tested on the well-known MCYT-100 and SUSIG data sets. The results show that the performance of the proposed technique is comparable and often superior to state-of-the-art algorithms despite its simplicity and efficiency. In order to test the proposed method on finger drawn signatures on touch devices, a data set was collected from an uncontrolled environment and over multiple sessions. Experimental results on this data set confirm the effectiveness of the proposed algorithm in mobile settings. The results demonstrate the problem of within-user variation of signatures across multiple sessions and the effectiveness of cross session training strategies to alleviate these problems.
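The histogram-based template can be sketched as follows: a linear-time pass over the sampled points produces a fixed-size, normalized histogram of stroke directions. The attribute (direction angle) and bin count are assumptions for illustration; the method summarized above combines several such attribute histograms into one feature vector.

```python
from math import atan2, pi

def direction_histogram(points, bins=8):
    """Constant-size signature descriptor: a normalized histogram of
    the direction angles between consecutive sample points (x, y).
    Computed in a single linear pass, so the template size does not
    depend on signature length."""
    hist = [0] * bins
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x1 == x0 and y1 == y0:
            continue  # skip repeated samples (no direction defined)
        angle = atan2(y1 - y0, x1 - x0)          # in (-pi, pi]
        b = int((angle + pi) / (2 * pi) * bins) % bins
        hist[b] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]  # normalize: length-invariant
```

Two signatures are then compared by a distance between their histograms, which keeps both template storage and matching cost constant.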
TL;DR: A highly efficient image encryption-then-compression (ETC) system is designed, where both lossless and lossy compression are considered, and the proposed image encryption scheme, operating in the prediction error domain, is shown to provide a reasonably high level of security.
Abstract: In many practical scenarios, image encryption has to be conducted prior to image compression. This has led to the problem of how to design a pair of image encryption and compression algorithms such that compressing the encrypted images can still be efficiently performed. In this paper, we design a highly efficient image encryption-then-compression (ETC) system, where both lossless and lossy compression are considered. The proposed image encryption scheme, operating in the prediction error domain, is shown to provide a reasonably high level of security. We also demonstrate that an arithmetic coding-based approach can be exploited to efficiently compress the encrypted images. More notably, the proposed compression approach applied to encrypted images is only slightly worse, in terms of compression efficiency, than the state-of-the-art lossless/lossy image coders, which take original, unencrypted images as inputs. In contrast, most of the existing ETC solutions induce a significant penalty on the compression efficiency.
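The idea of encrypting in the prediction error domain can be sketched with a previous-pixel predictor and a key-driven permutation of the error sequence. The paper's actual predictor, clustering, and the arithmetic-coding compression stage are omitted; this only shows the reversible encrypt/decrypt path under those simplifying assumptions.

```python
import random

def prediction_errors(pixels):
    """Map a pixel row to prediction errors with a simple
    previous-pixel predictor (a stand-in for the paper's predictor)."""
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def encrypt(pixels, key):
    """Encrypt by permuting the prediction-error sequence with a
    key-seeded pseudorandom permutation."""
    errors = prediction_errors(pixels)
    idx = list(range(len(errors)))
    random.Random(key).shuffle(idx)
    return [errors[i] for i in idx]

def decrypt(cipher, key):
    """Regenerate the permutation from the key, undo it, then invert
    the predictor to recover the original pixels."""
    idx = list(range(len(cipher)))
    random.Random(key).shuffle(idx)
    errors = [0] * len(cipher)
    for pos, i in enumerate(idx):
        errors[i] = cipher[pos]
    pixels = [errors[0]]
    for e in errors[1:]:
        pixels.append(pixels[-1] + e)
    return pixels
```

Because the cipher is just a permutation of the prediction errors, an error-domain entropy coder can still compress it; that is the property the ETC design exploits.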
TL;DR: A new max-ratio relay selection policy is proposed to optimize secrecy transmission by considering all possible source-to-relay and relay-to-destination links and selecting the relay with the link that maximizes the signal-to-eavesdropper channel gain ratio.
Abstract: This paper considers the security of transmission in buffer-aided decode-and-forward cooperative wireless networks. An eavesdropper that can intercept the data transmission from both the source and relay nodes is considered to threaten the security of transmission. Finite-size data buffers are assumed to be available at every relay in order to avoid having to select concurrently the best source-to-relay and relay-to-destination links. A new max-ratio relay selection policy is proposed to optimize the secrecy transmission by considering all possible source-to-relay and relay-to-destination links and selecting the relay with the link that maximizes the signal-to-eavesdropper channel gain ratio. Two cases are considered in terms of knowledge of the eavesdropper channel strengths: exact and average gains, respectively. Closed-form expressions for the secrecy outage probability for both cases are obtained, which are verified by simulations. The proposed max-ratio relay selection scheme is shown to outperform one based on a max-min-ratio relay scheme.
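The max-ratio selection rule can be sketched directly: for each relay, consider the source-to-relay link when its buffer has room and the relay-to-destination link when it holds data, then pick the link with the largest legitimate-to-eavesdropper gain ratio. The field names and flat dict layout are assumptions for illustration, not the paper's notation.

```python
def max_ratio_select(relays):
    """Return (relay name, link, ratio) for the available link with
    the largest ratio of legitimate channel gain to the corresponding
    eavesdropper channel gain. Each relay dict carries (hypothetical
    fields): g_sr/g_se for the S->R link and its interception, g_rd/g_re
    for the R->D link and its interception, plus buffer occupancy."""
    best = None
    for r in relays:
        links = []
        if r["queue"] < r["buffer_size"]:     # room to receive from source
            links.append(("S->R", r["g_sr"] / r["g_se"]))
        if r["queue"] > 0:                    # packets ready to forward
            links.append(("R->D", r["g_rd"] / r["g_re"]))
        for link, ratio in links:
            if best is None or ratio > best[2]:
                best = (r["name"], link, ratio)
    return best
```

Because both directions of every relay compete in one pool, the policy never has to pick the best S->R and R->D links simultaneously, which is exactly what the buffers make possible.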
TL;DR: Large-scale experiments on simulated and real forgeries show that the proposed technique largely improves upon the current state of the art, and that it can be applied with success to a wide range of practical situations.
Abstract: Graphics editing programs of the last generation provide ever more powerful tools, which allow for the retouching of digital images leaving little or no traces of tampering. The reliable detection of image forgeries requires, therefore, a battery of complementary tools that exploit different image properties. Techniques based on the photo-response non-uniformity (PRNU) noise are among the most valuable such tools, since they do not detect the inserted object but rather the absence of the camera PRNU, a sort of camera fingerprint, dealing successfully with forgeries that elude most other detection strategies. In this paper, we propose a new approach to detect image forgeries using sensor pattern noise. Casting the problem in terms of Bayesian estimation, we use a suitable Markov random field prior to model the strong spatial dependences of the source, and take decisions jointly on the whole image rather than individually for each pixel. Modern convex optimization techniques are then adopted to achieve a globally optimal solution and the PRNU estimation is improved by resorting to nonlocal denoising. Large-scale experiments on simulated and real forgeries show that the proposed technique largely improves upon the current state of the art, and that it can be applied with success to a wide range of practical situations.
TL;DR: It is shown that SybilBelief is able to accurately identify Sybil nodes with low false positive rates and low false negative rates, and is resilient to noise in the authors' prior knowledge about known benign andSybil nodes.
Abstract: Sybil attacks are a fundamental threat to the security of distributed systems. Recently, there has been a growing interest in leveraging social networks to mitigate Sybil attacks. However, the existing approaches suffer from one or more drawbacks, including bootstrapping from either only known benign or known Sybil nodes, failing to tolerate noise in their prior knowledge about known benign or Sybil nodes, and not being scalable. In this paper, we aim to overcome these drawbacks. Toward this goal, we introduce SybilBelief, a semi-supervised learning framework, to detect Sybil nodes. SybilBelief takes a social network of the nodes in the system, a small set of known benign nodes, and, optionally, a small set of known Sybils as input. Then, SybilBelief propagates the label information from the known benign and/or Sybil nodes to the remaining nodes in the system. We evaluate SybilBelief using both synthetic and real-world social network topologies. We show that SybilBelief is able to accurately identify Sybil nodes with low false positive rates and low false negative rates. SybilBelief is resilient to noise in our prior knowledge about known benign and Sybil nodes. Moreover, SybilBelief performs orders of magnitude better than existing Sybil classification mechanisms and significantly better than existing Sybil ranking mechanisms.
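How label information flows from the seed nodes over the social graph can be sketched with a simple averaging propagation. SybilBelief itself models the graph as a pairwise Markov random field solved with loopy belief propagation; the version below is a simplified stand-in, with the damping factor and iteration count chosen arbitrarily.

```python
def propagate_labels(edges, benign_seeds, sybil_seeds, iters=20, damp=0.9):
    """Toy label propagation: seeds hold fixed scores (+1 benign,
    -1 Sybil); every other node repeatedly takes a damped average of
    its neighbors' scores. Positive final scores lean benign,
    negative lean Sybil."""
    nodes = {n for e in edges for n in e}
    nbrs = {n: [] for n in nodes}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    score = {n: 0.0 for n in nodes}
    for n in benign_seeds:
        score[n] = 1.0
    for n in sybil_seeds:
        score[n] = -1.0
    for _ in range(iters):
        new = dict(score)
        for n in nodes:
            if n in benign_seeds or n in sybil_seeds:
                continue  # seed labels stay pinned
            new[n] = damp * sum(score[m] for m in nbrs[n]) / len(nbrs[n])
        score = new
    return score
```

On a small path graph, the node adjacent to the benign seed ends up positive and the node adjacent to the Sybil seed ends up negative, which is the qualitative behavior the framework relies on.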
TL;DR: A novel algorithm based on an image descriptor and a nonlinear classification method detects abnormal events in the current frame, following a learning period characterizing the normal behavior of the training frames.
Abstract: The aim of this paper is to detect abnormal events in video streams, a challenging but important subject in video surveillance. We propose a novel algorithm to address this problem. The algorithm is based on an image descriptor and a nonlinear classification method. We introduce the histogram of optical flow orientation as a descriptor encoding the motion information of each video frame. After a learning period characterizing the normal behavior of the training frames, a nonlinear one-class support vector machine classification algorithm detects abnormal events in the current frame. Further, a fast version of the detection algorithm is designed by fusing the optical flow computation with a background subtraction step. We finally apply the method to detect abnormal events on several benchmark data sets, and show promising results.
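The frame descriptor can be sketched as a magnitude-weighted histogram over optical flow orientations; the bin count, weighting scheme, and static-pixel threshold below are illustrative assumptions rather than the paper's exact settings.

```python
from math import atan2, pi, hypot

def flow_orientation_histogram(flow, bins=8, min_mag=1e-3):
    """Per-frame descriptor: each optical flow vector (dx, dy) votes
    into an orientation bin, weighted by its magnitude; near-static
    pixels are ignored. The result is normalized so frames of
    different sizes are comparable."""
    hist = [0.0] * bins
    for dx, dy in flow:
        mag = hypot(dx, dy)
        if mag < min_mag:
            continue  # skip background / static pixels
        b = int((atan2(dy, dx) + pi) / (2 * pi) * bins) % bins
        hist[b] += mag
    total = sum(hist) or 1.0
    return [h / total for h in hist]
```

In the full method, these per-frame histograms are the inputs on which a one-class SVM is trained over normal frames; a frame whose histogram falls outside the learned region is flagged as abnormal.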
TL;DR: This paper presents a novel lens detection algorithm that can be used to reduce the effect of contact lenses and outperforms other lens detection algorithms on the two databases and shows improved iris recognition performance.
Abstract: The presence of a contact lens, particularly a textured cosmetic lens, poses a challenge to iris recognition as it obfuscates the natural iris patterns. The main contribution of this paper is to present an in-depth analysis of the effect of contact lenses on iris recognition. Two databases, namely, the IIIT-D Iris Contact Lens database and the ND-Contact Lens database, are prepared to analyze the variations caused by contact lenses. We also present a novel lens detection algorithm that can be used to reduce the effect of contact lenses. The proposed approach outperforms other lens detection algorithms on the two databases and shows improved iris recognition performance.
TL;DR: Experimental results based on the Southampton multibiometric tunnel database show that the use of soft biometric traits is able to improve the performance of face recognition based on sparse representation, in both real and ideal scenarios, by means of adaptive fusion rules.
Abstract: Soft biometric information extracted from a human body (e.g., height, gender, skin color, hair color, and so on) is ancillary information easily distinguished at a distance, but it is not fully distinctive by itself in recognition tasks. However, this soft information can be explicitly fused with biometric recognition systems to improve the overall recognition when confronting high-variability conditions. One significant example is visual surveillance, where face images are usually captured in poor quality conditions with high variability and automatic face recognition systems do not work properly. In this scenario, soft biometric information can provide very valuable cues for person recognition. This paper presents an experimental study of the benefits of soft biometric labels as ancillary information, based on the description of human physical features, to improve challenging person recognition scenarios at a distance. In addition, we analyze the available soft biometric information in scenarios of varying distance between camera and subject. Experimental results based on the Southampton multibiometric tunnel database show that the use of soft biometric traits is able to improve the performance of face recognition based on sparse representation, in both real and ideal scenarios, by means of adaptive fusion rules.
TL;DR: This paper proposes a novel CP-ABE scheme with constant-size decryption keys independent of the number of attributes, which can be as small as 672 bits, and is the only CP-ABE with expressive access structures suitable for CP-ABE key storage in lightweight devices.
Abstract: Lightweight devices, such as radio frequency identification tags, have a limited storage capacity, which has become a bottleneck for many applications, especially security applications. Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic tool, in which the encryptor can decide the access structure that will be used to protect the sensitive data. However, current CP-ABE schemes suffer from long decryption keys, whose size is linear in the number of attributes. This drawback prevents the practical use of lightweight devices as storage for the decryption keys of CP-ABE users. In this paper, we provide an affirmative answer to this long-standing issue, which makes CP-ABE very practical. We propose a novel CP-ABE scheme with constant-size decryption keys independent of the number of attributes; the key size can be as small as 672 bits. In comparison with other schemes in the literature, the proposed scheme is the only CP-ABE with expressive access structures that is suitable for CP-ABE key storage in lightweight devices.
TL;DR: An improved mutual foreground LBP method is presented to achieve better matching performance for contactless palm vein recognition, using the normalized gradient-based maximal principal curvature algorithm and the k-means method for texture extraction, and adopting the matched pixel ratio to determine the best matching region (BMR).
Abstract: The local binary pattern (LBP) is popular for texture representation owing to its discrimination ability and computational efficiency, but when used to describe the sparse texture in palm vein images, its discrimination ability is diluted, leading to lower performance, especially for contactless palm vein matching. In this paper, an improved mutual foreground LBP method is presented to achieve better matching performance for contactless palm vein recognition. First, the normalized gradient-based maximal principal curvature algorithm and the k-means method are utilized for texture extraction, which can effectively suppress noise and improve accuracy and robustness. Then, an LBP matching strategy is adopted for similarity measurement on the basis of the extracted palm veins and their neighborhoods, which include the vast majority of useful distinctive information for identification while eliminating interference by excluding the background. To further improve the LBP performance, the matched pixel ratio is adopted to determine the best matching region (BMR). Finally, the matching score obtained in the process of finding the BMR is fused, at the score level, with the results of LBP matching to further improve the identification performance. A series of rigorous contrast experiments using the palm vein data set in the CASIA multispectral palmprint image database were conducted. The obtained low equal error rate (0.267%) and comparisons with state-of-the-art approaches demonstrate that our method is feasible and effective for contactless palm vein recognition.
TL;DR: This paper proposes a novel facial feature extraction method named Gabor ordinal measures (GOM), which integrates the distinctiveness of Gabor features and the robustness of Ordinal measures as a promising solution to jointly handle inter-person similarity and intra-person variations in face images.
Abstract: Great progress has been achieved in face recognition in the last three decades. However, it is still challenging to characterize the identity-related features in face images. This paper proposes a novel facial feature extraction method named Gabor ordinal measures (GOM), which integrates the distinctiveness of Gabor features and the robustness of ordinal measures as a promising solution to jointly handle inter-person similarity and intra-person variations in face images. In the proposed method, different kinds of ordinal measures are derived from the magnitude, phase, real, and imaginary components of Gabor images, respectively, and are then jointly encoded as visual primitives in local regions. The statistical distributions of these visual primitives in face image blocks are concatenated into a feature vector, and linear discriminant analysis is further used to obtain a compact and discriminative feature representation. Finally, a two-stage cascade learning method and a greedy block selection method are used to train a strong classifier for face recognition. Extensive experiments on publicly available face image databases, such as FERET, AR, and the large-scale FRGC v2.0, demonstrate the state-of-the-art face recognition performance of GOM.
TL;DR: A new analytical framework is developed to characterize the average secrecy capacity as the principal security performance metric, and the performance gap between N and N+1 eavesdropper antennas is examined based on their respective secrecy array gains.
Abstract: This paper investigates the physical layer security of maximal ratio combining (MRC) in wiretap two-wave with diffuse power fading channels. In such a wiretap channel, we consider that confidential messages transmitted from a single-antenna transmitter to an M-antenna receiver are overheard by an N-antenna eavesdropper. The receiver adopts MRC to maximize the probability of secure transmission, whereas the eavesdropper adopts MRC to maximize the probability of successful eavesdropping. We derive the secrecy performance for two practical scenarios: 1) the eavesdropper's channel state information (CSI) is available at the transmitter and 2) the eavesdropper's CSI is not available at the transmitter. For the first scenario, we develop a new analytical framework to characterize the average secrecy capacity as the principal security performance metric. Specifically, we derive new closed-form expressions for the exact and asymptotic average secrecy capacity. Based on these, we determine the high signal-to-noise ratio power offset to explicitly quantify the impacts of the main channel and the eavesdropper's channel on the average secrecy capacity. For the second scenario, the secrecy outage probability is the primary security performance metric. Here, we derive new closed-form expressions for the exact and asymptotic secrecy outage probability. We also derive the probability of nonzero secrecy capacity. The asymptotic secrecy outage probability explicitly indicates that the positive impact of M is reflected in the secrecy diversity order and the negative impact of N is reflected in the secrecy array gain. Motivated by this, we examine the performance gap between N and N+1 antennas based on their respective secrecy array gains.
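For orientation, the textbook wiretap-channel quantities underlying both scenarios can be written as follows; these are standard definitions, not the paper's closed-form results, with \(\gamma_M\) and \(\gamma_E\) denoting the post-MRC SNRs at the legitimate receiver and the eavesdropper:

```latex
C_s = \left[\log_2\!\left(1+\gamma_M\right) - \log_2\!\left(1+\gamma_E\right)\right]^{+},
\qquad
\bar{C}_s = \mathbb{E}\!\left[C_s\right],
\qquad
P_{\mathrm{out}}(R_s) = \Pr\!\left(C_s < R_s\right),
```

where \([x]^{+} = \max(x, 0)\), \(\bar{C}_s\) is the average secrecy capacity used in the first scenario, \(P_{\mathrm{out}}(R_s)\) is the secrecy outage probability at target secrecy rate \(R_s\) used in the second, and MRC gives \(\gamma_M = \sum_{m=1}^{M} \gamma_m\) over the receiver's branches (likewise for the eavesdropper's N branches).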
TL;DR: A five-step cost assignment scheme capable of better resisting steganalysis equipped with high-dimensional rich model features is proposed, along with rules for ranking the priority profile for spatial images.
Abstract: Relating the embedding cost in a distortion function to statistical detectability is a vital open problem in modern steganography. In this paper, we take one step forward by decomposing the process of cost assignment into two phases: 1) determining a priority profile and 2) specifying a cost-value distribution. We analytically show that the cost-value distribution determines the change rate of cover elements. Furthermore, when the cost-values are specified to follow a uniform distribution, the change rate has a linear relation with the payload, which is a rare property for content-adaptive steganography. In addition, we propose some rules for ranking the priority profile for spatial images. Following such rules, we propose a five-step cost assignment scheme. Previous steganographic schemes, such as HUGO, WOW, S-UNIWARD, and MG, can be integrated into our scheme. Experimental results demonstrate that the proposed scheme is capable of better resisting steganalysis equipped with high-dimensional rich model features.
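The two-phase view can be made concrete with a toy sketch: phase 1 ranks cover elements by some priority score, and phase 2 maps the ranks onto a chosen cost-value distribution, here the uniform one the paper singles out. All names and the cost range below are illustrative assumptions, not the paper's five-step scheme.

```python
# Toy two-phase cost assignment: rank by priority, then map ranks onto
# uniformly spaced cost values (higher priority = cheaper to change).

def assign_costs(priorities, low=1.0, high=10.0):
    """Map priority ranks to uniformly spaced cost values in [low, high]."""
    order = sorted(range(len(priorities)), key=lambda i: -priorities[i])
    n = len(priorities)
    costs = [0.0] * n
    for rank, idx in enumerate(order):
        # the highest-priority element receives the lowest embedding cost
        costs[idx] = low + (high - low) * rank / max(n - 1, 1)
    return costs
```

Separating the two phases makes it possible to keep a scheme's priority profile (e.g. from HUGO or WOW) while swapping in a different cost-value distribution.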
TL;DR: A scalable certificateless remote authentication protocol with anonymity and forward security for WBANs is devised; it not only provides mutual authentication, session key establishment, anonymity, unlinkability, and nonrepudiation, but also achieves forward security, key escrow resilience, and scalability.
Abstract: Existing anonymous remote authentication protocols to secure wireless body area networks (WBANs) raise challenges such as eliminating the need for distributing clients’ account information to the application providers and achieving forward security. This paper efficiently addresses these challenges by devising a scalable certificateless remote authentication protocol with anonymity and forward security for WBANs. Different from the previous protocols in this field, our protocol not only provides mutual authentication, session key establishment, anonymity, unlinkability, and nonrepudiation, but also achieves forward security, key escrow resilience, and scalability. Performance evaluation demonstrates that, compared with the most efficient ID-based remote anonymous authentication protocol, our protocol reduces the overall running time and communication overhead by at least 52.6% and 17.6%, respectively; compared with an up-to-date certificateless remote authentication protocol with anonymity, the reductions in computation cost and communication overhead reach at least 73.8% and 55.8%, respectively.
TL;DR: A novel scheme of data hiding directly in the encrypted version of an H.264/AVC video stream is proposed; it is more efficient than decryption followed by data hiding and re-encryption, and the video file size is strictly preserved even after encryption and data embedding.
Abstract: Digital video sometimes needs to be stored and processed in an encrypted format to maintain security and privacy. For the purpose of content notation and/or tampering detection, it is necessary to perform data hiding in these encrypted videos. In this way, data hiding in the encrypted domain without decryption preserves the confidentiality of the content. In addition, it is more efficient than decryption followed by data hiding and re-encryption. In this paper, a novel scheme for hiding data directly in the encrypted version of an H.264/AVC video stream is proposed, which consists of three parts: H.264/AVC video encryption, data embedding, and data extraction. By analyzing the properties of the H.264/AVC codec, the codewords of intraprediction modes, the codewords of motion vector differences, and the codewords of residual coefficients are encrypted with stream ciphers. Then, a data hider can embed additional data in the encrypted domain using a codeword substitution technique, without knowing the original video content. To suit different application scenarios, data extraction can be performed either in the encrypted domain or in the decrypted domain. Furthermore, the video file size is strictly preserved even after encryption and data embedding. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
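The two properties the scheme relies on, length-preserving stream-cipher encryption and bit embedding by substituting between same-length codewords, can be illustrated with a toy example. The three-bit codeword pair below is made up for illustration and is not an H.264/AVC code table.

```python
# Toy illustration: (1) XOR with a keystream preserves codeword length, and
# (2) a hider can embed a bit by substituting one encrypted codeword for a
# same-length alternative without ever seeing the plaintext video.

def xor_encrypt(bits, keystream):
    """Length-preserving stream-cipher encryption of a codeword bit list."""
    return [b ^ k for b, k in zip(bits, keystream)]

# hypothetical code-space: two equal-length codewords paired for embedding
PAIR = ([0, 1, 0], [0, 1, 1])   # member 0 carries hidden bit 0, member 1 bit 1

def embed_bit(hidden_bit, keystream):
    """Substitute in the encrypted codeword of the pair member selected by
    hidden_bit; the stream length (and thus file size) is unchanged."""
    return xor_encrypt(PAIR[hidden_bit], keystream)

def extract_bit(enc_codeword, keystream):
    """Extraction in the decrypted domain: decrypt, then look up the pair."""
    return PAIR.index(xor_encrypt(enc_codeword, keystream))
```

Because XOR encryption commutes with the substitution, the same lookup could equally be done in the encrypted domain by comparing against the encrypted pair members.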
TL;DR: A new attack scenario is discovered, called the sequential attack, which assumes that substations/transmission lines can be removed sequentially rather than synchronously, and which can discover many combinations of substations whose failures cause large blackouts.
Abstract: Modern society increasingly relies on electrical service, which also brings risks of catastrophic consequences, e.g., large-scale blackouts. In the current literature, researchers reveal the vulnerability of power grids under the assumption that substations/transmission lines are removed or attacked synchronously. In reality, however, it is highly possible that such removals are conducted sequentially. Motivated by this, we discover a new attack scenario, called the sequential attack, which assumes that substations/transmission lines can be removed sequentially rather than synchronously. In particular, we find that the sequential attack can discover many combinations of substations whose failures cause large blackouts; such combinations are overlooked by the synchronous attack. In addition, we propose a new model, called the sequential attack graph (SAG), and a practical attack strategy based on SAG. In simulations, we adopt three test benchmarks and five comparison schemes. Simulation results and complexity analysis show that the proposed scheme has strong performance and low complexity.
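The sequential viewpoint can be sketched on a toy grid, under the simplifying assumption that a bus is served only while it remains connected to a single generator bus (no power-flow model, no SAG): after each removal the served set is recomputed, so a sequential attacker observes the intermediate blackout sizes that a synchronous attack never sees.

```python
# Toy sequential-removal sketch on an undirected grid graph given as an
# adjacency dict {bus: [neighbour buses]}.

def served(adj, removed, source):
    """Buses still reachable from the generator bus after the removals."""
    if source in removed:
        return set()
    seen, stack = {source}, [source]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in removed and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def blackout_trajectory(adj, removals, source):
    """Blackout size observed after each removal, in attack order."""
    removed, sizes = set(), []
    for r in removals:
        removed.add(r)  # sequential: one removal per step, then re-evaluate
        sizes.append(len(adj) - len(served(adj, removed, source)))
    return sizes
```

A synchronous attack would only see the final entry of the trajectory; the intermediate entries are the extra information a sequential attacker can exploit when choosing the next target.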
TL;DR: This paper maximizes the secrecy throughput of the PU by designing and optimizing the beamforming, the rate parameters of the wiretap code adopted by the PU, and the power allocation between the information signal and the artificial noise of the SU, subject to the secrecy outage constraint at the PU and a throughput constraint at the SU.
Abstract: This paper studies secure multiple-antenna transmission in slow fading channels for the cognitive radio network, where a multiple-input, single-output, multieavesdropper (MISOME) primary network coexists with a multiple-input single-output secondary user (SU) pair. The SU can obtain the transmission opportunity to deliver its own data traffic by providing a secrecy guarantee for the primary user (PU) with artificial noise. Different from existing works, which adopt the instantaneous secrecy rate as the performance metric, with only the statistical channel state information (CSI) of the eavesdroppers we maximize the secrecy throughput of the PU by designing and optimizing the beamforming, the rate parameters of the wiretap code adopted by the PU, and the power allocation between the information signal and the artificial noise of the SU, subject to the secrecy outage constraint at the PU and a throughput constraint at the SU. We propose two design strategies: 1) a nonadaptive secure transmission strategy (NASTS) and 2) an adaptive secure transmission strategy, based on the statistical and instantaneous CSIs of the primary and secondary links, respectively. For both strategies, the exact rate parameters can be optimized through numerical methods. Moreover, we derive an explicit approximation of the optimal rate parameters of the NASTS in the high-SNR regime. Numerical results illustrate the efficiency of the proposed schemes.
TL;DR: Experimental results on six public data sets demonstrate that the proposed method outperforms the state-of-the-art algorithms.
Abstract: This paper proposes a novel offline text-independent writer identification method based on scale invariant feature transform (SIFT), composed of training, enrollment, and identification stages. In all stages, an isotropic LoG filter is first used to segment the handwriting image into word regions (WRs). Then, the SIFT descriptors (SDs) of WRs and the corresponding scales and orientations (SOs) are extracted. In the training stage, an SD codebook is constructed by clustering the SDs of training samples. In the enrollment stage, the SDs of the input handwriting are adopted to form an SD signature (SDS) by looking up the SD codebook and the SOs are utilized to generate a scale and orientation histogram (SOH). In the identification stage, the SDS and SOH of the input handwriting are extracted and matched with the enrolled ones for identification. Experimental results on six public data sets (including three English data sets, one Chinese data set, and two hybrid-language data sets) demonstrate that the proposed method outperforms the state-of-the-art algorithms.
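The enrollment-stage signature computation can be sketched as follows, assuming a pre-built codebook and ignoring the SIFT extraction, SOH, and matching stages; a toy nearest-centroid quantizer stands in for the real 128-dimensional descriptor space.

```python
# Toy sketch of the SD-signature (SDS) step: quantize each descriptor to its
# nearest codebook word, then normalize the word-occurrence histogram.

def nearest(codebook, desc):
    """Index of the codebook centroid closest to the descriptor (squared L2)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], desc)))

def sd_signature(codebook, descriptors):
    """Normalized occurrence histogram of codebook words: a toy stand-in
    for the writer's SDS."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest(codebook, d)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]
```

Identification would then compare the query handwriting's signature against each enrolled signature (together with the SOH, which this sketch omits).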
TL;DR: This paper proposes an approach to examine the vulnerability of a specific type of complex network, the power system, against cascading failure threats by adopting a model called extended betweenness, which combines network structure with electrical characteristics to define the load of power grid components.
Abstract: The security of complex networks has drawn significant concern recently. While purely topological analyses provide some effective techniques from a network security perspective, their inability to characterize the underlying physical principles calls for a more comprehensive model that better approximates the failure behavior of a real complex network. In this paper, based on an extended topological metric, we propose an approach to examine the vulnerability of a specific type of complex network, the power system, against cascading failure threats. The proposed approach adopts a model called extended betweenness that combines network structure with electrical characteristics to define the load of power grid components. Using this power transfer distribution factor-based model, we simulate attacks on different components (buses and branches) in the grid and evaluate the vulnerability of the system components with an extended topological cascading failure simulator. The influence of different loading and overloading situations on cascading failures is also evaluated by testing different tolerance factors. Simulation results on the standard IEEE 118-bus test system reveal the vulnerability of network components, which is then validated on a dc power flow simulator with comparisons to other topological measurements. Finally, potential extensions of the approach are discussed to exhibit both its utility and its challenges in more complex scenarios and applications.
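The tolerance-factor cascade loop follows a familiar pattern: each component's capacity is its initial load scaled by (1 + tolerance), and overloaded components are removed repeatedly until the system settles. In the hedged sketch below, `load_fn` is a placeholder for the paper's extended-betweenness/PTDF load model, which is not reproduced.

```python
# Minimal cascading-failure loop with a tolerance factor; `load_fn` maps a
# set of alive components to a {component: load} dict and stands in for the
# extended-betweenness load model.

def cascade(components, load_fn, tolerance, initial_failure):
    """Return surviving components after the cascade triggered by one failure."""
    alive = set(components) - {initial_failure}
    init = load_fn(set(components))
    capacity = {c: (1 + tolerance) * init[c] for c in components}
    changed = True
    while changed:
        loads = load_fn(alive)                      # redistribute load
        overloaded = {c for c in alive if loads[c] > capacity[c]}
        changed = bool(overloaded)
        alive -= overloaded                         # overloads fail in turn
    return alive
```

Sweeping the tolerance factor on such a loop is what reveals how much spare capacity the grid needs before a single failure stops propagating.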
TL;DR: Zhang et al. propose a novel image set-based collaborative representation and classification method for ISFR by modeling the query set as a convex or regularized hull and representing this hull collaboratively over all the gallery sets.
Abstract: With the rapid development of digital imaging and communication technologies, image set-based face recognition (ISFR) is becoming increasingly important. One key issue of ISFR is how to effectively and efficiently represent the query face image set using the gallery face image sets. The set-to-set distance-based methods ignore the relationship between gallery sets, whereas representing the query set images individually over the gallery sets ignores the correlation between query set images. In this paper, we propose a novel image set-based collaborative representation and classification method for ISFR. By modeling the query set as a convex or regularized hull, we represent this hull collaboratively over all the gallery sets. With the resulting representation coefficients, the distance between the query set and each gallery set can then be calculated for classification. The proposed model naturally and effectively extends image-based collaborative representation to an image set-based one, and our extensive experiments on benchmark ISFR databases show the superiority of the proposed method to state-of-the-art ISFR methods under different set sizes, in terms of both recognition rate and efficiency.
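The final classification step can be sketched with toy vectors: a point of the query hull is a convex combination of the query images, and the query set takes the label of the gallery set whose reconstruction leaves the smallest residual. The hull optimisation and the collaborative coding over all gallery sets are not reproduced here; the inputs below are illustrative assumptions.

```python
# Toy sketch of hull-based classification: convex combination of query
# images, then nearest-residual assignment over per-gallery reconstructions.

def hull_point(query_images, weights):
    """Convex combination of query-set images (weights assumed >= 0, sum 1)."""
    dim = len(query_images[0])
    return [sum(w * img[d] for w, img in zip(weights, query_images))
            for d in range(dim)]

def residual(point, reconstruction):
    """Euclidean distance between the hull point and a gallery reconstruction."""
    return sum((p - r) ** 2 for p, r in zip(point, reconstruction)) ** 0.5

def classify(point, gallery_recons):
    """Label of the gallery set whose reconstruction is nearest the hull point."""
    return min(gallery_recons, key=lambda lbl: residual(point, gallery_recons[lbl]))
```

In the actual method the reconstructions come from one joint representation over all gallery sets, which is what lets the gallery classes compete for the query hull.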