scispace - formally typeset
Author

Zhuhong Shao

Other affiliations: Southeast University
Bio: Zhuhong Shao is an academic researcher from Capital Normal University. The author has contributed to research in the topics of encryption and computer science. The author has an h-index of 14 and has co-authored 33 publications receiving 621 citations. Previous affiliations of Zhuhong Shao include Southeast University.

Papers
Journal ArticleDOI
TL;DR: A new approach to predict Beck Depression Inventory II (BDI-II) values from video data is proposed based on deep networks, designed in a two-stream manner to capture both facial appearance and dynamics.
Abstract: As a severe psychiatric disorder, depression is a state of low mood and aversion to activity that prevents a person from functioning normally in both work and daily life. The study of automated mental health assessment has received increasing attention in recent years. In this paper, we study the problem of automatic diagnosis of depression. A new approach to predict Beck Depression Inventory II (BDI-II) values from video data is proposed based on deep networks. The proposed framework is designed in a two-stream manner, aiming at capturing both facial appearance and dynamics. Further, we employ joint tuning layers that can implicitly integrate the appearance and dynamic information. Experiments are conducted on two depression databases, AVEC2013 and AVEC2014. The experimental results show that our proposed approach significantly improves depression prediction performance compared to other visual-based approaches.
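As a rough illustration of the two-stream idea, the following minimal NumPy sketch fuses a hypothetical appearance feature vector and a dynamics feature vector through a joint layer before a single regression output. All dimensions, weights and names here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def stream(x, w):
    # One hypothetical stream: a linear layer + ReLU standing in for a deep CNN.
    return np.maximum(w @ x, 0.0)

# Hypothetical inputs: appearance features from a still frame,
# dynamics features from a stack of adjacent frames.
x_app = rng.standard_normal(512)
x_dyn = rng.standard_normal(512)
w_app = rng.standard_normal((64, 512)) * 0.05
w_dyn = rng.standard_normal((64, 512)) * 0.05
w_joint = rng.standard_normal((1, 128)) * 0.05   # joint tuning layer

# Each stream is encoded separately; the joint layer then mixes both
# before the single regression output (the predicted BDI-II score).
fused = np.concatenate([stream(x_app, w_app), stream(x_dyn, w_dyn)])
bdi_pred = float(w_joint @ fused)
```

In the paper the joint tuning layers are trained so the fusion is learned rather than fixed; the sketch only shows where in the forward pass the two streams meet.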

145 citations

Journal ArticleDOI
TL;DR: A robust watermarking scheme based on orthogonal Fourier-Mellin moments and a chaotic map is introduced, which achieves copyright authentication for two images simultaneously and is more robust than previous schemes.

90 citations

Journal ArticleDOI
TL;DR: Experimental results show that quaternion Bessel-Fourier moments lead to better performance for color image reconstruction than other quaternion orthogonal moments such as quaternion Zernike moments, quaternion pseudo-Zernike moments and quaternion orthogonal Fourier-Mellin moments.

73 citations

Journal ArticleDOI
TL;DR: Experimental results show that better watermark robustness and imperceptibility are achieved by adjusting the fractional orders in the FrKT.
Abstract: This paper proposes a novel fractional transform, denoted the fractional Krawtchouk transform (FrKT), a generalization of the Krawtchouk transform. The derivation of the FrKT uses the eigenvalue decomposition method. We determine the eigenvalues and the corresponding multiplicities of the Krawtchouk transform matrix. Moreover, the orthonormal eigenvectors of the transform matrix are derived. For validation purposes and as a first illustration of the interest of the FrKT, a watermarking example was chosen. Experimental results show that better watermark robustness and imperceptibility are achieved by adjusting the fractional orders in the FrKT.
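The eigendecomposition recipe described in the abstract is the standard way to fractionalize an orthogonal transform: diagonalize the transform matrix and raise its eigenvalues to the fractional order. A minimal sketch, using the unitary DFT matrix purely as a self-contained stand-in for the Krawtchouk transform matrix:

```python
import numpy as np

def fractional_power(T, a):
    # T^a = V diag(lam^a) V^{-1}, from the eigendecomposition T = V diag(lam) V^{-1}.
    lam, V = np.linalg.eig(T)
    return V @ np.diag(lam.astype(complex) ** a) @ np.linalg.inv(V)

# Stand-in unitary transform matrix (the paper uses the Krawtchouk matrix instead).
N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

F_half = fractional_power(F, 0.5)

# Sanity checks: order 1 recovers the original transform, and orders compose
# additively, i.e. applying order 0.5 twice equals applying order 1 once.
ok_identity = np.allclose(fractional_power(F, 1.0), F)
ok_additive = np.allclose(F_half @ F_half, F)
```

The additivity property is what makes the fractional order a usable watermarking key: only a receiver who knows the exact order can invert the transform by applying order -a.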

66 citations

Journal ArticleDOI
TL;DR: Wang et al. proposed a quaternion principal component analysis network (QPCANet) for color image classification, which takes into account the spatial distribution information of the RGB channels in color images.

66 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
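The mail-filtering scenario in the fourth category can be made concrete with a toy learned filter. The sketch below is a minimal naive Bayes scorer over hypothetical "rejected"/"kept" messages; the data and word model are invented for illustration and stand in for whatever features a real system would use.

```python
import math
from collections import Counter

# Toy training data: (message words, user rejected it?). Entirely hypothetical.
mail = [
    ("win money now".split(), True),
    ("cheap money offer".split(), True),
    ("meeting agenda attached".split(), False),
    ("project meeting tomorrow".split(), False),
]

spam_words = Counter(w for words, rejected in mail if rejected for w in words)
ham_words = Counter(w for words, rejected in mail if not rejected for w in words)
n_spam = sum(spam_words.values())
n_ham = sum(ham_words.values())

def spam_score(words):
    # Log-likelihood ratio with add-one smoothing; positive means spam-like.
    score = 0.0
    for w in words:
        p_spam = (spam_words[w] + 1) / (n_spam + 2)
        p_ham = (ham_words[w] + 1) / (n_ham + 2)
        score += math.log(p_spam / p_ham)
    return score
```

Each time the user rejects or keeps a message, the counts are updated, so the filter's rules are maintained automatically rather than hand-programmed.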

13,246 citations

Reference EntryDOI
15 Oct 2004

2,118 citations

Journal ArticleDOI
TL;DR: A novel four-image encryption scheme based on the quaternion Fresnel transforms (QFST), computer generated hologram and the two-dimensional Logistic-adjusted-Sine map (LASM) is presented and the validity of the proposed image encryption technique is demonstrated.
Abstract: A novel four-image encryption scheme based on quaternion Fresnel transforms (QFST), computer-generated holography and the two-dimensional (2D) Logistic-adjusted-Sine map (LASM) is presented. To treat the four images in a holistic manner, two types of the quaternion Fresnel transform (QFST) are defined and the corresponding calculation method for a quaternion matrix is derived. In the proposed method, the four original images, represented by quaternion algebra, are first processed holistically in a vector manner using QFST. Then the input complex amplitude, constructed from the components of the QFST-transformed plaintext images, is encoded by the Fresnel transform with two virtual independent random phase masks (RPMs). To avoid sending the entire RPMs to the receiver for decryption, the RPMs are generated using 2D-LASM, which dramatically reduces the amount of key data. Subsequently, the encrypted computer-generated hologram is fabricated using Burch's method and phase-shifting interferometry. To improve security and weaken correlation, the encrypted hologram is scrambled based on 2D-LASM. Experiments demonstrate the validity of the proposed image encryption technique.
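For readers unfamiliar with random-phase-mask encryption, the sketch below shows classic double random phase encoding in the scalar Fourier domain. This is a heavily simplified stand-in for the paper's scheme: it omits the quaternion algebra, the Fresnel transform, the LASM-generated masks and the hologram encoding, and uses a seeded pseudorandom generator in place of the chaotic-map key.

```python
import numpy as np

rng = np.random.default_rng(42)          # the seed plays the role of the shared key
img = rng.random((32, 32))               # stand-in grayscale plaintext image

# Two unit-modulus random phase masks (the paper derives these from 2D-LASM).
m1 = np.exp(2j * np.pi * rng.random(img.shape))
m2 = np.exp(2j * np.pi * rng.random(img.shape))

def encrypt(plain):
    # Mask in the spatial domain, then mask again in the Fourier domain.
    return np.fft.ifft2(np.fft.fft2(plain * m1) * m2)

def decrypt(cipher):
    # Each step is unitary, so conjugate masks undo it exactly.
    return np.real(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) / m1)

recovered = decrypt(encrypt(img))
```

Because both masking steps are unitary, decryption is exact for anyone holding the masks, while the ciphertext looks like stationary noise to anyone without them; generating the masks from a chaotic map, as in the paper, shrinks the key to a few map parameters instead of two full-size arrays.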

212 citations

Journal ArticleDOI
TL;DR: A robust blind watermarking technique based on block-based DCT coefficient modification is proposed; it has a higher degree of robustness against various singular and hybrid attacks, and a watermark of good quality is extracted even after various simultaneous attacks.

186 citations

Journal ArticleDOI
TL;DR: Experiments demonstrate that the practical fault-tolerance results of previous robust steganography methods are consistent with the theoretical derivations, which provides theoretical support for coding parameter selection and message extraction integrity in robust steganography based on "Compression-resistant Domain Constructing + RS-STC Codes".

177 citations