Patent

Human face age estimation method based on fusion of deep characteristics and shallow characteristics

TL;DR: Zhang et al. as discussed by the authors proposed a human face age estimation method based on the fusion of deep characteristics and shallow characteristics, in which each human face sample image is preprocessed and the multi-level age characteristics extracted by convolutional neural networks are output as the deep characteristics.
Abstract: The invention discloses a human face age estimation method based on the fusion of deep characteristics and shallow characteristics. The method comprises the following steps: preprocessing each human face sample image in a human face sample dataset; training a constructed initial convolutional neural network, and selecting a convolutional neural network used for human face recognition; utilizing a human face dataset with an age tag value to carry out fine tuning processing on the selected convolutional neural network, and obtaining a plurality of convolutional neural networks used for age estimation; carrying out extraction to obtain multi-level age characteristics corresponding to the human face, and outputting the multi-level age characteristics as the deep characteristics; extracting the HOG (Histogram of Oriented Gradients) characteristic and the LBP (Local Binary Pattern) characteristic of each human face image as the shallow characteristics; constructing a deep belief network to carry out fusion of the deep characteristics and the shallow characteristics; and according to the fused characteristics in the deep belief network, carrying out age regression estimation of the human face image to obtain and output an age estimation result. By use of the method, age estimation accuracy is improved, and the method provides a human face image age estimation capability with high accuracy.
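The shallow-feature step above names the classic LBP descriptor. A minimal sketch of the basic 3x3 Local Binary Pattern, computed on a grayscale image stored as a list of lists; the tiny test image, the neighbour ordering, and the function name are illustrative assumptions, not details from the patent:

```python
def lbp_image(img):
    """Return the LBP code for each interior pixel of a 2-D grayscale image."""
    h, w = len(img), len(img[0])
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                # Set this neighbour's bit when it is at least as bright
                # as the center pixel.
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            row.append(code)
        codes.append(row)
    return codes

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(lbp_image(img))  # -> [[120]]: the single interior pixel's 8-bit code
```

In the patent these per-pixel codes would then be histogrammed over image regions to form the shallow feature vector fused with the deep features.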
Citations
Patent
27 Oct 2017
TL;DR: In this article, a deep learning-based image high-density population counting method was proposed, which comprises the following steps of S1, establishing a depth complementation convolutional neural network by utilizing a deep learning framework; S2, performing image data enhancement on an image according to operations of angle rotation, image multi-scale zooming, image mirroring and image pyramid zooming; S3, performing Gaussian kernel blur normalization processing on the enhanced image data to obtain a real crowd density graph, outputting an estimated density graph and the real density graph by the network, and performing continuous iterative training optimization on the whole network structure according to a loss function.
Abstract: The invention discloses a deep learning-based image high-density population counting method. The method comprises the following steps of S1, establishing a depth complementation convolutional neural network by utilizing the deep learning framework Caffe; S2, performing image data enhancement on an image according to operations of angle rotation, image multi-scale zooming, image mirroring and image pyramid zooming; S3, performing Gaussian kernel blur normalization processing on the enhanced image data to obtain a real crowd density graph, outputting an estimated density graph and the real density graph by the network, and performing continuous iterative training optimization on the whole network structure according to a loss function; and S4, inputting a crowd picture and a tag picture to the network for training, and performing continuous iterative optimization to obtain a trained network model finally. According to the method, an end-to-end convolutional neural network is designed; a picture is given and input, and the estimated density graph corresponding to the picture is output, so that an estimated crowd number is obtained; and by outputting the density graph, more useful information is retained.
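Step S3 above builds the "real crowd density graph" by placing a Gaussian kernel at each annotated head position, normalized so the map integrates to the true head count. A hedged stdlib-only sketch; the grid size, sigma, point coordinates, and function name are assumptions for illustration:

```python
import math

def density_map(points, h, w, sigma=1.0):
    """Sum one unit-mass Gaussian per annotated head position (y, x)."""
    dmap = [[0.0] * w for _ in range(h)]
    for (py, px) in points:
        # Evaluate the Gaussian over the whole grid, then normalize it to
        # mass 1 so the finished map sums to the number of heads.
        kernel = [[math.exp(-((y - py) ** 2 + (x - px) ** 2) / (2 * sigma ** 2))
                   for x in range(w)] for y in range(h)]
        total = sum(sum(row) for row in kernel)
        for y in range(h):
            for x in range(w):
                dmap[y][x] += kernel[y][x] / total
    return dmap

dm = density_map([(2, 2), (5, 5)], 8, 8)
count = sum(sum(row) for row in dm)
print(round(count, 6))  # -> 2.0: the map sums to the annotated head count
```

This is why regressing the density map, rather than a single count, "retains more useful information": summing the predicted map recovers the crowd number while the map itself keeps spatial layout.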

19 citations

Patent
06 Mar 2018
TL;DR: In this article, a feature fusion coefficient learnable image semantic segmentation method is proposed, which mainly comprises the following steps: to begin with, training a deep convolution network classification model from image to category label in an image classification data set; converting a full connection layer type in the classification model into a convolutional layer type to obtain a full convolution deep neural network model for category prediction at the pixel level; expanding convolutional layer branches, and setting a coefficient for each branch, feature fusion layers being fused according to coefficient proportion, and the coefficient being set in a learnable state.
Abstract: The invention relates to a feature fusion coefficient learnable image semantic segmentation method. The method mainly comprises the following steps: to begin with, training a deep convolution network classification model from image to category label in an image classification data set; converting a full connection layer type in the classification model into a convolutional layer type to obtain a full convolution deep neural network model for category prediction at the pixel level; expanding convolutional layer branches, and setting a coefficient for each branch, feature fusion layers being fused according to coefficient proportion, and the coefficient being set in a learnable state; then, carrying out fine-tuning training in an image semantic segmentation data set, and meanwhile, carrying out coefficient learning to obtain a semantic segmentation model; carrying out fine-tuning training and fusion coefficient learning to obtain 1-20 groups of fusion coefficients; and finally, selecting the branch whose coefficient is largest from each group, carrying out final combination, and carrying out fine-tuning training and coefficient learning again to obtain a final semantic segmentation model. The method enables the feature fusion effect to reach its best state.
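The fusion rule described above, branch features combined "according to coefficient proportion", reduces to a coefficient-weighted sum. A small sketch with feature maps flattened to vectors; in the patent the coefficients are learned during fine-tuning, whereas the fixed numbers here are purely illustrative:

```python
def fuse_branches(branches, coeffs):
    """Fuse per-branch feature vectors as a coefficient-weighted sum."""
    fused = [0.0] * len(branches[0])
    for feat, c in zip(branches, coeffs):
        # Scale each branch by its (learnable) coefficient, then accumulate.
        for i, v in enumerate(feat):
            fused[i] += c * v
    return fused

branch_a = [1.0, 2.0, 3.0]   # e.g. a shallow, high-resolution branch
branch_b = [4.0, 5.0, 6.0]   # e.g. a deep, semantic branch
fused = fuse_branches([branch_a, branch_b], [0.8, 0.2])
print(fused)
```

Making the coefficients trainable parameters (instead of fixed weights) is what lets the network decide, per fusion layer, how much each branch should contribute; the final model keeps only the largest-coefficient branch from each group.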

9 citations

Patent
08 May 2018
TL;DR: In this paper, a CNN model is used to extract deep features from multiple shallow features; the extracted features are serially fused, a next layer is connected in a fully connected mode, image features are output and classified, and the parameters of the CNN model are updated by combining the classification result with a label layer.
Abstract: The embodiment of the invention discloses a CNN model training method and device and a face recognition method and device. In the scheme, after a CNN model is used to extract deep features from multiple shallow features, firstly the multiple extracted features are serially fused, further a next layer is connected in a full-connection mode, image features are outputted and classified, and the parameters of the CNN model are updated with the combination of a classification result and a label layer. In the scheme, since the full-connection mode is used to connect the next layer for the serially fused features, the effect of redundant features is reduced, features conducive to classification are extracted, and thus the redundancy of the extracted features is reduced. According to the CNN model, the end-to-end training between features and labels can be realized, effective features can be automatically extracted, the effect of the redundant features is weakened, the CNN model obtained by training can be used in face recognition, and the face recognition effect can be improved.

7 citations

Patent
24 Apr 2018
TL;DR: In this article, a face recognition method based on fusion of multiple frames of face features in a video is presented. The method is suited to multi-dynamic video acquisition environments and can effectively improve face recognition accuracy.
Abstract: The invention discloses a face recognition method and a face recognition device based on fusion of multiple frames of face features in a video. The face recognition method comprises the following steps: acquiring n frames of face images to be recognized in a monitoring video, wherein n is greater than or equal to 1; selecting m frames of face images from the n frames of face images, and performing feature extraction on the m frames of face images to generate feature vectors {fi} in one-to-one correspondence with the m frames of face images, wherein i is 1, 2, ..., m, and m is greater than or equal to 1 and smaller than or equal to n; fusing the m feature vectors {fi} into a feature vector r, and comparing the feature vector r with face features in a database so as to recognize a face identity in the monitoring video. By the face recognition method provided by the invention, multiple frames of face images in the monitoring video are detected, feature extraction is performed thereon, and the extracted multiple face features are fused into one face feature for recognition, so that not only is the number of feature comparisons reduced, but also the influence of face angle deflection, motion blur, backlight and the like on face image feature extraction is reduced; the face recognition method and the face recognition device are applied to a multi-dynamic video acquisition environment and can effectively improve the face recognition accuracy.
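The abstract does not fix the rule used to fuse the m per-frame vectors {fi} into the single vector r; an element-wise mean followed by L2 normalization is one common, assumed choice, sketched here:

```python
import math

def fuse_frames(feature_vectors):
    """Fuse m per-frame feature vectors into one: mean, then L2-normalize."""
    m, d = len(feature_vectors), len(feature_vectors[0])
    # Element-wise average across the m frames.
    r = [sum(f[i] for f in feature_vectors) / m for i in range(d)]
    # Normalize so r can be compared to database features by cosine distance.
    norm = math.sqrt(sum(v * v for v in r))
    return [v / norm for v in r] if norm else r

r = fuse_frames([[1.0, 0.0], [0.0, 1.0]])
print(r)  # a unit-length vector between the two frame features
```

Averaging is what yields the claimed benefits: one database comparison instead of m, and per-frame noise (blur, pose, backlight) partially cancels in the mean.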

7 citations

Patent
16 Jan 2018
TL;DR: In this article, an age estimation method based on deep learning is proposed, which comprises steps of (1) constructing an age database, (2) performing pre-processing on images of the constructed age database and (3) performing unification and normalization on sizes of aligned images to obtain images with a size being 64*64.
Abstract: The invention discloses an age estimation method based on deep learning. The age estimation method based on deep learning comprises steps of (1) constructing an age database, (2) performing pre-processing on images of the constructed age database, (3) performing unification and normalization on sizes of aligned images to obtain images with a size of 64*64, (4) taking the obtained images and corresponding labels as inputs of a deep model, using a CNN convolution deep network to train an age estimation model, (5) inputting a tested image into the age estimation model to obtain similarity values of the tested image on various kinds of labels, (6) multiplying each label by its obtained similarity value and summing the products to obtain a final age estimation result. The age estimation method based on deep learning can obtain a smaller deep model and is fast in operation time and high in age estimation identification rate. The database comprises massive samples of children and senior people and can effectively identify the age of a special group.
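Step (6) above computes the final age as an expectation over the label distribution: each age label is weighted by its predicted similarity value and the products are summed. The label values and similarity numbers below are made-up illustrative inputs:

```python
def expected_age(labels, similarities):
    """Weighted sum of age labels by their predicted similarity values."""
    return sum(a * s for a, s in zip(labels, similarities))

labels = [20, 25, 30, 35]
similarities = [0.1, 0.6, 0.2, 0.1]  # assumed to sum to 1 (e.g. a softmax)
print(expected_age(labels, similarities))  # weighted sum, approximately 26.5
```

Taking the expectation instead of the arg-max label gives a continuous age estimate and is less sensitive to probability mass split across neighbouring age labels.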

5 citations

References
Patent
31 May 2010
TL;DR: In this paper, the authors presented a system for determining personal characteristics from images by generating a baseline gender model and an age estimation model using one or more convolutional neural networks (CNNs).
Abstract: Systems and methods are disclosed for determining personal characteristics from images by generating a baseline gender model and an age estimation model using one or more convolutional neural networks (CNNs); capturing correspondences of faces by face tracking, and applying incremental learning to the CNNs and enforcing correspondence constraint such that CNN outputs are consistent and stable for one person.

114 citations

Patent
08 Apr 2015
TL;DR: In this paper, an age classification method for face images is proposed, which consists of manually annotating an age type of all the sample images and presetting standard face graphs; aligning and adjusting the key points of the image or image to be detected; and then performing age classification on the face contour graphs of the images to be extracted by a classification model.
Abstract: The invention discloses an age classification method for face images. The age classification method comprises the following steps: collecting sample images and acquiring images to be detected; then manually annotating an age type of all the sample images and presetting standard face graphs; comparing face feature key points of the sample images or the images to be detected with face feature key points of the standard face graphs; aligning and adjusting the key points of the sample images or the images to be detected, carrying out contour extraction on the adjusted sample images or the images to be detected to obtain face contour graphs of the sample images or the images to be detected, and finally performing age classification on the face contour graphs of the images to be detected by a classification model to obtain the age type of the images to be detected. The age type annotation is carried out by a manual and machine matching mode; the learning precision of a convolutional neural network is improved; furthermore, the sample images are processed by key point alignment and contour extraction, so that obtained training data are more uniform images; therefore the precision of a training model is improved, and the age classification is more accurate.

36 citations

Patent
20 Apr 2016
TL;DR: In this article, a multi-mode-characteristic-fusion-based remote-sensing image classification method is proposed, where characteristics of at least two modes are extracted; the obtained characteristics of the modes are inputted into an RBM model to carry out fusion to obtain a combined expression of characteristics of different modes; and according to the combined expression, type estimation is carried out on each super-pixel area, thereby realizing remote sensing image classification.
Abstract: The invention, which belongs to the technical field of remote-sensing image classification, relates to a multi-mode-characteristic-fusion-based remote-sensing image classification method. Characteristics of at least two modes are extracted; the obtained characteristics of the modes are inputted into an RBM model to carry out fusion to obtain a combined expression of characteristics of the modes; and according to the combined expression, type estimation is carried out on each super-pixel area, thereby realizing remote-sensing image classification. According to the invention, the characteristics, including a superficial-layer mode characteristic and a deep-layer mode characteristic, of various modes are combined by the RBM model to obtain the corresponding combined expression, wherein the combined expression not only includes the layer expression of the remote-sensing image deep-layer mode characteristic but also includes the external visible similarity of the superficial-layer mode characteristic. Therefore, the distinguishing capability is high and the classification precision of remote-sensing images is improved.

30 citations

Patent
08 Oct 2014
TL;DR: Zhang et al. as mentioned in this paper proposed an image feature extraction and similarity measurement method for three-dimensional city model retrieval based on images, which has the advantages that the three-layer frame for image feature extractions and similarity measurements is provided, multiple layers of multi-scale convolutional neural network models with spatial constraints are designed in the frame, and distinguishable features with invariable displacement, scales and deformation are obtained.
Abstract: The invention relates to an image feature extraction and similarity measurement method used for three-dimensional city model retrieval. Features extracted through most image and three-dimensional model retrieval methods lack or ignore description of model details, and accordingly, the three-dimensional model retrieval precision is not high. The invention provides a three-dimensional city model retrieval frame based on images. Firstly, retrieval targets on the images are obtained through segmentation; meanwhile, a light field is used for conducting two-dimensional conversion on three-dimensional city models; features of query targets and features of the retrieval model images are extracted; finally, the similarity between the features is measured through the similarity distance, and three-dimensional city model retrieval is realized. The image feature extraction and similarity measurement method has the advantages that a three-layer frame for image feature extraction and similarity measurement is provided, multiple layers of multi-scale convolutional neural network models with spatial constraints are designed in the frame, and distinguishable features invariant to displacement, scale and deformation are obtained; a novel similarity measurement method is provided, and similarity matching between the targets is better realized. Compared with an existing method, the efficiency and the precision of the method in three-dimensional city model retrieval are greatly improved.

16 citations

Patent
28 Sep 2016
TL;DR: In this paper, an age estimation method based on a multi-output convolution neural network and ordered regression was proposed, where the ordered regression and a deep learning method are combined so that accuracy of age prediction performance is greatly increased.
Abstract: The invention discloses an age estimation method based on a multi-output convolution neural network and ordered regression. The method comprises the following steps of 1, establishing an Asian face age data set (AFAD); 2, establishing training data used for a dichotomy; 3, training a deep convolutional neural network; 4, inputting a test sample into the trained convolutional neural network; and 5, acquiring the age estimation of the test sample. The invention provides a method of ordering the ages. The ordered regression and a deep learning method are combined so that the accuracy of age prediction performance is greatly increased. In an existing age estimation method, characteristic extraction and regression modeling are performed independently and optimization is insufficient. By using the method of the invention, the above problems are solved; the order relation of age labels can be fully used to carry out ordered regression of age estimation; age estimation accuracy is increased; and a large-scale database is established for the age estimation of Asian faces, providing a database basis for face age estimation research. The method can be widely used for age estimation of face images.
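The "dichotomy" training data above corresponds to one binary classifier per age threshold ("is this face older than k?"), and the ordered-regression decode predicts the age as the count of positive answers plus the minimum age. The thresholds, minimum age, and classifier outputs below are illustrative assumptions, not the AFAD setup:

```python
def decode_ordinal(binary_outputs, min_age=0):
    """Ordinal-regression decode: count thresholds the face is judged past.

    Each output is the score of one dichotomy classifier; > 0.5 means
    "older than this threshold". The monotone structure of age labels is
    what the method exploits.
    """
    return min_age + sum(1 for o in binary_outputs if o > 0.5)

# Outputs of K dichotomy classifiers for one test face (assumed values):
outputs = [0.9, 0.9, 0.8, 0.7, 0.4, 0.2, 0.1]
print(decode_ordinal(outputs, min_age=15))  # -> 19: 15 + 4 positive answers
```

Because the K classifiers share the convolutional trunk and are trained jointly, feature extraction and regression are optimized together, which is the deficiency of prior two-stage methods that the abstract points out.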

12 citations