Bio: Qiaosong Chen is an academic researcher from Chongqing University of Posts and Telecommunications. The author has contributed to research in topics: Convolutional neural network & Feature extraction. The author has an h-index of 4, co-authored 18 publications receiving 57 citations.
TL;DR: An undulatory locomotion model of C. elegans that achieves chemotaxis behaviors based on the biological neuronal and neuromuscular structure is presented; quantitative analyses verify the realism and effectiveness of the model, which could serve as a prototype for other limbless animals.
Abstract: This paper provides an undulatory locomotion model of C. elegans that achieves chemotaxis behaviors based on the biological neuronal and neuromuscular structure. The on-cell and off-cell mechanism, as well as the proprioceptive mechanism, are incorporated into the locomotion model. The nervous system of C. elegans is modeled by a dynamic neural network (DNN) that involves two parts: a head DNN and motor neurons. The head DNN perceives the outside concentrations and generates the undulatory wave for the body. The motor neurons are responsible for transmitting the undulatory wave along the body. The body of C. elegans is represented as a multi-joint rigid link model with 11 links. The undulatory locomotion behavior is achieved by using the DNN to control the lengths of the muscles on the ventral and dorsal sides, and then using the muscle lengths to control the angles between consecutive links. In this work, the relations between the DNN outputs and the muscle lengths, as well as between the muscle lengths and the angles between consecutive links, are determined. Furthermore, owing to the learning capability of the DNN, a set of nonlinear functions designed to represent the chemotaxis behaviors of C. elegans is learned by the head DNN. The testing results show good performance of the locomotion model for the chemotaxis behaviors of finding food and avoiding toxin, as well as slight and Ω turns. Finally, quantitative analyses comparing with experimental results are provided to verify the realism and effectiveness of the locomotion model, which could serve as a prototype for other limbless animals.
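The mapping from ventral/dorsal muscle lengths to the angle between two consecutive links can be caricatured in a few lines. The `atan2` mapping and the body half-width parameter `h` below are illustrative assumptions for a minimal sketch, not the paper's actual determined relations:

```python
import math

def joint_angle(l_ventral, l_dorsal, h=1.0):
    """Bending angle at a joint from the muscle lengths on either side.
    Hypothetical mapping: the angle grows with the ventral/dorsal
    length difference, scaled by the body half-width h."""
    return math.atan2(l_dorsal - l_ventral, 2.0 * h)

def body_posture(ventral, dorsal):
    """Accumulate joint angles along an 11-link body (10 joints) to get
    each link's absolute orientation relative to the head."""
    angles = [joint_angle(v, d) for v, d in zip(ventral, dorsal)]
    orientations = []
    total = 0.0
    for a in angles:
        total += a
        orientations.append(total)
    return orientations

# Equal muscle lengths on both sides -> straight body (all angles zero).
straight = body_posture([1.0] * 10, [1.0] * 10)
```

A longer dorsal muscle than ventral one bends the joint one way, and vice versa; a traveling difference pattern along the ten joints yields the undulatory wave.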
01 Oct 2017
TL;DR: A model named Multi-Scale Fusion Convolutional Neural Network (MSF-CNN) is proposed to train the face detector; it outperforms previous methods on several well-known face detection benchmark datasets.
Abstract: Many methods have been proposed to solve the problem of face detection by computer. Due to variations in background, illumination, pose and facial expression, machine face detection is a complex problem. Recently, deep learning approaches have achieved impressive performance on face detection. In this paper, a model named Multi-Scale Fusion Convolutional Neural Network (MSF-CNN) is proposed to train the face detector. The model is trained as a Convolutional Neural Network, and detection is based on the sliding-window structure of the Viola & Jones detector. In particular, in the feature extraction process, we adopt a multi-scale feature fusion design with convolution kernels of different sizes. The results are as follows: first, the fused multi-scale features are richer than single-scale features, and the classification accuracy is higher. Second, we reduce model complexity compared with existing cascaded CNN methods. Third, we achieve end-to-end learning, in contrast to the separate training of cascaded models. The proposed model outperforms previous methods on several well-known face detection benchmark datasets.
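The multi-scale fusion idea, filtering the same input with kernels of different sizes and concatenating the responses, can be sketched in one dimension. This is an illustrative reduction of the MSF-CNN design, not its actual architecture; the correlation below equals convolution for the symmetric kernels used here:

```python
def conv1d_same(x, kernel):
    """1-D filtering with zero padding so the output length equals the
    input length (correlation; identical to convolution for the
    symmetric kernels used here)."""
    k = len(kernel)
    pad = k // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(xp[i + j] * kernel[j] for j in range(k))
            for i in range(len(x))]

def multi_scale_fuse(x, kernels):
    """Apply kernels of different sizes to the same input and stack the
    responses channel-wise -- the fusion idea, reduced to 1-D."""
    return [conv1d_same(x, k) for k in kernels]

x = [0.0, 1.0, 2.0, 1.0, 0.0]
# A 1-tap identity kernel and a 3-tap smoothing kernel, fused.
fused = multi_scale_fuse(x, [[1.0], [0.25, 0.5, 0.25]])
```

In the real network each scale is a learned 2-D convolution and the concatenated channels feed the next layer; the point is that each output position sees the input at several receptive-field sizes at once.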
01 Oct 2014
TL;DR: Experimental results show that the proposed multi-feature fusion and sparse coding based framework for image retrieval is much more effective than state-of-the-art methods, not only on a traditional image dataset but also on a dataset with varying imaging conditions.
Abstract: In traditional image retrieval techniques, query results are severely affected when images vary in illumination and scale or suffer from occlusion and corruption. To solve this problem, this paper proposes a novel multi-feature fusion and sparse coding based framework for image retrieval. In the framework, the inherent features of an image are first extracted, and a dictionary learning method is then used to construct dictionary features from them. Finally, the framework introduces a sparse representation model to measure the similarity between two images. The merit is that a feature descriptor is coded as a sparse linear combination with respect to the dictionary features, achieving efficient feature representation and a robust similarity measure. To validate the framework, two groups of experiments were conducted, on the Corel-1000 image dataset and on a database built from the StirMark benchmark. Experimental results show that the proposed framework is much more effective than state-of-the-art methods, not only on the traditional image dataset but also on the dataset with varying imaging conditions.
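Coding a descriptor as a sparse linear combination of dictionary atoms can be illustrated with a minimal greedy matching-pursuit sketch. This is a generic stand-in, assuming unit-norm atoms; the paper's exact sparse representation model and solver may differ:

```python
def matching_pursuit(x, dictionary, n_iters=2):
    """Greedy sparse coding: approximate x as a sparse combination of
    dictionary atoms. Each iteration picks the atom most correlated
    with the current residual (atoms assumed unit-norm)."""
    residual = list(x)
    coeffs = [0.0] * len(dictionary)
    for _ in range(n_iters):
        scores = [sum(r * a for r, a in zip(residual, atom))
                  for atom in dictionary]
        best = max(range(len(dictionary)), key=lambda i: abs(scores[i]))
        coeffs[best] += scores[best]
        residual = [r - scores[best] * a
                    for r, a in zip(residual, dictionary[best])]
    return coeffs, residual

# With an orthonormal dictionary the code recovers x exactly.
coeffs, residual = matching_pursuit([3.0, 4.0],
                                    [[1.0, 0.0], [0.0, 1.0]])
```

For retrieval, the residual norm (how well the dictionary of one image reconstructs the descriptors of another) can then serve as the similarity measure.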
03 Aug 2016
TL;DR: A rapid tone mapping system and method based on multi-scale Gauss filters is proposed, which is suitable for mobile phones.
Abstract: The invention relates to a rapid tone mapping system and method based on multi-scale Gauss filters. The system comprises a multi-scale decomposition module, a rough layer module, a detail layer module, a fusion module, a chroma processing module, a gamma correction module and a terminal display module. The multi-scale Gauss filters are used to decompose high-dynamic-range images into rough images and detail images; the fusion module combines the rough images with the detail images linearly to form new low-dynamic-range images; the chroma processing module compensates chroma information; and the gamma correction module uses gamma correction to compensate in advance for the nonlinear relation between the input and output signals of a display system. According to the invention, high-dynamic-range images can be effectively compressed, image information is effectively preserved, the algorithm is efficient and fast, and the tone mapping system and method are suitable for mobile phones.
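The rough-plus-detail decomposition at the heart of the pipeline can be sketched in one dimension: blur to get the rough (base) layer, subtract to get the detail layer, compress the base and boost the detail, then recombine linearly. The single-scale Gaussian and the gain values below are illustrative stand-ins for the patent's multi-scale 2-D filters:

```python
import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]  # normalized to sum to 1

def blur(x, sigma):
    """Gaussian filtering with edge replication (a 1-D stand-in for the
    patent's 2-D multi-scale Gauss filters)."""
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, radius)
    n = len(x)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - radius, 0), n - 1)
            acc += w * x[idx]
        out.append(acc)
    return out

def tone_map(hdr, sigma=1.0, detail_gain=1.2, base_compress=0.5):
    """Split into rough (base) and detail layers, compress the base,
    boost the detail, and recombine linearly (gains are illustrative)."""
    base = blur(hdr, sigma)
    detail = [h - b for h, b in zip(hdr, base)]
    return [base_compress * b + detail_gain * d
            for b, d in zip(base, detail)]
```

Compressing only the base layer shrinks the dynamic range while the boosted detail layer preserves local contrast, which is why the image information survives the compression.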
01 Oct 2018
TL;DR: A Region-based Fully Convolutional Networks (R-FCN) based deep face detection framework is proposed; it outperforms most previous methods, especially in handling heavy occlusion, part deformation and complex perspectives.
Abstract: Recent years have witnessed great improvements in region-based face detection systems. However, variations in occlusion, scale, illumination, pose and facial expression still make face detection in the wild a challenge. In this paper, a Region-based Fully Convolutional Networks (R-FCN) based deep face detection framework is proposed. Several new techniques are utilized in our framework, including Deformable Convolutional Networks (DCN), Feature Pyramid Networks (FPN) and Focal Loss. Experimental results on three common challenging face detection benchmarks, FDDB, AFW and WIDER FACE, show that the proposed approach is robust and outperforms most previous methods, especially in handling heavy occlusion, part deformation and complex perspectives.
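Of the techniques listed, Focal Loss is the easiest to show concretely: it down-weights the loss of easy, well-classified examples so training focuses on hard ones. Below is the standard binary form (Lin et al.), a minimal sketch rather than the framework's actual training code:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss. p is the predicted probability of the
    positive class, y the 0/1 label. The (1 - p_t)^gamma factor
    shrinks the loss of confident correct predictions; alpha balances
    the positive/negative classes."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With gamma = 0 and alpha = 1 this reduces to plain cross-entropy; raising gamma increasingly mutes the abundant easy negatives that dominate dense face detection.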
TL;DR: The final experimental results indicate that MR-CNN is superior at detecting small traffic signs and achieves state-of-the-art performance.
Abstract: Small traffic sign recognition is a challenging problem in computer vision, and its accuracy is important to the safety of intelligent transportation systems (ITS). In this paper, we propose the multi-scale region-based convolutional neural network (MR-CNN). At the detection stage, MR-CNN uses a multi-scale deconvolution operation to up-sample the features of the deeper convolution layers and concatenates them to those of the shallow layer to construct the fused feature map. The fused feature map generates fewer region proposals and achieves higher recall. At the classification stage, we leverage multi-scale contextual regions to exploit the information surrounding a given object proposal and construct the fused feature for the fully connected layers. The fused feature map inside the region proposal network (RPN) focuses primarily on improving the image resolution and semantic information for small traffic sign detection, while outside the RPN, the fused feature enhances the feature representation by leveraging contextual information. Finally, we evaluated MR-CNN on Tsinghua-Tencent 100K, the largest dataset suitable for our problem and more challenging than the GTSDB and GTSRB datasets. The experimental results indicate that MR-CNN is superior at detecting small traffic signs and achieves state-of-the-art performance.
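The deep-to-shallow fusion step, up-sampling a deep feature map to the shallow layer's resolution and concatenating the two, can be sketched in one dimension. Nearest-neighbour up-sampling here is an illustrative stand-in for MR-CNN's learned deconvolution:

```python
def upsample2x(feat):
    """Nearest-neighbour 2x up-sampling of a 1-D feature map (a
    stand-in for a learned deconvolution/transposed convolution)."""
    out = []
    for v in feat:
        out.extend([v, v])
    return out

def fuse(shallow, deep):
    """Up-sample the deep feature to the shallow resolution and stack
    the two channel-wise, giving the fused feature map."""
    up = upsample2x(deep)
    assert len(up) == len(shallow), "resolutions must match after up-sampling"
    return [shallow, up]  # two channels at the shallow resolution
```

The fused map keeps the shallow layer's spatial resolution (good for small objects) while carrying the deep layer's semantics, which is what lets the RPN emit fewer, better proposals.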
TL;DR: An improved Single Shot Detector (SSD) algorithm via multi-feature fusion and enhancement, named MF-SSD, is proposed for traffic sign recognition; it achieves higher detection accuracy, better efficiency, and better robustness in complex traffic environments.
Abstract: Road traffic sign detection and recognition play an important role in advanced driver assistance systems (ADAS) by providing real-time road sign perception information. In this paper, we propose an improved Single Shot Detector (SSD) algorithm via multi-feature fusion and enhancement, named MF-SSD, for traffic sign recognition. First, low-level features are fused into high-level features to improve the detection of small targets in the SSD. We then enhance the features in different channels, amplifying effective channel features and suppressing invalid ones. The algorithm achieves good results on domestic traffic signs in real time. The proposed MF-SSD algorithm is evaluated on the German Traffic Sign Recognition Benchmark (GTSRB) dataset. The experimental results show that the MF-SSD algorithm has advantages in detecting small traffic signs. Compared with existing methods, it achieves higher detection accuracy, better efficiency, and better robustness in complex traffic environments.
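The channel enhancement step, amplifying effective channels and suppressing invalid ones, resembles a squeeze-and-excitation-style gating. The sketch below is a generic stand-in for MF-SSD's mechanism; in the real network the per-channel weights come from a small learned subnetwork rather than being given directly:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def channel_reweight(channels, weights):
    """Scale each channel's feature map by a gate in (0, 1): channels
    with large positive weights pass almost unchanged, channels with
    large negative weights are suppressed toward zero."""
    gates = [sigmoid(w) for w in weights]
    return [[g * v for v in ch] for g, ch in zip(gates, channels)]

# One "effective" and one "invalid" channel, gated open and shut.
out = channel_reweight([[2.0, 2.0], [2.0, 2.0]], [10.0, -10.0])
```

Because the gate is multiplicative and differentiable, the weighting subnetwork can be trained end-to-end together with the detector.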
TL;DR: The development and analysis of a model of forward locomotion that integrates the neuroanatomy, neurophysiology and body mechanics of the worm revealed that head motoneurons SMD and RMD are sufficient to drive dorsoventral undulations in the head and neck, and that short-range, posteriorly directed proprioceptive feedback is sufficient to propagate the wave along the rest of the body.
Abstract: With 302 neurons and a near-complete reconstruction of the neural and muscle anatomy at the cellular level, Caenorhabditis elegans is an ideal candidate organism to study the neuromechanical basis ...
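The proprioceptive wave-propagation finding reduces to a simple caricature: each body segment follows the bending of the segment just ahead of it with a short delay, so a head oscillation travels tailward on its own. The delayed-copy model below is an illustrative simplification, not the paper's biomechanical model:

```python
def body_angles(head_wave, n_segments, delay, t):
    """Bending angle of each segment at time step t, assuming segment i
    reproduces the head signal delayed by i * delay steps via
    short-range, posteriorly directed proprioceptive coupling
    (zero before the wave has arrived)."""
    out = []
    for i in range(n_segments):
        tau = t - i * delay
        out.append(head_wave[tau] if 0 <= tau < len(head_wave) else 0.0)
    return out

# A head ramp 0,1,2,3 appears progressively later down a 3-segment body.
snapshot = body_angles([0.0, 1.0, 2.0, 3.0], n_segments=3, delay=1, t=2)
```

Only the head needs a pattern generator in this picture; the rest of the body is a delay line, which is the essence of the claim that short-range feedback suffices to propagate the wave.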
01 Jan 1999
TL;DR: Recent progress on both optogenetic techniques for imaging and manipulating neural activity and neuromechanical modeling in the nematode worm Caenorhabditis elegans is reviewed.
Abstract: Brain, body and environment are in continuous dynamical interaction, and it is becoming increasingly clear that an animal's behavior must be understood as a product not only of its nervous system, but also of the ongoing feedback of this neural activity through the biomechanics of its body and the ecology of its environment. Modeling has an essential integrative role to play in such an understanding. But successful whole-animal modeling requires an animal for which detailed behavioral, biomechanical and neural information is available and a modeling methodology which can gracefully cope with the constantly changing balance of known and unknown biological constraints. Here we review recent progress on both optogenetic techniques for imaging and manipulating neural activity and neuromechanical modeling in the nematode worm Caenorhabditis elegans. This work demonstrates both the feasibility and challenges of whole-animal modeling.