Bio: Xin Deng is an academic researcher from Chongqing University of Posts and Telecommunications. The author has contributed to research on topics including computer science and convolutional neural networks, has an h-index of 5, and has co-authored 19 publications receiving 67 citations. Previous affiliations of Xin Deng include Chongqing University and the National University of Singapore.
TL;DR: The testing results and the comparison with experimental results verify that the proposed chemotaxis behavioral models closely mimic the chemotaxis behaviors of C. elegans in different environments.
Abstract: In this paper, the modeling of several complex chemotaxis behaviors of C. elegans is explored, including food attraction, toxin avoidance, and locomotion speed regulation. We first model the chemotaxis behaviors of food attraction and toxin avoidance separately. Then, an integrated chemotaxis behavioral model is proposed, which performs the two chemotaxis behaviors simultaneously. The novelty and uniqueness of the proposed chemotaxis behavioral models are characterized by several attributes. First, all the chemotaxis behavioral models have a biological basis: they are constructed by extracting the neural wiring diagram from sensory neurons to motor neurons, where the sensory neurons are specific to chemotaxis behaviors. Second, the chemotaxis behavioral models are able to perform turning and speed regulation. Third, the chemotaxis behaviors are characterized by a set of switching logic functions that decide the orientation and speed. All models are implemented using dynamic neural networks (DNNs) and trained using the real-time recurrent learning (RTRL) algorithm. By incorporating a speed regulation mechanism, C. elegans can stop spontaneously when approaching a food source or moving away from a toxin. The testing results and the comparison with experimental results verify that the proposed chemotaxis behavioral models closely mimic the chemotaxis behaviors of C. elegans in different environments.
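The "switching logic functions that decide the orientation and speed" can be illustrated with a minimal sketch. This is not the paper's trained DNN controller: the thresholds, rule ordering, and function name below are assumptions chosen only to show how sensed concentration changes could switch between turning, forward motion, and spontaneous stopping.

```python
# Illustrative sketch (not the paper's trained DNN): a switching-logic
# controller that picks turning and speed from sensed food/toxin gradients.
# Threshold values and rule structure here are assumptions for illustration.

def chemotaxis_step(d_food, d_toxin, stop_eps=0.05):
    """Decide (orientation, speed) from concentration changes along the path.

    d_food  : change in food concentration since the last step
    d_toxin : change in toxin concentration since the last step
    Returns (orientation, speed), where orientation is 'forward' or
    'turn' and speed is in [0, 1].
    """
    # Toxin avoidance dominates: turn away when toxin is increasing.
    if d_toxin > 0:
        return ("turn", 1.0)
    # Food attraction: keep heading while food concentration rises.
    if d_food > stop_eps:
        return ("forward", 1.0)
    # Near-zero gradient: close to the food source (or clear of the
    # toxin), so the speed-regulation rule lets the worm stop.
    if abs(d_food) <= stop_eps:
        return ("forward", 0.0)
    # Food decreasing: pirouette to search for a better direction.
    return ("turn", 0.5)
```

The spontaneous stop near a food source corresponds to the near-zero-gradient branch, mirroring the speed regulation mechanism described above.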
01 Oct 2017
TL;DR: A model named Multi-Scale Fusion Convolutional Neural Network (MSF-CNN) is proposed to train the face detector; it outperforms previous methods on several well-known face detection benchmark datasets.
Abstract: Nowadays, more and more methods have been proposed to solve the face detection problem by computer. Due to variations in background, illumination, pose, and facial expression, machine face detection is a complex problem. Recently, deep learning approaches have achieved impressive performance on face detection. In this paper, a model named Multi-Scale Fusion Convolutional Neural Network (MSF-CNN) is proposed to train the face detector. The model is trained as a convolutional neural network, and detection is based on the sliding-window structure of the Viola-Jones detector. In particular, during feature extraction we adopt a multi-scale feature fusion design with convolution kernels of different scales. The results are as follows. First, the fused multi-scale features are richer than single-scale features, and the classification accuracy is higher. Second, we reduce model complexity compared with existing cascaded CNN methods. Third, we achieve end-to-end learning rather than cascaded separate training. The proposed model outperforms previous methods on several well-known face detection benchmark datasets.
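The multi-scale fusion idea can be sketched in miniature: filter the same input at several kernel sizes and concatenate the responses into one fused feature vector. MSF-CNN uses learned 2-D convolutions; the 1-D box filters and the scale choices below are placeholders for illustration only.

```python
# Minimal sketch of multi-scale feature fusion: apply filters of several
# kernel sizes to the same input and concatenate the responses.
# Box filters stand in for learned convolutions.

def box_filter(signal, k):
    """Same-length moving average with window k (edges use a partial window)."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / k)
    return out

def multi_scale_features(signal, scales=(1, 3, 5)):
    """Concatenate responses at several kernel sizes into one fused vector."""
    fused = []
    for k in scales:
        fused.extend(box_filter(signal, k))
    return fused

feats = multi_scale_features([1.0, 2.0, 3.0, 4.0])
# fused length = len(signal) * number of scales
```

The fused vector exposes both fine (small-kernel) and coarse (large-kernel) structure to the classifier at once, which is the intuition behind the accuracy gain over single-scale features.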
TL;DR: Zhang et al. proposed a DNN model for SSVEP target detection, FB-EEGNet, which fuses the features of multiple neural networks; they designed a multi-label for each sample and optimized the parameters of FB-EEGNet across multiple stimuli to incorporate information from non-target stimuli.
Abstract: Steady-state visual evoked potential (SSVEP) is a prevalent brain-computer interface (BCI) paradigm. Recently, deep neural networks (DNNs) have been employed for SSVEP target recognition. However, current DNN models cannot fully extract information from SSVEP harmonic components and ignore the influence of non-target stimuli. To exploit information from multiple sub-bands and non-target stimulus data, we propose a DNN model for SSVEP target detection, FB-EEGNet, which fuses the features of multiple neural networks. Additionally, we design a multi-label for each sample and optimize the parameters of FB-EEGNet across multiple stimuli to incorporate information from non-target stimuli. Under the subject-specific condition, FB-EEGNet achieves average classification accuracies (information transfer rates, ITRs) of 76.75% (50.70 bits/min) and 89.14% (70.45 bits/min) in a time window of 0.7 s on the public 12-target dataset and our experimental 9-target dataset, respectively. Under the cross-subject condition, FB-EEGNet achieves mean accuracies (ITRs) of 81.72% (67.99 bits/min) and 92.15% (76.12 bits/min) on the public and experimental datasets in a time window of 1 s, respectively. FB-EEGNet outperforms CCNN, EEGNet, CCA, and FBCCA in both subject-dependent and subject-independent SSVEP target recognition. FB-EEGNet can effectively extract information from multiple sub-bands and cross-stimulus targets, providing a promising way to extract deep features for SSVEP using neural networks.
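The accuracy/ITR pairs above are related by the standard Wolpaw ITR formula used throughout the SSVEP-BCI literature. A sketch of that formula follows; note that papers differ in what they count as the selection time (stimulation window alone vs. window plus gaze-shift time), and that detail is not restated here, so the function is illustrative rather than a reproduction of the reported numbers.

```python
import math

def itr_bits_per_min(n_targets, accuracy, selection_s):
    """Standard Wolpaw ITR in bits per minute for an n-class BCI.

    n_targets   : number of selectable targets (e.g. 12)
    accuracy    : classification accuracy P, with 1/n < P <= 1
    selection_s : time per selection in seconds (convention varies:
                  some papers add gaze-shift time to the window)
    """
    p, n = accuracy, n_targets
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_s
```

At chance level (P = 1/n) the formula yields 0 bits/min, and at P = 1 it reduces to log2(n) bits per selection.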
01 Oct 2014
TL;DR: Experimental results show that the proposed multi-feature fusion and sparse coding based framework for image retrieval is considerably more effective than state-of-the-art methods, both on a traditional image dataset and on a dataset with varying imaging conditions.
Abstract: In traditional image retrieval techniques, query results degrade severely when images vary in illumination and scale or suffer occlusion and corrosion. To solve this problem, this paper proposes a novel multi-feature fusion and sparse coding based framework for image retrieval. In the framework, the inherent features of an image are first extracted, and a dictionary learning method is then used to construct dictionary features from them. Finally, the framework introduces a sparse representation model to measure the similarity between two images. The merit is that a feature descriptor is coded as a sparse linear combination over the dictionary features, achieving an efficient feature representation and a robust similarity measure. To validate the framework, we conducted two groups of experiments, on the Corel-1000 image dataset and on a Stirmark-benchmark-based database, respectively. Experimental results show that the proposed framework is considerably more effective than state-of-the-art methods, both on the traditional image dataset and on the dataset with varying imaging conditions.
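The similarity-by-sparse-coding idea can be sketched in a toy form: code a query descriptor over a dictionary of atoms and rank candidates by reconstruction residual. Real systems solve an L1-regularised coding problem over a learned dictionary; the greedy 1-sparse match (single best atom) below is only an illustrative stand-in.

```python
# Toy sketch of sparse-coding similarity: project the query onto its
# best-matching dictionary atom and use the squared reconstruction
# residual as a (dis)similarity score. A full sparse coder would use
# many atoms with an L1 penalty; one atom suffices to show the idea.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def residual_after_best_atom(query, dictionary):
    """Squared norm of query minus its projection on the best atom.

    dictionary : list of unit-norm atom vectors
    Smaller residual = query better explained = more similar.
    """
    best = max(dictionary, key=lambda atom: abs(dot(query, atom)))
    coef = dot(query, best)                       # projection coefficient
    resid = [q - coef * a for q, a in zip(query, best)]
    return dot(resid, resid)
```

A query lying along some atom has residual 0, while a query orthogonal to every atom keeps its full energy, which is why the residual works as a robust similarity score.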
06 Jun 2017
TL;DR: In this paper, a multi-target tracking method based on an optical flow method and Kalman filtering is proposed; it needs neither a trained classifier nor a target template, and marks moving targets more reliably using the clustered optical flow information.
Abstract: The invention discloses a multi-target tracking method based on an optical flow method and Kalman filtering. The method first processes an input video frame using the optical flow method; second, it removes stray points using optical flow clustering; then it accurately acquires the moving targets through morphological dilation and improved median filtering; finally, according to the acquired target information, it processes the subsequent image sequence with the Kalman filtering method and predicts the moving targets so as to realize their effective tracking. The method needs neither a trained classifier nor a target template, and can better mark the moving targets with the clustered optical flow information.
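The Kalman predict-then-update loop at the heart of such a tracker can be sketched for a single scalar coordinate. The constant-velocity model and the noise values below are illustrative assumptions, not the patent's parameters; a real tracker runs one such filter per target on 2-D positions.

```python
# Minimal 1-D constant-velocity Kalman filter: predict each target's next
# position, then correct with the measured position. Noise values q, r
# are illustrative assumptions.

def kf_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Filter a sequence of scalar position measurements.

    State x = [position, velocity]; P is its 2x2 covariance.
    q: process noise variance, r: measurement noise variance.
    Returns the filtered position estimates.
    """
    x = [measurements[0], 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    out = []
    for z in measurements:
        # Predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]].
        x = [x[0] + dt * x[1], x[1]]
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + q
        P = [[p00, p01], [p10, p11]]
        # Update with the measured position z (H = [1, 0]).
        s = P[0][0] + r                        # innovation variance
        k0, k1 = P[0][0] / s, P[1][0] / s      # Kalman gain
        y = z - x[0]                           # innovation
        x = [x[0] + k0 * y, x[1] + k1 * y]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x[0])
    return out
```

On a steadily moving target the filter's velocity estimate converges, which is what lets the tracker predict positions in the subsequent image sequence between detections.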
01 Jan 1996
TL;DR: The final experimental results indicate that MR-CNN is superior at detecting small traffic signs and achieves state-of-the-art performance compared with other methods.
Abstract: Small traffic sign recognition is a challenging problem in computer vision, and its accuracy is important to the safety of intelligent transportation systems (ITS). In this paper, we propose the multi-scale region-based convolutional neural network (MR-CNN). At the detection stage, MR-CNN uses a multi-scale deconvolution operation to up-sample the features of the deeper convolution layers and concatenates them to those of the shallow layer to construct the fused feature map. The fused feature map has the ability to generate fewer region proposals and achieve higher recall values. At the classification stage, we leverage the multi-scale contextual regions to exploit the information surrounding a given object proposal and construct the fused feature for the fully connected layers. The fused feature map inside the region proposal network (RPN) focuses primarily on improving the image resolution and semantic information for small traffic sign detection, while outside the RPN, the fused feature enhances the feature representation by leveraging the contextual information. Finally, we evaluated MR-CNN on the largest dataset, Tsinghua-Tencent 100K, which is suitable for our problem and more challenging than the GTSDB and GTSRB datasets. The final experimental results indicate that the MR-CNN is superior at detecting small traffic signs, and that it achieves the state-of-the-art performance compared with other methods.
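The detection-stage fusion described above (up-sample deep-layer features, concatenate with shallow-layer features) can be sketched in miniature. MR-CNN uses learned deconvolutions on 2-D feature maps; the 1-D maps and nearest-neighbour upsampling below are placeholders for illustration.

```python
# Sketch of the fused feature map: upsample a coarse (deep-layer) map to
# the shallow layer's resolution, then concatenate per position.
# Nearest-neighbour repetition stands in for a learned deconvolution.

def upsample_nearest(coarse, factor):
    """Repeat each value `factor` times (stand-in for deconvolution)."""
    return [v for v in coarse for _ in range(factor)]

def fuse(shallow, deep, factor):
    """Per-position [shallow, upsampled-deep] feature pairs."""
    up = upsample_nearest(deep, factor)
    assert len(up) == len(shallow), "resolutions must match after upsampling"
    return [[s, d] for s, d in zip(shallow, up)]
```

Each fused position carries both the shallow layer's high resolution and the deep layer's semantic response, which is why the fused map yields fewer, higher-recall region proposals for small signs.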
TL;DR: The droplet wets a thin metal trace and generates a force that simultaneously delaminates the trace from the substrate (enhanced by spontaneous electrochemical reactions) while accelerating the droplet along the trace.
Abstract: This paper describes a new method to spontaneously accelerate droplets of liquid metal (eutectic gallium indium, EGaIn) to extremely fast velocities through a liquid medium and along predefined metallic paths. The droplet wets a thin metal trace (a film ∼100 nm thick, ∼ 1 mm wide) and generates a force that simultaneously delaminates the trace from the substrate (enhanced by spontaneous electrochemical reactions) while accelerating the droplet along the trace. The formation of a surface oxide on EGaIn prevents it from moving, but the use of an acidic medium or application of a reducing bias to the trace continuously removes the oxide skin to enable motion. The trace ultimately provides a sacrificial pathway for the metal and provides a mm-scale mimic to the templates used to guide molecular motors found in biology (e.g., actin filaments). The liquid metal can accelerate along linear, curved and U-shaped traces as well as uphill on surfaces inclined by 30 degrees. The droplets can accelerate through a visc...
TL;DR: An improved Single Shot Detector (SSD) algorithm via multi-feature fusion and enhancement, named MF-SSD, is proposed for traffic sign recognition; it achieves higher detection accuracy, better efficiency, and better robustness in complex traffic environments.
Abstract: Road traffic sign detection and recognition play an important role in advanced driver assistance systems (ADAS) by providing real-time road sign perception. In this paper, we propose an improved Single Shot Detector (SSD) algorithm via multi-feature fusion and enhancement, named MF-SSD, for traffic sign recognition. First, low-level features are fused into high-level features to improve the SSD's detection performance on small targets. We then enhance the features in different channels, strengthening effective channel features and suppressing invalid ones. Our algorithm achieves good results on domestic real-time traffic signs. The proposed MF-SSD algorithm is evaluated on the German Traffic Sign Recognition Benchmark (GTSRB) dataset. The experimental results show that the MF-SSD algorithm has advantages in detecting small traffic signs. Compared with existing methods, it achieves higher detection accuracy, better efficiency, and better robustness in complex traffic environments.
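The channel-enhancement step (boost effective channels, suppress invalid ones) can be sketched with a simple gating rule. MF-SSD's enhancement module is learned; the squeeze-and-excitation-style gate and the `bias` constant below are illustrative assumptions standing in for learned parameters.

```python
import math

# Sketch of channel enhancement: scale each channel by a gate computed
# from its global response, so strong channels pass nearly intact and
# weak channels are attenuated. The fixed `bias` is an assumption
# standing in for learned gating parameters.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def enhance_channels(feature_maps, bias=-1.0):
    """feature_maps: list of channels, each a list of activations."""
    out = []
    for ch in feature_maps:
        gate = sigmoid(sum(ch) / len(ch) + bias)  # per-channel weight in (0, 1)
        out.append([v * gate for v in ch])
    return out
```

Because the gate is monotone in the channel's mean response, informative channels dominate the fused feature while near-silent channels contribute little, mirroring the enhance/suppress behaviour described above.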
01 Jan 1999