
Showing papers by "Qiang Yang published in 2014"


Journal ArticleDOI
TL;DR: Inspired by the fog-collecting structure on the Stenocara beetle's back, a photonic-crystal microchip with a hydrophilic-hydrophobic micropattern was fabricated by inkjet printing and used to realize highly sensitive ultratrace detection of fluorescent analytes and fluorophore-based assays.
Abstract: Ultratrace detection attracts great interest because it remains a challenge for early diagnosis and drug testing. Enriching targets from highly diluted solutions onto the sensitive area is a promising method. Inspired by the fog-collecting structure on the Stenocara beetle's back, a photonic-crystal (PC) microchip with a hydrophilic-hydrophobic micropattern was fabricated by inkjet printing. This device was used to realize highly sensitive ultratrace detection of fluorescent analytes and fluorophore-based assays. Coupled with the fluorescence enhancement effect of a PC, detection down to 10^-16 mol L^-1 was achieved. This design can be combined with biophotonic devices for the detection of drugs, diseases, and pollutants in the ecosystem.

257 citations


Journal ArticleDOI
Mingsheng Long, Jianmin Wang, Guiguang Ding, Dou Shen, Qiang Yang
TL;DR: This paper proposes Graph Co-Regularized Transfer Learning (GTL), a general framework into which various matrix factorization models can be incorporated.
Abstract: Transfer learning is established as an effective technology to leverage rich labeled data from some source domain to build an accurate classifier for the target domain. The basic assumption is that the input domains may share certain knowledge structure, which can be encoded into common latent factors and extracted by preserving important properties of the original data, e.g., statistical properties and geometric structure. In this paper, we show that different properties of input data can be complementary to each other and that exploring them simultaneously can make the learning model robust to the domain difference. We propose a general framework, referred to as Graph Co-Regularized Transfer Learning (GTL), where various matrix factorization models can be incorporated. Specifically, GTL aims to extract common latent factors for knowledge transfer by preserving the statistical property across domains, and simultaneously refines the latent factors to alleviate negative transfer by preserving the geometric structure in each domain. Based on the framework, we propose two novel methods using nonnegative matrix factorization (NMF) and nonnegative matrix tri-factorization (NMTF), respectively. Extensive experiments verify that GTL can significantly outperform state-of-the-art learning methods on several public text and image datasets.
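
The abstract describes the framework only at a high level. As a rough illustration (not the paper's NMF/NMTF updates), the following numpy sketch factorizes source and target data with a shared latent factor U, while per-domain graph Laplacians Ls and Lt (assumed precomputed, e.g., from k-NN graphs over each domain's instances) preserve geometric structure:

    import numpy as np

    def gtl_sketch(Xs, Xt, Ls, Lt, k=10, lam=0.1, lr=1e-3, iters=500, seed=0):
        # Shared latent factor U transfers statistical structure across domains;
        # per-domain Laplacian terms tr(V L V^T) preserve geometric structure.
        rng = np.random.default_rng(seed)
        U = rng.random((Xs.shape[0], k))
        Vs = rng.random((k, Xs.shape[1]))
        Vt = rng.random((k, Xt.shape[1]))
        for _ in range(iters):
            Rs, Rt = U @ Vs - Xs, U @ Vt - Xt            # reconstruction residuals
            U = np.maximum(0.0, U - lr * 2 * (Rs @ Vs.T + Rt @ Vt.T))
            Rs, Rt = U @ Vs - Xs, U @ Vt - Xt            # refresh after updating U
            Vs = np.maximum(0.0, Vs - lr * 2 * (U.T @ Rs + lam * Vs @ Ls))
            Vt = np.maximum(0.0, Vt - lr * 2 * (U.T @ Rt + lam * Vt @ Lt))
        return U, Vs, Vt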

161 citations


Journal ArticleDOI
TL;DR: A novel algorithmic framework, called TRAMP, learns action models with limited training data in a target domain by transferring as much of the available information as possible from other domains to help the learning task, assuming action models in the source domains can be transferred to the target domain.

78 citations


Proceedings ArticleDOI
03 Nov 2014
TL;DR: A Hierarchical Music Emotion Recognition model (HMER) is proposed -- a novel hierarchical Bayesian model using sentence-level music and lyrics features that captures music emotion dynamics with a song-segment-sentence hierarchical structure.
Abstract: Music emotion recognition, which aims to automatically recognize the affective content of a piece of music, has become one of the key components of music searching, exploring, and social networking applications. Although researchers have paid increasing attention to music emotion recognition, recognition performance has reached a bottleneck in recent years. One major reason is that experts' labels for music emotion are mostly song-level, while music emotion usually varies within a song. Traditional methods have considered each song as a single instance and have built models on song-level features. However, they ignore the dynamics of music emotion and fail to capture accurate emotion-feature correlations. In this paper, we model music emotion recognition as a novel multi-label multi-layer multi-instance multi-view learning problem: music is formulated as a hierarchical multi-instance structure (e.g., song-segment-sentence) where multiple emotion labels correspond to at least one of the instances, with multiple views at each layer. We propose a Hierarchical Music Emotion Recognition model (HMER) -- a novel hierarchical Bayesian model using sentence-level music and lyrics features. It captures music emotion dynamics with a song-segment-sentence hierarchical structure. HMER also considers emotion correlations between both music segments and sentences. Experimental results show that HMER outperforms several state-of-the-art methods in terms of F1 score and mean average precision.
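
For concreteness, a minimal sketch of the song-segment-sentence multi-instance structure described above; the field names are hypothetical and the paper's actual feature views may differ:

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Sentence:
        views: Dict[str, List[float]]   # e.g. {"music": [...], "lyrics": [...]}

    @dataclass
    class Segment:
        sentences: List[Sentence]       # instances at the sentence layer

    @dataclass
    class Song:
        segments: List[Segment]         # instances at the segment layer
        emotions: List[str]             # song-level multi-label annotation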

76 citations


Proceedings ArticleDOI
24 Aug 2014
TL;DR: A novel temporal and personalized topic model that jointly considers temporal dependencies between video semantics, users' interaction in commenting, and users' preferences as prior knowledge is proposed that outperforms several state-of-the-art baselines in video tagging quality.
Abstract: Time-sync video tagging aims to automatically generate tags for each video shot. It can improve the user's experience in previewing a video's timeline structure compared to traditional schemes that tag an entire video clip. In this paper, we propose a new application which extracts time-sync video tags by automatically exploiting crowdsourced comments from video websites such as Nico Nico Douga, where videos are commented on by online crowd users in a time-sync manner. The challenge of the proposed application is that users with bias interact with one another frequently and bring noise into the data, while the comments are too sparse to compensate for the noise. Previous techniques are unable to handle this task well as they consider video semantics independently, which may overfit the sparse comments in each shot and thus fail to provide accurate modeling. To resolve these issues, we propose a novel temporal and personalized topic model that jointly considers temporal dependencies between video semantics, users' interaction in commenting, and users' preferences as prior knowledge. Our proposed model shares knowledge across video shots via users to enrich the short comments, and peels off user interaction and user bias to solve the noisy-comment problem. Log-likelihood analyses and user studies on large datasets show that the proposed model outperforms several state-of-the-art baselines in video tagging quality. Case studies also demonstrate our model's capability of extracting tags from the crowdsourced short and noisy comments.

63 citations


Journal ArticleDOI
TL;DR: A new learning algorithm, called HTNLearn, is proposed to acquire HTN methods and action models simultaneously from partially observed plan traces; experimental results show that HTNLearn is both effective and efficient.

62 citations


Proceedings Article
27 Jul 2014
TL;DR: This paper proposes Source Free Transfer Learning (SFTL), a transfer learning framework that effectively selects helpful auxiliary data from an open knowledge space, and highlights the automatic construction of queries and the robustness of the framework.
Abstract: Transfer learning uses relevant auxiliary data to help the learning task in a target domain where labeled data are usually insufficient to train an accurate model. Given appropriate auxiliary data, researchers have proposed many transfer learning models. How to find such auxiliary data, however, has received little research attention so far. In this paper, we focus on the problem of auxiliary data retrieval and propose a transfer learning framework that effectively selects helpful auxiliary data from an open knowledge space (e.g., the World Wide Web). Because there is no need to manually select auxiliary data for different target domain tasks, we call our framework Source Free Transfer Learning (SFTL). For each target domain task, SFTL iteratively queries for helpful auxiliary data based on the learned model and then updates the model using the retrieved data. We highlight the automatic construction of queries and the robustness of the SFTL framework. Our experiments on the 20NewsGroup dataset and a Google search snippets dataset suggest that the framework achieves performance comparable to state-of-the-art methods that rely on dedicated selections of auxiliary data.
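
A minimal sketch of the iterative query-retrieve-update loop described above. Here `search` and `label_aux` are hypothetical placeholder callables for the open-knowledge-space retrieval and the auxiliary-labeling step, and deriving queries from model weights is an assumption, not necessarily the paper's exact construction:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def build_query(model, vocab, n_terms=5):
        # Hypothetical query construction: use the current linear model's
        # highest-weight features as search keywords.
        w = np.abs(model.coef_).sum(axis=0)
        return " ".join(vocab[i] for i in np.argsort(w)[::-1][:n_terms])

    def sftl_loop(X_target, y_target, vocab, search, label_aux, rounds=3):
        model = LogisticRegression(max_iter=1000).fit(X_target, y_target)
        X, y = X_target, y_target
        for _ in range(rounds):
            X_aux = search(build_query(model, vocab))   # query for auxiliary data
            X = np.vstack([X, X_aux])
            y = np.concatenate([y, label_aux(model, X_aux)])
            model = LogisticRegression(max_iter=1000).fit(X, y)  # update model
        return model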

49 citations


Proceedings ArticleDOI
24 Aug 2014
TL;DR: This paper proposes a Heterogeneous Translated Hashing (HTH) method that incorporates such an auxiliary bridge not only to improve current multi-view search but also to enable similarity search across heterogeneous media that have no direct correspondence.
Abstract: Hashing has enjoyed great success in large-scale similarity search. Recently, researchers have studied multi-modal hashing to meet the need for similarity search across different types of media. However, most existing methods apply to search across multiple views among which explicit bridge information is provided. Given a heterogeneous media search task, we observe that abundant multi-view data can be found on the Web and can serve as an auxiliary bridge. In this paper, we propose a Heterogeneous Translated Hashing (HTH) method that incorporates such an auxiliary bridge not only to improve current multi-view search but also to enable similarity search across heterogeneous media that have no direct correspondence. HTH simultaneously learns hash functions embedding heterogeneous media into different Hamming spaces, and translators aligning these spaces. Unlike almost all existing methods, which map heterogeneous data into a common Hamming space, mapping to different spaces provides more flexibility and discriminative power. We empirically verify the effectiveness and efficiency of our algorithm on two large real-world datasets: a publicly available Flickr dataset and a MIRFLICKR-Yahoo Answers dataset.
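
A toy illustration of the hash-then-translate idea, assuming random-projection hashes in place of the learned hash functions and a least-squares translator fit on auxiliary co-occurring pairs; the paper's joint optimization is not reproduced here:

    import numpy as np

    def hash_codes(X, W):
        return np.sign(X @ W)            # +/-1 codes in this modality's Hamming space

    def learn_translator(Ca, Cb):
        # Least-squares translator aligning modality-a codes with modality-b's
        # Hamming space, fit on auxiliary co-occurring pairs (the Web "bridge").
        T, *_ = np.linalg.lstsq(Ca, Cb, rcond=None)
        return T

    rng = np.random.default_rng(0)
    Wa = rng.standard_normal((50, 16))   # random-projection stand-ins for the
    Wb = rng.standard_normal((30, 16))   # learned per-modality hash functions
    A = rng.standard_normal((200, 50))   # co-occurring pairs from the auxiliary
    B = rng.standard_normal((200, 30))   # bridge data (row i of A matches row i of B)
    T = learn_translator(hash_codes(A, Wa), hash_codes(B, Wb))
    query = np.sign(hash_codes(rng.standard_normal((1, 50)), Wa) @ T)  # translate,
    db = hash_codes(rng.standard_normal((500, 30)), Wb)                # then match
    print(np.argmin((query != db).sum(axis=1)))  # nearest heterogeneous neighbor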

41 citations


Journal ArticleDOI
TL;DR: The proposed composite network-based user behavior models significantly improve the predictive accuracy over a number of existing approaches on several real-world applications, including a very large social networking dataset from Tencent Inc.
Abstract: Accurate prediction of user behaviors is important for many social media applications, including social marketing, personalization, and recommendation. A major challenge lies in that although many previous works model user behavior from historical behavior logs alone, the available user behavior data or interactions between users and items in a given social network are usually very limited and sparse (e.g., 99.9% empty), which makes models overfit the rare observations and fail to provide accurate predictions. We observe that many people are members of several social networks at the same time, such as Facebook, Twitter, and Tencent's QQ. Importantly, users' behaviors and interests in different networks influence one another. This provides an opportunity to leverage the knowledge of user behaviors in different networks, by considering the overlapping users in different networks as bridges, in order to alleviate the data sparsity problem and enhance the predictive performance of user behavior modeling. Combining different networks "simply and naively" does not work well. In this article, we formulate the problem of modeling multiple networks as "adaptive composite transfer" and propose a framework called ComSoc. ComSoc first selects the most suitable networks inside a composite social network via a hierarchical Bayesian model, parameterized for individual users. It then builds topic models for user behavior prediction using both the relationships in the selected networks and related behavior data. With different relational regularizations, we introduce different implementations, corresponding to different ways to transfer knowledge from composite social relations. To handle big data, we have implemented the algorithm using MapReduce. We demonstrate that the proposed composite network-based user behavior models significantly improve the predictive accuracy over a number of existing approaches on several real-world applications, including a very large social networking dataset from Tencent Inc.

38 citations


Journal ArticleDOI
TL;DR: The numerical result shows that the cascading failure propagation in directed complex networks is highly dependent on the attack strategies and the directionality of the network, as well as other network configurations.

35 citations


Journal ArticleDOI
TL;DR: A novel algorithm is proposed to leverage knowledge from different views and sources collaboratively, by letting different views from different sources complement one another through a co-training style framework while revising the distribution differences across domains.
Abstract: Transfer learning, which aims to help learning tasks in a target domain by leveraging knowledge from auxiliary domains, has been demonstrated to be effective in applications such as text mining and sentiment analysis. In addition, in many real-world applications, auxiliary data are described from multiple perspectives and are usually carried by multiple sources. For example, to help classify videos on YouTube, which include three perspectives (image, voice, and subtitles), one may borrow data from Flickr, Last.FM, and Google News. Although any single instance in these domains can only cover a part of the views available on YouTube, the pieces of information they carry may compensate for one another. If we can exploit these auxiliary domains in a collective manner and transfer the knowledge to the target domain, we can improve the target model from multiple perspectives. In this article, we consider this problem as Transfer Learning with Multiple Views and Multiple Sources. As different sources may have different probability distributions and different views may compensate or be inconsistent with each other, merging all data in a simplistic manner will not give an optimal result. Thus, we propose a novel algorithm to leverage knowledge from different views and sources collaboratively, letting different views from different sources complement one another through a co-training style framework while revising the distribution differences across domains. We conduct empirical studies on several real-world datasets to show that the proposed approach can improve classification accuracy by up to 8% against different kinds of state-of-the-art baselines.
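
A classic co-training skeleton in the spirit of the framework described above, under strong simplifying assumptions (one source, two views, joint confidence-based pseudo-labeling; the paper's distribution-revision step is omitted):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def co_train(X1, X2, y, U1, U2, rounds=5, per_round=10):
        # X1/X2: two views of labeled data; U1/U2: the same for unlabeled data.
        X1l, X2l, yl = X1.copy(), X2.copy(), y.copy()
        pool = np.arange(len(U1))
        clf1 = LogisticRegression(max_iter=1000)
        clf2 = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            if len(pool) == 0:
                break
            clf1.fit(X1l, yl)
            clf2.fit(X2l, yl)
            conf = (clf1.predict_proba(U1[pool]).max(axis=1)
                    + clf2.predict_proba(U2[pool]).max(axis=1))
            idx = pool[np.argsort(-conf)[:per_round]]  # jointly most confident
            pseudo = clf1.predict(U1[idx])             # one view teaches the other
            X1l = np.vstack([X1l, U1[idx]])
            X2l = np.vstack([X2l, U2[idx]])
            yl = np.concatenate([yl, pseudo])
            pool = np.setdiff1d(pool, idx)
        return clf1, clf2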

Proceedings ArticleDOI
01 Jan 2014
TL;DR: A novel algorithm named MixedTransfer is proposed, composed of a cross-domain harmonic function to avoid personal biases, a joint transition probability graph of mixed instances and features to model the heterogeneous transfer learning problem, and a random walk process to simulate label propagation on the graph and avoid the data sparsity problem.
Abstract: Heterogeneous transfer learning has been proposed as a new learning strategy to improve performance in a target domain by leveraging data from other, heterogeneous source domains whose feature spaces can differ across domains. In order to connect two different spaces, one common technique is to bridge the feature spaces by using some co-occurrence data. For example, annotated images can be used to build a feature mapping from words to image features, which is then applied to text-to-image knowledge transfer. However, in practice, such co-occurrence data often come from the Web, e.g., Flickr, and are generated by users. This means the data can be sparse and contain personal biases, so directly building models on them may fail to provide a reliable bridge. To solve these problems, we propose a novel algorithm named MixedTransfer. It is composed of three components: a cross-domain harmonic function to avoid personal biases, a joint transition probability graph of mixed instances and features to model the heterogeneous transfer learning problem, and a random walk process to simulate label propagation on the graph and avoid the data sparsity problem. We conduct experiments on 171 real-world tasks, showing that the proposed approach outperforms four state-of-the-art heterogeneous transfer learning algorithms.
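
The random walk component can be pictured with a generic label propagation sketch over a joint affinity graph; this is a standard stand-in, not MixedTransfer's exact transition construction:

    import numpy as np

    def random_walk_labels(W, Y, alpha=0.85, iters=100):
        # W: nonnegative affinity matrix over the joint graph of mixed instances
        # and features; Y: one-hot seed labels (all-zero rows for unlabeled nodes).
        P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic
        F = Y.astype(float).copy()
        for _ in range(iters):
            F = alpha * (P @ F) + (1 - alpha) * Y   # walk, then re-inject seeds
        return F.argmax(axis=1)                     # propagated label per node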

Journal ArticleDOI
TL;DR: The coencapsulation of an anti-CSC agent with conventional chemotherapy drugs in liposomes may be a promising drug delivery strategy for fighting cancer and eradicating tumor stem cells.

Proceedings ArticleDOI
24 Aug 2014
TL;DR: This paper proposes a "temporal skeletonization" approach that proactively reduces the cardinality of the representation for sequences by uncovering significant, hidden temporal structures, and is shown to greatly alleviate the curse of cardinality in challenging sequential pattern mining and clustering tasks.
Abstract: Sequential pattern analysis targets finding statistically relevant temporal structures where the values are delivered in a sequence. With the growing complexity of real-world dynamic scenarios, more and more symbols are often needed to encode a meaningful sequence. This is the so-called 'curse of cardinality', which can impose significant challenges on the design of sequential analysis methods, in terms of both computational efficiency and practical use. Indeed, given the overwhelming scale and the heterogeneous nature of sequential data, new visions and strategies are needed to face these challenges. To this end, in this paper we propose a 'temporal skeletonization' approach that proactively reduces the representation of sequences to uncover significant, hidden temporal structures. The key idea is to summarize the temporal correlations in an undirected graph. Then, the 'skeleton' of the graph serves as a higher granularity on which hidden temporal patterns are more likely to be identified. In the meantime, the embedding topology of the graph allows us to translate the rich temporal content into a metric space. This opens up new possibilities to explore, quantify, and visualize sequential data. Our approach has been shown to greatly alleviate the curse of cardinality in challenging tasks of sequential pattern mining and clustering. An evaluation on a Business-to-Business (B2B) marketing application demonstrates that our approach can effectively discover critical buying paths from noisy customer event data.
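
A compact sketch of the idea under simple assumptions (integer-coded symbols, a fixed co-occurrence window, and spectral embedding plus k-means as the 'skeleton' step):

    import numpy as np
    from sklearn.cluster import KMeans

    def skeletonize(sequences, n_symbols, n_clusters, window=2, dim=8):
        # Summarize temporal correlations in an undirected co-occurrence graph.
        W = np.zeros((n_symbols, n_symbols))
        for seq in sequences:
            for i, a in enumerate(seq):
                for b in seq[i + 1:i + 1 + window]:  # symbols close in time co-occur
                    W[a, b] += 1
                    W[b, a] += 1
        L = np.diag(W.sum(axis=1)) - W               # graph Laplacian
        _, vecs = np.linalg.eigh(L)
        emb = vecs[:, 1:dim + 1]                     # spectral embedding ('skeleton')
        merged = KMeans(n_clusters, n_init=10, random_state=0).fit_predict(emb)
        # Re-encode every sequence over the much smaller merged alphabet.
        return [[int(merged[s]) for s in seq] for seq in sequences]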


Journal ArticleDOI
Mingxuan Yuan, Ke Deng, Jia Zeng, Yanhua Li, Bing Ni, Xiuqiang He, Fei Wang, Wenyuan Dai, Qiang Yang
01 Aug 2014
TL;DR: A new, MBB-data-tailored, distributed analytic system named OceanST is presented, which addresses a series of problems and weaknesses of existing systems that were originally designed for more general purposes and can handle MBB data only to some extent.
Abstract: With the increasing prevalence of versatile mobile devices and the fast deployment of broadband mobile networks, a huge volume of Mobile Broadband (MBB) data has been generated over time. The MBB data naturally contain rich information about a large number of mobile users, covering a considerable fraction of the whole population nowadays, including the mobile applications they use at different locations and times; the MBB data may represent an unprecedentedly large knowledge base of human behavior, with highly recognized commercial and social value. However, the storage, management, and analysis of this huge and fast-growing volume of MBB data pose new and significant challenges to industrial practitioners and the research community. In this demonstration, we present a new, MBB-data-tailored, distributed analytic system named OceanST, which addresses a series of problems and weaknesses of existing systems that were originally designed for more general purposes and can handle MBB data only to some extent. OceanST features (i) efficient loading of ever-growing MBB data, (ii) a suite of spatiotemporal aggregate queries and basic analysis APIs frequently found in various MBB data application scenarios, and (iii) a sampling-based approximate solution with a provable accuracy bound to cope with the huge volume of MBB data. The demonstration will show the advantages of OceanST on a cluster of 5 machines using 3 TB of data.
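
The sampling-based approximation in (iii) can be illustrated generically: a uniform sample of records estimates an aggregate with a Hoeffding-style accuracy bound. This illustrates the principle only and is not OceanST's API:

    import math
    import random

    def sample_size(eps, delta):
        # Hoeffding bound: this many samples estimate a fraction to within
        # +/-eps with probability at least 1 - delta.
        return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

    def approx_count(records, pred, eps=0.01, delta=0.05, seed=0):
        # Estimate how many records satisfy pred without scanning them all.
        n = min(sample_size(eps, delta), len(records))
        sample = random.Random(seed).sample(records, n)
        return len(records) * sum(map(pred, sample)) / n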

Proceedings ArticleDOI
22 Dec 2014
TL;DR: An energy scheduling algorithmic solution is presented that investigates the potential economic benefits of matching generation to demand through load control at the domestic level under a real-time pricing scheme.
Abstract: Due to the installation of different forms of small-scale distributed generators (DGs), e.g., wind turbines and photovoltaic (PV) panels, in the domestic scope, as well as the emergence of real-time electricity pricing, achieving cost-effective energy management by appropriately scheduling the operation of domestic appliances becomes feasible. In this paper, we present an energy scheduling algorithmic solution that investigates the potential economic benefits of matching generation to demand through load control at the domestic level. The real-time pricing (RTP) scheme is considered in optimizing the scheduling events such that the electricity bill is minimized whilst meeting the operational constraints of domestic appliances. The proposed solution is evaluated through a set of simulation experiments, and the numerical results demonstrate that the suggested approach can appropriately coordinate the DG sources and domestic loads with maximized economic benefits and improved DG resource utilization.
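
As a toy illustration of RTP-driven scheduling (not the paper's algorithm), a deferrable appliance can simply be placed in the cheapest hours of its allowed operating window; the tariff below is hypothetical:

    def schedule_appliance(prices, window, hours_needed):
        # Run a deferrable load in the cheapest hours of its allowed window,
        # minimizing the bill while respecting the operational constraint.
        start, end = window
        slots = sorted(range(start, end), key=lambda h: prices[h])[:hours_needed]
        return sorted(slots)

    # Hypothetical 24-hour real-time tariff (price per kWh for hours 0..23):
    prices = [0.12] * 7 + [0.25] * 12 + [0.30] * 3 + [0.15] * 2
    print(schedule_appliance(prices, window=(8, 22), hours_needed=3))  # [8, 9, 10]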

Proceedings ArticleDOI
22 Dec 2014
TL;DR: This paper presents the modeling and implementation of a simulation test-bed of an AC/DC hybrid microgrid system in MATLAB/Simulink, considering the most available distributed generators (e.g., PV, micro turbines), typical storage facilities (e.g., battery), and different forms of power loads (linear and non-linear), and investigates a coordinated energy management approach for improving DG utilization efficiency as well as power quality under different microgrid operational conditions.
Abstract: In recent years, there has been increasing interest in promoting the proportion of power supplied from renewable energy sources by integrating various forms of small-scale Distributed Generators (DGs) and energy storage units, with capacities ranging from a few kilowatts to several megawatts (e.g., fuel cells (FC), photovoltaic cells (PV), batteries, and micro turbines), into medium-voltage power distribution networks. This is mainly driven by advances in renewable energy technology, the ambition of meeting CO2 emission reduction targets, and the demand for enhanced energy security and quality of supply. The microgrid has become a new energy provision paradigm for enhancing power supply reliability, comprising a collection of power loads, micro sources, and energy storage systems (ESSs). In addition, the hybrid AC/DC microgrid, which can directly interface different forms of renewable sources and loads, is becoming prevalent, as it eliminates the conventional multiple energy conversions between AC and DC energy systems. Such a hybrid microgrid provides a cost-effective energy supply paradigm with significantly reduced operation and maintenance costs. In this paper, we present the modeling and implementation of a simulation test-bed of an AC/DC hybrid microgrid system, considering the most available distributed generators (e.g., PV, micro turbines), typical storage facilities (e.g., battery), and different forms of power loads (linear and non-linear loads), based on the MATLAB/Simulink tool, and investigate a coordinated energy management approach for improving DG utilization efficiency as well as power quality under different microgrid operational conditions. The effectiveness and performance of the suggested coordinated energy management approach in the autonomous hybrid AC/DC microgrid is assessed through a set of simulation experiments for typical microgrid operational scenarios.

Patent
Mingzhu Li, Qiang Yang, Yanlin Song, Libin Wang, Liang Yang, Ping Wang
25 Aug 2014
TL;DR: A photonic crystal microsphere composed of a plurality of mono-dispersed polymer particles in a closely-packed and regularly-ordered structure, with interstices therebetween, forming the photonic crystal microsphere.
Abstract: A photonic crystal microsphere, comprising: a plurality of mono-dispersed polymer particles in a closely-packed and regularly-ordered structure, with interstices therebetween, forming the photonic crystal microsphere; and a co-assembly material contained in the interstices. The photonic crystal microsphere provides a structure of enhanced strength and a good color effect.

Journal ArticleDOI
TL;DR: In this paper, a passively Q-switched thulium-doped fiber (TDF) laser based on the nonlinear polarization evolution technique was demonstrated at a central wavelength of 1898.4 nm.
Abstract: A passively Q-switched thulium-doped fiber (TDF) laser based on the nonlinear polarization evolution technique was demonstrated at a central wavelength of 1898.4 nm. With increasing pump power, the pulse repetition frequency of the Q-switched TDF laser increased from 87.6 to 110.1 kHz, while the corresponding pulse duration decreased from 1171 to 785.7 ns. The power instability of the TDF laser was measured to be about ±1.5% over 8 h. In addition, mode-locked operation was also observed in our all-fiber TDF laser by carefully adjusting the polarization controllers.

Proceedings ArticleDOI
01 Jan 2014
TL;DR: This paper addresses a general setting where different tasks have heterogeneous input spaces, proposes a general framework to leverage pivot information, and presents an effective optimization algorithm to find both the mappings and the prediction model.
Abstract: Most existing works on multi-task learning (MTL) assume the same input space for different tasks. In this paper, we address a general setting where different tasks have heterogeneous input spaces. This setting has many potential applications, yet it poses new algorithmic challenges: how can we link seemingly uncorrelated tasks to mutually boost their learning performance? Our key observation is that in many real applications, there may exist some correspondence among the inputs of different tasks, referred to as pivots. For such applications, we first propose a learning scheme for multiple tasks and analyze its generalization performance. We then focus on problems where only a limited number of pivots are available, and propose a general framework to leverage the pivot information. The idea is to map the heterogeneous input spaces to a common space and construct a single prediction model in this space for all tasks. We further propose an effective optimization algorithm to find both the mappings and the prediction model. Experimental results demonstrate its effectiveness, especially with a very limited number of pivots.
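
A minimal sketch of the pivot idea under strong simplifying assumptions: the few pivot pairs fit a least-squares map from one input space into the other, which then serves as the common space for a single shared model (the paper learns the mappings and the model jointly; this sketch does not):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def pivot_mtl(P1, P2, X1, y1, X2, y2):
        # P1, P2: the few corresponding pivot pairs (row i of P1 matches row i
        # of P2) from the two heterogeneous input spaces.
        A, *_ = np.linalg.lstsq(P1, P2, rcond=None)  # map space 1 -> space 2
        X = np.vstack([X1 @ A, X2])                  # both tasks in one space
        y = np.concatenate([y1, y2])
        return LogisticRegression(max_iter=1000).fit(X, y)  # one shared model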


Proceedings ArticleDOI
Boyang Zhou, Wen Gao, Zhao Shanshan, Lu Xinjia, Du Zhong, Chunming Wu, Qiang Yang
TL;DR: This work proposes a novel multi-domain virtual network mapping algorithm for SDN that uses a distributed architecture to achieve better efficiency and flexibility than the traditional PolyViNE approach while protecting privacy.
Abstract: Software-Defined Networking (SDN) separates the control plane from the data plane to improve control flexibility, supporting multiple services with isolated physical resources. In SDN, virtual network (VN) mapping is required by network services for allocating these resources in a multi-domain SDN. This mapping problem is challenged by its NP-completeness and by business privacy requirements that protect each domain's topology. We propose a novel multi-domain mapping algorithm for SDN that uses a distributed architecture to achieve better efficiency and flexibility than the traditional PolyViNE approach while protecting privacy. In simulations on a large synthesized topology with 10 to 40 domains, our approach is 25% and 15% faster than PolyViNE in time, and 30% better at balancing load across multiple controllers.


Posted Content
TL;DR: Empirical studies demonstrate that the proposed approach improves the mean average precision of friendship prediction over state-of-the-art baselines on nine real-world social networking datasets significantly.
Abstract: Friendship prediction is an important task in social network analysis (SNA). It can help users identify friends and improve their level of activity. Most previous approaches predict users' friendships based on their historical records, such as existing friendships, social interactions, etc. However, in reality, most users have a limited number of friends in a single network, and the data can be very sparse. The sparsity problem causes existing methods to overfit the rare observations and suffer from serious performance degradation. This is particularly true when a new social network has just started to form. We observe that many of today's social networks are composite in nature, where people are often engaged in multiple networks. In addition, users' friendships are often correlated; for example, two users may be friends on both Facebook and Google+. Thus, by considering those overlapping users as bridges, the friendship knowledge in other networks can help predict friendships in the current network. This can be achieved by exploiting the knowledge in different networks in a collective manner. However, as each individual network has its own properties, which can be incompatible and inconsistent with other networks, naively merging all networks into a single one may not work well. The proposed solution is to extract the common behaviors between different networks via a hierarchical Bayesian model. It captures the common knowledge across networks while avoiding negative impacts due to network differences. Empirical studies demonstrate that the proposed approach significantly improves the mean average precision of friendship prediction over state-of-the-art baselines on nine real-world social networking datasets.

Proceedings ArticleDOI
14 Jul 2014
TL;DR: The proposed partitioning solution is assessed through a set of numerical comparative studies on the IEEE 69-bus test model, using two available solutions as comparison benchmarks; the results demonstrate that it performs well in guaranteeing the reliable supply of essential power loads as well as improving the utilization efficiency of distributed generation.
Abstract: Island operation of a fraction of a power distribution network with distributed generators (DGs) is considered an efficient operational paradigm for enhancing the security of power supply. This paper addresses the issue of island partitioning in distribution networks with the penetration of small-scale DGs and presents a two-stage algorithmic solution. In the first stage, a constraint satisfaction problem (CSP) based method is adopted to create, for individual DGs, a collection of network partitioning results that meet the constraints imposed by the distribution network; this can be carried out offline, assuming fault occurrences at certain points. In the second stage, to identify the optimal partitioning, the heuristic simulated annealing algorithm (SAA) is employed. Through this two-stage approach, the optimal island partitioning can be obtained with acceptable time complexity in large-scale power distribution networks. The proposed partitioning solution is assessed through a set of numerical comparative studies on the IEEE 69-bus test model, using two available solutions as comparison benchmarks. The numerical results demonstrate that the proposed solution performs well in guaranteeing the reliable supply of essential power loads as well as improving the utilization efficiency of distributed generation.
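
The second stage can be pictured with a generic simulated-annealing skeleton over the CSP-generated candidate partitions; the cost function here is a hypothetical placeholder, not the paper's objective:

    import math
    import random

    def anneal(candidates, cost, T0=1.0, cooling=0.95, steps=200, seed=0):
        # Pick the best island partitioning among the CSP-generated candidates;
        # `cost` is a hypothetical objective (e.g., unserved essential load).
        rng = random.Random(seed)
        cur = rng.choice(candidates)
        best, T = cur, T0
        for _ in range(steps):
            nxt = rng.choice(candidates)                  # jump to a random candidate
            d = cost(nxt) - cost(cur)
            if d < 0 or rng.random() < math.exp(-d / T):  # accept worse moves early
                cur = nxt
            if cost(cur) < cost(best):
                best = cur
            T *= cooling                                  # cool the temperature
        return best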

Proceedings ArticleDOI
19 May 2014
TL;DR: A nonuniform signal recovery algorithm based on sparseness-constrained least-squares inversion is proposed to recover the uniformly sampled signal, which is then used for beamforming; simulation results demonstrate that the grating lobes are suppressed successfully.
Abstract: In order to boost HF radar angle resolution and reduce the physical size of the array, we adopt multi-carrier frequency in a MIMO HFSWR system, which also brings the benefit of frequency diversity. Beamforming at the receive array is studied in this paper. Thanks to the use of a widely spaced carrier frequency combination, better angle resolution is obtained by a nonuniform virtual array (a minimally redundant array) with an aperture extended beyond that of the physical array, but high grating lobes appear as a drawback. To suppress the grating lobes, a nonuniform signal recovery algorithm based on sparseness-constrained least-squares inversion is proposed to recover the uniformly sampled signal, after which the recovered signal is used for beamforming. Simulation results demonstrate that the grating lobes are suppressed successfully.
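
Sparseness-constrained least-squares inversion is commonly solved with an iterative soft-thresholding (ISTA) scheme; the sketch below shows that generic form for real-valued data (the paper's exact solver and the complex-valued radar case are not reproduced):

    import numpy as np

    def ista(A, y, lam=0.1, iters=500):
        # Solve min_x ||A x - y||^2 + lam * ||x||_1: recover a sparse signal x
        # from nonuniform measurements y = A x; the reconstructed uniform
        # samples can then be fed to the beamformer.
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = x - (A.T @ (A @ x - y)) / L  # gradient step on the LS term
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        return x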

Patent
16 Jul 2014
TL;DR: A device and a method for the joint evaluation of the denitration and demercuration performances of a catalyst; a catalyst suited to a specific flue gas can be selected by swapping in different types of catalysts.
Abstract: The aim of the invention is to provide a device and a method for the joint evaluation of the denitration and demercuration performances of a catalyst. The device comprises pipelines I, II, III, IV, V and VII, a pressure reducing valve, a filter, a mass flowmeter, a one-way valve, a preheating mixer, an electric heating furnace, three-way ball valves I, II and III, a flue gas analyzer, a mercury analyzer, and a catalytic reactor with two gas inlets and one gas outlet. Actual factory conditions are simulated by producing flue gases with different compositions. A catalyst with good denitration and demercuration performance for a specific flue gas can be selected by swapping in different types of catalysts.

Posted Content
TL;DR: A similarity metric called pyramid-error distance (PED) is introduced to measure pairwise distances between RFs by summing pyramid-like matching errors over a set of low-level features, and, consistent with the proposed PED, a simple nonparametric classifier is constructed for classification.
Abstract: The challenge of object categorization in images is largely due to arbitrary translations and scales of the foreground objects. To attack this difficulty, we propose a new approach called collaborative receptive field learning to extract specific receptive fields (RFs) or regions from multiple images, where the selected RFs are supposed to focus on the foreground objects of a common category. To this end, we solve the problem by maximizing a submodular function over a similarity graph constructed from a pool of RF candidates. However, measuring the pairwise distance of RFs for building the similarity graph is a nontrivial problem. Hence, we introduce a similarity metric called pyramid-error distance (PED) to measure pairwise distances by summing pyramid-like matching errors over a set of low-level features. Besides, consistent with the proposed PED, we construct a simple nonparametric classifier for classification. Experimental results show that our method effectively discovers the foreground objects in images and improves classification performance.
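
A rough numpy sketch of a pyramid-style matching error between two RFs' feature maps, under assumed inputs (H x W x d arrays of low-level features); the paper's exact pooling and matching details may differ:

    import numpy as np

    def _pool(F, cells):
        # Average-pool an H x W x d feature map over a cells x cells grid.
        rows = np.array_split(F, cells, axis=0)
        return np.stack([np.stack([blk.mean(axis=(0, 1))
                                   for blk in np.array_split(r, cells, axis=1)])
                         for r in rows])

    def pyramid_error_distance(F1, F2, levels=3):
        # Sum matching errors over pyramid levels of 1x1, 2x2, 4x4, ... cells.
        return sum(np.linalg.norm(_pool(F1, 2 ** lvl) - _pool(F2, 2 ** lvl))
                   for lvl in range(levels))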

Journal ArticleDOI
TL;DR: The research discussed here transforms the biological problem into a machine learning problem by converting wild bird migratory paths into graphs, and H5N1 outbreak prediction is achieved by discovering weighted closed cliques from the graphs using the mining algorithm High-wEight cLosed cliquE miNing (HELEN).
Abstract: Advanced satellite tracking technologies have collected huge amounts of wild bird migration data. Biologists use these data to understand dynamic migration patterns, study correlations between habitats, and predict global spreading trends of avian influenza. The research discussed here transforms the biological problem into a machine learning problem by converting wild bird migratory paths into graphs. H5N1 outbreak prediction is achieved by discovering weighted closed cliques from the graphs using the mining algorithm High-wEight cLosed cliquE miNing (HELEN). The learning algorithm HELEN-p then predicts potential H5N1 outbreaks at habitats. This prediction method is more accurate than traditional methods used on a migration dataset obtained through a real satellite bird-tracking system. Empirical analysis shows that H5N1 spreads in a manner of high-weight closed cliques and frequent cliques.
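
As a toy stand-in for HELEN (which mines weighted closed cliques; maximal cliques are used here as a simpler proxy), networkx can enumerate the heavy cliques of a habitat graph:

    import networkx as nx

    def high_weight_cliques(G, min_weight):
        # Enumerate maximal cliques of the migration graph and keep the heavy
        # ones, scoring a clique by the sum of its edge weights (e.g., how many
        # bird paths connect two habitats).
        heavy = []
        for clique in nx.find_cliques(G):
            if len(clique) < 2:
                continue
            w = sum(G[u][v].get("weight", 1.0)
                    for i, u in enumerate(clique) for v in clique[i + 1:])
            if w >= min_weight:
                heavy.append((clique, w))
        return sorted(heavy, key=lambda t: -t[1])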