Institution
Yahoo!
Company • London, United Kingdom
About: Yahoo! is a company based in London, United Kingdom. It is known for research contributions in the topics Population & Web search query. The organization has 26749 authors who have published 29915 publications, receiving 732583 citations. The organization is also known as Yahoo! Inc. & Maudwen-Yahoo! Inc.
Papers published on a yearly basis
Papers
TL;DR: A LogDet divergence-based metric learning model with triplet constraints is established, which can learn the Mahalanobis matrix with high precision and robustness.
Abstract: Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or have different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning model with triplet constraints, which can learn the Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied to nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski’s homepage, and the results demonstrate the improved performance of the proposed approach.
140 citations
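The core measure in the abstract above — DTW alignment with a Mahalanobis local distance — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the metric matrix `M` is assumed given (in the paper it would be learned via LogDet divergence-based metric learning with triplet constraints), and the function names are hypothetical.

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Local distance between two feature vectors under metric matrix M."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

def dtw_mahalanobis(A, B, M):
    """DTW alignment cost between two multivariate time series
    A (n x d) and B (m x d), using the Mahalanobis distance as the
    local distance between per-timestep vectors."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = mahalanobis_dist(A[i - 1], B[j - 1], M)
            # Standard DTW recurrence: extend the cheapest of the
            # three predecessor alignments.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

With `M` set to the identity matrix this reduces to ordinary Euclidean DTW; a learned `M` reweights and correlates the variables before alignment.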
TL;DR: These two medications have many similarities with respect to mechanism of action, antimicrobial spectrum, clinical uses and toxicity; however, they also differ in several aspects, including chemical structure, formulation, potency, dosage and pharmacokinetic properties.
Abstract: Hospital-acquired infections due to multidrug-resistant gram-negative bacteria constitute major health problems, since the medical community is continuously running out of available effective antibiotics and no new agents are in the pipeline. Polymyxins, a group of antibacterials that were discovered during the late 1940s, represent some of the last treatment options for these infections. Only two polymyxins are available commercially, polymyxin E (colistin) and polymyxin B. Although several reviews have been published recently regarding colistin, no review has focused on the similarities and differences between polymyxin B and colistin. These two medications have many similarities with respect to mechanism of action, antimicrobial spectrum, clinical uses and toxicity. However, they also differ in several aspects, including chemical structure, formulation, potency, dosage and pharmacokinetic properties.
139 citations
TL;DR: This review of the current literature aims to study correlations between the chemical structure and gastric anti-ulcer activity of tannins.
Abstract: This review of the current literature aims to study correlations between the chemical structure and gastric anti-ulcer activity of tannins. Tannins are used in medicine primarily because of their astringent properties. These properties are due to the fact that tannins react with the tissue proteins with which they come into contact. In gastric ulcers, this tannin-protein complex layer protects the stomach by promoting greater resistance to chemical and mechanical injury or irritation. Moreover, in several experimental models of gastric ulcer, tannins have been shown to present antioxidant activity, promote tissue repair, exhibit anti Helicobacter pylori effects, and they are involved in gastrointestinal tract anti-inflammatory processes. The presence of tannins explains the anti-ulcer effects of many natural products.
139 citations
02 Jun 2010 • TL;DR: This work considers two main approaches: deriving simplification probabilities via an edit model that accounts for a mixture of different operations, and using metadata to focus on edits that are more likely to be simplification operations.
Abstract: We report on work in progress on extracting lexical simplifications (e.g., "collaborate" → "work together"), focusing on utilizing edit histories in Simple English Wikipedia for this task. We consider two main approaches: (1) deriving simplification probabilities via an edit model that accounts for a mixture of different operations, and (2) using metadata to focus on edits that are more likely to be simplification operations. We find our methods to outperform a reasonable baseline and yield many high-quality lexical simplifications not included in an independently created, manually prepared list.
139 citations
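The first approach in the abstract above — deriving simplification probabilities from edit histories — can be illustrated with a simple relative-frequency estimator over mined word substitutions. This is only a hedged sketch: the add-alpha smoothing, the function name, and the toy data are illustrative assumptions, not the authors' edit model.

```python
from collections import Counter

def simplification_probs(substitutions, alpha=1.0):
    """Estimate P(replacement | original) for word substitutions
    mined from revision histories, by add-alpha smoothed relative
    frequency.  `substitutions` is a list of (original, replacement)
    pairs."""
    pair_counts = Counter(substitutions)
    source_counts = Counter(w for w, _ in substitutions)
    vocab = {w2 for _, w2 in substitutions}
    probs = {}
    for (w, w2), c in pair_counts.items():
        probs[(w, w2)] = (c + alpha) / (source_counts[w] + alpha * len(vocab))
    return probs

# Toy edit history: "collaborate" rewritten three times one way, once another.
edits = [("collaborate", "work together")] * 3 + [("collaborate", "cooperate")]
p = simplification_probs(edits)
```

In practice one would additionally separate simplification edits from fixes, spam reverts, and other operations — the mixture-of-operations modeling the abstract describes — before trusting these frequencies.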
TL;DR: A novel loss function for pairwise ranking is proposed, which is smooth everywhere, and a label decision module is incorporated into the model, estimating the optimal confidence thresholds for each visual concept.
Abstract: Learning to rank has recently emerged as an attractive technique to train deep convolutional neural networks for various computer vision tasks. Pairwise ranking, in particular, has been successful in multi-label image classification, achieving state-of-the-art results on various benchmarks. However, most existing approaches use the hinge loss to train their models, which is non-smooth and thus is difficult to optimize especially with deep networks. Furthermore, they employ simple heuristics, such as top-k or thresholding, to determine which labels to include in the output from a ranked list of labels, which limits their use in the real-world setting. In this work, we propose two techniques to improve pairwise ranking based multi-label image classification: (1) we propose a novel loss function for pairwise ranking, which is smooth everywhere and thus is easier to optimize; and (2) we incorporate a label decision module into the model, estimating the optimal confidence thresholds for each visual concept. We provide theoretical analyses of our loss function in the Bayes consistency and risk minimization framework, and show its benefit over existing pairwise ranking formulations. We demonstrate the effectiveness of our approach on three large-scale datasets, VOC2007, NUS-WIDE and MS-COCO, achieving the best reported results in the literature.
139 citations
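The smoothness point in the abstract above can be made concrete with a softplus relaxation of the pairwise ranking hinge. This is a generic sketch of the idea, not the paper's exact loss: the function name and the averaging convention are assumptions.

```python
import math

def smooth_pairwise_loss(pos_scores, neg_scores):
    """Softplus relaxation of the pairwise ranking loss: for each
    (positive label, negative label) score pair, log(1 + exp(s_neg - s_pos)).
    Unlike the hinge max(0, 1 + s_neg - s_pos), this is differentiable
    everywhere, which eases optimization with deep networks."""
    loss = 0.0
    for sp in pos_scores:
        for sn in neg_scores:
            loss += math.log1p(math.exp(sn - sp))
    return loss / max(1, len(pos_scores) * len(neg_scores))
```

The loss shrinks smoothly toward zero as positive-label scores separate from negative-label scores, so gradients never vanish abruptly at a hinge point; the label decision module the abstract mentions would then threshold the ranked scores per concept instead of a fixed top-k cutoff.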
Authors
Showing all 26766 results
Name | H-index | Papers | Citations |
---|---|---|---|
Ashok Kumar | 151 | 5654 | 164086 |
Alexander J. Smola | 122 | 434 | 110222 |
Howard I. Maibach | 116 | 1821 | 60765 |
Sanjay Jain | 103 | 881 | 46880 |
Amirhossein Sahebkar | 100 | 1307 | 46132 |
Marc Davis | 99 | 412 | 50243 |
Wenjun Zhang | 96 | 976 | 38530 |
Jian Xu | 94 | 1366 | 52057 |
Fortunato Ciardiello | 94 | 695 | 47352 |
Tong Zhang | 93 | 414 | 36519 |
Michael E. J. Lean | 92 | 411 | 30939 |
Ashish K. Jha | 87 | 503 | 30020 |
Xin Zhang | 87 | 1714 | 40102 |
Theunis Piersma | 86 | 632 | 34201 |
George Varghese | 84 | 253 | 28598 |