scispace - formally typeset
Institution

Yahoo!

Company · London, United Kingdom
About: Yahoo! is a company based in London, United Kingdom. It is known for research contributions in the topics of Population and Web search query. The organization has 26749 authors who have published 29915 publications, receiving 732583 citations. The organization is also known as Yahoo! Inc. and Maudwen-Yahoo! Inc.


Papers
Journal ArticleDOI
TL;DR: A LogDet divergence-based metric-learning model with triplet constraints is established, which can learn the Mahalanobis matrix with high precision and robustness.
Abstract: Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic, since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or have different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric-learning model with triplet constraints, which can learn the Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied to nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski’s homepage, and the results demonstrate the improved performance of the proposed approach.
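The measure described above combines a learned Mahalanobis local distance with the classic DTW dynamic program. The sketch below is a minimal illustration of that combination, not the authors' implementation; the function name `mahalanobis_dtw` and the assumption that `M` is an already-learned positive semi-definite matrix are mine.

```python
import numpy as np

def mahalanobis_dtw(X, Y, M):
    """DTW distance between two multivariate time series X (n x d) and
    Y (m x d), using the Mahalanobis local distance
    d(x, y) = sqrt((x - y)^T M (x - y)) with a learned PSD matrix M."""
    n, m = len(X), len(Y)
    # Local Mahalanobis distance between every pair of frames.
    D = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            diff = X[i] - Y[j]
            D[i, j] = np.sqrt(diff @ M @ diff)
    # Classic DTW dynamic program over the local-cost matrix;
    # this aligns series that are out of sync or of different lengths.
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = D[i - 1, j - 1] + min(acc[i - 1, j],
                                              acc[i, j - 1],
                                              acc[i - 1, j - 1])
    return acc[n, m]
```

With `M` set to the identity matrix this reduces to ordinary Euclidean DTW; the metric-learning step in the paper replaces that identity with a matrix learned under LogDet-divergence triplet constraints.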

140 citations

Journal ArticleDOI
TL;DR: These two medications have many similarities with respect to mechanism of action, antimicrobial spectrum, clinical uses and toxicity, however, they also differ in several aspects, including chemical structure, formulation, potency, dosage and pharmacokinetic properties.
Abstract: Hospital-acquired infections due to multidrug-resistant gram-negative bacteria constitute major health problems, since the medical community is continuously running out of available effective antibiotics and no new agents are in the pipeline. Polymyxins, a group of antibacterials that were discovered during the late 1940s, represent some of the last treatment options for these infections. Only two polymyxins are available commercially, polymyxin E (colistin) and polymyxin B. Although several reviews have been published recently regarding colistin, no review has focused on the similarities and differences between polymyxin B and colistin. These two medications have many similarities with respect to mechanism of action, antimicrobial spectrum, clinical uses and toxicity. However, they also differ in several aspects, including chemical structure, formulation, potency, dosage and pharmacokinetic properties.

139 citations

Journal ArticleDOI
TL;DR: This review of the current literature aims to study correlations between the chemical structure and gastric anti-ulcer activity of tannins.
Abstract: This review of the current literature aims to study correlations between the chemical structure and gastric anti-ulcer activity of tannins. Tannins are used in medicine primarily because of their astringent properties. These properties are due to the fact that tannins react with the tissue proteins with which they come into contact. In gastric ulcers, this tannin-protein complex layer protects the stomach by promoting greater resistance to chemical and mechanical injury or irritation. Moreover, in several experimental models of gastric ulcer, tannins have been shown to present antioxidant activity, promote tissue repair, exhibit anti Helicobacter pylori effects, and they are involved in gastrointestinal tract anti-inflammatory processes. The presence of tannins explains the anti-ulcer effects of many natural products.

139 citations

Proceedings Article
02 Jun 2010
TL;DR: This work considers two main approaches: deriving simplification probabilities via an edit model that accounts for a mixture of different operations, and using metadata to focus on edits that are more likely to be simplification operations.
Abstract: We report on work in progress on extracting lexical simplifications (e.g., "collaborate" → "work together"), focusing on utilizing edit histories in Simple English Wikipedia for this task. We consider two main approaches: (1) deriving simplification probabilities via an edit model that accounts for a mixture of different operations, and (2) using metadata to focus on edits that are more likely to be simplification operations. We find our methods to outperform a reasonable baseline and yield many high-quality lexical simplifications not included in an independently created, manually prepared list.

139 citations

Posted Content
TL;DR: A novel loss function for pairwise ranking is proposed, which is smooth everywhere, and a label decision module is incorporated into the model, estimating the optimal confidence thresholds for each visual concept.
Abstract: Learning to rank has recently emerged as an attractive technique to train deep convolutional neural networks for various computer vision tasks. Pairwise ranking, in particular, has been successful in multi-label image classification, achieving state-of-the-art results on various benchmarks. However, most existing approaches use the hinge loss to train their models, which is non-smooth and thus is difficult to optimize especially with deep networks. Furthermore, they employ simple heuristics, such as top-k or thresholding, to determine which labels to include in the output from a ranked list of labels, which limits their use in the real-world setting. In this work, we propose two techniques to improve pairwise ranking based multi-label image classification: (1) we propose a novel loss function for pairwise ranking, which is smooth everywhere and thus is easier to optimize; and (2) we incorporate a label decision module into the model, estimating the optimal confidence thresholds for each visual concept. We provide theoretical analyses of our loss function in the Bayes consistency and risk minimization framework, and show its benefit over existing pairwise ranking formulations. We demonstrate the effectiveness of our approach on three large-scale datasets, VOC2007, NUS-WIDE and MS-COCO, achieving the best reported results in the literature.
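The abstract contrasts the non-smooth pairwise hinge with a loss that is "smooth everywhere". One standard way to achieve that is to replace the hinge with a softplus surrogate; the sketch below illustrates this idea under that assumption and is not the paper's exact loss function (the name `smooth_pairwise_rank_loss` is hypothetical).

```python
import numpy as np

def smooth_pairwise_rank_loss(scores, labels):
    """Smooth surrogate for the pairwise ranking hinge.

    For every (positive, negative) label pair, the non-smooth hinge
    max(0, 1 + s_neg - s_pos) is replaced by the everywhere-
    differentiable softplus log(1 + exp(s_neg - s_pos)), which still
    pushes positive-label scores above negative-label scores."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]   # scores of labels present in the image
    neg = scores[labels == 0]   # scores of absent labels
    # Margins for all positive/negative pairs via broadcasting.
    margins = neg[None, :] - pos[:, None]
    return np.log1p(np.exp(margins)).mean()
```

When every positive label is scored well above every negative one, the margins are large and negative and the loss approaches zero; a label decision module, as the abstract describes, would then threshold the scores per concept instead of using a fixed top-k cutoff.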

139 citations


Authors

Showing all 26766 results

Name | H-index | Papers | Citations
Ashok Kumar | 151 | 5,654 | 164,086
Alexander J. Smola | 122 | 434 | 110,222
Howard I. Maibach | 116 | 1,821 | 60,765
Sanjay Jain | 103 | 881 | 46,880
Amirhossein Sahebkar | 100 | 1,307 | 46,132
Marc Davis | 99 | 412 | 50,243
Wenjun Zhang | 96 | 976 | 38,530
Jian Xu | 94 | 1,366 | 52,057
Fortunato Ciardiello | 94 | 695 | 47,352
Tong Zhang | 93 | 414 | 36,519
Michael E. J. Lean | 92 | 411 | 30,939
Ashish K. Jha | 87 | 503 | 30,020
Xin Zhang | 87 | 1,714 | 40,102
Theunis Piersma | 86 | 632 | 34,201
George Varghese | 84 | 253 | 28,598
Network Information
Related Institutions (5)
University of Toronto | 294.9K papers, 13.5M citations | 85% related
University of California, San Diego | 204.5K papers, 12.3M citations | 85% related
University College London | 210.6K papers, 9.8M citations | 84% related
Cornell University | 235.5K papers, 12.2M citations | 84% related
University of Washington | 305.5K papers, 17.7M citations | 84% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 2
2022 | 47
2021 | 1,088
2020 | 1,074
2019 | 1,568
2018 | 1,352