Book

Linear complementarity, linear and nonlinear programming

01 Jan 1988
About: The book was published on 1988-01-01 and is currently open access. It has received 1012 citations to date. It focuses on the topics: Mixed complementarity problem and Complementarity theory.
Citations
Journal ArticleDOI
TL;DR: A new class of matrices is defined, and it is established that in \(\bar Z\) superfluous matrices of any order n ⩾ 4 can easily be constructed.
Abstract: Superfluous matrices were introduced by Howe (1983) in linear complementarity. In general, producing examples of this class is tedious (a few examples can be found in Chapter 6 of Cottle, Pang and Stone (1992)). To overcome this problem, we define a new class of matrices \(\bar Z\) and establish that in \(\bar Z\) superfluous matrices of any order n ⩾ 4 can easily be constructed. For every integer k, an example of a superfluous matrix of degree k is exhibited at the end.
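For context, superfluous matrices are defined relative to the standard linear complementarity problem, which is also the subject of the cited book. A minimal statement in standard notation follows; the definition of the class \(\bar Z\) and of matrix degree is specific to the paper and is not reproduced here.

```latex
% Linear complementarity problem LCP(q, M):
% given M \in \mathbb{R}^{n \times n} and q \in \mathbb{R}^{n},
% find z \in \mathbb{R}^{n} such that
z \ge 0, \qquad w = Mz + q \ge 0, \qquad z^{\top} w = 0.
```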

1 citation

Posted Content
TL;DR: This work introduces and analyzes traffic equations for queueing networks with buffer overflows and presents a novel, efficient algorithm for solving these traffic equations together with a sufficient condition for existence and uniqueness of solutions.
Abstract: Many real life queueing networks have finite buffers with overflows. To understand the behavior of such networks, we consider traffic equations that generalize the traffic equations of classic open queueing networks where some nodes are potentially overloaded. We present a novel, efficient algorithm for solving the equations for overflow networks together with a sufficient condition for existence and uniqueness of solutions. Our analysis also sharpens results of traffic equations for classic open queueing networks.
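To make the classic case concrete: in an open queueing network without overflows, the traffic equations form the linear system λ = α + Pᵀλ, where α holds external arrival rates and P is the routing matrix. Below is a minimal sketch with illustrative data; the paper's overflow networks generalize this linear system, and that generalization is not reproduced here.

```python
import numpy as np

# Classic open-network traffic equations: lam = alpha + P^T @ lam,
# where alpha[i] is the external arrival rate at node i and
# P[i, j] is the routing probability from node i to node j.
# (Illustrative data; the paper's overflow networks generalize this.)
alpha = np.array([1.0, 0.5, 0.0])
P = np.array([
    [0.0, 0.6, 0.2],
    [0.0, 0.0, 0.8],
    [0.1, 0.0, 0.0],
])

# Solve (I - P^T) lam = alpha; a unique solution exists when the
# spectral radius of P is below 1, i.e. all work eventually leaves.
lam = np.linalg.solve(np.eye(3) - P.T, alpha)
print(lam)
```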

1 citation

Dissertation
28 Jan 2013
TL;DR: A probabilistic model is developed that provides accurate relevance judgments with a smaller number of labels collected per document; the approach should assist research institutes and commercial search engines in constructing test collections where document collections and query logs are large but economic constraints prohibit gathering comprehensive relevance judgments.
Abstract: We consider the problem of optimally allocating a limited budget to acquire relevance judgments when constructing an information retrieval test collection. We assume that there is a large set of test queries, for each of which a large number of documents need to be judged. However, the available budget permits judging only a subset of them. We begin by developing a mathematical framework for query selection as a mechanism for reducing the cost of constructing information retrieval test collections. The framework provides valuable insights into the properties of the optimal subset of queries: the selected queries should be minimally correlated with one another, yet strongly correlated with the remaining queries. In contrast to previous work, which is mostly retrospective, our framework does not assume that relevance judgments are available a priori, and hence is designed to work in practice.

The framework is then extended to accommodate both query selection and document selection, arriving at a unified budget allocation method that prioritizes query-document pairs and selects the subset with the highest priority scores to be judged. The unified budget allocation is formulated as a convex optimization problem, permitting efficient solution and providing a flexible framework for incorporating various optimization constraints.

Once a subset of query-document pairs is selected, crowdsourcing can be used to collect the associated relevance judgments. While labels provided by crowdsourcing are relatively inexpensive, they vary in quality, introducing noise into the relevance judgments. To deal with noisy relevance judgments, multiple labels for each document are collected from different assessors. It is common practice in information retrieval to aggregate these labels by majority voting; in contrast, we develop a probabilistic model that provides accurate relevance judgments with a smaller number of labels collected per document.

We demonstrate the effectiveness of our cost optimization approach on three experimental datasets: (i) various TREC tracks, (ii) a web test collection of an online search engine, and (iii) crowdsourced data collected for the INEX 2010 Book Search track. Our approach should assist research institutes, e.g. the National Institute of Standards and Technology (NIST), and commercial search engines, e.g. Google and Bing, in constructing test collections where there are large document collections and large query logs, but where economic constraints prohibit gathering comprehensive relevance judgments.
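To illustrate the contrast with majority voting, here is a schematic sketch: a plain majority vote versus a vote weighted by per-assessor accuracies. This is a hypothetical illustration, not the probabilistic model developed in the thesis; the `accuracy` values are assumed here, whereas a real system would estimate them from the data.

```python
import math
from collections import Counter

# Crowdsourced binary relevance labels per document: assessor -> label.
# (Illustrative data only.)
labels = {
    "doc1": {"a1": 1, "a2": 1, "a3": 0},
    "doc2": {"a1": 0, "a2": 1, "a3": 0},
}

# Hypothetical per-assessor accuracies; a real system would estimate
# these jointly with the true labels (e.g. via EM).
accuracy = {"a1": 0.9, "a2": 0.6, "a3": 0.8}

def majority_vote(doc_labels):
    # Baseline: most common label, ignoring assessor quality.
    return Counter(doc_labels.values()).most_common(1)[0][0]

def weighted_vote(doc_labels):
    # Weight each vote by the log-odds of its assessor's accuracy, so a
    # reliable assessor can outweigh several unreliable ones.
    score = sum((1 if y == 1 else -1)
                * math.log(accuracy[a] / (1 - accuracy[a]))
                for a, y in doc_labels.items())
    return 1 if score > 0 else 0

for doc, doc_labels in labels.items():
    print(doc, majority_vote(doc_labels), weighted_vote(doc_labels))
```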

1 citation

Journal ArticleDOI
TL;DR: A Jacobian smoothing algorithm is proposed for the solution of the box-constrained variational inequality problem VIP(l,u,F), based on a reformulation as a semismooth system of equations using the Fischer-Burmeister function.
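For reference, the Fischer-Burmeister function named in the TL;DR is a standard complementarity (NCP) function; its defining property is stated below. The paper's specific smoothing and algorithm are not reproduced, but smoothed variants of the form \(\varphi_\mu\) are typical in Jacobian smoothing methods.

```latex
% Fischer-Burmeister function and its key equivalence:
\varphi(a, b) = \sqrt{a^{2} + b^{2}} - a - b,
\qquad
\varphi(a, b) = 0 \;\Longleftrightarrow\; a \ge 0,\ b \ge 0,\ ab = 0.
% A typical smooth perturbation, with \mu driven to 0 by the algorithm:
\varphi_{\mu}(a, b) = \sqrt{a^{2} + b^{2} + 2\mu} - a - b, \qquad \mu > 0.
```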

1 citation