Author

Vishwanath Raman

Bio: Vishwanath Raman is an academic researcher from the University of California, Santa Cruz. The author has contributed to research in topics including Bisimulation and Combinatorial game theory, has an h-index of 14, and has co-authored 25 publications receiving 799 citations. Previous affiliations of Vishwanath Raman include Carnegie Mellon University and FireEye, Inc.

Papers
Proceedings ArticleDOI
08 Sep 2008
TL;DR: A system that computes quantitative values of trust for the text in Wikipedia articles; these trust values provide an indication of text reliability. It is shown that text labeled as low-trust has a significantly higher probability of being edited in the future than text labeled as high-trust.
Abstract: The Wikipedia is a collaborative encyclopedia: anyone can contribute to its articles simply by clicking on an "edit" button. The open nature of the Wikipedia has been key to its success, but has also created a challenge: how can readers develop an informed opinion on its reliability? We propose a system that computes quantitative values of trust for the text in Wikipedia articles; these trust values provide an indication of text reliability. The system uses as input the revision history of each article, as well as information about the reputation of the contributing authors, as provided by a reputation system. The trust of a word in an article is computed on the basis of the reputation of the original author of the word, as well as the reputation of all authors who edited text near the word. The algorithm computes word trust values that vary smoothly across the text; the trust values can be visualized using varying text-background colors. The algorithm ensures that all changes to an article's text are reflected in the trust values, preventing surreptitious content changes. We have implemented the proposed system, and we have used it to compute and display the trust of the text of thousands of articles of the English Wikipedia. To validate our trust-computation algorithms, we show that text labeled as low-trust has a significantly higher probability of being edited in the future than text labeled as high-trust.
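
As a rough illustration of the idea (a toy sketch with assumed parameters and update rule, not the authors' implementation), word trust can be seeded from the original author's reputation and then pulled toward the reputations of later editors, with an influence that fades smoothly with distance from the edit:

```python
# Toy sketch (illustrative, not the paper's algorithm): word trust as a
# blend of the original author's reputation and the reputations of later
# editors, weighted so that values vary smoothly across the text.

def word_trust(author_rep, edit_events, n_words, decay=0.5, lift=0.3):
    """Compute a trust value in [0, 1] for each word position.

    author_rep:  reputation in [0, 1] of each word's original author.
    edit_events: list of (position, editor_reputation) pairs for later
                 revisions that touched the text.
    decay, lift: hypothetical tuning parameters for this sketch.
    """
    # A word starts with the trust implied by its original author.
    trust = list(author_rep)
    for pos, rep in edit_events:
        for i in range(n_words):
            # Influence of an edit fades smoothly with distance,
            # so trust varies smoothly across the text.
            weight = decay ** abs(i - pos)
            # Text left in place by a reputable editor gains trust,
            # moving toward that editor's reputation.
            trust[i] += lift * weight * (rep - trust[i])
    return trust

if __name__ == "__main__":
    # Three low-reputation words; a high-reputation editor revises word 1.
    print(word_trust([0.2, 0.2, 0.2], [(1, 0.9)], 3))
```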

220 citations

Proceedings ArticleDOI
08 Sep 2008
TL;DR: The problem of measuring user contributions to versioned, collaborative bodies of information, such as wikis, is considered, along with various alternative criteria that take into account the quality of a contribution in addition to the quantity.
Abstract: We consider the problem of measuring user contributions to versioned, collaborative bodies of information, such as wikis. Measuring the contributions of individual authors can be used to divide revenue, to recognize merit, to award status promotions, and to choose the order of authors when citing the content. In the context of the Wikipedia, previous works on author contribution estimation have focused on two criteria: the total text created, and the total number of edits performed. We show that neither of these criteria works well: both techniques are vulnerable to manipulation, and the total-text criterion fails to reward people who polish or re-arrange the content. We consider and compare various alternative criteria that take into account the quality of a contribution, in addition to the quantity, and we analyze how the criteria differ in the way they rank authors according to their contributions. As an outcome of this study, we propose to adopt total edit longevity as a measure of author contribution. Edit longevity is resistant to simple attacks, since edits are counted towards an author's contribution only if other authors accept the contribution. Edit longevity equally rewards people who create content, and people who rearrange or polish the content. Finally, edit longevity distinguishes the people who contribute little (who have contribution close to zero) from spammers or vandals, whose contribution quickly grows negative.
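
A toy sketch of the edit-longevity idea (the scoring function here is an illustrative assumption, not the paper's exact formula): an edit contributes in proportion to how much of it survives subsequent revisions by other authors, so preserved work scores positively and reverted spam scores negatively:

```python
# Toy sketch (illustrative assumptions, not the paper's exact measure):
# an edit counts toward its author's contribution only to the extent that
# other authors keep it; quickly reverted edits drive the score negative.

def edit_longevity(edits):
    """edits: list of dicts with keys
         'author'   - who made the edit
         'size'     - amount of change introduced
         'survival' - fraction of the edit still present after later
                      revisions by other authors, in [0, 1]
    Returns a contribution score per author."""
    scores = {}
    for e in edits:
        # Map survival in [0, 1] to a quality in [-1, 1]:
        # fully preserved -> +1, fully reverted -> -1.
        quality = 2.0 * e["survival"] - 1.0
        scores[e["author"]] = scores.get(e["author"], 0.0) + e["size"] * quality
    return scores

if __name__ == "__main__":
    print(edit_longevity([
        {"author": "alice",   "size": 100, "survival": 0.95},  # lasting content
        {"author": "mallory", "size": 500, "survival": 0.02},  # reverted spam
    ]))
```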

130 citations

Book ChapterDOI
11 Sep 2012
TL;DR: The technique, named Psyco (Predicate-based SYmbolic COmpositional reasoning), employs a novel combination of the L* automata learning algorithm with symbolic execution, generating interfaces that capture whether a sequence of method invocations is safe, unsafe, or its effect on the component state is unresolved by the symbolic execution engine.
Abstract: Given a white-box component C with specified unsafe states, we address the problem of automatically generating an interface that captures safe orderings of invocations of C's public methods. Method calls in the generated interface are guarded by constraints on their parameters. Unlike previous work, these constraints are generated automatically through an iterative refinement process. Our technique, named Psyco (Predicate-based SYmbolic COmpositional reasoning), employs a novel combination of the L* automata learning algorithm with symbolic execution. The generated interfaces are three-valued, capturing whether a sequence of method invocations is safe, unsafe, or its effect on the component state is unresolved by the symbolic execution engine. We have implemented Psyco as a new prototype tool in the JPF open-source software model checking platform, and we have successfully applied it to several examples.
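
The three-valued flavor of the generated interfaces can be illustrated with a toy component (a hypothetical lock, not an example from the paper): each sequence of guarded method invocations is classified as safe, unsafe, or unknown, the last standing in for queries the symbolic execution engine cannot resolve:

```python
# Hypothetical toy component (not from the paper): 'use' is only safe
# after 'acquire', and 'acquire' takes an integer parameter. This sketch
# answers the kind of three-valued membership query an L*-style learner
# would pose; a real tool returns UNKNOWN when its symbolic execution
# engine cannot resolve a sequence's effect on the component state.

SAFE, UNSAFE, UNKNOWN = "safe", "unsafe", "unknown"

def classify(sequence):
    """sequence: list of (method, arg) invocations, e.g. ('acquire', 5)."""
    held = False
    for method, arg in sequence:
        if method == "acquire":
            if arg < 0:
                return UNKNOWN   # stand-in for an unresolved solver query
            held = True
        elif method == "use":
            if not held:
                return UNSAFE    # reaches the specified unsafe state
        elif method == "release":
            held = False
    return SAFE

if __name__ == "__main__":
    print(classify([("use", None)]))                  # unsafe
    print(classify([("acquire", 5), ("use", None)]))  # safe
    print(classify([("acquire", -1)]))                # unknown
```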

65 citations

Book ChapterDOI
02 Apr 2016
TL;DR: JDart is described, a dynamic symbolic analysis framework for Java that is able to handle NASA software with constraints containing bit operations, floating-point arithmetic, and complex arithmetic operations, e.g., trigonometric and nonlinear ones.
Abstract: We describe JDart, a dynamic symbolic analysis framework for Java. A distinguishing feature of JDart is its modular architecture: the main component that performs dynamic exploration communicates with a component that efficiently constructs constraints and that interfaces with constraint solvers. These components can easily be extended or modified to support multiple constraint solvers or different exploration strategies. Moreover, JDart has been engineered for robustness, driven by the need to handle complex NASA software. These characteristics, together with its recent open sourcing, make JDart an ideal platform for research and experimentation. In the current release, JDart supports the CORAL, SMTInterpol, and Z3 solvers, and is able to handle NASA software with constraints containing bit operations, floating-point arithmetic, and complex arithmetic operations, e.g., trigonometric and nonlinear ones. We illustrate how JDart has been used to support other analysis techniques, such as automated interface generation and testing of libraries. Finally, we demonstrate the versatility and effectiveness of JDart, and compare it with state-of-the-art dynamic or pure symbolic execution engines through an extensive experimental evaluation.
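
The core loop of dynamic symbolic (concolic) execution that engines like JDart implement can be sketched as follows. This is a minimal illustration: a brute-force search over small integers stands in for the real constraint solvers (CORAL, SMTInterpol, Z3) that JDart delegates to, and the program under test is a toy.

```python
# Minimal concolic-execution sketch: run the program on concrete inputs,
# record the branch outcomes along the path, then flip one branch at a
# time and search for inputs that take the other side.

def program(x):
    # Toy program under test; each condition is a recorded branch.
    path = []
    path.append(("x > 10", x > 10))
    if x > 10:
        path.append(("x % 2 == 0", x % 2 == 0))
    return path

def solve(prefix):
    """Brute-force stand-in for an SMT solver: find an input whose
    recorded path starts with the given branch outcomes."""
    for x in range(-100, 101):
        trace = [b for _, b in program(x)]
        if trace[:len(prefix)] == prefix:
            return x
    return None

def explore(seed):
    seen, worklist, inputs = set(), [[]], []
    while worklist:
        prefix = worklist.pop()
        x = solve(prefix) if prefix else seed
        if x is None or x in seen:
            continue
        seen.add(x)
        inputs.append(x)
        trace = [b for _, b in program(x)]
        # Schedule every one-branch deviation from this path.
        for i in range(len(trace)):
            worklist.append(trace[:i] + [not trace[i]])
    return inputs

if __name__ == "__main__":
    print(explore(seed=0))  # concrete inputs covering all feasible paths
```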

56 citations

Proceedings ArticleDOI
10 Jul 2007
TL;DR: Equivalences and metrics for two-player game structures are introduced, and it is shown that they characterize the difference in probability of winning games whose goals are expressed in the quantitative mu-calculus; it is claimed that these relations and metrics provide the canonical extension to games of the classical notion of bisimulation for transition systems.
Abstract: We consider two-player games played over finite state spaces for an infinite number of rounds. At each state, the players simultaneously choose moves; the moves determine a successor state. It is often advantageous for players to choose probability distributions over moves, rather than single moves. Given a goal (e.g., "reach a target state"), the question of winning is thus a probabilistic one: "what is the maximal probability of winning from a given state?". On these game structures, two fundamental notions are those of equivalences and metrics. Given a set of winning conditions, two states are equivalent if the players can win the same games with the same probability from both states. Metrics provide a bound on the difference in the probabilities of winning across states, capturing a quantitative notion of state "similarity". We introduce equivalences and metrics for two-player game structures, and we show that they characterize the difference in probability of winning games whose goals are expressed in the quantitative mu-calculus. The quantitative mu-calculus can express a large set of goals, including reachability, safety, and omega-regular properties. Thus, we claim that our relations and metrics provide the canonical extension to games of the classical notion of bisimulation for transition systems. We develop our results both for equivalences and metrics, which generalize bisimulation, and for asymmetrical versions, which generalize simulation.
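
In notation reconstructed from the abstract (the symbols are assumptions, not quoted from the paper), writing [[φ]](s) for the value of a quantitative mu-calculus goal φ at state s, the equivalence and the metric it induces can be stated as:

```latex
% Reconstruction from the abstract: two states are equivalent when every
% quantitative mu-calculus goal has the same value at both, and the metric
% is the largest difference in value over all such goals.
\[
  s \simeq t \;\iff\; \forall \varphi \in q\mu .\;
    [\![\varphi]\!](s) = [\![\varphi]\!](t),
  \qquad
  d(s,t) \;=\; \sup_{\varphi \in q\mu}
    \bigl|\, [\![\varphi]\!](s) - [\![\varphi]\!](t) \,\bigr|.
\]
```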

54 citations


Cited by
Journal ArticleDOI
TL;DR: A chapter outline of a real analysis text, from set theory and the topology of Cartesian spaces through functions of one variable to differentiation and integration in R^p.
Abstract: A Glimpse at Set Theory. The Real Numbers. The Topology of Cartesian Spaces. Convergence. Continuous Functions. Functions of One Variable. Infinite Series. Differentiation in R^p. Integration in R^p.

621 citations

Journal ArticleDOI
TL;DR: The alternating-time temporal logic (ATL), as discussed by the authors, is a more general variant of temporal logic that allows selective quantification over those paths that are possible outcomes of games, such as games in which the system and the environment alternate moves.
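
A standard ATL example (illustrative, not drawn from this entry): the coalition quantifier ranges only over those paths the coalition can enforce, regardless of how the other players move.

```latex
% Standard ATL notation: <<A>> phi holds when coalition A has a strategy
% that guarantees phi against all behaviors of the remaining players.
\[
  \langle\!\langle \mathit{sys} \rangle\!\rangle \,\Diamond\, \mathit{granted}
\]
% Reading: the system has a strategy so that, no matter how the
% environment moves, a state satisfying 'granted' is eventually reached.
```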

442 citations

Book ChapterDOI
01 Aug 2003

349 citations

Journal ArticleDOI
TL;DR: In this article, the authors study how to find proper splitting strategies for large-scale power systems using an ordered binary decision diagram (OBDD)-based three-phase method; a time-based layered structure of the problem-solving process is introduced to make the method more practical.
Abstract: The system splitting (SS) problem is to determine proper splitting points (also called splitting strategies) that split the entire interconnected transmission network into islands, ensuring generation/load balance and satisfaction of transmission capacity constraints when islanded operation of a system is unavoidable. For a large-scale power system, the SS problem is in general very complicated because the strategy space explodes combinatorially. This paper mainly studies how to find proper splitting strategies for large-scale power systems using an ordered binary decision diagram (OBDD)-based three-phase method. A time-based layered structure of the problem-solving process is then introduced to make the method more practical. Simulation results on the IEEE 30- and 118-bus networks show that this method can quickly produce proper splitting strategies. Further analysis indicates that the method is effective for larger-scale power systems.
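
To make the combinatorial explosion concrete, here is a toy brute-force formulation of the SS problem on a hypothetical 4-bus ring (illustration only; the paper's OBDD-based three-phase method exists precisely to avoid this kind of enumeration):

```python
# Toy SS problem: a splitting strategy is a set of lines to open; it is
# "proper" here if every resulting island roughly balances generation
# and load. The network and tolerance below are made-up examples.

from itertools import combinations

# Hypothetical 4-bus ring: bus -> net power (generation minus load), in MW.
power = {1: 50, 2: -30, 3: 40, 4: -60}
lines = [(1, 2), (2, 3), (3, 4), (4, 1)]

def islands(open_lines):
    """Connected components of the network with the given lines opened."""
    closed = [l for l in lines if l not in open_lines]
    comp = {b: {b} for b in power}
    for a, b in closed:
        merged = comp[a] | comp[b]
        for bus in merged:
            comp[bus] = merged
    return {frozenset(c) for c in comp.values()}

def proper(open_lines, tolerance=10):
    """Every island must balance generation and load within tolerance."""
    return all(abs(sum(power[b] for b in isl)) <= tolerance
               for isl in islands(open_lines))

if __name__ == "__main__":
    # The strategy space is 2^len(lines) -- the combinatorial explosion
    # that makes brute force hopeless on real grids.
    for k in range(len(lines) + 1):
        for cut in combinations(lines, k):
            if len(islands(cut)) > 1 and proper(cut):
                print("proper splitting strategy: open", cut)
```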

296 citations

Proceedings ArticleDOI
25 Oct 2009
TL;DR: It is shown that recent editing activity suggests that Wikipedia growth has slowed, and perhaps plateaued, indicating that it may have come up against its limits to growth.
Abstract: Prior research on Wikipedia has characterized the growth in content and editors as being fundamentally exponential in nature, extrapolating current trends into the future. We show that recent editing activity suggests that Wikipedia growth has slowed, and perhaps plateaued, indicating that it may have come up against its limits to growth. We measure growth, population shifts, and patterns of editor and administrator activities, contrasting these against past results where possible. Both the rate of page growth and the rate of editor growth have declined. As growth has declined, there are indicators of increased coordination and overhead costs, exclusion of newcomers, and resistance to new edits. We discuss some possible explanations for these new developments in Wikipedia, including decreased opportunities for sharing existing knowledge and increased bureaucratic stress on the socio-technical system itself.

262 citations