Author

Sharad Malik

Bio: Sharad Malik is an academic researcher from Imperial College London. The author has contributed to research on topics including the Large Hadron Collider and the Standard Model. The author has an h-index of 95 and has co-authored 615 publications receiving 37,258 citations. Previous affiliations of Sharad Malik include University at Buffalo and University of California, Berkeley.


Papers
Proceedings ArticleDOI
22 Jun 2001
TL;DR: The development of a new complete solver, Chaff, is described, which achieves significant performance gains through careful engineering of all aspects of the search, especially a particularly efficient implementation of Boolean constraint propagation (BCP) and a novel low-overhead decision strategy.
Abstract: Boolean satisfiability is probably the most studied of the combinatorial optimization/search problems. Significant effort has been devoted to trying to provide practical solutions to this problem for problem instances encountered in a range of applications in electronic design automation (EDA), as well as in artificial intelligence (AI). This study has culminated in the development of several SAT packages, both proprietary and in the public domain (e.g. GRASP, SATO), which find significant use in both research and industry. Most existing complete solvers are variants of the Davis-Putnam (DP) search algorithm. In this paper we describe the development of a new complete solver, Chaff, which achieves significant performance gains through careful engineering of all aspects of the search, especially a particularly efficient implementation of Boolean constraint propagation (BCP) and a novel low-overhead decision strategy. Chaff has been able to obtain one to two orders of magnitude performance improvement on difficult SAT benchmarks in comparison with other solvers (DP or otherwise), including GRASP and SATO.
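For readers unfamiliar with BCP, the sketch below illustrates the two-watched-literal propagation scheme that Chaff introduced. It is a minimal Python rendering with illustrative data structures (signed-integer literals, a watch map keyed by literal), not Chaff's actual implementation.

```python
# Minimal sketch of two-watched-literal BCP in the style of Chaff.
# Literals are signed ints (-3 means NOT x3); each clause watches its
# first two positions; `watches` maps a literal to the clauses watching it.

def value(lit, assign):
    """True/False if `lit` is assigned, None if free."""
    v = assign.get(abs(lit))
    return None if v is None else (v == (lit > 0))

def propagate(watches, assign, lit):
    """Run BCP after `lit` becomes true; return a conflict clause or None."""
    assign[abs(lit)] = (lit > 0)
    queue = [lit]
    while queue:
        p = queue.pop()
        pending, watches[-p] = watches.get(-p, []), []
        while pending:
            clause = pending.pop()
            if clause[0] == -p:                      # keep the false watch in slot 1
                clause[0], clause[1] = clause[1], clause[0]
            if value(clause[0], assign) is True:     # already satisfied: keep watch
                watches[-p].append(clause)
                continue
            for k in range(2, len(clause)):          # find a replacement watch
                if value(clause[k], assign) is not False:
                    clause[1], clause[k] = clause[k], clause[1]
                    watches.setdefault(clause[1], []).append(clause)
                    break
            else:                                    # no replacement: unit or conflict
                watches[-p].append(clause)
                v = value(clause[0], assign)
                if v is False:
                    watches[-p].extend(pending)      # put back untouched clauses
                    return clause                    # conflict clause
                if v is None:
                    assign[abs(clause[0])] = (clause[0] > 0)
                    queue.append(clause[0])          # unit implication

# Tiny usage: watch the first two literals of each (non-unit) clause.
watches, assign = {}, {}
for c in [[-1, 2], [-2, 3]]:
    for w in c[:2]:
        watches.setdefault(w, []).append(c)
print(propagate(watches, assign, 1), assign)   # setting x1 implies x2, then x3
```

The point of the scheme is that only clauses watching the newly falsified literal are ever visited, and no watch lists need updating when assignments are undone on backtracking.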

2,886 citations

Journal ArticleDOI
TL;DR: A power analysis technique is developed and applied to two commercial microprocessors; it can be employed to evaluate the power cost of embedded software, helping to verify whether a design meets its specified power constraints.
Abstract: Embedded computer systems are characterized by the presence of a dedicated processor and the software that runs on it. Power constraints are increasingly becoming the critical component of the design specification of these systems. At present, however, power analysis tools can only be applied at the lower levels of the design: the circuit or gate level. It is either impractical or impossible to use the lower-level tools to estimate the power cost of the software component of the system. This paper describes the first systematic attempt to model this power cost. A power analysis technique is developed that has been applied to two commercial microprocessors: the Intel 486DX2 and the Fujitsu SPARClite 934. This technique can be employed to evaluate the power cost of embedded software. This can help in verifying if a design meets its specified power constraints. Further, it can also be used to search the design space in software power optimization. Examples with power reductions of up to 40%, obtained by rewriting code using the information provided by the instruction-level power model, illustrate the potential of this idea.
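As a rough illustration of the instruction-level model the abstract describes, the sketch below charges each instruction a base energy cost plus a circuit-state overhead for each adjacent instruction pair. All coefficients and the helper program_energy are invented for illustration; they are not the paper's measured values.

```python
# Sketch of an instruction-level energy model in the spirit of the paper:
# per-instruction base current costs plus inter-instruction (circuit-state)
# overheads. ALL numbers here are made up for illustration.

BASE_COST = {"mov": 0.30, "add": 0.32, "mul": 0.55, "ld": 0.41}   # avg current, made up
PAIR_OVERHEAD = {("mov", "add"): 0.02, ("add", "mul"): 0.05}      # made up

def program_energy(trace, vdd=3.3, cycle_s=25e-9):
    """E = Vdd * (summed current) * cycle time, assuming one cycle per
    instruction for simplicity; the real model uses measured cycle counts."""
    current = 0.0
    for i, op in enumerate(trace):
        current += BASE_COST[op]
        if i:   # circuit-state overhead for the adjacent pair (0.03 = made-up default)
            pair = (trace[i - 1], op)
            current += PAIR_OVERHEAD.get(pair, PAIR_OVERHEAD.get(pair[::-1], 0.03))
    return vdd * current * cycle_s

print(program_energy(["mov", "add", "mul", "mov"]))   # energy of a short trace
```

Rewriting code to favor cheap instructions and low-overhead pairs is exactly the kind of optimization the abstract's 40% example refers to.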

1,055 citations

Proceedings ArticleDOI
04 Nov 2001
TL;DR: This paper generalizes various conflict-driven learning strategies in terms of different partitioning schemes of the implication graph, re-examines the learning techniques used in various SAT solvers, and proposes an array of new learning schemes.
Abstract: One of the most important features of current state-of-the-art SAT solvers is the use of conflict-based backtracking and learning techniques. In this paper, we generalize various conflict-driven learning strategies in terms of different partitioning schemes of the implication graph. We re-examine the learning techniques used in various SAT solvers and propose an array of new learning schemes. Extensive experiments with real-world examples show that the best-performing new learning scheme has at least a 2× speedup compared with the learning schemes employed in state-of-the-art SAT solvers.
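The sketch below shows one point in that design space: first-UIP learning, which resolves backward from the conflicting clause until exactly one literal assigned at the current decision level remains (the first unique implication point). It is a Python rendering of the standard algorithm; the reason/level/trail structures are assumed solver bookkeeping, not code from the paper.

```python
# First-UIP conflict analysis sketch. `trail` is the assignment order (true
# literals), `level[v]` the decision level of variable v, and `reason[v]` the
# clause that implied v (None for decision variables).

def analyze(conflict, reason, level, trail, current_level):
    seen, learned = set(), []
    clause, idx, counter = conflict, len(trail) - 1, 0
    while True:
        for lit in clause:
            v = abs(lit)
            if v in seen:
                continue
            seen.add(v)
            if level[v] == current_level:
                counter += 1            # current-level literal: resolve it away
            else:
                learned.append(lit)     # earlier-level literal: keep it
        while abs(trail[idx]) not in seen:   # next marked literal on the trail
            idx -= 1
        pivot = trail[idx]
        counter -= 1
        if counter == 0:                # pivot is the first UIP
            learned.append(-pivot)      # asserting literal, negated
            return learned              # the learned conflict clause
        # resolve the current clause with the reason clause for the pivot
        clause = [l for l in reason[abs(pivot)] if abs(l) != abs(pivot)]
        idx -= 1
```

Other cuts of the implication graph (last UIP, all decision variables, and so on) correspond to stopping this resolution loop at different points, which is the space of schemes the paper explores.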

848 citations

Journal ArticleDOI
S. Chatrchyan, Vardan Khachatryan, Albert M. Sirunyan, A. Tumasyan +2,268 more · 158 institutions
TL;DR: In this article, the transverse momentum balance in dijet and γ/Z+jets events is used to measure the jet energy response in the CMS detector, as well as the transverse momentum resolution.
Abstract: Measurements of the jet energy calibration and transverse momentum resolution in CMS are presented, performed with a data sample collected in proton-proton collisions at a centre-of-mass energy of 7 TeV, corresponding to an integrated luminosity of 36 pb−1. The transverse momentum balance in dijet and γ/Z+jets events is used to measure the jet energy response in the CMS detector, as well as the transverse momentum resolution. The results are presented for three different methods to reconstruct jets: a calorimeter-based approach; the "Jet-Plus-Track" approach, which improves the measurement of calorimeter jets by exploiting the associated tracks; and the "Particle Flow" approach, which attempts to reconstruct each particle in the event individually, prior to jet clustering, based on information from all relevant subdetectors.
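For reference, the transverse momentum balance methods mentioned above are conventionally expressed as follows; these are the standard definitions (dijet asymmetry, the relative response extracted from its mean, and the photon/Z response ratio), not formulas copied from the paper itself.

```latex
% Standard pT-balance definitions (illustrative, not quoted from the paper):
A = \frac{p_{T}^{\mathrm{jet},1} - p_{T}^{\mathrm{jet},2}}
         {p_{T}^{\mathrm{jet},1} + p_{T}^{\mathrm{jet},2}},
\qquad
R_{\mathrm{rel}} = \frac{1 + \langle A \rangle}{1 - \langle A \rangle},
\qquad
R_{\gamma/Z} = \frac{p_{T}^{\mathrm{jet}}}{p_{T}^{\gamma/Z}}
```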

750 citations

Proceedings ArticleDOI
18 Nov 2002
TL;DR: Orion is presented, a power-performance interconnection network simulator that is capable of providing detailed power characteristics, in addition to performance characteristics, to enable rapid power- performance trade-offs at the architectural-level.
Abstract: With the prevalence of server blades and systems-on-a-chip (SoCs), interconnection networks are becoming an important part of the microprocessor landscape. However, there is limited tool support available for their design. While performance simulators have been built that enable performance estimation while varying network parameters, these cover only one metric of interest in modern designs. System power consumption is increasingly becoming equally, if not more, important than performance. It is now critical to get detailed power-performance trade-off information early in the microarchitectural design cycle. This is especially so as interconnection networks consume a significant fraction of total system power. It is exactly this gap that the work presented in this paper aims to fill. We present Orion, a power-performance interconnection network simulator that is capable of providing detailed power characteristics, in addition to performance characteristics, to enable rapid power-performance trade-offs at the architectural level. This capability is provided within a general framework that builds a simulator starting from a microarchitectural specification of the interconnection network. A key component of this construction is the architectural-level parameterized power models that we have derived as part of this effort. Using component power models and a synthesized efficient power (and performance) simulator, a microarchitect can rapidly explore the design space. As case studies, we demonstrate the use of Orion in determining optimal system parameters, in examining the effect of diverse traffic conditions, as well as in evaluating new network microarchitectures. In each of the above, the ability to simultaneously monitor power and performance is key in determining suitable microarchitectures.
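The toy model below mimics the structure of such architectural power models: per-component energies per flit (buffer, crossbar, arbiter) scaled by activity, plus leakage. Every coefficient is invented for illustration; Orion derives its parameters from technology and microarchitectural inputs.

```python
# Toy router power model in the spirit of Orion: dynamic power is energy per
# operation times activity rate, plus static leakage. All values are made up.

E_BUFFER_RW = 1.2e-12   # J per flit buffer read+write (made up)
E_XBAR      = 0.9e-12   # J per flit crossbar traversal (made up)
E_ARBITER   = 0.1e-12   # J per arbitration (made up)
P_LEAK      = 2.0e-3    # W static leakage (made up)

def router_power(flits_per_cycle, freq_hz=1e9):
    """Average router power for a given accepted flit rate (flits/cycle)."""
    rate = flits_per_cycle * freq_hz                    # flits per second
    dynamic = rate * (E_BUFFER_RW + E_XBAR + E_ARBITER)
    return dynamic + P_LEAK

# Sweep injection load to expose the power side of the trade-off.
for load in (0.1, 0.3, 0.5, 0.8):
    print(f"load {load:.1f} flits/cycle -> {router_power(load) * 1e3:.2f} mW")
```

Coupling such a model to a cycle-level performance simulator is what lets a microarchitect monitor power and performance simultaneously, as the abstract describes.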

743 citations


Cited by
Journal ArticleDOI
TL;DR: In this paper, results are presented from searches for the standard model Higgs boson in proton-proton collisions at 7 and 8 TeV in the CMS experiment at the LHC, using data samples corresponding to integrated luminosities of up to 5.1 fb−1 at 7 TeV and 5.3 fb−1 at 8 TeV; an excess of events is observed at a mass near 125 GeV with a local significance of 5.0 standard deviations (5.8 expected).

8,857 citations

Journal ArticleDOI
01 Jan 2015
TL;DR: This paper presents a comprehensive survey of software-defined networking (SDN), laying out the key building blocks of an SDN infrastructure in a bottom-up, layered approach, with an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications.
Abstract: The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms—with a focus on aspects such as resiliency, scalability, performance, security, and dependability—as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.
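The control/data-plane split at the heart of the survey can be illustrated in a few lines: the switch below does nothing but match packets against a flow table, and a toy controller installs rules through an in-process "southbound" call. Class and method names here are hypothetical, not any real controller's API.

```python
# Toy illustration of SDN's separation of concerns: the data plane is a pure
# match-action table lookup; policy lives in a logically centralized
# controller that programs the switch on a table miss. Purely illustrative.

class Switch:
    def __init__(self, controller):
        self.flow_table = {}              # match (dst) -> action (output port)
        self.controller = controller

    def install_rule(self, dst, port):    # "southbound" call: controller -> switch
        self.flow_table[dst] = port

    def forward(self, packet):
        dst = packet["dst"]
        if dst in self.flow_table:        # data plane: table lookup only
            return f"out port {self.flow_table[dst]}"
        self.controller.packet_in(self, packet)   # miss: ask the control plane
        return self.forward(packet)

class Controller:
    def __init__(self, policy):
        self.policy = policy              # "northbound": application-defined intent

    def packet_in(self, switch, packet):
        switch.install_rule(packet["dst"], self.policy[packet["dst"]])

ctrl = Controller(policy={"10.0.0.2": 3})
sw = Switch(ctrl)
print(sw.forward({"dst": "10.0.0.2"}))    # miss -> rule installed -> "out port 3"
```

Changing the policy object changes network behavior without touching the switch, which is the programmability argument the abstract makes.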

3,589 citations

Book ChapterDOI
05 May 2003
TL;DR: This article presents a small, complete, and efficient SAT-solver in the style of conflict-driven learning, as exemplified by Chaff, and includes, among other things, a mechanism for adding arbitrary Boolean constraints.
Abstract: In this article, we present a small, complete, and efficient SAT-solver in the style of conflict-driven learning, as exemplified by Chaff. We aim to give sufficient details about implementation to enable the reader to construct his or her own solver in a very short time. This will allow users of SAT-solvers to make domain-specific extensions or adaptations of current state-of-the-art SAT techniques to meet the needs of a particular application area. The presented solver is designed with this in mind, and includes among other things a mechanism for adding arbitrary Boolean constraints. It also supports solving a series of related SAT problems efficiently through an incremental SAT interface.
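The incremental interface mentioned above is typically used as sketched below: the clause database is loaded once, and the solver is queried repeatedly under different assumption sets so learned clauses carry over between queries. The example uses the PySAT bindings (an assumption: the python-sat package is installed); the add_clause/solve-with-assumptions shape mirrors the design the paper describes.

```python
# Incremental SAT usage sketch with PySAT's MiniSat backend
# (pip install python-sat). Variables are positive ints; -x means NOT x.

from pysat.solvers import Minisat22

with Minisat22() as solver:
    solver.add_clause([1, 2])                   # x1 OR x2
    solver.add_clause([-1, 3])                  # NOT x1 OR x3
    # Same solver instance, two related queries:
    print(solver.solve(assumptions=[1]))        # SAT with x1 = True?  -> True
    print(solver.get_model())                   # a satisfying assignment
    print(solver.solve(assumptions=[-2, -3]))   # x2 = x3 = False forces a
                                                # contradiction -> False
```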

2,985 citations

Journal Article
TL;DR: In this paper, the ATLAS experiment is described as installed in its experimental cavern at point 1 at CERN, and a brief overview of the expected performance of the detector is given.
Abstract: This paper describes the ATLAS experiment as installed in its experimental cavern at point 1 at CERN. It also presents a brief overview of the expected performance of the detector.

2,798 citations