Institution

Worcester Polytechnic Institute

Education | Worcester, Massachusetts, United States
About: Worcester Polytechnic Institute is an education organization based in Worcester, Massachusetts, United States. It is known for research contributions in the topics of Computer science and Population. The organization has 6270 authors who have published 12704 publications receiving 332081 citations. The organization is also known as WPI.


Papers
Proceedings ArticleDOI
20 May 2007
TL;DR: These results show that Trojans that are 3-4 orders of magnitude smaller than the main circuit can be detected by signal processing techniques and provide a starting point to address this important problem.
Abstract: Hardware manufacturers are increasingly outsourcing their IC fabrication work overseas due to their much lower cost structure. This poses a significant security risk for ICs used for critical military and business applications. Attackers can exploit this loss of control to substitute Trojan ICs for genuine ones or insert a Trojan circuit into the design or mask used for fabrication. We show that a technique borrowed from side-channel cryptanalysis can be used to mitigate this problem. Our approach uses noise modeling to construct a set of fingerprints for an IC family utilizing side-channel information such as power, temperature, and electromagnetic (EM) profiles. The set of fingerprints can be developed using a few ICs from a batch, and only these ICs would have to be invasively tested to ensure that they were all authentic. The remaining ICs are verified using statistical tests against the fingerprints. We describe the theoretical framework and show that this approach is viable by presenting preliminary results obtained from power simulations performed on representative circuits with several different Trojan circuits. These results show that Trojans that are 3-4 orders of magnitude smaller than the main circuit can be detected by signal processing techniques. While scaling our technique to detect even smaller Trojans in complex ICs with tens or hundreds of millions of transistors would require certain modifications to the IC design process, our results provide a starting point to address this important problem.
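
The following sketch illustrates the general fingerprinting idea described above, not the authors' exact statistical procedure: power traces from a few invasively verified ICs define a fingerprint, and chips whose traces deviate too far from it are flagged. The trace data, subspace dimension, and threshold rule are hypothetical choices made for illustration.

```python
# Minimal sketch of side-channel fingerprinting for Trojan detection.
# Illustrative simplification: the fingerprint is a low-dimensional subspace
# fitted to power traces of a few authenticated ICs; a chip is flagged when
# too much of its trace energy falls outside that subspace. The subspace
# dimension and 3x threshold are hypothetical.
import numpy as np

def build_fingerprint(genuine_traces: np.ndarray, n_components: int = 5):
    """genuine_traces: (n_chips, n_samples) power traces from authenticated ICs."""
    mean = genuine_traces.mean(axis=0)
    centered = genuine_traces - mean
    # Principal subspace of the genuine process variation (via SVD).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                      # (n_components, n_samples)
    # Residual energies of the genuine chips calibrate the detection threshold.
    resid = centered - centered @ basis.T @ basis
    threshold = 3.0 * np.sqrt((resid ** 2).sum(axis=1)).max()
    return mean, basis, threshold

def is_suspect(trace: np.ndarray, mean, basis, threshold) -> bool:
    """Flag a chip whose power trace deviates from the genuine fingerprint."""
    centered = trace - mean
    resid = centered - basis.T @ (basis @ centered)
    return np.sqrt((resid ** 2).sum()) > threshold
```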

741 citations

Proceedings ArticleDOI
13 Apr 2010
TL;DR: This paper uniquely integrates the technique of proxy re-encryption with CP-ABE, enabling the authority to delegate most of the laborious tasks to proxy servers, and shows that the proposed scheme is provably secure against chosen-ciphertext attacks.
Abstract: Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is a promising cryptographic primitive for fine-grained access control of shared data. In CP-ABE, each user is associated with a set of attributes and data are encrypted with access structures on attributes. A user is able to decrypt a ciphertext if and only if his attributes satisfy the ciphertext access structure. Besides this basic property, practical applications usually have other requirements. In this paper we focus on the important issue of attribute revocation, which is cumbersome for CP-ABE schemes. In particular, we resolve this challenging issue by considering more practical scenarios in which semi-trustable on-line proxy servers are available. Compared to existing schemes, our proposed solution enables the authority to revoke user attributes with minimal effort. We achieve this by uniquely integrating the technique of proxy re-encryption with CP-ABE, enabling the authority to delegate most of the laborious tasks to proxy servers. Formal analysis shows that our proposed scheme is provably secure against chosen-ciphertext attacks. In addition, we show that our technique is also applicable to the Key-Policy Attribute-Based Encryption (KP-ABE) counterpart.
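
Full CP-ABE relies on pairing-based cryptography and is not reproduced here; the sketch below only illustrates the decryption condition stated in the abstract, namely that a user's attribute set must satisfy the ciphertext's access structure. The policy grammar and all identifiers are hypothetical, and the proxy re-encryption step used for revocation is not shown.

```python
# Illustrative sketch of the CP-ABE decryption condition only: a user can
# decrypt iff their attribute set satisfies the ciphertext's access structure.
# The nested AND/OR policy encoding and the attribute names are hypothetical.
from typing import Union

Policy = Union[str, tuple]  # an attribute name, or ("AND" | "OR", policy, policy)

def satisfies(attrs: set, policy: Policy) -> bool:
    """Return True if the attribute set satisfies the access structure."""
    if isinstance(policy, str):
        return policy in attrs
    op, left, right = policy
    if op == "AND":
        return satisfies(attrs, left) and satisfies(attrs, right)
    if op == "OR":
        return satisfies(attrs, left) or satisfies(attrs, right)
    raise ValueError(f"unknown operator: {op}")

# Example policy: ("doctor" AND "cardiology") OR "admin".
policy = ("OR", ("AND", "doctor", "cardiology"), "admin")
print(satisfies({"doctor", "cardiology"}, policy))  # True  -> can decrypt
print(satisfies({"doctor"}, policy))                # False -> cannot decrypt
```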

720 citations

Journal ArticleDOI
TL;DR: A new deep neural network called DeepONet can learn various mathematical operators with small generalization error, including explicit operators such as integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations.
Abstract: It is widely known that neural networks (NNs) are universal approximators of continuous functions. However, a less known but powerful result is that an NN with a single hidden layer can accurately approximate any nonlinear continuous operator. This universal approximation theorem of operators is suggestive of the structure and potential of deep neural networks (DNNs) in learning continuous operators or complex systems from streams of scattered data. Here, we thus extend this theorem to DNNs. We design a new network with small generalization error, the deep operator network (DeepONet), which consists of a DNN for encoding the discrete input function space (branch net) and another DNN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, such as integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. We study different formulations of the input function space and their effect on the generalization error for 16 diverse applications. Neural networks are known as universal approximators of continuous functions, but they can also approximate any mathematical operator (mapping a function to another function), which is an important capability for complex systems such as robotics control. A new deep neural network called DeepONet can learn various mathematical operators with small generalization error.
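
A minimal sketch of the branch/trunk architecture described in the abstract is given below, assuming PyTorch; the layer sizes, activation choice, and added bias term are illustrative and not the paper's exact configuration.

```python
# Minimal DeepONet sketch: the branch net encodes an input function sampled
# at m fixed sensor points, the trunk net encodes a query location y, and the
# operator value G(u)(y) is their dot product plus a learned bias.
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, m_sensors: int, y_dim: int = 1, p: int = 64, width: int = 128):
        super().__init__()
        def mlp(in_dim):
            return nn.Sequential(
                nn.Linear(in_dim, width), nn.Tanh(),
                nn.Linear(width, width), nn.Tanh(),
                nn.Linear(width, p),
            )
        self.branch = mlp(m_sensors)   # encodes u(x_1), ..., u(x_m)
        self.trunk = mlp(y_dim)        # encodes the output location y
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, y):
        # u_sensors: (batch, m_sensors), y: (batch, y_dim)
        b = self.branch(u_sensors)                             # (batch, p)
        t = self.trunk(y)                                      # (batch, p)
        return (b * t).sum(dim=-1, keepdim=True) + self.bias   # G(u)(y)

# Usage sketch (e.g., learning the antiderivative operator G(u)(y) = integral
# of u from 0 to y, with u sampled at 100 sensor points); training loop omitted.
model = DeepONet(m_sensors=100)
u = torch.randn(32, 100)   # batch of discretized input functions
y = torch.rand(32, 1)      # query locations
out = model(u, y)          # (32, 1) predicted operator values
```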

675 citations

Journal ArticleDOI
TL;DR: In this article, the authors address several issues related to the use of data envelopment analysis (DEA), including model orientation, input and output selection/definition, use of mixed and raw data, and number of inputs and outputs to use versus the number of DMUs.
Abstract: In this paper, we address several issues related to the use of data envelopment analysis (DEA). These issues include model orientation, input and output selection/definition, the use of mixed and raw data, and the number of inputs and outputs to use versus the number of decision making units (DMUs). We believe that within the DEA community, researchers, practitioners, and reviewers may have concerns and, in many cases, incorrect views about these issues. Some of the concerns stem from what is perceived as being the purpose of the DEA exercise. While the DEA frontier can rightly be viewed as a production frontier, it must be remembered that ultimately DEA is a method for performance evaluation and benchmarking against best-practice. DEA can be viewed as a tool for multiple-criteria evaluation problems where DMUs are alternatives and each DMU is represented by its performance in multiple criteria which are coined/classified as DEA inputs and outputs. The purpose of this paper is to offer some clarification and direction on these matters.
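
To make the input/output framing above concrete, the sketch below solves a standard input-oriented, constant-returns-to-scale DEA (CCR envelopment) model with SciPy; the two-input, one-output data are made up for illustration and are not from the paper.

```python
# Input-oriented CCR (constant returns to scale) DEA in envelopment form:
# for DMU o, minimize theta subject to
#   sum_j lambda_j * x_ij <= theta * x_io   (each input i)
#   sum_j lambda_j * y_rj >= y_ro           (each output r)
#   lambda_j >= 0
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency of DMU o. X: (n_inputs, n_dmus), Y: (n_outputs, n_dmus)."""
    n_in, n = X.shape
    n_out = Y.shape[0]
    # Decision variables z = [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input rows: sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(n_in)
    # Output rows: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((n_out, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.fun  # theta* in (0, 1]; 1 means DMU o lies on the frontier

# Hypothetical data: five DMUs, two inputs (e.g., staff, budget), one output.
X = np.array([[4.0, 7.0, 8.0, 4.0, 2.0],
              [3.0, 3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(X.shape[1])])
```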

654 citations

Journal ArticleDOI
TL;DR: In this paper, the authors review a particular class of intrinsically soft, elastomeric robots: those powered via fluidic pressurization.
Abstract: The emerging field of soft robotics makes use of many classes of materials including metals, low glass transition temperature (Tg) plastics, and high Tg elastomers. Depending on the specific design, all of these materials may result in extrinsically soft robots. Organic elastomers, however, have elastic moduli ranging from tens of megapascals down to kilopascals; robots composed of such materials are intrinsically soft: they are always compliant, independent of their shape. This class of soft machines has been used to reduce control complexity and manufacturing cost of robots, while enabling sophisticated and novel functionalities, often in direct contact with humans. This review focuses on a particular type of intrinsically soft, elastomeric robot: those powered via fluidic pressurization.

653 citations


Authors

Showing all 6336 results

Name | H-index | Papers | Citations
Andrew G. Clark | 140 | 823 | 123333
Ming Li | 103 | 1669 | 62672
Joseph Sarkis | 101 | 482 | 45116
Arthur C. Graesser | 95 | 614 | 38549
Kevin J. Harrington | 85 | 682 | 33625
Kui Ren | 83 | 501 | 32490
Bart Preneel | 82 | 844 | 25572
Ming-Hui Chen | 82 | 525 | 29184
Yuguang Fang | 79 | 572 | 20715
Wenjing Lou | 77 | 311 | 29405
Bernard Lown | 73 | 330 | 20320
Joe Zhu | 72 | 231 | 19017
Y.S. Lin | 71 | 304 | 16100
Kevin Talbot | 71 | 268 | 15669
Christof Paar | 69 | 399 | 21790
Network Information
Related Institutions (5)

Georgia Institute of Technology | 119K papers, 4.6M citations | 94% related
Carnegie Mellon University | 104.3K papers, 5.9M citations | 93% related
Massachusetts Institute of Technology | 268K papers, 18.2M citations | 91% related
University of Illinois at Urbana–Champaign | 225.1K papers, 10.1M citations | 91% related
Purdue University | 163.5K papers, 5.7M citations | 91% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 26
2022 | 95
2021 | 763
2020 | 836
2019 | 761
2018 | 703