Author

Yechiam Yemini

Bio: Yechiam Yemini is an academic researcher from Columbia University. The author has contributed to research in topics: Network management & Quality of service. The author has an h-index of 32 and has co-authored 86 publications receiving 4,397 citations.


Papers
Journal Article (DOI)
TL;DR: The authors describe a network management system and illustrate its application to managing a distributed database application on a complex enterprise network.
Abstract: The authors describe a network management system and illustrate its application to managing a distributed database application on a complex enterprise network.

404 citations

Patent
27 Jun 2006
TL;DR: In this patent, proxy software is provided for user computers, one or more proxy computers, or both, so that users can communicate with vendors anonymously over the network, arrange delivery of an ordered good, and arrange electronic payment, all while securing the user's private information.
Abstract: This patent describes e-commerce that secures the private and personal information of purchasers/users. Such e-commerce may include delivery of a good ordered or purchased over a network (e.g., the Internet) to a purchaser/user, and/or arranging electronic payment for the good, while securing the user's private and personal information, which may include the user's identity and address (and those of the user's computer) and financial information. E-commerce transactions include the purchasing or otherwise ordering of goods electronically by a user, who may be a consumer or retail customer, and the delivery of goods to a shipping or electronic address designated by the user, or to a physical or virtual depot for pick-up by the user, while providing complete anonymity of the user with respect to an electronic vendor, who may be a merchant or retailer. Proxy software is provided for user computers, one or more proxy computers, or both, so that users can communicate with vendors anonymously over the network, arrange delivery of an ordered good, and arrange electronic payment, while securing the user's private information. Delivery of a good includes shipping from a vendor to a depot using the depot's name and address, and then either re-shipping from the depot to an address designated by the user (which is withheld from the vendor) or holding the good at the depot for anonymous pick-up. The proxy software provides for a proxy party to deal with the vendor and arrange payment from a bank or credit card company to the vendor based on an account that the proxy party has with the bank. The proxy party is not required where the purchaser/user is provided with a transaction identity by a third party, such as a bank, which masks the true identity of the purchaser; the purchaser's identity and shipping address are, however, known to the bank.
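The flow the patent describes can be pictured with a short sketch. The following Python is a hypothetical illustration only, assuming invented class and field names (Order, Vendor, ProxyParty, depot_address); it is not code from the patent:

```python
# Hypothetical sketch of the proxy flow described above; every class
# and field name here is illustrative, not taken from the patent.
from dataclasses import dataclass

@dataclass
class Order:
    item: str
    ship_to: str   # the only address the vendor ever sees
    payer: str     # the only payment account the vendor ever sees

class Vendor:
    def __init__(self):
        self._orders: dict[str, Order] = {}
    def accept(self, order: Order) -> str:
        oid = f"o{len(self._orders)}"
        self._orders[oid] = order
        return oid

class ProxyParty:
    """Stands between user and vendor, holding the depot address and
    its own payment account so the vendor never learns user data."""
    def __init__(self, depot_address: str, proxy_account: str):
        self.depot_address = depot_address
        self.proxy_account = proxy_account
        self._pending: dict[str, str] = {}   # order id -> real address

    def place_order(self, user_address: str, item: str, vendor: Vendor) -> str:
        order = Order(item, ship_to=self.depot_address, payer=self.proxy_account)
        oid = vendor.accept(order)          # vendor sees depot + proxy account
        self._pending[oid] = user_address   # only the proxy keeps this mapping
        return oid

    def on_depot_delivery(self, oid: str) -> str:
        # Re-ship to the address withheld from the vendor, or hold
        # the good at the depot for anonymous pick-up.
        return f"re-ship to {self._pending.pop(oid)}"

proxy = ProxyParty("Depot, 1 Warehouse Way", "proxy-bank-acct")
oid = proxy.place_order("alice@home", "book", Vendor())
print(proxy.on_depot_delivery(oid))   # 're-ship to alice@home'
```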

364 citations

Proceedings Article (DOI)
30 May 1995
TL;DR: MbD provides a paradigm for distributed, flexible, scalable and robust network management that overcomes the key limitations of current centralized management schemes.
Abstract: This paper introduces a novel approach to distributed computing based on delegation agents and describes its application to decentralizing network management. Delegation agents are programs that can be dispatched to remote processes, dynamically linked, and executed under local or remote control. Unlike scripted agents, delegation agent programs may be written in arbitrary languages, interpreted or compiled. They can thus be more broadly applied to handle such tasks as real-time monitoring, analysis, and control of network resources. Distributed management by delegation (MbD) uses this capability to manage remote elements and domains. MbD provides a paradigm for distributed, flexible, scalable, and robust network management that overcomes the key limitations of current centralized management schemes.
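To make the delegation idea concrete, here is a minimal sketch, assuming hypothetical names (ManagedElement, delegate, invoke): instead of polling raw management variables across the network, the manager ships a small program to the element, which runs it locally and returns only the digested result. The paper's agents may be compiled code in arbitrary languages; Python stands in here only to show the control flow:

```python
# Minimal sketch of management by delegation (MbD): the manager
# delegates a program to the managed element, which executes it
# locally so that only the result crosses the network.
from typing import Callable, Any

class ManagedElement:
    """A network element that accepts delegated agents."""
    def __init__(self, mib: dict):
        self.mib = mib                          # local management data
        self.agents: dict[str, Callable] = {}

    def delegate(self, name: str, agent: Callable[[dict], Any]) -> None:
        # In the paper, agents may be interpreted or compiled code,
        # dynamically linked into the element's runtime.
        self.agents[name] = agent

    def invoke(self, name: str) -> Any:
        # Execute under local control; only the answer is returned.
        return self.agents[name](self.mib)

# Manager side: define a health check once, push it near the data.
def overload_check(mib: dict) -> bool:
    return mib["pkts_dropped"] / max(mib["pkts_in"], 1) > 0.01

router = ManagedElement({"pkts_in": 52_000, "pkts_dropped": 830})
router.delegate("overloaded?", overload_check)
print(router.invoke("overloaded?"))   # True: ~1.6% drop rate
```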

283 citations

Book
01 Apr 1996
TL;DR: A macroscopic view of distributed computer systems reveals the complexity of the resources and services they provide; the satisfaction of users and the performance of applications is determined by the simultaneous allocation of multiple resources.
Abstract: With the advances in computer and networking technology, thousands of heterogeneous computers can be interconnected to provide a large collection of computing and communication resources. These systems are used by a growing and increasingly heterogeneous set of users. A macroscopic view of distributed computer systems reveals the complexity of the organization and management of the resources and services they provide. This complexity arises from size (e.g., number of systems, number of users) and heterogeneity in applications (e.g., on-line transaction processing, multimedia, intelligent information search) and resources (CPU, memory, bandwidth, locks, naming services). The complexity of resource allocation is further increased by several factors. First, in many distributed systems the resources are in fact owned by multiple organizations. Second, the satisfaction of users and the performance of applications is determined by the simultaneous allocation of multiple resources. A multimedia server application requires I/O bandwidth to retrieve content, CPU time to execute server logic and communication protocols, and networking bandwidth to deliver the content to clients. The performance of applications may also be altered by trading one resource for another. For example, the multimedia server application may perform better by releasing memory and acquiring higher CPU priority. This trade may result in smaller buffers for I/O and networking but improve the performance.
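The memory-for-CPU trade in the example above can be illustrated with a toy calculation. The performance model and all numbers below are invented for illustration; they are not from the book:

```python
# Toy illustration of the resource trade described above: the
# performance model and the numbers are invented, not from the book.
def throughput(cpu_share: float, buffer_mb: int) -> float:
    """Hypothetical multimedia-server model: more CPU raises the
    service rate; buffers below 64 MB impose a proportional penalty."""
    return 100 * cpu_share * min(1.0, buffer_mb / 64)

before = throughput(cpu_share=0.30, buffer_mb=128)   # 30.0
# Trade: release memory (smaller buffers) for higher CPU priority.
after = throughput(cpu_share=0.50, buffer_mb=48)     # 37.5
print(before, after)   # in this toy model, the trade pays off
```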

239 citations

Book Chapter (DOI)
S. Klinger, S. Yemini, Yechiam Yemini, D. Ohsie, Salvatore J. Stolfo
01 Jan 1995
TL;DR: Preliminary benchmarks of the SEMS demonstrate that the coding approach provides a speedup of at least two orders of magnitude over other published correlation systems and scales well to very large domains involving thousands of problems.
Abstract: This paper describes a novel approach to event correlation in networks based on coding techniques. Observable symptom events are viewed as a code that identifies the problems that caused them; correlation is performed by decoding the set of observed symptoms. The coding approach has been implemented in the SMARTS Event Management System (SEMS), as a server running under Sun Solaris 2.3. Preliminary benchmarks of the SEMS demonstrate that the coding approach provides a speedup of at least two orders of magnitude over other published correlation systems. In addition, it is resilient to high rates of symptom loss and false alarms. Finally, the coding approach scales well to very large domains involving thousands of problems.
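A minimal sketch of the coding approach, with an invented codebook (the problem names and symptom vectors are illustrative, not SEMS data): each problem is encoded as the set of symptoms it causes, and correlation decodes an observed symptom vector by nearest Hamming distance, which also shows why the method tolerates lost symptoms and false alarms:

```python
# Sketch of codebook correlation: each problem's "code" is the
# symptom vector it causes; decoding finds the nearest code.
CODEBOOK = {                      # problem -> symptom bit-vector
    "link_down":    (1, 1, 0, 0, 1),
    "router_crash": (1, 0, 1, 1, 0),
    "db_overload":  (0, 0, 1, 0, 1),
}

def hamming(a, b) -> int:
    return sum(x != y for x, y in zip(a, b))

def decode(observed) -> str:
    # Nearest-code decoding tolerates lost symptoms and false
    # alarms, matching the resilience claim in the abstract.
    return min(CODEBOOK, key=lambda p: hamming(CODEBOOK[p], observed))

# One symptom lost (first bit) plus one false alarm (fourth bit):
print(decode((0, 1, 0, 1, 1)))    # still decodes to 'link_down'
```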

239 citations


Cited by
Journal Article (DOI)
01 Jan 2015
TL;DR: This paper presents an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications, and presents the key building blocks of an SDN infrastructure using a bottom-up, layered approach.
Abstract: The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms—with a focus on aspects such as resiliency, scalability, performance, security, and dependability—as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.
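The control/data-plane separation at the heart of the survey can be sketched in a few lines. The classes and the toy southbound API below are assumptions for illustration (this is not OpenFlow or any real controller): switches only match packets against installed rules, while the controller holds the network-wide policy:

```python
# Minimal sketch of control/data-plane separation; names and the
# toy "southbound API" are illustrative, not a real SDN stack.
class Switch:
    """Data plane: matches packets against installed rules only."""
    def __init__(self, name: str):
        self.name = name
        self.flow_table: dict[str, str] = {}   # match -> output port

    def install_rule(self, match: str, out_port: str) -> None:
        self.flow_table[match] = out_port      # toy southbound call

    def forward(self, dst: str) -> str:
        return self.flow_table.get(dst, "send-to-controller")

class Controller:
    """Control plane: network-wide policy lives here, not in switches."""
    def __init__(self, topology: dict):
        self.topology = topology               # dst -> (switch, port)

    def push_routes(self, switches: dict) -> None:
        for dst, (sw_name, port) in self.topology.items():
            switches[sw_name].install_rule(dst, port)

switches = {"s1": Switch("s1")}
Controller({"10.0.0.2": ("s1", "port2")}).push_routes(switches)
print(switches["s1"].forward("10.0.0.2"))   # 'port2'
print(switches["s1"].forward("10.0.0.9"))   # 'send-to-controller'
```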

3,589 citations

01 Apr 1997
TL;DR: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind, emphasizing the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity.
Abstract: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind. The emphasis is on the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity. Topics covered include an introduction to the concepts in cryptography, attacks against cryptographic systems, key use and handling, random bit generation, encryption modes, and message authentication codes. Recommendations on algorithms and further reading are given at the end of the paper. This paper should enable the reader to build, understand, and evaluate system descriptions and designs based on the cryptographic components described in the paper.
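One of the paper's topics, message authentication codes, can be shown with Python's standard library (hmac and hashlib are real stdlib modules; the fixed key here is a simplification, since real systems need proper key generation and management):

```python
# Message authentication code, one topic the paper covers, using
# Python's standard library; key management is simplified here.
import hashlib
import hmac

key = b"shared-secret-key"          # in practice: random, well-managed
msg = b"transfer 100 to account 42"

tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key: bytes, msg: bytes, tag: str) -> bool:
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, tag)

print(verify(key, msg, tag))                            # True
print(verify(key, b"transfer 999 to account 42", tag))  # False
```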

2,188 citations

Posted Content
TL;DR: Software-Defined Networking (SDN), as discussed by the authors, is an emerging paradigm that promises to overcome the rigidity of traditional IP networks by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network.
Abstract: Software-Defined Networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound APIs, network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms -- with a focus on aspects such as resiliency, scalability, performance, security and dependability -- as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.

1,968 citations

Proceedings Article
01 Jan 2005
TL;DR: This paper outlines the basic design and functionality of Borealis, and presents a highly flexible and scalable QoS-based optimization model that operates across server and sensor networks and a new fault-tolerance model with flexible consistency-availability trade-offs.
Abstract: Borealis is a second-generation distributed stream processing engine that is being developed at Brandeis University, Brown University, and MIT. Borealis inherits core stream processing functionality from Aurora [14] and distribution functionality from Medusa [51]. Borealis modifies and extends both systems in non-trivial and critical ways to provide advanced capabilities that are commonly required by newly-emerging stream processing applications. In this paper, we outline the basic design and functionality of Borealis. Through sample real-world applications, we motivate the need for dynamically revising query results and modifying query specifications. We then describe how Borealis addresses these challenges through an innovative set of features, including revision records, time travel, and control lines. Finally, we present a highly flexible and scalable QoS-based optimization model that operates across server and sensor networks and a new fault-tolerance model with flexible consistency-availability trade-offs.
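One Borealis feature named above, revision records, can be sketched as follows. The operator structure is invented for illustration (not the Borealis API): when an upstream tuple is corrected, the operator revises its result instead of treating the stream as append-only:

```python
# Sketch of one Borealis idea, revision records: a corrected
# upstream tuple triggers a revised downstream result. The operator
# structure here is invented for illustration.
class SumOperator:
    """Running sum over a stream that honors revision records."""
    def __init__(self):
        self.values: dict[int, float] = {}   # tuple id -> value
        self.total = 0.0

    def insert(self, tid: int, value: float) -> float:
        self.values[tid] = value
        self.total += value
        return self.total

    def revise(self, tid: int, new_value: float) -> float:
        # A revision record replaces an earlier tuple's value and
        # propagates a corrected result downstream.
        self.total += new_value - self.values[tid]
        self.values[tid] = new_value
        return self.total

op = SumOperator()
op.insert(1, 10.0)
print(op.insert(2, 5.0))    # 15.0
print(op.revise(1, 12.0))   # 17.0 after the revision record
```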

1,533 citations

Proceedings Article (DOI)
21 Oct 2001
TL;DR: Experimental results from a prototype confirm that the system adapts to offered load and resource availability, and can reduce server energy usage by 29% or more for a typical Web workload.
Abstract: Internet hosting centers serve multiple service sites from a common hardware base. This paper presents the design and implementation of an architecture for resource management in a hosting center operating system, with an emphasis on energy as a driving resource management issue for large server clusters. The goals are to provision server resources for co-hosted services in a way that automatically adapts to offered load, improve the energy efficiency of server clusters by dynamically resizing the active server set, and respond to power supply disruptions or thermal events by degrading service in accordance with negotiated Service Level Agreements (SLAs). Our system is based on an economic approach to managing shared server resources, in which services "bid" for resources as a function of delivered performance. The system continuously monitors load and plans resource allotments by estimating the value of their effects on service performance. A greedy resource allocation algorithm adjusts resource prices to balance supply and demand, allocating resources to their most efficient use. A reconfigurable server switching infrastructure directs request traffic to the servers assigned to each service. Experimental results from a prototype confirm that the system adapts to offered load and resource availability, and can reduce server energy usage by 29% or more for a typical Web workload.
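The economic allocation idea can be sketched with a greedy allocator over bid curves. The function and all bid values below are invented for illustration and are not from the paper's prototype:

```python
# Sketch of the economic approach: services "bid" for resource
# units, and a greedy allocator gives each unit to the highest
# marginal bid. Bid curves and numbers are invented.
def greedy_allocate(bids: dict[str, list[float]], units: int) -> dict[str, int]:
    """bids[s][k] = value of the (k+1)-th unit to service s
    (assumed diminishing). Returns units allocated per service."""
    alloc = {s: 0 for s in bids}
    for _ in range(units):
        # Pick the service whose next unit has the highest value.
        best = max(bids, key=lambda s: bids[s][alloc[s]]
                   if alloc[s] < len(bids[s]) else float("-inf"))
        alloc[best] += 1
    return alloc

bids = {"siteA": [9.0, 6.0, 2.0],    # diminishing marginal value
        "siteB": [7.0, 3.0, 1.0]}
print(greedy_allocate(bids, units=4))  # {'siteA': 2, 'siteB': 2}
```

Under diminishing bids, this greedy rule allocates each unit to its most valuable use; servers left idle by the allocation are the ones the paper's system would power down to save energy.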

1,492 citations