Showing papers by "Carnegie Mellon University" published in 2002


Journal ArticleDOI
TL;DR: This paper provides a formal protection model named k-anonymity and a set of accompanying policies for deployment, and examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless the accompanying policies are respected.
Abstract: Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.
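The k-anonymity requirement quoted above is easy to check mechanically once the quasi-identifying fields are fixed. The sketch below is not from the paper; it is a minimal Python illustration, with a made-up table and a hypothetical is_k_anonymous helper, of what "each combination of quasi-identifier values appears at least k times" means.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True iff every combination of quasi-identifier values that appears
    in the release is shared by at least k records."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(c >= k for c in counts.values())

release = [
    {"zip": "152**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "152**", "age": "30-39", "diagnosis": "asthma"},
    {"zip": "152**", "age": "30-39", "diagnosis": "flu"},
]
print(is_k_anonymous(release, ["zip", "age"], k=3))   # True: one group of size 3
```

In practice the hard part, addressed by systems such as Datafly and µ-Argus, is choosing generalizations so that this test passes with minimal distortion of the data.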

7,925 citations


Journal ArticleDOI
TL;DR: In this paper, the authors discuss the discounted utility (DU) model, its historical development, underlying assumptions, and "anomalies" - the empirical regularities that are inconsistent with its theoretical predictions.
Abstract: This paper discusses the discounted utility (DU) model: its historical development, underlying assumptions, and "anomalies" - the empirical regularities that are inconsistent with its theoretical predictions. We then summarize the alternate theoretical formulations that have been advanced to address these anomalies. We also review three decades of empirical research on intertemporal choice, and discuss reasons for the spectacular variation in implicit discount rates across studies. Throughout the paper, we stress the importance of distinguishing time preference, per se, from many other considerations that also influence intertemporal choices.
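For readers unfamiliar with the DU model being critiqued, the toy computation below spells out its standard form, U = sum over t of delta^t u(c_t), and the kind of implicit discount rate the abstract refers to. The log utility function and the dollar amounts are illustrative assumptions, not taken from the paper.

```python
import math

def discounted_utility(consumption, delta, u=lambda c: math.log(1 + c)):
    """The DU model's standard form: U = sum_t delta**t * u(c_t)."""
    return sum(delta ** t * u(c) for t, c in enumerate(consumption))

print(discounted_utility([50, 50, 50], delta=0.9))

# An "implicit discount rate" of the kind compared across studies: someone
# indifferent between $100 now and $150 in a year (with linear utility)
# reveals delta = 100/150, i.e. an annual discount rate of 50%.
print(100 / 150, 150 / 100 - 1)
```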

5,242 citations


Journal ArticleDOI
TL;DR: This paper introduces to the neuroscience literature statistical procedures for controlling the false discovery rate (FDR) and demonstrates this approach using both simulations and functional magnetic resonance imaging data from two simple experiments.
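The FDR procedure this TL;DR refers to is, in its simplest form, the Benjamini-Hochberg step-up rule. Below is a short, generic implementation for independent tests; the variable names and the example p-values are mine, and the paper's neuroimaging-specific considerations (e.g., dependence across voxels) are not modeled.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: reject the hypotheses with the
    smallest p-values up to the largest i such that p_(i) <= (i/m) * q.
    Returns a boolean rejection mask in the original order."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = q * (np.arange(1, m + 1) / m)
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        cutoff = np.max(np.where(below)[0])   # largest index passing the test
        reject[order[: cutoff + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.30], q=0.05))
# [ True  True False False False]
```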

4,838 citations


Journal ArticleDOI
TL;DR: The Sloan Digital Sky Survey (SDSS) is an imaging and spectroscopic survey that will eventually cover approximately one-quarter of the celestial sphere and collect spectra of ≈10^6 galaxies, 100,000 quasars, 30,000 stars, and 30,000 serendipity targets as discussed by the authors.
Abstract: The Sloan Digital Sky Survey (SDSS) is an imaging and spectroscopic survey that will eventually cover approximately one-quarter of the celestial sphere and collect spectra of ≈10^6 galaxies, 100,000 quasars, 30,000 stars, and 30,000 serendipity targets. In 2001 June, the SDSS released to the general astronomical community its early data release, roughly 462 deg^2 of imaging data including almost 14 million detected objects and 54,008 follow-up spectra. The imaging data were collected in drift-scan mode in five bandpasses (u, g, r, i, and z); our 95% completeness limits for stars are 22.0, 22.2, 22.2, 21.3, and 20.5, respectively. The photometric calibration is reproducible to 5%, 3%, 3%, 3%, and 5%, respectively. The spectra are flux- and wavelength-calibrated, with 4096 pixels from 3800 to 9200 Å at R ≈ 1800. We present the means by which these data are distributed to the astronomical community, descriptions of the hardware used to obtain the data, the software used for processing the data, the measured quantities for each observed object, and an overview of the properties of this data set.

2,422 citations


Book ChapterDOI
09 Jun 2002
TL;DR: In this article, the authors propose a solution based on DAML-S, a DAML-based language for service description, and show how service capabilities are presented in the Profile section of a DAML-S description and how a semantic match between advertisements and requests is performed.
Abstract: The Web is moving from being a collection of pages toward a collection of services that interoperate through the Internet. The first step toward this interoperation is the location of other services that can help toward the solution of a problem. In this paper we claim that location of web services should be based on the semantic match between a declarative description of the service being sought, and a description of the service being offered. Furthermore, we claim that this match is outside the representation capabilities of registries such as UDDI and languages such as WSDL. We propose a solution based on DAML-S, a DAML-based language for service description, and we show how service capabilities are presented in the Profile section of a DAML-S description and how a semantic match between advertisements and requests is performed.
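As a rough illustration of the capability matching the abstract describes, the sketch below ranks how well an advertised output concept satisfies a requested one using subsumption in a toy class hierarchy. The ONTOLOGY dictionary and the exact/plug-in/subsumes/fail labels follow the spirit of the paper, but the precise matching rules used for DAML-S profiles are more refined than this.

```python
# Toy class hierarchy: child -> parent (a stand-in for a DAML ontology).
ONTOLOGY = {"Sedan": "Car", "Car": "Vehicle", "Vehicle": None}

def ancestors(cls):
    out = []
    while cls is not None:
        out.append(cls)
        cls = ONTOLOGY.get(cls)
    return out

def degree_of_match(advertised_output, requested_output):
    """Rank how well an advertised output concept satisfies a request."""
    if advertised_output == requested_output:
        return "exact"
    if advertised_output in ancestors(requested_output):
        return "plug-in"      # advertised output is more general than requested
    if requested_output in ancestors(advertised_output):
        return "subsumes"     # advertised output is more specific than requested
    return "fail"

print(degree_of_match("Vehicle", "Car"))   # plug-in
print(degree_of_match("Sedan", "Car"))     # subsumes
```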

2,412 citations


Proceedings ArticleDOI
22 Jul 2002
TL;DR: An extension of Hoare logic that permits reasoning about low-level imperative programs that use shared mutable data structure is developed, including extensions that permit unrestricted address arithmetic, dynamically allocated arrays, and recursive procedures.
Abstract: In joint work with Peter O'Hearn and others, based on early ideas of Burstall, we have developed an extension of Hoare logic that permits reasoning about low-level imperative programs that use shared mutable data structure. The simple imperative programming language is extended with commands (not expressions) for accessing and modifying shared structures, and for explicit allocation and deallocation of storage. Assertions are extended by introducing a "separating conjunction" that asserts that its subformulas hold for disjoint parts of the heap, and a closely related "separating implication". Coupled with the inductive definition of predicates on abstract data structures, this extension permits the concise and flexible description of structures with controlled sharing. In this paper, we survey the current development of this program logic, including extensions that permit unrestricted address arithmetic, dynamically allocated arrays, and recursive procedures. We also discuss promising future directions.
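To make the separating conjunction concrete, here is a small semantic toy (not the paper's proof system): heaps are Python dictionaries from addresses to values, "addr |-> val" holds only of a one-cell heap, and P * Q holds when the heap splits into two disjoint parts satisfying P and Q. All names are illustrative.

```python
from itertools import combinations

def points_to(addr, val):
    """The assertion 'addr |-> val': true only of the one-cell heap {addr: val}."""
    return lambda heap: heap == {addr: val}

def sep(p, q):
    """Separating conjunction P * Q: the heap can be split into two disjoint
    parts, one satisfying P and the other satisfying Q."""
    def holds(heap):
        addrs = list(heap)
        for r in range(len(addrs) + 1):
            for part in combinations(addrs, r):
                h1 = {a: heap[a] for a in part}
                h2 = {a: v for a, v in heap.items() if a not in part}
                if p(h1) and q(h2):
                    return True
        return False
    return holds

heap = {10: 3, 11: 4}
print(sep(points_to(10, 3), points_to(11, 4))(heap))   # True: disjoint cells
print(sep(points_to(10, 3), points_to(10, 3))(heap))   # False: the cell cannot be shared
```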

2,348 citations


Journal ArticleDOI
TL;DR: Kraut et al. (1998) reported negative effects of Internet use on social involvement and psychological well-being among new Internet users in 1995-96; a 3-year follow-up reported here finds that those negative effects dissipated.
Abstract: Kraut et al. (1998) reported negative effects of using the Internet on social involvement and psychological well-being among new Internet users in 1995–96. We called the effects a “paradox” because participants used the Internet heavily for communication, which generally has positive effects. A 3-year follow-up of 208 of these respondents found that negative effects dissipated. We also report findings from a longitudinal survey in 1998–99 of 406 new computer and television purchasers. This sample generally experienced positive effects of using the Internet on communication, social involvement, and well-being. However, consistent with a “rich get richer” model, using the Internet predicted better outcomes for extraverts and those with more social support but worse outcomes for introverts and those with less support.

2,064 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe the algorithm that selects the main sample of galaxies for spectroscopy in the Sloan Digital Sky Survey (SDSS) from the photometric data obtained by the imaging survey.
Abstract: We describe the algorithm that selects the main sample of galaxies for spectroscopy in the Sloan Digital Sky Survey (SDSS) from the photometric data obtained by the imaging survey. Galaxy photometric properties are measured using the Petrosian magnitude system, which measures flux in apertures determined by the shape of the surface brightness profile. The metric aperture used is essentially independent of cosmological surface brightness dimming, foreground extinction, sky brightness, and the galaxy central surface brightness. The main galaxy sample consists of galaxies with r-band Petrosian magnitudes r ≤ 17.77 and r-band Petrosian half-light surface brightnesses μ50 ≤ 24.5 mag arcsec-2. These cuts select about 90 galaxy targets per square degree, with a median redshift of 0.104. We carry out a number of tests to show that (1) our star-galaxy separation criterion is effective at eliminating nearly all stellar contamination while removing almost no genuine galaxies, (2) the fraction of galaxies eliminated by our surface brightness cut is very small (~0.1%), (3) the completeness of the sample is high, exceeding 99%, and (4) the reproducibility of target selection based on repeated imaging scans is consistent with the expected random photometric errors. The main cause of incompleteness is blending with saturated stars, which becomes more significant for brighter, larger galaxies. The SDSS spectra are of high enough signal-to-noise ratio (S/N > 4 per pixel) that essentially all targeted galaxies (99.9%) yield a reliable redshift (i.e., with statistical error less than 30 km s-1). About 6% of galaxies that satisfy the selection criteria are not observed because they have a companion closer than the 55'' minimum separation of spectroscopic fibers, but these galaxies can be accounted for in statistical analyses of clustering or galaxy properties. The uniformity and completeness of the galaxy sample make it ideal for studies of large-scale structure and the characteristics of the galaxy population in the local universe.
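The two headline sample cuts are simple enough to state as code. The function below is only a paraphrase of the quoted thresholds; it ignores the star-galaxy separation, bright-star blending, and fiber-collision issues that the paper analyzes.

```python
def in_main_galaxy_sample(r_petro, mu50):
    """Apply the two main-sample cuts quoted in the abstract:
    r-band Petrosian magnitude r <= 17.77 and r-band half-light surface
    brightness mu50 <= 24.5 mag arcsec^-2 (smaller numbers are brighter)."""
    return r_petro <= 17.77 and mu50 <= 24.5

print(in_main_galaxy_sample(17.2, 23.1))   # True: passes both cuts
print(in_main_galaxy_sample(18.0, 23.1))   # False: too faint in r
```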

1,933 citations


Proceedings ArticleDOI
28 Jul 2002
TL;DR: FastSLAM as discussed by the authors is an algorithm that recursively estimates the full posterior distribution over robot pose and landmark locations, yet scales logarithmically with the number of landmarks in the map.
Abstract: The ability to simultaneously localize a robot and accurately map its surroundings is considered by many to be a key prerequisite of truly autonomous robots. However, few approaches to this problem scale up to handle the very large number of landmarks present in real environments. Kalman filter-based algorithms, for example, require time quadratic in the number of landmarks to incorporate each sensor observation. This paper presents FastSLAM, an algorithm that recursively estimates the full posterior distribution over robot pose and landmark locations, yet scales logarithmically with the number of landmarks in the map. This algorithm is based on an exact factorization of the posterior into a product of conditional landmark distributions and a distribution over robot paths. The algorithm has been run successfully on as many as 50,000 landmarks, environments far beyond the reach of previous approaches. Experimental results demonstrate the advantages and limitations of the FastSLAM algorithm on both simulated and real-world data.
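The factorization the abstract describes can be pictured as a data structure: each particle carries one sampled robot pose plus an independent small Gaussian per landmark, so an observation touches only one landmark's filter. The toy below assumes direct noisy (x, y) landmark measurements so the Kalman update stays two-dimensional; the real FastSLAM uses range-bearing measurements, a pose proposal step, and resampling, none of which are shown.

```python
import numpy as np

class Particle:
    """One FastSLAM-style particle: a pose hypothesis plus an independent
    Gaussian (mean, covariance) per landmark -- the per-landmark factorization."""
    def __init__(self, pose):
        self.pose = np.array(pose, dtype=float)   # (x, y, heading)
        self.landmarks = {}                       # id -> (mean 2-vector, 2x2 covariance)
        self.weight = 1.0

    def observe(self, lid, z, R=np.eye(2) * 0.1):
        """Update only landmark `lid` from a direct (x, y) observation z with noise R."""
        z = np.asarray(z, dtype=float)
        if lid not in self.landmarks:
            self.landmarks[lid] = (z.copy(), R.copy())   # initialize on first sighting
            return
        mean, P = self.landmarks[lid]
        K = P @ np.linalg.inv(P + R)                     # Kalman gain
        self.landmarks[lid] = (mean + K @ (z - mean), (np.eye(2) - K) @ P)

p = Particle([0.0, 0.0, 0.0])
p.observe(7, [2.1, 3.0])
p.observe(7, [1.9, 3.2])
print(p.landmarks[7][0])   # landmark 7's mean moves toward the measurements
```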

1,912 citations


Proceedings ArticleDOI
23 Sep 2002
TL;DR: Ariadne, a secure on-demand routing protocol for ad hoc networks that prevents attackers or compromised nodes from tampering with uncompromised routes and resists many types of denial-of-service attacks, using only efficient symmetric cryptographic primitives.
Abstract: An ad hoc network is a group of wireless mobile computers (or nodes), in which individual nodes cooperate by forwarding packets for each other to allow nodes to communicate beyond direct wireless transmission range. Prior research in ad hoc networking has generally studied the routing problem in a non-adversarial setting, assuming a trusted environment. In this paper, we present attacks against routing in ad hoc networks, and we present the design and performance evaluation of a new secure on-demand ad hoc network routing protocol, called Ariadne. Ariadne prevents attackers or compromised nodes from tampering with uncompromised routes consisting of uncompromised nodes, and also prevents a large number of types of Denial-of-Service attacks. In addition, Ariadne is efficient, using only highly efficient symmetric cryptographic primitives.

1,829 citations


Journal ArticleDOI
TL;DR: This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity and shows that Datafly can over-distort data and µ-Argus can additionally fail to provide adequate protection.
Abstract: Often a data holder, such as a hospital or bank, needs to share person-specific records in such a way that the identities of the individuals who are the subjects of the data cannot be determined. One way to achieve this is to have the released records adhere to k-anonymity, which means each released record has at least (k-1) other records in the release whose values are indistinct over those fields that appear in external data. So, k-anonymity provides privacy protection by guaranteeing that each released record will relate to at least k individuals even if the records are directly linked to external information. This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity. Generalization involves replacing (or recoding) a value with a less specific but semantically consistent value. Suppression involves not releasing a value at all. The Preferred Minimal Generalization Algorithm (MinGen), which is a theoretical algorithm presented herein, combines these techniques to provide k-anonymity protection with minimal distortion. The real-world algorithms Datafly and µ-Argus are compared to MinGen. Both Datafly and µ-Argus use heuristics to make approximations, and so, they do not always yield optimal results. It is shown that Datafly can over-distort data and µ-Argus can additionally fail to provide adequate protection.
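A greatly simplified, Datafly-flavored illustration of "generalization until k-anonymity holds": truncate ZIP codes one digit at a time and stop at the first level at which the k-anonymity test passes. The attribute choice, the domain generalization hierarchies, and the suppression of outliers used by the real Datafly and MinGen algorithms are all omitted; the data and helper names are made up.

```python
from collections import Counter

def k_anonymous(rows, qis, k):
    counts = Counter(tuple(r[q] for q in qis) for r in rows)
    return all(c >= k for c in counts.values())

def generalize_zip(rows, level):
    """Replace the last `level` digits of each ZIP code with '*'."""
    out = []
    for r in rows:
        z = r["zip"]
        out.append({**r, "zip": z[: len(z) - level] + "*" * level if level else z})
    return out

def datafly_like(rows, k, max_level=5):
    """Greedily coarsen ZIP codes until the release is k-anonymous."""
    for level in range(max_level + 1):
        candidate = generalize_zip(rows, level)
        if k_anonymous(candidate, ["zip", "age"], k):
            return candidate, level
    return None, None

rows = [{"zip": "15213", "age": "30-39"}, {"zip": "15217", "age": "30-39"},
        {"zip": "15232", "age": "30-39"}]
release, level = datafly_like(rows, k=3)
print(level, {r["zip"] for r in release})   # 2  {'152**'}
```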

Journal ArticleDOI
TL;DR: This work examines data from two major open source projects, the Apache web server and the Mozilla browser, and quantifies aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution intervals for these OSS projects.
Abstract: According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine data from two major open source projects, the Apache web server and the Mozilla browser. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution intervals for these OSS projects. We develop several hypotheses by comparing the Apache project with several commercial projects. We then test and refine several of these hypotheses, based on an analysis of Mozilla data. We conclude with thoughts about the prospects for high-performance commercial/open source process hybrids.

Journal ArticleDOI
TL;DR: In this article, the authors apply the axioms of revealed preference to the altruistic actions of subjects and find that over 98% of the subjects made choices that are consistent with utility maximization.
Abstract: Subjects in economic laboratory experiments have clearly expressed an interest in behaving unselfishly. They cooperate in prisoners’ dilemma games, they give to public goods, and they leave money on the table when bargaining. While some are tempted to call this behavior irrational, economists should ask if this unselfish and altruistic behavior is indeed self-interested. That is, can subjects’ concerns for altruism or fairness be expressed in the economists’ language of a well-behaved preference ordering? If so, then behavior is consistent and meets our definition of rationality. This paper explores this question by applying the axioms of revealed preference to the altruistic actions of subjects. If subjects adhere to these axioms, such as GARP, then we can infer that a continuous, convex, and monotonic utility function could have generated their choices. This means that an economic model is sufficient to understand the data and that, in fact, altruism is rational. We do this by offering subjects several opportunities to share a surplus with another anonymous subject. However, the costs of sharing and the surplus available vary across decisions. This price and income variation creates budgets for altruistic activity that allow us to test for an underlying preference ordering. We found that subjects exhibit a significant degree of rationally altruistic behavior. Over 98% of our subjects made choices that are consistent with utility maximization. Only a quarter of subjects are selfish money-maximizers, and the rest show varying degrees of altruism. Perhaps most strikingly, almost half of the subjects exhibited behavior that is exactly consistent with one of three standard CES utility functions: perfectly selfish, perfect substitutes, or Leontief. Those with Leontief preferences are always dividing the surplus equally, while those with perfect substitutes preferences give everything away when the price of giving is less than one, but keep everything when the price of giving is greater than one. Using the data on choices, we estimated a population of utility functions and applied these to predict the results of other studies. We found that our results could successfully characterize the outcomes of other studies, indicating still further that altruism can be captured in an economic model.
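The revealed-preference test the abstract applies can be sketched compactly: build the direct revealed-preference relation from observed prices and chosen bundles, take its transitive closure, and flag a GARP violation whenever one bundle is (indirectly) revealed preferred to another that is strictly directly revealed preferred to it. This generic checker is not the authors' code; the two-observation example is a textbook violation, not their data.

```python
import numpy as np

def violates_garp(prices, bundles):
    """prices, bundles: (n_obs, n_goods) arrays of observed prices and choices."""
    p = np.asarray(prices, float)
    x = np.asarray(bundles, float)
    n = len(p)
    expenditure = p @ x.T                                       # expenditure[i, j] = p_i . x_j
    direct = expenditure.diagonal()[:, None] >= expenditure     # x_i directly revealed preferred to x_j
    revealed = direct.copy()
    for k in range(n):                                          # Warshall transitive closure
        revealed |= revealed[:, [k]] & revealed[[k], :]
    strict = expenditure.diagonal()[:, None] > expenditure      # strict direct revealed preference
    # GARP fails if x_i is revealed preferred to x_j while x_j is strictly
    # directly revealed preferred to x_i.
    return bool(np.any(revealed & strict.T))

prices  = [[1, 2], [2, 1]]
bundles = [[0, 2], [2, 0]]
print(violates_garp(prices, bundles))   # True: each choice is strictly revealed preferred to the other
```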

Proceedings ArticleDOI
20 May 2002
TL;DR: Between October 2000 and December 2000, a database of over 40,000 facial images of 68 people was collected using the CMU 3D Room, imaging each person across 13 different poses, under 43 different illumination conditions, and with four different expressions.
Abstract: Between October 2000 and December 2000, we collected a database of over 40,000 facial images of 68 people. Using the CMU (Carnegie Mellon University) 3D Room, we imaged each person across 13 different poses, under 43 different illumination conditions, and with four different expressions. We call this database the CMU Pose, Illumination and Expression (PIE) database. In this paper, we describe the imaging hardware, the collection procedure, the organization of the database, several potential uses of the database, and how to obtain the database.

Journal ArticleDOI
TL;DR: Performance and cost models of an amine (MEA)-based CO2 absorption system for postcombustion flue gas applications have been developed and integrated with an existing power plant modeling framework that includes multipollutant control technologies for other regulated emissions.
Abstract: Capture and sequestration of CO2 from fossil fuel power plants is gaining widespread interest as a potential method of controlling greenhouse gas emissions. Performance and cost models of an amine (MEA)-based CO2 absorption system for postcombustion flue gas applications have been developed and integrated with an existing power plant modeling framework that includes multipollutant control technologies for other regulated emissions. The integrated model has been applied to study the feasibility and cost of carbon capture and sequestration at both new and existing coal-burning power plants. The cost of carbon avoidance was shown to depend strongly on assumptions about the reference plant design, details of the CO2 capture system design, interactions with other pollution control systems, and method of CO2 storage. The CO2 avoidance cost for retrofit systems was found to be generally higher than for new plants, mainly because of the higher energy penalty resulting from less efficient heat integration as well ...
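The cost comparison the abstract describes is usually summarized by the cost of CO2 avoided, computed against the reference plant. The one-line formula below uses that standard definition with placeholder numbers; it is not output of the authors' integrated model.

```python
def cost_of_co2_avoided(coe_capture, coe_ref, em_capture, em_ref):
    """$/tonne CO2 avoided = (COE_capture - COE_ref) / (em_ref - em_capture),
    with cost of electricity in $/MWh and emissions in tonnes CO2 per MWh."""
    return (coe_capture - coe_ref) / (em_ref - em_capture)

# Placeholder figures, purely illustrative:
print(cost_of_co2_avoided(coe_capture=75.0, coe_ref=50.0,
                          em_capture=0.10, em_ref=0.80))   # ~35.7 $/tonne avoided
```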

Book
01 Sep 2002
TL;DR: This lecture maps the concepts and templates explored in this tutorial with well-known architectural prescriptions, including the 4+1 approach of the Rational Unified Process, the Siemens Four Views approach, and the ANSI/IEEE-1471-2000 recommended best practice for documenting architectures for software-intensive systems.
Abstract: This lecture maps the concepts and templates explored in this tutorial with well-known architectural prescriptions, including the 4+1 approach of the Rational Unified Process, the Siemens Four Views approach, and the ANSI/IEEE-1471-2000 recommended best practice for documenting architectures for software-intensive systems. The lecture concludes by re-capping the highlights of the tutorial, and asking for feedback.

Book ChapterDOI
27 Jul 2002
TL;DR: This paper describes version 2 of the NuSMV tool, a state-of-the-art symbolic model checker designed to be applicable in technology transfer projects and to be robust and close to industrial systems standards.
Abstract: This paper describes version 2 of the NuSMV tool. NuSMV is a symbolic model checker originated from the reengineering, reimplementation and extension of SMV, the original BDD-based model checker developed at CMU [15]. The NuSMV project aims at the development of a state-of-the-art symbolic model checker, designed to be applicable in technology transfer projects: it is a well structured, open, flexible and documented platform for model checking, and is robust and close to industrial systems standards [6].

Journal ArticleDOI
TL;DR: This paper found that women were more likely than men to use coping strategies that involve verbal expressions to others or the self: seeking emotional support, ruminating about problems, and using positive self-talk.
Abstract: We used meta-analysis to examine recent studies of sex differences in coping. Women were more likely than men to engage in most coping strategies. The strongest effects showed that women were more likely to use strategies that involved verbal expressions to others or the self—to seek emotional support, ruminate about problems, and use positive self-talk. These sex differences were consistent across studies, supporting a dispositional level hypothesis. Other sex differences were dependent on the nature of the stressor, supporting role constraint theory. We also examined whether stressor appraisal (i.e., women's tendencies to appraise stressors as more severe) accounted for sex differences in coping. We found some support for this idea. To circumvent this issue, we provide some data on relative coping. These data demonstrate that sex differences in relative coping are more in line with our intuitions about the differences in the ways men and women cope with distress.

Journal ArticleDOI
TL;DR: This work derives a sequence of analytical results which show that the reconstruction constraints provide less and less useful information as the magnification factor increases, and proposes a super-resolution algorithm which attempts to recognize local features in the low-resolution images and then enhances their resolution in an appropriate manner.
Abstract: Nearly all super-resolution algorithms are based on the fundamental constraints that the super-resolution image should generate low resolution input images when appropriately warped and down-sampled to model the image formation process. (These reconstruction constraints are normally combined with some form of smoothness prior to regularize their solution.) We derive a sequence of analytical results which show that the reconstruction constraints provide less and less useful information as the magnification factor increases. We also validate these results empirically and show that, for large enough magnification factors, any smoothness prior leads to overly smooth results with very little high-frequency content. Next, we propose a super-resolution algorithm that uses a different kind of constraint in addition to the reconstruction constraints. The algorithm attempts to recognize local features in the low-resolution images and then enhances their resolution in an appropriate manner. We call such a super-resolution algorithm a hallucination or recogstruction (recognition-based reconstruction) algorithm. We tried our hallucination algorithm on two different data sets, frontal images of faces and printed Roman text. We obtained significantly better results than existing reconstruction-based algorithms, both qualitatively and in terms of RMS pixel error.
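The reconstruction constraints discussed above say that the super-resolved estimate, once passed through the image formation model, must reproduce each low-resolution input. The sketch below stands in for that model with plain block averaging and no warping or blur, so it only illustrates the form of the constraint, not the paper's analysis of its limits.

```python
import numpy as np

def downsample(hi, factor):
    """Block-average downsampling: a crude stand-in for the blur + decimation
    of the image formation model."""
    h, w = hi.shape
    return hi.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def reconstruction_residual(hi_estimate, lo_observed, factor):
    """The reconstruction constraint says this residual should be (close to) zero."""
    return np.linalg.norm(downsample(hi_estimate, factor) - lo_observed)

hi = np.random.rand(16, 16)
lo = downsample(hi, 4)                       # simulated low-resolution observation
print(reconstruction_residual(hi, lo, 4))    # ~0: the estimate explains the data
```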


Proceedings ArticleDOI
12 May 2002
TL;DR: This paper presents an automated technique for generating and analyzing attack graphs, based on symbolic model checking algorithms, letting us construct attack graphs automatically and efficiently.
Abstract: An integral part of modeling the global view of network security is constructing attack graphs. Manual attack graph construction is tedious, error-prone, and impractical for attack graphs larger than a hundred nodes. In this paper we present an automated technique for generating and analyzing attack graphs. We base our technique on symbolic model checking algorithms, letting us construct attack graphs automatically and efficiently. We also describe two analyses to help decide which attacks would be most cost-effective to guard against. We implemented our technique in a tool suite and tested it on a small network example, which includes models of a firewall and an intrusion detection system.
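The paper's technique is based on symbolic model checking; purely as an explicit-state illustration of what an attack graph enumerates, the toy below searches over sets of attacker privileges using made-up exploit rules of the form preconditions -> gained privilege and reports the exploit sequences that reach a goal.

```python
from collections import deque

# Each exploit: (name, preconditions, gained privilege).  Purely illustrative.
EXPLOITS = [
    ("sshd_bof",  {"user@attacker"}, "user@web"),
    ("local_esc", {"user@web"},      "root@web"),
    ("db_trust",  {"root@web"},      "user@db"),
]
GOAL = "user@db"

def attack_paths(initial=frozenset({"user@attacker"})):
    """Breadth-first search over attacker privilege sets; yields exploit
    sequences that reach the goal privilege."""
    queue = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while queue:
        state, path = queue.popleft()
        if GOAL in state:
            yield path
            continue
        for name, pre, post in EXPLOITS:
            if pre <= state and post not in state:
                nxt = frozenset(state | {post})
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))

print(list(attack_paths()))   # [['sshd_bof', 'local_esc', 'db_trust']]
```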

Journal ArticleDOI
TL;DR: In this article, a practically motivated method for evaluating systems' abilities to handle external stress is proposed, which is designed to assess the potential contributions of various adaptation options to improving systems' coping capacities by focusing on the underlying determinants of adaptive capacity.
Abstract: This paper offers a practically motivated method for evaluating systems’ abilities to handle external stress. The method is designed to assess the potential contributions of various adaptation options to improving systems’ coping capacities by focusing attention directly on the underlying determinants of adaptive capacity. The method should be sufficiently flexible to accommodate diverse applications whose contexts are location specific and path dependent without imposing the straightjacket constraints of a “one size fits all” cookbook approach. Nonetheless, the method should produce unitless indicators that can be employed to judge the relative vulnerabilities of diverse systems to multiple stresses and to their potential interactions. An artificial application is employed to describe the development of the method and to illustrate how it might be applied. Some empirical evidence is offered to underscore the significance of the determinants of adaptive capacity in determining vulnerability; these are the determinants upon which the method is constructed. The method is, finally, applied directly to expert judgments of six different adaptations that could reduce vulnerability in the Netherlands to increased flooding along the Rhine River.

Proceedings ArticleDOI
07 Nov 2002
TL;DR: This work proposes using coordinates-based mechanisms in a peer-to-peer architecture to predict Internet network distance (i.e. round-trip propagation and transmission delay), and proposes the GNP approach, based on absolute coordinates computed from modeling the Internet as a geometric space.
Abstract: We propose using coordinates-based mechanisms in a peer-to-peer architecture to predict Internet network distance (i.e. round-trip propagation and transmission delay). We study two mechanisms. The first is a previously proposed scheme, called the triangulated heuristic, which is based on relative coordinates that are simply the distances from a host to some special network nodes. We propose the second mechanism, called global network positioning (GNP), which is based on absolute coordinates computed from modeling the Internet as a geometric space. Since end hosts maintain their own coordinates, these approaches allow end hosts to compute their inter-host distances as soon as they discover each other. Moreover, coordinates are very efficient in summarizing inter-host distances, making these approaches very scalable. By performing experiments using measured Internet distance data, we show that both coordinates-based schemes are more accurate than the existing state of the art system IDMaps, and the GNP approach achieves the highest accuracy and robustness among them.
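The heart of GNP is fitting host coordinates so that geometric distance approximates measured round-trip time. The sketch below does this for three hosts with a generic least-squares fit over all coordinates at once; the paper's two-part scheme (landmarks first, then ordinary hosts relative to the landmarks) and its error measures are not reproduced, and the RTT matrix is invented.

```python
import numpy as np
from scipy.optimize import minimize

rtt = np.array([[0., 50., 90.],      # measured RTTs (ms) between three hosts
                [50., 0., 60.],
                [90., 60., 0.]])
n, dim = rtt.shape[0], 2

def error(flat):
    """Sum of squared differences between modeled and measured distances."""
    coords = flat.reshape(n, dim)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.sum((d - rtt) ** 2) / 2.0

result = minimize(error, np.random.default_rng(0).normal(size=n * dim))
coords = result.x.reshape(n, dim)
est = np.linalg.norm(coords[0] - coords[2])
print(f"predicted host0-host2 distance: {est:.1f} ms (measured 90)")
```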

Journal ArticleDOI
TL;DR: In this paper, the authors examined the economic geography of talent and explored the factors that attract talent and its effects on high-technology industry and regional incomes, including cultural and nightlife amenities, the coolness index, as well as employing conventional measures of amenities.
Abstract: The distribution of talent, or human capital, is an important factor in economic geography. This article examines the economic geography of talent, exploring the factors that attract talent and its effects on high-technology industry and regional incomes. Talent is defined as individuals with high levels of human capital, measured as the percentage of the population with a bachelor's degree and above. This article advances the hypothesis that talent is attracted by diversity, or what are referred to as low barriers to entry for human capital. To get at this, it introduces a new measure of diversity, referred to as the diversity index, measured as the proportion of gay households in a region. It also introduces a new measure of cultural and nightlife amenities, the coolness index, as well as employing conventional measures of amenities, high-technology industry, and regional income. Statistical research supported by the findings of interviews and focus groups is used to probe these issues. The findings con...

Journal ArticleDOI
TL;DR: Narada as discussed by the authors realizes an end system multicast architecture in which end systems implement all multicast-related functionality, including membership management and packet replication, and self-organize into an overlay structure using a fully distributed protocol.
Abstract: The conventional wisdom has been that Internet protocol (IP) is the natural protocol layer for implementing multicast related functionality. However, more than a decade after its initial proposal, IP multicast is still plagued with concerns pertaining to scalability, network management, deployment, and support for higher layer functionality such as error, flow, and congestion control. We explore an alternative architecture that we term end system multicast, where end systems implement all multicast related functionality including membership management and packet replication. This shifting of multicast support from routers to end systems has the potential to address most problems associated with IP multicast. However, the key concern is the performance penalty associated with such a model. In particular, end system multicast introduces duplicate packets on physical links and incurs larger end-to-end delays than IP multicast. We study these performance concerns in the context of the Narada protocol. In Narada, end systems self-organize into an overlay structure using a fully distributed protocol. Further, end systems attempt to optimize the efficiency of the overlay by adapting to network dynamics and by considering application level performance. We present details of Narada and evaluate it using both simulation and Internet experiments. Our results indicate that the performance penalties are low both from the application and the network perspectives. We believe the potential benefits of transferring multicast functionality from routers to end systems significantly outweigh the performance penalty incurred.
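Narada's two-layer structure (a richer mesh, with each source's delivery tree computed over it) can be sketched in a few lines: given a latency-weighted overlay mesh, the data delivery tree rooted at a source is its shortest-path tree. The toy mesh and the plain Dijkstra below are illustrative only; mesh construction, refinement, and the protocol mechanics are what the paper is actually about.

```python
import heapq

# Toy overlay mesh among end systems, weighted by measured latency (ms).
MESH = {"A": {"B": 10, "C": 40}, "B": {"A": 10, "C": 20, "D": 50},
        "C": {"A": 40, "B": 20, "D": 15}, "D": {"B": 50, "C": 15}}

def shortest_path_tree(source):
    """Dijkstra over the mesh; returns each member's parent in the delivery
    tree that `source` would use to forward data."""
    dist, parent = {source: 0}, {source: None}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in MESH[u].items():
            if v not in dist or d + w < dist[v]:
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return parent

print(shortest_path_tree("A"))   # {'A': None, 'B': 'A', 'C': 'B', 'D': 'C'}
```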

Journal ArticleDOI
TL;DR: The algorithm allows combinatorial auctions to scale up to significantly larger numbers of items and bids than prior approaches to optimal winner determination by capitalizing on the fact that the space of bids is sparsely populated in practice.
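To make the winner-determination problem concrete: bids name subsets of items, an allocation may accept only pairwise-disjoint bids, and the goal is maximum revenue. The plain depth-first search below (without the pruning and search-order heuristics of the paper's algorithm) exploits the same structural fact the TL;DR mentions, namely that only non-conflicting bids ever need to be combined.

```python
def winner_determination(bids):
    """bids: list of (frozenset_of_items, price).  Returns (best_revenue, chosen
    bid indices) by exhaustive depth-first search over non-conflicting bids."""
    best = (0, [])

    def search(i, taken, revenue, chosen):
        nonlocal best
        if revenue > best[0]:
            best = (revenue, chosen[:])
        if i == len(bids):
            return
        items, price = bids[i]
        if not (items & taken):                    # accept bid i only if disjoint
            chosen.append(i)
            search(i + 1, taken | items, revenue + price, chosen)
            chosen.pop()
        search(i + 1, taken, revenue, chosen)      # or skip bid i

    search(0, frozenset(), 0, [])
    return best

bids = [(frozenset("AB"), 10), (frozenset("C"), 4), (frozenset("BC"), 9)]
print(winner_determination(bids))   # (14, [0, 1]): {A,B} and {C} do not conflict
```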

Book ChapterDOI
09 Jun 2002
TL;DR: DAML-S is presented, a DAML+OIL ontology for describing the properties and capabilities of Web Services, and three aspects of the ontology are described: the service profile, the process model, and the service grounding.
Abstract: In this paper we present DAML-S, a DAML+OIL ontology for describing the properties and capabilities of Web Services. Web Services - Web-accessible programs and devices - are garnering a great deal of interest from industry, and standards are emerging for low-level descriptions of Web Services. DAML-S complements this effort by providing Web Service descriptions at the application layer, describing what a service can do, and not just how it does it. In this paper we describe three aspects of our ontology: the service profile, the process model, and the service grounding. The paper focuses on the grounding, which connects our ontology with low-level XML-based descriptions of Web Services.

Proceedings ArticleDOI
01 Jul 2002
TL;DR: This paper shows that a motion database can be preprocessed for flexibility in behavior and efficient search and exploited for real-time avatar control and demonstrates the flexibility of the approach through four different applications.
Abstract: Real-time control of three-dimensional avatars is an important problem in the context of computer games and virtual environments. Avatar animation and control is difficult, however, because a large repertoire of avatar behaviors must be made available, and the user must be able to select from this set of behaviors, possibly with a low-dimensional input device. One appealing approach to obtaining a rich set of avatar behaviors is to collect an extended, unlabeled sequence of motion data appropriate to the application. In this paper, we show that such a motion database can be preprocessed for flexibility in behavior and efficient search and exploited for real-time avatar control. Flexibility is created by identifying plausible transitions between motion segments, and efficient search through the resulting graph structure is obtained through clustering. Three interface techniques are demonstrated for controlling avatar motion using this data structure: the user selects from a set of available choices, sketches a path through an environment, or acts out a desired motion in front of a video camera. We demonstrate the flexibility of the approach through four different applications and compare the avatar motion to directly recorded human motion.
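The preprocessing step described above starts by finding frames that are close in pose space but far apart in time, which become candidate transitions (graph edges) between motion segments. The sketch below does only that step, with random numbers standing in for captured joint angles and a plain Euclidean distance; the paper's distance metric, clustering, and search are not shown.

```python
import numpy as np

def transition_candidates(poses, threshold, min_gap=10):
    """poses: (n_frames, n_dofs) array of joint angles.  Returns frame pairs
    (i, j) that are far apart in time but close in pose space -- plausible
    points at which to splice one motion segment into another."""
    n = len(poses)
    dist = np.linalg.norm(poses[:, None, :] - poses[None, :, :], axis=-1)
    gap = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    i, j = np.where((dist < threshold) & (gap > min_gap))
    return list(zip(i.tolist(), j.tolist()))

rng = np.random.default_rng(0)
poses = rng.normal(size=(200, 30))               # stand-in for captured frames
print(len(transition_candidates(poses, 6.5)), "candidate transition edges")
```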

01 Jan 2002
TL;DR: The TESLA (Timed Efficient Stream Loss-tolerant Authentication) broadcast authentication protocol is presented, an efficient protocol with low communication and computation overhead, which scales to large numbers of receivers, and tolerates packet loss.
Abstract: One of the main challenges of securing broadcast communication is source authentication, or enabling receivers of broadcast data to verify that the received data really originates from the claimed source and was not modified en route. This problem is complicated by mutually untrusted receivers and unreliable communication environments where the sender does not retransmit lost packets. This article presents the TESLA (Timed Efficient Stream Loss-tolerant Authentication) broadcast authentication protocol, an efficient protocol with low communication and computation overhead, which scales to large numbers of receivers, and tolerates packet loss. TESLA is based on loose time synchronization between the sender and the receivers. Despite using purely symmetric cryptographic functions (MAC functions), TESLA achieves asymmetric properties. We discuss a PKI application based purely on TESLA, assuming that all network nodes are loosely time synchronized.
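The core TESLA mechanism is compact enough to sketch: the sender commits to the end of a one-way hash chain of keys, MACs each packet with the key of its time interval, and discloses that key some intervals later; a receiver checks that the disclosed key hashes back to the commitment before trusting the buffered MAC. The toy below omits time synchronization and the security condition on packet arrival times, both of which are essential in the real protocol.

```python
import hashlib, hmac

def make_key_chain(seed: bytes, length: int):
    """Build K_length ... K_0 with K_{i-1} = H(K_i); K_0 is the public commitment."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return list(reversed(chain))        # chain[0] is the commitment K_0

chain = make_key_chain(b"sender secret", length=4)
commitment = chain[0]

# Sender, interval 2: MAC the packet with K_2, disclose K_2 only later.
packet = b"stock quote 42"
mac = hmac.new(chain[2], packet, hashlib.sha256).digest()

# Receiver, later: check the disclosed key hashes back to the commitment ...
disclosed = chain[2]
k = disclosed
for _ in range(2):
    k = hashlib.sha256(k).digest()
assert k == commitment
# ... then authenticate the buffered packet with the now-trusted key.
assert hmac.compare_digest(mac, hmac.new(disclosed, packet, hashlib.sha256).digest())
print("packet authenticated via delayed key disclosure")
```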

Journal ArticleDOI
TL;DR: Results for distributed model predictive control are presented, focusing on the coordination of the optimization computations using iterative exchange of information and the stability of the closed-loop system when information is exchanged only after each iteration.
Abstract: The article presents results for distributed model predictive control (MPC), focusing on i) the coordination of the optimization computations using iterative exchange of information and ii) the stability of the closed-loop system when information is exchanged only after each iteration. Current research is focusing on general methods for decomposing large-scale problems for distributed MPC and methods for guaranteeing stability when multiple agents are controlling systems subject to abrupt changes.
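As a cartoon of "iterative exchange of information", the snippet below has two controllers repeatedly minimize a shared, coupled quadratic cost over their own input while holding the other's last announced input fixed, stopping when the inputs settle. The cost function and its closed-form best responses are invented for illustration; the receding-horizon dynamics, constraints, and stability analysis of the article are not represented.

```python
def agent1_best_response(u2):
    """argmin over u1 of J = (u1 - 1)**2 + (u2 + 1)**2 + 0.5*(u1 - u2)**2."""
    return (2.0 + u2) / 3.0

def agent2_best_response(u1):
    """argmin over u2 of the same coupled cost."""
    return (u1 - 2.0) / 3.0

u1, u2 = 0.0, 0.0
for it in range(30):                        # iterative exchange of information
    u1_new = agent1_best_response(u2)       # agent 1 optimizes and announces u1
    u2_new = agent2_best_response(u1_new)   # agent 2 reacts to the announcement
    if abs(u1_new - u1) + abs(u2_new - u2) < 1e-9:
        break
    u1, u2 = u1_new, u2_new

print(round(u1, 3), round(u2, 3), "after", it, "iterations")   # 0.5 -0.5
```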